Functory

Compute First Response Time from Intercom Exports in Python for Quarterly Customer Success Reviews

This article shows how to take messy Intercom CSV exports — raw conversation event rows — and turn them into a deterministic, reproducible metric: first response time (FRT) per conversation. We cover the exact input schema expected, robust rules for selecting the first customer message and the first valid agent reply, and produce a concise CSV or JSON that teams can drop into quarterly customer success reviews or automated dashboards.

You'll get a concrete single-file Python example (pandas-based) you can run locally or wrap as a Functory function. The result is a reproducible tool that replaces spreadsheet fiddling, yields consistent SLAs, and supports auditing of who responded when.

What this function expects and produces

Input: a CSV file exported from Intercom (or pre-normalized to this schema). Each row is an event (a message, note, or system event). Required columns:

  • conversation_id (string)
  • event_type (string): one of ['message', 'note', 'assignment', 'system']
  • author_type (string): one of ['user', 'admin', 'bot']
  • author_id (string or int): id of agent or user
  • body (string): text content, may be empty for system rows
  • created_at (ISO8601 timestamp e.g. 2024-04-10T13:45:23+00:00)
  • conversation_url (optional string): link to the Intercom conversation
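
Before computing anything, it helps to fail fast when an export is missing columns. A minimal validation sketch, assuming pandas and the schema above:

```python
import pandas as pd

REQUIRED_COLUMNS = {
    'conversation_id', 'event_type', 'author_type',
    'author_id', 'body', 'created_at',
}

def validate_export(df: pd.DataFrame) -> None:
    """Raise early if required Intercom export columns are absent."""
    missing = REQUIRED_COLUMNS - set(df.columns)
    if missing:
        raise ValueError(f"Export is missing required columns: {sorted(missing)}")
```

Calling this right after pd.read_csv gives a clear error message instead of a KeyError halfway through processing.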

Processing steps (implemented in the code below): parse timestamps as timezone-aware values normalized to UTC, then group rows by conversation_id. Within each conversation, take the first event where author_type == 'user' as first_customer_ts, then the earliest subsequent event where author_type is in ['admin', 'bot'] and event_type is in ['message', 'note'] as first_response_ts. Compute the delta in seconds plus a human-readable first_response_minutes. If no reply exists, leave the response fields NULL and optionally flag the conversation as an SLA breach.

Output: CSV or JSON summary with columns: conversation_id, first_customer_ts, first_response_ts, first_response_seconds (int), first_response_minutes (float), responder_id, conversation_url, sla_breach (bool).

Real-world scenario: quarterly review for a content team

Example: The content ops team gets Intercom exports for Q3 with 5,200 event rows spanning 1,250 conversations. They need a single source-of-truth FRT number per conversation for the quarterly customer success review slide deck: median FRT, 75th percentile, and a list of top 10 conversations with longest FRTs for qualitative review.
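
Once the per-conversation summary exists, the slide-deck numbers fall out of a few pandas calls. A sketch using the output column names defined above:

```python
import pandas as pd

def review_metrics(summary: pd.DataFrame, top_n: int = 10) -> dict:
    """Compute quarterly-review stats from the FRT summary table."""
    # Ignore conversations that never got a reply
    answered = summary.dropna(subset=['first_response_seconds'])
    secs = answered['first_response_seconds'].astype(float)
    return {
        'median_frt_minutes': round(secs.median() / 60.0, 2),
        'p75_frt_minutes': round(secs.quantile(0.75) / 60.0, 2),
        # Slowest conversations for qualitative review
        'slowest': answered.nlargest(top_n, 'first_response_seconds')[
            ['conversation_id', 'first_response_minutes', 'conversation_url']
        ].to_dict('records'),
    }
```

Note that unanswered conversations are excluded from the percentiles here; whether to count them separately as breaches is a reporting decision.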

Concrete input sample row (CSV):

conversation_id,event_type,author_type,author_id,body,created_at,conversation_url
c_001,message,user,usr_42,"I'm having trouble loading the guide",2024-07-11T09:14:22+00:00,https://app.intercom.com/a/conversations/1
c_001,message,admin,ag_7,"Hi — can you confirm your browser?",2024-07-11T09:18:01+00:00,https://app.intercom.com/a/conversations/1

Desired output row:

conversation_id,first_customer_ts,first_response_ts,first_response_seconds,first_response_minutes,responder_id,conversation_url,sla_breach
c_001,2024-07-11T09:14:22+00:00,2024-07-11T09:18:01+00:00,219,3.65,ag_7,https://app.intercom.com/a/conversations/1,False

Example dataset and the specific problem solved

Example dataset: 5,000 rows exported from Intercom covering 2 months; 1,200 unique conversation_ids; columns as above. Problem: customer success currently copies/pastes conversation rows into Google Sheets, filters manually to find first user message, then scans for the first agent reply. Results are inconsistent because timestamps are in mixed formats and some rows are system events or assignment notes. This function automates parsing, enforces rules (ignore assignment/system events when selecting replies), and produces stable metrics for reporting.
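
The mixed-timestamp problem is solved by normalizing everything to a single UTC timeline before any comparison. With pandas 2.x this is one call (format='mixed' tolerates varying separators and offsets):

```python
import pandas as pd

# Mixed separators and offsets, as they often appear in raw exports
raw = pd.Series([
    '2024-07-11T09:14:22+00:00',
    '2024-07-11 11:18:01+02:00',   # space separator, non-UTC offset
])
normalized = pd.to_datetime(raw, utc=True, format='mixed')
# Both rows now compare correctly: the second is 09:18:01 UTC
```

After normalization, timestamp arithmetic across rows is safe regardless of the offsets present in the source file.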

Step-by-step mini workflow (where this function fits)

  1. Export Intercom conversation events as CSV from Intercom UI (or use Intercom API to pull events).
  2. Drop the CSV into a team folder or upload to the Functory UI / API.
  3. Run the Python function to generate first_response_summary.csv.
  4. Load the summary into Looker/Tableau or Google Sheets to compute median/percentiles and create the quarterly slide metrics.
  5. Optional: chain this result into a Functory function that generates a PDF report or sends a Slack summary with the top SLA breaches.
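
Step 5 can be as small as a webhook POST. A standard-library sketch; the webhook URL and the payload helper are illustrative, not part of the script above:

```python
import json
import urllib.request

def frt_slack_payload(median_min: float, p75_min: float, breaches: int) -> dict:
    """Build the message body for a Slack incoming webhook."""
    return {'text': (f"Quarterly FRT: median {median_min:.1f} min, "
                     f"p75 {p75_min:.1f} min, {breaches} SLA breaches")}

def post_slack_summary(webhook_url: str, payload: dict) -> None:
    """POST the payload to a Slack incoming webhook (URL supplied by you)."""
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps(payload).encode('utf-8'),
        headers={'Content-Type': 'application/json'},
    )
    urllib.request.urlopen(req)
```
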

Algorithm (high-level)

  1. Normalize timestamps to UTC and cast types.
  2. For each conversation_id: sort events by created_at ascending.
  3. Find first event where author_type == 'user' → first_customer_ts.
  4. Find first subsequent event with author_type in ['admin','bot'] and event_type in ['message','note'] → first_response_ts.
  5. Compute difference (first_response_ts - first_customer_ts) in seconds; mark SLA breach if > threshold (e.g., 3600s).

Python implementation example

The snippet below is a small, runnable pandas-based script. It exposes a single main(...) function, which is the pattern you would adapt when publishing to Functory (see next section).

from pathlib import Path
from typing import Optional

import pandas as pd

def compute_first_response(df: pd.DataFrame, sla_seconds: int = 3600) -> pd.DataFrame:
    df = df.copy()
    df['created_at'] = pd.to_datetime(df['created_at'], utc=True)
    # Keep only relevant rows
    df = df[df['event_type'].isin(['message','note','assignment','system'])]

    out_rows = []
    for conv_id, group in df.groupby('conversation_id'):
        group = group.sort_values('created_at')
        first_user = group[group['author_type'] == 'user']
        if first_user.empty:
            out_rows.append({
                'conversation_id': conv_id,
                'first_customer_ts': None,
                'first_response_ts': None,
                'first_response_seconds': None,
                'first_response_minutes': None,
                'responder_id': None,
                'conversation_url': group['conversation_url'].iloc[0] if 'conversation_url' in group.columns else None,
                'sla_breach': None
            })
            continue
        first_customer_ts = first_user['created_at'].iloc[0]
        # first subsequent admin/bot reply that is a real message or note
        replies = group[
            (group['created_at'] > first_customer_ts)
            & group['author_type'].isin(['admin', 'bot'])
            & group['event_type'].isin(['message', 'note'])
        ]
        if replies.empty:
            out_rows.append({
                'conversation_id': conv_id,
                'first_customer_ts': first_customer_ts.isoformat(),
                'first_response_ts': None,
                'first_response_seconds': None,
                'first_response_minutes': None,
                'responder_id': None,
                'conversation_url': group['conversation_url'].iloc[0] if 'conversation_url' in group.columns else None,
                'sla_breach': True
            })
        else:
            first_response = replies.iloc[0]
            delta = (first_response['created_at'] - first_customer_ts).total_seconds()
            out_rows.append({
                'conversation_id': conv_id,
                'first_customer_ts': first_customer_ts.isoformat(),
                'first_response_ts': first_response['created_at'].isoformat(),
                'first_response_seconds': int(delta),
                'first_response_minutes': round(delta/60.0, 2),
                'responder_id': first_response['author_id'],
                'conversation_url': group['conversation_url'].iloc[0] if 'conversation_url' in group.columns else None,
                'sla_breach': delta > sla_seconds
            })
    return pd.DataFrame(out_rows)

# Single-file entrypoint style
def main(input_csv: str, output_csv: Optional[str] = 'first_response_summary.csv') -> str:
    df = pd.read_csv(input_csv)
    summary = compute_first_response(df)
    out_path = Path(output_csv)
    summary.to_csv(out_path, index=False)
    return str(out_path)

# Example local call
if __name__ == '__main__':
    sample = 'intercom_export.csv'
    print('Running example on', sample)
    # main(sample, 'first_response_summary.csv')
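
As a cross-check on the loop above, the same selection rule can be written in a vectorized groupby/merge style. This is a sketch that should agree with compute_first_response on well-formed input, not a drop-in replacement (it omits the NULL-row and SLA columns):

```python
import pandas as pd

def first_response_vectorized(df: pd.DataFrame) -> pd.DataFrame:
    """Per-conversation FRT without an explicit Python loop."""
    df = df.copy()
    df['created_at'] = pd.to_datetime(df['created_at'], utc=True)
    # Earliest customer message per conversation
    first_user = (df[df['author_type'] == 'user']
                  .groupby('conversation_id', as_index=False)['created_at'].min()
                  .rename(columns={'created_at': 'first_customer_ts'}))
    # Candidate replies: real messages/notes from admins or bots
    candidates = df[df['author_type'].isin(['admin', 'bot'])
                    & df['event_type'].isin(['message', 'note'])]
    candidates = candidates.merge(first_user, on='conversation_id')
    candidates = candidates[candidates['created_at'] > candidates['first_customer_ts']]
    first_reply = (candidates.sort_values('created_at')
                   .groupby('conversation_id', as_index=False).first())
    out = first_user.merge(
        first_reply[['conversation_id', 'created_at', 'author_id']],
        on='conversation_id', how='left')
    out['first_response_seconds'] = (
        out['created_at'] - out['first_customer_ts']).dt.total_seconds()
    return out.rename(columns={'created_at': 'first_response_ts',
                               'author_id': 'responder_id'})
```

Running both versions over the same export and diffing first_response_seconds is a cheap regression test before publishing the function.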

How Functory Makes It Easy

To publish this as a Functory function, wrap the core logic exactly as above in a single main(input_csv: str, output_csv: str) -> str entrypoint. On Functory you must choose a full Python patch version (for example 3.11.11) and prepare a requirements.txt file with pinned versions, for example:

pandas==2.2.2
python-dateutil==2.8.2

Functory treats the parameters of main(...) as UI and API inputs: input_csv can be a path to an uploaded file or a URL string, and the output path returned by main(...) is exposed as a downloadable file. No CLI wrapper is required; Functory calls main(...) directly. Benefits: no servers to manage, your function can run on CPU/GPU tiers automatically, logs printed via print() are captured in the Functory UI, and each execution is billed pay-per-use. You can also chain functions, e.g. preprocess_export -> compute_first_response -> generate_report, with each step a separate Functory function triggered by the previous step's result URL or a webhook.

Comparison with current approaches

Many teams compute FRT in Google Sheets or ad-hoc Jupyter notebooks. Spreadsheets are fragile with ISO8601 parsing and timezone issues, and notebooks often leave messy local dependencies and no stable API. Legacy BI tools can calculate aggregated FRT but typically cannot produce an auditable row-level CSV linking a conversation URL to the exact responder. This function-based approach gives a reproducible, testable, version-controlled script that can be run on a schedule or exposed as a lightweight API for other services (CRM syncs, Slack alerts).

Business benefit

Automating the conversion of Intercom exports into a canonical FRT summary removes manual spreadsheet work. We estimate a content operations team of 3 people spends ~8 hours per quarter doing manual cleanup; automating this reduces manual processing time by ~60% (saves ~4.8 hours per quarter), and speeds reporting cadence from quarterly to weekly for ad-hoc SLA investigations.

Industry context: According to a 2024 customer service industry analysis, 72% of companies track first response time as a KPI for customer success teams (source: 2024 Customer Service Trends Report).

When to use and limitations

Use this when you have row-level event exports and need deterministic, per-conversation first response times. Limitations: if Intercom export lacks fine-grained event ordering (identical timestamps), you must use an additional sequence id; if messages are threaded into multiple participants, you may need to refine author_type rules. For high-volume exports (millions of rows), move processing to chunked reads or a Spark job instead of pandas.
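
For the identical-timestamp case, a stable secondary sort key resolves ordering deterministically. A sketch assuming the export carries a sequence_id column (hypothetical; Intercom exports may name it differently or lack it entirely):

```python
import pandas as pd

def sort_events(group: pd.DataFrame) -> pd.DataFrame:
    """Order events deterministically even when timestamps collide."""
    keys = ['created_at']
    if 'sequence_id' in group.columns:
        keys.append('sequence_id')  # hypothetical tie-breaker from the export
    # Multi-key sort_values uses a stable lexsort; kind='stable' covers
    # the single-key fallback so input order breaks any remaining ties.
    return group.sort_values(keys, kind='stable')
```

Swapping this in for the plain sort_values('created_at') call inside compute_first_response makes the result reproducible across runs even with colliding timestamps.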

Conclusion: Converting messy Intercom exports into clean first response time metrics gives content and customer success teams reliable, auditable data for quarterly reviews and SLA tracking. Next steps: (1) run the provided script on a sample export and inspect the top 10 slowest conversations, (2) publish the function to Functory with pinned dependencies and schedule it to run after each export so your dashboard always uses consistent metrics. Consider publishing your function internally so teammates can call it via API rather than re-running spreadsheets — then iterate on edge cases (timezones, multiple agents) and version the function for auditability.

Thanks for reading.