Functory

Compute CTR from Facebook Ads CSV in Python for SaaS Weekly Growth Standups (Zapier, Make, n8n)

This article explains how to write a focused Python function that computes click-through rate (CTR) and related KPIs from Facebook Ads campaign CSV exports, and how to expose that function as an API-backed microservice you can call from Zapier, Make, or n8n for automated weekly growth standups. The implementation uses pandas to reliably parse large CSV exports, applies date filtering and grouping by keyword, and produces both a JSON summary and a downloadable CSV report suitable for dashboards or Slack messages.

Long-tail search phrases covered: 'python compute ctr from facebook ads csv', 'automate ctr calculation for weekly standups', and 'facebook ads csv to zapier'.

What this function expects and produces

Input data: a Facebook Ads CSV export file (UTF-8 or Excel CSV) containing per-row metrics by ad or keyword with at least these columns: 'campaign_name' (string), 'adset_name' (string), 'keyword' (string), 'date' (YYYY-MM-DD), 'impressions' (int), 'clicks' (int), 'spend_usd' (float). The CSV may be 1k–50k rows per month depending on account size.
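To experiment before wiring up real exports, you can generate a tiny CSV matching this schema with the standard library (the filename and row values here are illustrative, not from a real account):

```python
import csv

# Hypothetical sample rows matching the expected Facebook Ads export schema.
rows = [
    {"campaign_name": "SaaS Trial - Search", "adset_name": "US - B2B",
     "keyword": "project management software", "date": "2025-03-03",
     "impressions": 124, "clicks": 7, "spend_usd": 18.40},
    {"campaign_name": "SaaS Trial - Search", "adset_name": "US - B2B",
     "keyword": "task tracker app", "date": "2025-03-04",
     "impressions": 30, "clicks": 1, "spend_usd": 2.10},
]

with open("sample_fb_ads.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(rows[0]))
    writer.writeheader()
    writer.writerows(rows)
```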

Transformations performed:

  • Parse dates and sanitize numeric columns.
  • Optional date-range filter (start_date, end_date).
  • Group by keyword (or other specified dimension) and compute aggregates: total_impressions, total_clicks, ctr_pct = 100 * total_clicks / total_impressions, avg_cpc_usd = total_spend_usd / total_clicks (safe divide), and conversion-rate-ready fields.
  • Sort and threshold (e.g., only keywords with >= 50 impressions).
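The "safe divide" mentioned above can be expressed as a tiny helper that returns 0.0 when the denominator is zero (returning 0.0 rather than None is a convention assumed here; adapt it to your reporting needs):

```python
def safe_ratio(numerator: float, denominator: float, scale: float = 1.0) -> float:
    """Return scale * numerator / denominator, or 0.0 when denominator is 0."""
    return scale * numerator / denominator if denominator else 0.0

# CTR as a percentage and average CPC for one aggregated keyword row:
ctr_pct = round(safe_ratio(7, 124, scale=100), 2)   # clicks / impressions * 100
avg_cpc = round(safe_ratio(18.40, 7), 3)            # spend / clicks
```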

Outputs:

  • JSON summary: list of keyword rows with fields: 'keyword', 'impressions', 'clicks', 'ctr_pct', 'avg_cpc_usd', 'spend_usd'.
  • CSV report path returned by the function (e.g., 'reports/weekly_ctr_2025-03-10.csv'). Functory will expose this file for download.
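The JSON summary can be produced straight from the aggregated pandas DataFrame with `to_dict(orient="records")`, which yields the list-of-objects shape described above (the frame below is a one-row stand-in for the real aggregation):

```python
import json

import pandas as pd

# A small aggregated frame with the output columns described above.
agg = pd.DataFrame([
    {"keyword": "project management software", "impressions": 124,
     "clicks": 7, "ctr_pct": 5.65, "avg_cpc_usd": 2.629, "spend_usd": 18.40},
])

# "records" orientation yields a list of {column: value} dicts per row
summary = agg.to_dict(orient="records")
payload = json.dumps(summary)
```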

Real-world scenario: weekly SaaS growth standup

Scenario: A SaaS growth lead exports Facebook Ads performance for the last 7 days and wants a one-click report showing top keywords by CTR and spend so the team can decide which keyword to scale or pause during the weekly standup.

Concrete input: a file named 'fb_ads_export_2025-03-03_2025-03-09.csv' with 3,420 rows and these example columns:

  • date: '2025-03-03' — '2025-03-09'
  • campaign_name: 'SaaS Trial - Search'
  • adset_name: 'US - B2B'
  • keyword: 'project management software'
  • impressions: 124
  • clicks: 7
  • spend_usd: 18.40

Concrete output (JSON excerpt):

[{ "keyword": "project management software", "impressions": 124, "clicks": 7, "ctr_pct": 5.65, "avg_cpc_usd": 2.629, "spend_usd": 18.40 }, ...]

CSV report produced: 'reports/weekly_ctr_2025-03-09.csv' (downloadable).
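As a sanity check on the excerpt above, the CTR and CPC values follow directly from the raw counts:

```python
# Raw counts from the example row above
impressions, clicks, spend_usd = 124, 7, 18.40

ctr_pct = round(100 * clicks / impressions, 2)  # 5.65
avg_cpc_usd = round(spend_usd / clicks, 3)      # 2.629
```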

Example dataset and problem statement

Example dataset: 12,000 rows of Facebook Ads logs for March (columns as above). Problem solved: automated, repeatable weekly aggregation that eliminates manual copy/paste into Sheets, corrects inconsistent column names ('Keyword' vs 'keyword'), handles missing clicks safely, and provides a stable signature (CSV path or JSON payload) that other tools can consume.

Size & nature: ~3k rows/week, typical cardinality: 400 distinct keywords. Threshold: ignore keywords with <50 impressions to reduce noise.

Step-by-step developer workflow

  1. Drop or upload the raw CSV from Facebook Ads into a place Zapier/Make/n8n can reach (S3, Google Drive, or file upload to Functory UI).
  2. Trigger the Functory function with the CSV path, start_date, end_date, and group_by='keyword'.
  3. Functory runs the single-file Python main(...) to parse, aggregate, and write a CSV report.
  4. Functory returns a JSON summary and a downloadable CSV URL; forward the JSON to Slack or update a weekly dashboard via Zapier action.
  5. At standup, the team reviews top keywords by CTR and decides on actions (scale, test creative, pause).
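Step 4's Slack hand-off is typically a Zapier/Make action, but it can also be done directly from Python with a Slack incoming webhook. This is a sketch: the webhook URL is a placeholder you create in Slack, and the message format is one reasonable choice, not a Functory convention:

```python
import json
import urllib.request

def format_slack_message(rows: list[dict], limit: int = 5) -> dict:
    """Build a Slack incoming-webhook payload from the JSON summary rows."""
    lines = [f"{r['keyword']}: CTR {r['ctr_pct']}% (spend ${r['spend_usd']:.2f})"
             for r in rows[:limit]]
    return {"text": "Weekly CTR report\n" + "\n".join(lines)}

def post_to_slack(webhook_url: str, payload: dict) -> None:
    """POST the payload to a Slack incoming webhook (requires a live URL)."""
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)
```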

Algorithm (high-level)

  1. Read CSV with pandas.read_csv(parse_dates=['date']).
  2. Normalize column names to lowercase and ensure numeric types for impressions, clicks, spend.
  3. Filter rows by start_date/end_date if provided.
  4. Group by 'keyword' (or given dimension) and aggregate: sum(impressions), sum(clicks), sum(spend).
  5. Compute ctr = clicks/impressions * 100 and avg_cpc = spend / clicks (guard zero).
  6. Apply minimum impression threshold and sort by ctr or spend, then serialize to CSV and JSON.

Python implementation example

The following small, complete function is designed as the core logic you would wrap in Functory's main(...). It uses pandas and typing and returns a path to the CSV report.

import os
from typing import Optional

import pandas as pd

def main(csv_path: str, group_by: str = 'keyword', start_date: Optional[str] = None, end_date: Optional[str] = None, min_impressions: int = 50, output_path: str = 'reports/weekly_ctr_report.csv') -> str:
    df = pd.read_csv(csv_path, parse_dates=['date'], dayfirst=False)

    # normalize column names ('Keyword' vs 'keyword', stray whitespace)
    df.columns = [c.strip().lower() for c in df.columns]

    # required columns
    for col in ['date', 'impressions', 'clicks', 'spend_usd', group_by]:
        if col not in df.columns:
            raise ValueError(f'missing required column: {col}')

    # coerce metrics to numeric; unparseable cells become 0
    df['impressions'] = pd.to_numeric(df['impressions'], errors='coerce').fillna(0).astype(int)
    df['clicks'] = pd.to_numeric(df['clicks'], errors='coerce').fillna(0).astype(int)
    df['spend_usd'] = pd.to_numeric(df['spend_usd'], errors='coerce').fillna(0.0)

    if start_date:
        df = df[df['date'] >= pd.to_datetime(start_date)]
    if end_date:
        df = df[df['date'] <= pd.to_datetime(end_date)]

    agg = df.groupby(group_by, as_index=False).agg({'impressions': 'sum', 'clicks': 'sum', 'spend_usd': 'sum'})

    # safe divides: where() turns zero denominators into NaN, filled with 0 below
    agg['ctr_pct'] = (100 * agg['clicks'] / agg['impressions'].where(agg['impressions'] > 0)).fillna(0).round(2)
    agg['avg_cpc_usd'] = (agg['spend_usd'] / agg['clicks'].where(agg['clicks'] > 0)).fillna(0).round(3)

    agg = agg[agg['impressions'] >= min_impressions].sort_values(by='ctr_pct', ascending=False)

    # ensure the report directory exists before writing
    os.makedirs(os.path.dirname(output_path) or '.', exist_ok=True)
    agg.to_csv(output_path, index=False)
    return output_path

# Example call:
# report = main('fb_ads_export_2025-03-03_2025-03-09.csv', start_date='2025-03-03', end_date='2025-03-09')
# print(report)

How Functory Makes It Easy

To publish this as a Functory function you wrap the above core logic in a single Python main(...) entrypoint (as shown). On Functory you must pick an exact Python runtime, for example '3.11.11', and declare a requirements.txt where every dependency is pinned, for example:

pandas==2.0.3
numpy==1.25.0
python-dateutil==2.8.2

Functory will expose the parameters of your main(...) as UI fields and as JSON fields in the HTTP API (csv_path as a FilePath or URL string, start_date/end_date as strings, min_impressions as an int). If main returns a path-like string, Functory makes that CSV downloadable in the function UI and via the API response. You do not need to write a CLI wrapper: Functory calls main(...) directly in an isolated environment with autoscaling, log capture of print() output, and pay-per-use billing.

Typical developer steps on Functory:

  1. Create a single-file Python function with main(...) and pinned imports.
  2. Select Python runtime '3.11.11' and add a requirements.txt with exact pins.
  3. Upload the function; Functory auto-generates UI fields from main(...) signature.
  4. Call the function from Zapier/Make/n8n by POSTing JSON or by uploading a file to the Functory UI. The returned report CSV can be fed into a Slack or Google Drive step.
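The POST in step 4 can be sketched in Python as well. The endpoint URL and bearer-token auth below are placeholders, not a documented Functory API; substitute whatever your function's API tab actually shows:

```python
import json
import urllib.request

# Hypothetical values: copy the real URL and auth scheme from your
# Functory function's API tab.
FUNCTION_URL = "https://api.functory.example/functions/weekly-ctr/run"
API_TOKEN = "YOUR_TOKEN"

def build_payload(csv_url: str, start_date: str, end_date: str) -> dict:
    """JSON body mirroring the main(...) parameters Functory exposes."""
    return {
        "csv_path": csv_url,
        "start_date": start_date,
        "end_date": end_date,
        "group_by": "keyword",
        "min_impressions": 50,
    }

def run_weekly_ctr(csv_url: str, start_date: str, end_date: str) -> dict:
    req = urllib.request.Request(
        FUNCTION_URL,
        data=json.dumps(build_payload(csv_url, start_date, end_date)).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {API_TOKEN}"},
    )
    with urllib.request.urlopen(req) as resp:  # network call; needs a live endpoint
        return json.load(resp)
```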

Functory benefits: no servers to manage, automatic cloud execution (CPU/GPU tiers if needed), autoscaling across concurrent requests, built-in logging via print(), and pay-per-use billing handled by the platform. You can chain this function with a downstream Functory function that posts results to Slack or updates a dashboard for a full automation pipeline.

Alternatives and why this function-based approach wins

Common alternatives:

  • Manual spreadsheets: copy/paste CSV into Google Sheets and use formulas (error-prone when column names change).
  • Ad-hoc Jupyter notebooks: good for exploration but not for scheduled automation or programmatic calls from Zapier/Make/n8n.
  • ETL pipelines in Airflow: heavyweight for a simple weekly aggregation and requires infra and maintenance.
  • Vendor dashboards: sometimes lack per-keyword granularity or the exact CTR threshold logic you want.

A small single-file function (exposed via Functory) is superior when you want reproducible, callable logic that non-technical tools (Zapier, Make, n8n) can trigger without running your own servers. It is lightweight, testable, and integrates into existing automation with minimal operational overhead.

Business impact and metrics

Concrete benefit: automating the weekly CTR report reduces manual processing time from ~90 minutes to ~15 minutes per week for a growth lead (≈83% time saved). For a team of 3 people running weekly standups, that is ~3.75 hours saved per month. Financially, if the average hourly cost is $75, automating saves ≈$281/month.

Industry trend: According to a 2024 MarTech report, 62% of SaaS growth teams rely on weekly ads performance summaries for prioritization (Source: 2024 MarTech Trends Report).

Comparison to other developer workflows

Developers often build brittle ETL scripts that live in a repo and require a scheduler, or they rely on manual exports and Sheets. This function-based approach centralizes the logic in one versioned function, makes it callable via API, and avoids the need for cron servers or ephemeral notebooks. It also ensures the exact same aggregation code runs in each execution, which improves correctness and traceability for growth decisions.

Conclusion: A small, well-tested Python function that computes CTR from Facebook Ads CSV exports gives SaaS growth teams a reliable, automatable input to their weekly standups. Next steps: publish the function on Functory with pinned dependencies (e.g., pandas==2.0.3) and wire a Zapier/Make/n8n workflow to trigger it after each export. Try running the example against a sample export and iterate on thresholds or grouping fields (campaign, adset, country) to match your team's decision process.

Thanks for reading.