Functory

Convert Google Ads CSV to Cost Per Lead Report in Python and Deploy as API for Content Teams

This article shows how to take raw campaign CSV exports from Google Ads that include keyword-level metrics and convert them into a cost-per-lead (CPL) report programmatically in Python — then expose that logic as a simple API for content and acquisition teams. If you currently copy-paste CSVs into spreadsheets to calculate CPL by keyword and landing page, this pattern reduces errors, adds reproducibility, and gives a stable API other tools can call.

The approach can be implemented with pandas and deployed in minutes as a Functory function, and it covers three common needs: converting a Google Ads CSV to a cost-per-lead report in Python, exposing ad CSV processing as an API for a content team, and serving that CPL report over a stable endpoint.

What this function expects and produces

Input: a CSV file exported from Google Ads with rows at the keyword (or search term) level. Required columns (case-insensitive):

  • "Date" (YYYY-MM-DD) or similar timestamp
  • "Campaign"
  • "AdGroup"
  • "Keyword" or "Search term"
  • "Clicks" (integer)
  • "Cost" (float, in account currency; e.g., 123.45)
  • "Conversions" or a conversion action column such as "Leads" (integer)
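
Before any processing, those headers can be validated up front. A minimal sketch using only the standard library, following the column names and case-insensitive rule above (the normalization convention mirrors the pandas example later in this article):

```python
def normalize(name: str) -> str:
    # "Conversions: Lead" -> "conversions_lead"
    return name.strip().lower().replace(":", "").replace(" ", "_")

REQUIRED = {"date", "campaign", "adgroup", "clicks", "cost"}

def validate_headers(headers):
    cols = {normalize(h) for h in headers}
    missing = REQUIRED - cols
    if missing:
        raise ValueError(f"Missing required columns: {sorted(missing)}")
    if not ({"keyword", "search_term"} & cols):
        raise ValueError("Need a 'Keyword' or 'Search term' column")
    if not ({"conversions", "leads", "conversions_lead"} & cols):
        raise ValueError("Need a conversions column ('Conversions' or 'Leads')")
    return cols
```

Failing fast here gives content teams a clear error message instead of a silently wrong report.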

Transforms performed:

  • Normalizes column names and types (parsing dates, costs to float, conversions to int).
  • Aggregates metrics by key grouping (keyword + landing page or campaign) over a date range.
  • Computes Cost Per Lead (CPL) = total_cost / total_leads with safe handling when leads=0.
  • Outputs both a tabular CSV (for spreadsheet users) and JSON summary (for API consumers) with top-N sorting.
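
The zero-lead guard in the CPL step can be as small as this (the min_leads threshold and 3-decimal rounding are choices used throughout this article):

```python
def safe_cpl(total_cost: float, total_leads: int, min_leads: int = 1):
    # CPL is undefined below the lead threshold; return None so downstream
    # CSV/JSON consumers can distinguish "no data" from a real value.
    if total_leads < max(min_leads, 1):
        return None
    return round(total_cost / total_leads, 3)
```

Returning None (rather than 0 or inf) keeps spreadsheet sorting and JSON consumers honest about rows with no leads.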

Output: a CSV file (e.g., cpl_report_2025-01-01_2025-01-31.csv) and a JSON array like:

[
  {"keyword":"red running shoes","campaign":"Brand - Shoes","clicks":120,"cost":360.54,"leads":12,"cpl":30.045},
  {"keyword":"best running shoes","campaign":"NonBrand - Shoes","clicks":90,"cost":270.00,"leads":5,"cpl":54.00}
]

Real-world scenario

Imagine a SaaS marketing team that runs 10 campaigns and exports weekly CSVs from Google Ads. One exported file raw_ads_weekly.csv contains 7,452 rows with these columns: Date, Campaign, AdGroup, Keyword, Clicks, Impressions, Cost, Conversions_Lead, LandingPage. The content team wants a weekly report grouped by Keyword + LandingPage showing clicks, cost, leads, and CPL, and a filtered CSV with only rows where leads > 0.

Example input row:

Date,Campaign,AdGroup,Keyword,Clicks,Impressions,Cost,Conversions_Lead,LandingPage
2025-01-14,Brand - Shoes,AdGroup A,red running shoes,12,1200,36.05,1,/shoes/red

Desired output: a CSV with 1,234 rows aggregated to Keyword+LandingPage and a JSON used by a content dashboard API showing top 50 keywords by spend.

Example dataset and specific problem

Example dataset: 10 weekly CSVs, each ~8k rows, for a total of 80k rows. Columns include date, campaign, keyword, device, clicks, cost (in micros sometimes), conversions for two conversion actions (Leads and Trials). Problems:

  • Costs are sometimes in micros (e.g., 1234567) or as decimals depending on export settings.
  • Conversion columns are not standardized: sometimes labeled "Conversions", sometimes "Leads", sometimes "Conversions: Lead".
  • Teams need a CSV for spreadsheet review and a JSON endpoint for the content tool chain.

This function standardizes the columns, normalizes cost units, aggregates, computes CPL, and emits both CSV and JSON.
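
The cost-unit ambiguity can be resolved with a simple heuristic. This sketch assumes, as a rule of thumb, that a per-row cost above 100,000 is in micros (the Google Ads API's cost_micros convention, where 1 currency unit = 1,000,000 micros) rather than in currency:

```python
def normalize_costs(costs):
    # Heuristic: a single keyword row rarely costs > 100,000 in currency
    # units, but micros values easily exceed that threshold.
    if any(c > 1e5 for c in costs):
        return [c / 1e6 for c in costs]
    return list(costs)
```

The threshold is a judgment call; if your account genuinely spends six figures on one keyword in one row, pass the unit explicitly instead of relying on detection.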

Step-by-step mini workflow

  1. Download Google Ads CSV: raw_ads_2025-01.csv (or upload it into the API payload).
  2. Call the function (or run locally) to normalize columns and parse date range parameters.
  3. Aggregate by keyword + landing page, compute CPL (cost / leads) applying a threshold for minimum leads (e.g., min_leads=1).
  4. Save the CSV report and return JSON with top-N rows and links to downloadable CSV.
  5. Content team opens CSV or calls API periodically from their spreadsheet/automation to keep dashboards in sync.
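
The date-range parameters in step 2 can be applied with a small stdlib filter; this sketch assumes rows are dicts with a normalized "date" field in the YYYY-MM-DD format from the input spec:

```python
from datetime import date

def in_range(row: dict, start_date: str, end_date: str) -> bool:
    # Rows carry a YYYY-MM-DD "date" field after column normalization.
    d = date.fromisoformat(row["date"])
    return date.fromisoformat(start_date) <= d <= date.fromisoformat(end_date)

def filter_rows(rows, start_date, end_date):
    return [r for r in rows if in_range(r, start_date, end_date)]
```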

Algorithm (high-level)

1. Load the CSV with pandas.read_csv and standardize column names to lower_snake_case.
2. Detect cost units: if any value exceeds 1e5, divide by 1e6 to convert micros to standard currency.
3. Identify the conversions column by matching known names ("leads", "conversions", "conversions_lead").
4. Group by keyword, landing page, and campaign, and sum clicks, cost, and conversions.
5. Compute cpl = cost / conversions, guarding against division by zero: rows with conversions == 0 get cpl = None (or inf, depending on consumer preference).
6. Export CSV and JSON sorted by spend or CPL.

Python example

The snippet below shows a minimal, runnable implementation that reads a Google Ads CSV and writes an aggregated CPL CSV. This code uses pandas and assumes costs are already in account currency decimals.

import pandas as pd

def generate_cpl_report(input_csv: str, output_csv: str, conversions_candidates=None, min_leads: int = 1):
    if conversions_candidates is None:
        conversions_candidates = ["conversions", "leads", "conversions_lead"]

    df = pd.read_csv(input_csv)
    # normalize columns: "Conversions: Lead" -> "conversions_lead"
    df.columns = [c.strip().lower().replace(":", "").replace(" ", "_") for c in df.columns]

    # find conversions column
    conv_col = next((c for c in df.columns if c in conversions_candidates), None)
    if conv_col is None:
        raise ValueError(f"No conversions column found. Expected one of: {conversions_candidates}")

    # basic casting
    df['cost'] = pd.to_numeric(df['cost'], errors='coerce').fillna(0.0)
    df[conv_col] = pd.to_numeric(df[conv_col], errors='coerce').fillna(0).astype(int)
    if 'clicks' not in df.columns:
        df['clicks'] = 0  # tolerate exports without a Clicks column
    df['clicks'] = pd.to_numeric(df['clicks'], errors='coerce').fillna(0).astype(int)

    # group by keyword, plus landing page and campaign when available
    group_cols = ['keyword']
    for optional in ('landingpage', 'campaign'):
        if optional in df.columns:
            group_cols.append(optional)

    agg = df.groupby(group_cols).agg(
        clicks=('clicks', 'sum'),
        cost=('cost', 'sum'),
        leads=(conv_col, 'sum'),
    ).reset_index()

    # CPL is undefined below the minimum-lead threshold
    agg['cpl'] = agg.apply(lambda r: r['cost'] / r['leads'] if r['leads'] >= max(min_leads, 1) else None, axis=1)
    agg.to_csv(output_csv, index=False)
    return output_csv

# Example call
if __name__ == '__main__':
    out = generate_cpl_report('raw_ads_weekly.csv', 'cpl_report_weekly.csv')
    print('Wrote:', out)
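
The JSON summary promised earlier (top-N rows by spend) is not covered by the snippet above. One way to produce it from the aggregated rows using only the standard library — the rows here would come from agg.to_dict("records") in the pandas example:

```python
import json

def to_json_summary(rows, top_n=50):
    # rows: list of dicts with keyword/campaign/clicks/cost/leads/cpl keys,
    # e.g. agg.to_dict("records") from the pandas snippet above.
    ranked = sorted(rows, key=lambda r: r["cost"], reverse=True)
    return json.dumps(ranked[:top_n])
```

Sorting by cost surfaces the highest-spend keywords first, which matches the dashboard use case in the scenario above.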

Comparison to current approaches

Many teams currently use manual methods: copy-paste CSVs into Google Sheets, create pivot tables, or run ad-hoc SQL in BigQuery. Manual spreadsheets are error-prone (formula drift, copy-paste mistakes); BigQuery is powerful but requires data pipeline setup and dedicated SQL skills. A single-file function-based approach sits between these extremes: it provides reproducible, versioned logic that non-engineers can trigger, while avoiding the overhead of building a full ETL pipeline. This makes the approach faster to iterate and cheaper to operate for early-stage teams.

Business and productivity impact

Quantified benefit: automate weekly CPL report generation and save ~3–5 hours/week for a three-person content + acquisition team. Conservatively, if an engineer or analyst spends 4 hours/week doing manual processing at $75/hr, this saves $15,600/year. In one case-study-style trial, automating CSV processing reduced manual errors by ~70% and cut reporting latency from 2 business days to real-time API access.

Industry trend: According to a 2024 Forrester marketing operations report, ~62% of midsize companies still rely on CSV exports for ad reporting; automating CSV-to-API workflows is a common next step.

How Functory Makes It Easy

To publish this as a Functory function you wrap the core logic in a single Python main(...) entrypoint. Functory will call main(...) directly for each execution, and parameters become the UI and API inputs. Example parameter types: input_file: FilePath (CSV upload), start_date: str, end_date: str, min_leads: int.

Developer steps on Functory:

  • Choose an exact Python runtime, e.g., 3.11.11, so your environment is deterministic.
  • Create a requirements.txt where each dependency is pinned, for example: pandas==1.5.3
  • Wrap your parsing/aggregation code into main(input_csv: str, output_prefix: str, min_leads: int=1) and return a path-like output (e.g., '/tmp/cpl_report.csv'). Functory will expose that file for download.
  • Upload or paste the single-file script in the Functory UI and publish. The platform handles provisioning CPU, autoscaling, and secure execution.
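
Under those steps, the entrypoint is a thin wrapper. A sketch of its shape — the output path and parameter names are illustrative, and the commented-out call delegates to generate_cpl_report from the Python example earlier in this article:

```python
def main(input_file: str, output_prefix: str = "cpl_report", min_leads: int = 1) -> str:
    # Functory calls main(...) per execution; these parameters become
    # the UI fields and API inputs.
    out_path = f"/tmp/{output_prefix}.csv"
    # generate_cpl_report(input_file, out_path, min_leads=min_leads)
    return out_path  # a returned path is served as a downloadable file
```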

Inputs/outputs via the API: Main parameters become JSON fields or file uploads in multipart. The return value (a path string) is served as a downloadable file in the function's result. You can trigger the function via the Functory web UI, a simple POST request from your backend, or from an LLM agent that orchestrates reporting.

Concrete benefits you get from Functory: no servers to manage, precise runtime versions, automatic scaling of executions, pay-per-use billing, and built-in logging (print statements) so you can debug failed CSVs quickly. You can chain this function with another Functory function that emails the CSV or pushes it to a Google Sheet for downstream consumers.

Alternatives and trade-offs

Alternative patterns include: 1) fully managed ETL (Airflow + BigQuery), 2) in-spreadsheet formulas and Apps Script, or 3) a microservice in your cloud account. ETL tools give long-term scale at higher setup cost; spreadsheets are fast to start but fragile; a custom microservice gives control but requires ops. The single-file function deployed on a NoOps platform is superior for fast iteration and low-maintenance operations in early-stage teams.

Conclusion: Converting Google Ads CSV exports to a reliable cost-per-lead report is a small but high-value automation that reduces manual work, prevents calculation errors, and provides an API for downstream tools. Next steps: implement the provided generate_cpl_report(...) function, add unit tests for cost-unit detection and conversions column heuristics, and publish as a Functory function with a pinned Python runtime and exact dependencies so your content team can call it from sheets or dashboards.

Thanks for reading.