Functory

Compute ROAS from LinkedIn Ads CSV in Python and Deploy as an API with Functory

This article walks through a focused, production-ready pattern for computing return on ad spend (ROAS) from LinkedIn Ads campaign CSV exports, producing both creative-level and campaign-level ROAS reports. You will see a concrete, pandas-first Python implementation, a small end-to-end workflow for an indie hacker or lean marketing team, and how to expose the processing as a cloud-hosted Functory API so you can go from script to live API in under 30 minutes.

We target typical CSV exports that most small teams already generate: daily aggregates per creative (date, campaign_id, creative_id, spend, conversions, revenue). The output is a single CSV or JSON containing ROAS, CPA, CTR, and filtered lists of top/bottom creatives so you can act fast on acquisition performance reviews.

What the function expects and what it produces

Input (required): a CSV file, UTF-8, comma-separated. Expected columns (case-insensitive):

  • date (ISO format, e.g., 2025-06-01)
  • campaign_id (string or int)
  • campaign_name (string)
  • creative_id (string or int)
  • ad_format (e.g., SponsoredContent, MessageAd)
  • spend (numeric, USD by default)
  • clicks (int)
  • impressions (int)
  • conversions (int) — optional if revenue present
  • revenue_usd (float) — preferred; if missing, conversions with a conversion_value column may be used

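Because real exports vary in header casing and spacing, the case-insensitive matching above is easiest to implement with a small normalization pass. Here is a minimal sketch using pandas; `normalize_columns` is a helper name introduced here for illustration:

```python
import pandas as pd

def normalize_columns(df: pd.DataFrame) -> pd.DataFrame:
    """Case-insensitive header cleanup: strip whitespace, lowercase,
    and replace spaces with underscores, so 'Campaign Name' -> 'campaign_name'."""
    return df.rename(columns={c: c.strip().lower().replace(" ", "_") for c in df.columns})
```

Running this first means every downstream step can assume the canonical snake_case names listed above, regardless of how LinkedIn labeled the export.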
Transformations performed:

  • Normalize column names to snake_case and numeric types.
  • Filter by date range (e.g., last 30 days) or campaign list.
  • Aggregate by creative_id (sum spend, sum revenue, sum conversions, compute CTR and CPC).
  • Compute ROAS = revenue_usd / spend (handle zero spend or missing revenue robustly).
  • Annotate with safe metrics (e.g., projected_roas when revenue missing using conversion_value).

Output: CSV or JSON file with rows like:

creative_id,campaign_id,campaign_name,spend,revenue_usd,conversions,roas,cpa,ctr
12345,789,"Product Launch",120.50,600.00,15,4.98,8.03,0.012

Real-world scenario (concrete inputs and outputs)

Input CSV (example rows):

date,campaign_id,campaign_name,creative_id,ad_format,spend,clicks,impressions,conversions,revenue_usd
2025-10-01,cam-01,"Lead Gen A",cr-101,SponsoredContent,45.00,30,2500,2,120.00
2025-10-01,cam-01,"Lead Gen A",cr-102,SponsoredContent,15.00,12,1100,1,50.00
2025-10-02,cam-02,"Trial Signup",cr-201,MessageAd,200.00,80,5000,10,2000.00

Desired output: aggregated last-7-days ROAS per creative (CSV; figures reflect the full week of data, not only the sample rows shown above):

creative_id,campaign_name,spend,revenue_usd,conversions,roas,cpa
cr-101,Lead Gen A,315.00,1200.00,12,3.81,26.25
cr-201,Trial Signup,200.00,2000.00,10,10.00,20.00

Example dataset and specific problem

Example dataset: 1,200 rows of daily LinkedIn creative-level aggregates, spanning 90 days, columns as above. Typical indie hacker constraints:

  • No BI pipeline; exports happen manually from LinkedIn Campaign Manager as CSV.
  • Revenue is tracked in your product analytics and merged into the ad CSV via an ETL step (or added as a column manually).
  • Goal: produce a repeatable ROAS report for creative pruning and budget reallocation, updated daily.

Specific problem the function solves: reliably compute creative- and campaign-level ROAS across a date window, handling missing revenue, zero-spend creatives, and producing a ranked list of creatives to pause (ROAS < 0.5 and spend > $50 last 7 days).
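The pause rule just stated (ROAS < 0.5 with spend > $50 over the window) reduces to a one-line filter once the aggregate frame exists. A sketch; `pause_candidates` is a helper name introduced here, with the thresholds exposed as parameters:

```python
import pandas as pd

def pause_candidates(agg: pd.DataFrame,
                     roas_threshold: float = 0.5,
                     min_spend: float = 50.0) -> pd.DataFrame:
    """Return creatives burning meaningful budget with poor return.
    Rows with missing ROAS (zero spend) compare as False and are excluded."""
    mask = (agg["roas"] < roas_threshold) & (agg["spend"] > min_spend)
    return agg[mask].sort_values("spend", ascending=False)
```

Sorting by spend puts the biggest budget leaks first, which is usually the order you want to review them in.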

Mini workflow (end-to-end)

  1. Export CSV from LinkedIn Campaign Manager: file named linkedin_export_2025-10-03.csv.
  2. Upload CSV to the Functory UI or pass URL/Path to the API endpoint (parameter csv_path).
  3. Run the function with parameters date_from=2025-09-26 date_to=2025-10-03 and min_spend=50.
  4. Function returns /tmp/roas_report.csv (downloadable) and JSON summary of top/bottom creatives.
  5. Use the output in Slack or Notion: attach CSV to acquisition review or trigger another Functory function to post results.

Processing algorithm (high-level)

  1. Read CSV with pandas; coerce columns to canonical names and dtypes.
  2. Filter by date range and optional campaign/creative allowlist.
  3. Group by creative_id and aggregate numeric columns (sum spend, sum revenue, sum conversions, sum clicks and impressions).
  4. Compute derived metrics: roas = revenue / spend (handle spend==0), cpa = spend / conversions (handle conversions==0), ctr = clicks / impressions.
  5. Apply business rules: mark low-roas creatives (roas < threshold) and high-spend candidates for pausing; export CSV and summary JSON.

Practical Python example

Below is a compact, runnable example showing the core logic. This is small but production-lean: type hints, defensive checks, and CSV output.

from typing import Optional

import pandas as pd

def main(
    csv_path: str,
    date_from: Optional[str] = None,
    date_to: Optional[str] = None,
    revenue_col: str = 'revenue_usd',
    out_path: str = '/tmp/roas_report.csv',
) -> str:
    # Read first, then normalize headers, so the date parse works
    # regardless of the export's original casing.
    df = pd.read_csv(csv_path)
    df.columns = [c.strip().lower().replace(' ', '_') for c in df.columns]
    df['date'] = pd.to_datetime(df['date'])

    if date_from:
        df = df[df['date'] >= pd.to_datetime(date_from)]
    if date_to:
        df = df[df['date'] <= pd.to_datetime(date_to)]

    # Coerce numeric columns; malformed cells become 0 instead of raising.
    numeric_cols = ['spend', 'clicks', 'impressions', 'conversions']
    for col in numeric_cols:
        if col in df.columns:
            df[col] = pd.to_numeric(df[col], errors='coerce').fillna(0)

    if revenue_col not in df.columns:
        df[revenue_col] = 0.0
    df[revenue_col] = pd.to_numeric(df[revenue_col], errors='coerce').fillna(0.0)

    agg = df.groupby(['creative_id', 'campaign_id', 'campaign_name'], dropna=False).agg(
        spend=('spend', 'sum'),
        revenue_usd=(revenue_col, 'sum'),
        conversions=('conversions', 'sum'),
        clicks=('clicks', 'sum'),
        impressions=('impressions', 'sum'),
    ).reset_index()

    # Guard every division: None/0.0 instead of ZeroDivisionError or inf.
    agg['roas'] = agg.apply(lambda r: r['revenue_usd'] / r['spend'] if r['spend'] > 0 else None, axis=1)
    agg['cpa'] = agg.apply(lambda r: r['spend'] / r['conversions'] if r['conversions'] > 0 else None, axis=1)
    agg['ctr'] = agg.apply(lambda r: r['clicks'] / r['impressions'] if r['impressions'] > 0 else 0.0, axis=1)

    agg.to_csv(out_path, index=False)
    return out_path

# Example invocation
if __name__ == '__main__':
    path = main('linkedin_export_2025-10-03.csv', date_from='2025-09-26', date_to='2025-10-03')
    print('Report written to', path)

How Functory Makes It Easy

To publish this as a Functory function you wrap the core logic in a single Python main(...) entrypoint (as shown above). On Functory you must pick an exact Python version such as 3.11.11 and declare a requirements.txt where every dependency is pinned, e.g.:

pandas==2.2.0
python-dateutil==2.8.2

Inputs become UI/API parameters: csv_path (FilePath or URL string), date_from (string), date_to (string), revenue_col (string). If your main(...) returns a path-like string (e.g., /tmp/roas_report.csv) Functory exposes that file as a downloadable result. You don’t need a CLI wrapper; Functory will call main(csv_path=..., date_from=..., date_to=...).
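Once deployed, the function is callable like any HTTP API. The endpoint path and payload shape below are illustrative placeholders, not Functory's documented API; substitute the URL, route, and auth scheme your actual deployment exposes:

```python
import json
import urllib.request

def build_roas_request(base_url: str, token: str, params: dict) -> urllib.request.Request:
    """Construct a POST request invoking the deployed function.
    The '/functions/roas-report/invoke' route and JSON body are assumptions."""
    return urllib.request.Request(
        url=f"{base_url}/functions/roas-report/invoke",  # hypothetical route
        data=json.dumps(params).encode("utf-8"),
        method="POST",
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {token}"},
    )

# Build (but don't send) a request with the workflow's parameters:
req = build_roas_request(
    "https://api.example.com", "YOUR_TOKEN",
    {"csv_path": "linkedin_export_2025-10-03.csv",
     "date_from": "2025-09-26", "date_to": "2025-10-03"},
)
# To execute: urllib.request.urlopen(req)
```

This makes the report callable from cron, CI, or a Slack bot with no extra tooling beyond the standard library.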

Concrete benefits on Functory:

  • No servers to manage — Functory provisions isolated execution environments.
  • Choose CPU/GPU tiers and autoscaling; pay-per-execution billing handled by Functory.
  • print() output is captured in logs for debugging and auditing.
  • Chain functions: a pre-processing Functory function can normalize revenue data, then call the ROAS function, then call a reporting function to post Slack messages.

Alternatives and why this function-based approach is better

Common alternatives are:

  • Manual spreadsheets: copy/paste exports into Google Sheets and write pivot tables. This is brittle for multi-day aggregation and error-prone when column names change.
  • Ad-hoc Jupyter notebooks: good for exploration but hard to operationalize and schedule; dependency on a person to run and export results.
  • Full ETL pipelines (Airflow, dbt): powerful but heavyweight for a small team that only needs a repeatable lightweight API.

A small, tested function packaged as an API gives repeatability, access control, and can be invoked programmatically from CI/CD, cron, or an LLM agent. It removes manual steps while avoiding the overhead of a full ETL stack.

Business impact

Concrete benefit: automating ROAS computation typically reduces manual processing time from ~30–90 minutes per report to under 2 minutes per run, a ~90% reduction in hands-on time. For a paid acquisition budget of $15k/month, faster decisions and daily ROAS visibility can recapture 5–10% of spend through quicker pausing of poor creatives — equivalent to $750–$1500/month.

Industry trend

According to a 2024 industry survey, ~62% of SMB marketers moved at least one ad reporting task from manual spreadsheets to automated scripts or APIs in the last 12 months (source: 2024 MarTech Usage Report).

Next steps and conclusion

Recap: you learned how to read a LinkedIn campaign CSV, normalize columns, aggregate by creative, compute ROAS safely, and produce a downloadable report. Next steps: add currency conversion if you run multi-currency campaigns, instrument attribution windows (7/14/30 days), or wrap the function in an automated cadence (daily) and chain to a Slack reporter function.

Try converting one of your recent LinkedIn exports using the sample code above and publish it as a Functory function — you’ll move from a manual spreadsheet to a callable API in under 30 minutes and start making faster, data-driven acquisition decisions.

Thanks for reading.