Drop-in cost tracking, attribution, and budget enforcement for LLM APIs. One line to add. Zero config.
```shell
pip install compute-cfo
# or, for TypeScript:
npm install compute-cfo
```

```python
from compute_cfo import wrap, CostTracker, BudgetPolicy
from openai import OpenAI

# One line to add cost tracking
tracker = CostTracker(
    budget=BudgetPolicy(max_cost=10.0, window="daily", on_exceed="raise"),
)
client = wrap(OpenAI(), tracker=tracker)

# Every call is now tracked and budget-enforced
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello"}],
    metadata={"project": "search", "agent": "summarizer"},
)

tracker.total_cost          # $0.000795
tracker.cost_by("project")  # {"search": 0.000795}
```
Wrap your existing SDK client with one line. No proxy, no config, no code changes. Works with your current codebase.
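Conceptually, a wrapper like this is a thin proxy: it forwards attribute access to the real client and meters each call on the way through. A minimal sketch of that idea in plain Python (illustrative only, not compute-cfo's actual internals; `MeteredClient` and `on_call` are hypothetical names):

```python
class MeteredClient:
    """Forward attribute access to the wrapped client; meter callable attributes."""

    def __init__(self, client, on_call):
        self._client = client
        self._on_call = on_call  # callback invoked after each metered call

    def __getattr__(self, name):
        # Only fires for attributes MeteredClient itself doesn't define,
        # so everything on the underlying client passes through.
        attr = getattr(self._client, name)
        if not callable(attr):
            return attr

        def metered(*args, **kwargs):
            result = attr(*args, **kwargs)
            self._on_call(name, result)  # record usage after the call succeeds
            return result

        return metered
```

Because the proxy delegates everything, calling code keeps its existing shape; only the construction line changes.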
Set hard spending limits per hour, day, or month. Raises an exception before you overspend. No more surprise bills.
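The enforcement model is fail-closed: accumulate spend within the window and raise before a call that would push past the limit, so the overspending call never goes out. A plain-Python sketch of the mechanism (names like `Budget` and `BudgetExceeded` are illustrative, not the library's public API):

```python
class BudgetExceeded(RuntimeError):
    """Raised before a call whose cost would push spend past the limit."""

class Budget:
    def __init__(self, max_cost: float):
        self.max_cost = max_cost
        self.spent = 0.0

    def charge(self, cost: float) -> None:
        # Check first, spend second: the over-limit call is rejected,
        # and recorded spend never exceeds max_cost.
        if self.spent + cost > self.max_cost:
            raise BudgetExceeded(
                f"spend {self.spent + cost:.4f} would exceed limit {self.max_cost:.2f}"
            )
        self.spent += cost
```

An hourly, daily, or monthly window would simply reset `spent` when the window rolls over.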
Tag every call by project, agent, team, or feature. Know exactly where your money goes with cost_by() queries.
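Under the hood, attribution is just a group-by over recorded cost events. A minimal sketch of how a `cost_by`-style query can work (illustrative; `CostLedger` is a hypothetical name, not the library's internals):

```python
from collections import defaultdict

class CostLedger:
    def __init__(self):
        self.events = []  # (cost, metadata) pairs, one per tracked call

    def record(self, cost: float, **metadata) -> None:
        self.events.append((cost, metadata))

    def cost_by(self, key: str) -> dict:
        """Sum cost grouped by a metadata key, e.g. 'project' or 'agent'."""
        totals = defaultdict(float)
        for cost, md in self.events:
            totals[md.get(key, "untagged")] += cost
        return dict(totals)
```

Calls missing a tag land in an "untagged" bucket rather than disappearing from the totals.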
One SDK for OpenAI, Anthropic, Google Gemini, and Mistral. Shared tracker across providers. Same API everywhere.
Up-to-date pricing for GPT-4.1, Claude 4, Gemini 2.5, Mistral Large, and more. Unknown models still work gracefully.
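Per-call cost is a lookup in a per-model price table; "graceful" means an unknown model falls back to a zero rate instead of crashing, so calls are still counted even when they can't be priced. A sketch with illustrative numbers (the prices below are examples only, not the library's actual table):

```python
# Illustrative prices only: (input, output) in USD per 1M tokens.
PRICES = {
    "gpt-4o": (2.50, 10.00),
    "claude-sonnet-4": (3.00, 15.00),
}

def call_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate USD cost for one call; unknown models price at 0 but don't fail."""
    in_price, out_price = PRICES.get(model, (0.0, 0.0))
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000
```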
Send cost events to console, JSONL files, or webhooks. Build dashboards, alerts, or feed into your billing system.
Python and TypeScript. More providers coming soon.
Open source. Apache 2.0. Free forever.
Dashboards, alerts, team billing, and more. Get early access.