This is a demo report. Numbers come from PitCrew’s real engine on a hand-curated agent description.
Audit your own agent →
Audit · May 7, 2026
Brain: anthropic · Haiku 4.5
Generation: suno · Suno v3.5

Your forecast is in

Production-music shop spinning up royalty-free tracks on demand. Marketer types a brief (tempo, mood, genre, target length), the agent drafts a Suno prompt + style cue, generates the track, and queues it for review. ~20…

What we assumed

These are the inputs we used. If anything looks off, re-run the audit with better numbers.

System prompt: 86 tokens (estimated)
Avg user input: 250 tokens
Avg output: 400 tokens
Calls per month: 600
Batch share: 0%
Pricing as of: Apr 28, 2026
Output per call (unit): 1
Generations per agent call: 1
Regeneration rate: 1.50×
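The token arithmetic implied by these inputs can be sketched as below. The per-token prices are placeholders, not PitCrew's actual rates, and the sketch assumes the regeneration rate multiplies LLM calls as well as track generations; swap in your provider's real pricing.

```python
# Steady-state token arithmetic from the audit inputs above.
# IN_PRICE / OUT_PRICE are PLACEHOLDER rates, not real provider pricing.
SYSTEM_TOKENS = 86
USER_TOKENS = 250
OUTPUT_TOKENS = 400
CALLS_PER_MONTH = 600
REGEN_RATE = 1.5            # each brief triggers ~1.5 generations on average

IN_PRICE = 1.00 / 1_000_000   # $/input token (placeholder)
OUT_PRICE = 5.00 / 1_000_000  # $/output token (placeholder)

effective_calls = CALLS_PER_MONTH * REGEN_RATE
input_tokens = (SYSTEM_TOKENS + USER_TOKENS) * effective_calls
output_tokens = OUTPUT_TOKENS * effective_calls

llm_cost = input_tokens * IN_PRICE + output_tokens * OUT_PRICE
print(f"{input_tokens:,.0f} in / {output_tokens:,.0f} out -> ${llm_cost:.2f}/mo LLM-side")
```

Note that this covers only the LLM ("brain") side; the Suno generation cost is billed separately and dominates a forecast like the one below.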
How precise is this?
Savings band spans 209% of the central estimate. Top sources of uncertainty:
  • Call volume is your guess — typical pre-deploy estimates land within ±50% of actual.
  • Conversation length is a coarse bucket — actual tokens vary by ±40% per call.

Real-bill expectation

PitCrew forecasts steady-state inference cost — the dollars the LLM provider bills for the deterministic, no-extras workload your wizard described. Real production bills are typically 1.2–1.5× higher because the steady-state model excludes:

  • Dev / eval loops (often 10-30% of total spend)
  • Retries, error recovery, idempotency replays
  • Background batch jobs (summaries, classification of past data)
  • A/B traffic on alternate models
  • Embeddings + fine-tunes that ride alongside the agent
Scenario         Steady-state (PitCrew)   Expected real bill
Default build    $37/mo                   $45–$56/mo
PitCrew plan     $36/mo                   $44–$55/mo

The 20–50% multiplier comes from public engineering postmortems and the validation cases in docs/accuracy-validation.md. If your team has tight eval loops and minimal retry traffic, target the low end.
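As arithmetic, the real-bill band is just the steady-state figure times that multiplier range. A minimal sketch (the printed band can differ by a dollar from the table above, since the table's steady-state figures are rounded before display):

```python
def real_bill_band(steady_state: float, lo: float = 1.2, hi: float = 1.5) -> tuple[float, float]:
    """Expected real bill as a multiple of the steady-state forecast."""
    return steady_state * lo, steady_state * hi

# Applying the 1.2-1.5x multiplier to the rounded $37/mo steady-state figure:
low, high = real_bill_band(37.0)
print(f"${low:.0f}-${high:.0f}/mo")
```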

Run another audit for a different build

Tweak inputs, swap the model, see how the forecast moves.

New audit