Your Portfolio's GTM Dashboard Was Built for 2019
The metrics operating partners use to evaluate portco GTM health — ARR, LTV:CAC, pipeline coverage, SDR productivity — are systematically misleading for AI-era companies.
Most operating partners reviewing a portco’s GTM health look at the same dashboard: ARR growth, pipeline coverage, CAC, LTV:CAC, AE quota attainment, SDR activity metrics.
That dashboard was designed for a world where software had predictable seat-based pricing, linear buying cycles, and human-paced sales motion. That world is ending faster than most boards have updated their reporting cadence.
The metrics aren’t just imprecise. In many cases they’re actively misleading. A portco can look healthy by every traditional GTM metric while its actual competitive position deteriorates in real time.
Here’s where the breakdowns are happening — and what to track instead.
ARR Is Now a Lagging Indicator of GTM Dysfunction
ARR was always a trailing metric. What’s new is how far it trails.
In a traditional SaaS motion, ARR problems showed up 6-9 months after the GTM started failing. A bad outbound quarter turned into weak pipeline, which turned into missed quota, which showed up in ARR around the time the board was already aware of the problem.
For AI-native companies, that lag is getting longer, and the signal is getting noisier. Credit-based and consumption-based pricing models — which surged 126% in 2025 — mean ARR is increasingly a function of usage patterns, not just customer count. A portco can show ARR growth while simultaneously losing pricing power, contracting average contract sizes, and watching gross margin deteriorate.
Clay voluntarily took a 10% revenue hit in March 2026 by repricing their platform — cutting data credit costs 50-90% and moving to zero-markup on AI token pass-through — because the old credit model was creating the wrong incentives. From the outside, that shows as an ARR dip. From the inside, it’s a deliberate flywheel investment. Traditional ARR reporting can’t tell the difference.
If you’re an operating partner and ARR is your primary signal for GTM health, you are looking at a metric that tells you where the business was, not where it’s going.
LTV:CAC Is Broken When LTV Is Unknowable
LTV:CAC was always a somewhat creative exercise. Now it’s close to fiction.
The inputs required for a meaningful LTV calculation — stable churn rate, predictable expansion, consistent gross margins — are assumptions that almost no AI-native company can honestly make right now. Anthropic and OpenAI are shipping model improvements at a pace that reprices the underlying cost structure every quarter. Buyers have AI experimentation budgets that don’t behave like traditional software budgets. Enterprise procurement for AI tooling is genuinely unsettled.
As Altimeter’s Jamin Ball has pointed out, the fastest-growing AI companies today are the ones sitting in the token path. Their revenue scales with consumption. But consumption-driven revenue has very different retention dynamics than subscription revenue. A customer who is deeply consuming your product is high-LTV. A customer who bought a subscription but hasn’t activated their usage is masking churn inside your ARR number.
The practical implication: any LTV:CAC ratio your portcos are showing you right now should be treated as directional at best. The question isn’t “what is our LTV:CAC?” It’s “what are the leading indicators that a customer will expand versus contract, and are those moving in the right direction?”
SDR Productivity Metrics Assume a Human-Paced Motion
The classic SDR productivity stack: dials per day, emails sent, meetings booked, pipeline generated. These metrics made sense when outbound was a purely human activity constrained by hours in a day.
When a portco deploys AI-native outbound — Clay enrichment, Claygent personalization, HeyReach on LinkedIn, Instantly for email sequences — the denominator changes completely. A two-person outbound function running AI tooling correctly can generate the activity volume of a ten-person team running manually. Measuring them by “dials per day” is meaningless. Measuring meetings booked without adjusting for outreach volume is equally useless.
More dangerously: a portco running AI outbound badly will show high activity volume with terrible conversion. The traditional SDR metrics will make it look like the team is working. What’s actually happening is high-velocity spam with a grammar upgrade — and the deliverability consequences will compound quietly until email domains are burned and reply rates have collapsed.
What to actually ask: What is the positive reply rate on outbound sequences? What percentage of booked meetings show up and convert to opportunities? Is the AI system producing specific, researched outreach or generic personalization? Is contact data being verified before sequences run?
Those questions surface the actual GTM health. Dials per day does not.
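Those review questions reduce to simple arithmetic. A minimal sketch — every number below is hypothetical, invented to show the calculation, not pulled from any real portco:

```python
# The outbound-health questions above, as simple arithmetic.
# All figures are hypothetical.

def outbound_health(replies_positive, replies_total,
                    meetings_booked, meetings_held,
                    opportunities_created):
    """Return the rates worth asking about in a GTM review."""
    positive_reply_rate = replies_positive / replies_total
    show_rate = meetings_held / meetings_booked
    meeting_to_opp_rate = opportunities_created / meetings_booked
    return positive_reply_rate, show_rate, meeting_to_opp_rate

# A hypothetical team's quarter of sequenced outbound:
pos_rate, show_rate, opp_rate = outbound_health(
    replies_positive=90, replies_total=600,
    meetings_booked=45, meetings_held=32, opportunities_created=18)

print(f"positive reply rate: {pos_rate:.0%}")      # share of replies that aren't "remove me"
print(f"show rate: {show_rate:.0%}")               # booked meetings that actually happen
print(f"meeting -> opportunity: {opp_rate:.0%}")   # booked meetings that convert
```

None of these rates appear on a dials-per-day dashboard, yet all three come straight from data the portco already has.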
The Metrics That Actually Matter Now
Rather than abandoning structure entirely, the operating partner lens needs to shift toward metrics that are predictive rather than trailing, and that account for the consumption and AI dynamics reshaping how portcos grow.
Pipeline quality over pipeline quantity. Opportunity count and pipeline coverage ratio are noisy. Conversion rate from first meeting to closed-won, broken down by ICP segment and acquisition channel, is the signal. A portco with 3x pipeline coverage converting at 12% is in worse shape than one with 2x coverage converting at 28%.
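The coverage-versus-conversion claim is worth working through. A quick sketch, using the percentages from the example above and an assumed $1M quota for illustration:

```python
# Pipeline coverage x conversion, using the figures from the text.
# The $1M quota is an assumption for illustration only.

quota = 1_000_000

def expected_bookings(coverage, win_rate, quota):
    """Coverage alone says nothing; coverage times conversion does."""
    pipeline = coverage * quota
    return pipeline * win_rate

a = expected_bookings(coverage=3.0, win_rate=0.12, quota=quota)  # 3x at 12%
b = expected_bookings(coverage=2.0, win_rate=0.28, quota=quota)  # 2x at 28%

print(f"3x coverage at 12%: ${a:,.0f}")  # $360,000
print(f"2x coverage at 28%: ${b:,.0f}")  # $560,000
```

The team with "better" coverage produces roughly a third less expected bookings — which is why coverage ratio without conversion rate is noise.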
Gross profit per customer, not just ARR per customer. As pricing models fragment into platform plus consumption, average contract value becomes a poor proxy for economics. Gross profit per customer — inclusive of AI token costs, data costs, and support overhead — is what tells you whether growth is compounding or eroding.
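A hedged sketch of that calculation, with the cost categories named above. Every dollar figure is hypothetical — the point is that two customers with identical ACV can have wildly different economics once AI serving costs are included:

```python
# Gross profit per customer for an AI product, inclusive of the cost
# categories named above. All figures are hypothetical.

def gross_profit_per_customer(arr, token_costs, data_costs, support_costs):
    cost_of_revenue = token_costs + data_costs + support_costs
    gross_profit = arr - cost_of_revenue
    margin = gross_profit / arr
    return gross_profit, margin

# Two customers at the same $60k ACV:
gp_a, m_a = gross_profit_per_customer(arr=60_000, token_costs=4_000,
                                      data_costs=2_000, support_costs=6_000)
gp_b, m_b = gross_profit_per_customer(arr=60_000, token_costs=28_000,
                                      data_costs=5_000, support_costs=9_000)

print(f"customer A: ${gp_a:,} gross profit ({m_a:.0%} margin)")  # $48,000 (80%)
print(f"customer B: ${gp_b:,} gross profit ({m_b:.0%} margin)")  # $18,000 (30%)
```

On an ACV dashboard these two customers look identical. On a gross-profit-per-customer view, one is compounding and the other is eroding.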
Token consumption trends at portcos selling AI products. For portcos that have AI-native products (and increasingly, this is most of them), token consumption per active customer is the analog of daily active usage. It shows whether customers are actually using the AI, whether they’re going deeper over time, and whether the product is delivering enough value to justify continued consumption spend. A flat or declining token consumption trend inside a growing ARR number is a warning sign worth investigating.
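That warning sign is easy to operationalize. A minimal check, assuming monthly ARR and per-customer consumption series (the numbers below are invented for illustration):

```python
# Flagging the warning sign described above: ARR growing while token
# consumption per active customer is flat or declining.
# Both monthly series are hypothetical.

def trend(series):
    """Crude trend: relative change from first month to last."""
    return (series[-1] - series[0]) / series[0]

arr_by_month = [1.0, 1.1, 1.25, 1.4]        # ARR, $M
tokens_per_active = [820, 800, 790, 760]    # thousands of tokens per customer

arr_growth = trend(arr_by_month)            # +40%
usage_growth = trend(tokens_per_active)     # ~ -7%

if arr_growth > 0 and usage_growth <= 0:
    print("warning: ARR up, per-customer consumption flat or down")
```

A real implementation would smooth over seasonality and segment by cohort, but even this crude version surfaces a divergence the ARR line alone hides.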
Outbound system maturity, not headcount. The correct benchmark for a portco’s outbound capability is no longer “how many SDRs do you have?” It’s “how automated, how verified, and how iterated is your outbound system?” A portco with two people running a well-built Clay-to-sequence pipeline is more valuable than one with eight SDRs running manual outreach. The operating partner review should include the actual system architecture, not just the headcount.
What to Ask in the Next Portco GTM Review
The standard board deck GTM slide shows ARR, pipeline, headcount, and maybe a win/loss ratio. That slide is not giving you what you need.
Add these questions to the next review:
- What percentage of outbound replies are positive versus “remove me”? What’s the trend?
- What is gross margin per customer, not just ACV? Is it stable or compressing?
- For AI-native products: what is monthly token consumption per active customer? Is it growing?
- How long does it take to get a new account from the target list into an active outbound sequence? Days or weeks?
- When did you last audit your CRM contact data for job changes? What percentage of your active sequences are hitting stale contacts?
- What’s the conversion rate from first meeting to closed-won, by ICP segment?
These questions won’t appear on a standard GTM slide. That’s the point. The portcos that can answer them coherently are the ones building GTM infrastructure that compounds. The ones that can’t are running 2019 playbooks and calling it AI because they added an AI-generated opener to their cold email.
The Underlying Problem
The GTM metrics most operating partners use were designed to make patterns visible in a predictable, seat-based, human-paced sales world. That world hasn’t disappeared — but it’s no longer the whole picture, and for many portcos, it’s increasingly not even the dominant picture.
The tools and models available to GTM teams have changed faster than the reporting frameworks used to evaluate those teams. The result is a growing gap between what the dashboard shows and what’s actually happening.
Closing that gap is one of the highest-leverage things an operating partner can do for a portfolio right now. Not because the new metrics are harder, but because almost no one is asking for them yet.
The Dark Funnel covers AI GTM infrastructure for operators who build, not advise. New posts when there’s something worth reading.