📋 Methodology

How CompFrame benchmarks are built

CompFrame benchmarks are opinionated, not comprehensive. They reflect how comp plans are actually designed at real seed-to-Series-C SaaS companies, built from three sources in priority order:

  1. Plans designed by CompFrame's founder across 2,000+ engagements from seed to public companies.
  2. Anonymized plans built by founders using CompFrame since March 2026.
  3. Published sales-specific research, used as a sanity check on our numbers, not as a primary source.

We show a typical range per role, stage, and ACV tier. We show sample size per cell. We show when we're guessing. We do not sell statistical precision we do not have.

On this page
  1. What we show you per cell
  2. What we do not use and why
  3. How sample size is disclosed
  4. Update cadence (the honest version)
  5. How geography is handled
  6. What our benchmarks do not cover
  7. Why we use ranges, not percentiles
  8. How benchmarks feed the plan generator
  9. Known limitations and biases

What we show you per cell

Every benchmark cell is a combination of three dimensions: role, company stage, and ACV tier.

That gives us 45 cells across our coverage area. For each cell, we publish:

  1. A range: a low, a typical, and a high figure.
  2. The sample size behind the range.
  3. A "Last updated" stamp.

Where relevant, we also publish role-specific context (for example, BDRs are typically measured on qualified pipeline, not closed revenue, and that changes the whole variable pay structure).

What we do not use and why

We do not use Glassdoor, Payscale, LinkedIn-reported salaries, Indeed, or any consumer self-report site as a benchmark source. The figures there are self-reported and unverified, and the sites impose no standardized role or stage definitions, so an "SDR" or "AE" number can mean almost anything.

We reference Bridge Group's Sales Development and SaaS Sales reports because those are built on surveys of sales leaders at SaaS companies with standardized definitions. We cross-check our numbers against theirs when a Bridge Group cohort maps to one of ours. We do not re-publish their figures.

How sample size is disclosed

Every benchmark cell shows one of:

  1. n=[number]: the count of anonymized plans behind the cell.
  2. n=Founder experience: there is no dataset behind the cell yet, and the range reflects founder judgment from prior engagements.

Sample size grows as more founders build plans in CompFrame. Cells with n=Founder experience today may become n=[number] cells in future quarters.
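The disclosure rule above can be sketched in a few lines. This is illustrative only, not CompFrame's implementation; the class name, fields, and salary figures are all hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class BenchmarkCell:
    """One benchmark cell: a role x stage x ACV-tier combination."""
    role: str
    stage: str
    acv_tier: str
    low: int       # bottom of the published range, USD
    typical: int   # founder-informed center of gravity
    high: int      # top of the published range, USD
    n: Optional[int] = None  # None means no dataset behind the cell yet

    def sample_label(self) -> str:
        # Cells without a numeric sample fall back to founder experience.
        return f"n={self.n}" if self.n is not None else "n=Founder experience"

cell = BenchmarkCell("SDR", "Seed", "<$10k ACV", 55_000, 65_000, 75_000)
print(cell.sample_label())  # n=Founder experience
```

The point of the explicit `None` default is that the absence of data is itself published, never silently replaced by a number.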

Update cadence (the honest version)

We update benchmarks when one of three things happens:

  1. A meaningful number of new plans have been added to our dataset for a given cell (at minimum, a 20% change in underlying sample).
  2. A relevant third-party report is published that shifts our view (for example, a new Bridge Group report).
  3. We observe a market inflection that we judge important enough to reflect (for example, a compression in SDR base salaries in a down funding environment).

In practice this means most cells update every few quarters. A "Last updated" stamp is shown per cell. We do not claim daily or continuous updates because our sources do not update that way.

How geography is handled

Geography is applied as a multiplier on top of the base cell:

These multipliers are blunt. They apply uniformly across roles and stages, which is a simplification. Real compensation premia in high-cost metros vary by role (senior enterprise AEs typically command a larger geo premium than entry-level SDRs). We do not currently model this. If we add role-specific geography later, we will flag it in the change log.
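The multiplier mechanics look like this in miniature. The multiplier values below are invented for illustration; CompFrame's actual published multipliers may differ.

```python
# Hypothetical geo multipliers, for illustration only.
GEO_MULTIPLIERS = {
    "US national baseline": 1.00,
    "High-cost metro": 1.15,
    "Lower-cost region": 0.90,
}

def apply_geo(range_usd: tuple, geo: str) -> tuple:
    """Apply a single geography multiplier uniformly to (low, typical, high).

    Note the simplification described above: the same multiplier is used
    for every role and stage, even though real geo premia vary by role.
    """
    m = GEO_MULTIPLIERS[geo]
    low, typical, high = range_usd
    return (round(low * m), round(typical * m), round(high * m))

print(apply_geo((55_000, 65_000, 75_000), "High-cost metro"))
# (63250, 74750, 86250)
```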

What our benchmarks do not cover

We are honest about gaps:

Why we use ranges, not percentiles

A statistical percentile breakdown (P25 / P50 / P75) implies a dataset with hundreds of observations per cell, enough to compute real quartiles. We are not there yet. Using that language would overstate our precision and imply a rigor we do not have.

Instead we publish a range: a low, a typical, and a high. These reflect where we see real plans land, with the typical figure being a founder-informed center of gravity for the cell. If and when our dataset grows enough to compute real percentiles with defensible confidence, we will switch the language and say so on this page.

How benchmarks feed the plan generator

Benchmarks are the evidence base behind CompFrame's plan generator. When you build a plan, every number we recommend can be traced back to a specific benchmark cell and the sample size behind it. If a cell is n=Founder experience, the plan generator marks the number as "founder rule of thumb" in the plan output so you know how much to weight it.
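The provenance rule can be sketched as follows. This is a hypothetical shape, not CompFrame's actual code; the `Cell` fields and `recommend` function are stand-ins.

```python
from collections import namedtuple

# Minimal stand-in for a benchmark cell; field names are illustrative.
Cell = namedtuple("Cell", ["role", "stage", "acv_tier", "typical", "n"])

def recommend(cell: Cell) -> dict:
    """Attach provenance to a recommended number, as described above."""
    return {
        "value": cell.typical,
        "source_cell": (cell.role, cell.stage, cell.acv_tier),
        # Cells without a numeric sample are flagged as rules of thumb
        # so the reader knows how much weight to give the number.
        "basis": f"n={cell.n}" if cell.n is not None else "founder rule of thumb",
    }

rec = recommend(Cell("AE", "Series A", "$25-50k ACV", 90_000, None))
print(rec["basis"])  # founder rule of thumb
```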

The loop also runs the other way. Every time a founder builds and saves a plan in CompFrame, the anonymized numbers feed back into the benchmarks dataset for future updates. You can opt out of this in your account settings.

Known limitations and biases

A few biases we want to be explicit about:

Last reviewed: April 22, 2026. Questions or disagreements about specific cells? Email hello@compframe.com and we'll either update the cell or explain why we disagree.