📋 Methodology
How CompFrame benchmarks are built
CompFrame benchmarks are opinionated, not comprehensive. They reflect how comp plans are actually designed at real seed-to-Series-C SaaS companies, built from three sources in priority order:
- Plans designed by CompFrame's founder across 2,000+ engagements from seed to public companies.
- Anonymized plans built by founders using CompFrame since March 2026.
- Published sales-specific research, used as a sanity check on our numbers, not as a primary source.
We show a typical range per role, stage, and ACV tier. We show sample size per cell. We show when we're guessing. We do not sell statistical precision we do not have.
What we show you per cell
Every benchmark cell is a combination of three dimensions:
- Role: AE, SDR, BDR, AM, or CSM.
- Company stage: Seed, Series A, or Series B/C+.
- ACV range: SMB ($5K to $25K), Mid-Market ($25K to $100K), or Enterprise ($100K+).
That gives us 45 cells across our coverage area. For each cell, we publish:
- A typical OTE range (low, typical, high). This is a range of plausible totals for this role at this stage and ACV. It is not a statistical quartile breakdown. We use ranges because the underlying dataset is small and opinion-informed.
- Base/variable split norm for the role, with any meaningful variation by stage or ACV called out.
- Quota-to-OTE ratio as a planning rule of thumb. This assumes a reasonable account load and territory, which is usually not your situation at seed stage.
- Accelerator structure that is typical for the role.
- Sample size (n=) for that specific cell, or an explicit "founder experience" tag if we don't have enough plans to publish a real n.
Where relevant, we also publish role-specific context (for example, BDRs are typically measured on qualified pipeline, not closed revenue, and that changes the whole variable pay structure).
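The per-cell fields above can be sketched as a simple record. This is a hypothetical shape with illustrative field names and placeholder numbers, not CompFrame's actual schema or published figures:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class BenchmarkCell:
    # Dimensions (one cell = role x stage x ACV tier)
    role: str              # "AE", "SDR", "BDR", "AM", or "CSM"
    stage: str             # "Seed", "Series A", or "Series B/C+"
    acv_tier: str          # "SMB", "Mid-Market", or "Enterprise"
    # Published figures
    ote_low: int           # low end of plausible total OTE, USD
    ote_typical: int       # founder-informed center of gravity
    ote_high: int          # high end of plausible total OTE
    split: str             # base/variable norm, e.g. "50/50"
    quota_to_ote: float    # rule of thumb: annual quota = ratio x OTE
    n: Optional[int]       # observed plans; None means "founder experience"

# Placeholder numbers for illustration only.
cell = BenchmarkCell("AE", "Series A", "Mid-Market",
                     140_000, 160_000, 185_000, "50/50", 4.5, 12)

# The quota-to-OTE rule of thumb turns a typical OTE into a planning quota.
implied_quota = cell.quota_to_ote * cell.ote_typical  # 720000.0
```

The quota line shows why the ratio is a planning tool: pick the OTE first, then the ratio implies the quota the plan has to support.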
What we do not use and why
We do not use Glassdoor, Payscale, LinkedIn-reported salaries, Indeed, or any consumer self-report site as a benchmark source. Reasons:
- These sites aggregate self-attested numbers from anyone who filled out a form. The respondent pool is heavily skewed toward junior roles, non-quota-carrying roles, and people motivated to either over-report or under-report for personal reasons.
- Role titles are inconsistent. "Account Executive" on Glassdoor can mean anything from an inside SMB AE to a non-quota-carrying customer-facing account manager. These get blended into one number.
- Quota attainment is never disclosed, so reported earnings cannot be reverse-engineered into a credible OTE at plan.
- They are consumer-grade compensation data. They are not sales-ops-grade.
We reference Bridge Group's Sales Development and SaaS Sales reports because those are built on surveys of sales leaders at SaaS companies with standardized definitions. We cross-check our numbers against theirs when a Bridge Group cohort maps to one of ours. We do not re-publish their figures.
How sample size is disclosed
Every benchmark cell shows one of:
- n=[number] when we have ten or more plans observed in the cell. We update the count when it changes materially.
- n=[number] (small sample) when we have between three and nine observed plans. Use these as directional, not definitive.
- Founder experience when we have fewer than three observed plans in the cell. These figures reflect the judgment of CompFrame's founder based on 2,000+ plans designed across stages and roles. We label them clearly so you can weight them accordingly.
Sample size grows as more founders build plans in CompFrame. Cells labeled Founder experience today may earn a real n=[number] in future quarters.
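The three disclosure tiers reduce to a simple rule. A sketch, with thresholds taken from this page (the function name is illustrative, not part of CompFrame):

```python
def sample_size_label(n: int) -> str:
    """Map an observed-plan count to the disclosure label shown on a cell.

    Tiers per the methodology above: 10+ plans gets a plain n=,
    3-9 gets a small-sample caveat, fewer than 3 falls back to
    founder experience.
    """
    if n >= 10:
        return f"n={n}"
    if n >= 3:
        return f"n={n} (small sample)"
    return "Founder experience"

print(sample_size_label(14))  # n=14
print(sample_size_label(5))   # n=5 (small sample)
print(sample_size_label(2))   # Founder experience
```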
Update cadence (the honest version)
We update benchmarks when one of three things happens:
- A meaningful number of new plans have been added to our dataset for a given cell (at minimum, a 20% change in underlying sample).
- A relevant third-party report is published that shifts our view (for example, a new Bridge Group report).
- We observe a market inflection that we judge important enough to reflect (for example, a compression in SDR base salaries in a down funding environment).
In practice this means most cells update every few quarters. A "Last updated" stamp is shown per cell. We do not claim daily or continuous updates because our sources do not update that way.
How geography is handled
Geography is applied as a multiplier on top of the base cell:
- Remote or secondary metro: 1.0x (the base figure assumes this).
- Major metro (Boston, Seattle, Austin, Chicago, Los Angeles, Denver, Washington DC): 1.10x.
- SF Bay Area or NYC: 1.25x.
These multipliers are blunt. They apply uniformly across roles and stages, which is a simplification. Real compensation premia in high-cost metros vary by role (senior enterprise AEs typically command a larger geo premium than entry-level SDRs). We do not currently model this. If we add role-specific geography later, we will flag it in the change log.
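Because geography sits on top of the base cell, the adjustment is a single multiplication. A sketch using the multipliers from this page (dictionary keys and function name are illustrative):

```python
# Multipliers as published above; the base cell assumes remote/secondary metro.
GEO_MULTIPLIERS = {
    "remote_or_secondary": 1.00,
    "major_metro": 1.10,   # Boston, Seattle, Austin, Chicago, LA, Denver, DC
    "sf_bay_or_nyc": 1.25,
}

def geo_adjusted_ote(base_ote: float, geo: str) -> float:
    """Apply the blunt, role-uniform geography multiplier to a base-cell OTE."""
    return base_ote * GEO_MULTIPLIERS[geo]

print(geo_adjusted_ote(160_000, "sf_bay_or_nyc"))  # 200000.0
```

The placeholder OTE of $160K is illustrative, not a published benchmark figure.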
What our benchmarks do not cover
We are honest about gaps:
- Outside the US: our data is US-centric. We do not publish international benchmarks.
- Non-SaaS sales: benchmarks assume a SaaS revenue model. Marketplace comp, hardware, services, usage-based pricing, and open-source commercial roles are not covered.
- Late stage (public and near-public): our Series B/C+ cell is the top end of our coverage. Public-company comp is out of scope.
- Leadership roles: no VP of Sales, CRO, RVP, or management OTE benchmarks yet.
- Vertical-specific variation: SaaS to SMB retail comps differently than SaaS to healthcare or financial services. We do not filter by vertical today.
- Ramp schedules and attainment distributions: we publish rules of thumb in the plan generator but not per-cell benchmark pages. We are working on this.
Why we use ranges, not percentiles
A statistical percentile breakdown (P25 / P50 / P75) implies a dataset with hundreds of observations per cell, enough to compute real quartiles. We are not there yet. Using that language would overstate our precision and imply a rigor we do not have.
Instead we publish a range: a low, a typical, and a high. These reflect where we see real plans land, with the typical figure being a founder-informed center of gravity for the cell. If and when our dataset grows enough to compute real percentiles with defensible confidence, we will switch the language and say so on this page.
How benchmarks feed the plan generator
Benchmarks are the evidence base behind CompFrame's plan generator. When you build a plan, every number we recommend can be traced back to a specific benchmark cell and the sample size behind it. If a cell is labeled Founder experience, the plan generator marks the number as "founder rule of thumb" in the plan output so you know how much to weight it.
The loop also runs in reverse. Every time a founder builds and saves a plan in CompFrame, the anonymized numbers feed back into the benchmarks dataset for future updates. You can opt out of this in your account settings.
Known limitations and biases
A few biases we want to be explicit about:
- Seed and Series A weighting. Our dataset is heavier on early-stage plans than on mature Series C+ plans. Our Series B/C+ cells rely more on founder experience than on observed sample.
- SaaS bias. All of our observed plans are from SaaS companies. If you are building a sales team at a non-SaaS company, treat every number with more skepticism.
- Survivor bias. The plans we see tend to come from companies that are growing enough to be hiring sales reps. Comp at companies that flamed out is underrepresented.
- US hiring market. All cells assume US hires in the US labor market. International compensation is materially different and we do not model it.
Last reviewed: April 22, 2026. Questions or disagreements about specific cells? Email hello@compframe.com and we'll either update the cell or explain why we disagree.