The programme had a strong reputation. Four cohorts, three years, fifty-eight companies. The demo day deck reported EUR 23 million raised by alumni in the twelve months post-graduation, forty-three media mentions across the most recent cohort, and a sponsor satisfaction score that had improved for the third consecutive year. The programme director shared these numbers with funders and at ecosystem conferences. They were the numbers everyone expected to see.
What the programme director had not looked at closely was what had happened to the companies eighteen months after graduating. A researcher working on a regional ecosystem evaluation matched the alumni list against Companies House equivalents in four EU jurisdictions. Of the forty-one companies that had graduated across the first three cohorts, fourteen had ceased trading or become dormant. Eleven more had revenues below the level at which they had entered the programme. Twelve were operational but had not raised a subsequent institutional round. Four had raised a seed or Series A.
The funding raised by alumni in the first twelve months post-graduation was real. Much of it had come from demo day investor connections made during the programme. Much of it had also been consumed by companies that then stalled, because raising a small round off the back of a demo day pitch is a different thing from building a company that grows after the capital is deployed.
The programme had optimised for what it measured. It measured activity. It had not measured whether the activity produced durable company outcomes. It had produced, at significant cost to its funders, a pipeline of startups that had raised money and then, in many cases, not built much with it.
Most accelerator impact metrics measure activity rather than value. Demo day attendance, cohort size, total funding raised by alumni, and press mentions are reported as outcomes. They are not outcomes. They are indicators of activity levels. What they measure is the programme’s ability to generate visible events, not the programme’s ability to produce companies that are better for having participated.
Why Activity Metrics Dominate, a Three-Level Impact Framework, and a Lean Measurement Protocol
Why activity metrics dominate
The institutional pressures that produce activity-focused measurement are structural, not a consequence of bad faith.
Funder reporting requirements create the first pressure. Most public and corporate funders of accelerator programmes require output reporting at the end of a grant cycle. Outputs are defined as countable things that happened during the funded period: number of founders supported, workshops delivered, investors introduced, capital facilitated. Outcomes, which require longitudinal tracking, are more expensive to collect and often extend beyond the funded period. The reporting framework rewards what is easy to count within the grant window.
Press and ecosystem visibility create the second pressure. Demo day events, cohort announcements, and funding milestone press releases are the primary mechanism through which an accelerator builds its brand with potential applicants and corporate sponsors. A programme whose coverage features a company that raised EUR 3 million on demo day is more visible than one whose coverage features a company reaching EUR 500,000 in revenue eighteen months after graduation. The funding event is a story. The operational outcome is a data point.
Corporate and government sponsor relationships create the third pressure. Sponsors evaluate programmes partly on the quality of the companies they see at demo days, partly on whether the programme’s visible success supports their own communication goals, and partly on whether the reporting they receive looks like the reporting they are used to seeing from comparable programmes. A programme that reports activity metrics is easy to benchmark against other programmes. A programme that reports eighteen-month company survival rates is harder to benchmark, and harder to present to a communications team.
Research by Wharton professors Assenova and Amit, examining data from 8,580 companies across 408 accelerators in 176 countries, found that accelerated startups grew more on average than their peers, but that the extent of these benefits varied significantly depending on programme design. The research finding that gets quoted is the positive headline. The research finding that gets less attention is that programme design, specifically the depth versus breadth of knowledge transfer and the presence or absence of structured educational content, determined whether the positive effect materialised. Activity metrics do not capture programme design quality.
A three-level impact measurement framework
Genuine accelerator impact measurement requires tracking at three levels, each with different data collection requirements and time horizons.
| Level | What It Measures | Specific Indicators | Collection Method | Time Horizon |
| --- | --- | --- | --- | --- |
| Level 1: Founder development | Change in founder capability, network quality, and decision quality attributable to programme participation | Pre/post self-assessment on five founder competencies (sales, hiring, investor communication, product prioritisation, financial management); network density change measured by warm introductions made; decision quality measured by 90-day plan accuracy | Pre/post survey; structured 90-day review with each founder at programme end | 0 to 6 months post-programme |
| Level 2: Company outcomes | Operational metrics at 6, 12, and 24 months post-programme | Revenue or ARR at 6, 12, and 24 months post-graduation (not just at demo day); retention of at least one full-time employee at 12 months; survival rate at 24 months; comparison of post-programme growth rate against matched cohort of non-accelerated peers | Longitudinal alumni survey at 6-, 12-, and 24-month intervals; Companies House or equivalent registry checks for survival; voluntary metric disclosure by alumni | 6 to 24 months post-programme |
| Level 3: Ecosystem contribution | The programme’s contribution to the broader ecosystem, beyond the outcomes of individual companies | Follow-on funding from investors introduced through the programme (tracked separately from demo day capital); knowledge transfer to the mentor network (measured by mentor re-engagement rate and mentor-reported learning); alumni who become mentors or co-investors in subsequent cohorts; programme-attributable introductions that produced commercial outcomes | Annual alumni survey; mentor feedback survey; cap table review for programme-attributed investment | 12 months onwards, ongoing |
The most important single indicator in this framework, and the one most absent from current practice, is the company outcome rate at twenty-four months. Analysis of exit rates across more than 3,000 accelerator programmes found that smaller, more selective programmes often deliver stronger results than larger, better-known ones; the same analysis cautioned that exit rates alone are an incomplete lens, and that founder satisfaction, valuation growth, and long-term outcomes would give a fuller picture. The exit rate is one outcome dimension. Revenue trajectory, survival, and team stability are equally important dimensions for programmes that are not primarily venture-focused.
The Level 1 and Level 3 measurements matter because they reveal whether a programme is producing durable change or event-based activity. A programme that produces strong Level 2 outcomes but weak Level 1 outcomes is likely benefiting from a strong selection effect: it is finding good founders rather than developing them. A programme that produces weak Level 2 outcomes but strong Level 3 outcomes is contributing to ecosystem infrastructure even if its direct company outcomes are modest. Understanding which of these applies to a specific programme changes what its funders and directors should invest in.
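To make the Level 2 tracking concrete, here is a minimal sketch of how a small team might collate the twenty-four-month indicators. It assumes a hypothetical alumni record; the field and function names are illustrative, not a prescribed schema, and the registry flag stands in for a Companies House (or equivalent) lookup.

```python
# Minimal sketch of Level 2 collation. AlumniRecord fields are
# illustrative assumptions, not a prescribed schema.
from dataclasses import dataclass
from typing import Optional


@dataclass
class AlumniRecord:
    company: str
    monthly_revenue_at_entry: float          # from the intake form
    monthly_revenue_at_24m: Optional[float]  # None if no survey response
    trading_at_24m: bool                     # from a public registry check


def level2_summary(cohort: list[AlumniRecord]) -> dict:
    """Survival rate and revenue trajectory at twenty-four months."""
    survivors = [r for r in cohort if r.trading_at_24m]
    reporting = [r for r in survivors if r.monthly_revenue_at_24m is not None]
    above_entry = [r for r in reporting
                   if r.monthly_revenue_at_24m > r.monthly_revenue_at_entry]
    return {
        "cohort_size": len(cohort),
        "survival_rate_24m": len(survivors) / len(cohort),
        "share_reporting_revenue": len(reporting) / len(cohort),
        "share_above_entry_revenue": (
            len(above_entry) / len(reporting) if reporting else None
        ),
    }
```

The registry lookup is the one step that needs no founder cooperation at all, which makes survival the cheapest of the Level 2 indicators to collect.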
Lean measurement protocol for a team of two to four
The most common objection to impact measurement from programme directors is bandwidth: a team of two to four people managing a cohort of fifteen companies cannot also run a research function. The constraint is real, but the conclusion drawn from it does not follow, because the protocol does not require a research function. It requires three embedded practices.
The first is a standardised intake and exit interview. Every company entering the programme completes a ten-minute intake form covering five metrics: current monthly revenue (or users if pre-revenue), team size, number of paying customers, prior funding raised, and the founder’s self-assessed top capability gap. Every company exiting the programme completes the same form, plus a six-question programme evaluation. The data takes thirty minutes to collect per company and creates a longitudinal baseline at no additional cost.
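As a sketch of how little structure the intake and exit forms need, the five metrics fit in a single record that is captured twice, once at intake and once at exit. The field names below are assumptions for illustration.

```python
# Sketch of the intake/exit baseline record described above.
# Field names are illustrative, not a prescribed schema.
from dataclasses import dataclass


@dataclass
class BaselineRecord:
    company: str
    stage: str                    # "intake" or "exit"
    monthly_revenue_eur: float    # or active users, if pre-revenue
    team_size: int
    paying_customers: int
    prior_funding_eur: float
    top_capability_gap: str       # founder's self-assessed gap


def programme_delta(intake: BaselineRecord, exit_: BaselineRecord) -> dict:
    """Change over the programme itself: the longitudinal baseline."""
    return {
        "revenue_change_eur": exit_.monthly_revenue_eur - intake.monthly_revenue_eur,
        "team_size_change": exit_.team_size - intake.team_size,
        "paying_customers_change": exit_.paying_customers - intake.paying_customers,
    }
```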
The second is an alumni check-in at six and twelve months post-graduation. A short email survey at each interval asks for five data points: current monthly revenue, headcount, most recent funding (if any), whether the company is still actively operating, and one thing they wish the programme had done differently. Response rates of 60 to 70 per cent are achievable with a warm, brief survey sent by a named programme staff member rather than an automated platform. The responses take two hours per cohort to collate into a summary.
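Collating the responses is a similarly small task. A sketch, assuming each response arrives as a dict keyed by the five questions; the key names are illustrative.

```python
# Sketch of the per-cohort check-in collation; response keys are assumptions.
from statistics import median


def collate_checkin(responses: list[dict]) -> dict:
    operating = [r for r in responses if r.get("still_operating")]
    revenues = [r["monthly_revenue_eur"] for r in operating
                if r.get("monthly_revenue_eur") is not None]
    return {
        "responses_received": len(responses),
        "still_operating": len(operating),
        "median_monthly_revenue_eur": median(revenues) if revenues else None,
        "raised_since_graduation": sum(
            1 for r in operating if r.get("recent_funding_eur")
        ),
        # Free-text answers to "one thing the programme should have done differently"
        "programme_feedback": [r["wish_different"] for r in responses
                               if r.get("wish_different")],
    }
```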
The third is a funder-facing outcome report that separates activity metrics from outcome metrics. The activity section reports what happened during the programme. The outcome section reports what happened to the companies afterwards, with twenty-four-month tracking. Research on accelerator effectiveness consistently finds that structured educational content is particularly beneficial for first-time founders, and that the depth of knowledge transfer within a cohort tends to produce higher revenue growth, while breadth of connections tends to produce higher funding rates. A programme that can show its funders which design elements drove which outcomes is in a substantially stronger position at renewal than one that can show only that events occurred.
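The separation can be enforced mechanically rather than left to editorial discipline. A sketch of a report builder that keeps the two sections apart; all metric names and values below are placeholders, not reported results.

```python
# Sketch of a funder report that separates activity from outcome metrics.
def funder_report(activity: dict, outcomes: dict) -> str:
    lines = ["ACTIVITY (what happened during the programme)"]
    lines += [f"  {name}: {value}" for name, value in activity.items()]
    lines += ["", "OUTCOMES (what happened to the companies afterwards)"]
    lines += [f"  {name}: {value}" for name, value in outcomes.items()]
    return "\n".join(lines)


# Placeholder values for illustration only.
print(funder_report(
    activity={"workshops_delivered": 24, "investor_introductions": 61},
    outcomes={"survival_rate_24m": "0.68", "median_revenue_growth_12m": "1.4x"},
))
```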
The Implication
Better impact measurement changes what programmes prioritise, not just how they report.
A programme that measures twenty-four-month company survival discovers that the founders who survive are the ones who left the programme with one resolved strategic question, not the ones who left with the most investor introductions. This changes curriculum design. A programme that measures Level 1 founder development discovers that the mentors whose companies produce the strongest outcomes are the ones who challenge founders’ assumptions rather than validate their pitch. This changes mentor selection. A programme that measures Level 3 ecosystem contribution discovers that its most durable impact is not the companies it produces directly but the investor-founder relationships it facilitates that compound over multiple cohorts. This changes how the programme structures its alumni network.
Activity metrics are not wrong to collect. They are wrong to report as evidence of value. The programmes that will build genuine ecosystem credibility over the next decade are the ones that can answer, specifically and honestly: of the founders who went through this programme, what was true of their companies twenty-four months later that would not have been true without it? That question requires data that most programmes are not currently collecting. Starting to collect it now is the structural decision that separates programmes with a measurement strategy from those with a reporting strategy.
