
The Problem With Judging Accelerator Success By The Numbers


This article is part of a series on the Springboard process.

For an organization like Springboard that runs accelerator programs, metrics are supposed to be an important, objective measure of success.

We recently applied to the SBA's Growth Accelerator Fund and to TechCrunch Include, and both applications asked how we measure our impact and track our success.

The problem is that impact is difficult to measure, because it is often impossible to tell whether a portfolio company's growth was a direct result of the program or would have happened anyway.

As far as the metrics are concerned, an accelerator that sources great companies but runs a shitty program could post the same results as an accelerator that sources mediocre companies but turns them into unicorns.

I'd love to know how others track their metrics and if there is a better way to demonstrate, quantitatively, that your accelerator is having a measurable impact on the participants.

How We Track Our Statistics

We have tracked metrics at Springboard since day one and post them publicly. Many other accelerators do the same: Techstars, Unreasonable Institute, and Startupbootcamp, to name a few.

We collect our statistics by keeping in touch with our alumnae and by tracking them on Google Alerts, Mattermark, and even FormDs.com. We add this data to a massive Excel spreadsheet called "Presenter Tracking" that crunches all the numbers for us and rolls them up by class (year) and in total (a rough sketch of that rollup appears after the list below).

We closely follow things like:

  • Total capital raised
  • Capital raised after participating in program
  • # IPOs
  • # mergers and acquisitions
  • # companies that close
  • % companies still in business
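
To make that rollup concrete, here is a minimal sketch of what the "Presenter Tracking" spreadsheet computes, translated into Python with pandas. The column names, status labels, and figures are hypothetical stand-ins for illustration, not our actual data or formulas.

```python
# A minimal sketch of the per-class rollup our "Presenter Tracking"
# spreadsheet performs. All names and numbers below are made up.
import pandas as pd

companies = pd.DataFrame([
    {"class_year": 2013, "total_raised_mm": 12.0, "raised_after_mm": 9.5,  "status": "operating"},
    {"class_year": 2013, "total_raised_mm": 3.2,  "raised_after_mm": 1.1,  "status": "acquired"},
    {"class_year": 2014, "total_raised_mm": 0.8,  "raised_after_mm": 0.4,  "status": "closed"},
    {"class_year": 2014, "total_raised_mm": 45.0, "raised_after_mm": 40.0, "status": "ipo"},
])

# Indicator columns turn the per-class counts into simple sums.
companies["ipo"] = companies["status"] == "ipo"
companies["acquired"] = companies["status"] == "acquired"
companies["closed"] = companies["status"] == "closed"

by_class = companies.groupby("class_year").agg(
    total_capital_raised_mm=("total_raised_mm", "sum"),
    raised_after_program_mm=("raised_after_mm", "sum"),  # capital raised after the program
    ipos=("ipo", "sum"),
    mergers_and_acquisitions=("acquired", "sum"),
    closures=("closed", "sum"),
    pct_still_in_business=("closed", lambda c: 100 * (1 - c.mean())),
)
print(by_class)

# Program-wide totals: sums roll up directly, but the survival
# percentage has to be recomputed, not summed across classes.
print("Total raised ($M):", companies["total_raised_mm"].sum())
print("% still in business:", 100 * (1 - companies["closed"].mean()))
```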

Our track record is above average, which means that we either pick good companies, help mediocre companies become good ones, or both. So how do we tell which scenario describes us?

We look at the qualitative feedback we get from our alumnae and celebrate small wins. We don't track these formally (doing so would be time-intensive and hard to standardize), but we do circulate them with our team and board. They include things like:

  • A coach we matched with a company who ends up formally joining its board or advisory board after the program
  • Bringing companies to private pitch sessions with senior-level officials at major corporations, and learning several months later that one of those officials launched a sizable pilot with the company
  • Service providers (legal, accounting, insurance) that demonstrate their expertise as coaches or speakers during our program and are retained by our companies afterward
  • Entrepreneurs who tell us that going through the Springboard process forever changed the way they speak publicly and think (bigger) about their company
  • Entrepreneurs who surprise us by blogging about their experience

This is how we know we have impact, and it is why we have made changes to our program, like cutting the average size of a Springboard class from 35 companies to 10, in order to maximize the number of small wins we can create for each of our entrepreneurs.

We gauge success by the quality of the impact we have on the entrepreneurs who go through our program. We just need a better way to measure it.

Because if we can measure it, then we can manage it... better.