Custora Blog

# How I Predicted Pebble Backers with 98% Accuracy

The following is a guest post by R.M. Bryce, winner of the Pebble competition. On April 28th, he predicted that Pebble would raise \$10,071,981.47 and attract 68,157 backers. His numbers were within 2% of the final totals when fundraising ended on May 17th.

The challenge was to predict the total number of backers for the Pebble e-paper watch on kickstarter.com.

We can speculate all we want about how backers discover a given project, or what qualities of a project are most likely to capture someone's interest, and so on, but most systems are simply too rich to successfully model the underlying factors that drive an observable. However, we can take various plausible fits to the current data, make a few predictions, and, if multiple fits are performed, average those predictions—this is exactly what the Custora team did.
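As a rough illustration of this "fit and average" idea (a sketch with made-up numbers, not the Custora team's actual code), one can fit a few plausible growth curves to the observed cumulative backer counts and average their forecasts:

```python
import numpy as np

# Hypothetical cumulative backer counts, one per day (made-up numbers).
days = np.arange(1, 11)
backers = np.array([8000, 20000, 27000, 32000, 36000,
                    39000, 41500, 43500, 45000, 46200])

# Model 1: a linear trend in the raw counts.
slope, intercept = np.polyfit(days, backers, 1)

# Model 2: a power law y = c * t**p, fit as a line in log-log space.
p, log_c = np.polyfit(np.log(days), np.log(backers), 1)

horizon = 30  # the day we want to forecast
forecast_linear = slope * horizon + intercept
forecast_power = np.exp(log_c) * horizon ** p

# Average the individual forecasts into a single prediction.
ensemble = (forecast_linear + forecast_power) / 2
print(round(forecast_linear), round(forecast_power), round(ensemble))
```

The averaging step hedges against any single functional form being badly wrong; with more candidate models, a weighted average (by goodness of fit) is a natural refinement.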

It shouldn't be a surprise that most contestants also adopted this "fit and forecast" approach, which means guesses will cluster in the same region of the solution space. Even if the true outcome lies in that region, any individual guess, however good, is unlikely to be the best of the cluster. More importantly, the rate of new backers was decaying at the time of the contest deadline, so such fits will probably underestimate the final total. Looking at the data revealed that backers tended to come in large waves, so it seemed most plausible that new, unpredictable waves would occur and dominate the total number of new backers.

What do we do, then? The most principled approach places us in heavy competition, and its forecast will most likely undershoot the true outcome. At the time I submitted my forecast, the number of daily backers was either decaying or had reached a plateau, so I decided to constrain the fit and forecast to a flat trailing edge: a constant backer rate held from the present through the deadline. Such an approach had two advantages. First, it biased the forecast upward. Second, it was consistent with the hypothesis that backers come in two flavors: seekers, who are informed via others and arrive in waves, and stumblers, who find projects by happenstance at a roughly steady rate that could sustain a flat trailing edge. Such a bet came at low cost, stayed fairly close to the most-principled best guess while drawing away from the other predictions, and remained more robust to the (very) likely occurrence of new waves of backers.
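The flat-trailing-edge forecast itself is simple enough to sketch (again with hypothetical numbers; the totals and dates here are illustrative, not the actual campaign data): take the recent average daily rate, assume it stops decaying, and extrapolate that constant rate to the deadline.

```python
import numpy as np

# Hypothetical daily new-backer counts for the last week (made-up numbers).
recent_daily = np.array([950, 900, 870, 860, 850, 845, 840])

current_total = 55_000   # hypothetical backers to date
days_remaining = 14      # hypothetical days left in the campaign

# Flat trailing edge: assume the decaying rate has bottomed out at the
# recent average, then hold that constant rate through the deadline.
plateau_rate = recent_daily.mean()
forecast = current_total + plateau_rate * days_remaining
print(round(forecast))  # → 67230
```

Because the recent average sits above where a decaying fit would drift, this constant-rate extrapolation carries exactly the upward bias described above.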

In the end, my guess (68,157 backers) was within 2% of the final number of backers. While I believe the approach was reasonable, I'm under no illusions. For example, it should be noted that supply was limited and the watches sold out long before the end of the buy-in period; rather than letting uptake stop, Pebble created a \$1 pledge option. This change illustrates the dynamic and unpredictable nature of almost any social system we would be interested in. Even though the project was on track to be the largest Kickstarter ever, I did not consider this (in hindsight obvious) possibility of scarcity; had I accounted for a sell-out, I would have underestimated the total. Furthermore, my speculation that the number of backers had plateaued was simply wrong.

In essence, a prediction is a bet, often made in a game where the rules are unknown and subject to change. One should bet only when the costs are in proportion to the reward, and when one would not regret the means taken toward uncertain ends. Here, my costs were effectively negative: the problem was interesting and fun to consider, and even small projects are opportunities to teach and improve. The upside was positive. Why not make a bet?

I'm looking forward to my new Pebble, and having a tasty pint with the Custora team the next time I'm in Brooklyn.
