Hello my fellow data addicts. Today I wanted to touch on the progress of the experimental projection cone chart. As some of you may have seen a few weeks ago, I added a tabbed interface to the charts so the experimental charts were a little less top-secret. So, now that more people are seeing the projection chart, I’ve gotten more and more questions about how it works & performs.
Let me be clear: though the projections are an exciting part of what I do on Kicktraq, you have to understand that 100% accurate projections are highly unlikely unless I make them so loose that you could drive a Mack truck through the range. Even with a mountain of data, projections are completely best-guess. There are so many factors that can drastically change pledge movement; I cover some of these in “When Projects Gets Noticed”, and I’ll cover a few additional examples of this phenomenon and how it applies to projections at the end.

Because of this, it’s best to look at a projection as a sort of windsock for the project. It can tell you which direction the wind is blowing, but some external change can suddenly turn the windsock in a completely different direction. As much as I’d love for projections to be a crystal ball (or a DeLorean, that’d be sweet), it just isn’t realistic to assume they ever will be.
Adjusting Projections
The projection charts are a continuous improvement process; the more projects I have, the more data I’m able to feed back in to adjust their accuracy. Backer activity and tastes change over time, economic factors change pledging behavior, the creativity of projects ebbs and flows, and concentrations of popular projects amplify one another. These factors, along with many others, provide unique opportunities to sample this behavior and continue that improvement.
What I normally do is check the weighting for each category every few weeks and make adjustments as I gather more data. Unfortunately, with a considerable amount of additional maintenance and updates lately, I haven’t had a chance to do much adjusting. So instead of checking purely by the numbers, I decided to write a more visual report on the status of the last round of adjustments, and to share it with you for the first time.
A word of note: these are demonstration charts I whipped up late last night on a whim, so they have a few rendering issues I noticed after I ran the test batches. I wanted to share them anyway, so just keep that in mind. They say “projection cone,” but they aren’t the same as the existing projection cone chart; if folks are interested, I’ve considered swapping this new chart in for the one that appears once a project closes, after I get it cleaned up a bit. Lastly, when I speak of accuracy percentage, I’m calculating accuracy as the number of days the projection range contained the final total divided by the total number of days the projection covered. So, 10 days within range over a total of 20 days = 50% accuracy.
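To make that math concrete, here’s a minimal sketch of the calculation. The daily projection bounds and the final total below are made-up numbers purely for illustration, not real project data:

```python
def projection_accuracy(daily_bounds, final_total):
    """Percent of days whose projected (low, high) range contained the final total."""
    days_in_range = sum(1 for low, high in daily_bounds if low <= final_total <= high)
    return 100.0 * days_in_range / len(daily_bounds)

# Example: 10 of 20 days bracket a final total of $1,000 -> 50% accuracy
bounds = [(900, 1100)] * 10 + [(1200, 1500)] * 10
print(projection_accuracy(bounds, final_total=1000))  # 50.0
```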
I picked a few closed projects from different categories and time periods so there would be a variety of projects to share and test against. So, let’s take a look:
Sentinels of the Multiverse – Infernal Relics: 75% accuracy over 24 days
The first thing you’ll notice as we go through these is that each chart shows the final total as a blue dashed line and the goal as a thicker grey dashed line. Also, looking at the projection, the first couple of days on most projects continue to run high even with a weighted adjustment that tries to counteract initial surges. I’m also testing the next set of adjustments, which, when applied, are illustrated in the upper right. The last set of adjustments I made were somewhat conservative, and you’ll see that reflected as we go along. Most projection ranges are lower than the goal because I’ve been hesitant to factor in the surge at the end of most projects. I feel safer under-projecting than over-projecting; it seems more reasonable for project backers and owners to be surprised when their project over-performs than disappointed when it under-performs.
As you can see in the graph above, this results in projections that tend to hug at or below the actual final total. The new test adjustments, however, loosen the overall ranges and factor in a slight bump based on the average of the last 3 days of successful projects, in addition to some adjustment to the category-specific weighting. In this example, applying the adjustment changes the accuracy from 75% to 96%. Not bad; let’s keep going.
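For the curious, here’s a rough sketch of the general idea of folding a last-3-days bump and a looser range into a projection. The function, the category weight, and the bump fraction are purely illustrative; this isn’t the actual formula or values used on the site:

```python
def adjusted_range(projected_total, category_weight, last3_bump, loosen=0.15):
    """Return a hypothetical (low, high) projection range.

    projected_total -- raw projected final total from the daily trend
    category_weight -- per-category multiplier (illustrative only)
    last3_bump      -- average share of funding earned in the final 3 days
                       by previously successful projects in the category
    loosen          -- how much wider to make the overall cone
    """
    center = projected_total * category_weight * (1 + last3_bump)
    return (center * (1 - loosen), center * (1 + loosen))

# Made-up example: the trend says $40,000, the category weight is 1.05,
# and successful projects in that category earn ~12% of their total
# in the last 3 days.
low, high = adjusted_range(40_000, 1.05, 0.12)
print(round(low), round(high))  # 39984 54096
```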
Ace of Spies: 76% accuracy over 25 days
Again, the beginning spike is pretty high, and applying the test adjustment we jump up to 92%. So far, so good.
Make Leisure Suit Larry come again!: 76% accuracy over 25 days
Similar to before: 76% jumps to 92% with the adjustment. Let’s move on to food.
Liquid Styx: 85% accuracy over 26 days
Same here, but overall not bad. The adjustment unfortunately doesn’t change much; the weighting in food appears less volatile than in some of the other categories, so it’s doing a pretty good job already. Let’s try a recently closed comic.
Nothing Is Forgotten: 88% accuracy over 25 days
Comics are also very similar, with the existing weighting doing a good job; we bump up to 96% with the adjustment. Next, documentaries.
BronyCon – The Documentary: 70% accuracy over 23 days
This campaign was considerably more volatile, but had a lull that threw off the projection for a few days. Even so, 70% isn’t too bad considering all the movement. Let’s look at the first million dollar music project.
Amanda Palmer: 41% accuracy over 27 days
Here’s where the conservative weighting got us into a little trouble. At 41%, as you can see, the projection sat under the actual total for most of the project. The bump over the last few days pushed everything even further under, but then a million-dollar music project is unlike anything seen before. With the adjustment, notice the percentage jumps to a whopping 85%.
Grim Dawn: 37% accuracy over 27 days
Again, the projection gets squished down from a huge last day, but if we apply the adjustment, it jumps to an impressive 93%. Quite a change from 37%.
The Unpredictable
So far, the adjustment seems to do pretty well at rounding out the estimations, but let me show you why all these fancy adjustments matter little to some projects.
Disaster Looms: 23% accuracy over 47 days
Disaster Looms had one heck of a last day. Not only did they have a big rush at the end, but a last-minute mention by Penny Arcade shot the final day into the stratosphere. No projection can account for something like this on the last day. Even with the adjustment, it only adds a couple more days in range, which bumps it up to 34%.
Republique: 7% accuracy over 27 days
Ouch, only 7% – but this is another project that had a massive surge over the last couple of weeks after announcing support for additional platforms, and a huge finish in the last few days after mentions on a slew of news outlets and backers doing everything they could to “Keep Hope Alive”. They did more in the last 4 days than in the first 25 days of the project.
As you can see, the project had a really rough start, and what they were able to accomplish in the last couple of weeks was very impressive. But again, you can’t begin to calculate for situations like this. Even with the adjustment, it only adds a couple more days in range and bumps it to 22%.
Hybrid Vigor: 38% accuracy over 26 days
Hybrid Vigor is another project with a bumpy road and quite a finish the last week. The adjustment bumps us up to 48%.
Sedition Wars: 13% accuracy over 32 days
Another project with a huge jump during the last week. CoolMiniOrNot know how to get all their backers to bump their pledges at the end as the amazing stretch rewards get unlocked, and it shows with each of their campaigns. Again, in a similar situation to the two above, the projection is pushed way under, and the adjustment only bumps it up to 28%.
Legend of the Lost Dutchman: 16% accuracy over 55 days
Look at that bubble and big jump towards the finish. This struggling project had an angel investor who came in 10 days before the end and brought momentum back to the project. A large increase in pledges, even for a single day, can render an entire projection, even one as consistent as this one, completely moot. Again, the adjustment only adds a handful of days and bumps it up to 29%.
Lastly, how could I not include Pebble?
Pebble: 15% accuracy over 33 days
The opposite of a big jump is when a wildly popular campaign like Pebble decides to cap rewards 10 days before the campaign ends, which completely stalls out any further growth. What’s really crazy is that even with no product left to sell, they were still making thousands of dollars a day in pledges. Imagine where they might have ended up had they not capped it!
Also notice the huge funnel at the beginning, where they earned nearly $4 million in the first 7 days and bumped that to $6 million only 5 days later. As the largest Kickstarter of all time, it’s such an anomaly that what works for the other projects sets the acceleration at the beginning of the project to a whopping $20 million gap.
Conclusion
So, that’s the state of things with the projections. I’ll probably do some additional testing to make sure the category weighting adjustments are within acceptable ranges and no specific projects throw anything off. I’ll look into more aggressive means to quell the initial couple days of spikes, but I’ll have to do some testing as I don’t want to inadvertently punish poorly-performing projects.
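As a rough illustration of one way that could work (purely a sketch of the idea, not anything in use on the site), the first few days could simply be down-weighted when computing the daily pledge rate:

```python
def day_weights(num_days, damp_days=3):
    """Give the first few days reduced weight, ramping up to 1.0."""
    return [min(1.0, (d + 1) / (damp_days + 1)) for d in range(num_days)]

def damped_daily_rate(daily_pledges, damp_days=3):
    """Weighted average pledges per day, counting the opening spike less."""
    weights = day_weights(len(daily_pledges), damp_days)
    return sum(p * w for p, w in zip(daily_pledges, weights)) / sum(weights)

# Made-up example: a big day-one spike followed by a steadier pace.
daily = [9000, 2500, 1800, 1500, 1400, 1600, 1500]
print(round(sum(daily) / len(daily)))   # ~2757, inflated by day one
print(round(damped_daily_rate(daily)))  # ~1973, closer to the steady pace
```

The challenge, as mentioned above, would be tuning a ramp like that so projects without a big opening spike aren’t skewed unfairly.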
I’ve included a few more samples of some projects. Enjoy!