I wouldn’t claim that I strongly care about these things, or that I even watched the show, but this morning it emerged that the German contestant had won the 2010 Grand Prix Eurovision de la Chanson – or Eurovision Song Contest, as today’s brutes prefer calling it.
Before the event, Google came up with the fun idea of predicting the outcome by doing the Google thing and playing some arithmetic games on search queries. They did predict the winner correctly, but one wonders whether that was sheer luck. Google’s ability to predict results would certainly be much more trustworthy if they also managed to get the results of the top ten contestants right.
Unfortunately, comparing the results to the prediction suggests things aren’t that easy. On the one hand, Google’s model for distributing points seems to have used 35 contestants while the show on telly only had 25, so their total scores end up a bit too high and need to be corrected in some way. It’s hard to find a good way to do that correction. The obvious multiplication by 25/35 = 5/7 does improve things, and I used it for my graph, but it’s not quite right. As Google don’t provide the points given by each country, I don’t see how I can do better, unfortunately. So here comes my graph of how far the points and final ranks differ between Google’s prediction and the actual result:
Even with that pseudo-correction the differences remain quite significant, suggesting that good analysis and prediction remain hard. The point scores Google predicted are on average 45 points off the actual results.
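For the curious, the pseudo-correction and the average deviation can be sketched in a few lines of Python. The scores below are purely made-up placeholders to show the arithmetic, not the real 2010 data:

```python
def corrected_difference(predicted, actual, scale=25/35):
    """Scale predicted scores (computed for 35 contestants) down to the
    25-contestant final and return the mean absolute difference from
    the actual scores."""
    diffs = [abs(p * scale - a) for p, a in zip(predicted, actual)]
    return sum(diffs) / len(diffs)

# Hypothetical example values, for illustration only:
predicted = [350, 280, 210]
actual = [246, 170, 150]
print(corrected_difference(predicted, actual))
```

With the real per-country points one could do better than this flat scaling, but Google only published the totals.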
My apologies for the inadequate graphing with its mixture of bars and curves. It seems to be the best I could get out of Numbers when I wanted two distinct scales.