There are so many ‘opportunities’ to rate things these days that it’s impossible to list them all. Most applications or web sites which offer ratings let you assign ‘stars’ to whatever you rate. And I find that concept quite difficult. Not only do scales vary (out of five, out of six, out of ten…) but different people interpret these ratings differently as well.
In my music and photo libraries, for example, I will not ‘rate’ every item, if only because there are so many of them. So the simplest thing I do is to ‘rate’ an item with a single star if I want to remember that item. It’s just a convenient way of finding it. I may use more stars if I want to remember the item and actually think it’s quite good. And I use four or five stars to indicate it’s great or insanely brilliant.
This, however, differs from how other people use ratings. Some people, for example, seem to think that a ‘normal’ item should get the ‘middle’ rating and items in the low range are bad while items in the high range are good. This seems to be particularly common when a scale from one to ten (or zero to ten?) is used. Things which people consider to be all right will be rated seven. That’s 70%, which in some places is an ‘A’ grade.
I wonder how much information such rating systems really give, and what they can tell you about the rated items. Particularly when the people who rate things are self-selecting: the data are probably biased towards people who really like the item (and everything seems to be absolutely loved by someone), while people who think it’s rubbish may not have bothered to give their rating at all.
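This self-selection effect is easy to see in a small simulation. The numbers below are entirely made up for illustration: suppose everyone forms an opinion from one to five stars, but the chance of actually bothering to submit a rating grows with enthusiasm. The submitted ratings then average noticeably higher than the population’s true opinion.

```python
import random

random.seed(42)

# Hypothetical population: true opinions are uniform over 1..5 stars.
opinions = [random.randint(1, 5) for _ in range(100_000)]

# Assumed (made-up) response model: someone who'd give 5 stars is five
# times as likely to submit a rating as someone who'd give 1 star.
submitted = [o for o in opinions if random.random() < o / 5]

true_mean = sum(opinions) / len(opinions)
rated_mean = sum(submitted) / len(submitted)

print(f"mean opinion of everyone:  {true_mean:.2f}")
print(f"mean of submitted ratings: {rated_mean:.2f}")
```

Under this toy model the visible average drifts from about 3.0 up towards 3.7, even though nobody’s opinion changed; the unhappy raters simply stayed silent.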
It’s quite difficult to make sense of these data. And I wonder whether greater simplicity wouldn’t be better here. Like the system used at last.fm, where you can just mark your favourites and ban songs from your radio stream. These are simple ratings which do not depend on your personal interpretation of numbers.
Of course, even outside the ‘social’ rating nirvana, ratings exist. There must be millions of questionnaires in the shape of
‘On a scale from one to six, how much do you agree with …?’
where you can give answers like ‘mostly agree’. I usually find it hard to even understand the questions in these poorly designed tests. But apparently some people will claim to gain some real data from that. Which makes me wonder whether they’re just clueless or whether their analysis simply takes the various ways to read a badly written question into account…
I use ratings almost exactly the same way: one star to “flag” an item, usually because I need to remember or fix something about it, and then four and five stars for “artist’s best” and “library-wide best of” respectively. I don’t think I have any two- or three-star items. Good to hear I’m not alone.