BrainLazy has done its fair share of reviews over the years, but from the start we had a hard time adjusting to the standard rating system. In some of our earlier review attempts, we tried to forgo the usual rating, choosing instead to produce a haiku. That didn’t go over so well, so we started to provide the standard “out of ten” score, but we always include a one-sentence verdict as well. It isn’t that we just wanted to be different. The problem is that the usual rating system just doesn’t mesh with the way the human mind works, or at the very least the human race can’t agree on a single proper way to implement one.
Let’s start with the scale of 1 to 10. Logically, an average game should sit at the center of the scale, at 5. This isn’t the case at all, though. All through school, we aim for 70%. A seventy is a passing score. Thus, psychologically, we judge 70% to be the baseline for a good score. Any score down to around 6 indicates a bad game; anything up to about 9 is a good game. We disregard 10 entirely, because obviously that is just hype and fanboy talk, and 0 is just trolling and hater talk. Everything from 1 to 5 is effectively the same score: “awful.” So our finely calibrated ten-point scale becomes a scale of 6 to 9. Five-point scales are even worse, collapsing into three for “meh” and four for “good.” I find that people are a little more willing to accept a five-star score as legitimate, but that doesn’t change the fact that we are forced to grade on a curve to get a score taken seriously. And it doesn’t matter if you are one of those principled and objective individuals who uses the whole scale, because the people who read your reviews are likely using the curved structure and assuming that 4.2/10 is just you being a troll. The fact that you intended that score to indicate something slightly below average is beside the point.
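To make the compression concrete, here is a minimal sketch of what that “curve” does to a score. The `uncurve` helper and the 6-to-9 band are hypothetical, just illustrating the effect described above: a reader who treats 6 as the floor and 9 as the ceiling is mentally re-expanding that narrow band back onto a full scale.

```python
def uncurve(score, lo=6.0, hi=9.0):
    """Map a score on the compressed lo..hi band back onto a full 0..10 scale.

    Hypothetical illustration: readers treat everything at or below `lo`
    as 'awful' (effectively 0) and everything at or above `hi` as perfect.
    """
    if score <= lo:
        return 0.0
    if score >= hi:
        return 10.0
    return (score - lo) / (hi - lo) * 10.0
```

Under this reading, a reviewer's 7.5/10 lands at a dead-average 5/10 in the reader's head, and that principled 4.2/10 registers as a flat zero, which is exactly why it reads as trolling.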
Another issue with rating systems is the initial score indie products tend to get. This goes for books, movies, games, and trendy things like restaurants and fashion. Because the expectations for an unrated indie product are so low, if it has any merit at all the tendency is to score it high and rave about it. As soon as the four- or five-star rating is applied, though, the gauntlet is thrown down. The next reviewer feels some sort of cosmic obligation to balance the scales by judging it incredibly harshly. This is how fan squads and hate squads are formed early on. They are made up of people sticking up for the little guy whether he needs it or not, or trying to show what an aficionado they are by wielding the hammer of justice, because you can’t be a connoisseur of something and actually LIKE it. Things work similarly the other way around. If a movie or game is getting plenty of media attention, or is the next in a popular series, the scores are hugely exaggerated, with fans of the series leaning on one side of the scale, and people who define themselves through their rebellion against the mainstream leaning on the other.
It has always been our observation that the scales are used and abused by human psychology, but recently the financial consequences have begun to rear their ugly heads. Homefront – a game from THQ that you may not have seen here, but couldn’t have missed elsewhere – was hyped beyond belief and given an aggressive media push. Unfortunately, what have we just learned about games with media attention? Yes, the crowd of people with high expectations or statements to make about the mainstream, combined with less biased reviewers using the full scale but being interpreted on the curved one, produced a disappointing score. Worse, it wasn’t just the developers who were disappointed, but the stockholders. Lackluster review scores translated into an actual hit to THQ’s stock price.
Some systems are less vulnerable to the proclivities of the human psyche than others. The “thumbs up, thumbs down” system is a pretty good one, for instance. It is binary, and thus very hard to corrupt or misconstrue. If you liked it, then it was thumbs up; if not, thumbs down. Spread the same rating across a large population, and you start to get a fairly clear and relatively unbiased picture of what the users think of it, on average. Trolls and fanboys can only add a single vote in either direction, and it can only move the score by the same amount, so it would take a massive squad of hypers or haters to actually skew it. And if a game, movie, book, etc. has that many people who feel that strongly about it, that may be a good indication of its quality after all. Rotten Tomatoes is a good example of a site that puts it to good use, but the problem is that in order for it to give a finely tuned score, you need a LOT of reviewers, so it isn’t applicable for reviews from smaller sites. Sure, Roger Ebert boils down his criticism to this system, as do a few other reviewers, but that only works when you’ve got a strong following and some serious gravitas.
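The “you need a LOT of reviewers” point can be sketched in a few lines. The raw approval rate is identical for 9-of-10 thumbs up and 900-of-1000 thumbs up, but a standard statistical adjustment (the lower bound of the Wilson score interval, a common technique for ranking by binary votes, not anything specific to the sites above) rewards the larger sample, which is exactly why aggregate thumb counts work for Rotten Tomatoes but not for a small site:

```python
import math

def approval(up, down):
    """Raw thumbs-up fraction: ignores how many votes were cast."""
    total = up + down
    return up / total if total else 0.0

def wilson_lower(up, down, z=1.96):
    """Lower bound of the Wilson score interval at ~95% confidence.

    A pessimistic estimate of the 'true' approval rate: with few votes
    the bound drops well below the raw fraction, so small samples can't
    claim a finely tuned score.
    """
    n = up + down
    if n == 0:
        return 0.0
    p = up / n
    return (p + z * z / (2 * n)
            - z * math.sqrt((p * (1 - p) + z * z / (4 * n)) / n)) / (1 + z * z / n)
```

With 9 up / 1 down the raw rate is 0.9 but the Wilson lower bound is only about 0.60; with 900 up / 100 down the raw rate is still 0.9 and the bound rises to roughly 0.88. The single extra vote a troll or fanboy can add barely moves either number once the population is large.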
In the end, the real thing you have to realize is that interpreting the ratings in a way that makes them actually useful to you is almost as much of a skill as doing the rating itself. They remain good as a quick indicator, but if you are seriously interested in the overall quality of something, read an actual review, and read the whole thing. Even then, it is a good idea to familiarize yourself with the reviewer, if at all possible, to get an idea of what sort of pet peeves and peculiar tastes he or she might have. Ideally you find someone with tastes that match yours, and an interpretation of the scoring structure that is at the very least consistent. This will give you the tools to get a review that is actually relevant to you. Sure, it is a bit of a hassle, but think of it this way: now the inherently biased nature of the human mind is working FOR you. Neat!