A Verge Review Scoring Concept
Many of us have recently been frustrated by the scoring system at The Verge, and I'd like to put forward my idea for a new one. I thought about making a case for removing scores entirely, as Engadget did, but numbers have always acted as a tl;dr for me, and I'm sure they do for many others.
Products are no longer scored out of 10, but as a positive or negative number around 0, where zero is a product with no major faults but where nothing is executed very well. For example, +20, or -7. This final score is calculated from sub-scores, and only sub-scores. Each sub-score ranges from -3 to +3, with each number meaning a different thing.
+3 - Exceptionally, if not perfectly, executed and innovative.
+2 - Very good execution and well thought out.
+1 - Above-average performance and well thought out.
0 - Has no consequential faults or downfalls. Good, but nothing excels.
-1 - Moderately poor performance; execution isn't quite satisfactory, although close.
-2 - Substandard execution. May seem like an afterthought.
-3 - Very poor execution and/or badly implemented.
Each of a fixed number of categories (for now, let's use 6) gets a score on this scale. The numbers are then added together to form the final score, which could be anywhere from -18 to +18 (with 6 sub-scores): a wide, but not excessive, range.
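The scheme above is simple enough to sketch in a few lines of Python. The category names and most of the sub-score values here are my own illustrative assumptions, not figures from any actual review; only the overall mechanics (six categories, each scored -3 to +3, summed) come from the proposal.

```python
# Sketch of the proposed scoring scheme: each category gets a sub-score
# from -3 to +3, and the overall score is simply their sum.
SUB_MIN, SUB_MAX = -3, 3

def overall_score(sub_scores):
    """Sum per-category sub-scores (-3..+3 each) into the overall score."""
    for name, score in sub_scores.items():
        if not SUB_MIN <= score <= SUB_MAX:
            raise ValueError(f"{name}: sub-score {score} outside [-3, +3]")
    return sum(sub_scores.values())

# Illustrative categories and values (hypothetical, not from a real review):
scores = {
    "display": 3, "design": 2, "performance": 3,
    "software": 2, "speakers": -1, "value": 3,
}
print(overall_score(scores))  # → 12
```

With six categories the sum is necessarily between -18 and +18, which is the range the proposal describes.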
Let's take the Late 2012 iMac review, a recently reviewed product that received an outstanding score. It's a great machine, in my opinion, but in many respects it doesn't excel enough to receive a 9.0. In this system, it would be ranked as follows, based on the review text.
This is an excellent score, but this system makes the faults in the product much clearer. If we divide ±18 (the maximum and minimum scores) by 3, we get 6. So a product with an overall score of +6 should show the same execution as a +1 sub-score, an overall of +12 should equal a +2 sub-score, and so on. The iMac gets a +12, which maps to a +2 sub-score, and that summary is virtually spot on: "Very good execution and well thought out." The product as a whole is nothing innovative, but it's damned good. In places, it's virtually perfect.
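The mapping described above amounts to dividing the overall score by the number of categories, which lands it back on the -3 to +3 sub-score scale. A minimal sketch, assuming that simple division is the intended rule:

```python
def equivalent_sub_score(overall, num_categories=6):
    """Map an overall score back onto the -3..+3 sub-score scale."""
    return overall / num_categories

print(equivalent_sub_score(12))   # → 2.0, "Very good execution and well thought out"
print(equivalent_sub_score(18))   # → 3.0, the maximum possible
print(equivalent_sub_score(-6))   # → -1.0, moderately poor
```

This also shows why the proposal divides ±18 by 3 to get 6: six categories means the overall score is six times the average sub-score.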
Why is this a better system?
The downfalls of a product, and the things that are great about it, are much clearer through these new sub-scores. For example, the speakers getting a 7.0 under the old system makes them look like really good speakers. They're not: they're mostly average, or, thanks to the lack of bass, just a little sub-par. A -1 score reflects this.
A glance at these scores is far more informative than a set of 'out of 10s'. It is clearer whether a product is recommended or not, and it's obvious why at a glance. In the example above we used 6 sub-scores, which gave us a broad range: smaller than the current 'out of 10, with decimals' system, but far more efficient. In an ideal world, each product would have 8 sub-scores, for an overall range of 48 (-24 to +24). This range is an even better size, and more sub-scores mean more information. With this new system there is also no need for averages, or for overall scores that don't match the average of the sub-scores, which cause contention.
This system isn't perfect, but in my opinion it's a start. Technology review scores have always been confusing; this aims to simplify things and give readers the most detailed yet easy-to-comprehend summary of any review. Tell me what you think, and if you're part of the Verge or Vox team, I'd love to hear from you just as much.