Real Team Ratings Are Being Replaced by Ultimate Real Team Ratings and by Reports on the Team Measures That Are Most Important for Winning Playoff Series
Every year Quest for the Ring (QFTR) focuses first and foremost on developing the technology we use for at least one major element of basketball. In the early years of QFTR the focus was on players, and eventually the almost perfect Real Player Rating (RPR) system was fully developed; it is now a stable, proven system. This year QFTR has been focusing on developing new technology for evaluating NBA teams. As a result, as of now, the Real Team Ratings (RTR) system is no longer seen as state of the art. RTR, like every dog, had its day, but that era seems to be over now.
So QFTR has decided to suspend the Real Team Ratings (RTR) that were published in recent years. I'll briefly discuss the most important (but not all) of the reasons RTR is suspended, and then I'll tell you what the replacements and improvements for that product are. It is not impossible, but it is very unlikely, that we will revive the "RTR concept" in the future, so this is likely a termination rather than just a suspension.
Behind the scenes, QFTR has a new, state of the art Ultimate Real Team Ratings (URTR) system which includes essentially every team performance measure in existence. All of the basic measures known to the general public are included, and all of the "advanced" measures not commonly known among the general public are included as well. URTR can and will be used to break down every single aspect of a specific team. URTR shows how a specific team stacks up against all the other teams in roughly 180 performance measures (90 on offense and 90 on defense). Some are more important than others, so the relative importance of the measures for winning playoff games and series is shown with an extensive color coding scheme.
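To give a rough sense of what an importance-based color coding scheme can look like, here is a minimal, purely hypothetical sketch in Python. The measure names, tiers, and colors below are invented for illustration only; they are not the actual URTR scheme, which is not published.

# Purely hypothetical sketch of an importance-to-color mapping.
# None of these names, tiers, or colors are the real (unpublished) URTR scheme.
TIER_COLORS = {
    "critical":  "red",     # measures assumed to matter most for playoff wins
    "important": "orange",
    "secondary": "yellow",
    "minor":     "gray",
}

MEASURE_TIERS = {
    "overall_defensive_efficiency": "critical",
    "paint_defense":                "critical",
    "three_point_defense":          "important",
    "offensive_rebound_rate":       "secondary",
    "free_throw_rate":              "minor",
}

for measure, tier in MEASURE_TIERS.items():
    print(f"{measure}: {tier} ({TIER_COLORS[tier]})")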
URTR is most likely the best and most extensive system for evaluating basketball teams ever created and, as such, it cannot simply be posted at QFTR, because it is worth serious money and would sooner or later be stolen by someone. URTR is so advanced that it is the first QFTR product for which a blanket ban on publication has been issued. URTR is sort of like nuclear technology: if you are going to have it, you need to keep it out of the general public domain. URTR is most likely better than anything any NBA team uses, assuming teams use comparable systems at all. (In all honesty, I don't know what NBA teams use for evaluating themselves, but I would think it varies a lot from team to team.) Instead of depositing the whole thing online, lock, stock, and barrel, QFTR will be using the URTR system offline to give visitors to QFTR the most important and valuable information that can be obtained from the system.
The only people who will lose big from the decision not to publish the whole system are the ones who would have stolen it for their own benefit. Meanwhile, regular visitors to QFTR will get much, and hopefully most, of the highly valuable information that URTR can provide, and they will get it easily and efficiently. Obviously the whole idea here is to give red carpet treatment to a very small but exclusive group: regular visitors to QFTR. Believe me, we are making sure, more than ever before, that if you bookmark QFTR and visit regularly, you will be heavily rewarded.
EXACTLY WHY REAL TEAM RATINGS IS BEING MOTHBALLED
The Real Team Ratings of recent years have included only a small subset of all of the measures that are in the URTR system. While most of the most important measures made it into RTR by 2011, not all of them did, so in order to continue with the concept I would have to at least expand the system, and a fairly large amount of time would be required for that.
Much more importantly, it was recently realized that the value of RTR is limited because when you aggregate performance measures at the team level, you are likely to get an information product that is LESS valuable than when you look at the most important team performance measures separately. This is due to the heavy offsetting that goes on with teams that does NOT really go on with players in the QFTR Real Player Rating system. Teams often have some major strengths and some major weaknesses. When you combine all the team numbers together, the strengths offset the weaknesses to produce a middle-of-the-road result that does not give you the detailed information that is the most valuable. The detailed information hides behind the overall aggregated RTR rating.
Also, teams can and do change much more radically than players do. Players are substantially stuck with the same skills and abilities for their whole careers, whereas teams can make huge changes over time in what they are good at, what they are bad at, and by how much; players can only make much smaller changes in their strengths and weaknesses.
For example, if a team is one of the worst paint defending teams but one of the highest shooting percentage teams, those two might offset in the traditional RTR system (assuming it contained both of those components). The resulting RTR rating would be a blend of those two components (and of all the other components), and the important details (that the team was really bad at paint defending but really good at shooting) would be hidden in an overall middle-of-the-pack RTR rating.
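To make the offsetting concrete, here is a minimal sketch of how a weighted average can bury two extreme components inside an average-looking overall number. The component names, scores, and weights are invented for this illustration and are not actual RTR or URTR formulas.

# Illustrative only: hypothetical component scores on a 0-100 scale, where 50
# is league average. The values and weights are invented for this example and
# are not the real RTR or URTR formulas.
components = {
    "paint_defense":       10,   # one of the worst in the league
    "shooting_percentage": 90,   # one of the best in the league
    "rebounding":          55,
    "turnovers_forced":    45,
}
weights = {
    "paint_defense":       0.30,
    "shooting_percentage": 0.30,
    "rebounding":          0.20,
    "turnovers_forced":    0.20,
}

# The aggregate lands right at league average (50.0), even though the team is
# extreme, good and bad, on its two most heavily weighted components.
overall = sum(components[k] * weights[k] for k in components)
print(f"Aggregated rating: {overall:.1f}")  # -> 50.0: the detail is hidden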
COMPARING TEAMS VERSUS COMPARING PLAYERS
Comparing and ranking the performance of teams accurately and effectively is actually more complicated and tricky than doing the same for players. With players you don't have the gross offsetting effect just discussed. First, players are not really "bad" at anything, because, for one thing, players don't have win/loss records. Or, to be more precise, players are not bad at things in the way that teams are.
Even the best players in the League are not good at certain things, but that is not very relevant because, well, they are among the best players overall. For example, it is not really that important that LeBron James turns the ball over much more than typical players do. (And obviously, since he insists on infringing on the point guard role, it is also inevitable that he will make a lot of turnovers.) Instead of being bad at things, it is more accurate to say that players are less good at some things and "more good" at other things. When you combine everything players do into one number, as QFTR does with Real Player Ratings, you get a very valuable, perfectly accurate, and valid overall performance measure for each and every player rated. Then you can fairly compare players using those ratings.
You can be just about perfectly fair when you compare players using QFTR Real Player Ratings if you understand all of the important contexts. For example, if you understand that small forwards and shooting guards have on average lower ratings than centers and point guards then you can be more fair and accurate when comparing players using RPR than if you don’t understand that. And if you understand that lingering effects from an injury can reduce a player’s RPR by 5%-25% in a season (and you know which players were affected by one or more injuries) then you can be more fair and accurate when comparing players using RPR.
Players by themselves don’t have win/loss records. But teams win and lose games, playoff series, and Championship series, so teams operate in a different reality, so to speak, and in that reality, if you try to boil everything down to one summary number the way you do with players, you end up with a much less valuable tool. That is the primary reason we are suspending the classic Real Team Ratings system.
FOR RATING TEAMS: OUT WITH THE OLD AND IN WITH THE NEW AND BETTER
As you might expect, RTR is being replaced with something better. As time permits (and as always, we have only a small fraction of the time we would like to have; we would have more time if traffic were higher), we are going to post Reports on the most important components of the Ultimate Real Team Ratings system. Sometimes there will be a single component in a Report, but more commonly we will group several components that fit together nicely.
For example, and this will always be one of the most important ones, we will produce a Report on the key defensive measures for the teams at least once a year, and with any luck two or three times a year. In that Report teams will be rated and ranked according to:
--Overall defensive efficiency
--Paint defense
--Outside-the-paint defense
--3-point shooting defense, also known as perimeter defense
QFTR plans to produce the first ever Report of this type this very week.
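For reference, the first measure on that list is commonly estimated as points allowed per 100 opponent possessions, with possessions approximated from box-score totals. Below is a minimal sketch of that standard public formula; it is not necessarily the exact formula QFTR uses, and the team totals in the example are made up.

# A minimal sketch of the standard possession-based estimate of defensive
# efficiency (points allowed per 100 opponent possessions). This is the common
# public formula, not necessarily QFTR's, and the team totals below are invented.
def estimate_possessions(fga, oreb, tov, fta):
    """Classic box-score possession estimate for one team's season totals."""
    return fga - oreb + tov + 0.44 * fta

def defensive_efficiency(points_allowed, opp_fga, opp_oreb, opp_tov, opp_fta):
    """Points allowed per 100 estimated opponent possessions (lower is better)."""
    opp_poss = estimate_possessions(opp_fga, opp_oreb, opp_tov, opp_fta)
    return 100.0 * points_allowed / opp_poss

# Hypothetical season totals for two teams' opponents.
teams = {
    "Team A": dict(points_allowed=7600, opp_fga=6700, opp_oreb=850,
                   opp_tov=1150, opp_fta=1900),
    "Team B": dict(points_allowed=8100, opp_fga=6750, opp_oreb=900,
                   opp_tov=1050, opp_fta=1850),
}

# Rank from best (fewest points allowed per 100 possessions) to worst.
for name, totals in sorted(teams.items(),
                           key=lambda kv: defensive_efficiency(**kv[1])):
    print(f"{name}: {defensive_efficiency(**totals):.1f} per 100 possessions")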
In summary, the RTR system is no longer state of the art and is being replaced by:
--Ultimate Real Team Ratings (URTR) behind the scenes (offline). URTR will be used for highly detailed and high level reporting about specific teams.
--Reports (at QFTR) showing how all of the pro NBA teams stack up on the specific components of basketball that QFTR knows are the most important ones that determine who wins playoff series and Championships.
The QFTR decision to discontinue RTR and replace it with these new tools and products is a good example of how you have to evolve your use of statistics when better methods become available.
A GENERAL WORD ABOUT THE HIGH VALUE OF STATISTICS
This is a good spot for QFTR to offer a defense of the effective and correct use of statistics for explaining how playoff games and series are won and lost.
There are those who don’t understand specific statistics, or the use of statistics in general, and that’s bad. But worse still, there are those who do understand individual statistics and how statistics are used, yet don’t understand why using statistics is so valuable. That’s worse because these people, by rights, should know better.
For example, there is basketball writer David Friedman, who in the early days of QFTR was an icon, but more recently less so. Friedman continually makes derogatory comments about people who use anything other than the most basic statistics when they report on basketball. Friedman himself uses basic statistics extremely heavily, so when he criticizes what he calls "statistical gurus" he must be referring to those who use statistics other than the really basic ones. QFTR uses both statistics and text Reports, so, strictly speaking (or at least arguably), Friedman has not completely trashed QFTR; it would be interesting to get Friedman's overall opinion of QFTR's unique mix of text and statistical reporting.
Friedman focuses on what are ultimately relatively minor flaws in statistics, most famously that assist counts are subjective because scorekeepers cannot be perfectly objective when it is not precisely clear whether there has been an assist or not.
This is a classic case, of course, of making the perfect the enemy of the good. Not to mention that neither Friedman nor anyone else has ever given any detailed, hard information about how inaccurate the assist counts might be. Further, to the extent that the assist counts are not perfect, any flaws in the counting are very likely, in the long run, to be applied equally across all the teams and all the players, so no team or player should be benefiting or being disadvantaged by the minor flaws in the assist counts (assuming the flaws could produce significant departures from a perfect assist count in the first place).
Assuming that the scorekeepers are not rotated from arena to arena, it's possible that point guards for certain teams might end up at the end of a season with slightly more or slightly fewer assists than they really deserve, but any such difference would be small, because when all is said and done (a) saying whether a play involves an assist or not is not really all that complicated and (b) the scorekeepers are trained and they have oversight coming from the League; the scorekeepers are not just people picked randomly out of the stands or something.
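As a back-of-the-envelope check on the "it washes out over a season" argument, here is a minimal sketch. The error model (at most one miscounted assist per game, with no systematic lean toward the home team) is an assumption made purely for illustration, not a measurement of actual scorekeeping accuracy.

# A minimal sketch (not QFTR's methodology) of why unbiased per-game
# scorekeeping noise in assist counts tends to wash out over a full season.
# The error model is an assumption made purely for illustration.
import random

def season_error_pct(games=82, true_assists_per_game=8, trials=10_000):
    """Average absolute season-long error, as a percent of true assists."""
    total_pct = 0.0
    for _ in range(trials):
        # Each game, the scorekeeper miscounts by -1, 0, or +1 assists.
        season_error = sum(random.choice([-1, 0, 1]) for _ in range(games))
        total_pct += abs(season_error) / (games * true_assists_per_game) * 100
    return total_pct / trials

# Typically prints an average error on the order of 1% of the season total.
print(f"Typical season-long assist error: {season_error_pct():.2f}%")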
But aside from these logically flawed and relatively trivial objections to statistics, the biggest problem with those like Friedman is that they don’t understand that the real value of using statistics is not that they perfectly reflect reality, but that they reflect reality almost perfectly (or, if you prefer, "relatively perfectly") with extreme efficiency. A QFTR report that contains a lot of statistics is nothing more and nothing less than the equivalent of a very long text Report (many thousands, perhaps tens of thousands, of words) on the subject, one that almost no one could or would read, because hardly anyone has the time to read a monstrously long report on, for example, how good all the players on a particular team are. And while readers wouldn't have the time to read text versions, QFTR would not have the time to produce them anyway. Using statistics and formatting, QFTR can and does produce and report information that it could not possibly produce and report using text articles. Statistical, formatted reporting can be 50 times more efficient to produce and 20 times more efficient to consume than text reporting.
In summary, statistics are not perfect, but they are a lot closer to perfect than people like David Friedman think. And statistics are an extremely efficient way of giving people very valuable, useful, and highly accurate information. QFTR statistical reporting is not literally perfect information, but it is far closer to perfect than useless, and it is among the best information out there on the subjects covered. Moreover, even with serious overall production limitations, QFTR statistical reporting gives readers, extremely efficiently, a huge amount of information that they could not possibly get unless they spent far more time, twenty or more times as much. In the real world, no one is going to do that, so QFTR statistical reporting produces information that simply would never be produced or consumed if statistics were rejected for the dumb reason that they are not absolutely perfect.