Seth Partnow (@SethPartnow): So, friend of the blog Neil Greenberg of the Washington Post tweeted this out on Monday:
Bill James in '86 Abstract on trying to summarize everything a player can do in one number. Applies to NHL especially pic.twitter.com/Ybu0601oxL
- Neil Greenberg (@ngreenberg) March 23, 2015
Though Neil was specifically referencing the rise of hockey analytics, this obviously has relevance to basketball. We've had some fun internally with the back-and-forth over the utility of "YAPMs,"[1. "Yet Another Plus/Minus Model," courtesy of TNC guest contributor Hannes Becker.] but I think that Bill James quote encapsulates a lot of my thoughts on the matter. Without putting further words in his mouth, I'd venture to say he'd agree that one-number metrics can be a decent starting point, and I certainly agree with that.
Where I get off the train is that seeking perfection in those models without a full understanding of what we are trying to measure is both unproductive and a bit of a fool's errand. It's why I gravitate more towards Andrew's work with PT-PM, because that's not just trying to show "who's better, who's best" but also about figuring out which things that happen on the court matter and which are largely ephemera.
I know a lot of you disagree to an extent, and I'd like to hear why.
Krishna Narsu (@Knarsu3): My biggest issue with the APMs/RAPMs etc. has always been the "why" of the metric. So Khris Middleton has a 3.70 DRPM. Cool, so we can be reasonably confident he's a good defensive player. But does anyone want to tell me how or why? That's where RAPM/APM metrics stop. But with SPM[2. Statistical plus/minus, usually derived by regressing some set of more traditional statistics onto an APM model to estimate the value of each input.] metrics, like Andrew's, we can break the number down into components and see "oh, Khris Middleton is good defensively because he's contesting 3s at a high rate" (I have no idea if that's true, just using it as an example). So I'm definitely in favor of metrics where we can look at more than just the one number and see why that metric is telling us a player is good offensively or defensively.
Of course, as you pointed out Seth, no all-in-one metric will ever be even close to ideal, because there's so much we're missing no matter how much we try to add in. But to not try is also foolish. It's still a good starting point, and we always need a starting point. For example, knowing that Khris Middleton's DRPM is 3.70 might not tell us much, but it does tell us something: we can say with decent confidence that Middleton is a good defensive player based on his high DRPM. That's a useful bit of information, even if it's not much.
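[Ed: to make Krishna's "components" point concrete, here is a minimal sketch of how an SPM-style model is typically built, per footnote 2: regress box-score rates onto a plus-minus target, then read each player's rating as a sum of named contributions. All column names and data below are invented placeholders, not any real model.]

```python
# SPM-style sketch: regress per-100 box-score rates onto a plus-minus
# target so each rating decomposes into named, inspectable components.
# Data is randomly generated for illustration only.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 300  # players
df = pd.DataFrame({
    "pts_per100": rng.normal(20, 6, n),
    "ast_per100": rng.normal(4, 2, n),
    "tov_per100": rng.normal(2.5, 1, n),
    "rapm": rng.normal(0, 2, n),  # stand-in for a real (R)APM run
})

stats = ["pts_per100", "ast_per100", "tov_per100"]
model = LinearRegression().fit(df[stats], df["rapm"])

# Per-stat contribution for each player: the "why" behind the number.
contrib = df[stats] * model.coef_
df["spm"] = contrib.sum(axis=1) + model.intercept_
print(contrib.head())
```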
Nathan Walker (@bbstats): The only objection I have to that Bill James statement is a philosophical one, specifically where he says that the "best" evaluation is a subjective one. I'm not disagreeing with his conclusion so much as his premise.
I think it's a fallacy to separate subjective and objective thinking. Instead, I think that subjectivity is the *lens* through which we should interpret "objective" information. While some might describe the eye test as "equally important" to statistical analysis, in reality we are always processing information the same way. Even if you start with "objective" information, your evaluation will always pass through the lens of your own perception. I do think that, given the weaknesses of the human mind, stats can do some things much better than we can. Ultimately, most of our "instinctive judgments" (e.g. "Anthony Davis' length and athleticism is unparalleled, he's going to make a huge jump this season!") can be boiled down into two categories:
1) Judgments that are statistical in nature and are therefore less accurate without data or data analysis (i.e. the "you can't watch every game, stats can" argument)
2) Judgments about things that have not yet been measured in any meaningful way (i.e. before this year most of us measured "great passers" by their assist totals alone!)
Furthermore, I think that data and one-number systems can very well estimate a player's current value to his team in the NBA. I do not think that data can tell us how Player X will do when he's put in a scenario he's never been in before: for example, what would happen when the Thunder played Russell Westbrook at point guard right out of college.
On the other hand, #2 points to something that human intuition does very well: seeing patterns. When I observe x, even though it is difficult to measure, y is the typical result. Under those circumstances, subjectivity is king. However, statistics can do the same thing.
Back to the point at hand: one thing that plus-minus tends to measure, in opposition to the eye test, is players who do things which are extremely difficult to observe. Not turning the ball over is a great example; I think this is one of the reasons Dirk was typically top 5 in RAPM stats in the 2010s. We can see when a player turns the ball over. We are less able to see the rate at which he does so. And we are barely able to notice a player's low turnover rate at all. So, to some degree there is a "blind faith" in plus-minus data: players who can look particularly bad may be making up for it in other, smaller ways, and vice versa. This is where the benefit of out-of-sample testing and prediction methods comes into play, and why I think RPM is the low-hanging fruit we should often be reaching for. If, in the past, a player's data x, y, and z make his team so much better, after enough samples we have to shrug and say "this is what the model says."
At any rate, to *purely* rate players by their "one-number rating" is silly. But at the *very* least, a one-number rating can be a handrail that guides us up the stairs to understanding player value.
Andrew Johnson (@CountingBaskets): I certainly wouldn't say I disagree with the initial point; it's Bill James, after all! It's incredibly dangerous to think that any one number has all of the information one needs to evaluate a player or all of his contributions.
However, like Nathan, I think James might be underestimating how hard it is to balance all the different measures we have in our heads at the same time, to say nothing of the things we can't (or don't yet) measure. People just aren't very good at this; watch someone try to do long division in their head and their eyes practically bug out. So one-number metrics are a convenient and helpful accounting method.
Further, subjective reputations can be overly sticky,[4. Ed: as we see in All-Defense voting every season.] which is why I think less analytically inclined teams have a tendency to go after guys who used to be a name but aren't very good anymore. We don't necessarily need a one-number metric to point some of those guys out, but…
On the other hand, an issue with one-number metrics the James quote doesn't touch on, probably because it is less of an issue in baseball, is the importance of context and role, which matters even more with the more black-box metrics. For example, Timofey Mozgov is contributing more value to Cleveland than the simple addition of his prior statistics from Denver might have indicated, but that may well not transfer to yet another team.
Partnow: If I can flip this around on Andrew: is the problem then as much looking for the platonic best player, as opposed to the player who can contribute in areas X, Y, and Z to a new team in the most efficient manner (in both on-court and contractual terms)? My complaint about the top-down metrics is that time spent on them is time NOT spent finding out who's a good rebounder. Playmaker? Screener?
Now, some of those things we don't have the input data to judge on more than an ad hoc, eye-test basis. But we're also exposed to exponentially more and better data than we were even a few months ago.
Johnson: I think there's definitely some truth to that, Seth. But I think teams and analytics need to go both bottom-up and top-down, if that makes sense. That said, I think it's hard to find evidence that NBA teams outside of central Texas are good at exploiting context in a consistent way to find value.
Yes, we're getting all this new data, so we're going through the steps of describing and categorizing it, then testing how stable the measures are and trying to put them into context.
In fact, one of the things we could do with one-number metrics is use them as a tool to compare expected performance to actual lineup and team performance, to see if there is something they're systematically missing in the new data. Spacing is the one area where this has been done a number of times.
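[Ed: a minimal sketch of the check Andrew describes: sum a one-number metric across a five-man unit, compare it to the lineup's actual net rating, and look for systematic residuals (e.g. spacing). All ratings below are invented.]

```python
# Compare a lineup's "expected" net rating (sum of individual ratings)
# to what it actually posted. Systematic residuals across many lineups
# hint at interactions the one-number metric misses. Invented numbers.
lineup_ratings = {"A": 3.1, "B": 1.4, "C": -0.5, "D": 2.2, "E": 0.3}
actual_net_rating = 9.8  # observed per-100 margin for this lineup

expected = sum(lineup_ratings.values())
residual = actual_net_rating - expected
print(f"expected {expected:+.1f}, actual {actual_net_rating:+.1f}, "
      f"residual {residual:+.1f}")
# Next step: regress residuals on lineup features (e.g. number of
# shooters) across many lineups to test for a systematic miss.
```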
Michael Murray (@MichaelMurrays): I don't want to respond for Andrew, but the distinction between what is and what is not a one-number metric is something that's really interesting to me (YAPM vs. "who's a good rebounder"). For something to be beyond scrutiny, to me, it needs to have a discrete outcome. A shot goes in or not, is worth a certain number of points, and a player makes a certain combination of these things over time. When we start combining those things into new metrics (eFG%, TS%), are they one-number metrics? There's definitely a sort of uncanny valley where it feels like a metric tries to mean too much, but where is that? The resulting argument for pursuing these metrics is that, in the same way a shot going in is a discrete outcome, so is a basketball game, and given perfect information I should be able to represent both of those accurately. You not only have this sort of intuitive justification, you have a whole lot of incentive. At their core, teams are usually trying to maximize their winning ability relative to their costs, and a single metric is ideal for that. Also, you have sportsbooks, which don't offer lines on screening efficiency. Finally, you have the public space, where articles are written out of interest and edification. But it's hard to write articles that drill down to the minutiae of basketball, and there can be a high barrier to entry in reading them.
Austin Clemens (@AustinClemens): I like the phrase "platonic best player." I can't find the exchange right now, but Talking Practice tweeted something about how he doesn't like to use RPM for players who change teams; does anyone else remember that? Basically he was taking Seth's argument to its logical endpoint: because RPM can fluctuate so much when someone changes teams, he doesn't think it's a useful prior for players who are in a new role. That seems a bit far-fetched to me… like saying that once someone changes teams you can't use their old FG% to predict their future FG%. You know what would be a cool thing to develop? A measure of how much a player's situation has changed from one year to the next, so it's high either when he changes teams or when the team changes dramatically around him. Then you could see whether particular stats are more or less consistent for high values of change vs. low values. Does that seem dumb?
Walker: TP has said he/they generate two ratings for a player who has been on two teams for just that reason.
Partnow: As a general matter, it would be fascinating to look into "predicted role" for a guy who's in a different context. However, how much work has really been done to describe "role"? We have some usage stats, and I've messed around with point guard styles a little, but I'm not even sure we have the statistical vocabulary to describe a player's role in a manner lending itself to either external or internal comparison.
Jacob Rosen (@WFNYJacob): I've always liked Jon Nichols'[5. Now of the Cavaliers.] position-adjusted classification as something that does a good job of defining "role."
Nick Restifo (@Itsastat): RAPM, APM, and the other one-number metrics are decently predictive. They appear to do a good job of predicting how good a team will be as the sum of its parts, and how valuable a player is to his team's success. What we still can't get at with these numbers is fit, or how valuable players will be when asked to change their role. RAPM and other metrics aren't nearly as predictive when players change teams. Luckily, I think the first question is far more important. If player X is valuable to his current team's success, he is likely to be valuable to another team's success as well, whether in a similar or a different role. But players can definitely be used optimally or poorly.
It is also important to remember that RAPM, APM, FFAPM, etc. are estimates of a player's plus-minus; they do not reflect an exact count of plus-minus in any way. They don't tell us how or why a player is valuable, which also speaks to their inability to evaluate changes of role well. Through the lens of RAPM, the Pistons signing Josh Smith was an objectively good move. RAPM would not have predicted that the Pistons would waive Smith and eat his salary for years just to get rid of him.
When digesting these numbers, it is most important, as with any number, to be aware of what they are and what they aren't. In a static environment, numbers like RAPM and APM are an impressively accurate measure of player value. The NBA is not a static environment, however. Players, teams, and coaches change, and one-number metrics do not yet excel at measuring this. One-number metrics like RAPM and APM are the best we have for estimating player value, and they are at their most usable when you need one number to encapsulate a player's worth (as in many forms of statistical analysis and modeling). If you wish to evaluate how valuable a player actually is, why he is that valuable, or how he will do on a new team, you need to consider other methods of evaluation, like component metrics and film, in addition to the one-number metrics.
Ian Levy (@HickoryHigh): I think Nick's last paragraph hit on something important for me. All statistics are a shorthand to some degree, a way of taking many data points and turning them into something descriptive or predictive. Even something like points per game just gives an impression of a player's scoring quantity; it doesn't describe its entirety. Some games he scores more, some games he scores less. All of these one-number metrics are similar in intent to something basic like field goal percentage: take a mountain of information and distill it into something smaller, digestible, understandable.
Thinking of these things in the same way has helped me feel more comfortable using all-in-one metrics. I've tried to make a point recently of characterizing them as "an estimate" of a player's overall value or contributions. Thinking of them as estimations and shorthand for big, big things has helped me keep them in the proper context. They tell a little bit about some things that I'm interested in. But knowing that they're estimations keeps me from using them to draw conclusions that are inappropriately concrete.
I think framing them as estimations could help a lot with how they are perceived. People who generally don't trust them might feel more comfortable if they were presented in this softer and more accurate way. It also makes it more difficult to wield them as an argument-ending baton if they are acknowledged to be approximations.
I think most people who dislike and/or distrust these metrics are uncomfortable with approximations being presented as facts. Take Nick Collison, who was an "APM superstar" for several years in several different plus-minus models. I think people were so turned off by the implication that he was among the league's best players that it colored the perception of the whole frame of analysis. But I don't think anyone, even the biggest plus-minus critics, would have argued that he was a bad player. He set screens, played good defense, crashed the boards, made smart passes, didn't make mistakes. Those are nearly universally regarded as good (if unglamorous) things. The problem came with the implication that Collison was the "sixth-best" player in the league, or wherever a given model slotted him.
The point being: I think these models are useful and accessible for identifying who's good and who's bad, and the ballpark of how good or bad they are. They become problematic, and undermine their own accessibility, when they are used with careless specificity: "Nick Collison is better than 453 other NBA players, because this number said so."
Clemens: Ian's point is actually one I was trying to make in my RPM glossary entry. Even FG% is an estimate, because we don't know what a player's FG% would be if we had him take an infinite number of shots. That's the definition of a statistic: an estimate of the true population value, obtained by looking at a sample of all instances. This is why I consider RPM to be more "honest" than things like PER and WS, because those are point predictions based on a weighting formula. They don't really acknowledge the uncertainty that must be inherent to any statistic. RPM does, and sure, it's hard to get standard errors for ridge regressions and so on, but they are the best one-number estimates we have. Bill Simmons's footnote about RPM[4. In part three of his trade value column.] was largely misguided. Using PER as a contrast to RPM, which he described as some kind of engineered thing where nerds just screwed with numbers until they looked right, gets it largely backwards. PER was created by fiddling with the numbers until they looked right; I don't know exactly how Hollinger created it, but I assume he tinkered with the input weights until he got some reasonable answers. By contrast, RPM is really a theoretical construct. It's a method for decomposing the value of a play into constituent parts that can be attributed to particular players. The method came first, and it creates numbers that look right because it is based on strong theoretical principles.
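[Ed: for readers who haven't seen the machinery, here is a toy version of the decomposition Austin describes, the core of any (R)APM: a stint matrix of on-court indicators, point margin as the response, and ridge shrinkage toward zero. The stints and margins below are invented.]

```python
# Toy (R)APM: rows are stints, columns are players (+1 if on court for
# the home side, -1 for the away side, 0 if off the floor); response
# is point margin per 100 possessions in that stint. Invented data.
import numpy as np
from sklearn.linear_model import Ridge

X = np.array([
    [ 1,  1, -1, -1],   # 4 stints, 4 players
    [ 1, -1,  1, -1],
    [-1,  1,  1, -1],
    [ 1,  1,  1, -1],
])
y = np.array([6.0, -2.0, 3.0, 8.0])  # per-100 margin in each stint

# Ridge shrinks each coefficient toward zero, which is what lets the
# model separate players who usually share the floor.
rapm = Ridge(alpha=1.0, fit_intercept=False).fit(X, y)
print(rapm.coef_)  # one number per player: the decomposed credit
```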
Walker: In Simmons's defense, every good metric goes through "screwing with numbers until they look right." One of the reasons RPM exists is that Daniel Myers suggested adding box score information to Jerry Engelmann's RAPM. But how did we know RAPM needed help? It was obviously flawed, as we noted by the eye test; Kevin Durant and Kevin Love, for example, posted extremely low RAPMs in the early 2010s. Adding box score and other information significantly improved RAPM's ability to predict offensive and defensive ratings.
Restifo: There are some oddities, too, that I'm not sure have been mentioned yet. In some RAPMs, you can improve your DRAPM by taking more shots, and so on. This is an artifact of the mathematics of estimating a player's defense rather than a reflection of actual defensive skill. Defense as a whole is still notoriously hard to evaluate precisely.
Levy: Shifting topics slightly, I know Seth talks a lot about the difference between what a player "is" and what a player "did," which I think is really germane to the all-in-one metric conversation. These metrics show what a player did, which is heavily influenced by context and is one of the reasons they are so inappropriate for player ranking. For example, Patty Mills ranked really high in SPM last season, above John Wall. That's what he did, and that's because he played for the Spurs in a constrained role where it was easier for him to be productive and efficient. That doesn't mean he is better than John Wall. When people want to talk about "better" they're really talking about something that we can't measure well, because it involves "did" and "could." These metrics are pretty good with did. Not as good with could.
Layne Vashro (@VJL_BBall): I want to say more on this, but one quick note.
Most metric interpretation issues would be solved by more consistent use of error reporting or confidence intervals. One of the biggest strengths of quantitative methods over more subjective evaluations is that we know almost exactly how wrong we likely are (at least in general). Despite that, you never see range estimates attached to metrics (a criticism that includes myself). I think it would do everyone a big favor for us to phrase things in terms that capture our lack of certainty: "It is 95% likely that Nick Collison, in his current context, is worth…"; "I am 95% certain that Aaron Gordon's peak NBA value will fall between X1 and X2"…
In my opinion, this is one of the easiest important changes we can all make.
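[Ed: a sketch of the phrasing Layne is asking for, assuming a normal approximation and a standard error obtained from, say, a bootstrap; the estimate and SE below are invented.]

```python
# Report a range, not a point. Assumes a normal approximation and a
# standard error from e.g. a bootstrap; numbers are invented.
from scipy.stats import norm

estimate, se = 3.7, 1.2  # hypothetical rating and its standard error
lo, hi = norm.interval(0.95, loc=estimate, scale=se)
print(f"It is 95% likely that this player, in his current context, "
      f"is worth between {lo:+.1f} and {hi:+.1f} points per 100.")
```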
Clemens: I made this point in my RPM glossary entry too, and then a bunch of people schooled me on how hard it is to get standard errors for ridge regression. J.E. has done it via bootstrapping, but I have read some academic literature suggesting that no matter how you do it, it is probably wrong, and that ultimately you just shouldn't try. So that's kind of depressing: the bootstrapped SEs are like estimates of estimates. Two (three?) possible solutions, though:

1) Stick to OLS (no ridge) and iteratively drop players who cause collinearity problems. This means moving more players into your comparison case, which sucks, but you should be able to retain good estimates of players you really care about, like Roy Hibbert, by dropping players you only kind of care about, like Ian Mahinmi. You'd have to do something like run the regression, then automatically check VIFs or collinearity between sets of players, drop the problematic player in a group who has the fewest possessions, repeat…
2) Maybe Bayesian ridge regression can dodge this? Bayesian ridge regression, as I understand it, is just OLS with a prior centered at 0 and a parameter for the SD of that prior distribution. If you want to also use SPM as a prior, you could either mix it in afterwards (0.25*SPM + 0.75*BPM or whatever, totally atheoretical "messing with numbers"!) or you could blend the zero prior with your SPM and use that as the prior (so, basically, the SPM regressed towards 0; see the sketch after this list).
3) Lasso? I don't know anything about lasso. I assume if it were a good solution, someone would have done it.
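[Ed: a sketch of option 2 under one reading: ridge regression whose prior is centered on an SPM-style estimate rather than zero. The closed form for minimizing ||y - Xb||^2 + lam*||b - prior||^2 is b = (X'X + lam*I)^(-1)(X'y + lam*prior). All data below is simulated; this is not any published model.]

```python
# Ridge regression shrinking toward an SPM-style prior instead of zero.
# `lam` controls how hard estimates are pulled toward the prior.
import numpy as np

def ridge_with_prior(X, y, prior, lam=100.0):
    """Solve min ||y - Xb||^2 + lam * ||b - prior||^2 in closed form."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p),
                           X.T @ y + lam * prior)

rng = np.random.default_rng(1)
X = rng.choice([-1, 0, 1], size=(500, 50))    # fake stint matrix
true_beta = rng.normal(0, 2, 50)              # fake true ratings
y = X @ true_beta + rng.normal(0, 10, 500)    # noisy stint margins
spm_prior = true_beta + rng.normal(0, 1, 50)  # noisy SPM estimate

beta = ridge_with_prior(X, y, spm_prior)
```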
Partnow: It seems mildly churlish for the reason not to include error terms or confidence intervals for a stat like RAPM to be the inexactitude of the confidence interval itself. We can't tell you exactly how wrong we are, so we'll present it as if we're not wrong at all? Even an estimate of the confidence interval would be useful.
For example, friend of the blog Talking Practice shared a small selection of his/their 2013/2014 data with me,[1. For a post which died a silent death in the drafts, but which was survived by its child, this discussion.] which had Danny Green estimated to be 0.5 pts/100 more effective/valuable/better than Klay Thompson last season. The first thing to note is that Thompson obviously got better from last season to this one. One-number metrics aren't necessary to see this, but most or all I've seen reflect it, which is nice. The second thing is that, using bootstrapping techniques, we were able to determine this relative rating meant Green was about 60% likely to have been better/more valuable/more effective than Thompson last year. That's FAR less definite (considering the baseline is 50/50 for any two players) than a simple rank ordering of players would suggest.
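[Ed: a sketch of the bootstrap comparison Seth describes. Given bootstrap replicates of two players' ratings, the share of replicates in which one exceeds the other estimates the probability he was actually better. The replicates below are simulated, not TP's data.]

```python
# From bootstrap replicates of two ratings, estimate P(A was better
# than B). With Green ~0.5 per 100 ahead but wide, overlapping
# distributions, the answer lands near 60%. Simulated replicates.
import numpy as np

rng = np.random.default_rng(0)
green = rng.normal(2.0, 2.0, 10_000)     # invented replicates
thompson = rng.normal(1.5, 2.0, 10_000)

p_green_better = (green > thompson).mean()
print(f"P(Green > Thompson) ~= {p_green_better:.0%}")  # ~57% here
```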
Yet how is RPM most commonly, almost exclusively, used? Khris Middleton is a top-10 player by RPM. Steph Curry is better than James Harden because RPM. The way these numbers are most often presented implies a level of exactitude and certainty that is simply unwarranted, which then gets translated into "objective rankings" when employed by the more general public, which doesn't understand the methodology well enough to know these ratings are estimates, or what that fact means. Further, the complexity of the method tends to provide an unwarranted veneer of objectivity. Any given method, from PER on up, has a ton of assumptions baked into the formula. At least with PER (or better versions of BPM models, like Kevin's DRE) those assumptions are transparent. That transparency is largely lacking from most public APM models, and I'm not completely sure many of the creators could adequately verbalize the assumptions a given model necessarily includes.
There's not much marketability in uncertainty, so it's understandable why the range of the estimate and the falseness of any appearance of pure objectivity don't get talked about much. But those issues are there, and they are vital to properly understanding how much weight to give a given metric and what its best use case should be.
Clemens: When I'm done with the freelance project I'm working on now, I am hoping to introduce an RAPM with confidence intervals and a frontend where you can enter two players and it will spit out something in plain English, like "Player A is 75% likely to be better than Player B defensively." But now that we have YAPM and Layne's thing, I don't know, maybe we should just figure out some way to consolidate all that stuff. I also want to publish a full Python, Stata, or R tutorial for creating it. I know Evan Zamir just did this, and it has been done in the past, but I feel like it could use a new presentation.
Kevin Ferrigan (@NBACouchside): I think the regression methodology, whether ridge or otherwise, should just be applied to more things. I like Layne's Four Factors RAPM, but I'd love to see things like how opponent shot distributions change with one player on the floor, specifically threes surrendered and shots within 3 feet. Those two areas are the most important, obviously, and we've hammered the point on NC by now that 3P% against is largely random luck, so forcing teams out of those shots in the first place is a good idea. FG% inside 3 feet, alongside FGA inside 3 feet, is also probably worth looking at, because shot deterrence does seem to be more of a thing there than on 3s.
Walker: Regarding confidence intervals, I have two things to add:
1) The inability to discern the true difference between two observed values is a shortcoming of rankings in general, not of one-number stats specifically. Every ranking system suffers from this, and always will. An easy example: there is almost no discernible difference in W-L between the #15 and the #25 college team.
2) I think it's a little odd that we say in one breath "one-number systems can't really rate players, because context matters," then go on to say "I wish we knew the confidence intervals so we could appropriately rank a player" (which I feel is an undertone here). If RPM doesn't "rank" players (which I'm starting to agree with), then confidence intervals don't seem to add much to the conversation. I do agree, however, that RPM does rank "value to team," for which confidence intervals would be of some aid.
Jay Cipoletti (@Hoopalytics): I'm the blasphemer here in that I really don't pay attention to one-number ratings at all. I'm not involved in recruiting or talent acquisition, so determining who is better really doesn't matter to me.
If I'm looking at a team, I want to know HOW they do WHAT they do well. Inevitably that drills down to individual players, but ratings have nothing to do with it. I can't recall a one-number metric ever coming up in conversation with a coach.
Four factors numbers, shot distribution, FG% by zone: those all tell you who to look at and what to look for on film. I'll admit indifference has led to ignorance regarding the metrics in question, so it is quite possible I'm missing an entire world of insight. If I'm sitting in a film session with a coach, how can I use one-number metrics to pinpoint specific things to look for?
Or, for this specific two-year-old question I have been unable to answer: is there a measure in use that captures this play? Thomas Bryant is a five-star, 6'10" kid at Huntington Prep who moves like he is 6'5". On one play in February '13, he kept an offensive rebound alive for two tips, allowing his guards to retreat. Then he jammed the outlet pass. Then he sprinted the floor to be the third defender back, preventing a layup from being attempted on the right block. After the ball was kicked out, the possession ended with a missed three-pointer that a teammate rebounded. He essentially made three potential stops on one possession but never touched the ball. I was at the game with my high school coach, and he asked how you measure that. I had no idea. I still don't. I'm an eager student if there is a way.
The underlying problem I have with trying to capture that play, and with player ratings in general, is that they are expressed in a ball-centric language. I think of the game as basket-centric, with 14 bodies in motion governed by different sets of rules. The ball is a unique body, on the level of a celestial sphere, but its movements are wholly a function of the other bodies and in relation to the baskets on either end.
I just have this sense that one-number player metrics are the Catholic Church circa 1542… It is also entirely possible my willful ignorance renders all of the above moot.
Clemens: I don't think anybody would say that one-number metrics are particularly useful for coaching, and they come with caveats for recruitment. But to answer your question about Thomas Bryant: yes, RAPM does that. RAPM is kind of the ultimate holistic measurement. It picks up literally everything that happens on the court and boils it down to one number. The problem being, of course, that you have no idea what happened on the court to produce that number. But my guess is that if Bryant frequently made those kinds of plays, he would have a good RAPM. A pure RAPM is completely divorced from the box score, so even if Bryant is a relatively weak box-score player, RAPM might still like him.
So I guess the one place where it might help a coach is in confirming what the eye test is telling you about the Nick Collisons of the world, and maybe even separating a Nick Collison kind of player from a hustle player who does not bring value (like, I don't know, Scalabrine or something). But that might already be obvious to coaches.
Walker: In an attempt to be a more reasonable human, I think I've begun to take a more Haberstroh-ian / Hollinger-ian approach to RPM and RAPM: using it to highlight, reward, and explain where team and player success comes from, in ways that might be difficult to do otherwise.
Example: in Derrick Rose's MVP season, the Bulls were something like 10th on offense and 1st on defense. One of the primary logical reasons that RAPM advocates and basketball-stat Twitter (or at least my own narrow vision of it) were against Rose for MVP was that his contributions appeared to be mostly offensive. Of course, defense is arguably more a "team" skill than a player skill, but for the voters, the best player on the best team was clearly Rose. RAPM favored Dirk, LeBron, CP3, and Dwight, I believe. On the Bulls, the strength by RAPM was split between Deng and Noah (Rose gets lumped back into that conversation once you throw in his box-score stats, which the 2011 pre-RPM numbers show). So from at least this one-number metric, we are enabled to better see a player's defense and therefore MUCH better understand his overall contributions.
Partnow: Thanks guys, this was great, at least if the readers were able to wade through the YAPM alphabet soup.