Deep Dives – Rim Protection Metrics Revisited

January 20, 2014; Oakland, CA, USA; Golden State Warriors shooting guard Klay Thompson (11) shoots the ball in front of center Andrew Bogut (12) and Indiana Pacers center Roy Hibbert (55) during the first quarter at Oracle Arena. Mandatory Credit: Kyle Terada-USA TODAY Sports

For as much work as has gone into it to this point, analysis[1. Emphatically including my own.] of defensive metrics, including those measuring rim protection, is in dire need of improvement. As with many issues involving cataloging, categorizing and rating defense, the problem is in the counting. There's a common acronym in statistical analysis: GIGO, meaning Garbage In, Garbage Out. The analysis is only as good as the underlying data. Reducing the amount of garbage-like analysis by providing broader access to better, more informative and more accurate data is a big part of the reason we've started publishing tutorials like Savvas' excellent Python series.

I talk about this a lot, but the single best thing about writing at Nylon Calculus is the behind-the-scenes access to a number of really smart people with a wide range of technical skills that go well beyond basketball metrics. When I first developed the concept of Rim Protection at my old site[1. Offense no longer happens there…], I didn't know much about where to get or how to quickly clean NBA data. Still, I think the underlying concept was solid: attempt to estimate the value of rim defense by looking at points not scored by the opposition. Even so, I knew there were a bunch of things I wasn't really accounting for, because even in the age of SportVU, a lot of the necessary data simply wasn't available to me.

The primary thing I was forced to dance around was any notion of a deterrent factor. Even with the best rim defenders in the league[1. Though the best rim defenders allow percentages in the low 40s, once the possibility of shooting fouls and offensive rebounds is accounted for, it would still be hard to construct a great defense by giving the opposition free rein to attack even Rudy Gobert at the basket.], not allowing shots at the rim in the first place remains the best plan. In original iterations of the metric, I accounted for this with some team adjustments to deal with pace and porousness[2. Without these adjustments, a player like Spencer Hawes in his time in Philadelphia in 2013/14 looked like a monster rim protector largely because the 76ers played so fast and so poorly on defense that he had the opportunity to "defend" the rim much more frequently than almost any other player in the league.]. However, given the way NBA.com calculates on/off opponent shooting stats, this was highly imperfect.[3. For a few reasons, the first being that those counts appear to be based on prorated proportions of team totals determined by playing time. This is probably a good first pass, but it makes it very difficult to pick out differences in the shots faced from various locations when different players are and aren't on the floor. The data was available from places like play-by-play logs, which allowed Evan Zamir and others to do some excellent on/off analysis, but more exact counts were hard to find in the form I needed them.] In fact, the overwhelming majority of players were credited with "allowing" a number of close attempts within a very narrow band around the team average. Working from memory, around 95% of the league was within +/- 1 attempt per 48 minutes of floor time of the team's average. I didn't really believe that to be the case, but without better info, what else was I to do?[5. Believe my lying eyes, as it turns out…]
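To make the counting problem concrete, here is a minimal sketch (in the spirit of Savvas' Python series) of why prorating team totals by minutes pins everyone near the team's per-48 rate, while true on/off counts are free to diverge. Every number and variable name below is invented for illustration; this is not NBA.com's actual calculation.

```python
# Hypothetical illustration: prorating team totals by minutes pins every player
# near the team's per-48 rate, while actual on/off counts can differ sharply.
# All numbers here are made up, not real team or player data.

team_rim_attempts_allowed = 2400      # opponent shots within 5 feet faced by the team
team_minutes = 48 * 82                # total team minutes over a season

player_minutes_on = 2000              # minutes our hypothetical big man played

# Prorated estimate: rim attempts "faced" in proportion to minutes played
prorated_on = team_rim_attempts_allowed * player_minutes_on / team_minutes
prorated_per48 = prorated_on / player_minutes_on * 48      # collapses to the team rate

# True on/off count from play-by-play: attempts that actually came with him on the floor
actual_on = 1400
actual_per48 = actual_on / player_minutes_on * 48

print(f"prorated: {prorated_per48:.1f} rim attempts allowed per 48")   # ~29.3
print(f"actual:   {actual_per48:.1f} rim attempts allowed per 48")     # ~33.6
```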

Which brings me back to today, where thanks to some great effort to compile data from various sources[1. The reason I continuously refer to Darryl Blackport as the hidden MVP of this website is that he does so much of this very involved back-end work, which allows people like me to write bright and shiny articles and get all the credit even though he did the more difficult and arguably more important part of the job.] I now have much more accurate counts of the underlying information, including opponents' rim attempts. These totals are more accurate both because they come from actual on/off floor information and because the shot distances match SportVU apples-to-apples.[3. For whatever reason, NBA.com's data has around 20% more shots attempted from within five feet of the basket than SportVU does. Since the entirety of the defensive data set comes from SportVU, I'm going with those numbers for shot attempts as well because fidelity!]

The primary upshot of this improvement is that it allows a much more accurate look at the on/off splits for various players in terms of close attempts allowed by their team when they are on and off the court:

As the above shows, some teams faced some very interesting decisions in terms of rim deterrence based on lineup[8. Or in the case of Utah, personnel] choices. Additionally, splits like those go some way toward illustrating one of the major quibbles with floor-time and lineup-based regression modeling. How do you properly assign credit to Rudy Gobert and blame to Enes Kanter for Utah's vastly improved ability to keep opponents out of the paint with the Stifle Tower on the floor? With players who tend to be like-for-like positional and lineup replacements, discerning the extent to which one is really good or the other is really bad is always difficult[7. I've referred to this as "The Perk Effect" before, noting Nick Collison's status as an RAPM darling can't be hurt by being the guy who came into the game for post-Boston Kendrick Perkins. Good things seem to happen when he's on the floor. I wonder why…].
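For readers who want to follow along, here is a hedged sketch of how a split like that could be computed from shot-level data, assuming rows that already carry shot distance and the defenders on the floor. The field names, players and toy numbers are placeholders, not the actual pipeline behind the chart.

```python
# A minimal sketch of computing a player's on/off split in close attempts allowed,
# assuming shot-level rows with distance and the defenders on the floor.
# Field names, players and numbers are hypothetical placeholders.

def rim_attempts_on_off_per48(shots, player, minutes_on, minutes_off):
    """Opponent attempts within 5 feet per 48 minutes, with the player on vs. off."""
    on = sum(1 for s in shots if s["dist_ft"] <= 5 and player in s["defenders"])
    off = sum(1 for s in shots if s["dist_ft"] <= 5 and player not in s["defenders"])
    return on / minutes_on * 48, off / minutes_off * 48

shots = [
    {"dist_ft": 3.1, "defenders": {"Gobert", "Favors"}},
    {"dist_ft": 22.4, "defenders": {"Kanter", "Favors"}},
    {"dist_ft": 4.6, "defenders": {"Kanter", "Favors"}},
    {"dist_ft": 2.8, "defenders": {"Kanter", "Favors"}},
]

on_rate, off_rate = rim_attempts_on_off_per48(shots, "Gobert", minutes_on=24, minutes_off=24)
print(on_rate, off_rate)   # 2.0 vs. 4.0 close attempts allowed per 48 in this toy example
```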

Of perhaps more practical import is the discovery that the difference in rim protection offered by players primarily playing as centers and those appearing mostly at power forward is far larger than my initial analysis indicated. I've always noted some degree of disparity, and a logical one, as centers tend to guard bigger, slower opponents, thus leaving them closer to the basket to, uhm, defend the rim. However, with the more accurate shot counts, the degree of this difference was further illuminated. While centers as a whole contest around 60% of all opponent shots at the rim[9. "Shots at the rim" refers to shots taken within 5 feet of the basket per SportVU, to mirror the description of "contest" provided by SportVU.], power forwards were only present around 42% of the time. Similarly, centers were better at forcing misses both as primary defenders and as help defenders[10. The public data isn't quite good enough to account 100% for which contests are primary, where the player in question is the closest defender to the shooter. Rough estimation suggests shots contested by one defender go in around 60% of the time, while two or more defenders drop that percentage to around 40%.].
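As a rough illustration of the two quantities in play, contest rate and FG% allowed on contests, here is a short sketch using invented rows that merely echo the roughly 60% vs. 42% split described above; none of these are real player totals.

```python
# A hedged sketch of the contest-rate and contest-FG% comparison by position,
# assuming a per-player table of rim attempts faced while on the floor, attempts
# contested, and makes allowed on those contests. Rows are invented examples.

rows = [
    # player, position, rim attempts faced, contested, makes allowed on contests
    ("Center A",  "C",  600, 360, 160),
    ("Forward B", "PF", 600, 250, 130),
]

for player, pos, faced, contested, makes in rows:
    contest_rate = contested / faced     # share of opponent rim attempts contested
    fg_allowed = makes / contested       # FG% allowed on contested rim attempts
    print(f"{player} ({pos}): contest rate {contest_rate:.0%}, FG% allowed {fg_allowed:.0%}")
```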

Given that the two main factors in the metric are number of contests[1. Or contest% to account for variations in the opportunities faced by different players] and percentage allowed on those contests, it is not surprising that the average center is over a full bucket better per 36 minutes than the average power forward[11. A quick refresher on the methodology: PS = Expected Points – Points Allowed. Expected points is another place I've made a change, as I found my original estimate of 80% for "uncontested" shots at the rim to be too low; per SportVU data, this number is closer to 90%. Adjusted Points Saved includes a, yes, adjustment for whether the player faced a greater or smaller than average number of opponents' close attempts relative to floor time.]:
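For clarity, here is a bare-bones sketch of the Points Saved arithmetic spelled out in the footnote, with hypothetical inputs and without the attempts-faced adjustment that goes into Adjusted Points Saved.

```python
# A rough sketch of the Points Saved arithmetic from the footnote, under the
# assumption that "expected points" means the contested rim attempts converted
# at the ~90% uncontested rate. Inputs are hypothetical; this omits the
# attempts-faced adjustment used for Adjusted Points Saved.

UNCONTESTED_RIM_FG = 0.90   # per the article, uncontested rim shots go in ~90% of the time

def points_saved_per36(contests, makes_allowed, minutes):
    expected_points = contests * UNCONTESTED_RIM_FG * 2   # what those shots "should" yield
    points_allowed = makes_allowed * 2                     # what they actually yielded
    return (expected_points - points_allowed) / minutes * 36

# e.g. 300 contested rim attempts, 135 makes allowed (45%), over 1,800 minutes
print(points_saved_per36(300, 135, 1800))   # 5.4 points saved per 36 minutes
```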

When you get to the end of the article at the full list of players, you might notice another change – I'm no longer displaying the "saved over average" column I had used before to measure big man rim defense, because combining power forwards and centers into the same analysis no longer seems tenable given the obvious differences in defensive roles in the modern NBA. With so few teams playing a "traditional" 3-around-2 style, this is probably to be expected, as only one big at a time is really left to patrol the paint.

Unfortunately, this presents one new difficulty: the "is that good?" question. Kenneth Faried saved 4.1 points per 36 minutes last year. Without a frame of reference, saving two baskets sounds like it could be a good thing, until you realize it's almost three full points worse (per 36 minutes) than the average power forward[1. In Faried's case it might be even worse, or possibly slightly better than that, depending on the amount of time he spent as a small-ball five or a monster-ball three. An "expected" average based on the amount of time spent at each position might be an improvement. However, good "position estimate" data is hard to come by. For example, Basketball Reference appears to have assigned Harrison Barnes at the 4 and Draymond Green at the 3 when they shared the floor, while listing Ryan Anderson as the center and Anthony Davis as the power forward in their shared court time.]. The position averages above will have to act as a rough guide to where a player falls in the "good or not" spectrum of rim protectors relative to his positional peers.
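In code, that "relative to positional peers" reading looks something like the sketch below; the positional baselines are assumed placeholders rather than the published averages, and Faried's 4.1 is the only figure taken from the article.

```python
# A small sketch of reading Points Saved against a positional baseline.
# The averages below are assumed placeholders, not the real published table.

POSITION_AVG_PS36 = {"PF": 7.0, "C": 9.0}   # illustrative only

def saved_vs_position(ps_per36, position):
    return ps_per36 - POSITION_AVG_PS36[position]

print(round(saved_vs_position(4.1, "PF"), 1))   # -2.9: almost three points below the average PF
```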

Without further ado, here are some revamped rim protection numbers for big men leaguewide in 2014/15: