Nylon Calculus: Shot defense and separating metrics from actions
The consensus on how best to defend the 3-pointer has become pretty settled: don’t let your opponents shoot them. This is not a new concept; we know there is a lot of randomness in defensive 3-point percentage and that the best defenses generally limit attempts. However, there is still plenty of data available about shot defense and, unfortunately, not all of it is meaningful. We now have more than three years of SportVU data, and it reinforces the idea that the Defended FG% stat and the Difference stat displayed on the NBA.com stats pages don’t offer much useful information about a player’s individual defensive performance.
These statistics focus on the defensive player, showing the field goal percentage on all shot attempts where they were the closest defender, and then how that compares to the weighted average of the offensive players’ season-long field goal percentage from that area. If these statistics were capturing a measurable skill, we would expect to see a strong year-to-year correlation. That is to say, if a player is good at reducing an opponent’s 3-point percentage, we would expect that trend to appear across multiple seasons.
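To make those definitions concrete, here is a minimal sketch in Python with made-up shot-level records (the field layout is my own shorthand, not the NBA.com tracking feed):

```python
# Made-up shot records for one defender: (shooter, made, shooter's normal
# FG% from this area over the season). Numbers are illustrative only.
shots = [
    ("Shooter A", 1, 0.38),
    ("Shooter B", 0, 0.35),
    ("Shooter C", 0, 0.41),
    ("Shooter D", 1, 0.36),
]

attempts = len(shots)

# Defended FG%: what shooters actually made with this player as the closest defender.
defended_fg_pct = sum(made for _, made, _ in shots) / attempts

# Expected FG%: weighted average of what those shooters normally make from that area.
expected_fg_pct = sum(pct for _, _, pct in shots) / attempts

# Difference: negative means shooters hit less often than usual against this defender.
difference = defended_fg_pct - expected_fg_pct

print(f"Defended FG%: {defended_fg_pct:.3f}")
print(f"Expected FG%: {expected_fg_pct:.3f}")
print(f"Difference:   {difference:+.3f}")
```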
However, if we compare year-to-year correlations for player performance in these statistics, we can see that this is not the case.
On 3-pointers, there is essentially no year-to-year correlation for either the Defended FG% stat or the Difference stat. This means that a defender doesn’t really have the ability to control their Defended FG%. This is not to say that on any one particular shot attempt a defender couldn’t make the shot more difficult and force a miss. It simply means that even as the sample grows, the metric is dominated by randomness rather than by anything the defender is doing, making it essentially unusable as a measure of individual defense.
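For anyone who wants to run this kind of check themselves, here is a minimal sketch in Python, using a hypothetical per-player table in place of the real NBA.com player tracking data:

```python
import pandas as pd

# Hypothetical Defended 3P% by player and season; a real analysis would
# pull these values from the NBA.com player tracking pages.
df = pd.DataFrame({
    "player": ["A", "A", "B", "B", "C", "C", "D", "D"],
    "season": [2015, 2016, 2015, 2016, 2015, 2016, 2015, 2016],
    "def_3p_pct": [0.34, 0.38, 0.37, 0.33, 0.35, 0.36, 0.40, 0.31],
})

# One row per player, one column per season.
by_player = df.pivot(index="player", columns="season", values="def_3p_pct")

# Year-to-year correlation: a value near zero means last season's number
# tells you almost nothing about next season's, i.e. the stat is mostly noise.
r = by_player[2015].corr(by_player[2016])
print(f"year-to-year r = {r:.2f}")
```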
But what about 2-point shots? Are these shots mostly random as well? When is it appropriate to use these statistics as a measure of individual defense?
We can answer these questions by looking at the year-to-year correlations of the Defended FG% stats displayed on NBA.com. Because NBA.com’s shot-distance categories overlap (greater than 15 feet, less than 10 feet, less than 6 feet), I focused on non-overlapping ranges: 6-10 feet, 10-15 feet and 15 feet to the three-point line.
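One plausible way to build those exclusive ranges from the overlapping categories is straight subtraction of makes and attempts; here is a minimal sketch with made-up counts (the 10-15 foot bin would follow the same pattern using the 2-point totals):

```python
# Made-up (makes allowed, attempts) as closest defender in NBA.com's
# overlapping categories. The exact derivation used in the article is an
# assumption here; this just shows the arithmetic.
lt6 = (120, 210)     # less than 6 feet
lt10 = (150, 280)    # less than 10 feet
gt15 = (90, 240)     # greater than 15 feet (includes 3s)
threes = (60, 170)   # 3-pointers

def fg_pct(makes, attempts):
    return makes / attempts if attempts else float("nan")

# 6-10 feet = (<10 feet) minus (<6 feet)
fg_6_10 = fg_pct(lt10[0] - lt6[0], lt10[1] - lt6[1])

# 15 feet to the 3-point line = (>15 feet) minus 3-pointers
fg_15_to_3pt = fg_pct(gt15[0] - threes[0], gt15[1] - threes[1])

print(f"6-10 ft Defended FG%:      {fg_6_10:.3f}")
print(f"15 ft to 3PT Defended FG%: {fg_15_to_3pt:.3f}")
```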
There are a few observations to make. First, there appears to be decent year-to-year stability in the rim protection metrics. Interestingly, the year-to-year correlation is even stronger for the <6 feet Defended FG% stat. This indicates that players who hold shooters to a lower field goal percentage within six feet than they normally make tend to be consistent year-to-year. Basically, good rim protectors are likely to continue to be good rim protectors, barring injury or age-related decline.
Outside of six feet, everything appears to be mostly random: defenders don’t appear to have any control over their Defended FG%. This is a hard concept to grasp because it goes against our intuition that different defenders can have a better or worse effect on the shot. For example, if we see Kawhi Leonard guarding a shooter, we’d expect the shooter to have more difficulty making the shot over him than over, say, James Harden, all else being equal. The issue is that the statistic doesn’t capture all else being equal. It includes shots where Leonard is standing still, guarding a shooter who jab steps and fires. It includes shots where Leonard is closing out from a great distance on a catch-and-shoot attempt. It also includes shots where Leonard is guarding another player, perhaps with his back to the shooter, but happens to be the closest defender as recognized by the player tracking system.
Another point is that a defender can deter the shot by making sure the shooter isn’t comfortable shooting. As Johannes Becker explained here, NBA players have a comfort zone and will shoot if they feel comfortable with the amount of space they have to get the shot off. So as a defender, you have to try to crowd that space and make them feel uncomfortable about shooting. In that article, Johannes also found that a player’s comfort zone requires more space (distance between the shooter and defender) when there is a negative height differential (meaning the defender is taller than the shooter).
It’s not surprising to see that height is an important variable when it comes to defense, and it is one of the variables in Defensive Real Plus-Minus. Players need more space to shoot over taller defenders, and this is especially true at the rim, where there is a huge difference in field goal percentage based on height differential. This all brings us back to the rim protectors, who are usually consistently good defenders because they have some control over that aspect of the defense.
But where does that leave the non-rim protectors? The wing/guard defenders who aren’t protecting the rim and whose job it is to defend the perimeter? Do these players really have no control over whether the shooter makes the shot or not?
Again, it’s important to make a distinction between playing good defense and measuring good defense.
Perimeter defenders do affect whether the shooter makes the shot through their proximity. On average, players shoot worse as the distance between them and a defender decreases. The problem is that Defended FG% captures that along with about seven other variables. We can’t simply use Defended FG% to determine whether a player is a good defender or not. Rather, what we should be looking for is which players are consistently good at deterring shots, forcing turnovers and disrupting the offense.
One example of a more nuanced analysis is Matt Moore’s look at why the Spurs defense wasn’t as good as you would expect with Kawhi on the court. It’s a great examination of how teams are effectively avoiding Kawhi on defense and not allowing him to disrupt the offense. In a sense, though, that avoidance means he is still deterring shots.
One way to contrast this is to look at Jeff Green, who, in just 23.9 minutes per game, is defending five jump shots (greater than 15 feet) per game, while Kawhi, in almost 10 more minutes, is defending only 4.7 of these shots. However, if you were to look at their Defended FG% and Difference stats, you would see that Jeff Green has been better in both areas. We know Kawhi is a better defender and that players aren’t comfortable shooting against him, based on Matt Moore’s analysis of how teams are avoiding him.
On the other hand, it seems teams are perfectly fine taking jump shots against Jeff Green. This itself doesn’t necessarily mean Jeff Green is a bad defender. It just means that players are comfortable taking shots against him.
Perhaps an even better example is Steph Curry, who was constantly attacked on defense in the Finals. Whether Steph is a good defender or not was irrelevant in this case (the evidence seems to support that he was), because the Cavs perceived him as a weakness they could attack. Players were comfortable shooting against Steph. Regardless of whether the Cavs shot better or worse when defended by Steph, he did not deter those shots, which, in a way, is not good defense. Perception became reality, as Steph was considered to be a poor defender in the Finals.
So should we distinguish good defenders by how many shots they allow? That isn’t necessarily a good way to evaluate either, because a player can be marked as the closest defender without actually being responsible for guarding that man.
So what are we looking for? Theoretically, we want defenders who will disrupt the offensive player’s comfort zone. A player does that by getting up in the opposing player’s airspace and potentially creating turnovers. This is why we generally perceive aggressive players on defense to be good defenders (and why steal rate may be an important statistic as a proxy for evaluating defense).
So this leaves us about where we started. Defense is hard to measure, and the best approach uses multiple statistics and considers the context in which the player is used. However, most of the player tracking Defended FG% statistics should be left out of the defensive evaluation toolbox. Unless you’re looking at shots around the basket, these statistics show no year-to-year stability, and it’s hard to tease the signal out of the noise.