“The only thing that I know is that I know nothing.”
The above quote is famously known as the Socratic Paradox, recounted from Plato’s writings. While I’m fairly certain that neither Plato nor Socrates spent much of their waking hours poring over NBA draft prospects, the old adage applies here all the same. The draft makes fools of us all.
But that doesn’t mean we should blindly throw darts at the board either. Statistical models can help clarify some of the haze surrounding young prospects every year, as a small but still important piece of the overall player evaluation puzzle. Over the years, various public draft models have entered the digital basketball-verse, from Steve Shea’s College Prospect Rankings to Nylon’s own Andrew Johnson’s P-AWS model. And different models can communicate different critical observations. The key to gaining intuition from a draft model, then, is to be intimately familiar with the strengths, weaknesses, and design choices of the system, and as such, I set about building my own.
With some advice from Andrew himself, a little data help from Will Schreefer, and the incomparable Basketball Reference and RealGM, I spent the last month or so chipping away at the complex world of draft modeling. After several iterations, I’m happy to release the first “functional” build of LUCARIO: Layered Upside Calibrated Aggregation of Regression Informed Outcomes. Yes, I’m just as surprised that I was able to make the acronym work, but eight hours solo in a car can do wonders for one’s creativity.
Motivating a new model
There were several goals I laid out before I started building LUCARIO that directly informed my model design choices. Chief among them was interpretability and deconstruct-ability. Draft models are typically released as rankings of one-number results. And while the ultimate goal is to have some sort of ordered draft board, from discussing it with other analysts and fellow draftniks, it became clear that the majority of the value of a statistical system comes not from the endpoint, but from the process and the constituent steps along the way, from acknowledging our own internal biases. As such, LUCARIO was built with two fundamental principles in mind:
- It needs to offer value beyond simply an endpoint number. A black box algorithm or a mashup of a hundred neural networks would be of little use. A good model should be intuitively designed and easily deconstructed and explained. We should be able to quickly process how certain variables are interacting and why a prospect is rated the way they are. We need to be able to get a more granular understanding of the components that make up a player’s value (i.e. splitting offense and defense). It should allow us to challenge as well as augment traditional scouting.
- A good model should be adaptive. While a single “final” ranking is necessary, the model should offer a flexible range of outcomes. Incorporating this is critical to understanding how coaching and development affect a player’s growth and value. The reason the draft is considered such a crapshoot is that we have a tendency to talk in absolutes when discussing prospects. “A player will be X” instead of “A player could be Y under circumstances M and N.” Sean Derenthal of The Stepien began work toward this end when he built a tool to examine just that, and his work in steering the conversation toward a distribution of outcomes was a key inspiration for LUCARIO.
Preparing the input and setting the output
As I didn’t feel properly equipped to evaluate international prospects, I limited the first version of the model to NCAA players only, with the training data set composed of all drafted NCAA players from 2005-2015. LUCARIO takes in an input array of 60+ features: a combination of demographic and biographic data (position, height, wingspan, age, etc.), per-36 minute data, advanced stats (3-point attempt rate, usage, win shares, etc.), and interaction variables (assist-to-turnover ratio, body density, assists-times-rebounds, etc.). I decided to use both per-36 and advanced versions of metrics as doing so provided more stability to the model, as confirmed through validation and partial dependence tests. All numerical variables were also first transformed to be relative to their positional means before being input into the base models (more on the base models in a bit).
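As a toy illustration of that positional adjustment (the column names and numbers here are invented for the example, not LUCARIO’s actual schema), a pandas groupby-and-transform does the job:

```python
import pandas as pd

# Toy prospect table; players, positions, and stats are illustrative only.
df = pd.DataFrame({
    "player":    ["A", "B", "C", "D"],
    "position":  ["G", "G", "F", "F"],
    "trb_per36": [4.0, 6.0, 9.0, 7.0],
})

# Re-express each stat relative to its positional mean, as described above:
# a forward's 7 boards and a guard's 6 boards mean very different things raw.
df["trb_rel"] = df["trb_per36"] - df.groupby("position")["trb_per36"].transform("mean")
print(df["trb_rel"].tolist())  # [-1.0, 1.0, 1.0, -1.0]
```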
For the target variable, there were various outputs I could have chosen, but I decided to build this first version on third-year Offensive and Defensive Box Plus Minus. This is a critical area for future improvement, but for an initial version, I was satisfied with using Basketball-Reference’s popular and recognizable (plus historically maintained) player value metric. I decided to use a third-year snapshot rather than a peak as I wanted to target the player’s value while he was still on his rookie contract. Furthermore, both target variables demonstrated relatively little skewness after removing some sparse outliers.
Before starting, I wanted to build a little intuition on the base data set. Some quick CART (Classification and Regression Tree) analysis showed, unsurprisingly, that offensive production, creation ability, and efficiency were some of the most important skills for projecting offense. On the defensive side, athleticism, rebounding, steals, and blocks were the key indicators. Sounds about right!
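For readers curious what that kind of CART sanity check looks like, here’s a minimal sketch on synthetic data (the feature names and target are invented stand-ins, not my actual data set), using scikit-learn’s decision tree and its built-in feature importances:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
n = 500
# Invented stand-ins for college stats; the target is a toy "offense" score.
usage = rng.normal(size=n)
efficiency = rng.normal(size=n)
noise = rng.normal(size=n)          # a deliberately irrelevant feature
X = np.column_stack([usage, efficiency, noise])
y = 2.0 * usage + 1.0 * efficiency + 0.1 * rng.normal(size=n)

# A shallow tree is enough to surface which inputs drive the target.
tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, y)
for name, imp in zip(["usage", "efficiency", "noise"], tree.feature_importances_):
    print(f"{name}: {imp:.2f}")     # usage should dominate, noise should be near 0
```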
I set up LUCARIO as a multi-stage stacked ensemble regressor, while being careful not to go overboard with the chaining.
As shown above, I used four base models, two linear regression methods and two boosted tree methods. OLS is a fairly basic linear regression that minimizes a simple residual sum of squares, whereas elastic net is a linear regression with both L1 and L2 regularization, aimed at minimizing the effects of multicollinearity and potential overfitting. The boosted decision trees are where things get really fun. Boosting is the process of taking consecutive weak learners (in this case, smaller decision trees) and iterating over them in order to cyclically reduce the residual error at each iteration, thereby forming a weighted ensemble of weak learners in order to produce one strong learner.
A tree-based model allows us to learn more complex interactions and is additionally more robust to overfitting, with generally greater out-of-sample predictive power than typical single-stage regressions. Whereas the GBDT is the standard implementation from Python’s omnipotent scikit-learn package, XGBoost is a more powerful implementation that introduces additional features such as regularization, greater hyperparameter flexibility (hyperparameters are customizable aspects such as the maximum depth of a decision tree), and greater precision.
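To make the stage-1 setup concrete, here’s a stripped-down sketch of the base-learner layer in scikit-learn (XGBoost is swapped out to keep the example dependency-light, the data is synthetic, and in a real stacking setup the stage-1 predictions should come from held-out folds rather than in-sample fits to avoid leakage):

```python
import numpy as np
from sklearn.linear_model import LinearRegression, ElasticNet
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))                             # stand-in for the 60+ features
y = X[:, 0] - 0.5 * X[:, 1] + 0.3 * rng.normal(size=200)   # toy O-BPM-style target

base_models = {
    "ols":  LinearRegression(),                         # plain residual-sum-of-squares fit
    "enet": ElasticNet(alpha=0.1),                      # adds L1 + L2 regularization
    "gbdt": GradientBoostingRegressor(random_state=0),  # boosted weak learners
}

# Stage-1 output: one prediction column per base model, fed to the next stage.
stage1 = np.column_stack([m.fit(X, y).predict(X) for m in base_models.values()])
print(stage1.shape)  # (200, 3)
```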
At this stage, I also included the prospects’ RSCI rankings as a fifth “model” to help capture some effects of traditional scouting at the high school level (inspired by Andrew Johnson’s modeling process), with missing values imputed. With the five base models tuned, I fed the resultant outputs into a random forest with 1000 trees, with the weighting of each constituent model determined from 10-fold cross-validation. The random forest helps achieve goal No. 2, allowing us to grab the distribution of results from all 1000 trees. Whereas XGBoost is a boosting algorithm, random forests take advantage of a technique known as bagging. As explained nicely in this article, think of random forests this way: you’re trying to figure out where you want to travel, so you poll a thousand of your friends for their opinions. Each friend knows a little something about you, and each of them has a slightly different decision-making process. From all their opinions, you can put together a weighted understanding of what the best recommendation might be.
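Here’s a minimal sketch of that stage-2 idea on synthetic data (pretend the five columns are the tuned base-model outputs plus RSCI; the weights and noise are made up). The useful part is that each tree in the forest can be queried individually, which is what makes a full distribution of outcomes available rather than a single point estimate:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# Stand-in for stage-1 outputs: one column per base model / RSCI "model".
stage1 = rng.normal(size=(300, 5))
y = stage1 @ np.array([0.4, 0.3, 0.15, 0.1, 0.05]) + 0.2 * rng.normal(size=300)

rf = RandomForestRegressor(n_estimators=1000, random_state=0).fit(stage1, y)

# Each of the 1000 bagged trees votes separately, so for a new prospect we
# get a distribution of projections instead of one number.
x_new = rng.normal(size=(1, 5))
per_tree = np.array([t.predict(x_new)[0] for t in rf.estimators_])
print(per_tree.mean(), per_tree.std(), len(per_tree))
```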
Stages 3 and 4 are about condensing the random forest outputs into a few endpoint numbers. In an Ode to Oden podcast (a great listen, especially on morning runs) with Ben Falk, Sean Derenthal discussed the importance of being transparent with yourself about whether you’re aiming for upside or for floor, and made the case for prioritizing the former. Taking those comments to heart, I set up LUCARIO’s final outputs to point closer to a prospect’s upper bound rather than their probabilistic expectation. Accepting a player’s EV as their eventual outcome before they’re even drafted felt like a slightly self-defeating mentality.
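One simple way to lean toward the upper bound, shown here for illustration only (the 80th percentile is my arbitrary choice for the sketch, not LUCARIO’s actual condensation rule, and the numbers are toy values), is to summarize the tree-level distribution with a high percentile rather than its mean:

```python
import numpy as np

# Hypothetical per-tree projections of third-year BPM for one prospect
# (toy numbers, not real model output).
tree_preds = np.random.default_rng(1).normal(loc=1.0, scale=2.0, size=1000)

expected_value = tree_preds.mean()       # the probabilistic expectation (EV)
upside = np.percentile(tree_preds, 80)   # an upper-bound-leaning summary

print(round(expected_value, 2), round(upside, 2))
```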
LUCARIO results and takeaways
Putting all of the above together, below is an example of how one prospect’s final projection comes together, in this sample case for DeAndre Ayton of Arizona.
It’s apparent how different base models can treat one observation in starkly different ways, and therefore how important it is to be able to combine and understand each of those middleman outputs. But remember, LUCARIO is not simply about offering a single rating endpoint. It’s about the process of getting there, which is why the below slide is so much more fun: a sample prospect card for Jaren Jackson Jr., assembled from LUCARIO’s inputs and processes.
Jackson has been rightly touted as perhaps the best defensive prospect in his class and among the players best suited for the modern game. He’s got great measurables (the percentile rankings are out of both the training and testing data sets), showed an ability to stroke it from deep in college, and the conversation surrounding JJJ doesn’t do justice to some of the eye-popping plays he flashed on offense last year.
His projections are being dragged down by two key areas: limited usage and his penchant for foul trouble. He didn’t get to flex into as large a role on offense as a DeAndre Ayton, who’s more heralded on that end; however, with his promising ball-handling and shooting ability, it would be no surprise to see Jackson make the more seamless transition. And where the fouls are concerned, to a certain degree, fouls for a young player show a willingness to take risks and be aggressive, a positive sign which suggests that Jackson might be getting unfairly penalized for his foul rate in college. If you believe that those negatives can be corrected, then you’re betting on Jackson’s very real potential to be the best player in this class.
That’s high praise, especially in a draft class which is saturated at the top with talented bigs (you didn’t think I’d leave this without an overall board, did you?).
There’s a lot to unpack there (and there are probably a few withdrawn prospects that I didn’t catch, including Jontay Porter, who it just felt wrong to leave off), but I’m going to go through just a couple of the main points (and let’s all agree to talk about Allonzo Trier’s rank at a later time). The first thing that will stick out is, obviously, the improbably high rankings for Gary Clark and Ajdin Penava, two big men who are currently being talked about as a second-rounder and an undrafted free agent, respectively. While I will be the first to say that, no, I do not believe they will be top-10 players in this class, I do think it’s worth examining the strength of their college production and couching their sleeper potential in that light. While there are legitimate concerns about whether great college numbers will translate to the NBA level for players like Clark (and Jalen Brunson), those incredibly productive resumes still get rewarded here.
Second, you may have noticed that Miles Bridges is ranked No. 37 overall. Bridges is a great case study in evaluating players whom LUCARIO views as “one-way” prospects. Bridges has star equity on offense, but projects to struggle out of the gate on defense, just like Lauri Markkanen last year. Based on his offensive potential alone, Bridges should probably move much further up a final draft board, but it’s critical to again consider the context around the ranking. That also ties into a corollary regarding “role” players, a label which gets a bad rap. For example, De’Anthony Melton (one of my favorite prospects in this draft) is ranked No. 17 by upside value, but No. 9 by expected value. He projects as a very good role player with a high floor, which is certainly far from a bad thing, especially once we start moving down from the elite prospects at the top.
It’s insane to think that in LUCARIO’s top 10, there’s only one guard (Trae Young) and one wing (Zhaire Smith), but that jibes with the general expectations surrounding this class: that it is dominated at the top by the big men. But prospects like Kevin Huerter, Shai Gilgeous-Alexander, and Jarred Vanderbilt all display promise. Again, it’s not about the final ratings and rankings; it’s about the process of getting there.
Now, I do want to close this out with a quick discussion of what I’ve picked up about LUCARIO’s performance and biases from some testing, including on last year’s rookie class. While the one-year results are certainly too early to declare the case closed, LUCARIO nevertheless shows a lot of promise when compared against the draft order. By avoiding debilitating potential busts like Malik Monk, the model beat the draft order in the Top 12 by a mean error of 4.5 spots from their BPM rank and was 0.6 spots of mean error better than the draft order overall (in three out-of-sample tests, the mean error has ranged from 0.6 to 1.5 spots better than the draft order).
Digging a little deeper, Ben Simmons and Lonzo Ball were identified as the consensus top two players in the class, a tier above the rest. Of course, Simmons made First Team All-Rookie and Ball was on the Second Team. LUCARIO also did well in picking up two of the most productive rookie bigs, Bam Adebayo (model rank No. 3) and John Collins (model rank No. 9), who became starters for their respective teams. OG Anunoby came in ranked No. 7. He grew into a no-doubt steal for the Raptors last year as he became the rarest of commodities, an impact two-way wing. LUCARIO also did well identifying potential steals near the bottom of the draft order like Sindarius Thornwell (model rank No. 21) and Sterling Brown (model rank No. 27). On the flip side, two of the biggest misses were Donovan Mitchell, who was ranked No. 32, and Lauri Markkanen, who was ranked No. 33. Of course, both went on to make First Team All-Rookie. However, as noted earlier, Markkanen was given star equity on offense, and his rating was dragged down by his defensive projection, which, well, played out in reality in a similar fashion, just to varying degrees.
So, what does this tell us about LUCARIO’s tendencies and biases? The model has shown a proclivity for projecting defensive upside. It identified Simmons, Ball, Jonathan Isaac, and Anunoby as some of the strongest defenders in the 2017 class. It has also proven stronger at projecting big men than wings or guards. There is still further work needed on designing the target variable and understanding the proper balance between offensive and defensive contributions. Additionally, LUCARIO’s treatment of sleepers and depth players functions as a double-edged sword of sorts. While its ability to identify that depth greatly outperforms the draft order, it also means the model has a tendency to overvalue rotation-level players at the expense of other heuristically star-level guys.
But with that said, if a model doesn’t make you ask “wait, what” multiple times over and only serves to reaffirm pre-existing beliefs, then what’s the point of the whole exercise?