Finding the Value of NBA Draft Picks
As we head into the NBA season later this month, fans are rightfully focused on how their teams will perform on the court this year. A few significant off-season moves have made things interesting heading into the season. Some of the headliners for me are how DeMar DeRozan fits on court with the Kings, how the new-look Knicks will look with Mikal Bridges and Karl-Anthony Towns, whether the Timberwolves get enough out of Donte DiVincenzo and Naz Reid to not miss KAT, how effective the Warriors off-season moves were in replacing Klay Thompson, and how much growth there will be in OKC and Houston, two of the most fun young teams to watch in my personal view.
At the same time, I also want to stay cognizant of what’s on the horizon (as NBA teams must and some fanbases certainly will be). Next June will see an incredibly talented group of young prospects join the league, led by Duke freshman Cooper Flagg and a couple of Rutgers (!) freshmen, Ace Bailey and Dylan Harper. While these players won’t contribute to a team this season, they’re inevitably going to impact what happens on the floor, especially later in the year as teams jockey for lottery odds and draft position.
With that in mind, let’s jump headfirst into a project I’ve been thinking about for some time: analytically assessing the value of different draft picks.
As NFL draft fans may recognize, this is something that’s been done for quite some time in pro football circles. In the 1990s, then Cowboys head coach Jimmy Johnson developed a trade value chart that has become the most widely known trade value chart in football. The concept was simple—assign a value to each draft pick so that front office personnel could evaluate potential trades more easily. While Johnson’s draft value chart was in a sense pioneering, it is now generally considered outdated due to subsequent CBA changes, and it wasn’t obviously based on analytical principles in the first place. As a result, many teams have developed their own draft value charts (which aren’t public), and some league observers/advisors have developed more analytically driven charts. For example, Chase Stuart developed a stat-based draft value chart using Pro Football Reference’s “Approximate Value” stat, a relatively blunt all-in-one stat designed to capture a player’s contributions to points scored/allowed. Kevin Meers published a draft value chart based on similar analysis. Jason Fitzgerald and Brad Spielberger also developed a draft value chart based on NFL contract values and performance.
Some NBA watchers have developed draft value charts of their own, though they’re not widely referenced. For example, ESPN’s Kevin Pelton put together an NBA draft value chart based on player performance at various draft pick slots over the course of their four seasons in the NBA (roughly, the player’s rookie contract) and the subsequent five-year period using his wins above replacement metric. Others have also developed draft value analyses relying on Basketball Reference’s win shares statistic (for example, here, here, and here). I’ve also seen an analysis using value over replacement player (VORP) from a couple years ago. But for a variety of reasons, I wanted to try to tackle this question using one of my preferred all-in-one metrics, Dunks and Threes Estimated Plus-Minus (EPM), which no one has used as far as I can tell.
For starters, EPM is arguably the most accurate modern all-in-one metric in terms of its ability to predict team performance. It’s also not solely reliant upon box score statistics (such as points, rebounds, assists, steals, blocks, and shooting percentages), which may not be sufficiently detailed. I also wanted to look at pick values based on the outcomes that could have happened rather than focusing solely on who was picked in each slot, as many of the draft value analyses I’ve seen were skewed by a great player somehow falling in the draft (Nikola Jokic getting drafted at #41 overall shouldn’t make that pick decidedly more valuable than pick #40 after all, as the team picking #40 could have picked him). In addition, I was looking for an evaluation methodology that could be carried forward without artificially limiting a player’s production based on semi-artificial windows, like the player’s first four or five years. While those kinds of limitations can be defensible—four years is the length of first round rookie contracts when you include team options, and teams can hold restricted free agency over those players in their fifth year, too—a significant proportion of productive players are going to re-sign with the team that drafted them anyway. This dynamic is especially true for players who show All-NBA or All-Star level productivity early on in their careers, such that the value of those players can be underrated.
With a ton of help from my good friend Alex Takakuwa, whose math and programming experience far surpass mine, I set out to try to pull together a draft value chart.
Just so it’s easier to visualize from the start, I’ve included the draft value chart we developed below. The next few sections of the post will go into (gory) detail on how we created the chart, the rationale for our analytical decisions, and the potential issues with our approach. Then, I’ll follow up with some observations from the chart and our analysis that stood out.
Creating the Draft Value Chart
This section details the process we went through to develop the draft value chart above. If anything is unclear or unexplained, feel free to reach out to me for more details.
The Data & Why We Chose EPM
To conduct the analysis, I first needed to identify sources of information for past NBA draft picks and EPM. The draft pick history was easy—Basketball Reference has downloadable spreadsheets with picks by year. EPM, on the other hand, is a relatively new metric developed by Dunks & Threes (dunksandthrees.com) that incorporates possession-based stats derived from play-by-play data and player-tracking stats from stats.NBA.com. That modern data has not always been available, so we could only obtain league-wide EPM data from 2014 through last season (2023-24).
There are, of course, numerous all-in-one metrics available for assessing player performance, such as PER and its derivatives; Box Plus/Minus (BPM) and its derivatives like Value Over Replacement Player (VORP); Win Shares; RAPTOR; and net rating to name some. They all have advantages and disadvantages, but I personally favor EPM for a few reasons: it’s been shown to be the most predictive (as I mentioned), it attempts to account for teammate/opponent quality, it’s not wholly reliant on counting stats that can’t really capture defensive performance with precision, and, subjectively, it seems to have fewer inexplicable outliers than most of the other metrics.
With that said, using EPM data for this analysis comes with several major trade-offs. It is very time-limited (just 11 seasons of data), there’s no easy way for me to completely understand how the metric is determined given it’s based on a proprietary regression model, and a handful of players—especially on the lower-minutes end—don’t have EPM data available. I’m also not aware of EPM data existing for playoff games, which is a bummer. Regardless, it’s still worth using in my view. The time limitations aren’t that problematic for this exercise, and it’s not obvious that draft tendencies from 30+ years ago—or even 20+ years ago—are all that relevant. I’m not realistically able to verify every metric anyway, and most other all-in-one metrics have missing data, use different approaches across different time periods, or are subject to major swings for players with few minutes played. Sufficient playoff data is also not going to be available for a huge proportion of players to draw meaningful conclusions, though that problem may not be evenly distributed amongst all types of players. But ultimately we can live with the concerns either way.
Initial Approach & Process
This section describes the process that Alex and I used to create the draft value chart. Where we made significant process decisions, I’ll flag them and explain why. For those that want just a cursory understanding of the process, I’ve bolded the major points below.
First, we chose to convert EPM—which is, in effect, a rate-based efficiency statistic—into a new metric that could be summed. We called these “EPM units” and I’ll describe the process further below. The decision and process for this warrant explanation.
To start, EPM as a metric can be positive or negative. Positive values are generally associated with better players, but because of the way the metric is designed, most players will have a negative EPM. For example, the median EPM last season—which describes a middle of the pack player—was about -1.7 (Kris Murray or Haywood Highsmith). At the same time, Coby White’s EPM of 0.0 was in the 70th percentile of all players. In other words, you can be a pretty good player and still throw up an EPM that’s around 0 or slightly negative. We wanted to avoid having players with negative EPMs show up as negative contributors, though. This was for two reasons: first, below average players still contribute to a team, and second, roughly half the players on the court at any given time will have a negative EPM by the nature of the stat—yet they are on the court presumably because they are better than whatever potential replacement players are available.
The second issue is one that affects all NBA metrics that describe efficiency: EPM does not directly account for how much a player actually plays. Said another way, a guy who plays with high efficiency over just a handful of games can appear to be a “better” player than someone who wasn’t quite as efficient but played 10 seasons in the NBA. For example, Josh Hart (Knicks) played 81 games and 2,707 minutes last season with an EPM of -0.7, putting him in the top 40% of all players last season by EPM. Without considering minutes played, it could appear that Hart had a worse season than Neemias Queta (Celtics), whose +1.5 EPM was in the 84th percentile of all players last year. But Queta only played in 28 games, a total of 333 minutes, against mostly backups. Nobody would seriously argue he was “better” than Hart, and we agree! Thus, we wanted to ensure that our analysis captured the performance “value” of players who routinely saw the court and attributed value to their contributions.
Dunks & Threes has a metric called Estimated Wins that attempts to convert EPM into a summable stat. We could’ve relied on it, but we chose not to for a few reasons. There’s no robust public description of how the Estimated Wins metric is derived, so we weren’t sure exactly what it represents. Estimated Wins also can go negative, which is one of the things we wanted to avoid. We also built a quick tool to consider a draft value chart based on Estimated Wins, and the results weren’t as sensible—but I’ll be the first to admit that picking between stats becomes a subjective exercise, and we debated the question.
With that said, by making our EPM units metric positive, we credit some level of contribution to any minute played even if the player is bad. That makes some intuitive sense, as the minute must be played by somebody and if a team thought they could fill the minute with a better player, they had the opportunity to do so in theory. Of course, the consequence is that we mask the possibility that a player is so bad that they’re truly a negative contributor on the court and could easily be replaced, which we can also intuit is a real possibility. Imagine a player who is getting minutes solely for development purposes. The team knows he isn’t good enough to play now, but they want him to play for the potential long-term benefit. That player’s performance could actually be replaced by a better player, but the team is specifically choosing not to do so. Our choice to make EPM units positive hides that possibility somewhat, but the masking effect is mitigated by the likelihood that the player would have a noticeably poor negative EPM, which our calculation methodology will illustrate.
Calculating EPM Units
To calculate EPM units, we started by taking a player’s EPM for a given season, applying a scaling function, and then multiplying the result by the player’s number of minutes played for that year. Although we could have used possession data, it’s trickier to obtain reliably, and minutes function as a reasonable, consistent proxy. Regardless, for each season, we can calculate a player’s EPM units, and we can find the player’s career EPM units by adding the cumulative total for all seasons played. In simple conceptual terms, you can think of the equation below:
EPM Units for a Season =
(Scaling Function [EPM]) x (Minutes Played) x 1000
[Note: the 1000 is just so we have easier to read numbers—you can basically disregard it from a conceptual standpoint]
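In code, the per-season calculation can be sketched roughly as follows. Note that the `scale` function here is a purely illustrative stand-in (the constant is made up), not the exponential curve we actually fitted, which I describe below:

```python
import math

# Placeholder scaling function; the exponential we actually fitted
# (described later in the post) isn't reproduced here, so the 0.35
# constant is purely illustrative.
def scale(epm: float) -> float:
    return math.exp(0.35 * epm)

def epm_units_for_season(epm: float, minutes: float) -> float:
    # EPM Units for a Season = scale(EPM) x minutes played x 1000
    return scale(epm) * minutes * 1000

def career_epm_units(seasons: list[tuple[float, float]]) -> float:
    # seasons: (EPM, minutes played) pairs, one per year
    return sum(epm_units_for_season(epm, mins) for epm, mins in seasons)
```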
Obviously, the scaling function we applied is fundamental to the result, so I’ll explain our approach below.
To determine what scaling function to apply, we decided to look at the general relationship between player performance and player salaries as a guide. The idea here is straightforward: presumably, NBA teams pay players more if they think they’re better performers and playing often enough to be worth the financial commitment, so we could use a similar relationship to scale EPM to our EPM unit metric. While this is probably not a perfect comparison, it seemed more sensible than using an arbitrary scaling relationship that we might otherwise have conceived of.
So, we pulled data on the salary cap charges for all 449 players league-wide for the 2023-24 season from Spotrac and ranked them by percentiles, with the 100th percentile being the highest salary cap charge (we opted for cap charges over cash salaries because they aren’t as likely to fluctuate based on players receiving performance/trade/other bonuses, which could create some odd spikes). We also ranked Dunks & Threes’ EPMs for all players by percentile, again with the 100th percentile being the highest positive EPM. We then plotted these two datasets (salary cap charge vs. EPM) against one another to see what type of relationship there was between them, such as linear or exponential, to inform our scaling approach. You can see the curve below in Figure 2:
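As a rough sketch of the percentile-ranking step, with made-up numbers (and a naive rank formula that ignores ties):

```python
def percentile_ranks(values):
    # Percentile of each value within the list, with the 100th
    # percentile assigned to the highest value (ties ignored here).
    ordered = sorted(values)
    n = len(values)
    return [100.0 * (ordered.index(v) + 1) / n for v in values]

# Illustrative numbers only—not real cap charges or EPMs.
cap_charges = [2.1, 5.5, 12.3, 35.0]   # $MM cap charge per player
epms = [-4.1, -1.7, 0.8, 6.2]

cap_pct = percentile_ranks(cap_charges)
epm_pct = percentile_ranks(epms)
# Plotting cap charge against EPM, player by player, yields a curve
# like the one in Figure 2.
```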
We plotted the same curves using salary cap charges and EPMs from 2023 and 2022 to make sure that 2024 wasn’t some odd outlier. As you can see from Figure 3 below, the curves look largely similar.
A few things stood out from reviewing these curves, keeping in mind the potential effects of the NBA salary cap.
At the low end, player salaries are clustered around the same values even as EPMs improve. You can see this illustrated by the relative flatness of the curve for lower EPMs (roughly in the -10 to -5 range). This is likely due to a few factors:
player salary cap charges can’t go below $0 (duh);
players on rookie deals have specified salary amounts based on the NBA CBA, which may be artificially low; and
a lot of salary cap charges cluster around the two-year veteran’s minimum, which was just over $2 million last season.
At the top end of the salary range, player salaries are artificially depressed by the CBA, so you see a clear flattening. Generally, no player’s maximum salary can exceed 35% of the salary cap (other than for in-contract raises), and whether players are even eligible to receive that maximum 35% depends on how long they’ve been in the league. [Note: 2023 looks a little wonky, but that appears to be a spike specifically from Stephen Curry’s new deal before that season.]
In the middle, there appears to be a roughly exponential curve, with salaries staying relatively flat between EPMs of about -5 and -3 before quickly rising. The exponential nature of the curve is even easier to pick out if you look at the graphs below in Figure 4. The left-hand side shows just the 10th to 90th percentiles of salaries vs. EPMs, which looks like an exponential curve. The right-hand side shows the 5th to 95th percentiles, which starts to show some of the flattening at the top end of salary cap amounts (likely resulting from CBA-imposed maximums).
Based on our review of the above charts, we opted to use an exponential scaling function and fit the scaling curve to specific points along the 2024 Cap Charge vs. EPM curve (Figure 2). Use of an exponential scaling function may come as a surprise to those of you familiar with the Wins Above Replacement (WAR) metric often used in baseball and some other win-value metrics, but we found that an exponential function makes more sense for the NBA as it better tracks how teams are valuing players in the open segments of the market (e.g., without as many rule-based boundaries) and results in a more sensible valuation of draft picks, which I’ll discuss more later.
To create the exponential, we fit an exponential curve to the 10th, 30th, 50th, 70th, and 90th percentile points on Figure 2 to avoid the “flattened” areas of the curve that happen at the high- and low-ends of the distribution that were likely due to maximum and minimum salary rules in the CBA. We did not expect those tail results to be relevant to the calculation of EPM units, which aren’t artificially capped at either end.
We also calculated each player’s salary cap charge as a percentage of the total cap allocations across the league (over $4.54 billion in 2024). The players who are the most valuable “accrue” a greater proportion of the value allotted to all players.
Once that was done, we pulled the results for 10th, 30th, 50th, 70th, and 90th percentile players (by salary cap charge) to fit the scaling function.
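One simple way to produce such an exponential is a least-squares fit on the logarithm of the values. The sketch below shows the idea with hypothetical (EPM, cap-share) points standing in for the real percentile points; our actual fitted constants aren’t reproduced here:

```python
import math

def fit_exponential(points):
    # Fit y = a * exp(b * x) by linear least squares on ln(y).
    xs = [x for x, _ in points]
    lys = [math.log(y) for _, y in points]
    n = len(points)
    mx, mly = sum(xs) / n, sum(lys) / n
    b = sum((x - mx) * (ly - mly) for x, ly in zip(xs, lys)) \
        / sum((x - mx) ** 2 for x in xs)
    a = math.exp(mly - b * mx)
    return a, b

# Hypothetical stand-ins for the 10th/30th/50th/70th/90th percentile
# points on the Figure 2 curve (EPM, share of league-wide cap charges):
points = [(-4.0, 0.002), (-2.5, 0.004), (-1.5, 0.007),
          (0.0, 0.015), (2.5, 0.040)]
a, b = fit_exponential(points)
scaling = lambda epm: a * math.exp(b * epm)
```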
We used this function to convert EPM to EPM units for each player drafted from 2013 through 2023 and summed each player’s career EPM units (meaning the sum of the player’s EPM units accrued between the 2013-14 through 2023-24 seasons, the years for which we found complete EPM data).
Building the Draft Chart
To create the draft chart itself, we performed a few different steps.
First, we ranked the players in each draft class in order of career EPM units, best to worst, and we divided the EPM unit totals by the number of seasons that draft class was eligible to be in the NBA (up to a maximum of 11 seasons for the 2013 draft class). That gave us EPM units per season for each player, which was a bit more useful for comparison purposes.
[Note: There is some risk that we lowered the per-year performance of certain players who weren’t able to play the full number of seasons they were eligible for. That could be an issue for some players, such as those who suffered career-ending injuries or missed significant chunks of time due to injury. Given the nature of our project, though, it seems appropriate to knock down the values of players who missed time due to injury, and it’s reasonable to set an outside window of around 10-12 years. We’ll have to think further about how to address this as EPM data becomes available over a longer window, however. For example, it would be a bit crazy to expect players to be playing 20+ seasons after they’re drafted.]
Second, we matched players up to their “theoretical” draft slots as if they had in fact been drafted in order of ultimate performance, from best (#1) to worst (#60). In other words, we slotted the player with the most EPM units in a draft as pick #1, the player with the second most EPM units as pick #2, and so forth until each of the 60* picks in a given draft year was matched up to a particular EPM unit score. [*The 2022 and 2023 drafts each had only 58 picks, as some picks were forfeited.] I’ll explain why we took this approach in a bit more detail later.
Third, we found the average of EPM units produced per year for each draft slot across the 11 draft classes with complete EPM data and normalized each value to a scale of 1000 (for convenience). To illustrate, let’s look at an example. Between 2013 and 2023, the average best player in a given draft class produced about 1,127 EPM units per season and the average second best player produced about 758 EPM units—about 67.2% of the EPM units produced by the average #1 pick. Accordingly, on our normalized scale, the #1 pick is worth 1,000 points and the #2 pick is worth 672 points.
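The three steps above can be pulled together in a short sketch (the numbers below are toy data, not our real dataset):

```python
def draft_chart(classes):
    # classes: list of (career_epm_units_per_player, seasons_eligible)
    per_season = []
    for units, seasons_eligible in classes:
        # steps 1-2: rank best-to-worst (the "theoretical" draft order)
        # and convert career totals to per-season values
        ranked = sorted(units, reverse=True)
        per_season.append([u / seasons_eligible for u in ranked])
    # step 3: average each slot across classes, then normalize so the
    # top slot is worth 1000 points
    n_classes, n_slots = len(per_season), len(per_season[0])
    averages = [sum(cls[slot] for cls in per_season) / n_classes
                for slot in range(n_slots)]
    return [round(1000 * avg / averages[0]) for avg in averages]

# Two toy three-player draft classes (career units, seasons eligible):
chart = draft_chart([([3300, 2100, 900], 3),
                     ([2200, 1700, 300], 2)])
```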
You can see the full draft value chart (again) below:
What Jumps Out?
Let’s start with the draft value chart itself (Figure 1). There are a few things that jump out that are worth flagging, in my view.
The first two picks are incredibly valuable.
Just looking at the chart itself, it’s evident that the first and second pick are tremendously valuable relative to other picks, even other early first round picks. There are a few ways we can see this.
The #1 pick (1,000 points) is worth roughly 1.5 times as much as the #2 pick (672 points), and the #2 pick in turn is about twice as valuable as the #3 pick (336 points). But the changes in pick values start to smooth out quickly after the second pick. For example, the #3 pick is only about 1.2 times more valuable than the #4 pick (276 points); the #4 pick is only about 1.2 times more valuable than the #5 pick (234 points); and the #5 pick is only about 1.1 times more valuable than the #6 pick (205 points). That general trend remains true through about pick #34 (in the second round), when we finally again start to see bigger changes in pick value as we move down the draft order.
This is illustrated by Figure 5 (below). The bars show, for each pick, how many times more valuable it is than the immediately subsequent pick (e.g., pick #1 is 1.5x the value of pick #2, and pick #2 is 2.0x the value of pick #3, etc.). Bigger bars show bigger changes in value as you move down the draft board.
For the most part, there aren’t huge drops in draft pick value as you go down a pick, other than jumping from pick #1 to pick #2 or pick #2 to pick #3. For the bigger changes from pick to pick toward the end of the draft, it’s important to keep in mind that these are small in absolute terms. While pick #54 may appear to be 2.4 times more valuable than pick #55, that’s because the actual values are so low that small absolute changes produce large ratios. For example, a player who makes an NBA roster and plays 5-10 minutes per game for a few seasons has many times more value than a player who did not even make a roster after being drafted, but in real terms, neither player has a big impact.
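The ratio calculation behind Figure 5 is trivial, but for completeness, here’s a sketch using the first few values from our chart:

```python
def value_ratios(chart):
    # How many times more valuable each pick is than the next pick down.
    return [round(chart[i] / chart[i + 1], 1) for i in range(len(chart) - 1)]

# First six values from our draft value chart (Figure 1):
print(value_ratios([1000, 672, 336, 276, 234, 205]))
# → [1.5, 2.0, 1.2, 1.2, 1.1]
```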
You can also see why picks #1 and #2 are so valuable when you look at the data another way. Check out the graph below:
Figure 6 shows the range of potential outcomes for the top 5 players in every draft from 2013 through 2023 in terms of EPM units generated per season (like with the draft value chart, I’ve treated the top five players as if they were picked with picks #1 to #5). The curves were generated using ChatGPT to perform a Kernel Density Estimation (KDE) to estimate the distribution of particular picks. The “density” here represents the frequency with which a particular pick value will land at a particular EPM units per season value—the higher the peak, the more likely that pick will land at that value. Alternatively, you can think of the area below the line for any given pick as the distribution of outcomes for that pick.
[Note: If you’re curious, I’m happy to share the KDE method used. I used a Gaussian kernel, a bandwidth spread of 1.0, and boundaries of 0 and 2,840.8 EPM units per season (the maximum EPM units per season generated by any player in our dataset, Nikola Jokic).]
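For the curious, a fixed-bandwidth Gaussian KDE is straightforward to write by hand. The sketch below is my own stand-in (not the exact code used for the figures), and it omits the boundary handling at 0 and the maximum mentioned in the note:

```python
import math

def gaussian_kde(samples, bandwidth):
    # Returns a density function: the average of Gaussian bumps of
    # width `bandwidth` centered on each sample.
    n = len(samples)
    norm = 1.0 / (n * bandwidth * math.sqrt(2.0 * math.pi))
    def density(x):
        return norm * sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
                          for s in samples)
    return density

# Toy per-season EPM-unit outcomes for a single pick (illustrative only):
density = gaussian_kde([620.0, 700.0, 1100.0, 450.0, 680.0], bandwidth=50.0)
```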
For example, look at the red line for pick #1. Assuming you actually pick the best player in the draft, pick #1’s most likely outcome appears at roughly 600-700 EPM units per season, meaning the team with an average first pick in the draft has a chance to draft a player who generates about 600-700 EPM units per season—a really strong player. But the pick also has another huge benefit: it’s also quite likely to allow the team to draft a player who generates 1,000 EPM units or more per season (up to the bounded maximum of ~2,840). While those 1,000+ EPM unit per season players aren’t common—only 7 of 656 players drafted since 2013 have hit that mark, some in the same draft—having the #1 pick gives a team a reasonably good shot of landing that good of a player.
Let’s look at pick #2 next. Pick #2 has a similar curve shape to pick #1, but you can see that the distribution of outcomes has a peak somewhere around 450 EPM units per season—quite a bit lower. The distribution also swings heavily toward the lower end of the spectrum, even though there are some outlier instances when the #2 pick ought to be worth 1,000+ EPM units per season. This occurred in drafts like the 2014 draft, where Joel Embiid was extremely productive but still only the second most productive player in his draft class on an EPM units per season basis.
Unfortunately picks #3, #4, and #5 skew much more heavily toward the left of the chart. The players available with those picks (the third, fourth, and fifth best players in a given draft) are likely to be less productive than the best and second-best players, and they are much less likely to generate seriously outsized performances in the 1,000+ EPM unit per season range (none of them have gotten there in the past 11 draft classes).
The combination of these curves helps illustrate why the #1 pick and #2 pick carry so much value in our draft values chart (Figure 1).
If you’re curious about the rest of the lottery picks, I’ve also included that graph below (Figure 7). I’m not going to provide curves for all 60 picks though because it would be visually impossible to decipher.
The value of picks in the middle of the first round flattens out quickly and substantially.
Let’s go back to Figure 1 and Figure 5 again.
Looking at Figure 1, between pick #9 and pick #21, there’s only about a 93-point drop in pick value (from approximately 135 to 41), and the drop-off for each pick slot is never more than about 14 points.
Looking at Figure 5, there’s a noticeable flat trend between picks #7 and #21 for pick value relative to the subsequent draft slot—each pick is worth between about 7% and 15% more than the pick that follows (there’s rounding going on).
If we look again at Figure 7, too, you can see a clustering of the distributions for picks #9 to #14 around the same EPM units per season outcomes.
Taken together, these charts strongly suggest that once teams are picking in the middle of the first round, there probably isn’t a ton of difference moving around in pick location—at least within a few slots. There’s little to suggest that picking the 10th best player in a draft class is meaningfully different than selecting the 12th best player (assuming the “correct” picks are made), so there’s little reason to value those picks much differently from one another.
Second round picks probably don’t have much value—and late second round picks aren’t worth much more than replacement players.
Let’s look at another graph, Figure 8.
Figure 8 was actually the basis for the draft value chart itself before we normalized everything to an easier scale. Specifically, Figure 8 shows the average EPM units per season generated by players drafted over the last 11 years (from best to worst). It shows that the average “best” player in a draft produces a little over 1,100 EPM units per season; the average second-best player generates just over 750 EPM units per season; and so forth.
You can see the bars start to get quite small when we’re talking about the 30th to 60th best players in a draft class who could be available at those pick slots. Many of those players essentially produce no value.
There are some jarring facts from the data. On average, the bottom 24 players in each draft class produce less than 5.0 EPM units per season, while a staggering 15 players (!) produce less than 1.0 EPM unit per season. Compare that to the over 1,100 EPM units per season generated by the average best player in a draft class and the 750 EPM units per season generated by the average second best player. Or even the roughly 134 EPM units per season generated by the average 10th best player in a draft, or the nearly 50 EPM units per season generated by the average 20th best player. It’s hard to conclude anything other than that those last 15-24 picks are near worthless. Ultimately, about 40% of the average draft class of 60 players is producing basically no value on the court.
If you map those 40% of players onto draft picks, that means that picks outside the top 36 aren’t worth much in terms of production, and as you get even further in the round (toward picks #50 to #60), those players aren’t likely to produce anything.
Potential Pitfalls
In my view, we took a fair, reasonable approach to our analysis and we followed a good process in creating the draft value chart. With that said, I think it’s important to acknowledge that there are some areas where we made judgment calls and it’s fair to wonder about the impact of those decisions. Let’s go through the most important ones.
#1: Re-Ordering the Draft
As I noted previously, we opted to sort draft pick values based on the best performing player from each class rather than the specific draft slot players were in fact drafted into. The basic rationale is that the team with the #1 overall pick, by definition, has the chance to pick the best player, the team with the #2 overall pick has the chance to pick the second-best player, and so forth. If teams miss their picks, that’s on them.
Of course, there are several real-world problems with this approach. I won’t go through a laundry list, but the most evident issues are that teams won’t always pick the “best” player and players may develop more if the team that drafts them has better coaching/player development or is simply a better fit (on and off the court). Using larger sample sizes and averages can mitigate these concerns, especially if you believe—as I do—that NBA teams tend to do a good job of identifying the best players through scouting.
But the biggest justification for sorting draft pick values based on the best performing player rather than actual draft slot is easier to show than explain with words alone.
Check out the graph below.
Figure 9 shows the average EPM Units generated by the 11 players drafted in each draft slot from the 2013 to 2023 drafts as though they were drafted in their actual draft position. In other words, the actual eleven players drafted with #1 picks in the eleven drafts from 2013 to 2023 are all averaged, the actual eleven #2 picks are all averaged, the actual eleven #3 picks are all averaged, etc.
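For reference, the “actual slot” averaging behind Figure 9 looks something like this (with toy numbers illustrating how one outlier season-long performer can inflate a slot’s average):

```python
def average_by_actual_slot(drafts):
    # drafts[year][slot] = career EPM units per season of the player
    # actually taken at that slot in that year's draft
    n_years = len(drafts)
    n_slots = len(drafts[0])
    return [sum(year[slot] for year in drafts) / n_years
            for slot in range(n_slots)]

# Two toy draft years, three slots each (illustrative values only);
# a single home run at slot 3 makes the "average #3 pick" look best:
print(average_by_actual_slot([[100.0, 50.0, 400.0],
                              [300.0, 150.0, 200.0]]))
# → [200.0, 100.0, 300.0]
```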
You can see the trend here is quite lumpy and hard to predict. On top of that, some immediate absurdities leap off the screen.
The average #3 pick is about 75% more valuable than the average #1 pick and more than three times as valuable as the average #2 pick. The average #41 pick also appears to be insanely valuable—it’s the third highest pick in terms of average value overall, and over 10 times more valuable than the average pick on either side of it (#40 and #42).
If that seems wild to you, it’s because it probably is.
The chart below (Figure 10) shows the top 3 picks over the last 11 years. Green is the best pick of the three draft slots, yellow is the second best, and red is the worst.
Glancing at Figure 10, you couldn’t be faulted for thinking the #1 pick is the best here. It produced the best player of the three slots in 4 of 11 drafts, and produced the worst only twice. But the #3 pick has produced the best player of the three slots five different times, and when you look a bit closer you see that a few guys who were drafted at pick #3 have been truly stellar players.
The home-run picks really distort the numbers in an 11 year sample. Specifically, the outsize performances of Joel Embiid, Luka Doncic, and Jayson Tatum dramatically improve the average value of the #3 pick. Embiid and Doncic have produced by far the most value in terms of overall EPM units among top 3 picks (with the caveat that Embiid is one of the longest tenured players in the sample), and Tatum is only matched by the two of them and Karl-Anthony Towns. Conversely, the lack of outsized performances for #1 picks other than Towns, coupled with the bust picks of Markelle Fultz and Anthony Bennett, really affect the average value of the #1 pick. The #2 pick, meanwhile, hasn’t yielded a truly great player in the last eleven drafts, and several of the better performers from that pick slot have had injuries (Oladipo, Parker, Ball) or other issues (Morant) limit their performance.
We can also easily explain the apparently absurd value of the average #41 pick—it’s basically driven by one guy. Nikola Jokic has generated a staggering 2,840 EPM Units per season since he was drafted with the 41st pick in 2014, despite missing a full year playing overseas. The only other players who even clear 1,000 EPM Units per season are regularly All-NBA contenders like Joel Embiid (~2,129), Giannis Antetokounmpo (~2,098), Luka Doncic (~1,855), Shai Gilgeous-Alexander (~1,590), Rudy Gobert (~1,238), and Jayson Tatum (~1,107).
The mere fact that a later pick ultimately yielded a better player should not make that selection inherently more valuable than an earlier pick. That Marvin Bagley III was selected #2 ahead of Luka Doncic in 2018, for example, doesn’t mean the #3 pick was actually “better” despite the outcome. The Kings, picking at #2, had the opportunity to select Doncic but chose not to, as did the Hawks, who traded out of the #3 pick. That those teams made the wrong selection at the end of the day says little about the pick’s value.
Moreover, when we sort draft position based on player performance—essentially, what should have happened with the benefit of hindsight—you get much more sensible results. Check out the graph below:
This graph shows the average EPM Units per season that would have been generated by the #1 pick had the eleven best players in each draft instead been drafted #1 overall, the eleven second-best players been drafted #2 overall, and so on. In other words, when you assume that in an average year the best player available is drafted in each slot, this is the curve you get.
While the result here is obviously somewhat artificial, the smoothed curve makes a lot more sense for estimating the value of a given pick than the exceptionally spiky chart shown in Figure 9. Thus, we thought this was a much more sensible sorting methodology to use for creating our draft value chart.
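The hindsight re-sort can be sketched in a few lines: within each draft class, rank players by career EPM Units (best first), then average across classes by rank instead of by actual pick. Again, the records below are toy values, not the real dataset:

```python
# Hindsight sorting: re-rank each draft class by performance, then
# average by rank. Each record is (draft_year, actual_pick, career_epm_units).
from collections import defaultdict
from statistics import mean

players = [
    (2018, 1, 30.0),   # a #1 pick who underperformed
    (2018, 2, 10.0),
    (2018, 3, 200.0),  # a #3 pick who became a star
    (2019, 1, 180.0),
    (2019, 2, 60.0),
    (2019, 3, 20.0),
]

# Collect each class's values, ignoring where players were actually picked.
by_year = defaultdict(list)
for year, _pick, epm_units in players:
    by_year[year].append(epm_units)

# Within each class, sort best-first and assign hindsight ranks 1, 2, 3...
by_rank = defaultdict(list)
for year, values in by_year.items():
    for rank, epm_units in enumerate(sorted(values, reverse=True), start=1):
        by_rank[rank].append(epm_units)

hindsight_avg = {rank: mean(vals) for rank, vals in sorted(by_rank.items())}
print(hindsight_avg)  # {1: 190.0, 2: 45.0, 3: 15.0}
```

By construction, the resulting curve is monotonically decreasing, which is why it comes out so much smoother than the actual-slot averages.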
#2: No Time Limits On Value Accrual
The number of years you count toward a player’s draft value is also incredibly important. In this exercise, we did not limit the years of value that a draft pick could accrue, so based on our EPM dataset, draft picks could accrue EPM Units for up to 11 seasons. Some of the other analyses I’ve seen have capped the amount of value they credit to a draft pick at 4 or 5 seasons, which is certainly defensible. However, I strongly suspect that 4- and 5-season caps on value dramatically understate the value of getting a top player through the draft, as most elite players will stay with the team that drafted them far beyond the first 4-5 seasons of formal “team control.” For example, out of the five players drafted since 2013 with the highest career EPM Units, four have spent their entire careers with the team that drafted them (Nikola Jokic, Giannis Antetokounmpo, Joel Embiid, and Luka Doncic), and the fifth (Rudy Gobert) spent nine seasons with the team that drafted him before he was eventually traded, garnering additional value for that team. Still, there probably is some value in trying to nail down a more accurate timeframe to use for value accretion of each draft pick, and that’s something we may look at down the road.
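The effect of the capping choice is easy to see with a toy example (the per-season values here are invented for illustration):

```python
# Compare uncapped value accrual against a 4-season cap.
def pick_value(season_units, cap=None):
    """Credit a draft pick with its player's per-season EPM Units,
    optionally limited to the first `cap` seasons."""
    seasons = season_units if cap is None else season_units[:cap]
    return sum(seasons)

# A star whose production keeps growing: a 4-season cap misses
# most of his career value.
star = [50.0, 120.0, 250.0, 400.0, 600.0, 700.0, 800.0]
print(pick_value(star, cap=4))  # 820.0
print(pick_value(star))         # 2920.0, about 3.5x the capped figure
```

For players who peak in seasons 5 and beyond, which describes most elite players, the uncapped sum credits the drafting team with far more value.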
#3: Choosing EPM
We chose to use EPM for several reasons that I covered earlier, but it essentially boils down to the fact that I think it’s one of (if not the) most accurate all-in-one metrics available. You could create this type of chart based on other all-in-one metrics like Box Plus/Minus (which I may try at some point for comparison), RAPTOR, or other advanced stats, and people have. A draft value chart that incorporates several stats could, in theory, be even better if the other metrics used are also reliable.
We did, however, run a sanity check using another metric, which I describe in the next section.
#4: Exponential Scaling
Picking the scaling function used to calculate EPM Units from Dunks & Threes’ EPM stat is, inherently, a subjective exercise. As I discussed above, we chose to use an exponential scaling function based on how NBA salaries scale, but we basically tossed out the high end and low end to do so. There are good reasons for that, but we certainly had alternative options. For example, the salary scale curves (Figures 2 and 3) actually look a lot like sigmoid curves when you include the extremes. We think that’s probably attributable to the CBA rules for maximum and minimum salaries rather than a true reflection of market values, so we didn’t use a sigmoid scaling function, but that hypothesis may not actually be true. We also could have opted to use a linear scaling function, but that doesn’t seem to match how the NBA actually distributes salaries.
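To make the shape of the choice concrete, here is a hedged sketch of an exponential scaling function. The constants `A` and `B` are placeholders, not the values fitted in this analysis, and the exact function used for the chart is not reproduced here:

```python
# Illustrative exponential scaling from EPM to "EPM Units": elite
# seasons are worth disproportionately more, mirroring how NBA
# salaries scale. A and B are placeholder constants, not fitted values.
import math

A = 1.0   # placeholder base scale
B = 0.5   # placeholder growth rate

def epm_to_units(epm):
    """Exponential scaling: equal EPM gains at the top of the league
    translate into much larger gains in units than gains near average."""
    return A * math.exp(B * epm)

# The intuition behind exponential (rather than linear) scaling:
# one elite +6 EPM season is worth far more than three +2 EPM seasons.
print(epm_to_units(6.0) / epm_to_units(2.0))
```

A linear function would make those two scenarios equal in value, which is exactly the mismatch with salary distributions noted above.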
As a check, we decided to also develop a rough draft value analysis based on Estimated Wins from Dunks & Threes. [Note: it looks like Dunks & Threes recently removed Estimated Wins from its public site, but we had previously downloaded the data for the 2014 through 2024 seasons.] While we don’t have an exact understanding of how Dunks & Threes calculates Estimated Wins, we understand that it’s a function of EPM and playing time (probably in terms of minutes, but possibly based on non-public possession data). Regardless, based on the distribution of Estimated Wins, it appears to scale in a more linear fashion than our EPM Units metric. Estimated Wins can also be negative, which introduces some challenges.
This analysis proved useful because it quickly surfaced some obvious issues with the alternative approach.
First, in an average draft year, 23 players were expected to produce negative Estimated Wins. Not only did those players show up as being worse than players who never played, but their negative performance also suggests that more than two-thirds of second round picks in an average draft somehow carried negative value. That doesn’t track with logic or reality, as NBA teams routinely make trades for those picks.
Second, creating draft values based on Estimated Wins led to apparent under-valuation of early first-round picks vis-a-vis middle first-round picks.
For easy comparison, I’ve put side-by-side charts of the normalized values for EPM units and Estimated Wins below (Figure 11) and a graph showing how picks decrease in value using each metric (Figure 12).
You can see from Figures 11 and 12 that draft values based on Estimated Wins run negative from pick #38 onward in an average draft, which doesn’t make any sense.
You can also see the Estimated Wins-based draft values are significantly higher than EPM unit-based draft values for certain picks, specifically in the range of pick #3 to about pick #21. It’s difficult to assess the value of very high picks in the NBA based on past trades, but nonetheless, the Estimated Wins values seem too tightly packed between top picks. For instance, the Estimated Wins draft values suggest that teams should be willing to trade a #1 overall pick for two #5 picks or three #8 picks, both of which are virtually impossible to imagine for normal drafts (not to mention drafts with truly coveted prospects available at pick #1, such as Wembanyama last year or potentially Cooper Flagg in 2025). That said, it is possible that draft values that are more evenly distributed amongst the top picks (as occurs when we used Estimated Wins) would make more sense in an environment where you are less certain which players will be best.
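The "two #5s for a #1" comparison is easy to formalize with a small helper that sums chart values for a proposed trade package. The chart values below are invented to illustrate a flatter versus a steeper curve; they are not the actual EPM Units or Estimated Wins values:

```python
# Compare trade packages against a draft value chart.
def package_value(chart, picks):
    """Sum the chart value of each pick in a proposed package."""
    return sum(chart[p] for p in picks)

# Hypothetical charts: a flatter, Estimated Wins-like curve vs. a
# steeper curve. Values are illustrative only.
flat_chart = {1: 100.0, 5: 52.0, 8: 35.0}
steep_chart = {1: 100.0, 5: 30.0, 8: 18.0}

# Under the flatter chart, two #5 picks "beat" a #1 overall, which is
# the implausible result discussed above; under the steeper chart,
# the #1 pick correctly holds its premium.
print(package_value(flat_chart, [5, 5]) > package_value(flat_chart, [1]))    # True
print(package_value(steep_chart, [5, 5]) > package_value(steep_chart, [1]))  # False
```

This is also why the steepness of the curve at the very top of the draft matters so much more than the shape of its tail.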
Wrap-Up
That’s it for now! We hope this post was informative and valuable. If you want to see the script we used, feel free to email me and I’ll share it with you (duncan@thesportsappeal.com). Unfortunately, however, I am not going to be able to publish the EPM dataset from Dunks & Threes since they have it behind a paywall (it’s not that pricey). If you can’t find another way to get the data, let me know via email and I can try to point you in the right direction at least.
Also, we’re probably going to explore some future projects, like looking at better ways to determine how many years of value to attribute to a given draft pick. I’m not sure when, but stay tuned if you’re interested.
As always, really appreciate any feedback folks have—so if there’s anything you think we missed or did wrong, please don’t hesitate to let us know! Another special shout-out once again to Alex Takakuwa, who was fundamental to making this analysis possible.