I was recently looking through some old Sports Illustrateds online, and I came upon a February 1997 issue that included a chart ranking the top 16 NBA point guards of that season. You can look at it here (it’s on pages 40 and 41 – easiest to access if you select “Show Thumbnails”). After getting past the shock of seeing Terrell Brandon listed #1 over John Stockton—not as alarming as you might think when you consider where each player was in his career at the time—I started looking a little more closely at the system itself.
Each player is scored from 16 points (best in a category) down to 1 (worst) across nine statistical categories, some very rudimentary weights are applied to the stats (assists and turnovers become more important, rebounds and blocks less), and SI then added up the weighted scores into a final column. Brandon edged out Stockton at the top. Allen Iverson placed last since many of his stats were relatively terrible (turnovers, all three shooting percentages). Jason Kidd was nearly last (doesn’t score, poor shooter).
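The mechanics of a chart like that are easy to reproduce. Here’s a minimal sketch of a weighted rank-sum system in Python—the players, category names, and weights are all made up for illustration, not SI’s actual values:

```python
# Sketch of a weighted rank-sum ranking like SI's 1997 chart.
# Players, stats, and weights are illustrative, not SI's actual values.

players = {
    "A": {"apg": 10.1, "tov": 2.8, "rpg": 4.5},
    "B": {"apg": 7.3,  "tov": 3.4, "rpg": 6.2},
    "C": {"apg": 11.2, "tov": 2.1, "rpg": 2.9},
}

# Assists and turnovers weighted up, rebounds weighted down.
weights = {"apg": 1.5, "tov": 1.5, "rpg": 0.75}
# Categories where a LOWER raw number is better.
lower_is_better = {"tov"}

def rank_points(stat):
    """Award N points to the best player in a category, 1 to the worst."""
    reverse = stat not in lower_is_better  # high value = good, unless flagged
    order = sorted(players, key=lambda p: players[p][stat], reverse=reverse)
    return {p: len(players) - i for i, p in enumerate(order)}

totals = {p: 0.0 for p in players}
for stat, weight in weights.items():
    for p, pts in rank_points(stat).items():
        totals[p] += weight * pts

for p in sorted(totals, key=totals.get, reverse=True):
    print(p, totals[p])
```

Even this toy version shows the system’s pace problem: any per-game counting stat rewards players on faster teams before a single weight is applied.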
There are some serious flaws in the magazine’s system. Considering 6 of the 9 stats were based simply on the amount accrued per game, PGs on faster teams had an obvious advantage. All three shooting percentages (FG, 3FG, FT) were weighted equally no matter how frequently an individual player shot the ball, attempted bombs, or got to the line. Blocks and rebounds were weighted equally, even though a top-rebounding PG (5-7 rpg range) is providing way more value to his squad than a top shot-blocking PG (0.5 bpg neighborhood). Defense was barely covered. And an extreme example of an awesome floor general would rank last: a hypothetical PG who racked up 60 assists per game with 0 turnovers and who played perfect pestering defense on an opposing point, forcing him into one thrown-away pass after another, would rank below Iverson because of a lack of scoring and steals.
On top of this, the nine measured stats are simply individual numbers each player accrues, with little regard for their actual effect on the game. Only recently have basketball fans had much more detailed information available to assess how well a player helps his team win (those crazy advanced stats). I decided to take 18 of the top PGs in the league today (15 vets, 3 rookies; I included Steve Blake since there’s a lot of talk surrounding his signing with LA) and compare them to each other using advanced stats that all attempt to measure a player’s impact by looking at how the team performs with that guy in the lineup. I used data from the last two seasons so that one fluke year didn’t skew things, and it’s hopefully a short enough period that a player’s performance two years ago isn’t much different from what you’d expect in 2010-11.
Here’s some information about the three formulas I used.
Pythagorean Winning Percentage (PyWin%)
Here’s an article I wrote last year explaining what the PyWin% is and how it can be used, but I’ll try to explain the basics of it for link-haters. By looking at how many points a team scores and surrenders over a season, one can calculate very closely what a team’s winning percentage should have been if luck were not a factor. That formula is
PyWin% = points^14 / (points^14 + opponent’s points^14)
This calculation can be applied to individual players if you use their Offensive Ratings and Defensive Ratings, which come from longer formulas that figure out how many points per 100 possessions someone was responsible for on both sides of the ball. Those two ratings can be found for players at basketball-reference.com (here’s Jason Kidd’s page as an example – scroll down to the Advanced section and you’ll see his ORtg and DRtg for each season).
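To make the arithmetic concrete, here’s the formula as a quick Python sketch. The ratings plugged in are hypothetical (an elite player on both ends), not any real player’s numbers:

```python
def py_win_pct(ortg, drtg, exponent=14):
    """Pythagorean winning percentage from a player's Offensive
    and Defensive Ratings (points produced/allowed per 100 possessions)."""
    return ortg ** exponent / (ortg ** exponent + drtg ** exponent)

# Hypothetical elite two-way player: ORtg 122, DRtg 104.
print(round(py_win_pct(122, 104) * 100, 1))  # 90.3
print(round(py_win_pct(122, 104) * 82))      # 74 projected wins over 82 games
```

Note how sensitive the exponent of 14 makes the formula: an 18-point gap per 100 possessions already projects to a 74-win pace, which is why only players who are strong on both ends score this high.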
I set up a spreadsheet to calculate each player’s PyWin% over the past two years. Chris Paul, who is absolutely outstanding on both sides of the ball, came out on top with a 90.2 PyWin%. In theory, a team full of Paul-esque players at each position (not 6-footers at all five spots, but players with his abilities and impact at each spot) would go 74-8 in a season. On the other end, Baron Davis—lackadaisical defender, horrendous shooter who likes to shoot, streaky distributor—is at 23.2%, good for 19 wins in a season for a team of Davis-esque guys.
Because ORtg and DRtg are tied to how well a player’s whole team performs, the resulting PyWin% rises and falls with team success more noticeably than the other two metrics do, so it’s the one I trust the least.
Adjusted Plus/Minus (APM or Adj +/-)
APM attempts to measure a player’s impact on his team’s point differential. Regular (or unadjusted) Plus/Minus works something along the lines of “Player A came into the game when his team was down 12-10, and when he left 5 minutes later, they were winning 20-15, so he has earned a +7 (movement in differential from -2 to +5).” Adjusted +/- starts factoring in the other nine players in the game and the situation (4th quarter of a tight game is more meaningful than garbage time in a 20-point blowout). Different sites and mathematicians have different formulas factoring situations and other players differently, but they all work toward isolating one player’s effect on the game independent of the other players.
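The raw (unadjusted) version from that example is trivial to compute from stint data; the adjustment step is where each site’s model differs. Here’s a sketch of the raw calculation, with invented stint data:

```python
# Raw plus/minus from stints: each stint records the score margin
# (team minus opponent) when the player checked in and checked out.
# Stint data here is invented for illustration.

def raw_plus_minus(stints):
    """Sum the change in point differential over a player's stints."""
    return sum(margin_out - margin_in for margin_in, margin_out in stints)

# Enters down 2 (-2), leaves up 5 (+5): a +7 stint, and so on.
stints = [(-2, 5), (3, 3), (0, -4)]
print(raw_plus_minus(stints))  # 7 + 0 + (-4) = 3
```

Everything beyond this—controlling for the other nine players on the floor and for game situation—requires a regression over every stint in the league, which is why the different sites’ APM numbers don’t match exactly.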
I used the 2 Year Adjusted +/- for players at Basketball Value. Here is the Jazz team page with Deron Williams at the top as an example. The 1 Year APM is calculated only using that season’s information, whereas the 2 Year includes the last two seasons and the playoffs during that timeframe. The people calculating these numbers do their best to not let it be affected by a team’s rises and falls, so many people consider it the most trustworthy when evaluating individual players.
Wins Produced Per 48 Minutes (WP/48)
Like APM, multiple mathematicians/basketball fans have worked on assigning individual wins to individual players. The site Wins Produced Test Suite does a good job of explaining how it does this and has a simple-to-use interface to find and compare players. I looked at Wins Produced per 48 minutes instead of simply Wins Produced because I’m trying to find which player is the best at producing wins, not who was the combination of best and played the most minutes in the past.
Top players are able to produce an amount of wins for a season in the teens and a few even surpass 20 (it can’t be much higher for an individual because a team will win as many games as all of its players produced together in a season), but I’m looking at WP/48 so that I’m not tricked into saying a player who plays more minutes is simply better, or a player who was injured for part of a season is simply worse. Usually the best players can produce 0.2 or more wins per 48 minutes. Jason Kidd led all PGs last year with 0.337 WP/48, and Chris Paul led PGs in 2008-09 with an amazingly high, league-leading 0.450.
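The per-minute conversion itself is just a rate calculation. A sketch with hypothetical totals shows why it matters:

```python
def wp48(wins_produced, minutes):
    """Convert a season's total Wins Produced into a per-48-minute rate."""
    return wins_produced * 48 / minutes

# Hypothetical players: a heavy-minutes starter vs. a player who
# missed time but was more productive per minute on the floor.
print(round(wp48(15.0, 3000), 3))  # 0.24
print(round(wp48(9.0, 1600), 3))   # 0.27 -- fewer total wins, higher rate
```

The second player produced six fewer total wins but was the better per-minute producer, which is exactly the distinction raw Wins Produced hides.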
Here's the spreadsheet showing each PG's rank in the three advanced categories (in yellow) and their overall average ranking (green).
Using the spreadsheet, I simply averaged each player’s rank in the three categories to get an overall idea of where they fall within these different ways of evaluating a player’s impact on the team. Again, none of these metrics factors in the normal stats we’re used to looking at for a player: points, shooting percentages, assists, and rebounds were never considered in any of these rankings.
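That averaging step is the simplest part of the whole exercise. A sketch with hypothetical ranks (not the spreadsheet’s actual values):

```python
# Average each player's rank across the three metrics.
# Ranks here are illustrative, not the spreadsheet's actual values.

ranks = {
    "Player A": {"pywin": 1, "apm": 2, "wp48": 1},
    "Player B": {"pywin": 3, "apm": 1, "wp48": 5},
    "Player C": {"pywin": 2, "apm": 6, "wp48": 2},
}

averages = {name: sum(r.values()) / len(r) for name, r in ranks.items()}

# Sort ascending: the lowest average rank is the best overall finish.
for name, avg in sorted(averages.items(), key=lambda kv: kv[1]):
    print(f"{name}: {avg:.2f}")
```

One design note: a straight average treats all three metrics as equally trustworthy, which is a deliberate simplification given the caveats about each metric above.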
After averaging everyone’s rankings in the three categories, the results more or less followed what I’d expect if I had to give my opinion on how good they are at being PGs. I’d definitely place Paul at the top of the list (came up 1st with average ranking of 1.33), with Williams, Kidd, and Nash in my next group. Those three fell next on the advanced math scale, finishing 4th with a 5.00, 2nd with a 3.00, and 3rd with a 4.33, respectively. Even though Kidd has clearly slowed down in his career, his ability to run an offense is still top-notch and he’s one of the best defenders at the position, so it seems perfectly reasonable that he could place second. The next group also makes sense: Rajon Rondo was tied for 4th with a 5.00, Jameer Nelson was 6th with a 6.67, Andre Miller was 7th with an 8.67, and Chauncey Billups was 8th with a 9.00.
At the bottom were two rookies (which is reasonable, because they probably aren’t yet doing a lot of the nuanced little parts of the game that don’t show up in stats but definitely hurt a team when absent, things such as help-side defense and reading mismatches correctly) and Aaron Brooks. Brandon Jennings was 16th with a 14.33, and Darren Collison and Brooks tied for last with a 15.00.
There weren’t a lot of surprises when you actually think about what the players do on both sides of the ball to help a team win regardless of statistical acknowledgement. Derrick Rose was a little lower than expected (13th with a 13.00), but his offensive game is still developing and his defense isn’t all there yet. Jose Calderon might seem high to some people at 9th with a 9.67, but he’s been a great shooter and distributor, so you can see how it happened.
Obviously it’s tough to judge how accurate or “correct” each of the three calculations is, since all of these formulas are being tweaked regularly, but it seems odd that some players’ rankings were so wide-ranging. Billups ranked 2nd in PyWin%, 7th in WP/48, and 18th (last) in APM. Russell Westbrook was 6th in APM, 12th in WP/48, and 16th in PyWin%. When there was a clear discrepancy in one of the categories for a player, it tended to be in the APM. This doesn’t mean APM is the least “correct” of the three; more likely, the other two are simply more similar in how they’re calculated.
Either way, I feel this attempt to rank the PGs with nothing but advanced stats measuring impact certainly passes the laugh test, and it has some very clear advantages over charts such as SI’s from 1997. I’m hoping that over time more fans will rely on advanced impact stats such as these three to judge how good players are. Until then, volume scorers will continue to be fan favorites and get propped up unnecessarily.