Sunday, May 10, 2009

Evaluating the WNBA Draft With Adjusted Wins Score



A long time ago, I introduced a linear metric called "Wins Score", which is one of my favorite metrics. I like it better than the WNBA's metric (Efficiency) because I believe that Efficiency overrates poor shooters.

Since then, there has been some debate among people involved in the statistical side of basketball over whether "Wins Score" overrates rebounders. The Wins Score metric gives players one point for every rebound, regardless of whether the rebound was offensive or defensive. The argument - as well as I understand it - is that in a lot of cases, rebounds come as a byproduct of just being there. Someone has to end up with the ball after a missed shot, or it just goes out of bounds.

Adjusted Wins Score (AWS) adjusts the value of rebounds. It gives 0.7 points for each offensive rebound, and 0.3 points for each of the more numerous defensive rebounds. The resulting formula:

Adjusted Wins Score =

total points scored
+ 0.7 * offensive rebounds
+ 0.3 * defensive rebounds
+ steals
+ 0.5 * (assists + blocks)
- field goal attempts
- turnovers
- 0.5 * (personal fouls + free throw attempts)
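For anyone who wants to compute it themselves, the formula above translates directly into a function. (The function and parameter names are mine; any box score gives you the inputs.)

```python
def adjusted_wins_score(pts, oreb, dreb, stl, ast, blk, fga, tov, pf, fta):
    """Adjusted Wins Score: a direct translation of the formula above."""
    return (pts
            + 0.7 * oreb     # offensive rebounds
            + 0.3 * dreb     # defensive rebounds
            + stl            # steals
            + 0.5 * (ast + blk)
            - fga            # field goal attempts
            - tov            # turnovers
            - 0.5 * (pf + fta))

# A made-up stat line for illustration: 14 points, 2 offensive and
# 6 defensive rebounds, 1 steal, 3 assists, 1 block, 12 FGA,
# 2 turnovers, 3 fouls, 4 FTA.
print(adjusted_wins_score(14, 2, 6, 1, 3, 1, 12, 2, 3, 4))
```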

The correlation of Adjusted Wins Score to total team wins is around 0.9, if I recall correctly. In other words, the metric describes results very well "after the fact".

However, if you want to use AWS to predict the future, you have to make some tough decisions. The first is how to project a player's future performance based on past performance. The second is how to project initial performance for players who have never played before - draftees.

So how do you evaluate the draftees? We now have 12 years of data, which should allow us to make some kind of predictions. We can simply look at a draft position - #1, #2, #3 - and see how players have done historically in their first year of play. We could then compare a drafted player after the season to her ancestors, so to speak. We would compare Angel McCoughtry to every previous #1 draft pick between 1997 and 2008, we'd compare Marissa Coleman to every previous #2 draft pick, and so forth. Depending on how those comparisons measured up, we could then sort out which GM organizations drafted well and which didn't.

My first idea was to take the average AWS of all #1 picks, the average AWS of all #2 picks, etc. The problem is that very high or very low values skew the average. If we went by the average alone, we'd have to conclude that the Los Angeles Sparks organization were geniuses for drafting Candace Parker. They shouldn't be given huge amounts of credit for that - Candace Parker was clearly someone who would be a great player in the WNBA right away. At most, we can give the Sparks organization credit for not passing up Parker; they should at least get credit for doing something right.

Instead of using an average AWS, I would use the median AWS. The median of a set of numbers is just the dividing line between the top half of the values and the bottom half. If the set has an odd number of values, the median is the one that's smack in the middle; if the set has an even number of values the median is the average of the "highest low" and the "lowest high".

In those cases where a draft pick didn't play in the year when she was drafted, or never made it to the WNBA at all, I assigned a value of 0.0 for that given draft position and year. It seems fair, since AWS is an additive metric and the missing player had neither added nor subtracted from the team's total AWS.
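The median-with-zero-fill procedure described above can be sketched in a few lines of Python, using the standard library's statistics module. The AWS values below are made up for illustration; None marks a pick who never played.

```python
from statistics import median

# Hypothetical first-year AWS values for one draft slot across six drafts.
# None = the pick never played that year, so she gets 0.0 as described above.
first_year_aws = [38.5, 52.0, None, 11.5, -4.0, 29.0]

filled = [aws if aws is not None else 0.0 for aws in first_year_aws]
print(median(filled))  # even count: average of the two middle values -> 20.25
```

With six values, the median splits the difference between the "highest low" (11.5) and the "lowest high" (29.0), and the 52.0 outlier doesn't drag the result upward the way it would drag an average.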

For the top 39 positions in the draft, here are the median AWS values:



What do these numbers tell us?

First, let's look at how the players in 2008 stacked up in AWS. My grading scale:

50.0 AWS and above: A
10.0 - 49.9 AWS: B
0.0 - 9.9 AWS: C
-9.9 to -0.1 AWS: D
-10.0 AWS and below: F
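The grading scale is just a chain of thresholds, so a minimal helper (my own naming) looks like this:

```python
def aws_grade(aws):
    """Letter grade for a first-year AWS, per the scale above."""
    if aws >= 50.0:
        return "A"
    if aws >= 10.0:
        return "B"
    if aws >= 0.0:
        return "C"
    if aws > -10.0:
        return "D"
    return "F"

print(aws_grade(38.1))   # Shameka Christon's 2008 AWS -> B
```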

Second, let's compare the sample median scores to some actual players. According to the table, the median AWS for a #1 draft pick is 38.4 - historically, #1 picks have proven to be "B" players in their first year of play. In 2008, Shameka Christon of the Liberty had an AWS of 38.1. We would expect Angel McCoughtry to be about as good in 2009 as Shameka Christon was in 2008 - probably a B or B+ player, not expected to be a superstar immediately. If Angel McCoughtry is at least this good, we give the Atlanta Dream organization kudos. If she proves to be sub-par, we knock the organization.

Third, there is some unevenness in the results. #6 picks have proven to be historically better than #5 picks - possibly because they don't have to bear the expectations that come with being a Top Five draft pick. #11 is also a good position for some reason. #7 has not been good, with most of the players drafted at #7 having a poor AWS in their first year. (And this year's #7 pick? Courtney Paris.)

Fourth, you can see that once you get to around the #8 pick, the value of the drafted player in the first year is almost negligible. Once you get past the mid-second round, any player who puts up even average numbers should be considered an example of smart drafting. It might be the case that GMs make their reputations in the second and third rounds of the draft, and not in the first round, where the good choices are fairly obvious.

It will be interesting to see how well this year's draft picks turn out. I'll be watching, and calculating.

2 comments:

pilight said...

You really need to limit comparisons to within the same draft class. If McCoughtry is the best player in the draft, then taking her #1 was the right choice even if her wins score isn't that high. Sometimes the draft is just weak, like 2000 or 2007, and there's just nothing a GM can do about it.

pt said...

pilight, that's a very good point about the relative strength of draft classes year by year, and one that I didn't think of.

An idea might be to use the scores above to create a downward sweeping curve. The base value - where the curve sits, basically - could be adjusted up or down depending on the strength of all drafted players.

Actually doing the above...well, I got to put on the thinking cap again....
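One first pass at the curve-plus-strength idea from the comment above might look like the sketch below. The #1 value of 38.4 comes from the table; every other number, and the strength factor itself, is hypothetical - how to actually estimate a class's strength is the open question.

```python
# Hypothetical baseline curve: historical median first-year AWS by pick.
# Only the #1 value (38.4) comes from the table above; the rest are
# invented placeholders for illustration.
historical_medians = {1: 38.4, 2: 25.0, 3: 18.0, 4: 12.0, 5: 8.0}

def class_adjusted_baseline(pick, strength=1.0):
    """Expected first-year AWS for a draft slot, scaled by class strength.

    strength > 1.0 for a deep class, < 1.0 for a thin one (e.g. 2000, 2007).
    """
    return historical_medians[pick] * strength

# A weak class drags every slot's expectation down proportionally:
print(class_adjusted_baseline(1, strength=0.8))  # 38.4 * 0.8
```

Grading a pick against this adjusted baseline, instead of against the raw all-years median, would stop punishing a GM for drafting the best available player in a weak year.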