Why measuring player development’s easier than developing players


Where did Victor Oladipo’s 2013 come from? (Yes, this was a miss. Still.)

In the coming days I’ll post a piece at ESPN.com that purports to rank major-conference coaches on how well they’ve performed in terms of player development over the last eight years.

This might therefore be an appropriate moment to offer the following disclaimer: I’m not really sure to what extent, in the most literal and causal sense, coaches develop players.

More importantly, no one, to my knowledge, is sure on that score. I suppose what we mean by a seemingly benign and straightforward compound noun like “player development” is actually something more like “developing your players’ naturally increasing ability to score and prevent points even faster than opposing coaches do.” That’s quite different from young players merely improving measurable NBA combine-variety skills.

The analytic nut to be cracked is that all college players get better at combine-variety skills. These are athletes between the ages of 18 and 20-something; they’re going to improve naturally and at a fast rate. You did (or are doing) so too at that age.

One question, then, is whether there are some programs where players consistently improve at actual basketball point production and prevention at an even faster rate than the norm. That answer turns out to be yes, so, if you haven’t already, spring for that ESPN+ subscription and stop by my other place very soon.

Naturally coaches at such programs deserve credit. They signed the players, they hired the assistants, and they ran the practices. They’re the common denominator across time. (I looked at eight years’ worth of data, coinciding, not coincidentally, with the point at which the box plus/minus stat at sports-reference.com kicks in for college players.) More to the point, coaches are administratively responsible for developing players. It’s what they do.

Fair enough. Still, just for a second, consider Victor Oladipo’s 2012-13, and what it might tell us about our preferred model of player development, one where development’s something bestowed upon players by coaches in possession of the keys to that safe. Oladipo’s junior season ranks No. 1 out of the more than 2000 individual player-seasons that I looked at in terms of adjusted year-to-year BPM improvement by a major-conference player.
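For the curious, the adjustment in question can be sketched in a few lines. This is a toy illustration with made-up numbers, not the article’s actual data or method: take each returning player’s year-to-year BPM change, subtract the cohort-wide average change (the “everyone gets better” baseline), and rank what’s left.

```python
from statistics import mean

# Hypothetical (player, BPM year 1, BPM year 2) rows. Real work would
# pull box plus/minus from sports-reference.com.
rows = [
    ("Player A", 2.0, 11.5),   # an Oladipo-style leap
    ("Player B", 4.0, 6.0),    # normal (very good) improvement
    ("Player C", -1.0, 1.5),
]

# The natural-improvement baseline: the average year-to-year change
# across all returning players in the sample.
baseline = mean(y2 - y1 for _, y1, y2 in rows)

# Adjusted improvement = raw jump minus the baseline, ranked biggest first.
adjusted = sorted(
    ((y2 - y1 - baseline, name) for name, y1, y2 in rows),
    reverse=True,
)
top_leap = adjusted[0][1]
```

The point of the subtraction is that a merely typical jump nets out to roughly zero; only leaps beyond the cohort norm register as outliers.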

So, absolutely, give it up for Tom Crean. By all that’s customary and linguistically standard in college basketball, to the coach go the player development spoils.

But, kind of like when Sindarius Thornwell was taking an outlandishly spectacular leap forward as a senior, Oladipo had teammates receiving, one presumes, the same coaching yet recording normal (very good) results for development. What fuels the outliers?

Indeed, the Oladipo example is particularly instructive because it’s old. We have the evaluative luxury of knowing the rest of the story, and we know 2013 was no fluke. It was instead the moment when our basketball brains first switched over to a correct appraisal of Oladipo’s true — albeit, at the time, newfound — ability. That’s what every team wants to happen.

It would be great news for college basketball writers and even better news for athletic directors if we could predict these happenings by simply pointing at a coach. It is not, however, clear that we can do so.

Another theory, not at all at odds with the occasional Oladipo event, would be that development entails not only the Norman Dale aspects of proper individual technique and endless drills in the gym (which, to be clear, are essential, but may also be relatively difficult to translate into a significant competitive advantage). It might also entail a coach affording all of his players access to a particularly effective way of playing the game collaboratively.

I’m not sure Jay Wright, for example, really is the literal Mozart of shooting instructors the way he would seemingly have to be to explain his teams’ historically absurd shooting. It’s possible, of course. Perhaps he does talk endlessly about backspin, pronation, elbow position, “nose over your toes,” etc., and maybe that really is the special sauce.

Still, it’s also likely his players are additionally finding excellent shot opportunities and are skilled at passing up lower-quality looks. We don’t traditionally call that second thing player development, but maybe we should. Wright’s players keep getting drafted. Maybe Villanova players develop by playing basketball at Villanova the same way high school students improve their French by spending three months in France.

All I know is, if you’ve read as much of your Emerson and Kevin Pelton as you should, player development might be a smidge less passive and pedagogical than we usually phrase it and a bit more active and volitional. Player, develop thyself.

Bonus quantified priors! Ironically, the best case for purely coach-directed player development might be the negative case in the most extreme examples. One thing I learned by digging through eight years’ worth of plus-minus is that, contrary to that “all players get better” bit above, players can and do regress statistically under coaches who are soon to be fired.

In fact, that sort of regression is, so far, an excellent predictor of a sacking to come, even if it takes a season or two to transpire. True, if we had a controlled experiment where we prohibited all firings for, I don’t know, a decade, surely some coach somewhere would be able to pull out of this kind of nosedive. But, at least in the last eight years, it hasn’t really happened yet. If you see a major-conference team backsliding across the board on adjusted BPM, it’s officially time to start speculating about the hot mid-major coach du jour.
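The backslide signal itself is simple to state in code. Again, a minimal sketch with hypothetical teams and numbers, assuming “across the board” means every returning player’s adjusted BPM declined:

```python
# Hypothetical adjusted-BPM changes for each team's returning players.
# Real inputs would be the cohort-adjusted figures discussed above.
team_changes = {
    "Team X": [-1.2, -0.8, -2.1],  # everyone regressed
    "Team Y": [1.5, -0.3, 2.2],    # mixed, i.e., normal
}

def backsliding(changes):
    """True if every returning player's adjusted BPM went down."""
    return all(c < 0 for c in changes)

flags = {team: backsliding(cs) for team, cs in team_changes.items()}
```

Under that definition, Team X trips the alarm and Team Y doesn’t; cue the mid-major speculation for Team X.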