Jim Van Valkenburg’s creation of the Ratings Percentage Index in the fall of 1980 marked an analytic and administrative triumph. Van Valkenburg was working in an information economy of near-total deprivation, with little or no supporting data at hand beyond wins, losses, and points. Nevertheless he was given time (six months), staff, and an office roof over his head in Kansas City by Walter Byers and told to come up with a rating system that would make the NCAA tournament’s selection and seeding processes something more than a rote parroting of the AP poll.
And, after a fashion, Van Valkenburg’s RPI did exactly what it was intended to do. Part of the impetus behind creating a rating system in the first place was the possibility that the NCAA might choose to give automatic bids to only a portion of Division I.
It never came to that. Instead, the NCAA expanded the field to 52 teams in 1983, and to 64 in 1985. By then the selection committee had already made some relatively daring at-large choices that appeared to be fueled, at least in part, by the RPI. At the same time a rating system that had been created to shed badly needed light on the game’s balance of power was beginning to change how the game was scheduled.
As a member of the men’s basketball committee, former Duke head coach Vic Bubas hit upon the idea of having his two interns generate printouts of every team’s record versus RPI top-50 opponents. Bubas and Van Valkenburg are seldom cited as seminal figures today, but you can make a case that every March we unconsciously yet unfailingly talk about tournament selection in grooves that were laid out for us 30-odd years ago by these two thinkers.
Maybe that will change this March. With its announcement that it will solicit the contributions of Ken Pomeroy, Jeff Sagarin, Ben Alamar, Kevin Pauga, and Jerry Palm as part of a metrics audit of sorts, the current generation of NCAA staffers has evinced a laudable Van Valkenburg-style desire to at long last clamber up out of evaluative grooves that had become constricting and myopic.
In 2017 we are all the happy beneficiaries of an information economy marked by extraordinary abundance. The NCAA’s challenge now is not to shed light or to create new metrics but to articulate a selection vision that’s open to and animated by today’s information wave and, especially, to align its practices with this vision.
Performance, scheduling, and Gasaway’s Law
The RPI’s primary operational weakness is revealed by the fact that the single most statistically revelatory moment in the entire season is the day when a team releases its schedule.
Ordinarily this trait would be seen as running afoul of what I refer to with winning modesty as my Law:
There is zero correlation between a coach’s ability to schedule to a selection committee’s liking and the team’s ability to play basketball.
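The schedule-dominance the Law warns about is baked right into the RPI's arithmetic: only a quarter of the number is a team's own winning percentage, while the remaining three-quarters is its opponents' (and opponents' opponents') records. Here's a minimal sketch of that standard weighting, leaving out the NCAA's refinements (excluding the game in question from each opponent's record, and the later home/road adjustments):

```python
def rpi(wp, owp, oowp):
    """Standard RPI weighting.
    wp:   team's own winning percentage
    owp:  opponents' winning percentage
    oowp: opponents' opponents' winning percentage
    Note that 75 percent of the result is schedule, not performance."""
    return 0.25 * wp + 0.50 * owp + 0.25 * oowp

# Two hypothetical teams with identical 20-5 records (wp = 0.800) land in
# very different places depending entirely on whom they chose to play:
strong_schedule = rpi(0.800, 0.600, 0.550)  # 0.6375
weak_schedule   = rpi(0.800, 0.450, 0.480)  # 0.5450
```

The team's own play moves only the first term; the day the schedule is released, the other two terms are already largely spoken for.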
It could be seen as something of a problem, for example, to have known last October, before one possession of basketball had been played, that a preseason AP top-25 team like Indiana would likely have a terrible RPI in 2016-17. We have chosen instead to adopt modes of talking that minimize what would otherwise be seen as a problem…
Indiana and rating systems
January 7, 2017
Team Rankings  Sagarin  KenPom  Massey  KPI  RPI
24             25       29      50      79   135
We say Tom Crean shouldn’t have scheduled home games against Austin Peay, Mississippi Valley State, and SIU Edwardsville. Well, if you’re a Hoosier fan yearning for entertainment, of course Crean shouldn’t have scheduled that way. But the material point is that there was never a moment in the last four decades when the NCAA stepped to the mike and said: Hey, in addition to that whole “best teams” thing, we’re now in the schedule-preferring business too.
Today we talk about resumes and “deserving” versus “best” teams, which, of course, is fine. It’s just interesting that we arrived here alongside or maybe even (we can never know for sure) because of the RPI’s severe mathematical foregrounding of schedule at the expense of performance. If we really think this is the best way to talk, if we truly believe this is the most discerning and revealing lens through which to evaluate the sport, then the NCAA’s analytic Prague Spring affords us the perfect moment to avow it openly and with precision.
Call this path “status quo, but with way more accuracy on both the ‘mission statement’ and ‘stat’ sides.” If this is indeed chosen as the best way forward, I know from personal experience that the uncommonly bright minds gathering in Indianapolis are uniquely capable of crafting a composite rating that is: 1) fed from multiple independently sourced numerical springs and thus self-regulating against IU-style outliers; 2) far superior to the RPI descriptively; and 3) immune to Pac-12-in-2016-style metric-gaming operationally.
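For illustration only (no committee has blessed any such formula), one simple way a multi-source composite can self-regulate against a single outlier is a trimmed mean of ranks: drop the best and worst rank before averaging. The numbers below are Indiana’s rankings from the sidebar above.

```python
def trimmed_mean_rank(ranks, trim=1):
    """Average the ranks after dropping `trim` values from each end."""
    ordered = sorted(ranks)
    kept = ordered[trim:len(ordered) - trim]
    return sum(kept) / len(kept)

# Indiana, Jan. 7, 2017: Team Rankings 24, Sagarin 25, KenPom 29,
# Massey 50, KPI 79, RPI 135.
indiana = [24, 25, 29, 50, 79, 135]
print(trimmed_mean_rank(indiana))  # 45.75
```

A straight average of all six systems gives Indiana a composite rank of 57.0; trimming drops the RPI’s 135 (and Team Rankings’ 24) and yields 45.75. No one system, gamed or broken, can drag the composite very far on its own.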
Touching these three bases and drafting a more candid statement at the top of the committee’s charge (“best teams weighted with a frank preference for ambitious non-conference scheduling”) would be a marked improvement over what we have now. I for one would applaud loudly and long. Perfect is the enemy of the good.
I would only add one further note to my time capsule to be opened in a post-RPI utopia….
Maybe schedules aren’t a math problem, maybe they’re a programming opportunity
Clemenceau said war’s too important to be left to the generals, and that’s kind of how I’ve come to feel about scheduling and coaches. College basketball’s unusual among major spectator sports worldwide in turning over fully 35 percent of its games to the teams themselves and saying, “Here, play whomever you wish.”
In theory that could work out really well and be empowering, but in practice — possibly because we have 351 teams, a postseason tournament with 68, and a hard-wired tendency to lionize only the best six or so — what it gets us is what we really do see every November and December.
The best teams play a big splashy game right off the bat someplace eccentrically alien like on an aircraft carrier, and a few weeks later they’ll participate in a neutral-site holiday tournament. Otherwise, the good teams mostly stay home and fatten up on cupcakes, waiting around for conference play. The variability in strength-of-schedule introduced by this model is precisely the opening the RPI needed to get its nose under the tent. A four-decade reign of evaluative mischief and confusion ensued, and here we are.
But instead of framing this matter of schedule-based variability purely as a pitched battle between good math (yay, KenPom) and bad (boo, RPI), maybe we could address the schedule itself. That game Villanova played at Purdue was a drop of true-road-game rain on a November desert, and, at root, maybe this is just a collective-action problem. No single coach wants to be the dumb one who plays an entertainingly tough schedule and loses a bunch of games. Well, what if every coach were required to be entertaining?
While keeping the overall number of games and out-of-classroom hours constant, can we play more conference games? Could every major-conference team get at least one true home or road game against at least one team in each of the other five major conferences? Or give mid-majors a prescribed number of shots — including home games — against teams in the six major conferences? Can we move forward on that whole “Champions League for college hoops” thing? Could we standardize schedules through a tier-based system across D-I?
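To make one of those questions concrete, here is a toy sketch (purely hypothetical, not an NCAA proposal) of the “one true game against each of the other five majors” idea: for every pair of the six major conferences, teams are paired off so that each team draws exactly one opponent from each other league, with home and road games alternating across the slate.

```python
import itertools
import random

MAJORS = ["ACC", "Big 12", "Big East", "Big Ten", "Pac-12", "SEC"]

def cross_conference_draw(rosters, seed=0):
    """rosters: dict mapping conference name -> list of team names
    (equal roster lengths assumed for simplicity).
    Returns a list of (home, away) pairings in which every team plays
    one team from each of the other five major conferences."""
    rng = random.Random(seed)
    games = []
    for conf_a, conf_b in itertools.combinations(MAJORS, 2):
        away = rosters[conf_b][:]
        rng.shuffle(away)  # vary the matchups year to year
        # Alternate who hosts so true road games are spread across
        # both leagues rather than piling up on one side.
        for i, (team_a, team_b) in enumerate(zip(rosters[conf_a], away)):
            games.append((team_a, team_b) if i % 2 == 0
                         else (team_b, team_a))
    return games
```

This is, again, only a sketch under simplifying assumptions (equal conference sizes, no travel or TV constraints), but it shows the underlying task is a tractable pairing problem rather than an imponderable.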
I don’t know whether any such step would increase the sport’s visibility long before March, but I suspect “we’ve always done it this way before” does not necessarily describe best practices when it comes to scheduling. Van Valkenburg had a doggedly experimental disposition, one that led him through at least 14 different statistical options before he found one that he was willing to present to the men’s basketball committee. We should observe the RPI’s potential eclipse by trying to be more like the guy who invented the RPI.