Build Me Up, Tear Me Down, Part Three: All-Americans In Repose

A couple days after Signing Day, Peter Bean at Burnt Orange Nation guessed a graph of the ultimate value of five-star prospects relative to less heralded recruits would look something like this:

To me, and I suspect to most casual observers, that looks like self-evident common sense (aka "duh studies"). It never is, though, as I’ve been compelled to address before: there are enough people out there who think "Rhett Bomar" and then write, "recruiting rankings are a bunch of horsecrap." Every year, naysaying articles accompany Signing Day to point out its foolishness, a sentiment you'll find backed up on any message board, where posters inevitably end up saying things like, "It’s all a crapshoot," or "Come back to me in four years." People delight in calling "bullshit" on the so-called experts.

Taken at face value, such philistinism could be armed by a study that made its way around the 'sphere last week: a look at the composition of 2007 all-America teams by the descriptively titled site OmniNerd. The site counted 256 players named first-team, second-team, third-team or honorable-mention all-American last year by at least one of a dozen mainstream outlets. By high school star rating (according to Rivals and Scout), the numbers broke down like this:

Rivals          No Stars  One Star  Two Stars  Three Stars  Four Stars  Five Stars
Number of AAs   22        1         78         83           53          19
% of AAs        9%        0%        30%        32%          21%         7%

Scout           No Stars  One Star  Two Stars  Three Stars  Four Stars  Five Stars
Number of AAs   33        16        74         74           34          25
% of AAs        13%       6%        29%        29%          13%         10%
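
If you want to check the study's arithmetic, the percentage rows fall straight out of the raw counts. A minimal sketch in Python (the counts are OmniNerd's; the rounding to whole percents is mine):

```python
# Reproduce the percentage rows from the raw all-America counts above.
# Counts per star level are as reported by OmniNerd; 256 players total.
rivals = {"0-star": 22, "1-star": 1, "2-star": 78, "3-star": 83, "4-star": 53, "5-star": 19}
scout = {"0-star": 33, "1-star": 16, "2-star": 74, "3-star": 74, "4-star": 34, "5-star": 25}

for service, counts in (("Rivals", rivals), ("Scout", scout)):
    total = sum(counts.values())  # 256 honorees for each service
    shares = ", ".join(f"{level}: {100 * n / total:.0f}%" for level, n in counts.items())
    print(f"{service} (n={total}): {shares}")
```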

On the surface, it looks like the conclusion is that zero stars out of high school was a slightly better predictor of elite success than five stars, and both groups quivered in the path of the two-star and three-star prospects: in both cases, a substantial majority of last year’s all-Americans (62 percent by Rivals' opinion, 58 percent by Scout’s) fell into one of the two middle categories. The distribution in raw numbers is generally like a bell curve:

If you didn’t know better – or if, like the author, you assume a normal distribution in which most of a sample falls into the median ranges (two and three stars) and narrows at either end – you'd think this data suggests the "star system" is random, and not much more predictive of elite success than pulling prospects’ names out of a hat.

We do know better, though, and so, it seems, does the author, who did, late in the article, begin to look at the number of prospects within each "star" category. Unfortunately, he didn’t get very far with this crucial distinction, comparing only the number of five-star designations awarded by each of the two services in question over the last six years, and failing to compare those numbers with the number of prospects at other star levels because "the necessary data isn’t available."

Still, even without this information, he goes on to draw conclusions based on the raw numbers and the assumption of a normal distribution: the data is not "biased high" toward the four and five-star end of the scale, as it should be if those labels were accurate predictors of success; there’s an "abnormally large number" of 0-star outliers on the all-America teams; and finally, in the "Concluding Comments," the author writes,

...the prospect rankings exist with the sole purpose of predicting the likelihood a player [will] succeed at the generalized college level. Accurate rankings should take these factors into account and still show a much greater percentage of 5-star recruits making the All-America team than 0-stars.
- - -
with the clear implication that the rankings in question have failed to accomplish that purpose.

It’s a shame he didn’t look closer at the Web sites, both of which very clearly do distinguish the number of five-star, four-star and three-star prospects in any given class: add up each level from the sites’ team-by-team breakdowns, subtract those players from the class total (also available for each team), and you get the number of players rated two stars and lower. Do that over the last five years for Rivals, and the assumption of a "normal distribution" is fairly well destroyed:

Rivals: Star Distribution By Year (All I-A Teams)

                   2003   2004   2005   2006   2007   Total   % of Total
Total Recruits     2,648  2,644  2,907  2,700  2,812  13,711  100
Five-Star          32     33     33     37     36     171     1.2
Four-Star          270    239    272    329    340    1,450   10.6
Three-Star         905    607    752    836    911    4,011   29.3
Two-Star or Lower  1,441  1,765  1,850  1,498  1,525  8,079   58.9
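
For concreteness, here's a minimal sketch of that bookkeeping, run on the league-wide annual totals from the table above rather than the team-by-team pages (the per-team scraping is assumed, not shown):

```python
# Rivals publishes per-team counts of five-, four- and three-star signees plus
# a class total, so the two-star-or-lower group falls out by subtraction.
# These are the league-wide annual totals from the table above.
totals     = {2003: 2648, 2004: 2644, 2005: 2907, 2006: 2700, 2007: 2812}
five_star  = {2003: 32,   2004: 33,   2005: 33,   2006: 37,   2007: 36}
four_star  = {2003: 270,  2004: 239,  2005: 272,  2006: 329,  2007: 340}
three_star = {2003: 905,  2004: 607,  2005: 752,  2006: 836,  2007: 911}

two_star_or_lower = {
    year: totals[year] - five_star[year] - four_star[year] - three_star[year]
    for year in totals
}
print(sum(two_star_or_lower.values()))  # 8,079 two-star-or-lower signees
print(sum(totals.values()))             # 13,711 total recruits, 2003-07
```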

The distribution does not form a bell curve because there are far more players in the bottom categories of the scale than in the top categories. Only about 12 percent of all incoming players from 2003-07 were rated four or five stars, yet players in those categories (according to the Rivals rating) represented 28 percent of last year’s all-Americans. Note also that among the zero-star outliers in this study, 13 of 22 all-Americans in that category according to Rivals and 15 of 33 according to Scout were kickers and punters, who are destined to receive less attention and a lower rating than every-down players. Specialists are an afterthought in recruiting, and many times more likely to come from obscurity (sometimes literally, like out of the stands, or from the soccer team) and contribute than a player at a more scrutinized position, where some observable threshold of size and speed is required. If you remove the 24 kickers and punters who received all-America votes last season from the equation, the picture skews even more sharply toward the higher end of the rankings:

Odds of Becoming All-America By Star Level (Rivals Rank)

                     Number  % of All-Americans  Odds of an AA Vote
Total All-Americans  232     100                 1 in 59
Five-Star            19      8.2                 1 in 9
Four-Star            53      22.8                1 in 27
Three-Star           81      34.9                1 in 50
Two-Star or Lower    79      34.1                1 in 102
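
The odds column is nothing fancier than dividing each star level's five-year population by the number of those signees who drew an all-America vote. A sketch using the totals from the two tables above, kickers and punters already removed:

```python
# Per-level odds of earning an all-America vote: 2003-07 Rivals signees,
# with the 24 kickers and punters excluded (232 honorees remain).
signees = {"5-star": 171, "4-star": 1450, "3-star": 4011, "2-star or lower": 8079}
honorees = {"5-star": 19, "4-star": 53, "3-star": 81, "2-star or lower": 79}

for level in signees:
    print(f"{level}: 1 in {signees[level] / honorees[level]:.0f}")
# -> 1 in 9, 1 in 27, 1 in 50, 1 in 102

# If the stars carried no information, every level would converge on the
# overall rate instead:
print(f"random baseline: 1 in {sum(signees.values()) / sum(honorees.values()):.0f}")
# -> 1 in 59
```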

Five-star prospects were about three times as likely to earn an all-America vote as four-star prospects, five-and-a-half times as likely as a three-star prospect, and eleven times as likely as a zero, one or two-star prospect. If the rankings were random – if they were worthless – every level would show roughly the same 1-in-59 odds of producing an all-American. Three, four and five-star prospects all fared better than that, the top two levels much better; zero, one and two-stars were not close. If you account for the distribution of the star rankings, the results are nothing like a bell curve. They look like this:

Which is basically what Peter (and the recruiting services) predicted. I.e., common sense. Boring, I know.

But if one of the measures of the "sole purpose" of the guru rankings is their ability to "show a much greater percentage of 5-star recruits making the All-America team than 0-stars," then those rankings succeeded wildly. For predictive purposes, they are generally what they say they are.