Wherein SMQ examines the final regular season statistics in more than a dozen major categories to suss out who succeeded in what and how that statistical success correlated to overall success in terms of final record. I do not have the luxury of a high-powered supercomputer or degree-type qualification in mathematics or statistics, but analysis here will be driven as deep as my egghead, tinfoil cap curiosity and cell phone calculator will take it. That is to say, quasi-scientific at best. If you've ever said "the only number that matters is the one on the scoreboard" or anything to such effect, click here and don't be such a philistine.
- - -
Part One: Which stats correlate most closely with winning?
Part Two: What do the best teams do the best?
The short answer to the larger question posed by this edition of the Relevancy Watch is that the best teams don't do any one thing better as a group, especially in terms of offensive philosophy. Among ranked teams at the end of the year, there are running teams (West Virginia, Illinois) with horrible passing numbers, passing teams (Boston College, Texas Tech) with horrible running numbers, a few more balanced teams (LSU, Texas) that look just OK in each individual category but come out strong overall, and teams (Virginia Tech, Auburn) that are obviously terrible all around on offense. It's not essential to be good at everything; the numbers show there's more than one way to skin a cat, and score a touchdown.
Backing up Monday's defensive message, though, it's clear in the big picture that what's really important is not so much the offensive skinning as not getting skinned on the other end:
| Category | Avg. Rank of Top 10 | Avg. Rank of Top 25 | Top 25 in Category Top 20 | Top 25 in Category Bottom 40 |
|---|---|---|---|---|
| Pass Eff. Defense | 24.9 | 30.3 | 10 | 0 |
| Pass Eff. Offense | 26.1 | 36.3 | 11 | 3 |
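For the curious, the summary columns above reduce to very simple arithmetic. A minimal sketch, using invented team names and ranks rather than the actual 2007 data: average the national category rank of the teams in the final top 10 or top 25, then count how many ranked teams land in the category's best 20 or worst 40 (ranks 80-119 in a 119-team field).

```python
# Hypothetical example of the summary columns: category_rank maps each
# ranked team to its national rank in one statistical category.
# All names and numbers here are made up for illustration.
category_rank = {
    "Team A": 3, "Team B": 18, "Team C": 55,
    "Team D": 7, "Team E": 101,
}
top_10 = ["Team A", "Team B"]     # hypothetical final top 10 members
top_25 = list(category_rank)      # hypothetical final top 25 members

avg_top_10 = sum(category_rank[t] for t in top_10) / len(top_10)
avg_top_25 = sum(category_rank[t] for t in top_25) / len(top_25)

# "Top 20" / "Bottom 40" counts: ranked teams inside the category's
# best 20 or worst 40 (ranks 80 and up in a 119-team field).
in_top_20 = sum(1 for t in top_25 if category_rank[t] <= 20)
in_bottom_40 = sum(1 for t in top_25 if category_rank[t] >= 80)

print(avg_top_10, avg_top_25, in_top_20, in_bottom_40)  # → 10.5 36.8 3 1
```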
The omnipresent caveat here is "correlation is not causation": a good team that plays with a lead most of the time will inherently post better numbers against the run and worse numbers against the pass, because opposing offenses have to throw more often to catch up. Still, it hardly seems like a coincidence that ranked teams dominated the top of three of the four defensive categories with nary a representative among the dregs of any of them (the lone straggler against the run was Texas Tech, ranked 82nd), while, as noted, almost as many teams got away with being plainly bad at some or all facets of offense as excelled in them. None of the top teams in the country got away with being bad on defense: mediocre at worst, and with the lone (and predictable) exception of Texas Tech against the run, never worse.
This is a demonstrable trend that corresponds perfectly to the conclusions in Part One, which also showed a stronger correlation between winning and rush defense, pass efficiency defense and total defense than with any other measures.
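Since the argument leans on "stronger correlation," here's a minimal sketch of how one could quantify it without any supercomputer: a hand-rolled Pearson coefficient run on win totals against a category rank. The win totals and defensive ranks below are made up for illustration; a real version would run over the full 119-team field.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: five teams' win totals vs. their national rank
# in total defense (rank 1 = best). A strongly negative coefficient
# means a better defensive rank tracks with more wins.
wins         = [11, 10, 8, 6, 3]
defense_rank = [4, 12, 30, 58, 97]

r = pearson(wins, defense_rank)
print(round(r, 3))
```

Comparing that coefficient across categories (rush defense vs. rush offense, and so on) is exactly the Part One exercise.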
That's the big picture...
On a team-by-team basis, even outside the oppressive subjectivity of the polls, the result was largely the same:
| Team | Rush O | Pass O | Pass Eff. O | Total O | Rush D | Pass D | Pass Eff. D | Total D | TO Margin | Avg. |
|---|---|---|---|---|---|---|---|---|---|---|
In fact, if you added up the numbers for all the teams in the year-end polls, you could hit the final poll position of half of them within four slots based on their statistical finish alone:
| Team | Poll(s) Rank* | Avg. Stat Rank** | +/- vs. Polls |
|---|---|---|---|
| W. Virginia | 6 | 2 | +4 |
| Arizona State | 15 | 13 | +2 |
| Oregon State | 25 | 22 | +3 |
| Penn State | 26 | 15 | +9 |
* Top 25 teams based on average final rank in AP, Coaches and Blog Poll (hence the inclusion of Penn State, No. 25 by the coaches).
** Average Stat Rank ordered within ranked teams only.
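The arithmetic behind that table can be sketched in a few lines. Everything below is hypothetical (invented team names and ranks, not the actual 2007 numbers): average each team's national rank across the nine statistical categories, re-rank the ranked teams by that average per note ** above, and compare against poll position.

```python
# Sketch of the "+/- vs. Polls" column. All names and numbers here
# are hypothetical, for illustration only.
stat_ranks = {
    "Team A": [12, 40, 35, 20, 8, 15, 5, 6, 22],   # nine category ranks
    "Team B": [3, 70, 66, 30, 25, 40, 31, 28, 50],
}
poll_rank = {"Team A": 6, "Team B": 4}

avg_rank = {t: sum(r) / len(r) for t, r in stat_ranks.items()}
# Per note ** above, stat ordering is taken within ranked teams only.
order = sorted(avg_rank, key=avg_rank.get)
stat_pos = {t: i + 1 for i, t in enumerate(order)}

for team in poll_rank:
    # Positive diff = the team's stats outpace its poll position.
    diff = poll_rank[team] - stat_pos[team]
    print(team, poll_rank[team], stat_pos[team], f"{diff:+d}")
```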
- - -
Is it significant that the three teams that finished much better in the polls than on paper (Georgia, Tennessee and Auburn) are all from the SEC? Probably, though for different reasons depending on your perspective: either pollsters are so blinded by the conference's mythical speeeeeeed that they chronically overrate its members, or the SEC is just too tough to compare to other girlie conferences. Probably a little of both.
"Good teams have good stats" is not like, wow, I know, but going back to the point of the exercise, it's necessary to demonstrate these things with certainty.
These numbers and those presented in Part One, however, are just averages over the whole season, a macro look that might not necessarily hold true to form on a micro level. Where the first two editions of the Relevancy Watch have been large scale, top-down assessments, Part Three forward will deal with the micro on a game-by-game basis to build an analysis from the ground up.