First, let's get to some important points:
1. These ratings are not best used to determine future performance past the next game played on the schedule. While I will release predicted records based upon these ratings, this is for discussion only. The rating for any team only applies until they play their next game. If State U has a rating of 100 on September 1, 2006, and Tech has a rating of 95 on September 1, 2006, these ratings do not mean that State should beat Tech by five points when they conclude the season on a neutral field in late November. The ratings are set up for these teams to either get stronger, get weaker, or neither during the season. Depth and scheduling can automatically increase or decrease these ratings. For instance, if Tech has a rating of 95 on September 21, and they win by exactly the PiRate-predicted margin, they may lose a point anyway because the depth part of their rating is lower than average. Regardless, I will break this rule and do exactly what I tell you not to do. What a hypocrite I am!
2. I have no input in determining the component sub-ratings that make up this rating. For instance, I cannot say to myself, "I like State's running backs more than Tech's, so I will raise them a point." I only crunch numbers that are available to me. Yes, I determine the constant values that go into these numbers, but these constants basically remain the same year-to-year. Adjustments are only made in the spring and then only to make the ratings more accurate.
3. When you view the ratings for our beloved Commodores, these figures will have absolutely NO partiality in them, and everything I have witnessed in August practices will have no bearing upon them. If, for instance, I know some things that the general public has not been made privy to, you will not get that information in the preview. The preview for Vanderbilt will be the same as the preview for Hawaii—impartial, using only statistical information available to me from Las Vegas and many other sources. In my heart, Vanderbilt's football players and coaches are number one and will continue to be so regardless of what the math reveals.
Now, to briefly explain what I do that is different from most other ratings, I try to break the game down into the sum of its parts. Where other prognosticators might say State will beat Tech because they have a better quarterback, I say what does that matter? The QBs aren't facing each other one on one. I compare State's offensive line to Tech's defensive line and vice versa. Those units will have to square off. I do the same thing for every possible scenario--running game against defense vs. the run, passing game vs. defense against the pass, etc.
After determining who has the advantage in each matchup, I assign a numerical value to that advantage. State's running game has a rating of X, while Tech's defense against the run has a rating of X-3, or three points weaker. After summing up all the differences, I throw in a few intangibles, such as home field advantage, revenge possibilities, having a bye week, etc. This brings me to a total difference, which then becomes my predicted point spread. Based on where those advantages exist, I also determine a possible final score.
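The matchup-sum approach above can be sketched roughly like this. The team names, unit ratings, and the three-point home edge are all invented for illustration; they are not real PiRate figures, and the real system compares far more units than these four.

```python
# Hypothetical unit ratings: higher is better. Only two offensive and
# two defensive units are shown; the real method covers every matchup.
state = {"run_off": 12, "pass_off": 9, "run_def": 10, "pass_def": 8}
tech  = {"run_off": 10, "pass_off": 11, "run_def": 9,  "pass_def": 10}

# Each offense squares off against the opposing defense; sum the edges
# from State's point of view.
edge = ((state["run_off"] - tech["run_def"])
        + (state["pass_off"] - tech["pass_def"])
        - (tech["run_off"] - state["run_def"])
        - (tech["pass_off"] - state["pass_def"]))

# Intangibles: home field, revenge, bye week, etc. Assume State is home.
edge += 3.0
print(f"Predicted spread: State by {edge:.1f}")
```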
Previously, I applied this to all the Division 1-A games each week and then looked to see where the Las Vegas spread differed from mine by a determined amount (which varied based on the imaginary wager).
Starting last year, I began to compare all teams as if they were playing an opponent that was perfectly average at every position and, for that matter, in every facet of the game. It allowed me to say that State's offensive line was X points better than the average defensive line. After comparing all parts of each team, I assigned each team a beginning power rating. The new ratings were easily updated each week by comparing predicted outcomes with actual results and factoring in other effects, such as what former opponents did, the new up-to-date strength of schedule (it changes for each team every week), and how the real score of the game compared with the stats (for instance, if one team scored three touchdowns with their defense, don't penalize the other team's defense for that; they weren't responsible).
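The weekly update described above can be sketched in the abstract: nudge a team's rating toward what the actual result implied. The damping factor and the simple error term are my assumptions for illustration; the actual PiRate update also folds in schedule strength and the stat-versus-score corrections mentioned above.

```python
# Illustrative weekly update: move the rating part of the way toward
# the result. A team that beats its predicted margin gains points; a
# team that falls short loses them.
def update_rating(rating, predicted_margin, actual_margin, damping=0.25):
    error = actual_margin - predicted_margin   # how wrong the prediction was
    return rating + damping * error

# A team rated 100 was predicted to win by 7 but won by 17.
new_rating = update_rating(100.0, predicted_margin=7, actual_margin=17)
print(new_rating)
```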
Two years ago, these ratings were among the best in the nation; with the aid of the PiRate ratings, I picked games at better than 72% accuracy against the spread. Last year, that number plummeted to a more realistic 58% (remember, if all games have the same monetary value, then anything better than 52.4% is a winning proposition at 11-10 odds).
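The 52.4% break-even figure is simple arithmetic: at 11-10 odds you risk 11 units to win 10, so the win rate at which wagers exactly break even is 11/21.

```python
# Break-even win rate at 11-10 odds: a winning pick nets +10 units,
# a losing pick costs 11. Solve p*10 - (1-p)*11 = 0  =>  p = 11/21.
risk, payout = 11, 10
break_even = risk / (risk + payout)
print(f"{break_even:.1%}")  # prints 52.4%
```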
Before I explain how a PiRate rating is figured, let me make one thing perfectly clear: Because I now cover Vanderbilt on an up-close and personal basis, I cannot include any Vanderbilt games on my PiRate pretend wager list. I will include their rating and the opposing team's rating and maybe make a general prediction, but they won't appear in the "picks" section of the PiRate Picks.
So, if you've read this far, here is how I go about rating the teams. I do this 119 times a year, and it does consume a great chunk of my summers.
A PiRate rating is divided into five different sub-ratings. The first rating looks at last year's final rating and adjusts that number by looking at which positions return starters and 2nd string players. This rating does not care about the talent level of the players. If the starting quarterback returns (five games started qualifies as a starter), the team gets more credit than if the 2nd team tight end returns. After adjusting for all 22 offensive and 22 defensive positions, I look at the returning place-kicker, punter, long snapper, punt returner, and kickoff returner. This sub-rating is looking basically at experience.
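A toy version of that experience sub-rating might look like the sketch below. The position weights and the starter bonus are entirely my invention; the point is only that a returning starting quarterback is worth far more credit than a returning backup tight end, with no regard for talent.

```python
# Hypothetical position weights and starter bonus (illustrative only).
POSITION_WEIGHT = {"QB": 3.0, "OL": 1.5, "TE": 0.5}
STARTER_BONUS = 2.0   # a returning starter (5+ starts) counts double

def experience_credit(returnees):
    """Sum credit for returning players, given (position, is_starter) pairs."""
    total = 0.0
    for pos, is_starter in returnees:
        weight = POSITION_WEIGHT.get(pos, 1.0)
        total += weight * (STARTER_BONUS if is_starter else 1.0)
    return total

# Returning starting QB plus a backup tight end.
print(experience_credit([("QB", True), ("TE", False)]))
```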
Sub-rating number two looks at talent instead of experience. Each player is assigned a rating of 1-10 for his ability. I do not do this myself; I get these evaluations from another source, but I do supply the parameters that convert this information into a rating. Once again, this sub-rating uses last year's final rating and adjusts from there. This is where the units of each team are compared with their opposing counterparts (offensive line vs. average defensive line, etc.).
Sub-rating number three is the same algebraic rating I have used since I was a wee lad in the early 1970s. For many years, it was the rating in its entirety. This sub-rating looks at many statistical factors, such as points per game (offense and defense), yards per game (offense and defense), returning starters, returning lettermen, strength of last year's schedule, strength of conference, and seven other intangibles. This sub-rating does not use the previous year's final rating as a starting point.
Sub-rating number four uses points per game and points per game allowed. It then adjusts by taking strength of schedule into account, moving those averages up or down depending on the schedule difficulty. For instance, if Team A has an average score per game of 24-22 and has played all top 25 teams, and Team B has an average score per game of 38-12 and has played a bunch of creampuffs, Team A's averages might adjust to 34-12, while Team B's averages might adjust to 28-22, making Team A the better team by 16 points. For the beginning of the season, I must use the previous year's final averages and strengths of schedule; as the season progresses, the previous year's averages and schedule strengths meld with the current season. By the sixth game, none of the previous year is considered.
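The phase-out of last season's numbers can be sketched as a simple blend. A straight linear weighting that reaches all-current-season by game six is my assumption; the source only says the old data is fully gone by the sixth game.

```python
# Blend last season's schedule-adjusted average with this season's,
# phasing the old data out linearly so it is gone by game six.
def blended_average(last_year_avg, this_year_avg, games_played):
    w_new = min(games_played / 6.0, 1.0)   # all current-season by game 6
    return (1 - w_new) * last_year_avg + w_new * this_year_avg

# Three games in, the blend is an even split of old and new.
print(blended_average(28.0, 34.0, 3))
```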
Sub-rating number five is similar to sub-rating number three. It is another algebraic formula, but with this one I apply different values. This was the second rating I ever invented, and it differs considerably from what is now sub-rating number three. However, in some years, it has been more accurate. Over any 10-year period, the accuracy of numbers three and five will be about equal.
After I have all the numbers for each of the five sub-ratings, I figure the average of these for each of the 119 I-A teams. If a team is new to I-A, I can only use those ratings that do not require the previous year's final rating. Once I have all 119 teams averaged, I make some slight adjustments based on conference strengths. This is done because sometimes the best team from the weakest conference is rated too high, while the worst team from the best conference is rated too low.
The final part of the equation is a simple normalizing process. I want the mean of the PiRate ratings to always be 100. That way, the number means something. A rating of 114 means the team is 14 points better than average, while a rating of 86 means they are 14 points weaker than average. The last step is to include a home field advantage of between three and seven points. This is almost the same every year; maybe a dozen teams will see this figure change.
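Normalizing to a mean of 100 is just a uniform shift, as in the sketch below. The three teams and their raw ratings are invented for the example.

```python
# Shift every team by the same offset so the mean rating is exactly 100.
ratings = {"State U": 104.0, "Tech": 96.5, "A&M": 93.5}

offset = 100.0 - sum(ratings.values()) / len(ratings)
normalized = {team: r + offset for team, r in ratings.items()}
print(normalized)   # every team moves by the same amount; mean is now 100
```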
So that's how the PiRate ratings are born every year. Starting in a few days, I will begin releasing the ratings conference-by-conference starting with the lowest rated league and continuing until I have previewed the highest rated league. The Southeastern Conference will be the last league previewed, even though they are not the highest rated league to start the season. Here's how the conferences rate, worst to best:
12. Sunbelt Conference
11. Mid-American Conference
10. Western Athletic Conference
9. Conference USA
8. Mountain West Conference
6. Big East Conference
5. Atlantic Coast Conference
4. Southeastern Conference
3. Big 12 Conference
2. Pac-10 Conference
1. Big 10 Conference
The top five conferences differ by tenths of a point. It is really more accurate to say they all start the season equally strong. There is a huge gap between the Sunbelt and the MAC. The MAC, WAC, and C-USA are relatively close in power. The Independents really shouldn't be classified as a league, but there is no really good method to place Army and Navy. Notre Dame could be rated with the Big 10, while Temple will soon be a member of the MAC.