Computer Ratings, The BCS And Boise State

The BCS formula in college football is widely misunderstood. By learning how the system operates, fans can not only better understand the computer rankings and how they work, but actually enjoy them!

Boise State has yet to play its first game of the 2009 season, but most of us know that this year's team has the potential to return to another Bowl Championship Series (BCS) bowl game.  Though the BCS system continues to be chided by fans and many in the media, it is the system we must live with at present.  As most of us already know, this system is composed of three components:  the USA Today Coaches Poll (which consists of 61 coaches), the Harris Poll (a motley crew of 114 college football "experts"), and a composite of six computer polls (Kenneth Massey, Jeff Sagarin, Richard Billingsley, Anderson and Hester, Peter Wolfe, and the Colley Matrix).

What part do computer rankings play in the overall BCS ranking for a team?  

     Rather than recounting the full history of how computer polls made their way into the BCS mix, and how the six computer polls currently used by the Bowl Championship Series came into play, let's take a brief look at some of the highlights.  From 1998 to 2003, the BCS tinkered with a few different ranking models.  As we know, the Associated Press Poll was used as one of the human polls during that time (it remained until 2005).  In addition, there were set formulas for determining a team's strength of schedule (apart from the computers), a quality-win factor was used from 2001-2003, the number of losses was explicitly deducted from a team's ranking score, and anywhere from three to eight computer polls were used (the additional computer polls included the New York Times, Richard Dunkel, Herman Matthews/Scripps Howard, and David Rothman).

  Some of these computer rankings used margin of victory in their calculations and were later dropped from the BCS unless they could provide versions that did not use margin of victory.  Due to several controversies relating to the final rankings of some teams, and to the heavy weight that computer polls carried in the process, the entire system was redesigned in 2004 into the system we have now.

     Good, bad, or indifferent, each of the three polls currently used by the BCS holds equal weight (1/3 each) in the overall ranking score.  As an example, the final BCS rankings for 2008 (before any bowl games were played) had Oklahoma in the #1 spot with a .9757 BCS average.  This score was determined by averaging Oklahoma's Coaches Poll score of .9718, its Harris Poll score of .9554, and its computer poll score of 1.000 (.9718 + .9554 + 1.000 = 2.9272 and 2.9272/3 = .9757).  To determine a team's BCS score for each poll, the total number of votes or points received in the poll is divided by the maximum number of votes or points available in the poll.      
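
As a quick illustration, here is the averaging step in Python (a sketch using the Oklahoma figures above, nothing more):

```python
# Minimal sketch of the BCS averaging step, using Oklahoma's
# final 2008 regular-season numbers for illustration.

coaches = 0.9718    # USA Today Coaches Poll score
harris = 0.9554     # Harris Poll score
computers = 1.0000  # computer poll average (middle four polls)

# Each poll holds equal (1/3) weight in the final number.
bcs_average = (coaches + harris + computers) / 3
print(round(bcs_average, 4))  # 0.9757
```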

    When applied to the computer polls, a first-place ranking equals 25 points, as in the human polls.  But unlike the human polls, the highest and lowest computer poll scores for each team are thrown out when calculating a team's BCS computer poll average.  In the case of Oklahoma above, they received a 1.000 average in the computers because, after throwing out one of their high scores of 25 points and their lowest score, they still held the #1 spot in the four remaining computer polls.  Thus, they received 100 points, the maximum number of points available in the computer polls (100/100 = 1.000).
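
As a rough sketch (my own illustration, not the BCS's published code), here is how that trimming works, assuming each computer awards 25 points for a #1 ranking down to 1 point for #25:

```python
def computer_component(ranks):
    """Turn a team's six computer rankings into its BCS computer score:
    25 points for #1 down to 1 point for #25 (0 below that), drop the
    single highest and single lowest scores, then divide the remaining
    sum by the 100 points available (4 polls x 25 points each)."""
    points = [max(0, 26 - rank) for rank in ranks]
    middle_four = sorted(points)[1:-1]  # throw out one low and one high
    return sum(middle_four) / 100.0

# A hypothetical set of rankings consistent with Oklahoma's 2008 case:
# #1 in five computers and #2 in one still yields a perfect score.
print(computer_component([1, 1, 1, 1, 1, 2]))  # 1.0
```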

Computers vs. Humans  

     Both human and computer polls have their critics and supporters, and in many instances both sides are correct in their opinions.  It's interesting that the non-human polls are even called computer polls.  I mean, my own computer rankings (junkysrankings.com) could technically be called Junky's Excel Spreadsheet Rankings, because that is the program I use to enter all the data and to perform the calculations.  In reality, computer polls are nothing more than mathematical polls in which a computer is used to present the data.  Even more, computer rankings are truly human polls that use fixed conditions and methods, developed by their human creators, to evaluate the performance of college football teams.  The so-called human polls, on the other hand, are subject to the changing thought processes and whims of the individuals who participate in them (assuming they all put much thought into their decisions).

     Please understand that though I'm a big supporter of computer polls, I'm very aware of their shortcomings and, at the same time, realize that the human polls have some important advantages.  Many consider computer polls to be fairer because they evaluate all teams based on the same criteria.  In the same sense, however, this could be considered less fair because the computers aren't able to account for changing game plans, injuries, team improvement, unfortunate calls by referees, weather conditions, or, in the case of the BCS computers, margin of victory (Kenneth Massey's rankings actually claim to account for some of these factors, and Richard Billingsley attempts to control for changes occurring from week to week).  Those voting in the USA Today Coaches Poll and the Harris Poll can view the games on television or collect other information about the games, and then consider the effort made by the losing team or decide that one team got some lucky or unfortunate breaks.  You need look no further than the Oklahoma-Oregon game of 2006 for an example of where the subjective eye of pollsters would be important.

     It is the opinion of this college football fan that both computer and human polls are necessary when evaluating and ranking the teams.  There will certainly continue to be a healthy debate about which factors are critical and which are more important than others when it comes to computer polls, and that is good, because without a playoff we really have no definitive way to determine who the best teams are.

What do the computer rankings evaluate?  

     Each of the six current BCS computer poll creators provides some details regarding his methods, but some provide more information than others.  They each state the factors they feel make their rankings unique or better than other run-of-the-mill computer polls.  Some of them do this using highly technical jargon, while others speak in generalities.  If you aspire to work for NASA someday, or you like to spend your evenings reading the works of Einstein, Tesla, or Pythagoras, you might enjoy reading Wesley Colley's dissertation on how best to rank college football teams.  Otherwise, you can simply take each poll creator's word that he's got a well-run system and that the factors he uses are all that are needed.

     For those of us who like things put in simple terms, I will attempt to summarize how each of these six BCS computer polls ranks the 120 teams that make up what most of us know as Division 1-A college football (or what the NCAA likes to call the Football Bowl Subdivision).  Before looking at each computer poll, it is important to note that they all have some things in common.  For instance, all have some method of determining a team's strength of schedule (SOS), which is required by the BCS.  However, the SOS method is not standardized and is therefore left up to the poll creators to determine.  The six computers also look at a team's number of wins and losses in some way or another, and most of them (aside from Peter Wolfe) use a strictly retrodictive process in ranking the teams.  A retrodictive process is simply one in which currently accumulated data is used to justify or explain a team's ranking (rather than looking at its potential).  A predictive process, by contrast, could be used to determine where a team will be ranked at the end of the season, or to forecast the outcome of an upcoming game.  Jeff Sagarin's poll is one that has both retrodictive and predictive rankings for each team, though only the retrodictive ones are used for the BCS.

     One of the conditions the BCS sets on the computer polls is that they are not allowed to use margin of victory (MOV) in their BCS ranking formulas.  Kenneth Massey and Jeff Sagarin provide specific BCS-compliant rankings in addition to their regular rankings, which do factor in an MOV value.  On Sagarin's rankings website, his official BCS rankings for each team appear under a column labeled ELO-CHESS, his regular rankings that use an MOV value appear under RATING, and his predictive rankings appear under the PREDICTOR column.

     As I've already mentioned, each computer poll creator has his own philosophy on how best to rank the teams from 1 to 120.  Without going into the geeky details of how these calculations work (many of which are proprietary anyway), let's just look at some basic similarities and differences between the polls.  For each poll listed below, I will note the poll creator's/owner's name(s); whether separate rankings are provided beyond those used by the BCS (such as rankings using an MOV factor or a predictor); a brief explanation of which factors are most critical in the ranking calculations; how strength of schedule is determined for a team; whether 1-AA/FCS teams are ranked in the same pool as the 1-A/FBS teams (or simply excluded along with their games); whether there is any carryover in rankings from the previous season; whether conference strength (however defined) is used in the ranking formula; and who was ranked #1 and #2 at the end of the 2008-2009 season (post-bowl).

ANDERSON & HESTER

Creators/Owners:  Jeff Anderson and Chris Hester (part of BCS rankings since 1998)

Rankings Provided:  Retrodictive BCS-compliant Rankings Only.  Rankings are posted after the fifth week of the season is complete.

Key Ranking Factors (BCS Only):  Quality Wins, Wins and Losses (presumably), and Strength of Schedule

SOS Determination:  Opponents' (and opponents' opponents') W/L records, as well as opponents' conference strength (defined as a conference's record against non-conference opponents and the W/L records of the conference's non-conference opponents)

Ranking of 1-AA/FCS Teams:  Not ranked and do not appear to factor into 1-A/FBS rankings

Season Carryover:  None (each team starts with a clean slate each season)

Conference Strength Factor:  Used as part of a team's SOS calculation.  Conferences are individually ranked as well.

#1 Team from 2008-2009:  Utah Utes

#2 Team from 2008-2009:  Florida Gators

COLLEY MATRIX

Creator/Owner:  Wesley Colley, Ph.D. (part of BCS rankings since 2001)

Note:  Of the six BCS computer polls, the Colley Matrix seems to be the most mathematically intense.  In fact, Wesley Colley provides a detailed 23-page explanation of his rankings that can be accessed by anyone who visits his rankings website.

Rankings Provided:  Retrodictive BCS-compliant Rankings Only

Key Ranking Factors (BCS Only):  Wins and Losses and Strength of Schedule

SOS Determination:  Several iterations of opponents' ratings, starting from initial ratings derived from wins and losses, are run until there is little movement left in teams' ratings across the entire poll (for further explanation, visit his site and read "The Colley Matrix Explained"; a toy sketch of this kind of iteration appears after this listing)

Ranking of 1-AA/FCS Teams:  Not ranked and do not factor into 1-A/FBS rankings

Season Carryover:  None (each team starts with a clean slate each season)

Conference Strength Factor:  Not used, though individual conferences are ranked separately

#1 Team from 2008-2009:  Florida Gators

#2 Team from 2008-2009:  Texas Longhorns
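
For the mathematically curious, here is a toy sketch of the kind of iterative refinement described under SOS Determination above.  It is my own simplification in the spirit of Colley's method, not his actual code: every team starts even, and each pass recomputes a team's rating from its record plus its opponents' current ratings, until the numbers settle:

```python
def colley_style_ratings(schedule, tol=1e-9):
    """Toy Colley-style iteration: `schedule` maps each team to a list
    of (opponent, won) tuples.  Ratings start even and are refined
    until they stop moving."""
    ratings = {team: 0.5 for team in schedule}
    while True:
        new = {}
        for team, games in schedule.items():
            wins = sum(1 for _, won in games if won)
            losses = len(games) - wins
            opponents = sum(ratings[opp] for opp, _ in games)
            # Record plus opponents' current strength (Laplace-style).
            new[team] = (1 + (wins - losses) / 2 + opponents) / (2 + len(games))
        if max(abs(new[t] - ratings[t]) for t in ratings) < tol:
            return new
        ratings = new

# Tiny round-robin: A beats B and C, B beats C.
season = {
    "A": [("B", True), ("C", True)],
    "B": [("A", False), ("C", True)],
    "C": [("A", False), ("B", False)],
}
print(colley_style_ratings(season))  # roughly A: 0.7, B: 0.5, C: 0.3
```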

JEFF SAGARIN

Creator/Owner:  Jeff Sagarin (part of BCS rankings since 1998)

Rankings Provided:  Retrodictive BCS-compliant Rankings (ELO-CHESS), Retrodictive Rankings Using Margin of Victory (RATING), and Predictive Rankings (PREDICTOR)

Key Ranking Factors (BCS Only):  Wins and Losses, Quality Wins, Game Location, and Strength of Schedule (no information is provided as to how these factors apply, but their use can be inferred from his rankings and explanations)

SOS Determination:  Rating of opponent plus game location

Ranking of 1-AA/FCS Teams:  Ranked alongside all 1-A/FBS teams, and their games are factored into all rankings

Season Carryover:  Temporary; the previous season is removed from the calculations after a sufficient number of games have been played to "connect" all teams

Conference Strength Factor:  Uncertain if used in the formula, though individual conferences are ranked separately

#1 Team from 2008-2009:  Utah Utes (BCS) / Florida Gators (MOV)

#2 Team from 2008-2009:  Florida Gators (BCS) / USC Trojans (MOV)

KENNETH MASSEY

Creator/Owner:  Kenneth Massey (part of BCS rankings since 1999)

Rankings Provided:  Retrodictive BCS-compliant Rankings and Retrodictive Rankings Using Margin of Victory

Key Ranking Factors (BCS Only):  Wins and Losses, Game Location, Date of Game, and Strength of Schedule

SOS Determination:  Rating of opponent plus game location

Ranking of 1-AA/FCS Teams:  Not ranked, and games do not appear to factor into 1-A/FBS rankings

Season Carryover:  Temporary; the previous season ceases to influence the calculations after a certain number of games have been played

Conference Strength Factor:  Not used in the formula, though individual conferences are ranked separately

#1 Team from 2008-2009:  Utah Utes (BCS) / Florida Gators (MOV)

#2 Team from 2008-2009:  Florida Gators (BCS) / USC Trojans (MOV)

PETER WOLFE

Creator/Owner:  Peter R. Wolfe (part of BCS rankings since 2001)

Rankings Provided:  Retrodictive BCS-compliant Rankings with a Predictive Element

Key Ranking Factors (BCS Only):  Wins and Losses, Game Location, Mutual Opponent Comparison, and Strength of Schedule (implied)

SOS Determination:  Unknown

Ranking of 1-AA/FCS Teams:  Ranked alongside all 1-A/FBS teams, and their games are factored into all rankings

Season Carryover:  Unknown, but it appears no carryover is used

Conference Strength Factor:  Not used in the formula; unknown if conferences are ranked separately

#1 Team from 2008-2009:  Utah Utes

#2 Team from 2008-2009:  Florida Gators

RICHARD BILLINGSLEY

Creator/Owner:  Richard Billingsley (part of BCS rankings since 1999)

Rankings Provided:  Retrodictive BCS-compliant Rankings Only

Key Ranking Factors (BCS Only):  Starting Position (final rank from the previous season), Strength of Schedule, Win Value (based on the single game and accumulated from week to week), Losses, Game Location, and Head-to-Head Rules

SOS Determination:  Based solely on the rank and rating of opponents

Ranking of 1-AA/FCS Teams:  Not ranked, and games are not factored into 1-A/FBS rankings

Season Carryover:  Yes, though teams are able to quickly overcome the previous season's rankings

Conference Strength Factor:  Not used in the formula; unknown if conferences are ranked separately

#1 Team from 2008-2009:  Florida Gators

#2 Team from 2008-2009:  USC Trojans

What seem to be the most critical factors in determining if a team has a good computer ranking?  

     When all is said and done, there really isn't that much to understanding how a team ends up ranked very high in the computer polls versus the middle or bottom.  In fact, you'll find that most of the disparities between different computer polls show up not in the rankings of the best and worst teams, but in the teams in the middle of the pack.  When you think about it, teams like Duke, Idaho, and Louisiana-Monroe (perennial bottom-dwellers) find their way to the bottom of all the computer polls no matter what the minute differences are in the ranking formulas.  Where the computers seem to differ most is in comparing those teams with 4 to 8 wins (in a 12-game season).  For instance, is a 3-9 Mississippi State team better than a 7-5 Arkansas State?  Some computers would say yes, because they weight SOS, or some sort of conference-strength value, much more heavily than wins and losses.  Whether this is proper is up to each college football fan to decide, and many of these computer-ranking gurus will give you their spiel as to why their method is better than another.
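
To make the Mississippi State versus Arkansas State point concrete, here is a toy comparison, with made-up weights and made-up SOS numbers (not any real poll's method), showing how the weight a formula gives SOS can flip the order of two mid-pack teams:

```python
# Toy illustration only: two invented rating formulas and invented
# SOS values, not any actual BCS computer's method.

teams = {
    # (win_pct, opponents' win_pct as a stand-in for SOS)
    "Mississippi State (3-9)": (3 / 12, 0.650),  # brutal schedule
    "Arkansas State (7-5)":    (7 / 12, 0.400),  # soft schedule
}

def record_heavy(win_pct, sos):
    return 0.8 * win_pct + 0.2 * sos

def sos_heavy(win_pct, sos):
    return 0.4 * win_pct + 0.6 * sos

for formula in (record_heavy, sos_heavy):
    order = sorted(teams, key=lambda t: -formula(*teams[t]))
    print(formula.__name__, "->", order)
# record_heavy puts Arkansas State first; sos_heavy flips the order.
```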

     Before we go any further, we have to remember that six of the ten BCS bowl berths are guaranteed to the champions of the automatic-qualifying (AQ) conferences (ACC, Big East, Big 10, Big 12, PAC-10, and SEC).  So when discussing the polls and their influence on a team's BCS-worthiness, we have to realize that this only applies to at-large selections and to those teams attempting to qualify for the BCS Championship game.  We also need to accept the fact that each AQ conference, and all the non-AQ conferences combined, are limited to two BCS bowl spots.

     Remember, the six computer polls can't use margin of victory in their formulas, so winning big only helps in the human polls.  So when it comes to the computers, what separates the top teams from the middle of the pack, and further, what matters most when trying to qualify for an at-large BCS spot or the BCS Championship game?  If you look at just three critical factors, I think you can narrow things down pretty quickly.  The first factor is Total Number of Losses, the second is Quality Wins, and the last is Opponents' Win-Loss Record.   

     #1 – Number of Losses:  This probably seems obvious, but too many losses will disqualify a team from BCS contention faster than any other factor.  The threshold for an at-large BCS berth has historically been two losses (though the non-AQ teams that have made it have all gone undefeated, a non-AQ team could conceivably make it with one or even two losses).  Since 2004 (the year the current BCS formula started), only one at-large team has had three losses.  That team was Illinois, and they lost to USC in the Rose Bowl, 49-17.  Technically, the exact number of losses isn't really the key, because a three-loss team could conceivably play in the championship game in the unusual event that no other team had fewer than three losses.  However, there have historically been plenty of at-large candidates meeting the two-loss threshold, so it is statistically very difficult to earn a BCS berth with three or more losses.  By the way, if you are looking for a #1 or #2 ranking in the computers, then the threshold moves to one loss, though LSU won the championship in 2007 with two losses (since 2004, five teams participating in the championship game have had no losses, four have had one loss, and LSU is the only team to have had two losses).

     #2 – Number of Quality Wins:  Computer polls play a major role in verifying what qualifies as a quality win.  Preseason rankings and victories over perceived quality teams early in the season can skew the human polls in the first few weeks.  The computer polls do not have this problem, as the quality-win factor is free from prejudice.  This factor is important because it helps separate teams having the same number of losses and can boost teams above others that have more losses.

     LSU's appearance in the 2007 BCS Championship game with two losses supports this argument.  Using Anderson and Hester's results from that season, LSU had victories over five teams ranked in the top 25 of that poll.  Its only two losses were to #25 Kentucky and #32 Arkansas.  A simple rule of thumb when looking at quality wins is to count how many victories a team has over teams in the top 25, the top ½ of the rankings, and the bottom ½ of the rankings.  When comparing two teams with an equal number of losses (or within one or two losses of each other), the team with more wins against teams in the top 25 and the top ½ of the rankings will almost certainly be ranked ahead of the other, especially if the other has a majority of its wins against teams in the bottom ½ of the list.
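
Here is that rule of thumb as a sketch.  The helper is hypothetical, and the ranks are illustrative stand-ins shaped like LSU's 2007 resume, not Anderson and Hester's actual values:

```python
def quality_win_profile(beaten_opponents, rankings, field_size=120):
    """Bucket a team's wins by opponent rank: top 25, top half, and
    bottom half.  `rankings` maps team name -> rank (1 = best); the
    top-half count includes the top-25 wins."""
    profile = {"top_25": 0, "top_half": 0, "bottom_half": 0}
    for opp in beaten_opponents:
        rank = rankings.get(opp, field_size)  # unranked counts as last
        if rank <= 25:
            profile["top_25"] += 1
        if rank <= field_size // 2:
            profile["top_half"] += 1
        else:
            profile["bottom_half"] += 1
    return profile

# Illustrative ranks only: five top-25 wins plus two bottom-half wins.
ranks = {"Opponent A": 7, "Opponent B": 10, "Opponent C": 14,
         "Opponent D": 16, "Opponent E": 22,
         "Opponent F": 100, "Opponent G": 110}
print(quality_win_profile(list(ranks), ranks))
# {'top_25': 5, 'top_half': 5, 'bottom_half': 2}
```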

     #3 – Opponents' Win-Loss Record:  This factor is generally what the computer polls refer to as strength of schedule.  In Boise State's case, this factor alone has probably kept them from reaching the highest spots in some computer polls.  The Western Athletic Conference's performance in non-conference games has often been a hindrance.  Any conference that loses more non-conference games than it wins is going to have an adverse effect on all teams in that conference, no matter how good its top team is.

When a conference finishes its non-conference schedule, the win-loss record for each team in the conference, as far as it applies to SOS, has basically been set.  Because all teams in a conference play each other (with a few exceptions), the conference's record against itself is always going to be .500.  For instance, in the nine-team WAC, conference play will always produce exactly as many wins as losses (36 of each across the league's 36 conference games).  In reality, individual conference games generally have no bearing on a team's SOS score.  In some conferences, such as the Big 10 and Big 12, conference games would seem to have some bearing on SOS because not every team in the conference plays each other; but regardless, many of the computer polls completely disregard conference games when calculating SOS.

So, instead of the WAC winning 37 percent (11 out of 30) of its non-conference games against 1-A/FBS teams, as it did last season, let's assume the WAC were to win roughly 75 percent, or 23 of those 30 games.  If that were to occur, the SOS for each team in the conference would greatly improve and, in turn, boost every conference member's ranking in the computer polls.  Non-conference dominance is a major reason why teams in the SEC and Big 12 do so well in the computer rankings even when they have a significant number of losses after conference play.  Though playing soft non-conference schedules (such as those often found in the SEC) can adversely impact the rankings of teams in the conference, winning those non-conference games appears to have the greater influence.
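
A toy model of the point above (not any poll's actual SOS formula): if conference play always nets out to .500, then the win percentage across all WAC games, which is roughly what a conference-mate contributes to your SOS, moves only with the non-conference record:

```python
def wac_win_pct(nonconf_wins, nonconf_games=30, conf_team_games=72):
    """Win percentage across every game played by WAC teams.  The 36
    round-robin conference games contribute 36 wins and 36 losses
    (72 team-games), so only non-conference results move the number."""
    wins = nonconf_wins + conf_team_games / 2  # conference play is .500
    return wins / (nonconf_games + conf_team_games)

print(round(wac_win_pct(11), 3))  # last season's 11-of-30 -> 0.461
print(round(wac_win_pct(23), 3))  # a banner 23-of-30 year  -> 0.578
```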

     Though it's critical that conference-mates perform well in non-conference games, the success of a team's own non-conference opponents is even more important.  This is because more of the games affecting a team's SOS come from non-conference opponents than from conference opponents (the PAC-10 may be an exception).  If Boise State were to play four 1-A/FBS non-conference teams, those four opponents could each play up to 11 other 1-A/FBS teams during the season (assuming a 12-game schedule).  That would provide 44 games (four teams x 11 games), compared to a maximum of 32 from conference opponents (eight teams x four non-conference games each) in a 12-game schedule, assuming all non-conference opponents are 1-A/FBS members.
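
The game counting, spelled out as a back-of-the-envelope sketch under the same assumptions (a 12-game season, all opponents in 1-A/FBS):

```python
schedule_len = 12
nonconf_opponents = 4  # Boise State's non-conference slate
conf_opponents = 8     # the other eight teams in the nine-team WAC

# Each non-conference opponent plays up to 11 games against other teams.
games_from_nonconf = nonconf_opponents * (schedule_len - 1)          # 44

# Conference opponents' games against each other net out to .500, so
# only their four non-conference games each can move a team's SOS.
games_from_conf = conf_opponents * (schedule_len - conf_opponents)   # 32

print(games_from_nonconf, games_from_conf)  # 44 32
```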

     Games versus 1-AA/FCS teams are often looked down upon (and often do not count in calculating SOS), but these games can be helpful if the only other option is a game versus a 1-A/FBS team likely to have a losing record.  On the other hand, these games are detrimental if they could be replaced by 1-A/FBS opponents likely to have winning records.

 

What does this all mean for Boise State?

     For Boise State to appear in a BCS game this year, they cannot afford to lose more than one game, and they will probably need to go undefeated with other non-AQ teams (such as TCU) in the mix again.  It is unlikely they could play in the BCS Championship game, simply because not enough teams on their schedule appear (at present) to qualify as quality opponents (Oregon, Tulsa, and Nevada are the probable quality opponents).  If the WAC can have a banner year, with more victories than losses in non-conference play, this will greatly benefit Boise State and the WAC as a whole.  There are plenty of quality opponents in WAC non-conference games, and beating a good number of these teams is also key (some of these quality teams could include USC, Oregon, Stanford, Notre Dame, Cincinnati, LSU, Auburn, Missouri, Texas A&M, Ohio State, Wisconsin, BYU, Utah, and Tulsa).  Obviously, the human polls, making up 2/3 of the BCS ranking, will carry more weight than the six BCS computer polls, but a respectable or poor showing in the computers could make the difference if things get close.

In accumulating a high SOS, it is more important for a conference to win games than it is to schedule quality opponents.  A conference member scheduling a "bodybag game" is doing a disservice not only to itself but to its conference-mates.  A team's computer rating will always go up if it wins and down if it loses; it's that simple.

I will be providing ratings at my site during the season, so feel free to drop by and check on them.  If Boise State is in the hunt for a BCS bowl (as it has been in four of the past five years), we will chronicle their quest on a weekly basis on the Blue Turf Board.

