PINKOS RETURNS: For the 2005 Football Season

Joe Pinkos, regarded as the best at predicting football games and rating teams in the state of Florida, will return for his fourth season. Pinkos will give you his pre-season rankings and a look at the first week of the season.


Ten Frequently Asked Questions About Joe Pinkos' Power Ratings

Basically, what is your rating system all about?
The objective of my ratings is to evaluate Florida's high school football teams solely on demonstrated performance on the gridiron. Unlike polls, my ratings are objectively based on an arithmetic formula, which is uninfluenced by regional biases, lobbying, sentimental favorites, classification/school size, or, once the season gets underway, prior reputation. My system is based on power ratings.

What is a power rating?
A power rating is a numeric representation of a team's current relative strength. To determine the projected or forecasted margin of victory between any two teams, one simply has to figure the difference in the respective power ratings.
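As a concrete illustration of that arithmetic, here is a minimal sketch in Python. The team names and rating values are hypothetical, not taken from the actual ratings:

```python
# Illustrative sketch of the power-rating arithmetic described above.
# The team names and rating values below are made up for illustration.

ratings = {
    "Team A": 78.5,
    "Team B": 64.0,
}

def projected_margin(team1: str, team2: str) -> float:
    """Forecasted margin of victory for team1 over team2:
    simply the difference of the two power ratings."""
    return ratings[team1] - ratings[team2]

print(projected_margin("Team A", "Team B"))  # 14.5: Team A favored by 14.5
```

A positive result means the first team is favored by that many points; a negative result means it is the underdog.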

Are the 2004 final power ratings used as the starting point for the 2005 season?
No. A high school football team's performance can change dramatically from one season to the next.

How are the 2005 season opening power ratings determined?
The 2005 season opening power ratings are arithmetically determined based on historical power rating data and trend identification and analysis, which considers performance in 2005 spring games and preseason kickoff classics. The season opening power ratings are calibrated to allow comparison between different seasons and different weeks within a season.

How are the power ratings computed during the season?
Throughout the season the power ratings are computed by a formula based on strength of opposition and margin of victory weighted toward more recent performances.

My rule is that the power ratings be "100% politically correct" through completion of the first three weeks of the season. This means that no team will be rated lower than any team it has beaten. The rationale for the power ratings being politically correct through the completion of Week 3 is that it is too early in the season to confidently label any game's outcome as a genuine upset, i.e., more unlikely than likely to be repeated if a rematch were played in the current week.

Beginning with the completion of Week 4, the "100% politically correct rule" is no longer totally appropriate. That's because some teams have improved more than others for a variety of reasons: lineup adjustments, recovery from injury, the differing maturity rates of their young student-athletes, etc. Beginning with the completion of the fourth week, a team's power rating is no longer inhibited or capped by a loss. This can result in Team B being rated higher than Team A even though Team A beat Team B earlier in the season. Should that happen, it means that my ratings indicate that Team B would win a rematch with Team A in the current week. It has been known to happen in the playoffs! Again, my intention is that the power ratings reflect teams' current relative strength.
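The "no team rated lower than a team it has beaten" constraint can be pictured as a simple post-processing pass over the ratings. The enforcement method shown here (raising a winner's rating to match the defeated team's) is an assumption chosen for illustration, not the actual adjustment used in the ratings:

```python
# Hypothetical sketch of the "100% politically correct" rule applied
# through Week 3: no team may be rated below a team it has beaten.
# Bumping the winner up to the loser's rating is an illustrative
# choice, not the real formula.

def enforce_rule(ratings, results):
    """Repeat until no winner is rated below a team it defeated.
    results is a list of (winner, loser) pairs."""
    changed = True
    while changed:
        changed = False
        for winner, loser in results:
            if ratings[winner] < ratings[loser]:
                ratings[winner] = ratings[loser]
                changed = True
    return ratings

# Team A beat Team B but is currently rated lower, so A is raised:
ratings = {"A": 70.0, "B": 75.0, "C": 60.0}
print(enforce_rule(ratings, [("A", "B")]))  # A becomes 75.0
```

The loop repeats because one adjustment can create a new violation further up a chain of results (A beat B, B beat C), which is why the pass runs until nothing changes.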

Will a team's power rating always rise the week after a win, and fall the week after a loss?
No. Generally, an underdog that scores an upset or loses by fewer points than projected, and a favored team that wins by a greater margin than forecasted, can expect a rise in its power rating. Under those scenarios, either or both teams' power ratings would at worst remain unchanged. Conversely, a favored team that either loses or prevails by a margin of victory less than expected, and an underdog that falls by a greater margin than forecasted, can anticipate a drop in its power rating. Under those circumstances, either or both teams' power ratings would at best remain unchanged.
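The direction of those adjustments can be illustrated with a small sketch. The sign logic is an assumption drawn from the description above (a rating moves in the direction of performance relative to the forecast), not the actual formula:

```python
# Illustrative sketch (assumed logic, not the real formula): a team's
# rating moves in the direction of its performance relative to the
# forecast, so beating the projected margin raises the rating,
# falling short lowers it, and matching it leaves it unchanged.

def rating_change_sign(actual_margin: float, projected_margin: float) -> int:
    """+1 if the team outperformed the forecast, -1 if it
    underperformed, 0 if the result matched the projection exactly.
    Margins are from this team's perspective (negative = loss)."""
    diff = actual_margin - projected_margin
    return (diff > 0) - (diff < 0)

# A 10-point favorite that wins by only 3 sees its rating drop:
print(rating_change_sign(3, 10))   # -1
# A 7-point underdog that loses by just 2 outperformed the forecast:
print(rating_change_sign(-2, -7))  # 1
```

Note that the favorite in the first example still won the game, yet its rating falls, exactly the behavior the answer above describes.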

Are your power ratings measured on a scale of 0 to 100?
No. It is possible to achieve a power rating greater than 100 or have a power rating dip below zero, though it is my rule that no team's power rating is less than zero for Week 1. Negative power ratings are attributable to the many small schools that have started football programs since the establishment of my power ratings system. Years ago, no teams were rated below zero. I could eliminate negative power ratings by recalibrating my system (adjusting all ratings upward), but I prefer not to, in order to keep ratings easily comparable from season to season.

Why don't your game forecasts include an allowance for home field advantage?
Simply because my research doesn't indicate that a home field additive (for example, plus 3 points) is appropriate for Florida high school football. My research doesn't identify a home field advantage that can be quantified and uniformly applied to all games, or even some portion of the season's games, such as the later rounds of the state playoffs.

Generally, what kind of success can be expected from your game forecasts?
Aside from accurately projecting margins of victory, expect my forecasts to identify the winner in 80% or more of all games played over the course of the entire season. While I believe my power ratings to be a better measure of team strength than you'll find from any other source, some early season turbulence in the ratings should nonetheless be anticipated. Typically, the power ratings become more accurate as the season progresses.

Specifically, how did your forecasts fare last season?
Last season, my forecast for the state championship games was a perfect 7-0, which bested all published forecasts, including the picks of The Gainesville Sun's entire panel of eleven statewide media experts, both individually and as a consensus.

After the final week of the regular season, but prior to the start of the playoffs, I picked six (6) of the seven (7) eventual 2004 state champions based on those teams occupying the top spot in my power rankings for their respective classifications. I missed only Immokalee (ranked 3rd in my final regular season rankings, ranked 5th by the FSWA poll), but quickly recovered by picking the Indians to beat Madison County in the 2A state title game. Among all sources of pre-playoff picks, I was the only one to select Killian (ranked #7 in Class 6A by the FSWA) and Lakeland to win state titles. By comparison, a statewide pre-playoff panel of high school football writers assembled by The Miami Herald could muster only three (3) correct picks of eventual state champions. The FSWA (Florida Sports Writers Association) final poll and the Lazindex each had only two (2) of the eventual 2004 state champions ranked at the top of their respective classifications at regular season's end.

For the entire state playoffs, my forecast record was 150-34, or 81.5% correct, which easily topped the Lazindex, serving as an encore to my regular season whipping of that same index.
