Assigning a specific power rating to each of the 353 Division I college basketball teams is tricky business.
Not only is the workload extreme, but the work product is easily impeached.
It's more artwork than science in November and more science than artwork in March.
Of course, the mathematical assessments crystallize with more games and more data, but relying on one number to capture the changing nature of young adults playing amateur basketball is a substandard way to view the college hoop landscape.
Ken Pomeroy's popular model treats every possession as holding the same value and views the entire season as one long game, and the hole in that system is obvious.
A possession in the final minute of a 75-55 game cannot be viewed through the same prism as the first possession of an overtime game tied at 65.
Score context matters.
Moreover, a college basketball team accorded an abstract power rating of 80 may never play a single game at that level.
Assume a team played five games that earned a rating of 90 on a scale of 100 and five games at 70.
Would you feel confident attaching a power rating of 80 to that squad?
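The objection can be made concrete with a quick calculation, using the hypothetical game-by-game ratings from the example above:

```python
from statistics import mean, pstdev

# Hypothetical game-by-game ratings: five games at 90, five at 70.
game_ratings = [90] * 5 + [70] * 5

avg = mean(game_ratings)       # the single "power rating" the model reports
spread = pstdev(game_ratings)  # the volatility that number hides

print(avg, spread)  # 80 10.0 -- yet the team never once played at 80
```

The average says 80; the spread says this team is a 90 or a 70 on any given night, and never the number on its label.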
Additionally, do you want to assign a home-court advantage of 3 or 3.5 points for every game on the team's schedule?
Why not a more nuanced rating for the site, spot and situation to reflect the atmosphere of the arena, the importance of the game and the level of competition?
A game against a hated rival in front of a sold-out arena on a Saturday night with ESPN cameras rolling figures to provide more of a boost than a mid-week game against a non-conference cupcake in a half-filled building when the students are on semester break.
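The site-and-situation idea can be sketched as a simple adjustment function. The base value and every increment below are illustrative placeholders, not calibrated figures:

```python
def home_edge(base: float = 3.0,
              sellout: bool = False,
              rivalry: bool = False,
              national_tv: bool = False,
              students_on_break: bool = False) -> float:
    """Adjust a flat home-court number for site, spot and situation.

    All values here are hypothetical guesses for illustration only.
    """
    edge = base
    if sellout:
        edge += 0.5           # sold-out building
    if rivalry:
        edge += 0.5           # hated rival in town
    if national_tv:
        edge += 0.25          # cameras rolling
    if students_on_break:
        edge -= 1.0           # half-filled arena over semester break
    return edge

# Saturday-night rivalry game vs. a mid-week cupcake over the break:
print(home_edge(sellout=True, rivalry=True, national_tv=True))  # 4.25
print(home_edge(students_on_break=True))                        # 2.0
```

The point is not these particular numbers but the shape of the function: one flat home-court constant cannot describe both of those Tuesday nights and Saturday nights.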
Furthermore, what happens to a college basketball team in the game following a titanic showdown?
How do power ratings account for a letdown spot?
Should a college hoop team have one power rating for home games and another power rating for road games?
Power ratings carried over from season to season are less meaningful today than ever before, given the ever-changing rosters in the age of college basketball free agency.
More than 2,000 players changed teams over the past three years compared to 1,300 transfers over the three-year period between 2008 and 2010.
Here's the central question regarding college basketball power ratings: What's the range between a team's best effort and worst effort and what factors influence whether a team will play its "A" game or throw a dud?
Sophisticated sports bettors have been asking these questions for decades, while the NCAA has only recently scrambled to find the best way to quantify the teams most worthy of postseason play.
For years, the NCAA Selection Committee has struggled in two major categories: 1) Identifying 36 at-large berths for the bulky 68-team tournament field and 2) Assigning proper seeds to every member of the tourney.
The NCAA misses the mark by placing foolish restrictions on numerical evaluations in three areas of the relatively new NET (NCAA Evaluation Tool) rankings.
In the NCAA's NET rankings, adopted before the start of last season, a 10-point win is considered exactly the same as a 20- or 30-point win.
Nonsense.
Why ignore any development on the playing field?
Simply employ a diminishing-returns principle that recognizes lopsided scores and places them in their proper light without ignoring them completely.
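One way such a diminishing-returns rule might look: full credit for roughly the first ten points of margin, then logarithmically shrinking credit beyond that. The 10-point scale is an illustrative parameter, not anything the NET actually uses:

```python
import math

def adjusted_margin(margin: float, cap_scale: float = 10.0) -> float:
    """Diminishing-returns credit for margin of victory.

    Full credit up to `cap_scale` points; logarithmically shrinking
    credit beyond. `cap_scale` is an illustrative choice, not a
    calibrated parameter.
    """
    sign = 1 if margin >= 0 else -1
    m = abs(margin)
    if m <= cap_scale:
        return sign * m
    return sign * (cap_scale + cap_scale * math.log(m / cap_scale))

# A 30-point win counts for more than a 10-point win -- just not 3x as much:
print(round(adjusted_margin(10), 1))  # 10.0
print(round(adjusted_margin(20), 1))  # 16.9
print(round(adjusted_margin(30), 1))  # 21.0
```

A blowout still says something about the winner; this shape simply refuses to let a dead-ball final minute of a 30-point game count triple.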
Furthermore, a game played in early November is assigned the same value as a game played in late February.
This is another imprudent feature of the NET ratings system.
Recent form is a critical factor in handicapping sports.
The NCAA champion is not always the best team but rather the team that's playing best, the team peaking at the season's most important time.
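A recency-weighted average is one standard way to capture the idea that late-February form matters more than early-November form. The half-life below (in games) is an illustrative guess, not a calibrated parameter:

```python
def recency_weighted_rating(game_ratings, half_life=5.0):
    """Exponentially weight recent games more than early-season games.

    `game_ratings` is ordered oldest -> newest. A game's weight halves
    for every `half_life` games further back in the past; `half_life`
    is an illustrative choice.
    """
    n = len(game_ratings)
    weights = [0.5 ** ((n - 1 - i) / half_life) for i in range(n)]
    return sum(w * r for w, r in zip(weights, game_ratings)) / sum(weights)

# A team that opened at a 70 level and closed at a 90 level rates
# above its flat average of 80:
early_then_late = [70] * 5 + [90] * 5
print(round(recency_weighted_rating(early_then_late), 1))  # 83.3
```

The same schedule played in reverse order would rate below 80: the model rewards the team that is peaking, not the team that peaked at Thanksgiving.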
The champion is crowned in early April, not late November, and every NCAA basketball champion since 1985 has been required to finish the season by winning six consecutive games.
Besides, the NCAA rewards its institutions with fame and fortune based on tournament wins accrued in March, not holiday tournament championships in December.
One last omission in the NET rankings: A team's travel schedule and rest days are not calculated into the NCAA formula.
Teams from power conferences load up their non-conference schedules with low-level D-1 teams that are forced to schedule away games as a way to balance their athletic budgets.
Why not downgrade the performance of a rested Power 5 team when it defeats a squad playing its 13th consecutive road game?
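A rest-and-travel adjustment could be sketched along these lines; every value is a hypothetical placeholder, not a calibrated figure:

```python
def rest_adjustment(rest_days: int, consecutive_road_games: int) -> float:
    """Point adjustment for rest and travel -- illustrative values only."""
    adj = 0.0
    if rest_days >= 3:
        adj += 0.5      # well-rested bonus
    elif rest_days <= 1:
        adj -= 0.5      # short-rest penalty
    # quarter-point penalty per road game beyond the second straight
    adj -= 0.25 * max(0, consecutive_road_games - 2)
    return adj

# Rested Power 5 host vs. a visitor on its 13th straight road game:
print(rest_adjustment(rest_days=4, consecutive_road_games=0))   # 0.5
print(rest_adjustment(rest_days=2, consecutive_road_games=13))  # -2.75
```

Even a crude correction like this separates a win over a fresh opponent from a win over a road-weary one; the NET formula makes no such distinction.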
These are not the only issues with math-based formulas.
Computer models also struggle to assess coaching prowess, roster versatility, team morale and under-the-radar injuries.
These critical handicapping factors are sometimes labeled "intangibles."
More nonsense.
In sum, forecasting models are only as good as the modeler's understanding of the subject.
Poor-quality inputs produce faulty outputs, increasing the likelihood of misleading results or useless predictions.
As they say in the world of computer science, "Garbage in, garbage out."