
Big 12 Football Power Rankings, 10.06.2022

Please help me understand SP+

[Photo: Oklahoma v Texas | Ray Carlin/Icon Sportswire via Getty Images]

Like last week, the top and bottom of my rankings remain unchanged. It may have been a whimsical move to place the Jayhawks at the top of these rankings a couple of weeks ago, but as KU continues to pass test after test, they prove that even if losses come, they will not go straight back to the bottom. Speaking of the bottom of the rankings, OU finds itself dropping like a rock. Depending on how their game against fellow bottom-third team Texas goes, they may unseat West Virginia next week. After shellacking the Sooners, TCU was the week's biggest mover. Normally, beating a team as low in the rankings as OU would not move you up much, but the way TCU dominated the Sooners went beyond even the final score. TCU let up in the fourth quarter, or they could have scored 70, not 55.

Now for a tangent:

I know that around here, and across the sports stats nerd universe, Bill Connelly’s SP+ rating is held in high regard. I enjoy Bill’s attempt to use data to create a predictive model, but five games into the season, don’t his ratings seem to have the same fundamental flaw as every other ranking or rating system? Despite Bill’s generic claim that it is “forward facing” and “predictive”, it seems to give too much credit to teams that are historically good and not enough to those that are historically bad. As with AP voters, it looks like KU has to do a whole lot of extra-credit work to get its grades up because it is historically a D-student team, while OU, a historically A-student team, has to fail more exams before its grade actually goes down. For Big 12 teams, SP+ has fallen on its face so far this season. This week, OU is the SP+ #6 team in the country and #2 in the Big 12, behind only underperforming Texas. Bill tries to explain why OU is still so high according to his model, but his explanation flies in the face of what his model is trying to do. According to Bill:

OU has underachieved dramatically in the past two weeks, but the combination of preseason projections (which drop again by a good amount after six games) and strong performances over the first three weeks continue to prop the Sooners up a bit more than they perhaps deserve. Needless to say, they’ll fall more if they continue to play like they have of late.

This explanation left me thinking “WTF?” First, “preseason projections”? What the hell are those? I thought this was a data-driven model; am I wrong? What “preseason projections” are based on data? If he uses previous-season data, the high turnover of college personnel, both athletes and coaches, means that data cannot be very relevant. If it is not previous-season data, what is it? Polls? I am guessing not. Second, “they’ll fall more if they continue to play like they have of late” sounds like AP voters have invaded his brain and model. Don’t give me the “wait until next week” line. If you are trying to build a model that is “forward facing” and “predictive”, the most relevant results are not the ones that happened five weeks ago or last season, but the ones that happened most recently.
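For what it's worth, the mechanism Bill describes, a preseason prior whose weight fades as games are played, can be sketched in a few lines. To be clear, the decay schedule and every number below are my own illustrative assumptions, not SP+'s actual formula:

```python
# Illustrative sketch of a rating that blends a preseason prior with
# in-season results, with the prior's weight decaying toward zero as
# games accumulate. All numbers here are assumptions, NOT SP+'s real math.

def blended_rating(preseason, game_ratings, full_weight_games=6):
    """Blend a preseason prior with the average of per-game ratings.

    The prior's weight shrinks linearly and hits zero once
    `full_weight_games` games have been played.
    """
    n = len(game_ratings)
    prior_weight = max(0.0, 1.0 - n / full_weight_games)
    in_season = sum(game_ratings) / n if n else 0.0
    return prior_weight * preseason + (1.0 - prior_weight) * in_season

# A team projected elite (+25) that has graded out at only +5 per game
# across five games still carries some of its preseason shine:
print(round(blended_rating(25.0, [5.0] * 5), 2))  # prints 8.33
```

Under this toy schedule, a sixth game would wipe out the prior entirely, which matches the quote's claim that the projections "drop again by a good amount after six games", even if the real decay curve is surely different.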

For those who understand his model better than I do, which would be virtually all of you, enlighten me on what I am missing. Specifically, why does the model fail so many eye tests? This week’s examples: TCU is rated 21st while OU is 6th; Iowa State is 35th while Iowa is 29th. Yes, I understand the model thinks the higher-rated team has a brighter future, but in both cases, the lower-rated team beat the higher-rated team. In TCU’s case, there is no doubt they are the better team after last week’s game. In Iowa State’s case, they won on the road, and it was not a fluke. I would argue they would beat Iowa more than 5 times out of 10.

In my opinion, the fundamental flaw is the sample size. There just are not enough games played by each team during a season to create a reliable predictive model that is significantly better than the eye test we have used since the beginning of college football time. If you want to build a predictive model, base it on the historic coaching success of each school’s current staff and the overall talent rating of the athletes on each roster, and you will do as well as anything based on in-season game data.
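The sample-size complaint can be made concrete with a quick back-of-envelope calculation. Assuming (and this is purely my assumption for illustration) that a team's scoring margin swings by roughly two touchdowns from game to game, a handful of games pins down its true average margin only very loosely:

```python
import math

# Rough standard-error illustration: if a team's per-game scoring margin
# has a standard deviation of ~14 points (an assumption, roughly two TDs),
# how precisely does an n-game sample estimate its true average margin?

def margin_standard_error(per_game_sd=14.0, n_games=5):
    """Standard error of the mean margin after n_games independent games."""
    return per_game_sd / math.sqrt(n_games)

for n in (5, 12, 50):
    print(n, round(margin_standard_error(n_games=n), 1))
# 5  -> 6.3   (five games: true margin known only to within ~a TD)
# 12 -> 4.0   (a full season barely helps)
# 50 -> 2.0   (the sample size no college team ever gets)
```

The point of the sketch: even a full 12-game season leaves several points of uncertainty on a team's true strength, which is exactly why models lean on priors and why the eye test stays competitive.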

As usual, the ratings are just my opinion, and my opinion can be influenced by your input. This week in particular, I would love to hear your opinions about SP+ and where I am falling short in understanding it.

  1. KU: 5-0 and Game Day is on the way.
  2. OSU: Staying under the radar, but like KU just getting it done.
  3. TCU: Put a whoopin’ on the boys from Norman. Can they break the KU run this week?
  4. K-State: Maybe the OU win was not a fluke.
  5. Baylor: Lost to OSU and get an extra downgrade for those awful uniforms.
  6. Texas Tech: The K-State loss is not bad; TCU’s performance moved them down.
  7. Iowa St: Kinder uprights and one less hook would have only netted 20 points last week.
  8. OU: What is there to say? Maybe they should be one spot lower?
  9. Texas: A home win over the cellar dweller doesn’t say much.
  10. West Virginia: Lost the battle for the basement, so here they stay.