By Subvertadown
Tagged under: Accuracy, Current Season
January 7, 2026
[TL;DR: This was overall a great season for Subvertadown, maybe the best ever. Betting lines returned an overall positive ROI, and Survivor failures were fewer than normal. The D/ST models came out as the clear #1 for the season, and so did the QB model by my usual measurement. The Kicker models surpassed expectations for the entire second half of the season. The main need I see for off-season updates is in the Kicker models that cover the first month of the season.]
I’m happy to be back to share my final analysis of the predictability of the 2025 season.
The earlier months’ reports are here: month #1, month #2, and month #3.
I hope I’ve been clear about this in the past: My analysis of accuracy is mostly useful for… me!
But of course I’ve always wanted to share my insider’s view. One reason is so you can understand how I choose which improvements to make (or not to make). Maybe the bigger reason, though, is to share an understanding of what was different about the season: was this season more or less predictable than normal?
Examining predictive accuracy has been a long-standing tradition, underpinning a key purpose of Subvertadown: To give us a grounding in how “rational” things have been.
As usual, here’s a look at how each individual model is doing, compared to other seasons.
This doesn’t tell us “how good the models are”; it only tells us how predictable the current season is, compared to the historical norm.

We’d started the season with lower predictability for TE and Kicker. In the final 5 weeks, most positions were more predictable than normal; TE and QB were the exceptions, sitting closer to their historically normal levels.
For D/ST and Kicker in particular, the final 5 weeks were excellent. When these are more predictable, strategizing by streaming pays off better.
Reminder, for newbies: I’m not trying to be (and don’t expect to be) “#1”. It’s plain unrealistic to expect that all the time against the very competent expert analysts I track. My goal is rather to check that the models still perform at a similar level to others, specifically sources that have been consistently good for at least a few years. Many sources are great one year but poor the next; my chosen experts are good in a more consistent way. We naturally trade places at the top from year to year. Knowing that the models perform at least similarly to the top sources lends confidence and gives reason to trust the forecasts when we extrapolate the models to future weeks.
During weeks 13-17, the Subvertadown D/ST models outperformed all alternative sources. While you shouldn’t always expect it, in 2025 my D/ST models clearly stood at #1 for the season, by any measurement of accuracy.
I’m not even showing the better-performing model, which was the ESPN-tailored model. As I mentioned in previous posts, the ESPN model was giving better recommendations regardless of scoring setting.
Therefore (and I shy away from saying this boldly) there is no clear need for improvement during the off-season. However, I would like the success of the ESPN model to transfer to the “Yahoo” model, and I would like to see the Yahoo model do better in mid-season. So I will be looking into possible overfit in the model that covers approximately weeks 6-11.

My Kicker model accuracy was clearly in the lead during the second half of the season.
My models’ main failures occurred roughly in the first month: week 5 and particularly week 3 killed my overall season-long standing in kicker accuracy. This early-season issue is the one problem area that really deserves my attention during the off-season updates.
Still, I’m extremely pleased with the performance during weeks 7-16. My recommendations delivered fantastic streaming value, a good way to end the season. (Although I note that all expert rankers failed to deliver good kicker rankings in week 17 specifically.)
(This time, I’m not showing the Accuracy Gap measure that I usually show for kickers; these graphs convey the same overall result.)

My only takeaway is that I should look more closely at modeling the early part of the season, when team kicking habits haven’t settled.
After digging into the causes of inaccuracy in weeks 3 and 5, I found an interesting explanation: 8 NFL teams kicked long FGs during the first weeks but then stopped attempting long kicks almost entirely for a long stretch. Some of their kickers started getting only PAT chances. These were the same kickers my model recommended in those 2 worst weeks (Bills, Eagles, Giants, Ravens, Rams, Cardinals, Chiefs).
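(For the technically curious, here is roughly the kind of check this involves. This is only a sketch: it assumes play-by-play data shaped like the public nflverse/nflfastR datasets, and the column names, 50-yard threshold, and week-5 cutoff are illustrative assumptions rather than my actual model inputs.)

```python
import pandas as pd

# Sketch: flag teams that attempted long FGs early in the season but
# nearly stopped afterwards. Column names follow the public nflfastR
# play-by-play convention (an assumption; adjust for your data source).
LONG_FG = 50       # yards; illustrative threshold
EARLY_WEEKS = 5    # "first month" cutoff, also illustrative

def long_fg_dropoff(pbp: pd.DataFrame) -> pd.DataFrame:
    fgs = pbp[pbp["field_goal_attempt"] == 1].copy()
    fgs["is_long"] = fgs["kick_distance"] >= LONG_FG
    fgs["phase"] = fgs["week"].map(lambda w: "early" if w <= EARLY_WEEKS else "late")
    counts = (
        fgs.groupby(["posteam", "phase"])["is_long"]
           .sum()                                     # long attempts per team/phase
           .unstack(fill_value=0)
           .reindex(columns=["early", "late"], fill_value=0)
    )
    # Teams with several early long attempts but almost none later
    return counts[(counts["early"] >= 2) & (counts["late"] <= 1)]
```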
I might as well mention here: I will be testing out some new ideas with the kicker modeling, especially focusing on differentiating kicker situations. For example, I may make distinct models for kickers whose team is expected to lose. More on this later. (I hope! If it seems to work.)
My QB model didn’t disappoint: I’m happy to say I think it deserves the victory lap I foresaw earlier in the season.
In short: my QB rankings boomed to start the season, and then they more than kept up.
The only cause for concern was weeks 5 and 6 specifically. I have a suspicion that the data processing had learned some “regression” during those weeks, which got exaggerated. In the off-season, I will make sure the learned-regression effects are minimized or even eliminated.
Otherwise, it was a great season for the QB model to prove itself.
However, I do like to be more critical by looking at different kinds of accuracy measurements (I look at at least 5 methods in total). For example, the Accuracy Gap methodology shows that all sources were extremely close in accuracy, with my model simply at the average. And I can see that my model performed less well for QBs ranked specifically in the #10-#15 range. At a glance, this seems to be caused by the weeks 5-6 issue mentioned above, but it’s worth giving the QB model a routine check-up.

Overall, it confirms what I usually see: that the methodology seems to suit QB well. I still don’t see major needs for correction.
The results have remained even better than normal.
We’d earlier finished week 12 with a total of 7 losses out of 36 games in the 3 pathways: an almost 20% loss rate, which is lower than normal.
In the fourth month, there were 5 more losses out of 15 games in the 3 pathways, which is more like the average failure rate:
Pathway 1: Buccaneers in week 14 (replaced by the Vikings backup)
Pathway 2: Eagles in week 13 (replaced by the Chargers backup), Steelers in week 17 (replaced by the Jaguars backup)
Pathway 3: Rams in week 13 (replaced by the Dolphins backup), Cowboys in week 15 (replaced by the Jaguars backup)
That means week 17 finishes with 12 total losses out of 51 games in the 3 pathways, a 24% loss rate for the season, which is still lower than normal.
With the “single back-up pathway” methodology I chose to display, it is possible (though unlikely!) that someone succeeded in all 3 of the pathways.
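(If that bookkeeping is unclear, here is how I think of the single-backup accounting, sketched in code. The data structure is purely my own illustration of the method described above, not anything from the actual tools.)

```python
# Sketch of the "single back-up pathway" bookkeeping, as I read it:
# each week has a primary pick, and when the primary loses, you fall
# back to one pre-designated backup team for that week. The pathway
# survives only if every losing primary's backup won.

def score_pathway(weeks):
    """weeks: list of dicts, e.g.
    {"pick": "Buccaneers", "won": False,
     "backup": "Vikings", "backup_won": True}"""
    losses = 0
    for wk in weeks:
        if not wk["won"]:
            losses += 1
            if not wk.get("backup_won", False):
                return losses, False   # backup also lost: pathway dies
    return losses, True                # survived every week

# Toy 3-week pathway with one covered loss:
toy = [
    {"pick": "Broncos", "won": True},
    {"pick": "Buccaneers", "won": False,
     "backup": "Vikings", "backup_won": True},
    {"pick": "Ravens", "won": True},
]
print(score_pathway(toy))   # -> (1, True)
```

Counted this way, the 12 losses above are primary-pick losses; a pathway still “wins” as long as each loss was covered by its backup.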
The 3 final winning pathways were:
Pathway 1: Broncos, Ravens, Bills, Lions, Colts, Packers, Chiefs, Patriots, Rams, Seahawks, Dolphins, 49ers, Chargers, Vikings, Jaguars, Texans, Cowboys.
Pathway 2: Commanders, Lions, Buccaneers, Bills, Colts, Rams, Patriots, Chiefs, Ravens, Broncos, Texans, Seahawks, Chargers, Packers, 49ers, Eagles, Jaguars.
Pathway 3: Eagles, 49ers, Seahawks, Texans, Lions, Colts, Packers, Chiefs, Chargers, Broncos, Patriots, Ravens, Dolphins, Rams, Jaguars, Bills, Bengals.
The ROI finished positive but below target for the season, due to an especially bad finish in week 17.
Before week 17, the returns were quite precisely on the target that I try to set.
Our baseline expectation should normally be to lose about -85% of the weekly pot by this time (after 17 weeks). So, if you routinely bet $100 per week, then the sum of your 17 weeks of losses would be about $85.
That means a normal “coin flip” betting process would lose us about -5% per week of the target weekly bet amount. Flipping coins pays the bookmaker. Yes, that’s as if we were just monkeys throwing darts.
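To make the coin-flip baseline concrete: at a standard -110 line (an assumption; your book’s juice may differ), you risk $110 to win $100, so a bettor who wins exactly half the time loses about 4.5% of each stake on average.

```python
# Expected loss for a pure coin-flip bettor at standard -110 odds:
# a winning $100 stake returns 100/110 of the stake as profit.
stake = 100.0
win_profit = stake * (100 / 110)           # about $90.91 profit on a win

ev_per_bet = 0.5 * win_profit - 0.5 * stake
print(f"EV per $100 bet: {ev_per_bet:+.2f}")       # about -4.55
print(f"Over 17 weeks:   {17 * ev_per_bet:+.2f}")  # about -77.27
```

The exact -110 math gives roughly -$77 over 17 weeks; rounding the per-week loss up to 5% gives the $85 figure quoted above.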
During weeks 13-17, we experienced the worst stretch of the season, a net loss. The high points were at weeks 11 and 14, and returns for most of the season were well above target. It was therefore disappointing that the final month didn’t keep up.
At least it’s easy for me to identify which parts of the season deserve most attention, as I prepare updates in the off-season.

Here is the list of bets recommended for weeks 13-17.
Remember, you can see the earlier lists in the previous accuracy reports: month #1, month #2, and month #3.
For simplicity, I’ve combined any added bet increases into the opening bets, according to the table display for live changes:

/Subvertadown