
🇨🇦 Canadian Grand Prix

Round 5 · 2026 · Circuit Gilles Villeneuve · 5,000 simulations

Updated after Pre-weekend · Sat 23 May, 19:59 UTC

Your predictions

Edit site/src/data/predictions/my-predictions.json to enter your own probabilities. Drivers you skip show as "—". Each cell shows your number bold, the model's number muted underneath, and a coloured arrow with the gap in percentage points.
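
For orientation, a minimal sketch of what one entry in that file might look like. The field names here are guesses, not the site's confirmed schema; the numbers mirror Russell's row below:

```json
{
  "Russell": {
    "pole": 0.20,
    "finish": 0.95,
    "points": 0.95,
    "podium": 0.65,
    "win": 0.30
  }
}
```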

7 / 22 drivers entered · edit src/data/predictions/my-predictions.json to add or change entries
Format per cell: yours / model (gap in percentage points). Drivers without an entry show the model's numbers only.

🇬🇧 Russell (Mercedes): Pole 20.0% / 5.4% (+14.6pp) · Finish 95.0% / 88.5% (+6.5pp) · Points 95.0% / 88.5% (+6.5pp) · Podium 65.0% / 82.7% (-17.7pp) · Win 30.0% / 42.1% (-12.1pp)
🇮🇹 Antonelli (Mercedes): Pole 18.0% / 5.6% (+12.4pp) · Finish 95.0% / 88.9% (+6.1pp) · Points 93.0% / 88.9% (+4.1pp) · Podium 55.0% / 83.1% (-28.1pp) · Win 25.0% / 42.4% (-17.4pp)
🇬🇧 Norris (McLaren): Pole 12.0% / 6.7% (+5.3pp) · Finish 92.0% / 86.9% (+5.1pp) · Points 88.0% / 86.9% (+1.1pp) · Podium 40.0% / 47.1% (-7.1pp) · Win 12.0% / 6.5% (+5.5pp)
🇦🇺 Piastri (McLaren): Pole 10.0% / 7.1% (+2.9pp) · Finish 92.0% / 88.1% (+3.9pp) · Points 85.0% / 88.1% (-3.1pp) · Podium 35.0% / 49.2% (-14.2pp) · Win 10.0% / 6.8% (+3.2pp)
🇳🇱 Verstappen (Red Bull): Pole 8.0% / 6.3% (+1.7pp) · Finish 90.0% / 86.4% (+3.6pp) · Points 80.0% / 77.0% (+3.0pp) · Podium 30.0% / 0.4% (+29.6pp) · Win 10.0% / 0.0% (+10.0pp)
🇲🇨 Leclerc (Ferrari): Pole 10.0% / 5.1% (+4.9pp) · Finish 88.0% / 86.1% (+1.9pp) · Points 75.0% / 86.0% (-11.0pp) · Podium 20.0% / 18.7% (+1.3pp) · Win 5.0% / 1.1% (+3.9pp)
🇬🇧 Hamilton (Ferrari): Pole 8.0% / 5.2% (+2.8pp) · Finish 88.0% / 86.0% (+2.0pp) · Points 72.0% / 85.9% (0pp) · Podium 18.0% / 18.4% (0pp) · Win 3.0% / 1.1% (+1.9pp)
🇪🇸 Alonso (Aston Martin): Pole 4.7% · Finish 83.8% · Points 0.1% · Podium 0.0% · Win 0.0%
🇫🇷 Gasly (Alpine): Pole 4.6% · Finish 83.2% · Points 53.6% · Podium 0.0% · Win 0.0%
🇳🇿 Lawson (RB): Pole 4.1% · Finish 85.1% · Points 45.6% · Podium 0.1% · Win 0.0%
🇩🇪 Hülkenberg (Sauber): Pole 4.1% · Finish 83.5% · Points 26.9% · Podium 0.0% · Win 0.0%
🇫🇮 Bottas (Cadillac): Pole 1.4% · Finish 84.2% · Points 0.9% · Podium 0.0% · Win 0.0%
🇨🇦 Stroll (Aston Martin): Pole 4.2% · Finish 83.7% · Points 0.1% · Podium 0.0% · Win 0.0%
🇲🇽 Pérez (Cadillac): Pole 1.4% · Finish 84.9% · Points 1.1% · Podium 0.0% · Win 0.0%
🇧🇷 Bortoleto (Sauber): Pole 4.2% · Finish 83.0% · Points 27.5% · Podium 0.0% · Win 0.0%
🇬🇧 Lindblad (RB): Pole 4.0% · Finish 84.0% · Points 42.8% · Podium 0.0% · Win 0.0%
🇫🇷 Ocon (Haas): Pole 3.6% · Finish 83.6% · Points 21.6% · Podium 0.0% · Win 0.0%
🇬🇧 Bearman (Haas): Pole 3.5% · Finish 83.6% · Points 22.3% · Podium 0.0% · Win 0.0%
🇹🇭 Albon (Williams): Pole 4.2% · Finish 80.9% · Points 12.6% · Podium 0.0% · Win 0.0%
🇪🇸 Sainz (Williams): Pole 4.6% · Finish 81.5% · Points 13.0% · Podium 0.0% · Win 0.0%
🇦🇷 Colapinto (Alpine): Pole 4.3% · Finish 83.0% · Points 54.5% · Podium 0.1% · Win 0.0%
🇫🇷 Hadjar (Red Bull): Pole 5.6% · Finish 85.8% · Points 76.0% · Podium 0.2% · Win 0.0%

Team pace ladder

Posterior race-pace gap to the fastest team, with 90% credible intervals.

[Chart: team pace ladder for the Canadian Grand Prix]

Model accuracy — 2026 baseline

The current model scored against the four completed 2026 races (Australia, China, Japan, Miami). Each race uses a posterior fit only on data before that race's qualifying — leak-free. This is the baseline to beat with model improvements.

0/4 races where the model's favourite actually won
1.5 / 3 predicted-podium drivers who actually podium'd (average)
0.268 log-loss on race-winner (baseline 0.185)

Pole prediction: Bayesian vs simple position-based model

Model                           Log-loss  Baseline  Favourite hit  Verdict
Bayesian (current quali model)  1.319     0.185     —              worse than baseline
Simple position-based (new)     0.198     0.185     3/4            worse than baseline

Per-market scores

Market    N    Log-loss  Baseline  Brier  ECE    Verdict
Win       88   0.268     0.185     0.064  0.083  worse than baseline
Podium    88   0.298     0.398     0.108  0.086  beats baseline
Points    88   0.701     0.689     0.228  0.143  worse than baseline
Pole      88   1.319     0.185     0.068  0.091  worse than baseline
Finish    88   0.534     0.537     0.173  0.067  beats baseline
Team H2H  219  0.780     0.693     0.249  0.207  worse than baseline

Per-race breakdown

R1 · Australian Grand Prix

Actual: winner Russell · pole Russell · podium Antonelli, Leclerc, Russell · DNFs Alonso, Bottas, Hadjar, Hülkenberg, Piastri, Stroll

Model's pre-race top 5 to win

#  Driver     P(win)  Result
1  Norris     46.4%   Finished
2  Piastri    44.4%   DNF
3  Hamilton   3.1%    Finished
4  Leclerc    2.8%    Podium
5  Antonelli  0.9%    Podium

Actual winner Russell: 0.9% · Bayesian P(pole | Russell): 0.0% · Simple P(pole | Russell): 60.4% · Avg P(podium) for actual podium: 24.3%

Simple model's pre-race top 5 to pole

#  Driver      P(pole)  Result
1  Russell     60.4%    Pole
2  Norris      20.6%
3  Verstappen  10.3%
4  Piastri     8.6%
5  Alonso      0.0%
R2 · Chinese Grand Prix

Actual: winner Antonelli · pole Antonelli · podium Antonelli, Hamilton, Russell · DNFs Albon, Alonso, Bortoleto, Norris, Piastri, Stroll, Verstappen

Model's pre-race top 5 to win

#  Driver    P(win)  Result
1  Piastri   46.3%   DNF
2  Norris    46.1%   DNF
3  Leclerc   2.4%    Finished
4  Hamilton  2.3%    Podium
5  Russell   1.0%    Podium

Actual winner Antonelli: 0.9% · Bayesian P(pole | Antonelli): 0.0% · Simple P(pole | Antonelli): 0.0% · Avg P(podium) for actual podium: 23.5%

Simple model's pre-race top 5 to pole

#  Driver      P(pole)  Result
1  Verstappen  95.0%
2  Russell     2.3%
3  Piastri     1.6%
4  Hadjar      0.8%
5  Norris      0.3%
R3 · Japanese Grand Prix

Actual: winner Antonelli · pole Antonelli · podium Antonelli, Leclerc, Piastri · DNFs Bearman, Stroll

Model's pre-race top 5 to win

#  Driver     P(win)  Result
1  Norris     47.8%   Finished
2  Piastri    43.9%   Podium
3  Leclerc    3.0%    Podium
4  Hamilton   2.3%    Finished
5  Antonelli  1.1%    Won

Actual winner Antonelli: 1.1% · Bayesian P(pole | Antonelli): 0.0% · Simple P(pole | Antonelli): 11.9% · Avg P(podium) for actual podium: 45.2%

Simple model's pre-race top 5 to pole

#  Driver      P(pole)  Result
1  Russell     37.6%
2  Leclerc     24.2%
3  Verstappen  20.7%
4  Antonelli   11.9%    Pole
5  Piastri     3.2%
R4 · Miami Grand Prix

Actual: winner Antonelli · pole Antonelli · podium Antonelli, Norris, Piastri · DNFs Gasly, Hadjar, Hülkenberg, Lawson

Model's pre-race top 5 to win

#  Driver     P(win)  Result
1  Piastri    47.8%   Podium
2  Norris     44.0%   Podium
3  Leclerc    2.6%    Finished
4  Hamilton   2.5%    Finished
5  Antonelli  1.2%    Won

Actual winner Antonelli: 1.2% · Bayesian P(pole | Antonelli): 0.0% · Simple P(pole | Antonelli): 56.0% · Avg P(podium) for actual podium: 61.6%

Simple model's pre-race top 5 to pole

#  Driver      P(pole)  Result
1  Antonelli   56.0%    Pole
2  Russell     20.1%
3  Leclerc     17.9%
4  Verstappen  3.4%
5  Piastri     1.2%
R5 · Canadian Grand Prix (not yet run)

Actual: winner — · pole — · podium — · DNFs —

Model's pre-race top 5 to win: —

Simple model's pre-race top 5 to pole

#  Driver      P(pole)  Result
1  Russell     49.9%
2  Antonelli   26.8%
3  Verstappen  12.1%
4  Piastri     6.4%
5  Leclerc     2.7%

Source · 2026 walk-forward backtest · Method SVI · Refit cadence weekend · 5,000 simulations per race · Each race scored on a posterior fit only on data prior to that race's qualifying.

Inside the model

The exact numbers the simulator drew from. Every probability on this page is downstream of the trajectories and hyperparameters below.

AR(1) team-pace evolution

Each line is one team's β_team_year across every season in the training window. The curve is the random walk β[s, t] = β[s-1, t] + ε that the model fit. Lower lines = faster cars. The big move on the right edge is the 2026 step — what 2026 lap data has done to each team's posterior. Hover a team in the legend to isolate it.

[Chart: β_team_year (log-seconds), y-axis 0.00–0.15, seasons 2018–2026, one line per team: Mercedes, McLaren, Ferrari, Red Bull, Alpine, RB, Sauber, Haas, Williams, Aston Martin, Cadillac]

Fitted hyperparameters

What the model has learned about its own structure. The "posterior" column is the 90% credible interval after fitting on 2018-2026 data; compare it to the "prior" column to see whether the data moved the model's belief.

Pace model

Symbol         What it is                                  Prior                   Posterior (90% CI)
μ_circuit      Population log-lap-time intercept           Normal(log 85, 0.5)     2.858 [2.857, 2.859]
σ_team_init    Spread of teams in year-1 baseline (log-s)  HalfNormal(0.2)         0.7035 [0.7003, 0.7073]
σ_year_step    AR(1) year-to-year drift (log-s)            HalfNormal(0.05)        0.0659 [0.0657, 0.0661]
σ_circuit      Spread of per-race base lap times           HalfNormal(0.5)         1.062 [1.061, 1.063]
σ_compound     Soft/medium/hard offset spread              HalfNormal(0.02)        0.0175 [0.0163, 0.0186]
φ_fuel         Fuel-burn slope per fuel-fraction           Normal(-0.012, 0.01)    -0.0369 [-0.0390, -0.0350]
ψ_tyre         Tyre-wear slope per lap of age              Normal(0.0006, 0.0005)  0.0003 [0.0001, 0.0005]
σ (lap noise)  Within-stint lap-to-lap residual            HalfNormal(0.05)        0.0210 [0.0199, 0.0223]

Qualifying model

Symbol       What it is                                  Prior             Posterior (90% CI)
σ_team_init  Spread of teams in year-1 baseline (log-s)  HalfNormal(0.2)   0.0206 [0.0194, 0.0221]
σ_year_step  AR(1) year-to-year drift                    HalfNormal(0.05)  0.0013 [0.0011, 0.0014]
σ_circuit    Spread of per-race quali baselines          HalfNormal(0.5)   0.7491 [0.7488, 0.7494]
σ_segment    Q1/Q2/Q3 track-evolution offset             HalfNormal(0.05)  0.0799 [0.0795, 0.0802]

Reliability model

Symbol       What it is                              Prior                    Posterior (90% CI)
μ_dnf        Population logit DNF rate               Normal(logit 0.15, 0.5)  -1.766 [-1.830, -1.692]
σ_team_init  Spread of teams in year-1 DNF baseline  HalfNormal(0.5)          0.1938 [0.1338, 0.2662]
σ_year_step  AR(1) year-to-year drift in DNF rate    HalfNormal(0.2)          0.0440 [0.0190, 0.0870]
σ_circuit    Per-circuit DNF-rate spread (logit)     HalfNormal(0.5)          0.1519 [0.0938, 0.2248]

How these predictions are made

The number in each cell is the fraction of 10,000 simulated races where that driver hit that result. The simulator draws from three Bayesian models fit on every clean-air race lap from 2018 to today.

The rest of this page is a long, technical explanation of the model. If you just want to read the table and move on, you can stop here.

What each column means

Pole

Probability of starting first on the grid. Computed from the qualifying model only — single-lap pace, fresh tyres, low fuel. The race result has no effect on this column.

Finish

Probability of being classified at the chequered flag (i.e. not retiring). Computed from the reliability model only — a logistic regression on whether each (team, circuit) combination historically DNFs.

Points

Probability of finishing P1–P10. From the race simulator — combines pace + reliability across 10,000 sims.

Podium

Probability of finishing P1–P3. Same simulator, stricter threshold.

Win

Probability of finishing P1. Same simulator, strictest threshold.

Exp.

Mean finish position across 10,000 sims (DNFs treated as last). A summary number — sortable but not a probability.

Three models, five markets — why the count differs

The three models capture three independent statistical objects you can fit from F1 data: race-stint pace, single-lap pace, and finish/DNF. Of the five markets, Pole comes from the qualifying model alone, Finish from the reliability model alone, and Win / Podium / Points are not separate predictions — they're the same race-outcome distribution counted at finish-position thresholds of 1, 3, and 10. Counting from a single simulation guarantees P(win) ≤ P(podium) ≤ P(points) automatically.
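
The counting step can be sketched in a few lines of Python. This is a toy illustration, not the engine's code: finish_positions stands in for one driver's simulated finishing positions, drawn uniformly here just to show the mechanics.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy stand-in for one driver's finish position across simulated races
# (1 = win; the real engine uses a sentinel position for DNFs).
finish_positions = rng.integers(1, 21, size=10_000)

p_win    = np.mean(finish_positions <= 1)   # P(P1)
p_podium = np.mean(finish_positions <= 3)   # P(P1-P3)
p_points = np.mean(finish_positions <= 10)  # P(P1-P10)

# Counting thresholds on the same draws makes the ordering hold exactly.
assert p_win <= p_podium <= p_points
```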

The pace model — full specification

Fits clean-air race lap times. ~135,000 rows, one per (session, driver, lap) for every clean lap 2018–2026. A clean lap is IsAccurate + green track + not pit-in/out + not stint-warmup + not stint-final.

log(lap_time_s)  =  β_team_year[s, t]
                  +  γ_circuit[s, r]
                  +  δ_compound[c]
                  +  φ_fuel  · fuel_lap_norm
                  +  ψ_tyre  · tyre_age
                  +  ε
        ε  ~  Normal(0, σ)
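
Evaluating the model for one lap is a single pass through that linear form. A hypothetical illustration: φ_fuel is set near the fitted value reported above, but the other coefficients are invented for the example.

```python
import math

# Hypothetical values for one posterior draw (log-seconds). Illustrative only.
beta_team_year = -0.010          # team slightly faster than the field average
gamma_circuit  = math.log(85.0)  # ~85 s base lap for this (season, round)
delta_compound = -0.004          # soft-tyre offset
phi_fuel       = -0.037          # fuel-burn slope (near the fitted -0.0369)
psi_tyre       = 0.0003          # tyre-wear slope per lap of age

fuel_lap_norm = 0.5              # mid-race fuel fraction
tyre_age      = 8                # laps on this tyre set

log_lap = (beta_team_year + gamma_circuit + delta_compound
           + phi_fuel * fuel_lap_norm + psi_tyre * tyre_age)
lap_time_s = math.exp(log_lap)   # back from log-seconds; ≈ 82.5 s here
```
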
Priors

  • β_team_year[s, t]: β[s_0, t] ~ Normal(0, σ_team_init); β[s, t] = β[s-1, t] + ε, ε ~ Normal(0, σ_year_step); σ_team_init ~ HalfNormal(0.2); σ_year_step ~ HalfNormal(0.05); plus sum-to-zero across teams per year. AR(1) random walk per team. Last season's effect is the prior mean for this season's, with σ_year_step controlling year-to-year drift. Strong recent form gets absorbed; weak 2026 evidence falls back to 2025 form, not zero. Sum-to-zero across teams within each year fixes an identifiability ambiguity with μ_circuit: only relative team gaps live in β; the absolute level lives in μ.
  • γ_circuit[s, r]: Normal(μ_circuit, σ_circuit); μ_circuit ~ Normal(log 85, 0.5); σ_circuit ~ HalfNormal(0.5). Per-(season, round) base lap time. μ_circuit is the population mean (~85 s, the historical median lap). Each (season, round) gets its own cell — round 1 in 2018 (Australia) and round 1 in 2020 (Austria) are different races.
  • δ_compound[c]: Normal(0, σ_compound); σ_compound ~ HalfNormal(0.02). Soft/medium/hard offset. Tight prior because compound deltas are typically 0.5–1 s — small in log terms.
  • φ_fuel: Normal(-0.012, 0.01). Fuel-burn slope. Cars get faster as they get lighter. The prior centres on -1.2% per fuel-fraction (informed by physics), with a tight standard deviation.
  • ψ_tyre: Normal(0.0006, 0.0005). Tyre-wear slope. Lap times grow at ~0.06% per lap of tyre age.
  • σ (lap noise): HalfNormal(0.05). Within-(team, race, compound) lap-to-lap noise. Includes driver skill (no α_driver in v1), traffic, and weather variation. This is what the team-pace prior had to compete with.

Non-centred reparameterisation

Hierarchical Bayesian models fit better when you decouple the scale parameter from the unit-effects. Internally the model stores z_team_year ~ Normal(0, 1) and constructs β_team_year = σ_team_year × z_team_year. NUTS samples through this much more cleanly than the centred form, which is why every coefficient in the model has the same trick.
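
The trick in isolation, as a minimal NumPy sketch (not the engine's code): sampling z on a unit scale and rescaling by σ yields the same marginal distribution as sampling β directly, but the sampler sees much friendlier geometry because z is decoupled from σ.

```python
import numpy as np

rng = np.random.default_rng(42)
sigma_team = 0.05  # hypothetical scale hyperparameter, fixed for the demo

# Centred parameterisation: sample beta directly at its own scale.
beta_centred = rng.normal(0.0, sigma_team, size=100_000)

# Non-centred: sample a unit Normal z, then rescale deterministically.
z_team = rng.normal(0.0, 1.0, size=100_000)
beta_noncentred = sigma_team * z_team

# Same marginal distribution either way.
assert abs(beta_centred.std() - beta_noncentred.std()) < 1e-3
```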

The qualifying model

Fits a single observation per (session, driver) — the fastest single lap that driver set across Q1/Q2/Q3.

log(best_lap_s)  =  β_team_year[s, t]
                  +  γ_circuit[s, r]
                  +  δ_segment[Q1 / Q2 / Q3]
                  +  ε
        ε  ~  Normal(0, σ)

Same priors as the pace model for β_team_year, γ_circuit, and σ. No fuel or tyre slope — quali is single-lap, low-fuel, fresh-tyre running. δ_segment captures track evolution: the same driver's identical effort logs a different time in Q1 than in Q3 because the track rubbers in. Without it, drivers knocked out in Q1 would look slower than they actually are.

The reliability model

Bernoulli on whether each (driver, race) ended in DNF. One row per driver per race; ~500 rows per season × 8 seasons.

logit(P_DNF)  =  μ_dnf
              +  β_team_year[s, t]
              +  γ_circuit[circuit_id]
is_dnf  ~  Bernoulli(sigmoid(logit_P_DNF))

Different from pace/quali in three ways:

  • Likelihood family. Bernoulli with a logit link, not Gaussian on log-time. Same hierarchical-Normal priors on the coefficients, applied through the logit transform.
  • Circuit identifier. Uses the canonical Ergast circuit_id pooled across years, not (season, round). Monaco is persistently accident-prone in 2018 and 2024 alike, and pooling concentrates that signal. A single (season, round) cell holds only ~20 rows (one per driver), so splitting DNF rates by year would dilute it.
  • Population baseline. μ_dnf ~ Normal(-2, 1) centres the prior on a ~12% DNF rate (sigmoid of -2), close to the long-run F1 average. Per-team and per-circuit effects shift this up or down on the logit scale.
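
The logit arithmetic is easy to check by hand. A toy computation — the team and circuit effects below are invented; the -2 baseline follows the prior-mean discussion above:

```python
import math

def sigmoid(x: float) -> float:
    """Inverse of the logit link: maps logits to probabilities."""
    return 1.0 / (1.0 + math.exp(-x))

mu_dnf = -2.0        # population baseline on the logit scale
beta_team = 0.4      # hypothetical fragile-team effect
gamma_circuit = 0.3  # hypothetical attrition-heavy circuit effect

print(round(sigmoid(mu_dnf), 3))                              # 0.119, i.e. ~12%
print(round(sigmoid(mu_dnf + beta_team + gamma_circuit), 3))  # 0.214, shifted up
```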

How we fit the posteriors

Both NUTS (No-U-Turn Sampler, gradient-based MCMC) and SVI (Stochastic Variational Inference, fast approximation) are supported. NUTS is the default for production runs — slower but exact, sampling roughly 500 warmup + 500 posterior draws per model. SVI uses an AutoNormal guide and is used during development for fit-window experimentation. Both produce the same shape of output: a sample-by-coefficient array that the simulator draws from.

Posteriors are content-addressed by their inputs hash (since, until, method, n_rows, n_drivers, n_teams, num_samples) and cached on disk, so refitting on the same data is a no-op. That's how the per-session workflow stays cheap: the heavy NUTS fit runs weekly, intra-weekend session updates reuse the cached posterior.
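
Content-addressed caching of this kind usually reduces to hashing the tuple of fit inputs. A sketch of the idea, with a hypothetical function name mirroring the hash inputs listed above:

```python
import hashlib
import json

def posterior_cache_key(since: str, until: str, method: str,
                        n_rows: int, n_drivers: int, n_teams: int,
                        num_samples: int) -> str:
    """Deterministic key: identical fit inputs -> identical key -> cache hit."""
    payload = json.dumps(
        {"since": since, "until": until, "method": method,
         "n_rows": n_rows, "n_drivers": n_drivers,
         "n_teams": n_teams, "num_samples": num_samples},
        sort_keys=True,  # stable serialisation regardless of dict order
    )
    return hashlib.sha256(payload.encode()).hexdigest()[:16]

key = posterior_cache_key("2018-01-01", "2026-05-23", "nuts",
                          135_000, 22, 11, 500)
```

Refitting on unchanged data reproduces the same key, so the fit is skipped; change any single input and the key, and therefore the cache entry, changes.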

Leak-free temporal cutoff

Every fit uses an as_of timestamp that defaults to one second before the target race's qualifying session. Training data filters to laps and results that finished before that timestamp. This is what makes backtesting honest: the 2024 Imola prediction sees only the data the model would have had at the moment of qualifying, not the race outcome it's trying to predict.
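
The cutoff itself is just a timestamp filter. A minimal sketch with invented rows:

```python
from datetime import datetime, timedelta

# Hypothetical lap records: (finished_at, lap_time_s)
laps = [
    (datetime(2024, 5, 17, 14, 0), 81.2),   # Friday practice
    (datetime(2024, 5, 18, 15, 30), 80.9),  # during qualifying
    (datetime(2024, 5, 19, 15, 0), 81.5),   # race day -- must never leak in
]

quali_start = datetime(2024, 5, 18, 15, 0)
as_of = quali_start - timedelta(seconds=1)  # one second before qualifying

train = [lap for ts, lap in laps if ts < as_of]
# Only the pre-qualifying lap survives the filter.
```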

The simulator — step by step

  1. Draw one joint sample from the three posteriors (one realisation of all coefficients).
  2. For each driver, compute predicted log-lap-time as β_team_year[2026, their team] + γ_circuit[2026 R5]. (When the target race's circuit cell isn't in the encoder — typical for upcoming rounds — fall back to μ_circuit.)
  3. For each of the 60-ish laps, draw a multiplicative noise term (1 + σ × Normal(0, 1)) and apply it to the driver's lap time.
  4. Sum the laps → race time.
  5. Roll a DNF outcome from Bernoulli(sigmoid(μ_dnf + β_rel + γ_rel)).
  6. Compute quali time the same way as race lap time but using the quali model's posteriors → quali rank.
  7. Sort drivers by race time. DNFs go to position 0 (sentinel).
  8. Record finish position, quali rank, race time, DNF flag for each driver.
  9. Repeat 10,000 times. Aggregate fractions per market.
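
The loop above can be condensed into a toy Monte Carlo. Everything here is invented for illustration (two fictional drivers, made-up pace and DNF numbers) and the qualifying draw is omitted; it only shows the mechanics of drawing lap noise, rolling DNFs, ranking, and aggregating.

```python
import numpy as np

rng = np.random.default_rng(7)
N_SIMS, N_LAPS = 10_000, 60

# Hypothetical per-driver parameters: (mean lap time in s, DNF probability).
drivers = {"A": (80.0, 0.05), "B": (80.2, 0.15)}

wins = {d: 0 for d in drivers}
for _ in range(N_SIMS):
    times, dnf = {}, {}
    for d, (mean_lap, p_dnf) in drivers.items():
        # Multiplicative lap noise, summed into a race time.
        laps = mean_lap * (1 + 0.02 * rng.standard_normal(N_LAPS))
        times[d] = laps.sum()
        dnf[d] = rng.random() < p_dnf       # reliability roll
    runners = [d for d in drivers if not dnf[d]]
    if runners:
        wins[min(runners, key=times.get)] += 1  # fastest surviving car wins

p_win = {d: w / N_SIMS for d, w in wins.items()}
# The faster, more reliable driver A wins clearly more often than B.
```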

Identifiability and design choices

A few non-obvious model decisions, and why:

  • No driver effect (α_driver). Driver and team are colinear in F1 data — drivers move teams rarely, so a per-driver / per-team coefficient pair fits non-uniquely. With weak priors the fit collapses on whichever side gets first dibs and rankings come out garbled. With tight priors it doesn't recover. The fix is a separate residual model fit on within-team-year teammate gaps, where the colinearity disappears (you're comparing two drivers in the same car). That's planned for v2.
  • Per-(season, round) circuit cells, not per-circuit. The same physical track is a different race in different years (regulations change, tarmac is resurfaced, weather differs). Pooling across years would smear those changes. The reliability model is the exception — DNF rates are dominated by track geometry, which is stable.
  • AR(1) cross-year structure for β_team_year (added 2026-05-08). Each team's per-year effect is now a Gaussian random walk across seasons: β[s, t] = β[s-1, t] + ε with ε ~ Normal(0, σ_year_step). Last season is the prior mean for this season, so strong recent form gets absorbed without having to overcome an at-zero prior. Sum-to-zero per year removes the identifiability ambiguity with μ_circuit.
  • Pace model has no α_driver, but lap noise σ absorbs driver-skill variance. This means the lap noise is wider than it should be, which is part of why team gaps had to fight uphill against the data.

What's NOT in the model

  • No driver effect. Drivers in the same (season, team) predict identically. Verstappen and his teammate get the same race-pace forecast. The gap you sometimes see between teammates in this table is pure Monte Carlo noise.
  • No track-position effect. The race simulator does not use grid position to determine race outcome. A car predicted to be slowest in qualifying but fastest in race trim would still "win" the simulation. Monaco predictions in particular will look wrong because real Monaco is ~80% determined by qualifying.
  • Year-to-year drift prior is a single parameter. σ_year_step is shared across all teams. In reality some teams' form is stable (McLaren 2024-2026), others volatile (Williams across regulation changes). Per-team year-step variance is a v3 enhancement.
  • No tyre-strategy model. Pit-stop timing, undercut/overcut effects, 1-stop vs 2-stop choices are not modelled. The pace model captures average stint pace; the strategic decisions made on top of that aren't.
  • No weather conditions. Wet vs dry races have completely different pace orderings, and the model treats them identically.
  • No live form features — practice times, sprint results, mid-session tyre data, free-practice fuel-corrected pace. These exist in the data but don't feed the model.

The page reflects engine output verbatim. Every number above came out of the simulator, not editorial judgement. If a column looks wrong, the fix lives in the model, not in the display.

Source · Five Reds Engine · Method SVI · Fit window 2018–2026