The claim
A perception model good enough to ship in March will be giving you wrong answers by September. We have the dive logs to prove it. The interesting question is not whether the model drifts but where the drift comes from — and whether you should be retraining the network or recalibrating the sensor.
The data
2,140 dive-hours across 11 sites in Norway, Scotland, and the Faroes between Q2 2024 and Q4 2025. Each dive logged ground-truth bounding boxes from a teleoperator, raw sensor data, and the live model's predictions. We then retroactively re-ran every model checkpoint we shipped against every dive's data.
The matrix is models-by-dives: every checkpoint evaluated on every dive. From it you can read drift in two directions: fix a model and scan later dives to see how it degrades over the year, or fix a dive and scan successive models to see how that dive's data ages relative to the corpora the models were trained on.
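The two readings can be sketched with a toy matrix. This is a synthetic stand-in, not the real evaluation data: the shapes, scores, and variable names are all invented for illustration.

```python
import numpy as np

# Synthetic stand-in for the evaluation matrix: rows are shipped checkpoints
# in chronological order, columns are dives in chronological order, and each
# cell is that checkpoint's accuracy on that dive's logged frames.
rng = np.random.default_rng(42)
scores = np.clip(0.9 - 0.005 * np.arange(40) + rng.normal(0, 0.01, (6, 40)), 0, 1)

# Direction 1: fix a checkpoint (row) and scan later dives to see how a
# shipped model degrades as the world moves on.
per_model_over_time = scores[0]        # oldest checkpoint across all dives

# Direction 2: fix a dive (column) and scan successive checkpoints to see
# how that dive's data ages relative to the training corpora.
per_dive_over_models = scores[:, 0]    # first dive across all checkpoints

print(per_model_over_time.mean(), per_dive_over_models.mean())
```

Averaging along either axis gives the two drift curves; fitting a slope per row or column turns them into degradation rates.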
Where it comes from
We expected drift to be dominated by lighting changes — turbidity, season, depth. We were wrong. Three sources, in order of impact:
- Sensor fouling. Optical drift on the camera lens accounts for 54% of the observed accuracy loss. Most operators wipe lenses on a calendar schedule. They should wipe them on a model-confidence schedule.
- Biofouling on calibration targets. The fiducials we use to recalibrate intrinsics get colonised by mussels in 6–9 weeks. 21% of drift is the calibration target lying about ground truth.
- Seasonal biology. Salmon body morphology shifts measurably with temperature and feed cycle. 17%. The remaining 8% is everything else combined.
The protocol we ship
The protocol is more boring than the headline. We retrain the perception network twice a year (spring and autumn), but we recalibrate the sensor weekly, and we replace the calibration targets every six weeks. Seventy-five percent of what looked like model drift (the fouling and fiducial terms above) was a hardware-hygiene problem.
The remaining 25%, the real model drift, is small enough that two retrains a year cover it. Continuous retraining was tempting; we ran the experiments, and we now actively recommend against it.
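The cadence is simple enough to state as data. This is our encoding of the intervals described above, not the shipped tooling, and the task names are invented:

```python
from datetime import date, timedelta

# Maintenance cadence (hypothetical encoding of the protocol above):
# weekly sensor recalibration, six-week fiducial replacement,
# biannual model retrain.
CADENCE = {
    "recalibrate_sensor": timedelta(weeks=1),
    "replace_fiducials": timedelta(weeks=6),
    "retrain_model": timedelta(weeks=26),
}

def due_tasks(last_done: dict, today: date) -> list:
    """Return the maintenance tasks whose interval has elapsed."""
    return [task for task, interval in CADENCE.items()
            if today - last_done[task] >= interval]
```

A daily cron job over `due_tasks` is the whole scheduler; the point is that the hardware items dominate the calendar, not the retrains.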
Three things to take
- Instrument the sensor before the model. Most "model drift" is a smudge on the lens.
- Replace your fiducials on a schedule. Calibration targets that get worse over time will silently teach your model to be worse.
- Retrain less often than you think. Continuous retraining masks the underlying problem and makes it harder to debug when something goes really wrong.