Forecast evaluation
Understanding, evaluating, and improving forecasts of infectious disease burden
Importance of evaluation
Because forecasts are unconditional statements about what will happen, we can compare them to observed data and assess how well they performed
Doing this allows us to answer questions such as:
Are our forecasts any good?
How far ahead can we trust forecasts?
Which model works best for making forecasts?
So-called proper scoring rules incentivise forecasters to express an honest belief about the future
Many proper scoring rules (and other metrics) are available to assess probabilistic forecasts
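To make this concrete, here is a minimal sketch (not course code) of one widely used proper scoring rule, the continuous ranked probability score (CRPS), estimated from forecast samples; the forecast distribution and observation below are purely illustrative, and in practice a dedicated package (for example, the scoringutils R package) provides tested implementations.

```python
# Minimal sketch: estimate the CRPS from forecast samples using the standard
# sample-based estimator
#   CRPS(F, y) ~= mean(|X - y|) - 0.5 * mean(|X - X'|),
# where X and X' are independent draws from the forecast distribution F and
# y is the observed value. Lower scores are better.
import numpy as np


def crps_sample(samples, observed):
    """Sample-based CRPS estimate for a single observation."""
    samples = np.asarray(samples, dtype=float)
    term_accuracy = np.mean(np.abs(samples - observed))
    term_spread = 0.5 * np.mean(np.abs(samples[:, None] - samples[None, :]))
    return term_accuracy - term_spread


# Illustrative example: 1000 draws from a hypothetical forecast distribution,
# scored against a hypothetical observed value of 110.
rng = np.random.default_rng(1)
forecast_samples = rng.normal(loc=100, scale=20, size=1000)
print(round(crps_sample(forecast_samples, observed=110), 2))
```

Because the CRPS penalises both inaccuracy and inappropriate spread, a forecaster minimises their expected score by reporting their true predictive distribution, which is what makes the rule proper.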
The forecasting paradigm
Maximise sharpness subject to calibration
Statements about the future should be correct (“calibration”)
Statements about the future should aim to have narrow uncertainty (“sharpness”)
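As a rough illustration of this paradigm (using simulated data, not the course forecasts), the sketch below checks calibration via the empirical coverage of a central 90% prediction interval and sharpness via the mean width of that interval.

```python
# Minimal sketch with simulated data: calibration is checked via empirical
# coverage of the central 90% prediction interval, sharpness via its width.
import numpy as np

rng = np.random.default_rng(2)
n_targets, n_samples = 200, 1000

# Hypothetical setup: each target has a latent expected value; observations
# and forecast samples both scatter around it with the same spread, so the
# forecasts here are well calibrated by construction.
expected = rng.normal(loc=50, scale=10, size=n_targets)
observed = expected + rng.normal(scale=12, size=n_targets)
forecast_samples = expected[:, None] + rng.normal(scale=12, size=(n_targets, n_samples))

level = 0.90
lower = np.quantile(forecast_samples, (1 - level) / 2, axis=1)
upper = np.quantile(forecast_samples, 1 - (1 - level) / 2, axis=1)

# Calibration: a 90% interval should contain the observation about 90% of the time.
coverage = np.mean((observed >= lower) & (observed <= upper))

# Sharpness: among calibrated forecasts, narrower intervals are better.
mean_width = np.mean(upper - lower)

print(f"empirical coverage of the 90% interval: {coverage:.2f}")
print(f"mean width of the 90% interval:         {mean_width:.1f}")
```

A forecaster could trivially achieve near-perfect coverage by issuing extremely wide intervals, which is why sharpness is rewarded only subject to calibration rather than on its own.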
Your Turn
Load forecasts from the model we have visualised previously.
Evaluate the forecasts using proper scoring rules
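For a feel of the kind of output this exercise produces, here is a hedged sketch of summarising already-computed proper scores by model and forecast horizon; the model names, horizons, and score values are made up for demonstration and are not the forecasts referred to above. This kind of summary is how questions like “which model works best?” and “how far ahead can we trust forecasts?” are typically answered.

```python
# Illustrative only: summarise proper scores (e.g. CRPS values as in the
# earlier sketch) by model and horizon. All values here are made up.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
scores = pd.DataFrame({
    "model": np.repeat(["random_walk", "mechanistic"], 6),
    "horizon": np.tile([1, 2, 3], 4),
    "crps": np.abs(rng.normal(loc=10, scale=3, size=12)),
})

# Lower mean score = better; comparing across horizons shows how quickly
# forecast skill degrades as we look further ahead.
summary = scores.groupby(["model", "horizon"], as_index=False)["crps"].mean()
print(summary)
```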