Forecast evaluation

Introduction

So far we have focused on visualising forecasts, including confronting them with events that were observed after the forecast was made. Besides visualising the forecasts, we can also summarise performance quantitatively. In this session you will get to know several ways of assessing different aspects of forecast performance.

Slides

Objectives

The aim of this session is to introduce the concept of forecast evaluation using scoring rules.

Source file

The source file of this session is located at sessions/forecast-evaluation.qmd.

Libraries used

In this session we will use the nfidd package to load a data set of infection times and access stan models and helper functions, the dplyr package for data wrangling, the ggplot2 library for plotting, and the scoringutils package for evaluating forecasts.

library("nfidd")
library("dplyr")
library("ggplot2")
library("scoringutils")
Tip

The best way to interact with the material is via the Visual Editor of RStudio.

Initialisation

We set a random seed for reproducibility. Setting this ensures that you should get exactly the same results on your computer as we do.

set.seed(123)

Introduction to forecast evaluation

An important aspect of making forecasts is that we can later confront them with what really happened, and use this to assess whether our forecast model makes good predictions, or which of multiple models works best in which situation.

What do we look for in a good forecast? Some food for thought:
  1. Calibration: The forecast should be well calibrated. This means that the forecasted probabilities should match the observed frequencies. For example, if the model predicts a 50% probability of an event occurring, then the event should occur approximately 50% of the time (see the short simulation sketch after this list).
  2. Unbiasedness: The forecast should be unbiased. This means that the average forecasted value should be equal to the average observed value. It shouldn’t consistently over- or underpredict.
  3. Accuracy: The forecast should be accurate. This means that the forecasted values should be close to the observed values.
  4. Sharpness: As long as the other conditions are fulfilled we want prediction intervals to be as narrow as possible. Predicting that “anything can happen” might be correct but not very useful.
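
To make the calibration idea in point 1 concrete, here is a toy simulated check (made-up numbers, not the session's data): if the forecast distribution matches the data-generating process, a central 50% prediction interval should cover roughly 50% of the observations.

set.seed(42)
# "observations" drawn from the same distribution as the forecast
obs <- rnorm(1000, mean = 10, sd = 2)
# central 50% prediction interval of the (correctly specified) forecast
lower <- qnorm(0.25, mean = 10, sd = 2)
upper <- qnorm(0.75, mean = 10, sd = 2)
# empirical coverage: should be close to 0.5
mean(obs >= lower & obs <= upper)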

Evaluate a forecasting model

We consider forecasts from the model we visualised previously, and explore different metrics for evaluation.

data(rw_forecasts)
rw_forecasts |>
  ungroup()
# A tibble: 224,000 × 7
     day .draw .variable .value horizon target_day model      
   <dbl> <int> <chr>      <dbl>   <int>      <dbl> <chr>      
 1    23     1 forecast       4       1         22 Random walk
 2    23     2 forecast       2       1         22 Random walk
 3    23     3 forecast       2       1         22 Random walk
 4    23     4 forecast       6       1         22 Random walk
 5    23     5 forecast       2       1         22 Random walk
 6    23     6 forecast       3       1         22 Random walk
 7    23     7 forecast       5       1         22 Random walk
 8    23     8 forecast       2       1         22 Random walk
 9    23     9 forecast       3       1         22 Random walk
10    23    10 forecast       5       1         22 Random walk
# ℹ 223,990 more rows

Some important things to note about these forecasts:

  • We used a 14 day forecast horizon.
  • Each forecast used all the data up to the forecast date.
  • We generated 1000 predictive posterior samples for each forecast.
  • We started forecasting 3 weeks into the outbreak and then forecast every 7 days until the end of the data (excluding the last 14 days to allow a full forecast).
  • We use the same simulated outbreak data as before:
gen_time_pmf <- make_gen_time_pmf()
ip_pmf <- make_ip_pmf()
onset_df <- simulate_onsets(
  make_daily_infections(infection_times), gen_time_pmf, ip_pmf
)
head(onset_df)
# A tibble: 6 × 3
    day onsets infections
  <dbl>  <int>      <int>
1     1      0          0
2     2      0          1
3     3      0          0
4     4      0          2
5     5      0          1
6     6      0          1

Scoring the forecasts

We now summarise performance quantitatively by using scoring metrics. Whilst some of these metrics are more useful for comparing models, many can also be useful for understanding the performance of a single model.

Tip

In this session, we’ll use “proper” scoring rules: scoring rules for which, in expectation, no forecast can score better than the true model, i.e. the model used to generate the data. Of course we usually don’t know this (as we don’t know the “true model” for real-world data) but proper scoring rules incentivise forecasters to make their best attempt at reproducing its behaviour. For a comprehensive text on proper scoring rules and their mathematical properties, we recommend the classic paper by Gneiting and Raftery (2007).
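
As a rough illustration of propriety (a toy example, not the session's data; we assume here that the crps_sample() function exported by {scoringutils} is available), forecasts drawn from the true data-generating distribution should on average score at least as well as forecasts from a miscalibrated alternative:

set.seed(42)
n_obs <- 200
n_samples <- 1000
obs <- rpois(n_obs, lambda = 10)  # "truth": counts from a Poisson(10)
# predictive samples from the true model and from a biased alternative
true_forecast <- matrix(rpois(n_obs * n_samples, lambda = 10), nrow = n_obs)
biased_forecast <- matrix(rpois(n_obs * n_samples, lambda = 15), nrow = n_obs)
# crps_sample() (assumed from {scoringutils}) returns one score per forecast;
# the true model should have the lower (better) mean CRPS
mean(crps_sample(obs, true_forecast))
mean(crps_sample(obs, biased_forecast))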

We will use the {scoringutils} package to calculate these metrics. Our first step is to convert our forecasts into a format that the {scoringutils} package can use. We will use as_forecast_sample() to do this:

sc_forecasts <- rw_forecasts |>
  left_join(onset_df, by = "day") |>
  filter(!is.na(.value)) |>
  as_forecast_sample(
    forecast_unit = c(
      "target_day", "horizon", "model"
    ),
    observed = "onsets",
    predicted = ".value",
    sample_id = ".draw"
  )
sc_forecasts
Forecast type: sample
Forecast unit:
target_day, horizon, and model

        sample_id predicted observed target_day horizon       model
            <int>     <num>    <int>      <num>   <int>      <char>
     1:         1         4        4         22       1 Random walk
     2:         2         2        4         22       1 Random walk
     3:         3         2        4         22       1 Random walk
     4:         4         6        4         22       1 Random walk
     5:         5         2        4         22       1 Random walk
    ---                                                            
223996:       996         1        2        127      14 Random walk
223997:       997         0        2        127      14 Random walk
223998:       998         0        2        127      14 Random walk
223999:       999         1        2        127      14 Random walk
224000:      1000         0        2        127      14 Random walk

As you can see this has created a forecast object which has a print method that summarises the forecasts.

Take 2 minutes

What important information is in the forecast object?

  • The forecast unit which is the target day, horizon, and model
  • The type of forecast which is a sample forecast

Everything seems to be in order. We can now use the scoringutils package to calculate some metrics. We will use the default sample metrics (as our forecasts are in sample format) and score our forecasts.

sc_scores <- sc_forecasts |>
  score()

sc_scores
     target_day horizon       model   bias      dss     crps overprediction
          <num>   <int>      <char>  <num>    <num>    <num>          <num>
  1:         22       1 Random walk -0.304 1.587880 0.623424          0.000
  2:         22       2 Random walk  0.828 3.215491 1.664522          1.118
  3:         22       3 Random walk  0.002 1.936078 0.574624          0.000
  4:         22       4 Random walk  0.687 3.231981 1.570469          0.862
  5:         22       5 Random walk  0.767 3.750147 2.172579          1.386
 ---                                                                       
220:        127      10 Random walk -0.821 3.120249 1.745842          0.000
221:        127      11 Random walk -0.867 3.853982 1.972866          0.000
222:        127      12 Random walk -0.949 7.803330 3.092738          0.000
223:        127      13 Random walk -0.914 5.003912 2.318689          0.000
224:        127      14 Random walk -0.637 1.140452 0.774974          0.000
     underprediction dispersion log_score    mad ae_median   se_mean
               <num>      <num>     <num>  <num>     <num>     <num>
  1:           0.154   0.469424  1.887790 1.4826         1  0.367236
  2:           0.000   0.546522  2.231108 2.9652         3  8.479744
  3:           0.000   0.574624  1.862234 2.9652         0  0.142884
  4:           0.000   0.708469  2.175798 2.9652         3  9.284209
  5:           0.000   0.786579  2.489797 2.9652         3 15.327225
 ---                                                                
220:           1.400   0.345842  2.694333 1.4826         3  5.597956
221:           1.684   0.288866  2.871442 1.4826         3  6.687396
222:           2.832   0.260738  4.339301 1.4826         4 13.883076
223:           2.062   0.256689  3.239324 1.4826         3  8.265625
224:           0.496   0.278974  1.814469 1.4826         1  0.948676

See the documentation for get_metrics.forecast_sample() for information on the default metrics for forecasts that are represented as samples (in our case the samples generated by the stan model).

At a glance

Before we look in detail at the scores, we can use summarise_scores to get a quick overview of the scores. Don’t worry if you don’t understand all the scores yet; we will go through some of them in more detail in the next section, and you can find more information in the {scoringutils} documentation.

sc_scores |>
  summarise_scores(by = "model")
         model      bias      dss     crps overprediction underprediction
        <char>     <num>    <num>    <num>          <num>           <num>
1: Random walk 0.2147143 6.366294 13.50026        8.38367       0.7640446
   dispersion log_score     mad ae_median se_mean
        <num>     <num>   <num>     <num>   <num>
1:   4.352542  3.973451 18.2512  18.20982 1567.78
Take 2 minutes

Before we look in detail at the scores, what do you think the scores are telling you?

Continuous ranked probability score

What is the Continuous Ranked Probability Score (CRPS)?

The Continuous Ranked Probability Score (CRPS) is a proper scoring rule used to evaluate the accuracy of probabilistic forecasts. It is a generalization of the Mean Absolute Error (MAE) to probabilistic forecasts, where the forecast is a distribution rather than a single point estimate (as is the case for our forecasts).

The CRPS can be thought of as combining two key aspects of forecasting:
  1. The accuracy of the forecast, in terms of how close the predicted values are to the observed value.
  2. The confidence of the forecast, in terms of the spread of the predicted values.

By balancing these two aspects, the CRPS provides a comprehensive measure of the quality of probabilistic forecasts.

Key things to note about the CRPS
  • Small values are better
  • When scoring absolute values (e.g. number of cases) it can be difficult to compare forecasts across scales (i.e., when case numbers are different, for example between locations or at different times).

For distributions with a finite first moment (a mean exists and it is finite), the CRPS can be expressed as:

\[ CRPS(D, y) = \mathbb{E}_{X \sim D}[|X - y|] - \frac{1}{2} \mathbb{E}_{X, X' \sim D}[|X - X'|] \]

where \(X\) and \(X'\) are independent random variables sampled from the distribution \(D\). To calculate this we simply replace \(X\) and \(X'\) by samples from our posterior distribution and average over all pairs of samples.

This equation can be broken down into two components:

Breakdown of the Components
  1. Expected Absolute Error Between Forecast and Observation: \(\mathbb{E}_{X \sim D}[|X - y|]\) This term represents the average absolute difference between the values predicted by the forecasted distribution \(D\) and the actual observed value \(y\). It measures how far, on average, the forecasted values are from the observed value. A smaller value indicates that the forecasted distribution is closer to the observed value.

  2. Expected Absolute Error Between Two Forecasted Values: \(\frac{1}{2} \mathbb{E}_{X, X' \sim D}[|X - X'|]\) This term represents the average absolute difference between two independent samples from the forecasted distribution \(D\). It measures the internal variability or spread of the forecasted distribution. A larger value indicates a wider spread of the forecasted values.

Interpretation
  • First Term (\(\mathbb{E}_{X \sim D}[|X - y|]\)): This term penalizes the forecast based on how far the predicted values are from the observed value. It ensures that the forecast is accurate in terms of proximity to the actual observation.

  • Second Term (\(\frac{1}{2} \mathbb{E}_{X, X' \sim D}[|X - X'|]\)): This term accounts for the spread of the forecasted distribution. It penalizes forecasts that are too uncertain or have a wide spread. By subtracting this term, the CRPS rewards forecasts that are not only accurate but also confident (i.e., have a narrow spread).
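
To make the two terms concrete, here is a minimal sketch (with made-up numbers) that estimates the CRPS from a set of predictive samples using exactly this decomposition:

# CRPS estimated from samples: E|X - y| - 0.5 * E|X - X'|
crps_from_samples <- function(samples, observed) {
  term1 <- mean(abs(samples - observed))                  # accuracy term
  term2 <- 0.5 * mean(abs(outer(samples, samples, "-")))  # spread term
  term1 - term2
}
samples <- rnorm(1000, mean = 10, sd = 2)  # a hypothetical predictive sample
crps_from_samples(samples, observed = 12)

Try changing the mean or the spread of the samples to see how the two terms trade off against each other.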

Whilst the CRPS is a very useful metric, it can be difficult to interpret in isolation. It is often useful to compare the CRPS of different models, or of the same model under different conditions. For example, let’s compare the CRPS across different forecast horizons.

sc_scores |>
  summarise_scores(by = "horizon") |>
  ggplot(aes(x = horizon, y = crps)) +
  geom_point() +
  labs(title = "CRPS by daily forecast horizon", 
       subtitle = "Summarised across all forecasts")

and at different time points.

sc_scores |>
  summarise_scores(by = "target_day") |>
  ggplot(aes(x = target_day, y = crps)) +
  geom_point() +
  labs(title = "CRPS by forecast start date", 
       subtitle = "Summarised across all forecasts", x = "forecast date")

Take 5 minutes

How do the CRPS scores change based on forecast date? How do the CRPS scores change with forecast horizon? What does this tell you about the model?

  • The CRPS scores increase for forecast dates where incidence is higher.
  • The CRPS scores increase with forecast horizon.
  • As the CRPS is an absolute measure it is hard to immediately know if the CRPS increasing with forecast date indicates that the model is performing worse.
  • However, the CRPS increasing with forecast horizon is a sign that the model is struggling to capture the longer term dynamics of the epidemic.

PIT histograms

As well as the CRPS we can also look at the calibration and bias of the model. Calibration is the agreement between the forecast probabilities and the observed frequencies. Bias is a measure of how likely the model is to over- or underpredict the observed values.

There are many ways to assess calibration and bias but one common way is to use a probability integral transform (PIT) histogram. This is a histogram of the cumulative distribution function of the forecast evaluated at the observed value.

Interpreting the PIT histogram
  • Ideally PIT histograms should be uniform.
  • If it is U-shaped then the model is overconfident, and if it is hump-shaped (an inverted U) then the model is underconfident.
  • If it is skewed then the model is biased towards the direction of the skew.

Continuous Case

For a continuous random variable \(X\) with cumulative distribution function (CDF) \(F_X\), the PIT is defined as:

\[ Y = F_X(X) \]

where \(Y\) is uniformly distributed on \([0, 1]\).
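
As a quick simulated illustration of how miscalibration shows up (a toy example, not the session's data): if the data come from a standard normal distribution but the forecast is too narrow, the PIT values pile up near 0 and 1, giving the U shape described above; a forecast that is too wide gives the opposite, hump-shaped pattern.

set.seed(42)
x <- rnorm(1000)                                    # observations from N(0, 1)
pit_overconfident <- pnorm(x, mean = 0, sd = 0.5)   # forecast too narrow
pit_underconfident <- pnorm(x, mean = 0, sd = 2)    # forecast too wide
hist(pit_overconfident, breaks = 10, main = "Overconfident forecast: U-shaped PIT")
hist(pit_underconfident, breaks = 10, main = "Underconfident forecast: hump-shaped PIT")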

Integer Case

When dealing with integer forecasts, the standard PIT does not yield a uniform distribution even if the forecasts are perfectly calibrated. To address this, a randomized version of the PIT can be used (Czado, Gneiting, and Held 2009). For an integer-valued random variable \(X\) with CDF \(F_X\), the randomized PIT is defined as:

\[ U = F_X(k-1) + v \cdot (F_X(k) - F_X(k-1)) \]

where:

  • \(k\) is the observed integer value.
  • \(F_X(k)\) is the CDF evaluated at \(k\).
  • \(v\) is a random variable uniformly distributed on \([0, 1]\).

This transformation ensures that \(U\) is uniformly distributed on \([0, 1]\) if the forecasted distribution \(F_X\) is correctly specified.
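
Here is a minimal sketch of the randomised PIT for count data (a toy Poisson example, not the session's data): when the forecast distribution matches the data-generating process, the resulting values should be approximately uniform.

set.seed(42)
lambda <- 5
k <- rpois(1000, lambda)   # observed counts
v <- runif(length(k))      # uniform random variable
# randomised PIT: F(k - 1) + v * (F(k) - F(k - 1))
u <- ppois(k - 1, lambda) + v * (ppois(k, lambda) - ppois(k - 1, lambda))
hist(u, breaks = 10, main = "Randomised PIT for a well-specified count forecast")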

Let’s first look at the overall PIT histogram.

sc_forecasts |>
  get_pit_histogram() |>
  ggplot(aes(x = mid, y = density)) +
  geom_col() +
  labs(title = "PIT histogram", x = "Quantile", y = "Density")

As before, let’s look at the PIT histogram by forecast horizon. To save space we will group horizons into bins of a few days each:

sc_forecasts |>
  mutate(group_horizon = case_when(
    horizon <= 3 ~ "1-3",
    horizon <= 7 ~ "4-7",
    horizon <= 14 ~ "8-14"
  )) |>
  get_pit_histogram(by = "group_horizon") |>
  ggplot(aes(x = mid, y = density)) +
  geom_col() + 
  facet_wrap(~group_horizon) +
  labs(title = "PIT by forecast horizon (days)")

and then for different forecast dates.

sc_forecasts |>
  get_pit_histogram(by = "target_day") |>
  ggplot(aes(x = mid, y = density)) +
  geom_col() + 
  facet_wrap(~target_day) +
  labs(title = "PIT by forecast date")

Take 5 minutes

What do you think of the PIT histograms? Do they look well calibrated? Do they look biased?

  • It looks like the model is biased towards overpredicting and that this bias gets worse at longer forecast horizons.
  • Looking across forecast dates, much of this bias appears to come from around the outbreak peak, where the model consistently overpredicts, but the model also overpredicts at other times.

Scoring on the log scale

We can also score on the logarithmic scale. This can be useful if we are interested in the relative performance of the model at different scales of the data, for example if we are interested in the model’s performance at capturing the exponential growth phase of the epidemic. In some sense scoring in this way can be an approximation of scoring the effective reproduction number estimates. Doing this directly can be difficult as the effective reproduction number is a latent variable and so we cannot directly score it.
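
To see why scoring on the log scale emphasises relative rather than absolute performance, here is a tiny illustration with made-up numbers: on the log scale, halving and doubling the observed value produce the same absolute error.

obs <- 100
pred <- c(50, 100, 200)
abs(pred - obs)             # absolute errors on the natural scale differ
abs(log(pred) - log(obs))   # on the log scale, halving and doubling score the same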

We again use scoringutils but first transform both the forecasts and observations to the log scale.

log_sc_forecasts <- sc_forecasts |>
  transform_forecasts(
    fun = log_shift,
    offset = 1,
    append = FALSE
  )

log_scores <- log_sc_forecasts |>
  score()

For more on scoring on the log scale see the paper by Bosse et al. (2023).

At a glance

log_scores |>
  summarise_scores(by = "model")
         model      bias        dss      crps overprediction underprediction
        <char>     <num>      <num>     <num>          <num>           <num>
1: Random walk 0.1774554 -0.6084371 0.2451112       0.121214      0.03678654
   dispersion log_score       mad ae_median   se_mean
        <num>     <num>     <num>     <num>     <num>
1: 0.08711065 0.5440927 0.3761217 0.3379461 0.2082002
Take 2 minutes

Before we look in detail at the scores, what do you think the scores are telling you? How do you think they will differ from the scores on the natural scale?

CRPS

log_scores |>
  summarise_scores(by = "horizon") |>
  ggplot(aes(x = horizon, y = crps)) +
  geom_point() +
  labs(title = "CRPS by daily forecast horizon, scored on the log scale")

and across different forecast dates

log_scores |>
  summarise_scores(by = "target_day") |>
  ggplot(aes(x = target_day, y = crps)) +
  geom_point() +
  labs(title = "CRPS by forecast date, scored on the log scale")

Take 5 minutes

How do the CRPS scores change based on forecast date? How do the CRPS scores change with forecast horizon? What does this tell you about the model?

  • As on the natural scale, the CRPS scores increase with forecast horizon, but the increase now appears to be linear rather than exponential.
  • The CRPS scores for forecast dates near the outbreak peak have come down relative to other forecast dates, but this is still the period where the model performs worst.

PIT histograms

Let’s first look at the overall PIT histogram.

log_sc_forecasts |>
  get_pit_histogram(by = "model") |>
  ggplot(aes(x = mid, y = density)) +
  geom_col() +
  labs(title = "PIT histogram, scored on the log scale")

As before, let’s look at the PIT histogram by forecast horizon

log_sc_forecasts |>
  mutate(group_horizon = case_when(
    horizon <= 3 ~ "1-3",
    horizon <= 7 ~ "4-7",
    horizon <= 14 ~ "8-14"
  )) |>
  get_pit_histogram(by = "group_horizon") |>
  ggplot(aes(x = mid, y = density)) +
  geom_col() +
  facet_wrap(~group_horizon) +
  labs(title = "PIT by forecast horizon, scored on the log scale")

and then for different forecast dates.

log_sc_forecasts |>
  get_pit_histogram(by = "target_day") |>
  ggplot(aes(x = mid, y = density)) +
  geom_col() +
  facet_wrap(~target_day) +
  labs(title = "PIT by forecast date, scored on the log scale")

Take 5 minutes

What do you think of the PIT histograms? Do they look well calibrated? Do they look biased?

  • The overall PIT histograms suggest that the model is less biased towards overpredicting when scored on the log scale than on the natural scale, but it is still biased. This makes sense when we think back to the comparison of reproduction number estimates and forecasts we made earlier, where the model was consistently overpredicting the reproduction number.
  • By forecast horizon the model is still biased towards overpredicting, but this bias is less pronounced than on the natural scale.
  • Towards the beginning and end of the forecast period the model appears to be well calibrated on the log scale, but it is biased towards overpredicting in the middle of the forecast period.
  • This matches our knowledge of the underlying reproduction number, which was initially constant, then began to decrease, and finally stabilised towards the end of the outbreak.

Going further

  • In which other ways could we summarise the performance of the forecasts?
  • What other metrics could we use?
  • There is no one-size-fits-all approach to forecast evaluation, often you will need to use a combination of metrics to understand the performance of your model and typically the metrics you use will depend on the context of the forecast. What attributes of the forecast are most important to you?
  • There are many other metrics that can be used to evaluate forecasts. The documentation for the {scoringutils} package has a good overview of these metrics and how to use them.
  • One useful way to think about evaluating forecasts is to consider exploring the scores as a data analysis in its own right. For example, you could look at how the scores change over time, how they change with different forecast horizons, or how they change with different models. This can be a useful way to understand the strengths and weaknesses of your model. Explore some of these aspects using the scores from this session.

Wrap up

References

Bosse, Nikos I., Sam Abbott, Anne Cori, Edwin van Leeuwen, Johannes Bracher, and Sebastian Funk. 2023. “Scoring Epidemiological Forecasts on Transformed Scales.” Edited by James M McCaw. PLOS Computational Biology 19 (8): e1011393. https://doi.org/10.1371/journal.pcbi.1011393.
Czado, Claudia, Tilmann Gneiting, and Leonhard Held. 2009. “Predictive Model Assessment for Count Data.” Biometrics 65 (4): 1254–61. https://doi.org/10.1111/j.1541-0420.2009.01191.x.
Gneiting, Tilmann, and Adrian E Raftery. 2007. “Strictly Proper Scoring Rules, Prediction, and Estimation.” Journal of the American Statistical Association 102 (477): 359–78. https://doi.org/10.1198/016214506000001437.