A retrospective assessment of COVID-19 model performance in the USA.
Kyle J Colonna, Gabriela F Nane, Ernani F Choma, Roger M Cooke, John S Evans
Published in: Royal Society Open Science (2022)
Coronavirus disease 2019 (COVID-19) forecasts from over 100 models are readily available. However, little published information exists regarding the performance of their uncertainty estimates (i.e. probabilistic performance). To evaluate their probabilistic performance, we employ the classical model (CM), an established method typically used to validate expert opinion. In this analysis, we assess both the predictive and probabilistic performance of COVID-19 forecasting models during 2021. We also compare the performance of aggregated forecasts (i.e. ensembles) based on equal and CM performance-based weights to an established ensemble from the Centers for Disease Control and Prevention (CDC). Our analysis of forecasts of COVID-19 mortality from 22 individual models and three ensembles across 49 states indicates that: (i) good predictive performance does not imply good probabilistic performance, and vice versa; (ii) models often provide tight but inaccurate uncertainty estimates; (iii) most models perform worse than a naive baseline model; (iv) both the CDC and CM performance-weighted ensembles perform well; but (v) while the CDC ensemble was more informative, the CM ensemble was more statistically accurate across states. This study presents a worthwhile method for appropriately assessing the performance of probabilistic forecasts and can potentially improve both public health decision-making and COVID-19 modelling.
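For readers unfamiliar with the classical model, the sketch below illustrates how its two core scores are conventionally computed: statistical accuracy (calibration), a chi-square p-value testing whether realizations fall between a model's stated quantiles at the expected rates, and information, the mean relative information of the forecast distribution against a uniform background measure. This is a minimal illustration, not the paper's exact implementation: it assumes a three-quantile (5%, 50%, 95%) forecast format, and the function names, intrinsic-range bounds and calibration cutoff are hypothetical choices made here for concreteness.

```python
import numpy as np
from scipy.stats import chi2

# Expected probability mass in each interquantile bin for 5%/50%/95% assessments.
P_THEORETICAL = np.array([0.05, 0.45, 0.45, 0.05])

def calibration_score(quantiles, realizations):
    """Statistical accuracy (calibration) in the classical model.

    quantiles: (n_items, 3) array of a model's 5%, 50% and 95% quantile
        forecasts per item (e.g. weekly deaths for one state and date).
    realizations: (n_items,) array of observed values.
    Returns the p-value of a chi-square test that realizations are
    consistent with the stated quantiles.
    """
    n = len(realizations)
    # Interquantile bin (0..3) that each realization falls into.
    bins = (realizations[:, None] > quantiles).sum(axis=1)
    s = np.bincount(bins, minlength=4) / n          # empirical bin mass
    mask = s > 0
    kl = np.sum(s[mask] * np.log(s[mask] / P_THEORETICAL[mask]))
    # Asymptotically, 2*n*KL is chi-square distributed with 3 degrees of freedom.
    return chi2.sf(2 * n * kl, df=3)

def information_score(quantiles, lo, hi):
    """Mean relative information versus a uniform background measure on an
    analyst-chosen intrinsic range [lo, hi] per item (assumes lo < q05 and
    q95 < hi so all bin widths are positive)."""
    edges = np.column_stack([lo, quantiles[:, 0], quantiles[:, 1],
                             quantiles[:, 2], hi])
    rel_widths = np.diff(edges, axis=1) / (hi - lo)[:, None]
    return np.mean(np.sum(P_THEORETICAL * np.log(P_THEORETICAL / rel_widths),
                          axis=1))

def cm_weight(quantiles, realizations, lo, hi, cutoff=0.01):
    """Unnormalized CM weight: calibration times information, zeroed below a
    calibration cutoff (the cutoff value here is illustrative; in practice it
    is often optimized)."""
    cal = calibration_score(quantiles, realizations)
    return cal * information_score(quantiles, lo, hi) if cal >= cutoff else 0.0
```

To form a performance-weighted ensemble of the kind compared above, these weights would be computed for every model on past forecasts and normalized to sum to one; calibration dominates the product, so models with tight but inaccurate uncertainty estimates receive little weight even when they are highly informative.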
Keyphrases
- coronavirus disease
- sars cov
- public health
- respiratory syndrome coronavirus
- decision making
- healthcare