COVID-19 in Austria: Why scientists learned to speak with one voice

During the SARS-CoV-2 pandemic, the Austrian government tasked us with predicting confirmed SARS-CoV-2 cases and hospitalizations over the next two weeks to support decisions regarding strengthening or easing of interventions. This is how we responded and learned the value of combining models.
Europe, March 2020, from the perspective of an epidemiological modeller: SARS-CoV-2 arrives on the European continent. First estimates of basic parameters of the virus, such as the basic reproduction number and the infection-to-hospitalization ratio, become available in the literature. Back-of-the-envelope calculations make it clear very quickly that unmitigated spread of the virus would overload existing hospital capacities several times over. Nervousness grows as the first clusters bloom. To complicate the situation, presymptomatic transmission means that the virus will always be one step ahead unless we manage to build a proper test-trace-isolate system.

In Europe, the Italian region of Lombardy is hit particularly early and hard by the novel coronavirus, sending shockwaves through the neighbouring countries. This includes Austria, which has a renowned healthcare system with one of the largest numbers of hospital beds and physicians per capita worldwide. Austria’s public health research infrastructure, however, is not as generously equipped. From the media, Austrian government officials learn what disease modellers think about the developing situation and recognize this as a blind spot in their institutional landscape. They call for a meeting with three modelling teams and ask two questions: When will the wave peak? And will hospitals be able to cope with the patient numbers? But they want a single, harmonized answer to each of these questions, not three different ones. We as modellers had to learn to either speak with one voice or not be heard at all. In this way, the COVID Forecast Consortium was formed in Austria and tasked with answering these questions. We describe this process in our paper “Supporting COVID-19 Policy-Making with a Predictive Epidemiological Multi-Model Warning System”.

The consortium consists of three modelling teams, from the Austrian National Public Health Institute, the Technical University of Vienna / dwh, and the Medical University of Vienna / Complexity Science Hub Vienna. By combining the outputs of different epidemiological models, operated independently by the three teams, we could develop a consolidated view of the epidemiological situation and of how it was expected to develop, without becoming overly reliant on the strengths or limitations of any single model.
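The combination step can be illustrated with a minimal sketch: several independently produced forecast trajectories are pooled into one harmonized forecast, with the spread across models reported as an uncertainty range. The model outputs below and the simple mean/range pooling are illustrative assumptions, not the consortium's actual method, which is detailed in the paper.

```python
import statistics

# Hypothetical daily case forecasts from three independent models
# (purely illustrative numbers).
model_a = [1200, 1260, 1330, 1410, 1500]
model_b = [1180, 1230, 1290, 1350, 1420]
model_c = [1250, 1320, 1400, 1490, 1590]

forecasts = [model_a, model_b, model_c]

# Harmonized forecast: pointwise mean across the models,
# with the model spread reported as a simple uncertainty range.
combined = [statistics.mean(day) for day in zip(*forecasts)]
lower = [min(day) for day in zip(*forecasts)]
upper = [max(day) for day in zip(*forecasts)]

for d, (c, lo, hi) in enumerate(zip(combined, lower, upper), start=1):
    print(f"day {d}: {c:.0f} (range {lo}-{hi})")
```

Only the pooled numbers would be published, never the individual trajectories, mirroring the consortium's decision to share risk among its members.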

We published our first forecasts shortly after the first national lockdown was implemented in Austria, but before it took effect and case numbers began to decline. The basic ingredients of our joint modelling approach were developed in many frantic overnight programming sessions and under increasing public spotlight. To share risk among the consortium members, we decided to only publish harmonized forecasts for case numbers and bed usage, but not results from individual models. We further decided to only publish short-term forecasts that span up to two weeks, because we were well aware of the uncertainties. First, uncertainties grow exponentially with the forecast horizon in epidemic spreading models. Second, we faced a volatile political environment with regard to the implementation and easing of non-pharmaceutical interventions.

The peak of the first wave in terms of cases, hospitalized patients and occupied intensive care beds fell well within our projected range. Our models also anticipated that case numbers would continue to decrease as interventions were lifted and summer approached. The summer gave us time to consolidate our forecasting system methodologically and to take care of all the details for which there had been no time during the overnight sessions in the early phase of the pandemic. We also had the opportunity to develop a reporting scheme for our forecasts, including a continuous evaluation of the accuracy of past forecasts. Reporting channels were established to ensure that our regionalized forecasts were communicated promptly and directly to regional authorities and hospital managers.
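The continuous accuracy evaluation mentioned above amounts to a rolling backtest: past forecasts are compared against the values that were later observed, using a summary error metric. The numbers and the choice of mean absolute percentage error here are illustrative assumptions; the consortium's actual evaluation scheme is described in the paper.

```python
# Hypothetical past forecasts vs. subsequently observed case numbers.
forecast = [1500, 1650, 1800, 1950]
observed = [1480, 1700, 1760, 2100]

def mape(pred, actual):
    """Mean absolute percentage error between forecast and observation."""
    return 100 * sum(abs(p - a) / a for p, a in zip(pred, actual)) / len(actual)

error = mape(forecast, observed)
print(f"MAPE over the evaluation window: {error:.1f}%")
```

Tracking such a metric over time makes it visible when forecast errors stop being random noise and start drifting systematically, a point the next paragraphs return to.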

We only began to grasp the actual value of our forecasting system in autumn 2020. Our models anticipated increasing case numbers due to changing seasonal influences, ranging from behavioural changes as people returned to schools and workplaces after the summer to increased transmission risks due to lower temperature and humidity. Within a couple of days in October, however, the effective reproduction number jumped from around 1.1 to almost 1.4, while case numbers and hospital occupancy surged in a way that none of our models anticipated. A second lockdown quickly became necessary.

At this point we noticed a pattern. If our joint forecasts were off, this was typically not because one of the models was substantially less accurate than the others (in such cases the agreement between the models was typically higher than their agreement with the actual development). Evaluating possible reasons for these systematic deviations, we found that the impact of changing external conditions provided a plausible explanation. For instance, underestimations of the dynamics in the low-incidence phase at the end of the summer coincided with sudden increases in travel-associated cases, signalling the return of people to workplaces and schools. We also learned (and published in a different paper) that sudden changes in meteorological variables during autumn or spring impacted the case dynamics, suggesting that short-term epidemiological forecasting is to a large extent also weather forecasting.

An important signal from our forecasting system was therefore not whether the forecasts were accurate – often this just meant that nothing had changed in the factors driving the dynamics. Systematic deviations between models and data suggested that some external factor not captured in our models had changed, and that we needed to dig deeper to understand what was going on and to resharpen our tools. Still, the accuracy of our combined forecasts was higher than that of each individual model, demonstrating the benefit and robustness of multi-model forecasting approaches.

Before the pandemic started, all of us were experienced in developing models and scientifically publishing their output. What we were unprepared for, however, was doing this under immense public scrutiny and time pressure, reporting directly to decision-makers who had to make extremely far-reaching societal and economic decisions quickly, based on limited and uncertain information. There is an abundance of papers describing how infectious disease models work – after the pandemic even more so than before. Instead, we wanted to write an article that sheds light on all the technical steps and solutions we developed to translate the outputs of different models into a decision-support system that meets the requirements of policy-makers outlined above. This is what we as modellers were not really prepared for, and this is, we strongly believe, a story well worth telling.

Christopher J Kopka (about 2 months ago):

Peter this is a great piece. I shared it with several co-authors of the recently published Nature consensus statement on ending COVID-19 as a public health threat. https://www.nature.com/articles/s41586-022-05398-2

To my read, it's a great example of (a) 'whole-of-society' approaches fostering and demonstrating (b) collaboration. 

Much to be applauded and emulated from your experience. Thanks for this!