The "end of the pandemic" from a modeling perspective


Christian Drosten has recently been quoted extensively on the subject of the "end of the pandemic," and Federal Minister Rauch has now held out the prospect of an end to COVID-19 measures. Both statements are ultimately political and consist of a melange of scientific evidence, practical intervention planning, and pragmatic, supposedly empathy-free assessment.

It would be presumptuous of me to comment on whether these assessments are justified. I would just like to briefly illuminate their relation to models. One lesson we can learn from the pandemic is: being knowledgeable in one subfield does not qualify you to comment on everything. I have always tried to keep that in mind, for example with respect to whether masks work or how good a vaccine is. This is not easy, especially for modelers, because the impact of precisely these aspects within the model is highly relevant for our results and thus for our statements. In this respect, we have to rely on such findings and refer to them.

This is easier when there are clear scientific findings. Then I also dare to give concrete answers to journalists' questions, because they reflect scientific "common sense". For example, I have often said, in abbreviated form, that COVID-19 vaccination is useful in the recommended applications (meaning, in abbreviated form, that it clearly generates more benefit than harm). It becomes more difficult when the evidence is not yet clear or is changing. A very good example is the assumed precise effectiveness of the vaccinations (from the end of 2020). The burning question at the time was whether vaccine protection would last in the long term, and specifically whether vaccination permanently protects against infection, i.e. has a sterilizing effect. Since this could not be estimated, we assumed both variants in our modeling of COVID-19 vaccination [1] and carried both forward for further considerations. In practice, put politely, this distinction unfortunately only partially came across: some in the media were happy to point to the lasting effect, even though it had not been confirmed. This is a good example of how complicated it is to deal with the impact of one's own statements and to "control" them.
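The difference between the two assumptions can be illustrated with a deliberately minimal compartment model. This is only a sketch under stated assumptions (simple SIR dynamics, illustrative parameters, a hypothetical 50% residual susceptibility in the non-sterilizing case); it is not the agent-based model of [1].

```python
# Minimal SIR-with-vaccination sketch contrasting the two assumptions
# discussed above: sterilizing immunity (vaccination blocks infection)
# vs. non-sterilizing ("leaky") immunity (vaccinated people can still
# become infected). All parameters are illustrative, not calibrated.

def simulate(sterilizing, days=180, dt=0.1,
             beta=0.25, gamma=0.1, vacc_coverage=0.5):
    # Population fractions: susceptible, infected, vaccinated.
    S = 1.0 - vacc_coverage - 1e-3
    I = 1e-3
    V = vacc_coverage
    # Under non-sterilizing immunity the vaccinated remain infectable,
    # here with an assumed 50% reduced susceptibility.
    v_susc = 0.0 if sterilizing else 0.5
    peak = I
    for _ in range(int(days / dt)):
        new_inf_S = beta * S * I          # infections among unvaccinated
        new_inf_V = beta * v_susc * V * I  # infections among vaccinated
        S -= new_inf_S * dt
        V -= new_inf_V * dt
        I += (new_inf_S + new_inf_V - gamma * I) * dt
        peak = max(peak, I)
    return peak

peak_sterile = simulate(sterilizing=True)
peak_leaky = simulate(sterilizing=False)
```

With everything else equal, the leaky-vaccine scenario produces the larger epidemic peak, because the vaccinated compartment still feeds the infection dynamics. This is exactly why carrying both variants through the analysis, rather than committing to one, mattered.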

But back to the "end of the pandemic". From a modeling point of view, we can distinguish between the assessment of reality and the view based on the available data. What is exciting about this: since we do not know the reality, the former only results from the latter, so it is a bit like the chicken and the egg, or a feedback loop. We can only arrive at the assessment of reality* by calculating backwards with the help of system knowledge and models. This is exciting, but also tricky.

* That we can never fully grasp reality is clear anyway; everything beyond that is left to philosophy. What is meant here, much simplified, is what "makes the difference" for observers: how different people evaluate a situation (differently) and why, which interventions change which developments, and so on. One then has to define precisely: For whom? With respect to which aspect? Which intervention? Which effects are considered? Which effects are ignored? And much more.

In the specific case of the COVID-19 development, we (also in the forecasting consortium) stopped modeling the spread of the epidemic in Austria at the beginning of summer 2022 and focused on hospital utilization. The reason for this decision was not a lack of necessity (the decision itself says nothing about that), but the lack of a data basis after the decline in testing. So we only determined what we could still calculate with models and what we no longer could, and ended our work in a certain area due to external limitations. Our assessment at the time, however, was that although this was not a good thing from a modeling point of view, there was no risk in the situation at that time of overlooking a dramatic development as a result. Again, such an assessment is only possible on the basis of many other considerations and model calculations, such as the fall scenarios of May/June [2].

Thus, ending the modeling says nothing about how we assess the situation; it results from a data limitation and is not our decision, but the consequence of political decisions. However, we do keep a close eye on the consequences, in the sense of the feedback described above, so that we can warn if necessary.

The same is true for modeling the current effective immunization of the population [2]. We discontinued this as of August 1, 2022, because, in addition to uncertainties in the current infection situation, the number of different variants of immunization had greatly increased, and with it the uncertainty of who-is-immunized-how-long-against-what. We described this in a short article [4]. The reason is the different vaccine efficacies, depending on the vaccine and the number of doses, but also the temporally different loss of protection against infection and against disease. Until the summer of 2022, we were able to model statistically quite well what the effective immunization was for the population (and for certain parts of it, though not for individuals). This was critical for being able to estimate quantitatively, and quite well, what the future trajectory would be.
Again, this was important because, at a certain point, the biggest influencing factors were no longer measures or other things, but the currently prevailing variant and the current immunization against infection. Of course, this evolution (whether measures matter) is not something modeling controls (it is not that we prefer certain factors or like others less); rather, the insight represents a model-based analysis of real-world developments based on real-world decisions, data, and systems knowledge. That we can understand and map this in models helps with further assessment in reality.
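The basic idea of "effective immunization" can be sketched in a few lines: average the current, waning protection against infection across the population, given individual vaccination dates. The exponential waning curve and all parameter values below are illustrative assumptions of mine, not the calibrated inputs of the consortium's model, and the sketch ignores precisely the complication that forced the discontinuation, namely many interacting variants of immunization.

```python
from datetime import date

def protection(days_since_dose, initial=0.9, half_life_days=90):
    """Assumed protection against infection, halving every half_life_days."""
    if days_since_dose < 0:  # dose lies in the future
        return 0.0
    return initial * 0.5 ** (days_since_dose / half_life_days)

def effective_immunization(last_dose_dates, today, population_size):
    """Population-average protection; unvaccinated people contribute 0."""
    total = sum(protection((today - d).days) for d in last_dose_dates)
    return total / population_size

# Toy population of five, three of whom were vaccinated at some point.
today = date(2022, 8, 1)
doses = [date(2022, 2, 1), date(2022, 5, 1), date(2022, 7, 1)]
share = effective_immunization(doses, today, population_size=5)
```

Once several vaccines, dose counts, infection histories, and variant-specific escape all need their own `protection` curve, and the infection history itself is no longer reliably observed, this population average loses its meaning, which is the situation described above.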

As of summer 2022, a model-based assessment of effective immunization in the population was therefore no longer possible, and with it the ability to say clearly how the spread dynamics would develop next. One can of course argue that this is negligent. Is it? This is where it gets tricky, because that assessment is up to others, above all, of course, physicians. From our point of view, the only important assessment was that we had reached the limits of the models and could no longer make a meaningful contribution with this approach. However, we modelers can still contribute something.

On the one hand, we continuously evaluate with our models whether and which influencing factors play a role and whether their respective effects are increasing. This is qualitative modeling; it is not used for concrete scenarios and certainly not for forecasts. But it helps to assess whether virus variants, immunity, seasonality, therapies, or other systemic or strategic aspects are changing their influence strongly, and whether interventions could or should be discussed. These could simply be measures, but also changes in care or many other things.
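One simple way to make such a qualitative check concrete is a one-at-a-time sensitivity analysis: perturb each influencing factor and see how strongly a summary quantity reacts. The toy proxy and baseline numbers below are illustrative stand-ins of mine, not the consortium's models; the point is only the mechanics of comparing factor influences.

```python
# One-at-a-time sensitivity sketch: which influencing factor currently
# moves a simple incidence proxy the most? Illustrative values only.

def incidence_proxy(transmissibility, immunity, seasonality):
    # Toy growth-factor proxy: more transmissible variants and stronger
    # seasonal forcing push it up; population immunity pushes it down.
    return transmissibility * (1.0 - immunity) * seasonality

baseline = {"transmissibility": 1.3, "immunity": 0.6, "seasonality": 1.1}

def sensitivity(factor, delta=0.1):
    """Relative change of the proxy when one factor is raised by 10%."""
    base = incidence_proxy(**baseline)
    bumped = dict(baseline)
    bumped[factor] *= 1.0 + delta
    return (incidence_proxy(**bumped) - base) / base

effects = {factor: sensitivity(factor) for factor in baseline}
```

Tracking how such effect sizes shift over time, rather than the absolute numbers, is what indicates whether a factor like a new variant or waning immunity is becoming the dominant driver and an intervention should be discussed.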

From the summer of 2022 onward, we also kept pointing out from a modeling perspective that COVID-19 is not going to go away [2], but that already in the fall of 2022 the biggest challenge would be the combination of different diseases, as well as the impact that COVID-19 would have on the whole healthcare system in the future. That is, how we will deal with competing infections such as influenza, RSV, and other respiratory diseases that come on top of COVID-19.

In the fall of 2022, we also discussed with the states with which we collaborate that singular modeling of COVID-19 for hospital burden no longer makes sense. Again, not because COVID-19 no longer exists, but because looking at it makes sense only in an overall context that must include other conditions, the scheduling of elective procedures, and staffing dynamics. As the TU News of December 16, 2022 [5] summarized, the focus has shifted over time: "We are trying to convey that, given the context, the focus needs to be on modeling the health care system itself."

So what insights can the tools of modeling and simulation contribute in the current situation, and what conclusions have we drawn as of summer 2022?

  1. Better surveillance for infectious diseases would be useful, namely surveillance that focuses on a bundle of diseases. The ECDC has issued recommendations in this regard, and in Austria this was also recommended in the Virus Variant Management Plan. Basically, collecting tests in hospitals and in private practices is recommended, as well as wastewater analyses and sequencing of samples. Had this been implemented optimally, it would have been much cheaper and more effective to assess the impact on the health system and the risk to people, and to analyze developments by monitoring many viruses in parallel. This would be important at least now.
  2. The focus of further planning (or rather modeling) should be, on the one hand, on the impact in hospitals (care of COVID-19, other infectious diseases, other acute diseases, elective interventions, follow-up and prevention, and much more), and on the other hand on the impact on care in private practice (acute COVID-19 cases, whether therapies are being used effectively, long COVID, prevention). This does not mean that COVID-19 is no longer there, only that it should now be modeled differently in order to advise strategy. This includes how to deal sustainably with prevention measures for different population groups and areas of life. In this respect, the focus of advisory bodies such as GECKO would also have to change significantly from a modeling perspective.
  3. Evaluation of previous measures, interventions, and strategies must be done independently, not only to improve the system and be better prepared, but also to be able to further improve the models. This is not only about the often-cited data that is needed, but also about preparing the models and making them better. The ECDC Scenario Hub [6] has intensified the cooperation between European research groups, as described in [7], and is now readjusting its focus. A transparent and independent evaluation, not only of the measures, is also absolutely necessary for the modeling.

[1] B. Jahn, G. Sroczynski, M. Bicher, C. Rippinger, et al., "Targeted COVID-19 Vaccination (TAV-COVID) Considering Limited Vaccination Capacities – An Agent-based Modeling Evaluation," Vaccines (MDPI), special issue COVID-19 Vaccines and Vaccination, 2021. doi: 10.3390/vaccines9050434