Coronavirus: techniques from physics promise better COVID-19 models—can they deliver?
Never before has the subject of mathematical modelling been so prominently in the news. The interest in the techniques used to predict the development of the coronavirus pandemic was given a new focus recently, when prominent neuroscientist Karl Friston advocated using something called generative or dynamic causal models.
Taking inspiration from physicist Richard Feynman’s quote, “What I cannot create, I do not understand”, Friston suggests that generative models could allow us to “look under the bonnet”, capturing the mathematical structure of the pandemic and inferring its causes.
Friston is a researcher with an impressive track record, cited by other scientists two and a half times as much as the Nobel prize-winning Feynman. Friston’s model predicted that the number of new COVID-19 cases in London would peak on April 5 and deaths would peak on April 10, just two days after the data now suggests the actual peak occurred. He also claims his model can be run from start to finish in a matter of minutes, while conventional models would “take you a day or longer with today’s computing resources”.
This all sounds impressive, but is it perhaps too good to be true? Scientists have expressed both intrigue and scepticism at this neurobiologist’s suggestion of using modelling ideas from physics in the field of epidemiology, not least for his use of the term “dark matter” to describe unknown factors in the model. Let’s have a quick look under the bonnet.
What is a generative model?
The easiest way to explain a generative model is to start with a much simpler “fitting model”. This basically involves plotting all the data points you have (for example the number of deaths from COVID-19 each day) on a chart and using maths to work out where to place a curved line that best fits their pattern. You can then continue that curve to forecast future data points. The White House was recently criticised for using such a model to forecast a fall in the COVID-19 death rate.
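To make that concrete, here is a minimal sketch of curve fitting in Python. The daily death counts are invented and the quadratic curve is an arbitrary choice; this is an illustration of the technique, not the White House’s actual model.

```python
# A minimal curve-fitting sketch with invented daily death counts:
# fit a simple polynomial to the observed points, then continue the
# curve forward as a "forecast". Illustrative only.
import numpy as np

days = np.arange(10)                                       # observed days 0..9
deaths = np.array([3, 5, 9, 16, 25, 38, 47, 52, 50, 44])   # made-up daily deaths

coeffs = np.polyfit(days, deaths, deg=2)   # best-fitting quadratic
curve = np.poly1d(coeffs)

future_days = np.arange(10, 15)            # extend the same curve five days ahead
print(np.round(curve(future_days)))
```

The forecast is only as good as the assumption that the future keeps following the same curve, which is precisely the weakness critics pointed to.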
A generative model similarly starts with the existing data points but also includes a description of the possible causes for those points and how they are related. Instead of simply fitting a line to the data points, the model uses a technique called Bayesian inference to specify which variables to include in its calculations and to what extent, based on an understanding of the probabilities associated with the data.
You can then use this model specification to produce a forecast by generating new data points, but you can also use it to work out what potential factors have a strong influence on the outcomes. Such models are used, for instance, to assist in functional magnetic resonance imaging of the brain or to model populations of neurons.
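As a toy illustration of the idea (and emphatically not Friston’s actual dynamic causal model), the sketch below assumes daily case counts are generated by a Poisson process whose mean grows exponentially at an unknown rate. Bayesian inference then weighs candidate growth rates against some invented observations, and the most probable “cause” is used to generate new data points.

```python
# A toy generative model (not Friston's dynamic causal model): assume the
# daily case count is Poisson-distributed around a mean that grows
# exponentially at an unknown rate, then use Bayesian inference to work out
# which growth rates are most probable given some invented observations.
import numpy as np
from scipy.stats import poisson

days = np.arange(8)
cases = np.array([4, 6, 9, 15, 21, 33, 50, 74])      # hypothetical daily cases

rates = np.linspace(0.1, 0.8, 200)                    # candidate growth rates
prior = np.ones_like(rates) / len(rates)              # flat prior belief

# Likelihood: how probable the observed cases are under each candidate rate
likelihood = np.array(
    [poisson.pmf(cases, 4 * np.exp(r * days)).prod() for r in rates]
)
posterior = likelihood * prior
posterior /= posterior.sum()                          # Bayes' rule, normalised

best_rate = rates[np.argmax(posterior)]
print(f"most probable growth rate: {best_rate:.2f} per day")

# The same fitted "cause" can now generate new data points as a forecast
print(np.round(4 * np.exp(best_rate * np.arange(8, 11))))
```

A real dynamic causal model includes far more causes and richer dynamics, but the principle is the same: weigh the possible causes by how probable they make the data, then let the winning causes generate the forecast.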
So how well does Friston’s generative model really forecast the pandemic? The headline result of correctly predicting the peak of new cases in London as April 5 sounds impressive, but it is a little misleading. When you carefully read Friston and his colleagues’ paper, you can see that they made this prediction on April 4, just one day in advance.
And unfortunately, the model mispredicts all later data points. It forecasts 14,000-22,000 deaths in the UK by early June (we have actually recorded around 40,000) and that we should have had fewer than 200 cases per day in the last two weeks, when in reality there have been more than 1,500 per day.
Lastly, the model predicts that one in every four to five confirmed cases results in a death, which would either make COVID-19 nearly as fatal as Ebola, or mean that only about one in 20 people who catch the disease is actually confirmed, which at this point seems highly unlikely. To summarise, it’s a pretty appalling forecast.
But while the model has shortcomings, Friston’s idea of generative modelling does have a distinct advantage. It is naturally equipped to handle uncertain assumptions, so you can easily generate results with uncertainty ranges without having to run simulations many times.
This contrasts, for example, with the many runs needed for the COVID-19 simulations my colleagues and I have been doing as part of the HiDALGO project. That said, all the simulations I have attempted to run, including the COVIDSim model developed by Imperial College London that has been used to inform UK government policy, can finish on a single supercomputer node in an hour or less.
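For contrast, the “many runs” approach looks roughly like this: repeatedly draw uncertain parameters from assumed ranges, run a simple SIR-style simulation each time, and read the uncertainty range off the spread of outcomes. The model, population and parameter ranges below are invented for illustration; this is not the COVIDSim code.

```python
# Sketch of the "many runs" approach: draw uncertain parameters from assumed
# ranges, run a simple SIR-style simulation each time, and read the
# uncertainty range off the spread of outcomes. Not the COVIDSim code; the
# population and parameter ranges are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

def run_sir(beta, gamma, population=1_000_000, infected=100, days=120):
    """Return the peak number of simultaneously infected people."""
    s, i, r = population - infected, infected, 0.0
    peak = i
    for _ in range(days):
        new_infections = beta * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        peak = max(peak, i)
    return peak

# 1,000 runs, each with transmission and recovery rates drawn from assumed ranges
peaks = [run_sir(beta=rng.uniform(0.25, 0.4), gamma=rng.uniform(0.10, 0.15))
         for _ in range(1000)]

print("peak infections, 5th to 95th percentile:",
      np.percentile(peaks, [5, 95]).round())
```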
More data needed
In general, the principles of generative modelling can be an effective way to determine how different causes could contribute to the simulation outcome. For this, the conceptual model does need to include all the relevant causes, and the training data needs to cover enough relevant aspects to pin down the most important behaviours.
With this in mind, it’s worth mentioning Friston’s claim that Germany has had fewer COVID-19 deaths because it has more “immunological ‘dark matter’ – people who are impervious to infection, perhaps because they are geographically isolated or have some kind of natural resistance”. I found this an amusing statement, not least as someone who has done modelling work on actual dark matter, the unknown theoretical substance used to account for gaps in our understanding of matter in the universe.
Friston’s generative model omits more than 90% of the locations relevant for studying transmission of the disease, such as schools, supermarkets, parks and nightclubs. Instead, in his model people are either at home, at work, in a critical care unit or in a morgue.