By now, COVID-19 needs little introduction, given its global spread. Epidemiological models exist to predict how the pandemic will evolve. Such predictions come, however, with a high degree of uncertainty, mainly because the data are scarce and uncertain. This can be quantified by performing an uncertainty analysis, which propagates the sources of uncertainty through the model and yields an interval prediction instead of a single number. The next thing one can do is run different scenarios by adapting the model input. This allows simulating the impact of a measure (or set of measures), such as social distancing or canceling mass events. However, one then gets the logical follow-up question: when can these measures be released or relaxed again? We will treat all of the above and demonstrate how a popular control algorithm from chemical engineering could be applied to dynamically calculate the "optimal" sequence of measures in order to meet certain custom-defined constraints. Before doing this, however, we need to define and calibrate a model suitable for such an approach.
We use an extension of the SEIR model, as shown in the figure below. This model splits the population into different categories: susceptible, exposed, infected and removed. The infected category is further broken down into super mild, mild, heavy and critical, while the removed category comprises the immune and deceased fractions. A super mild infection refers to asymptomatic people who are infected but unaware of their own infection. Recent figures from Chinese scientists put this number at 86% of all infections.
Transitions between the different fractions of the population are indicated by the arrows, and their rates are expressed by parameters in the model. The two most important parameters in such a model are: 1) the incubation period and 2) the rate of virus spread. Other parameters include the odds of having a super mild, mild, heavy or critical infection, and for each type of infection there is an infectious period, etc. All parameters except one were gathered from the available literature on the coronavirus. The parameter that remained to be calibrated is 'beta', which determines the rate at which individuals transition from susceptible to exposed. Beta can be interpreted as the degree of social interaction or the amount of exposure to the virus. It is this parameter that is targeted when governments impose restrictions on their citizens. We will, therefore, focus on this parameter.
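As a minimal sketch of the dynamics that beta governs, a plain SEIR core (not the full extended model with its infection subcategories) can be integrated with a simple Euler scheme. The incubation and infectious periods below are illustrative values, not the calibrated figures used in our model:

```python
# Minimal SEIR core (not the full extended model); parameter values are illustrative.
# beta: social interaction, sigma: 1/incubation period, gamma: 1/infectious period.
def simulate_seir(beta, sigma=1/5.2, gamma=1/7.0, N=11.5e6,
                  E0=100.0, days=365, dt=0.1):
    S, E, I, R = N - E0, E0, 0.0, 0.0
    per_day = round(1 / dt)
    daily = []
    for step in range(round(days / dt)):
        new_exposed = beta * S * I / N       # S -> E transitions (rate beta)
        new_infected = sigma * E             # E -> I after the incubation period
        new_removed = gamma * I              # I -> R after the infectious period
        S += -new_exposed * dt
        E += (new_exposed - new_infected) * dt
        I += (new_infected - new_removed) * dt
        R += new_removed * dt
        if step % per_day == 0:              # record once per simulated day
            daily.append((S, E, I, R))
    return daily

trajectory = simulate_seir(beta=0.266)       # exponential-phase estimate from the text
```

With beta = 0.266 and no intervention, the sketch reproduces the qualitative behavior discussed below: a large outbreak that depletes the susceptible pool.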
The average value of the social interaction parameter was determined by a least-squares fit to the number of reported cases over time in 5 European countries in the exponential phase (before government intervention). In addition, the data from the exponential phase observed in China was used to pinpoint the social interaction parameter in the absence of government response and general public awareness. This resulted in beta = 0.266±0.037. The obtained estimates for all countries, including Belgium, are similar and the observed variance is low. The model fit for Belgium is shown below.
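The exponential-phase fit can be sketched as ordinary least squares on the logarithm of the case counts, which yields the growth rate r; a standard SEIR linearization then links r to beta. The case numbers below are purely illustrative (not the actual country data we fitted), and the incubation and infectious periods are assumed, so the resulting value differs from our calibrated beta = 0.266:

```python
import math

# Hypothetical cumulative case counts during an exponential phase
# (illustrative numbers, not the actual data behind the fit in the text).
cases = [24, 33, 43, 59, 77, 103, 137, 182, 240, 318]

# Ordinary least squares on log(cases) yields the exponential growth rate r.
t = list(range(len(cases)))
y = [math.log(c) for c in cases]
n = len(t)
t_mean, y_mean = sum(t) / n, sum(y) / n
r = (sum((ti - t_mean) * (yi - y_mean) for ti, yi in zip(t, y))
     / sum((ti - t_mean) ** 2 for ti in t))

# The standard SEIR linearization around the disease-free state links r and beta:
# (r + sigma)(r + gamma) = sigma * beta, with sigma = 1/incubation period
# and gamma = 1/infectious period (both assumed here).
sigma, gamma = 1 / 5.2, 1 / 7.0
beta_fit = (r + sigma) * (r + gamma) / sigma
```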
The obtained estimate for the social interaction parameter in the presence of a drastic "China style" government action is beta = 0.011. The figure below indicates the degree to which the model matches the disease spread in China over the observed period. The obtained results indicate a 24-fold reduction in human-to-human contact during a strict quarantine. The jump in the observed data corresponds to a change in the applied measurement (diagnostic) method.
In South Korea, the government opted for an extensive screening and back-tracking strategy. Our model shows that the impact of "South Korean style" screening on virus spread is similar to a "China style" quarantine in terms of beta (= 0.027). From this, we conclude that extensive screening and case isolation is an attractive alternative to quarantine. However, to keep a strategy of intensive screening and back-tracking manageable, the total number of infected must remain below a certain threshold. At the height of the peak in South Korea this was still the case, and at that point the South Korean government relied on an app through which citizens could report their infection. In the current situation in Belgium, the virus has already spread beyond the point where back-tracking and case isolation are manageable. However, if quarantine measures are effective in temporarily suppressing the infection, this strategy will become feasible again in the (near) future.
For now, the reader can assume that a low beta value can be achieved through quarantine, intensive screening or a combination of both. We consider a minimal beta value of 0.03 a reasonable number to use in models for other countries (such as Belgium) when stringent measures have been taken. We will use that number in the remainder of the analysis.
According to good modeling practice, further model validation is needed. However, under data scarcity, this is the best we can do for now. To further build trust in the model one can recalibrate it when new measurement data becomes available to make maximum use of all accessible information.
In the remainder of the analysis, the social interaction parameter beta is discretized into four distinct levels for now because continuous actions between lower and upper beta values are difficult to interpret and implement in practice. Here is how we defined the discrete levels:
• beta_upper (0.26) corresponds to a behavior-as-usual scenario in which: 1) the government takes no action to contain the virus and 2) public awareness for the virus is low.
• beta_1 (0.2): As only containment data from China is available at the moment, these intermediate values are arbitrary. For now, this level is assumed to correspond to "mild" measures to prevent virus spread (e.g. canceling large events).
• beta_2 (0.1) is assumed to correspond to a state of elevated alertness in which strong measures are taken to prevent virus spread (e.g. closing schools, shops,...).
• beta_lower (0.03) corresponds to a quarantine situation similar to the Wuhan outbreak, or to the extreme testing strategy applied in South Korea, which appeared equally effective.
In the future, one can refine these intermediate values or update their meaning as more data becomes available from other European countries in relation to the measures taken.
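In code, these levels amount to a small lookup table. The numeric values are those defined above; the level names are our own labels:

```python
# The four discrete social-interaction levels as defined above.
BETA_LEVELS = {
    "behaviour_as_usual": 0.26,  # no government action, low public awareness
    "mild_measures":      0.20,  # e.g. cancelling large events (assumed meaning)
    "strong_measures":    0.10,  # e.g. closing schools and shops (assumed meaning)
    "quarantine":         0.03,  # Wuhan-style lockdown or SK-style screening
}
```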
Some scenarios for Belgium
Now that the model is calibrated with all currently available information and the levels of the parameter beta are defined, we can use it for scenario analysis. We apply it to the case of Belgium, with a population of roughly 11.5 million. Let's start with the ignorant leader who does not impose any restrictions. This results in the figure below. We get a very fast system response, and the right graph indicates a severe shortage of intensive care units (ICUs). With over 40,000 people in need of intensive care at the height of the pandemic, the Belgian ICU capacity would have been surpassed by a factor of 21.
Next, we simulate the scenario as currently implemented, including the "current" end date of the restrictions and a return to normal after April 5. We see the delay effect at the left tail, but a major peak still hits (albeit somewhat less intense). This means that relaxing after April 5 and going back to a regular lifestyle is not a good idea either. The large secondary outbreak occurs because the Belgian population has not become sufficiently immune by the end of the government measures: a large fraction of susceptible people is still present when the quarantine measures end.
A closer view of this scenario, including the latest data, is shown below and indicates that the model performs quite well. Recent press releases (26/03) have stated that the peak should be reached by the beginning of April; our model seems to back up this prediction.
A simple strategy to contain the virus is to minimize the severity of the measures while preventing the number of critical cases from exceeding the number of ICU beds. In other words: keep the red line as close as possible to the dashed red line (while staying below it), and do so by allowing the black dashed line (beta, i.e. social interaction/screening intensity) to take values as high as possible (less "burden" for the public). We can experiment by jumping between more and less stringent measures and by varying the time windows in between. An important note on these long-term simulations: a beta of 0.03 does not necessarily correspond to a quarantine. It could equally be achieved by screening, if the government is willing to learn from the South Korean example and implement it. Such an approach looks like the figure below.
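A minimal sketch of such an on-off switching rule, built on a plain SEIR core: tighten measures when critical cases approach ICU capacity, relax once they fall well below it. The critical-case fraction and the two trigger levels are our own assumptions, and the ICU capacity of ~1,900 beds is implied by the factor-21 figure quoted earlier:

```python
# On-off ("bang-bang") sketch: switch to beta_lo when critical cases approach
# ICU capacity, back to beta_hi once they fall well below it. The critical
# fraction, trigger levels and ICU capacity are illustrative assumptions.
def simulate_on_off(days=540, dt=0.1, N=11.5e6, E0=1000.0,
                    sigma=1/5.2, gamma=1/7.0, crit_frac=0.02,
                    icu_capacity=1900.0, beta_hi=0.26, beta_lo=0.03):
    S, E, I, R = N - E0, E0, 0.0, 0.0
    beta = beta_hi
    per_day = round(1 / dt)
    daily = []
    for step in range(round(days / dt)):
        critical = crit_frac * I
        if critical > 0.8 * icu_capacity:    # tighten before capacity is hit
            beta = beta_lo
        elif critical < 0.3 * icu_capacity:  # relax once pressure has dropped
            beta = beta_hi
        dS = -beta * S * I / N
        dE = beta * S * I / N - sigma * E
        dI = sigma * E - gamma * I
        dR = gamma * I
        S += dS * dt; E += dE * dt; I += dI * dt; R += dR * dt
        if step % per_day == 0:
            daily.append((critical, beta))
    return daily

daily = simulate_on_off()
peak_critical = max(c for c, _ in daily)
```

Note the margin built into the 0.8 trigger: after tightening, infections in the exposed pipeline keep feeding the critical compartment for a few more days, so switching exactly at capacity would overshoot it.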
This strategy seems to work at first sight. However, there is still a vast number of susceptible individuals (grey line in the left figure), which is problematic when further relaxing the measures. Moreover, the increase in beta is quite limited (not quite back to normal life), which means that screening measures would be needed for a long time. The figure below shows an extended simulation. We still get a severe peak because many individuals are still susceptible. There is some room for higher beta values in early 2021. This shows that the herd immunity approach is not really going to work in the short term. However, this could be just the amount of time we need to buy for a vaccine to become available, although the timing of the latter is also still uncertain.
Another scenario, with yet another sequence of on-off measures at different levels, is shown below. Here we increase beta to 0.2 in early 2021, but we still need periods of stricter measures to keep the outbreak from overloading the health care system.
The downside of the two analyses above is that they are "trial and error": we can think of many patterns for beta (different levels to jump between, different time windows to apply, ...). In engineering, a much smarter approach is used: process control. Here, a dedicated algorithm decides on actions based on observations of the system. Automating the above in its simplest form would yield on-off control. In engineering practice, however, much smarter controllers are available that actually use a process model. So, let us explore how we could use one of these algorithms, called model-based predictive control. We first briefly introduce the principle before demonstrating it.
Model-based predictive control (MBPC): short intro for the layman
Model predictive control is a group of algorithms developed from the 1970s onwards, specifically for discrete control in the process industry (discrete because computers operate in discrete time steps). The elegance of the technique is that it is simple and intuitive. The principle is shown in the figure below.
The idea is to bring the true system state (predicted output) in line with the setpoint (reference trajectory). This is achieved based on past control inputs and the "optimal" predicted control input over a future prediction horizon. This "optimal" sequence is found by solving an optimization problem in which a cost function is minimized. The cost function is typically a sum of quadratic terms that penalize deviations from setpoints and excessive control action, but it may also include actual economic costs. Since a new measurement becomes available at each time step, only the first control action is implemented and the procedure is repeated at the next step. This accounts for uncertainties in the model structure, which are always present, as perfect models do not exist. The control action is thus always based on the latest available data.
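The receding-horizon principle can be illustrated on a toy first-order system, entirely unrelated to the epidemic model: enumerate short sequences of discrete control levels, score each with a quadratic cost, apply only the first action, and re-plan. All numbers here are illustrative:

```python
from itertools import product

# Toy receding-horizon controller for x[k+1] = a*x[k] + u[k]: brute-force a
# short horizon of discrete control levels and implement only the first action.
def mpc_step(x, setpoint, levels=(0.0, 0.5, 1.0), horizon=3, a=0.9):
    best_cost, best_first = float("inf"), levels[0]
    for seq in product(levels, repeat=horizon):
        xi, cost = x, 0.0
        for u in seq:
            xi = a * xi + u                              # model prediction
            cost += (xi - setpoint) ** 2 + 0.1 * u ** 2  # setpoint + control cost
        if cost < best_cost:
            best_cost, best_first = cost, seq[0]
    return best_first

x, setpoint = 0.0, 5.0
for _ in range(30):                  # closed loop: re-plan at every time step
    u = mpc_step(x, setpoint)
    x = 0.9 * x + u                  # system responds; repeat with new state
```

The closed loop drives x to the setpoint and holds it there, even though the controller only ever commits to one action at a time.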
The beauty is that the technique is easily scalable to multiple outputs, all with their own setpoints that can be optimized simultaneously.
Let us now explore how this can be applied in conjunction with an epidemiological model, which in our opinion would be a useful engineering addition to the current predictive models.
Optimal measures over time using MBPC: some scenarios to demonstrate the potential
The figure below shows the optimal control sequence, i.e. the sequence in time of beta values to be imposed on the system in order to keep the critical cases just below the number of ICU beds over the prediction horizon (here chosen until June 2020). To compute this sequence, the controller uses the model and optimizes an objective function by evaluating a multitude of different beta sequences. Here, we opted to allow updates of beta only once per week, to keep the optimization problem tractable. This could be relaxed depending on how frequently the government would want to update the measures. In this demonstration, we only included the deviation from the setpoint in the cost function, i.e. the deviation between critical cases (red line) and ICU capacity (dashed red line). One can observe that the criterion of keeping the red line below the dashed line over the prediction horizon is met, and that the beta sequence moves between different levels while automatically anticipating when to reduce beta again to avoid overshoots that could harm the health care system.
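A brute-force sketch of such a weekly beta-sequence search, built on a plain SEIR core: the cost heavily penalizes exceeding ICU capacity and mildly rewards relaxed measures. The starting state, critical-case fraction and ICU capacity (~1,900 beds, consistent with the factor-21 figure earlier) are illustrative assumptions, not our actual setup:

```python
from itertools import product

BETA_LEVELS = (0.26, 0.2, 0.1, 0.03)   # the four discrete levels defined earlier
ICU_BEDS = 1900.0                      # implied by the factor-21 figure in the text
CRIT_FRAC = 0.02                       # assumed fraction of infected in intensive care

def rollout(state, beta_seq, days_per_week=7, dt=0.1,
            N=11.5e6, sigma=1/5.2, gamma=1/7.0):
    """Simulate one beta value per week; return the peak number of critical cases."""
    S, E, I, R = state
    peak = CRIT_FRAC * I
    for beta in beta_seq:
        for _ in range(round(days_per_week / dt)):
            inflow = beta * S * I / N
            dS, dE = -inflow, inflow - sigma * E
            dI, dR = sigma * E - gamma * I, gamma * I
            S += dS * dt; E += dE * dt; I += dI * dt; R += dR * dt
            peak = max(peak, CRIT_FRAC * I)
    return peak

def optimal_sequence(state, horizon_weeks=4):
    best_cost, best_seq = float("inf"), None
    for seq in product(BETA_LEVELS, repeat=horizon_weeks):
        overflow = max(0.0, rollout(state, seq) - ICU_BEDS)
        # hard penalty on exceeding ICU capacity, mild reward for higher beta
        cost = 1e6 * overflow + sum(0.26 - b for b in seq)
        if cost < best_cost:
            best_cost, best_seq = cost, seq
    return best_seq

# Illustrative starting state: epidemic in progress, ICU load around 1,000 beds.
state = (11.4e6, 55_000.0, 50_000.0, 0.0)
best = optimal_sequence(state)
```

With four levels and four weekly decisions, the search space is only 4^4 = 256 sequences, which is why plain enumeration suffices here; longer horizons or finer time steps would call for smarter optimizers.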
The question then arises of how to translate this beta sequence into actual measures. We observe that we can relax the measures more, and for longer periods. Moreover, once the number of infected individuals after the first quarantine drops below a certain level, the South Korean approach of extensive screening becomes an alternative to quarantine. Jumping between the other levels can then be seen as a combination of restrictions and different screening intensities. This highlights a shortcoming of the current model, which lumps different types of action into a single parameter, whereas in reality they represent different approaches: the Chinese approach imposes a quarantine on the susceptibles, whereas the South Korean approach imposes it on infected individuals. The model would be stronger if it included more states, with both strategies expressed by different parameters. We will explore this route further, but for now we just wanted to introduce the concept of coupling an advanced controller to the current SEIR models and demonstrate its potential on top of using a prediction model alone.
As a matter of demonstration, we tested the impact of increasing the number of ICU beds. As can be observed, this does not severely alter the retrieved beta sequence. However, the left figure shows a slightly faster decrease of the susceptible fraction of the population, meaning that by steering towards maximum ICU occupation we move faster toward herd immunity. This approach may be infeasible, however, because it does not account for health care practitioners getting sick.
As a final note, we want to stress that the MBPC approach has the flexibility to include other criteria in the objective function. One could, for example, include the economic costs of certain beta values (i.e. sets of measures) and account for these in the optimization. This allows balancing health outcomes against economic cost, with weights to vary the priorities. Hence, such a controller is an elegant way to optimize the situation in a customized fashion, using the power of a predictive model and multi-criteria optimal control.
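Such a weighted multi-criteria cost could look like the sketch below. The weights, the ICU figure and the linear "economic burden" proxy (stricter measures cost more) are all assumptions for illustration:

```python
# Sketch of a weighted multi-criteria cost: the health term penalizes exceeding
# ICU capacity, the economic term grows with the strictness of the measures.
# Weights, ICU capacity and the linear economic proxy are assumptions.
def control_cost(peak_critical, beta_seq, icu_beds=1900.0,
                 w_health=1e6, w_economy=1.0):
    icu_overflow = max(0.0, peak_critical - icu_beds)
    economic_burden = sum(0.26 - b for b in beta_seq)  # stricter = costlier
    return w_health * icu_overflow + w_economy * economic_burden
```

Raising w_economy relative to w_health shifts the optimum toward lighter measures; the structure stays the same.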
Potential workflow combining all tools
The section above illustrates that we should not just use the fresh data we obtain every day to confirm or recalibrate a predictive model; on top of that, we can have an algorithm determine the best "current" measure and anticipate its dynamics. The following generic workflow could be adopted (schematically shown in the figure below):
- determine upper (onset with no measures) and lower (extreme measures such as quarantine or excessive testing) bound for beta
- define some intermediate beta values that coincide with certain measures
- define the threshold condition in terms of ICUs, which will serve as setpoint for the controller
- determine the optimal beta sequence over the control horizon, yielding the best prediction, i.e. the number of critical cases as close as possible to (but below) the ICU threshold; allow beta to change only on a limited time scale, say every several days (it is not feasible for a government to change measures on a daily basis)
- when new data comes in, calculate the error term, i.e. the difference between the observation and the model prediction made at the previous time step, and use this error to correct the next model prediction
- rerun the optimization and determine the next optimal sequence of beta and implement the first change in beta
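The steps above can be sketched end to end on a deliberately mismatched toy model: the controller plans with a = 0.9 while the "true" system evolves with a = 0.85, and the error term computed from each new observation corrects the next prediction, exactly as in the workflow. All values are illustrative:

```python
from itertools import product

# Controller's model uses a = 0.9; the "true" system uses a = 0.85. The bias
# estimated from each new observation corrects the next prediction.
MODEL_A, TRUE_A = 0.9, 0.85

def plan(x, bias, setpoint=5.0, levels=(0.0, 0.5, 1.0), horizon=3):
    best_cost, best_first = float("inf"), levels[0]
    for seq in product(levels, repeat=horizon):     # optimal sequence search
        xi, cost = x, 0.0
        for u in seq:
            xi = MODEL_A * xi + u + bias            # prediction with correction
            cost += (xi - setpoint) ** 2
        if cost < best_cost:
            best_cost, best_first = cost, seq[0]
    return best_first                               # implement only the first action

x, bias = 0.0, 0.0
for _ in range(40):
    u = plan(x, bias)                # rerun the optimization each step
    predicted = MODEL_A * x + u      # raw model prediction
    x = TRUE_A * x + u               # new measurement comes in
    bias = x - predicted             # error term for the next prediction
```

Despite the wrong model, the bias update keeps the closed loop tracking the setpoint, which is precisely the robustness the workflow is after.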
Note that the algorithm and methodology can be used for different countries and will not necessarily lead to the same measures. The controller will adapt to the local situation and change its strategy to obtain the optimal solution under a custom-chosen objective function. It can also adapt to boundary conditions, e.g. the number of ICU beds.