The Story of COVID-19 in Sweden


I

When Sweden announced its laissez-faire approach to COVID-19, experts and politicians around the world were in disbelief. Without a lockdown and mandatory masks, this could only end in disaster. The famous model used to justify the lockdown in the UK and many other countries predicted 100,000 deaths in Sweden by June if it did not follow all the other countries and institute strict measures. It is now November, and at the time of writing, 6,972 people have died in Sweden from COVID-19. So what happened?

II

Let’s take a step back and look at the famous model that was the basis for many of the government measures back in March, as well as for the 100,000-death prediction: the Imperial Model. It was developed by Neil Ferguson and his team at Imperial College London.

The involvement of Neil Ferguson should already have raised doubts about the validity of the model, because his track record, and there is no other way to say this, is terrible. If you think it cannot be that bad, you are in for a surprise.

We will do what Business Insider promised back in April:

Here's what we know about “Professor Lockdown” and the gold standard in science that is Imperial College.

Of course, they did not mention his past predictions, and the “gold standard” turned out to be fake gold, but let's not get ahead of ourselves. His past predictions:

  • In 2002, Ferguson predicted that up to 150,000 people could die from exposure to BSE (mad cow disease) in the U.K. There were only 177 deaths.
  • In 2005, Ferguson predicted that the bird flu could kill up to 150 million people. 282 people died worldwide.
  • In 2009, Ferguson predicted as a “reasonable worst-case scenario” that the swine flu would kill 65,000 people in Britain. It killed 457.

I think that is enough about our “Professor Lockdown.” We want to focus more on the model and “the gold standard in science that is Imperial College.”

III

One reason why nobody doubted the model was that it was secret—always a smart move. When on March 16, 2020, Neil and his team published their paper, they did not publish their code. Yes, you read that correctly. Their paper made several policy recommendations based on predictions of a model they did not publish. So there was no way of knowing if their claims were true or false. They were simply baseless claims without any evidence.

To me, this destroys any trustworthiness Neil Ferguson might have had left. One hallmark of science is independent verifiability. Without others being able to verify the claims and run the model themselves, it is not science; it is pseudo-science with a strong appeal to authority.

On April 27, over one month after the paper in question was published, they finally released their code on GitHub. However, it turns out that they did not release the original code used to generate the predictions in their paper. The released version had been edited by software engineers from GitHub to make it acceptable:

As you can read in this tweet, the original code was “a single 15k line C file that had been worked on for a decade.” Everyone with a little bit of knowledge about software engineering should be shocked. A single 15,000-line C file is beyond bad practice.

But even GitHub’s engineers could not salvage the code. The released version is still riddled with bugs and random equations that literally nobody can explain. Moreover, even if the released code were exactly the code used in the paper, no one could have replicated the results because the input parameters used in the paper were not published. As the GitHub page reads:

IMPORTANT: The parameter files are provided as a sample only and do not necessarily reflect runs used in published papers.

At this point, we are long past science, with a small s, and far into the world of Science, with a capital S. The latter is the world where the term “Believe the Science” makes sense. While science (small s) is the process of doubting everything, disregarding authority, and searching for the truth, Science (capital S) is where the truth is determined by fiat, and your job is to believe, not question it. But I digress.

IV

After several issues around the non-deterministic behavior of the model were brought up on GitHub, the team responded with an interesting answer:

We are aware of some small non-determinisms when using multiple threads to set up the network of people and places. (Look for the omp critical pragmas in the code). This has historically been considered acceptable because of the general stochastic nature of the model.

Non-deterministic means that given the same input, you do not always get the same output. (Non-deterministic behavior is not necessarily bad; only in cases like this, where it cannot be explained.) Their answer to this problem was that it does not matter because the model is “stochastic” anyway, which means it deliberately includes randomness, so they run it multiple times and average over all the outcomes.

Every time a new issue around non-determinism came up or a different bug was discovered, the team’s answer was the same:

This isn’t a problem running the model in full as it is stochastic anyway.

But is this really true? You do not have to worry about bugs because “it is stochastic anyway”? The answer is, obviously, no.

To understand this stochastic magic better, let’s take a look at an example: baking a cake. If we weigh flour, we might weigh it several times and then average over all these measurements (only if we are nerds and want to follow the recipe particularly closely, of course). But why does this give us a more accurate result? The answer is that if, for example, we make an error when reading from the scale (imagine an old analog scale), then an error in one direction (more) is as likely as an error in the other direction (less). In other words, if the flour weighs 100 grams, you are just as likely to mistake it for 102 g the first time as for 98 g the next. Averaging only works if our mistakes go sometimes in one direction and sometimes in the other. Otherwise, it does not. If your scale is broken and always shows five grams too much, averaging does not help you.
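
Here is a minimal sketch in Python (with made-up numbers, purely to illustrate the point) of why averaging helps against unbiased errors but not against a broken scale:

    import random

    random.seed(0)                 # seeded so this sketch is reproducible
    true_weight = 100.0            # grams of flour actually on the scale

    # Honest reading errors: sometimes too high, sometimes too low.
    unbiased = [true_weight + random.gauss(0, 2) for _ in range(10_000)]

    # Broken scale: always five grams too much, plus the same reading error.
    biased = [true_weight + 5 + random.gauss(0, 2) for _ in range(10_000)]

    print(sum(unbiased) / len(unbiased))   # close to 100: the errors average out
    print(sum(biased) / len(biased))       # close to 105: the bias never averages out

The average only converges to the true weight when the errors are symmetric around it.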

The same is true for bugs. If, and only if, we could be sure that the bugs distort the output sometimes upwards and sometimes downwards (preferably with equal probability and magnitude), we could solve the problem by running the model “stochastically.” However, this is not the case because, by definition, we do not know how bugs affect the output; otherwise we would understand them and probably be able to fix them. The point is that nobody, including the “scientists” who produced this model, can know how those bugs distort the results.

Do not get me wrong, stochastic models are not necessarily like this. Most of the time, if the model is not too complicated (we will get to this point later), small changes in the input create small changes in the output, and the randomness in the output stems from intentionally included pseudo-randomness. However, this model is not non-deterministic in the predictable (explainable) mathematical sense. It is non-deterministic in the angry-toddler-in-a-toy-store sense: nobody knows what is going to happen, and there is no way to replicate it. Put differently:

It has nondeterministic outputs that do not follow from seeded pseudorandomness but are rather an inexplicable part of the process. I am not using “inexplicable” rhetorically here: nobody can explain this. This is one of the great issues in Complexity Science. Clearly there is a stark mathematical difference between deterministic and non-deterministic. But there is also a fuzzy, and arguably more important, difference between non-deterministic and really, really, really NON-DETERMINISTIC.

Unfortunately, the Imperial Model falls into the latter category.
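
To make the distinction concrete, here is a small Python sketch of my own (not the Imperial code): seeded pseudo-randomness is perfectly reproducible, while a result that depends on the order in which parallel threads happen to combine their partial sums is not, because floating-point addition is not associative.

    import random

    # Seeded pseudo-randomness: "random", yet identical on every run.
    def seeded_run():
        rng = random.Random(42)
        return sum(rng.random() for _ in range(1000))

    print(seeded_run() == seeded_run())          # True: fully reproducible

    # Floating-point sums depend on the order of the additions. In a
    # multi-threaded reduction, that order depends on thread scheduling,
    # so the "same" computation can return different results run to run.
    print(0.1 + 0.2 + 0.3 == 0.3 + 0.2 + 0.1)    # False

This is only an illustration of the mechanism; the model’s actual non-determinism comes from its multi-threaded setup code, as the team’s answer above describes.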

V

Alright, a summary of what we have learned so far:

  • The code was not released with the paper.
  • The released code is not the code used in the paper.
  • The original code was a single 15,000-line C file.
  • The model is non-deterministic, bordering on chaotic.
  • The parameters released are not the parameters used in the paper.
  • The code is riddled with bugs.

An impressive list for a “scientific” model that informed government decisions on life and death.

VI

But what about the model without the code? Sure, the code that implements the model is awful, but maybe they have figured out a great way to model pandemics and just need better software engineering to make it work. Sadly, this is not the case.

Do you remember the parameters that were not provided, so nobody could replicate the results of the paper? It turns out that the model has 450 input parameters. Let me say it again: the model depends on 450 (!!!) parameters. Nobody can understand a model with this many parameters. And it gets even worse because, as it turns out, complex systems like pandemics have many interdependencies, which complicates everything. As described by Allen here:

Now I should be clear that, maybe, the virus is that complicated. But it doesn’t matter. Because we can’t possibly understand this. And actually I lowballed it by a factor of 450 (Oh, God …). Because if this system is linear, which it surely is not, what this really means is that a single set of parameters can be represented as a 450-dimensional column vector acting on a 450x450 matrix with 450² = 202k independent numbers. Because, remember, the parameters can be anything. Katya suspects they are totally made up. So it’s not just the dimensions we need to account for. The surface of the earth has 2 dimensions but more than 2 locations. Assuming every entry in this matrix has only two possible states, which it surely does not, this model maps a system with at least 2^202k bits of information we need to be clear on to understand it. And probably some recursive exponent of this given how absurd the code itself is and the fact the system is almost certainly not linear.

Even without fully understanding the math, you can tell that 450 parameters are way too many. Nobody can understand a model with this many parameters, which makes it even more difficult to code and debug. A tweet by Ole Peters summarizes the problems with the model very well.
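
For a rough sense of scale, here is the arithmetic from the quoted passage as a quick Python sketch (my own back-of-the-envelope, using the quote’s simplifying assumptions):

    import math

    parameters = 450
    interactions = parameters ** 2                     # 202,500 pairwise entries
    print(interactions)                                # 202500

    # If every entry could take only two values, the number of possible
    # configurations would be 2**202500, a number with roughly 61,000 digits.
    digits = math.floor(interactions * math.log10(2)) + 1
    print(digits)                                      # 60959

Whatever the exact figure, the point stands: this is far beyond what anyone can reason about, let alone debug.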

VII

We now know that the model predicting 100,000 deaths is laughable at best, but we have not explained what actually happened in Sweden. More specifically, we want to explain these two graphs and the difference between them.

Picture 1: Deaths
Picture 2: Cases

The “Daily Deaths” graph suggests a clear explanation: Sweden developed herd immunity. There are, however, two obvious problems with this explanation: (1) why are the deaths increasing again in November, and (2) does herd immunity not require at least 60% of the population to be immune? Where are all those deaths that were supposed to happen while COVID “burned through the population”?

The latter problem is also the reason why other countries did not follow Sweden’s approach. Everyone thought it would take too many newly infected people—i.e. too many deaths—to reach herd immunity. (Even though Sweden’s death count is higher than some of its neighbors, it is well below any initially predicted figure.)

The primary reason for this seems to be T-cell immunity (or T-cell reactivity). This is a well-known phenomenon where the immune system is already prepared to fight off a novel virus because it has previously fought off a different but similar virus (in this case, most likely other, previously circulating coronaviruses, such as common-cold coronaviruses). It does not necessarily mean that such a person is immune to the new virus, but they might at least experience milder symptoms.

The question then becomes: how many people have T-cell immunity against SARS-CoV-2? As it turns out, this number seems to be higher than expected. For example, this paper, published on September 17, finds the following:

At least six studies have reported T cell reactivity against SARS-CoV-2 in 20% to 50% of people with no known exposure to the virus.

If up to 50% of people already have T-cell reactivity from fighting off similar viruses, reaching herd immunity seems far less difficult. As the paper puts it:

With public health responses around the world predicated on the assumption that the virus entered the human population with no pre-existing immunity before the pandemic, serosurvey data are leading many to conclude that the virus has, as Mike Ryan, WHO’s head of emergencies, put it, “a long way to burn.”
Yet a stream of studies that have documented SARS-CoV-2 reactive T cells in people without exposure to the virus are raising questions about just how new the pandemic virus really is, with many implications.
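
As a purely illustrative calculation, and only under the strong assumption that cross-reactive T cells actually translate into protection (which the paper does not claim), pre-existing reactivity would shrink the share of the population that still has to become immune through infection:

    # Hypothetical back-of-the-envelope; assumes T-cell reactivity counts as immunity.
    classic_threshold = 0.60                 # the "at least 60%" figure from above

    for pre_existing in (0.20, 0.35, 0.50):
        remaining = max(0.0, classic_threshold - pre_existing)
        print(f"{pre_existing:.0%} pre-existing reactivity: "
              f"{remaining:.0%} of the population still needs to become immune")

Under these (generous) assumptions, the remaining distance to the threshold shrinks from 60% to somewhere between 10% and 40%.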

A second part of the explanation for why Sweden has seen fewer deaths than anyone expected is simply the lower-than-expected case fatality rate (CFR) of COVID-19. Back in March, the WHO estimated the CFR at 3.4%, which seems far too high given the newer data about the virus. As this paper, titled The many estimates of the COVID-19 case fatality rate, describes:

A unique situation has arisen for quite an accurate estimate of the CFR of COVID-19. Among individuals onboard the Diamond Princess cruise ship, data on the denominator are fairly robust. The outbreak of COVID-19 led passengers to be quarantined between Jan 20 and Feb 29, 2020. This scenario provided a population living in a defined territory without most other confounders, such as imported cases, defaulters of screening, or lack of testing capability. 3711 passengers and crew were onboard, of whom 705 became sick and tested positive for COVID-19 and seven died, giving a CFR of 0·99%. If the passengers onboard were generally of an older age, the CFR in a healthy, younger population could be lower.

My guess is that the CFR is likely even lower, but we will have to wait and see. There are most likely other factors at play here—the lower population density, which we will discuss later—that led to a lower death count, but I have not seen any explanation other than herd immunity for why the deaths in Sweden declined to close to zero without a lockdown.

(Factors like temperature, humidity, behavior, and UV radiation only make it easier to reach herd immunity. However, they are by themselves not sufficient to explain the decrease in deaths. A little more on that later.)

VIII

The other problem—problem (1)—with the herd immunity explanation is the increase in deaths that is happening right now (November) in Sweden. If they have already reached herd immunity (as I am suggesting), how can they have a new outbreak? To understand this increase in daily deaths, and why it occurred now and not in, say, August, we have to understand herd immunity better.

Herd immunity is reached when so many people are immune to the virus that even if a new person gets infected, the likelihood of this person infecting someone who lacks immunity is low, thereby breaking the chain of infections and stopping the spread of the disease.

Therefore, what percentage of a population must be immune to achieve herd immunity depends on the virus (and on other environmental factors); more precisely, it depends on the R0 of the virus. If the R0—the average number of people one infectious person infects—is low, the likelihood of a newly infected person infecting someone who is not immune is low as well. However, if R0 is high—i.e., the virus spreads faster and more easily—many more immune people are needed to stop the spread of the virus.
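
In the simplest textbook picture of a well-mixed population (a strong simplification, but good enough to see the logic), the herd immunity threshold is 1 − 1/R0. A quick sketch:

    # Classic well-mixed approximation: herd immunity threshold = 1 - 1/R0.
    # Real populations are not well mixed, so these are ballpark figures only.
    for r0 in (1.5, 2.0, 2.5, 3.0):
        threshold = 1 - 1 / r0
        print(f"R0 = {r0}: roughly {threshold:.0%} of the population must be immune")

The often-quoted “at least 60%” figure from above corresponds to an R0 of about 2.5.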

Let’s illustrate this point with an example. In places with lower population density, like Sweden, fewer immune people are needed to achieve herd immunity because the average number of people a newly infected person infects—the R0—is low to begin with, compared to, say, New York. There is simply nobody around to infect. This is obvious if you consider how many people you could infect by going to the supermarket in New York, compared to going to the supermarket in some rural area of Sweden.

So far, so good, but how does the fact that herd immunity depends on the R0 of the virus explain the increase in daily deaths? Sweden is still the same country (with the same population density), and COVID-19 is still the same virus, so what changed? The temperature.

In winter everyone meets indoors, which results in a higher R0 for any virus. (This is also part of the explanation for why the flu and the common cold—the name is very telling—infect more people in winter.) Because of the increased R0, as discussed above, the herd immunity threshold goes up. In other words, you need more immune people to achieve herd immunity in winter than in summer. This explains why we have seen a small surge in deaths, even though Sweden had developed herd immunity back in summer.
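
Using the same 1 − 1/R0 sketch as before, with made-up seasonal values rather than real Swedish estimates, you can see how a population can be above the summer threshold but below the winter one:

    # Illustrative seasonal R0 values, not estimates for Sweden.
    immune_fraction = 0.40                   # hypothetical share already immune

    for season, r0 in (("summer", 1.5), ("winter", 2.5)):
        threshold = 1 - 1 / r0
        status = "above" if immune_fraction > threshold else "below"
        print(f"{season}: threshold {threshold:.0%}, the population is {status} it")

Being below the winter threshold does not mean starting from zero; it just means some renewed spread until the higher threshold is reached, which fits a bump in deaths rather than a repeat of spring.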

IX

So what is going to happen in Sweden going forward? Nothing. There will be no wave of mass deaths. They have reached “winter herd immunity” and will be completely fine. The high number of cases is nothing to worry about, since they do not seem to translate into deaths. The astronomical number of cases is probably a consequence of more testing and the high false-positive rate of the PCR test. The false-positive rate most likely has something to do with the cycle threshold, as explained in this article:

The Covid-19 polymerase-chain-reaction (PCR) test run with a cycle threshold of 40 returns as positive also cases of patients only having a small number of viral fragments in the sample. This produces an overrated number of those who are considered infected. It is suggested to always include the cycle number for positivity in the test result, as well as to lower the cycle threshold to 30-35 for more appropriate detection of those contagious.

Keep in mind that a false positive rate of, say, 0.7% is high if the prevalence of the disease in the population is low. This is a simple mathematical point, but it is unfortunately often missed by many doctors and other non-mathematicians. (If you do not know what I am talking about and feel like 0.7% is low, I recommend learning about the Base rate fallacy.) Whatever the reason for the high number of cases may be, I predict no significant increase in daily deaths for Sweden in the coming months.
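
To make the base-rate point concrete, here is a small sketch with made-up but plausible numbers (a 0.7% false-positive rate, 95% sensitivity, and 0.5% of the tested population actually infected):

    # Hypothetical numbers for illustration only.
    population  = 100_000
    prevalence  = 0.005      # 0.5% of those tested are actually infected
    sensitivity = 0.95       # share of infected people who test positive
    false_pos   = 0.007      # 0.7% of healthy people test positive anyway

    infected = population * prevalence
    healthy  = population - infected

    true_positives  = infected * sensitivity       # 475
    false_positives = healthy * false_pos          # about 697

    ppv = true_positives / (true_positives + false_positives)
    print(f"Share of positive tests that are real infections: {ppv:.0%}")   # about 41%

In this scenario, most positive results are false positives, even though a 0.7% false-positive rate sounds tiny.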

X

Lastly, I want to discuss parts of another essay on the same topic called The Riddle of Sweden’s COVID-19 Numbers. (It was published on Alvaro de Menard’s great blog Fantastic Anachronism; I highly recommend his post When the Worst Man in the World Writes a Masterpiece.)

He does a great job analyzing the available data and also recognizes the huge discrepancy between deaths and cases in Sweden; but, unfortunately, he finds no real explanation for it. The explanation is, as I have argued, simple: herd immunity.

His explanation (which is not an explanation at all) is age:

I have found data from July 31 on the internet archive; comparing it to the latest figures, it appears that old people have managed to avoid getting covid in Sweden! Here's the chart showing total case counts:

So old people just do not get infected—problem solved. But wait, why do they not get infected? I, for my part, do not consider this a satisfactory explanation, but there is another problem. If you look at the current number of cases (blue) in old people (60+), you see that they have increased compared to July (orange). This makes sense because the overall cases have increased by a lot:

So in absolute numbers, there are now more cases in the 60+ range than back in July. The obvious question is: why are these people not dying?

If we have more cases in every age group, how can we have fewer deaths? As he notes himself, better treatment does not explain this:

Mortality has declined everywhere, and part of that is probably down to improved treatment. But I don't see Sweden doing anything unique which could explain the wild discrepancy. Again I'm left confused about these cross-country differences. If you have any good theories I would love to hear them. Looks like age is the answer.

Age cannot answer this question. It is also not a satisfying explanation for why only Sweden has this huge discrepancy between cases and deaths. For example, in Austria, where I live, we do not see this weird difference between cases and deaths, because we have not reached herd immunity.

XI

The herd immunity explanation clears up another comment from the essay:

I think the right way to look at this is to say that Sweden has underperformed given its cultural advantages. The differences between Italian-, French-, and German-speaking cantons in Switzerland suggest a large role for cultural factors. Sweden should've followed a trajectory similar to its neighbors rather than one similar to Central/Southern Europe. Of course it's hard to say how things will play out in the long run.

One reason Sweden has seen more deaths than other countries is simply that other countries still have some way to go towards herd immunity. Lockdowns and other measures slowed down the spread in those countries, which means that at any given point in time, they will have fewer deaths than a country that has let the virus “burn through the population.” (Until they themselves reach herd immunity, of course.)

Note that this does not necessarily mean that I endorse Sweden’s approach or that you cannot reach herd immunity with fewer deaths than Sweden (e.g., they did not do a great job of protecting old people). I am simply trying to explain why Sweden seems to have “underperformed.”

Thanks to the recently announced vaccine, other countries might reach herd immunity with far fewer deaths than Sweden, which would be great. I hope the vaccine is as good as the first tests suggest, so we can put all of this behind us.

Thank you for reading my essay.
