How algorithmic curation breaks our brain
Why the ‘lab leak hypothesis’ is just another one of those narratives too good to let go
If one thing is remarkable about a potential lab leak origin of SARS-CoV-2, it’s how readily it appears obvious to those who want to believe in it.
Intuitively, people find it odd that a supposedly ‘natural’ pandemic started in Wuhan, the same city that houses the Wuhan Institute of Virology, an alleged ‘epicenter’ of dangerous coronavirus research with potential ties to the Chinese military. Add to this a freak virus causing almost unprecedented global harm because of its genetic adaptations for efficient human-to-human transmission, the allegedly suspicious gain-of-function (GoF) research activities by virologists, and the Chinese authorities’ total stonewalling of any independent investigation into the origins. Putting two and two together on this one is truly not rocket science.
Even though any formal evidence of a lab leak remains elusive, and despite most scientific experts firmly putting the odds on a natural spillover, many became convinced that a lab leak started the pandemic.
Yet they would be wrong, at least in being that confident. For almost every recent contentious issue marked by substantial scientific uncertainty, from masks to vaccine safety to Ivermectin, collective sensemaking failed for huge parts of the population.
While the individual reasons for bad decision-making are complex, I believe there is a common denominator. They were all guided into a false certainty by a system of algorithmic disinformation we have yet to wrap our heads around.
It is a puzzle that starts with understanding our own neural algorithms.
Part 1: Pattern recognition and the flawed hardware of our brain
We humans are great at pattern recognition. The neural networks constituting our brain evolved to provide a coherent picture of the world to our minds by connecting the dots, even and especially when information is scarce and the need to understand is dire. While powerful, our inborn sensemaking algorithms are not always accurate.
‘Was it the wind that moved the tall grass, or a stalking lion? Does the stranger hide a weapon or something else in his hands?’
Those who waited for sufficient evidence to make the right call might not have had the same chances at procreation as their more superstitious peers. An argument can be made that for most of human history, missing a pattern could’ve cost your life. Seeing a pattern where there is none? Not so much.
As a consequence, the neural networks of our brain would’ve been smart to trade accuracy for speed, or to evolve a preference for false but actionable certainty over action-impairing uncertainty. Today, machine-learning-inclined people would call this evolved behavior of our neural networks ‘overfitting’: our brain’s sensemaking algorithms quickly come up with a solution explaining all the data available to us in order to create certainty. In general, our brains hate uncertainty; it is a cognitive strain that feels physiologically unpleasant, like a hand on a hot stove. Our intolerance of prolonged uncertainty is especially prevalent in times of heightened existential threat, from economic depressions to pandemics. When our survival feels at stake, we don’t ponder but need to (re)act.
In cognitive psychology, the term heuristics describes our brain’s preference to ‘overfit patterns’ and quickly create a state of ‘actionable certainty’. Applying heuristics is necessary to go through everyday life; we simply do not have the time or mental capacity to recalculate every routine action we take.
Yet applying fast heuristics to any and all decision-making is problematic, as it can lead to systematic thinking errors. For example, when we are exposed to unrepresentative (incomplete, biased, misleading) data, our inferences about reality will be flawed. If we lived in a very rural place and all the cars we ever saw passing happened to be red, we might infer that all cars in the world are red. Or, more relevantly, if we never met or knew anybody who got Covid, we might not believe it is even real.
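For the machine-learning inclined, here is a toy sketch of what ‘overfitting’ to scarce, unrepresentative data looks like. The numbers and the model are made up for illustration; the point is the shape of the failure, not the specifics:

```python
# Toy illustration of 'overfitting': a flexible model fit to a handful of
# noisy observations explains the data it has seen almost perfectly,
# then makes a confidently wrong prediction outside that narrow window.
import numpy as np

rng = np.random.default_rng(0)

# The underlying "reality" is a simple linear trend, but we only ever see
# five noisy samples from a small slice of it.
x_seen = np.linspace(0, 1, 5)
y_seen = 2.0 * x_seen + rng.normal(0, 0.25, size=5)

# A high-degree polynomial has enough freedom to pass through every point.
overfit = np.polynomial.Polynomial.fit(x_seen, y_seen, deg=4)
simple = np.polynomial.Polynomial.fit(x_seen, y_seen, deg=1)

x_new = 2.0  # a point outside the data we have seen
print("true value    ~", 2.0 * x_new)
print("overfit model :", round(overfit(x_new), 2))
print("simple model  :", round(simple(x_new), 2))
# The overfit curve matches the seen data almost exactly, yet its prediction
# far from that data can drift wildly: certainty bought at accuracy's expense.
```

Our intuitive pattern-matcher, I would argue, behaves much like the high-degree polynomial: it will always find a curve through whatever few points it has been given.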
Overall, the empirical data we have about human psychology paint quite a stark picture of our cognitive blind spots and shortfalls. I want to stress that there are no exceptions, only gradients. All human brains are similarly flawed, and ascribing at least some inborn biological and evolutionary roots to many of our common thinking mistakes is a well-accepted proposition in cognitive science. Despite all our obvious limitations, there are no grounds for despair. We are more than mere biological automata; we are also humans capable of reasoning our way into good decisions. While this is an intricate and complex topic, researchers like Kahneman and Tversky teased out which factors influence when we apply our fast and intuitive thinking to problems, and when we are capable of slower but analytical reasoning.
It boils down to how quickly we want, or are expected, to come up with an actionable solution to a problem. When the data context is unclear and confusing, when uncertainty is tolerated or even encouraged, and when our brain can handle carrying some cognitive load, our ‘intuitive’ autopilot kicks the problem at hand up to our analytical faculties.
This reasoning is important because it allows us to course-correct: to consider alternative explanations or missing data, to equip the autopilot of our mind with some new data points to consider, or even to swap faulty heuristics for more suitable ones. The goal is not to get rid of the autopilot; that is both impossible and counterproductive. Rather, we want to teach the autopilot of our intuition where it can go freely, and when it has to kick the problem up to the more unpleasant and slower deliberation.
These two systems, fast intuition and slow deliberation, worked quite well off each other for most of human history. Until the rise of the attention economy.
Part 2: Addictive technology and a new kind of drug
A lot has been written about the ills of social media of late. There are filter bubbles, narrative bubbles, tribes, polarization, segmentation, sometimes leading to witch hunts, cancel culture and broad conspiracy-mongering. All humans just being humans, as the Facebook PR department would have you believe. Anything but the algorithms and the business model.
Many scientists, privacy activists, journalists and whistleblowers share a different view: that the very algorithms deciding what we get to see cause much of the harm we face today, from online toxicity to increased political polarization to teenage suicide to conspiracies and even propaganda facilitating genocide.
Yet despite the above indications of a social media ‘infodemic’ being at the heart of many of these issues, a complete picture of the mechanisms has not yet emerged in the scientific literature. This is somewhat expected, because the problems are relatively recent, complex, nuanced and multi-faceted. Studying them would require access to data the companies are reluctant to give out, and scientific expertise ranging from behavioral psychology and neurobiology to computer science and many other disciplines. Good science takes time and manpower.
This scientific uncertainty has stopped lawmakers and platform companies from regulating social media better, but it has not stopped us from using these services excessively. While scientific evidence of concrete harm is only slowly forthcoming, I believe we’re long overdue for better heuristics when engaging with and thinking about these services.
The best model I currently have to explain many problematic facets of how social media shapes society is that of a targeted information drug.
Social media is addictive: it stimulates our dopamine system, the same neurochemical reward network that is responsible for addictions to gambling, smoking, opioids and alcohol.
Additionally, the high-tech social media companies employ thousands of our most brilliant engineers and scientists who try to figure out how to design a product that optimizes engagement: how to make you click and share so the company can learn to understand you, your personal choices and preferences, which can then be monetized by directing advertisements to you. Everything we see on these platforms, from the shapes and colors, the user interface and easy accessibility, the like buttons and the timing of notifications, to the content delivery algorithms (recommender systems and metrics), clickbait features and gamification strategies, is optimized for that singular purpose: keep the user on the platform. Former Google employee Tristan Harris puts it more succinctly: social media is designed to hack our brains and steal our attention.
It is beyond cynical to claim that these businesses just give us what we want, and that it is our fault when we want stuff that is bad for us. These platforms are designed to covertly make us addicted, without our knowledge or consent. They profit from us and our data as we keep coming back to their sites for more dopamine shots. We are like the rats that press the lever for self-stimulation thousands of times, only we do it from our phones. This alone should warrant at least some consumer protection framework like the ones we have for other drugs.
The unethical business model drives the addictive properties of this targeted information drug to become ever more potent; but the really neglected aspect of this drug is the ‘targeted information’ part.
Every second, the content algorithms try to figure out and optimize what information they can show us to keep us engaged and on the platform. They do not care whether the information they give us is useful or useless, true or false, helpful or harmful. Only whether we will click on it. And these recommender algorithms have become brilliant at it, like the engineers who designed them.
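To make this concrete, here is a deliberately simplified sketch of what ‘ranking for engagement only’ means. The item titles, numbers and function names are mine and purely hypothetical; no real platform’s code looks like this, but the shape of the objective is the point:

```python
# Hypothetical, minimal engagement-only ranker. Items are ordered purely by a
# predicted probability of being clicked; whether an item is accurate never
# enters the scoring function at all.
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    predicted_click_prob: float  # learned from past user behavior
    is_accurate: bool            # visible in this toy, invisible to the ranker

def rank_feed(items: list[Item], top_k: int = 3) -> list[Item]:
    """Return the top_k items by predicted engagement and nothing else."""
    return sorted(items, key=lambda it: it.predicted_click_prob, reverse=True)[:top_k]

feed = [
    Item("Measured explainer on coronavirus origins", 0.04, True),
    Item("SHOCKING leaked proof they lied to you!", 0.31, False),
    Item("Preprint: early epidemiology of the outbreak", 0.02, True),
    Item("Doctors HATE this one simple cure", 0.27, False),
]

for item in rank_feed(feed):
    print(f"{item.predicted_click_prob:.2f}  accurate={item.is_accurate}  {item.title}")
# Accuracy is a free variable here: the objective rewards whatever gets clicked.
```

Real recommender systems are vastly more sophisticated than this, but as long as the quantity being maximized is some flavor of engagement, accuracy stays outside the objective.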
However, and more importantly, information is unlike any other addictive substance which we might ingest in our life; it serves as the input data our neural networks use to infer patterns about our lived reality. We need representative information from our environment to understand our world. The more we live and interact online, the more we rely on information online to make sense of our world.
Having confidence in your brain’s pattern recognition capacity, I am sure you have already put the pieces together and see where I am going with this.
Systems dealing in targeted information drugs (i.e., social media) are prone to deliver unrepresentative data about reality as a side effect, thereby leading to widespread systematic thinking errors (bad heuristics) in all of us when we engage our ‘intuitive’ autopilots for sensemaking.
Furthermore, the quick pace of the technology, the constant drive for instant gratification through clicks and likes, and the rush of satisfaction from identity signaling are all psychologically more appealing than the heightened cognitive strain of analytical thinking. We are pushed to make quick calls, to judge vast amounts of novel information at breakneck speed so we can act decisively, click that share button or comment angrily; because only these incentivized actions have a chance of being rewarded with a precious dopamine kick when we get liked. Many will have noticed how much more susceptible we are to participating in social media outrage when tired, stressed or otherwise not in our best state of mind. Or how that first supportive comment on a tweet of ours feels exhilarating; better than the first cigarette after stepping off the airplane. Add a pandemic and other existential anxieties on top, and we have a perfect storm. A storm that makes it all the more difficult to sustain the capacity to tolerate the healthy uncertainty necessary for slower deliberation about whether our engagement is truly rational, productive and reflective of our values, or just a pile-on from a dopamine junkie looking for the next shot.
This is my mental model of what the targeted information drug does to our collective sensemaking. To sum it up in one sentence:
Social media exploits our biology to push us towards making intuitive calls on unrepresentative data as fast as we can and rewards us for doing so with addictive dopamine kicks.
This brings us to the last piece of the puzzle. Why exactly is the targeted information we engage with so unrepresentative of reality in the first place?
Part 3: Weaponisation of scientific uncertainty
We all have our moments of clarity, when our analytical reasoning course-corrects our ‘intuitive’ autopilot and gives us better heuristics to work with to understand an issue. I started this article with a reference to the popular ‘lab leak’ narrative around the origins of SARS-CoV-2, a topic somewhat inside my professional expertise. Intuitively, I thought it quite plausible. I’ve worked with mostly harmless lentiviruses in BSL-2 labs, and I’m not oblivious to how easily things might go wrong, from accidental self-stabbing while handling animals to improper disposal of infectious material. Mistakes happen, even bad ones. Reports about a potential lab leak seemed convincing: the geography of the outbreak next to a lab hosting similar pathogens, the lack of transparency from China, the genetic features of the virus hitherto unseen. I felt my insider knowledge of lab accidents gave me the edge to judge the odds better than most. For months, I remained in this state of false certainty, believing a lab leak origin was considerably more likely than not.
Only when a physicist friend of mine asked me to help him dive into the evidence for a podcast did I feel the need to engage my analytical reasoning. I read the publications and, having some genomics expertise, downloaded the viral genome to fact-check a few things myself. I tried to find every argument for and against a potential human-mediated origin of SARS-CoV-2 and assessed its strength based on the evidence. I played through different origin scenarios, from bioweapon research to directed evolution to serial passage to a bat sampling accident to the illicit wildlife trade. I talked to virologists and gauged their estimates, and I engaged with lab leak proponents to hear their strongest points.
All this deliberate effort led me to conclude that genuine scientific uncertainty does remain, but it lies mostly elsewhere than where most lab leak proponents online want you to believe it does.
It’s not about allegedly ‘secret’ GoF research programs, genetic engineering or bioweapons. China’s lack of transparency alone cannot serve as evidence of a lab leak cover-up. No ‘leaking’ of selective email communications or cherry-picking from FOIA requests concerning Fauci, Daszak or the NIH will bring scientific clarity to the origins discussion. Neither will sloppy cherry-picking from Chinese research papers yield anything tangible. The alleged conspiracy around the Lancet letter, wherein virologists supposedly decided not to investigate the pandemic’s origin, is a fantasy that doesn’t hold up to any scrutiny; most scientists didn’t even see the letter. Yet all of these meritless points are held up over and over again to keep a very specific lab leak narrative going on social media. The sum of it serves as material for outrage, cheap speculation fodder provided by motivated social media actors, grifters, profiteers and LARPers posing as independent investigators.
That is not to say all lab leak proponents should be dismissed as unserious. It is important to disentangle the social media noise from the respectable efforts of so-called ‘internet sleuths’ working on real questions like the early epidemiology, or of independent scientists probing the evidence for any clues of a potential lab leak. Plenty of those are around too, albeit not popular enough to have much impact on these platforms.
As with so many topics of scientific uncertainty, we have to concede that it is the grifters and influencers, the popular opinion ‘journalists’ and political commentators, who have taken over the narrative, arguably on both sides of the issue. The whole thing started with ‘establishment’ media claiming that any suggestion of a lab leak was a racist conspiracy theory, a reaction to the Trump administration pushing the ‘China virus’ narrative as part of its effort to shift blame from its failed handling of the pandemic. The rest went predictably badly from there.
It is a tough field to navigate, but I am now happy with my heuristics about this particular topic, and I will have a lot more to say about it. (added 08.04.22)
However, I believe this example of narrative capture showcases a core feature of our current conundrum. Maybe the last critical puzzle piece: the power of a malicious human element.
Part 4: The technological incentivization of grifting behaviors online
It goes somewhat like this: once any topic has garnered a lot of attention on social media, a combination of contrarian grifters, political actors and profiteering influencers, supported by targeted information algorithms, almost automatically creates a polarizing counter-narrative; a wedge to segregate opinions and groupthink into two warring factions, usually separated along lowest-common-denominator lines like ‘left/right’. After all, emotional outrage, fights with the outgroup and opportunities for identity signaling are tried-and-proven recipes for the very engaging content that is rewarded by the algorithms.
On top of that, the current social media system has financially incentivized grifting on an unprecedented scale. Offline grifting is hard, because you’d have to extract money from your acolytes directly and they might sour on your extractive behavior eventually. Online grifting is a different ballgame. You just pretend to be a clown, guru or victim for a popular cause. Whatever. Your goal is to entertain. You aim to steal your audience’s attention for as long as you can, while the social media companies extract the monetary value for you. Right from the data of your followers. The companies then pay you your share of the profits. It was never easier to capture an audience (or be captured by it).
In the end, the ranking algorithms contributing to our targeted information drug might be smart, but not smart enough to avoid being gamed by humans. The few thousand engineers at those social media companies are brilliant at what they do, but they can’t compete with the ingenuity, time, craftiness or charisma of the billions using their services. It is our shared human ingenuity that creates the engaging content we can’t help but click on once the algorithm shows it to us, for good or bad. Algorithms are always efficient facilitators in the end, but it’s the innate charisma of the snake-oil salesmen, influencers and online gurus that captures people’s hearts and drives them further down into filter bubbles. We can’t stop these malicious grifters while maintaining the attention economy system as is.
On the engineering side, this problem is almost intangible. No matter what feature selection and hyperparameters the engineers use to train the deep neural network of their content ranking algorithms, as long as they optimize for user engagement on the platforms, sensationalist fake news, outrageous lies and clickbait will have a leg up over measured, nuanced, accurate and truthful content. Popularity on social media, in general, is a horrible proxy for truth and I would not be surprised to find out there is an inverse relationship between the two.
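To see why I suspect this is a property of the objective rather than a tuning problem, here is a back-of-the-envelope simulation. The engagement model in it is my assumption, stated explicitly in the code, not measured data:

```python
# Toy simulation: if engagement is assumed to be driven mostly by how
# sensational an item is and only marginally by how accurate it is, then an
# engagement-optimized feed amplifies outrage while popularity says almost
# nothing about truth.
import numpy as np

rng = np.random.default_rng(1)
n_items = 10_000

accuracy = rng.uniform(0, 1, n_items)  # how truthful an item is
outrage = rng.uniform(0, 1, n_items)   # how sensational it is

# The assumed engagement model: mostly outrage, a sliver of accuracy, noise.
engagement = 0.9 * outrage + 0.1 * accuracy + rng.normal(0, 0.05, n_items)

# An engagement-optimizing feed surfaces only the top slice of items.
top = np.argsort(engagement)[-1000:]

print("corr(engagement, accuracy):", round(np.corrcoef(engagement, accuracy)[0, 1], 2))
print("mean outrage   (all / feed):", round(outrage.mean(), 2), "/", round(outrage[top].mean(), 2))
print("mean accuracy  (all / feed):", round(accuracy.mean(), 2), "/", round(accuracy[top].mean(), 2))
# Under these assumptions the feed is saturated with high-outrage items, and
# how popular something gets is a very weak signal of whether it is true.
```

Change the assumed weights and the picture changes with them; the sketch only shows that whatever small weight truth carries in the objective is the weight it gets in the feed.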
Today, every popular area of scientific uncertainty (and even some of scientific certainty) has been seized upon by online grifters; their self-serving lies have been essential in driving us apart. Not because their lies are so brilliant or convincing, but because both grifters and content ranking algorithms learned that engagement is high when there are two warring factions fighting each other with little room in between.
This combination is what ultimately breaks our collective reasoning skills and distorts our perception of reality.
Think about every aspect of the pandemic, the one topic that by its nature demanded all of our attention. What ‘popular’ aspect of Covid does not have a polarizing wedge driven through society? Does it even exist? Do masks work? Did humans create the virus? Do lockdowns help? Should we open schools? Can you trust your health institutions? What about Ivermectin instead of vaccines? Are vaccines even safe and effective?
The merchants of doubt have abused a temporary gap in scientific knowledge, a legitimate uncertainty, in the most cynical and uncharitable way possible. They build up salient identity narratives around these issues, even personality cults. They are motivated by greed and empowered by targeting algorithms that deliver susceptible people to whichever ratcatchers steal our attention best. Their salient narratives are psychologically attractive because what they offer their followers is much-craved certainty. They make clear who the good and the bad guys are, and why what their followers’ intuition tells them to be true is actually true. They ease social anxieties by providing a fiercely loyal ingroup and an almost hermetic echo chamber one can shelter in.
If experience is any indication, there is no coming back from that. Even long after some of our gaps in knowledge were closed by science, when reasonable studies on masks came out, when IVM was shown to have no magic efficacy against Covid, even after the irrefutable vaccine safety data came in, the contrarian narratives remained intact.
This is not a bug, but a feature of the system. A design choice. It is also deeply problematic for the future. Every major issue we will face as a global civilization will run exactly the same course if we maintain the current system.
A system where addictive software directs us toward the most capable human grifters; grifters who are financially incentivized and algorithmically empowered to provide the psychologically most salient outrage narratives with zero regard for evidence, accuracy or truth. The reliably distorted perception of reality then prompts us to make systematic thinking errors, which ultimately lead to bad collective decision-making.
This is my new heuristic when thinking about the future of society and social media.
Conclusion
At this point, it seems entirely predictable to me that anything worth our attention will segregate people into polarizing, bifurcated outrage narratives. For example, the discovery of the Omicron variant is barely a week old and scientists haven’t yet wrapped their heads around it, but online grifters are already asking: Was it released intentionally? Maybe it “also” escaped from a lab? I give it a few more weeks before a sizeable portion of people will be convinced of it, because by then a whole cottage industry will have spawned online, supplying outrage articles for easy consumption on Substack and YouTube, despite the lack of any supportive evidence.
It can at times feel hopeless.
Our intolerance of uncertainty will continue to be weaponized by grifters. Our cognitive urge to end uncertainty and to judge or act quickly (albeit mostly performatively, through clicks, comments and likes) is counterproductive.
Finding actionable solutions to the uncertain problems facing public health, climate, economics, politics and society at large cannot work under these conditions. How much worse will we allow things to get before we realize the attention economy is not working for us? Nobody is on team virus, yet our collective behavior of pre-programmed disagreement and conflict for engagement’s sake has subverted public health measures and coordination. We are better than this, or at least we ought to be.
None of our problems are insurmountable if we manage to cool the temperature of this particular psychological experiment we find ourselves in.
We need to give ourselves more room for slower deliberations. We need to reclaim our attention and use it towards fixing the broken information systems around us. It wouldn’t be the first time our human ingenuity gets us out of a bind when the situation is dire.
That is at least my hope, with about medium certainty.