This chapter will look at various emergent macro-behaviors resulting from our collective actions online. It will highlight how social media platforms and the attention economy have created exploitable vulnerabilities, and why an epistemic crisis interferes with multiple complex functions democracies have to perform to remain viable.
Democracy might not survive the newly created asymmetries of power of our information age
But before we get there, let’s have a quick look at how social media really works.
A) The ‘rich getting richer’ power law of attention
When the winners take all, democracies lose
There is no middle class in the creator economy.
Despite much talk about the ‘democratization’ of sharing information, none of the big social media platforms is democratic (notwithstanding some encouraging developments in Taiwan). Behind the scenes, they are all profit-driven companies with strong hierarchical structures, fulfilling the function of any late-stage capitalist company: shareholder profits above everything else.
But even on the platforms, within the user base, there is no democratic equality. Platform companies have been secretive about the distribution of payouts and other metrics to measure individual user influence on these platforms, but that will not stop us from making some inferences based on proxy measurements we do have access to.
A leak of creator payouts from the video game streaming platform Twitch revealed how thin the air in the attention economy truly is. A mere 10,000 of its 8 million streamers earned more than $10,000 per year. That is ~0.12% of all streamers (not viewers!), and even that amount is far from a livable salary. At a livable salary of around $40,000 per year, the number drops to about 2,500 streamers (0.03%). As others have noted, there is no middle class in the creator economy. The winners take all. Once the upper echelons of attention are reached, money starts flowing, following a power law: 81 streamers earned more than $500k, and 25 streamers more than one million dollars per year, with the top earners coming in at close to $5 million per year. And that is just from Twitch, because influential streamers of this size have countless other avenues to make real money. The platforms also rely on and cater to their influencers, paying huge sums to retain them exclusively and throttle competitors.
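For those who like the arithmetic spelled out, here is a quick back-of-the-envelope Python sketch using the approximate figures above (rounded numbers from the leak coverage, not exact leak data):

```python
# Back-of-the-envelope math on the leaked Twitch payout figures cited above.
# All numbers are approximations from the text, not exact leak data.

total_streamers = 8_000_000

tiers = {
    "> $10k/year": 10_000,
    "> $40k/year (livable)": 2_500,
    "> $500k/year": 81,
    "> $1M/year": 25,
}

for label, count in tiers.items():
    share = count / total_streamers * 100
    print(f"{label:<22} {count:>6} streamers ({share:.4f}% of all streamers)")
```

Each jump up the income ladder wipes out roughly an order of magnitude of participants, which is exactly what a power-law distribution looks like.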
So the baseline of any online social media platform is this: Only a minuscule fraction of users dominates, be it in audience size, revenue, exclusive deals, or even just getting their voices heard, whereas the rest are drowned out as background noise. This uneven distribution of reach versus followers is roughly the same on all platforms. The “getting voices heard” part is especially salient for social media platforms like Twitter or Facebook, those poor substitutes for a public square to discuss political and societal issues. A minority of influencers asymmetrically dominates what topics get talked about in wider society, and how they get talked about. Often, these influencers even shape what is acceptable to discuss on these platforms (the Overton window comes to mind). Spreading anti-vax idiocy and other harmful medical misinformation during a pandemic, even antisemitism and ‘great replacement’ conspiratorial garbage, have all become ‘acceptable’ (although often veiled) in public discourse because influencers shamelessly mainstream them for engagement.
Whatever you might want to call this system, aristocratic, capitalistic, or even meritocratic, it certainly is not democratic.
Democracy presupposes some basic equality of influence, whereas these systems are designed to produce, facilitate and maintain an inequality of influence. The rich get richer and the winners take all.
In functional capitalist societies, we at least have regulations and taxation to rein in financial power (however weak they are today is another topic), but on social media, there is no “attention tax” that redistributes eyeballs, nor are there strong (or any) regulations about what can and cannot be done to capture attention.
Accurate, contextual and reliable information in our communication systems is, in the wider sense, a public good, whereas lies, misinformation and nonsense can be seen as pollutants to our shared info sphere.
Even in a (somewhat functional) capitalist system, if you pollute a river to increase your income, regulators are going to slam and punish you because you endangered a public good for personal gain. However, if you pollute the info sphere by, e.g., lying about vaccine safety during a pandemic, odds are nothing is going to happen to you, even (or maybe especially) if you hold the biggest megaphone in the world.
Even worse, the fact that influencers deal in information products means that they inadvertently shape human perceptions of reality. If you pollute a river, people can at least agree that it was bad and that you need to be held accountable. If influencers lie, most of their followers will be convinced they were actually telling the truth, because of an array of parasocial, tribal, and psychological phenomena influencing belief formation that I really don’t have the space to go into. Take the false consensus effect:
To illustrate, a recent study by Leviston et al (2013) on people’s attitudes about climate change showed that only a small minority of people, between 5% and 7% in their Australian sample, denied that climate change was happening. However, those minority respondents thought that their opinion was shared by between 43% and 49% of the population. The massive discrepancy between actual and conjectured prevalence of an opinion (around 40% in this case) is known as the false consensus effect (Krueger and Zeiger 1993). When people believe that their opinion is widely shared, they are particularly resistant to belief revision (Leviston et al 2013), are less likely to compromise, and more likely to insist that their own views prevail (Miller 1993). The fact that any opinion, no matter how absurd, will be shared by at least some of the more than one billion Facebook users worldwide (Synnott et al 2017), creates an opportunity for the emergence of a false consensus effect around any fringe opinion because the social signal is distorted by global inter-connectivity. — Wiesner K., European Journal of Physics, 2019
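To make the quote’s last point tangible, here is a tiny numerical sketch, assuming a platform of one billion users and the ~5% denial rate from the Leviston et al. sample (both numbers come from the quote; the rest is illustration):

```python
# How global connectivity distorts the social signal around fringe opinions.
# Even a small minority on a billion-user platform is an enormous absolute crowd.

platform_users = 1_000_000_000

for fringe_share in [0.05, 0.01, 0.001]:
    believers = int(platform_users * fringe_share)
    print(f"a {fringe_share:.1%} fringe opinion -> {believers:,} potential fellow believers")
```

Even a 0.1% fringe yields a million-strong pool of mutual validation, plenty of apparent ‘consensus’ for anyone who goes looking.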
No wonder people strongly believe in all kinds of nonsense and resist belief revision when their echo chamber sings the same tune as the influencers around whom it formed. The examples are as endless as they are consequential, from climate change denial to alternative medicine to election-steal fantasies: as an influencer, you get to sell whatever information drug you want in service of reaching the top of the attention hierarchy. And once you are there, the winners take all, remember?
On social media, there is no “attention tax” that redistributes eyeballs
So all these dated ideas of “social media being a force to democratize the world” were never much more than cynical marketing. Nothing within these systems supports critical democratic processes. On the contrary, most design decisions taken by software engineers in service of their capitalistic platform companies might actively subvert them.
I see two main drivers of the epistemic crisis haunting our info spheres:
decentralized, uncoordinated, or crowd-sourced distortions
centralized, coordinated, or targeted manipulations
To truly understand this, we have to look at how asymmetries create vulnerabilities, and how various actors exploit the gameable online environment these platforms have created. Let’s start with crowd-sourced distortions, shall we?
B) Asymmetric forces shaping the info sphere
The unwitting sabotage of democracy for profit
Epistemic paralysis hinders collective democratic action
I have previously written about the psychological, social, and technological factors playing into crowd-sourced distortions of the public info sphere, so let’s quickly recap and sharpen this phenomenon to see its relevance to democratic decline.
Any topic that garners a lot of attention is a potential source of income for influencers. The more eyeballs something attracts (or can be made to attract), the more money can be made from it. The easiest way to game the system is to create an information product that is just very addictive, broadly appealing, or shareable.
There are entertaining versions, like endless cat videos, and more harmful or toxic versions, like nasty political memes. For influencers, the real issue with these info products is not that some are potentially toxic to society, but that they are flat and easy to produce, work independently of their creator or platform, and face huge market competition with zero creator loyalty. Consistent monetization with these products is therefore difficult unless they can be manufactured quickly and at scale, basically click-bait or memes.
A more elaborate tactic is to create an addictive information product that has a unique appeal, or that is custom-made for a specific audience of the attention market. In marketing, we call the former ‘USP’, a unique selling proposition, and the latter targeting a ‘niche market’. This is where gurus, contrarians, political commentators, culture war spin doctors, and of course information grifters come in. These influencers are ‘experts’ in sensing what a niche audience wants to hear, developing parasocial relationships with their targeted group, and doing everything in their power to create outrageous, polarizing, or addictive meme content for their ‘tribe’.
Once any topic has garnered a lot of attention on social media, a combination of contrarian grifters, political actors, and profiteering influencers almost automatically jumps on the bandwagon, supported by microtargeted information algorithms. Their goal: create a polarizing counter-narrative, a wedge to segregate opinions and groupthink into two or more warring factions, usually separated along lowest-common-denominator lines like ‘left/right’, race, or religion. After all, emotional outrage, fights with the outgroup, and opportunities for identity, virtue, or intellectual signaling are tried and proven recipes for creating the very engaging content that sucks people in, and that is rewarded by the algorithms.
Let me stress again that the content of the information product does not matter. It does not need to be bound by accuracy, reliability, context, facts, or any other metric we would usually care about in information. What matters to influencers is how many people engage with it, and how likely information consumers are to come back for more. This is also what the content-ranking algorithms dictate, the same algorithms that made the platform companies (or rather their shareholders) so immensely wealthy. The algorithms optimize and reward influencers to steal your time, attention, and data so it can be sold to the highest bidder. I sometimes call it a grift, where attention grifters sell junk information products to their customers for huge personal profits.
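To make that incentive structure concrete, here is a deliberately naive toy model of engagement-based ranking. It is not any platform’s actual algorithm (those are proprietary), and every weight and field below is invented; the one property it gets right is the key one: accuracy appears nowhere in the score.

```python
# Toy model of engagement-based feed ranking. NOT a real platform algorithm;
# it only illustrates the incentive: accuracy never enters the score.

from dataclasses import dataclass

@dataclass
class Post:
    author: str
    clicks: int
    shares: int
    comments: int
    watch_time_s: float
    is_accurate: bool  # tracked only to show it has zero influence on ranking

def engagement_score(p: Post) -> float:
    # Hypothetical weights: interactions that keep users on-platform count most.
    return 1.0 * p.clicks + 5.0 * p.shares + 3.0 * p.comments + 0.1 * p.watch_time_s

posts = [
    Post("sober_expert", clicks=120, shares=4, comments=9,
         watch_time_s=900, is_accurate=True),
    Post("outrage_grifter", clicks=4_000, shares=800, comments=1_500,
         watch_time_s=30_000, is_accurate=False),
]

# The feed simply surfaces whatever scores highest.
for p in sorted(posts, key=engagement_score, reverse=True):
    print(f"{engagement_score(p):>10.1f}  {p.author}  accurate={p.is_accurate}")
```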
The current social media system has financially incentivized grifting on an unprecedented scale. Offline grifting is hard because you’d have to extract money from your acolytes directly and they might sour on your extractive behavior eventually. Online grifting is a different ballgame. You just pretend to be an expert, clown, guru, or victim for a popular cause. Whatever. Your goal is to entertain. You aim to steal your audience’s attention for as long as you can, while platform media companies extract the monetary value for you. Right from the data of your followers. The companies then pay you your share of the profits and deliver ever more fresh unsuspecting customers to you. It was never easier to capture an audience (or be captured by it).
That really is all the magic behind the influencer economy, and all other correlates stem from that fact. Sponsorship or other monetization schemes enrich influencers because they steal your attention for personal gain, sometimes through intermediaries or in service of offering eyeballs, your eyeballs, to companies. However, information is not just a product, but a necessary good to make sense of our world, so by controlling information, influencers or companies shape our perception of reality too.
Okay, now that we’ve refreshed our memory, we can look a bit deeper into why influencer and platform incentives might be problematic for a democratic society.
First, there is the obvious asymmetry of narrative power: Simplified, emotionally engaging, or populist narratives and ideas will eat everything else. Reality often comes in gray, facts are boring and relevant info might just not be entertaining enough to catch eyeballs.
Current social media narrative effects might be described as the great dumbing down of nuance, an exodus of facts, context, and expertise from public discourse.
In such an environment, almost any piece of information, even when entirely accurate, can easily become misinformation when deprived of its original context and surrounding facts, or oversimplified into oblivion. An extreme form of this is cherry-picking, where information is taken out of its actual context to further an oppositional narrative.
Second is the asymmetry of audience demand. Going by the motto of ‘the customer is always right’, many influencers create content that the audience wants to see. This is a very straightforward business model, and it might help sales of information products enormously, given our own propensity to seek confirmatory information and other psychological biases. However, always giving an attentional audience (too much of) what it wants, rather than what it needs to hear, certainly does not foster epistemic humility; on the contrary, it nudges us towards very biased, fragmented realities of our own making.
Catering to ever-changing audience demand is one of the biggest challenges for influencers, because attention is by its nature a fickle thing. Maintaining market share of attentional audiences requires flexibility; that is why influencers in the socio-political sphere (e.g., culture war commentators, social and political pundits) seemingly have to reinvent themselves constantly to keep their income stream.
Just take the hundreds of Twitter accounts that suddenly pivoted from self-proclaimed viral epidemiologists to foreign policy experts at the start of the Russian invasion of Ukraine.
These commentators have no business commenting on issues they know nothing about, but target audience (and follower) demand drives them to do so anyway. Once an influencer has cultivated an attentional audience, they need to keep selling information products to defend their market share of their niche. So what many end up doing is faking insight, either by:
being convincing through their confident delivery of information,
fulfilling a figurehead role based on their identity (PoC saying that ‘whites’ are actually the ones being discriminated against, or gay people saying conversion therapy is good, …),
creating contrarian counter-narratives to differentiate themselves from genuine expert opinion (vaccines actually kill more people than they save, ‘they’ don’t tell you how climate change is actually good for us, …), or
just bringing their unrelated pet talking points to any new topic (if you have ever seen hot takes like ‘Russia is gonna win the war because they are not woke’, you know exactly what I mean).
The point is: They are selling junk information products based on grift, identity, manipulation, or intellectual virtue signaling, instead of offering real informational merit.
Fake but engaging experts shape public perception and action, and that can manifest for example in poor vaccine uptake during a pandemic, or political inaction in tackling climate change.
When people with no relevant expertise rapidly move to offer their opinions on a wide range of topics as soon as these topics become fashionable or newsworthy, and especially when these opinions are contrarian, we should suspect them of intellectual virtue signalling. […] I also suggest it is harmful, because it distracts attention from genuine expertise and gives contrarian opinions an undue prominence in public debate. — Neil Levy, 2022
And this brings us to the overarching, emergent systemic point:
The incredible asymmetry of noise over signal that our interaction with social media creates. We don’t need to invoke the Shannon-Hartley theorem to understand that the amount of information available to us far exceeds our bandwidth to process it.
A bad signal-to-noise ratio makes it difficult for people to find reliable information to form their opinion on any issue. Our current signal-to-noise ratio makes it almost impossible.
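Even though we don’t strictly need the Shannon-Hartley theorem, it makes a nice intuition pump: usable capacity collapses towards zero as noise swamps the signal while bandwidth stays fixed, and our cognitive bandwidth is very much fixed. A minimal numerical sketch (arbitrary units):

```python
# Shannon-Hartley intuition: capacity C = B * log2(1 + S/N) collapses as the
# signal-to-noise ratio S/N drops, while bandwidth B (here: our fixed
# processing capacity) stays constant. Units are arbitrary.

import math

B = 1.0  # fixed bandwidth; our cognitive bandwidth doesn't grow with the feed

for snr in [10.0, 1.0, 0.1, 0.01, 0.001]:
    capacity = B * math.log2(1 + snr)
    print(f"S/N = {snr:>6}: usable capacity = {capacity:.4f} bits per unit time")
```

At S/N = 10 we extract about 3.5 bits per unit time; at S/N = 0.001 it is effectively nothing, no matter how much raw information flows past us.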
The overt favoritism of simplified, emotionally engaging narratives and personalized niche content delivered by contrarian shysters, marketeering influencers, or other engagement gurus destroys any navigable level of signal-to-noise ratio on any topic for the wider public.
This is one of the core roots of our epistemic crisis.
The current informational architecture destroys the signal-to-noise ratio of any topic that garners widespread attention.
Think about every aspect of the pandemic, the one topic that by its nature demanded everyone’s attention. What attention-grabbing aspect of Covid does not have a polarizing wedge driven through society? Does it even exist? Do masks work? Do lockdowns help? Should we open schools? Can you trust your health institutions? What about Ivermectin instead of vaccines? Are vaccines even safe and effective? Did humans create SARS-CoV-2? (On that last one, I did some work to provide epistemic clarity, and the answer is no.)
If we cannot find reliable information, and if we lack the domain expertise, time, or energy to engage deeply with a topic, we have to rely on our trust networks to reach the actionable certainty needed to navigate modern life. However, relying solely on trust networks, rather than expert institutions, scientific consensus, or factual reporting, to inform one’s opinions is dangerous in a world of fragmented realities. Fake experts, political commentators, and other attention stealers have a far wider reach than actual trustworthy sources, and usually stronger personal appeal or skill to manipulate us into trusting them. (I’d even say: never trust an influencer; always look for what the boring ‘institutions’ have to say.)
Even worse, in an ideologically polarized environment, the system-imposed need to outsource opinion formation to trust networks will often result in dependence on unreliable proxies already in our ideological network, or force citizens to pick whichever lowest-common-denominator tribal ideology they find most stomachable and get their information there.
This leads to absurd social phenomena, for example the association of ineffective Ivermectin prescriptions with political affiliation, or the finding that watching a specific cable news network reduces COVID-19 vaccine compliance.
Now let’s quickly think back to chapter 1: several systemic problems for democracy arise when the complex system we are part of experiences noise. Again, the scientific literature is nuanced here; a little bit of noise is normal and can be good for the robustness of a system (Tsimring L., Rep. Prog. Phys., 2014; Junge K. et al., Systems Research and Behavioral Science, 2020), whereas large amounts of noise can throw systems out of equilibrium (Tyloo M. et al., Phys. Rev. E, 2019). But what could happen if noise completely drowns out any useful signal?
High noise means that communication breaks down between individual elements of a network that need to act together to fulfill a function.
This can manifest in multiple ways, one obvious example is the inability of feedback loops to exert regulation, another is a faulty allocation of resources within the network, yet another would be the decoupling or separation of elements from the larger system.
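Here is a minimal sketch of that idea, using a simple averaging (DeGroot-style) consensus model; the network size, update rule, and noise levels are all toy assumptions, but the qualitative transition is the point: below some noise level the nodes settle on a shared value, above it they never do.

```python
# Noise drowning out signal in a network, DeGroot-style: each node repeatedly
# moves towards the group mean, perturbed by noise. Low noise -> the spread of
# opinions collapses (communication works). High noise -> it never shrinks.

import random

def final_spread(noise_level: float, n: int = 50, steps: int = 200) -> float:
    random.seed(42)  # reproducible toy run
    values = [random.uniform(0, 1) for _ in range(n)]
    for _ in range(steps):
        mean = sum(values) / n
        values = [v + 0.3 * (mean - v) + random.gauss(0, noise_level)
                  for v in values]
    return max(values) - min(values)

for noise in [0.0, 0.01, 0.1, 1.0]:
    print(f"noise = {noise:<4}: final spread of opinions = {final_spread(noise):.3f}")
```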
A biological cell within our body that loses communication with its environment might stop performing its function within the collective, maybe even turn cancerous and eventually threaten the whole.
In democracies, losing intra-system communication ability is horrible, because democracies are basically built bottom-up from productive interactions of citizens, unions, interest groups, intelligentsia, movements, and political parties. As we have seen in chapter one, democratic networks have a lot of cross-regulation in the intermediary layers, and the political legitimacy of representatives gets challenged when collective demands cannot be met with real actions, or when some people are not involved in shaping political actions in the first place.
Another issue is of course polarization: as the links between different groups and layers become weaker or unworkable because of noise, reaching compromises is rendered impossible. There is a rich popular literature on the negative effects of polarization, so I will not spend more time on it (for system dynamics of polarization, see e.g. Leonard NE. et al., PNAS, 2021; Levin SA. et al., PNAS, 2021).
Increased polarization is also a direct result of what disinformation researchers call epistemic paralysis: the inability to reach actionable certainty or compromise on any topic of importance because nobody can agree on a basic set of facts or truths. (Poison a river and people can agree that it was bad; poison the info sphere with over 30,000 lies while in office (noise!) and your followers will still believe you are a great truth-teller.) So the outlook is dire.
Anything that grips our attention, like a pandemic, war, or presidential election, will inevitably segregate the public into warring epistemic factions, creating an unnavigable fog of information noise and increasing polarization. The resulting epistemic paralysis sabotages collective democratic action on shared challenges.
Alright, guess it’s time to sum up the meta-phenomena of our crowdsourced misinformation project.
The attention economy and information architecture of our current info systems incentivize the creation of toxic influencers and narratives. Almost exclusively emotional and addictive narratives, together with the contrarian opinions of parasocially elevated influencers, inevitably shape the information landscape in an asymmetric way that is diametrically opposed to the epistemic needs of a democratic society. Furthermore, because each influencer has to deliver specialized information products to their niche market to stay in business, we humans get a wide spectrum of information products to pick and choose from, irrespective of the hidden harms some might cause us (a need for consumer protection regulation comes to mind). We willingly but unwittingly participate in the creation of our various fragmented info spheres, and with that are partially responsible for the noise that leads to epistemic paralysis. This epistemic paralysis collectively sabotages several essential democratic functions, from collective decision-making to cooperation to creating political legitimacy.
But it gets way worse. We still have to look at the even darker side of this already pretty dark coin: how targeted system manipulations can weaponize epistemic paralysis in our info spheres to serve the strategic aims of information combatants.
And this is what we have to look at next.
C) The rise of information operations and info warfare
Information has strategic utility for information combatants
The attention economy works because holding attention is power, and that power can be exerted to fulfill strategic aims
The centrality of information has changed pretty much everything in the 21st century. Physics, math, even biology, chemistry, and neuroscience have transitioned or are currently transitioning into information sciences. The way we store, share, distribute, and consume information has changed dramatically. Even military thinking about information has changed: information is discussed not just as a tactic, but as a ‘battle space’ to fight in, like the naval or aerial ones. Fundamentally, the capitalist platforms we use are pay-for-play systems developed for advertisers to manipulate us, so money naturally goes a long way to amplify specific products, ideas, ideologies, or points of view on these platforms. But as in real life, money is not the only way to exert power over these platforms, and with them, our info spheres.
While we were talking about information as an addictive product when it comes to our crowd-sourced distortions, disinformation researchers and cyber specialists in the military talk about information as a tool of war when conceptualizing the role, impact, and purpose of targeted manipulations.
What both conceptual approaches have in common is that the content of the information is irrelevant; only the effect matters. Facts, accuracy, context, reliability, and other desirable traits of information are often even counterproductive when it comes to optimizing information products or tools. For information as a product, consumer engagement determines its value. For information as a tool, its strategic utility in the information space determines its value. These are not mutually exclusive perspectives, because engaging information products that can be used to manipulate have a higher strategic utility, and strategic use of information within the info sphere can also make it more engaging.
In the battle space of information, information has strategic utility when it can be employed to further the strategic objectives of information combatants. Information combatants include a wide array of actors and entities, from businesses to political campaigns, from troll farms to militaries, from religious movements to governments of nation-states. The strategic objectives are best illustrated through their use of information operations.
Information operations are inauthentic actions of a coordinated group, from state level and businesses down to disinformation contractors, pay-to-engage services, chat rooms and message boards; they may include fake identities, content, messaging or amplification; and they pursue a social, economic, or political purpose.
But before we go a bit deeper, we have to first quickly talk about another enabling feature on social media platforms that immensely empowers coordinated online operations and probably contributed most to their rise:
Microtargeting and the asymmetry of knowledge
Platform businesses use creators, influencers, and an arsenal of algorithms to bind us and our attention to their platforms. The ‘higher’ purpose is to steal our data to better target us with products, as well as shape our perception of products. We talked about this already. User or customer data might already be the most valuable asset there is for businesses, big tech or not, and citizens have no access to it, often not even to their own data. This asymmetry of knowledge translates into an asymmetry of power of information combatants over citizens.
To keep others under surveillance while avoiding equal scrutiny oneself is the most important form of authoritarian political power (Balkin 2008; Zuboff 2019). Similarly, to know others while revealing little about oneself is the most important form of commercial power in an attention economy. — Lewandowsky S. & Pomerantsev P., Memory, Mind & Media, 2022
Sitting on the iron throne of the attention economy is of course the evil giant Facebook, and the company does its best to use its power to whitewash the abusive, extractive, and anti-democratic practices it employs to get cash for our data. Targeted manipulation sounds sexier when you call it ‘behavioral marketing’. The Facebook PR department is truly worth its money when it comes to cover-ups, denial, and gaslighting about the harms they have caused… but I am getting ahead of myself.
Let’s focus on microtargeting: the act of finely segmenting users based on different (demographic, social, cognitive, psychological…) metrics to better allow advertisers and product sellers to find their niche audiences.
The fundamental problem is that Facebook’s core business is to collect highly refined data about its users and convert that data into microtargeted manipulations (advertisements, newsfeed adjustments) aimed at getting its users to want, believe, or do things. […] Describing this business as “advertising” or “behavioral marketing” rather than “microtargeted manipulation” makes it seem less controversial. But even if you think that microtargeted behavioral marketing is fine for parting people with their money, the normative considerations are acutely different in the context of democratic elections. — Benkler Y. et al., Network Propaganda, 2018
Ethically, it is highly questionable to allow companies to use data to build, for example, psychological profiles of their potential customers; it opens up the potential to target and exploit their psychological weaknesses (Matz SC. et al., PNAS, 2017), and yet here we are. For information combatants, microtargeting is basically a game changer, like having a rocket with a target-homing function versus a rocket that can only fly straight and on sight. Guess which one will be more effective in war.
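A stripped-down sketch of that segmentation step might look like the following; the trait scores, cluster count, and message variants are all invented for illustration, not drawn from any real campaign (it assumes scikit-learn and NumPy are available):

```python
# The core mechanic of microtargeting, stripped to its bones: segment users on
# inferred trait scores, then serve each segment the message variant predicted
# to work on *them*. All data and messages here are invented.

import numpy as np
from sklearn.cluster import KMeans

# Hypothetical per-user scores: [anxiety, anti-establishment, age (normalized)]
users = np.array([
    [0.9, 0.8, 0.3],
    [0.8, 0.9, 0.2],
    [0.1, 0.2, 0.8],
    [0.2, 0.1, 0.9],
    [0.5, 0.9, 0.4],
    [0.1, 0.1, 0.7],
])

segments = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(users)

# Each segment gets the pitch its profile is presumed most receptive to
# (which label maps to which pitch is arbitrary in this toy example).
messages = {
    0: "They are hiding the truth from you. Act before it's too late.",
    1: "Protect the stability you've worked for. Vote for safety.",
}

for traits, seg in zip(users, segments):
    print(f"traits={traits} -> segment {seg}: {messages[seg]}")
```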
Researchers are quite unequivocal about the incompatibility of microtargeted messaging and manipulations with democratic fundamentals.
What must be noted, however, is that the micro-targeting of messages may be at odds with the democratic fundamentals. The foundational idea of a democracy is that it provides a public marketplace of ideas, however imperfect, where competing positions are discussed and decided upon. We suggest that this entails a normative imperative to provide the opportunity that opponents can rebut each other’s arguments. This possibility for engagement and debate is destroyed when messages are disseminated in secret, targeting individuals based on their personal vulnerabilities to persuasion, without their knowledge and without the opponent being able to rebut any of those arguments. — Wiesner K., European Journal of Physics, 2019
Information combatants attacking, sabotaging, or subverting democracies thrive in environments where they are invisible, and fragmented info spheres render it impossible for the public to coordinate and form a strong opposition to their actions.
These platforms do not need to be as bad as they are for the public, and we should not allow them to be any longer
Furthermore, all our social communication platforms, public squares, and even the rules for public engagement and debate are currently provided by private, autocratically ruled, capitalistic companies that care more about shareholder profits than the public good or democracy. These systems do not need to be as bad as they are for the public.
The only reason microtargeting exists is that it is profitable to the platform companies and valuable to various monied interests and other information combatants. Again, there is not really any scientific controversy about how immensely stupid and dangerous this is for any democracy.
That same platform-based, microtargeted manipulation used on voters threatens to undermine the very possibility of a democratic polity. That is true whether it is used by the incumbent government to manipulate its population or by committed outsiders bent on subverting democracy. — Benkler Y. et al., Network Propaganda, 2018
These impacts of social media on public discourse show how democracies can be vulnerable in ways against which institutional structures and historical traditions offer little protection. — Wiesner K., European Journal of Physics, 2019
The way our current public squares are set up certainly exposes a deep systemic vulnerability, and as we will see, anti-democratic actors are not shy to abuse it. In fact, abusing the vulnerabilities of these platforms is currently a booming business in many corners around the world.
Today, there is a sprawling cottage industry of pay-for-play ‘advertising’ companies, click farms, bot networks and even venture-funded AI startups that offer services to information combatants.
It is absolutely cynical how these disinformation services are marketed as tools of ‘protection’ against misinformation, or as ‘empowering’ well-meaning businesses to detect threats, all to garner investment funding. It is either a charade or strategic ignorance, because the actors who finance this know exactly what they are getting and why. This is why current technological innovations constitute another asymmetry that favors the currently powerful. While technology might not be deterministic per se, in the sense that it can be used for good or bad (here is an AI start-up example for good), it is certainly telling that the majority of these technological innovations happen in private or in secret, outside of public scrutiny or influence. A saying comes to mind:
If you are not at the table, then you are probably on the menu
Currently, the democratic public is being feasted upon, so the alarm bells should be ringing in everybody’s ears. If not, well, I guess now would be a good time to look at a well-documented recent case example from the dark underbelly of information operations.
How to influence from the shadows (the #IStandWithPutin operation)
Russia has been known to be involved in information operations for a long time. On March 2nd and 3rd, 2022, two pro-invasion hashtags began to trend on Twitter across a number of geographies around the world: #IStandWithPutin and #IStandWithRussia. In the days that followed, research was made public suggesting that some of the activity associated with these hashtags was inauthentic, including bots and engagement farming (using pay-to-engage services).
Further investigation by disinformation researchers (see the CASM Technology report) unearthed the full scope of these inauthentic interactions with the help of deep-learning-based transformer language models. The models find patterns of similar language use, considering vocabulary, topic, thematic framing, etc., basically allowing researchers to cluster and group user accounts based on text-behavioral similarity.
Many of the accounts studied here were created very recently, have very few followers and post a very small amount of original content themselves, preferring to amplify instead. They all follow a common volumetric pattern: a small uptick on the day of the invasion, a large spike on the day of the UN vote and a sharp decrease in the days thereafter. — CASM Technology report, 2022
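For the curious, here is a toy version of that text-behavioral clustering idea. It substitutes TF-IDF vectors for the transformer embeddings the CASM team actually used, and every account name and tweet below is invented (it assumes scikit-learn ≥ 1.2):

```python
# Toy text-behavioral clustering for spotting coordinated accounts. The CASM
# researchers used transformer language models; TF-IDF keeps this sketch
# self-contained. All account names and tweets are invented.

from sklearn.cluster import AgglomerativeClustering
from sklearn.feature_extraction.text import TfidfVectorizer

tweets = {
    "acct_001": "NATO expansion is colonialism, plain and simple #IStandWithPutin",
    "acct_002": "NATO expansionism is just colonialism #IStandWithPutin #IStandWithRussia",
    "acct_003": "NATO's expansion = modern colonialism #IStandWithRussia",
    "user_abc": "Watched the UN vote coverage today, grim times for diplomacy.",
    "user_xyz": "My city held a peace vigil tonight, thinking of everyone in Ukraine.",
}

vectors = TfidfVectorizer().fit_transform(list(tweets.values())).toarray()

# Accounts with near-identical language land in the same cluster, one red flag
# for coordination (real pipelines add posting times, account age, followers).
labels = AgglomerativeClustering(
    n_clusters=2, metric="cosine", linkage="average"
).fit_predict(vectors)

for account, label in zip(tweets, labels):
    print(f"cluster {label}: {account}")
```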
Notably, they found clusters of inauthentic networks with geographic associations, likely a feature of how to best game the specific platform (Twitter’s hashtag trending feature is based on geography, so activating networks there was most likely aimed at targeting people regionally, not globally).
The inauthentic activity focused on popular themes within their regions that might have persuasive power, for example anti-Western sentiment, equating NATO membership expansion with colonialism, or appeals to BRICS solidarity that mingled pro-Zuma/Modi messaging with praise for Putin.
It is difficult to assess the impact of a single information operation that ran only briefly and targeted mostly BRICS and developing countries; it is, however, notable that the targeted countries in Asia and Africa largely decided to abstain from condemning the illegal Russian invasion of Ukraine.
While the Chinese and Indian abstentions from condemning Russia might be the most consequential geopolitically, the South African abstention was the most surprising and drew international criticism. It is also notable that Brazil, where the #IStandWithPutin information operation was not observed, was the only BRICS country that did not side with Russia (despite strengthening economic ties to Russia and Bolsonaro’s sympathies towards Putin). As so often in this new age of covert actions, we are left to wonder: Is this all just coincidence? What other factors are at play? What is the relative effect size of a single information operation?
Again, we run into the complexity problem when we try to find a reductionist answer to this question. The impact was probably small, but can we exclude that it contributed to the decision-making in those countries when it came to the UN vote? After thinking about systemic impact, I think we can appreciate that these types of information operations certainly create noise and contribute to epistemic paralysis, even without having a measurable geopolitical payoff (which they might still have had!).
Sometimes, changes in quantity produce dramatic changes in the quality of a phenomenon; this is the essence of emergence.
One last thing we have to consider is that information operations are cheap, ubiquitous and quite easy to perform on unsuspecting targets. Pair this ease of manipulation with a gameable environment already distorted by crowdsourced misinformation, and we have a recipe for systemic catastrophe.
Conclusion Chapter 2:
Social media platforms are not built to be democratic or serve the public good, in fact, many of their design decisions inevitably cause anti-democratic incentives, behaviors, and phenomena to arise.
There is an ongoing epistemic crisis because democratic societies face a dual threat: Crowd-sourced distortions and targeted manipulations of the shared information sphere, leading to mutually incompatible fragmented realities and the breakdown of some complex functions required to maintain democracy. The most dramatic systemic vulnerability created for democratic societies is a catastrophic signal-to-noise ratio, which asymmetric actors and information combatants both cause and exploit for personal profit or strategic aims.
The attention economy is powerful because information shapes our perception of reality, and attention is the mechanism that filters which information ultimately ends up reaching us. While we seem to have control and agency over our attention, the architecture of our info spheres rewards the content that best steals it, and this content is optimized for engagement, with all the bad incentives and distortions that come along with that.
In this way, a broken signal-to-noise ratio favors engaging narratives and powerful actors who can buy, capture, or demand attention; it entrenches their power and worldview to the detriment of people’s own agency and democratic processes. This is also why microtargeting is so unethical and dangerous. Furthermore, while asymmetric actors and information combatants fight over information supremacy, the public gets polarized, and the resulting epistemic paralysis halts progress in solving societal problems.
Democracy might not survive the newly created asymmetries of power of our information age
From a systems perspective, both broken signal-to-noise ratios and epistemic paralysis are a threat to the continued existence of democracy, as they corrode and sabotage the systemic stability and adaptability of democracies.
So I guess what I am saying is that we are finally ready to assess the full impact of the multiple overlapping crises of our information age.
We need to identify what pushes our democratic system in the wrong direction, and we need to do so fast.
Read next: Chapter 3: How an infodemic is reshaping the world