
AI & The New Kind Of Propaganda

by Tyler Durden
Wednesday, Feb 14, 2024 - 11:30 AM

Authored by Johan Eddebo via Off-Guardian.org,

Do you remember how the unconstitutional, pastel-authoritarian and totally batshit insane “Disinformation Governance Board” – with its Mary Poppins-cosplaying, Monty Python level of unintentional self-satirizing department head – was rolled out two years ago like a half-joke, half-beta-test of a version of the 1984 Ministry of Truth?

Well, kids, I wouldn’t really call this 4D chess or anything, but of course this was just bait. This parody and its rapid withdrawal reassures us that nothing of the sort could conceivably take place, while also seeding a visible, red-herring template for how we should expect heavy-handed, overt propaganda efforts to look in this day and age.

Meanwhile, there are currently massive efforts in the background and below the surface, all across the playing field, towards implementing big data and AI technology not only for the purposes of classical, increasingly obsolete propaganda or simple surveillance. No, this time, we’re exploring entirely novel methods of behavioural modification and narrative control, intended to GET OUT AHEAD of the crystallization of discourses and even the formation of identities and worldviews.

They want to control the formation and reproduction of “social imaginaries”.

So the idea is to use massive data collection and AI pattern recognition to preemptively disrupt the formation of behaviourally significant narratives, discourses or patterns of information.

With these tools of “early diagnosis” of information that potentially could disrupt the power structure and its objectives, it then becomes possible to nip it in the bud incredibly early on, way before such information has even coalesced into something like coherent narratives or meaningful models for explanation or further (precarious) conclusions.

During the past two decades the commitment on the part of the US Department of Defense (DoD) to what Edwards (1996) has analyzed as a ‘closed world’ of containment and military dominance has taken on new life. The shift during the 1990s from a frame of superpower conflict to the so-called irregular warfare of counterinsurgency and counterterror operations aligned well with the building out of networked infrastructures. Yet the associated Revolution in Military Affairs has resulted less in the dissolution of the persistent ‘fog of war’ than in its intensification. Read as a lack of information integration, the intransigent disorders of warfighting underwrite ever expanding investments in what I will argue is a resilient fantasy of data-driven, comprehensive command and control. Building out systems of sensors, signal processing, data storage and transmission has proven more straightforward than the translation of data into what in military terms is named ‘actionable intelligence’. As the excess of data now threatens to destabilize the technopolitical imaginary of just-in-time information, artificial intelligence (AI) is advanced as the promissory solution to automating data analysis and reclosing the world.

Suchman, L. (2022). “Imaginaries of omniscience: Automating intelligence in the US Department of Defense.” Social Studies of Science, 53(5), 761–786. SAGE Publications.

The whole process would work something like this.

AI scours the massive amounts of data collected in real-time from social media and digital communications networks, including everything from cell phone transcripts to instant messaging applications. (Why do you think WhatsApp, now owned by Meta, formerly known as Facebook, allows you to call everyone in the world for free?)

You’ve then trained the algorithms to pick out “disruptive” patterns in the communications that precede the various sorts of developments you want to avoid, such as a piece of information going viral. But you would also want to identify potentially robust clusters of information that won’t necessarily go viral rapidly, yet have the capacity to generate strong and enduring narrative frameworks that could threaten the status quo over time. This would include such complex phenomena as religious and political reform movements, innovative counter-cultural approaches, or even artistic endeavours that could impact people’s meaning-making.

You then pick out key “influencers”, i.e. nodes or potential nodes in the patterns of communication. The crucial and totally devious addition to run-of-the-mill approaches like shadow banning and other types of soft censorship, the part that is genuinely new to this stage of digital propaganda, is the proactive seeding of counter-narratives that becomes possible at this point.

So you gently and carefully “nudge” the potentially disruptive individuals in the preferred direction, to mold and shape the flows of information over which they have influence, with the end goal of actively shaping or blocking the crystallization of any viral formations or potentially robust narratives or new “social imaginaries”.
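The three-step process described above – score incoming messages for “disruptive” patterns, locate influential nodes in the sharing graph, and queue a counter-narrative “nudge” for each – can be caricatured in a few lines of toy code. Everything here (the keyword list, the thresholds, the sample data) is invented for illustration; an actual system of the kind the article posits would use trained classifiers over vastly more data, not a keyword match.

```python
from collections import Counter

# Invented stand-in for a trained pattern-recognition model.
RISKY_TERMS = {"adverse", "coverup", "leak"}

def disruption_score(text):
    """Fraction of 'risky' terms present -- a crude classifier surrogate."""
    words = set(text.lower().split())
    return len(words & RISKY_TERMS) / len(RISKY_TERMS)

def flag_messages(messages, threshold=0.3):
    """Step 1: pick out messages matching a 'disruptive' pattern."""
    return [m for m in messages if disruption_score(m["text"]) >= threshold]

def key_influencers(reshares, top_n=1):
    """Step 2: crude influencer detection -- whose posts get reshared most."""
    counts = Counter(author for _resharer, author in reshares)
    return [author for author, _ in counts.most_common(top_n)]

def plan_nudges(flagged, influencers):
    """Step 3: queue a counter-narrative action for flagged influencers."""
    flagged_authors = {m["author"] for m in flagged}
    return {u: "seed counter-narrative into feed"
            for u in influencers if u in flagged_authors}

messages = [
    {"author": "alice", "text": "huge leak about the trial coverup"},
    {"author": "bob", "text": "nice weather today"},
]
reshares = [("carol", "alice"), ("dave", "alice"), ("carol", "bob")]

flagged = flag_messages(messages)
influencers = key_influencers(reshares)
nudges = plan_nudges(flagged, influencers)
```

The point of the sketch is only that each step is mechanically simple once the data collection and model training are in place; the difficulty (and the budget) lies entirely in those two prerequisites.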

Full spectrum narrative dominance.

As reported by Lee Fang, one company that recently emerged, with its sights set on “keeping tabs” on online conversations and actively employing “countermeasures” at the behest of major corporations and Western governments, goes by the name of “Logically.AI”.

The business is even experimenting with natural language models, according to one corporate disclosure, “to generate effective counter speech outputs that can be leveraged to deliver novel solutions for content moderation and fact-checking.” In other words, artificial intelligence-powered bots that produce, in real-time, original arguments to dispute content labeled as misinformation.

Lee Fang, “Logically.AI of Britain and the Expanding Global Reach of Censorship”, RealClearWire.

Logically.AI is as spooky as they come. Its US headquarters sit in Arlington, VA, right next door to the Pentagon and DARPA, and the company was allegedly founded, at the tender age of 22, by a strangely untraceable ghost with the implausible name of “Lyric Jain”.

They’re really cozy with the major Silicon Valley players, including Microsoft, Google, and TikTok, while Facebook (“Meta”) uses them for “fact-checking” and for rating posts and messages for suppression and downranking across all of their social media feeds. They’re also working closely with Western governments and aligned partners across the world, with an avowed role in psychological warfare to counter the influence of Russia and China specifically, as well as in safeguarding Western states’ “election integrity” in the face of unwanted narratives.

Logically.AI is also the mainstream media’s little sweetheart, being adored by all the usual suspects, such as The Guardian, the Washington Post, the BBC &c, and has entered into formal partnerships with academic institutions all across the world.

All of this gives us a picture of their overall reach (and incredibly rapid expansion, which in and of itself raises many questions), and serves to illustrate the astonishing potential for influence an operation of this sort can possess.

We can then see how the close partnership between state and corporate actors supplies both the intentions and goals for the use of these tools and access to the enormous digital data banks and online platforms for discourse. Their reach, precision, and foreign impact are amplified by the association with intelligence services, while the legacy media provides strategic force multipliers for the behaviourally significant narratives one wants to foster and reproduce.

And one really neat addition to the toolkit of contemporary digitized mass-surveillance-censorship & social engineering that’s also taken aboard here is the notion of “malinformation”, i.e. truthful or factual pieces of information, yet which foster undesirable narratives. In other words, just stuff that’s true that they don’t want you to know or think about. This sort of gives up the whole game. It’s not about truth or facts, of course, but about the control of socially significant narratives and nothing else.

It could be worth pointing out that countering such “malinformation” by muddying the waters or composing contrarian narratives is, by definition, tantamount to the very “misinformation” these thought-policing entities claim to combat. Of course, it’s not a problem when our owners do it.

Anyhow, what would then the overall mode of operation look like?

Imagine that you or I express some unit of “malinformation” in our online interactions that gets picked up by one of these pattern-recognizing algorithms trained to detect potentially disruptive pieces of information. Let’s say that this particular statement, entirely truthful, gets tagged with the moderate risk of supporting a set of conclusions that could have a negative impact on one of the major pharmaceutical brands with close ties to the state-corporate power structure. Totally random example.

What this ingenious AI propaganda system then does is automatically cordon off this statement by shadowbanning, downranking and other forms of concealment in the information flows. It also tags us as potentially disruptive agents, increasing scrutiny of our on- and offline activities. Of course, you can also effectively separate “disruptive agents” from each other by automatically demoting their posts in each others’ flows, no matter what the content, and by throwing up all sorts of obstacles to their online interaction. But these are old strategies.
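Mechanically, the feed-side measures just described – downranking a tagged author’s posts, and demoting flagged users in each other’s feeds regardless of content – amount to nothing more than a weight tweak at ranking time. A minimal sketch, with all field names and multipliers invented for illustration:

```python
def rank_feed(posts, viewer, flagged_users):
    """Order a feed for `viewer`, applying the suppression rules described:
    posts by flagged ('potentially disruptive') authors are downranked,
    and posts between two flagged users are demoted hardest, content-blind.
    The 0.2 and 0.05 multipliers are invented for illustration."""
    def score(post):
        s = post["engagement"]          # the ordinary ranking signal
        if post["author"] in flagged_users:
            s *= 0.2                    # generic shadow-downranking
            if viewer in flagged_users:
                s *= 0.05               # near-total mutual suppression
        return s
    return sorted(posts, key=score, reverse=True)

posts = [
    {"author": "alice", "engagement": 100},  # flagged author, popular post
    {"author": "bob",   "engagement": 30},
]

# A flagged alice loses the top slot to bob despite 3x the engagement.
feed = rank_feed(posts, viewer="carol", flagged_users={"alice"})
```

Note that nothing is deleted and no rule is visibly enforced, which is why this kind of suppression is so hard to detect or litigate from the outside.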

What’s added on top of this is the seeding of counter-narratives. One obvious way this can be effected is by situating the relevant statement in a context of contrasting or discordant information, so that both the “disruptive agent” and other recipients of the information get a clear message that this piece of information is both contested AND an off-key minority perspective fit for social ostracization.

This can be further supported by counterintuitively promoting the Facebook post or forum message in the flows of networks of singled-out users that have been identified as “loyalist” proponents of the preferred views (the volunteer thought police corps) so as to provoke their criticism and rejection of the message.

Another potential aspect of this proactive seeding of counter-narratives is to employ bots: fictitious users that, through generative language models, provide targeted responses to potentially disruptive pieces of information. Another interesting possibility is to generate fake short messages, ostensibly from actual users in your social network, to deliver these targeted responses – messages which the impersonated users themselves can’t see (and so will generate no interaction), and which mimic their style and tenor.
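As a caricature of that last idea – short fake messages mimicking a real user’s style – here is a toy first-order Markov chain trained on a user’s post history. A real operation of the kind described would presumably use a large language model; the chain, the sample history, and all names here are invented for illustration.

```python
import random
from collections import defaultdict

def train_style_model(post_history):
    """Crude stand-in for a generative model fine-tuned on a user's posts:
    a first-order Markov chain over adjacent word pairs."""
    chain = defaultdict(list)
    for post in post_history:
        words = post.split()
        for a, b in zip(words, words[1:]):
            chain[a].append(b)
    return chain

def mimic(chain, start, length=6, seed=0):
    """Generate a short fake message 'in the user's voice'."""
    rng = random.Random(seed)   # seeded for reproducibility
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

history = ["honestly that claim is overblown", "honestly the data is fine"]
model = train_style_model(history)
fake = mimic(model, "honestly")
```

Even this trivial mimic only ever emits word transitions the user has actually produced, which hints at why style imitation at scale is technically cheap once the post history is available.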

You know, like they already do with deceased Facebook users, training the AI on their past post history to generate plausible-sounding new posts. There’s a Black Mirror episode on the theme too.

So to sum up, we have these hugely connected organizations with tentacles throughout both legacy and digital media, closely associated with governments and military intelligence, which now proudly proclaim the application of AI towards an entirely new type of generative, responsive and predictive propaganda.

These operations will not only be capable of employing legacy techniques such as shadowbanning, mapping and surveillance with incredible efficiency, thanks to the synergistic connections with big data.

They also promise to control the very genesis of human narratives, the embryos of our collective thought patterns, disrupting them in their half-formed stages before they are able to crystallize into coherent frameworks that you and I could independently employ to navigate and make sense of the world.

And the brilliant thing is that most people won’t be able to connect the dots. If these systems work as intended, those sorts of potentially subversive higher-order rational activities will never find stable ground upon which to build.

Moreover, these more advanced forms of behavioural control and narrative and psychological manipulation are technically not censorship per se. Even if our constitutional structures and safeguards were not captured, this new form of predictive propaganda couldn’t be targeted by our traditional legal regimes, and isn’t caught in the web of our current constitutional mechanisms or rights legislation. It sails right through.

Human beings. We’re barely out of the trees. How do you and I fight back against something like this?

A friend wrote me an e-mail on the issue of how to organize, politically, socially, in our current situation. I’m still thinking about how to respond to that.

One thing is certain. The legacy models don’t work. They’re either recuperated or ill-suited to the new situation. The 19th century didn’t face a global behavioural modification regime perpetrated by an alliance of the entire global corporate media, the intelligence services, the advertising industry, most regional and national legislatures and governments, the supranational governing institutions, and last but not least, the very technological structures themselves.

But the movements for revolution and reform that emerged about two centuries ago were equally isolated. They were also up against a compact global order that violently refused to acknowledge their legitimate needs and interests.

They had to find ways to work around that, and so do we.

*  *  *

Johan Eddebo is a philosopher and researcher based in Sweden. You can read more of his work through his Substack.
