Recommended song for this article. This is part history, part essay. It will sound like fiction, but it is the reality you have lived in for over a decade. I will introduce a little context first.
This will be part of a series of articles touching on how I see Language, Linguistics, the bedrock for modern Cognitive Warfare, Memetics, and Mimetics.
I am grateful for your support. You can buy me a coffee whenever you feel like it.
During my time on social media, I have used 3 “nicknames”. The first was the name of the company I used to work for, and the third is the name I earned over the last 20 years, the current author name on my Substack. The second… the second was quite significant.
John Paul
I highly, highly, highly, highly recommend that you watch Genocidal Organ before going further. Or read everything here and then watch it. It will change your perspective on… everything. Spoilers below.
John Paul is the main antagonist of the seminal work “Genocidal Organ”, a Japanese dystopian military sci-fi novel, and a singular genius when it comes to Linguistics. Genocidal Organ is a dark thriller about war, what makes us truly human, how technology affects us and our cognition, and the ever-continuing erosion of “personal liberty” for the greater good; it also touches on terrorism and the ever-expanding surveillance state. Those are the surface themes.
At its deeper core, Itoh’s magnum opus is about Language, memes, and how Language is not merely a communication tool evolved and developed by humans. With enough insight, Language becomes a “programming” tool, and one quite different from ordinary propaganda. In the book and the animated movie, Language can be manipulated and used to “set off” specific, targeted behavioral reactions in anyone, but especially at large scale, in this case across entire societies.
Among the few books that significantly impacted me, my life, my academic interests, the direction of my (personal) research, and my worldview, Genocidal Organ is in the top 3, if not number 1. Anyone close enough to me will wonder whether I am actually John Paul. Or the author (I WISH lol…).
I can say without exaggeration that Language Models and Natural Language Processing (the branch of machine learning concerned with understanding human language at a deep level, one of the pillars of LLMs) as they exist today would not exist without a few specific popular works of fiction from Japan. .hack (dot hack) is one of these works, and Genocidal Organ (GO) is another.
One passage of the book (also present in the animated movie) is “Ears have no lid”, and one could posit that, in a more philosophical sense, eyes don’t either. Ears lacking a lid implies that you cannot consciously filter sound and the spoken word, and therefore you can be affected by words uttered at you. After a significant amount of research into neurology, brain chemistry, and the language centers of the brain, and after attempting to bring mathematics into all of this, I hypothesized that the brain can process absurdly more visual data than estimated by “the experts”.
The human brain can recognize linguistic structure, regardless of grammatical or semantic accuracy, within 130 milliseconds. Even if the words are misspelled or the sentence follows an unfamiliar structural pattern, the brain processes the information absurdly fast. Furthermore, language is not a passive process; it is not just memory retrieval, as previously thought.
“We’re actively trying to infer and predict what others are trying to say… Language…involves constantly integrating information from the phrases and sentences that you’re hearing to form a meaning or representation in real-time.”
The human brain can detect, process, and analyze the structure of short and even longer sentences in mere milliseconds, and this efficiency points towards something deeper. Much of our linguistic processing happens under the hood, below conscious awareness and cognitive effort. Our brains have evolved the remarkable capacity to absorb vast amounts of data through implicit learning and sub- or unconscious processes — this is a small part of what I call “cognitive subroutines”, similar to the hidden layers of Machine Learning and Artificial Intelligence models.
These cognitive subroutines allow the brain to detect patterns, interpret sensory data, and understand language and data at large implicitly, functioning in a similar way to the hidden layers of neural networks. Just as machine learning models process data through layers of abstraction to make decisions or predictions, cognitive subroutines enable the brain to efficiently manage continuous streams of sensory and linguistic data, priming us to respond to our environment swiftly and adaptively without conscious effort.
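To make the analogy concrete, here is a minimal sketch of how information flows through the hidden layers of a small neural network: raw input is transformed through intermediate representations that the final output never exposes. The layer sizes and random weights below are arbitrary placeholders for illustration, not a model of actual cognition.

```python
# Toy illustration of the "hidden layers" analogy: the input passes through
# intermediate layers the caller never inspects directly, loosely mirroring
# processing that happens below conscious awareness.
import numpy as np

rng = np.random.default_rng(0)

def layer(x, in_dim, out_dim):
    """One hidden layer: a random linear map followed by a nonlinearity."""
    W = rng.normal(size=(in_dim, out_dim))
    return np.tanh(x @ W)

raw_input = rng.normal(size=(1, 32))   # stand-in for a crudely encoded sentence
h1 = layer(raw_input, 32, 16)          # first level of abstraction
h2 = layer(h1, 16, 8)                  # second, more compressed level
response = layer(h2, 8, 1)             # the "response" the outside world sees
print(response)                        # the intermediate h1 and h2 stay hidden
```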
These same processes that make the brain such an effective data-crunching machine also create vulnerabilities. Language structured in specific ways, applied at a specific frequency, and especially with a specific informational density (more on this later) can tap into these subroutines and influence thought without ever triggering the conscious mind's critical filters. In other words, we often absorb linguistic information, data, and patterns passively, and this passive absorption makes us susceptible to manipulation — a key theme explored in Genocidal Organ.
This subtle manipulation of language becomes a terrifyingly effective tool of warfare. In the book, the idea that language, when engineered in specific ways, can act as a trigger for extreme violence is central to the theme. The narrative shows how linguistic patterns can hijack cognitive processes, tapping into those unconscious subroutines to create predetermined emotional and behavioral responses. Modern-day information environments, particularly online, thrive on algorithms that curate, manipulate, and amplify language in ways that can bypass critical thought and activate emotional or behavioral responses almost unconsciously.
This connection between language, cognition, and implicit influence is essential for understanding how cognitive warfare operates and is especially important in the near future. Language is not merely a tool for communication — it is a means of accessing the deeper cognitive machinery that governs much of human decision-making. When linguistic patterns are wielded with precision, they can be far more dangerous than overt propaganda or coercion.
The idea that language can be weaponized is not new. Throughout history, language has been a tool of control — from ancient propaganda to modern political spin. What sets today’s environment apart is the scale, speed, and precision with which language can be manipulated and weaponized, largely through AI.
Language is no longer confined to speeches, texts, and broadcasts; it is algorithmically generated, analyzed, and disseminated at a pace that far exceeds the ability of our conscious brain to keep up. With precise algorithmic tailoring, Language can be shaped to target specific patterns and subroutines in the brain, evoking specific emotional responses and affecting behavior at scale.
This isn't just about the manipulation of individuals; it’s about influencing entire populations through the systematic deployment of language that exploits human psychology. Social media platforms, news organizations, political movements, and now frontier AI labs have harnessed this power, often without fully realizing the implications. The implicit subroutines of the human mind are engaged on a massive scale, and the consequences remain invisible to the vast majority of the population, including those engaging in such weaponization.
AI, Language Models, and Cognitive Warfare
As artificial intelligence advances, the capacity for language to be weaponized has expanded exponentially. Language models like GPT, trained on massive datasets, now possess the ability to generate text that mirrors human thought, even though their underlying understanding is mathematical in nature. This creates a new dimension in the way language can be crafted and deployed.
It should be no surprise that one of the biggest fears of the leading AI labs, and of “AI safety”, is the use of AI to create hyper-persuasive language, and, unbeknownst to them, such use of Language will inherently tap into the same cognitive subroutines cited earlier. These models can identify and replicate the deep structures present in Language, patterns built into not only the way we communicate but the way we think.
The models are not conscious, but they are incredibly effective at mimicking the forms of language that can subtly influence or persuade human readers. And in mimicry lies a deeper truth to our existence.
Whereas human-generated propaganda or disinformation might be limited by scale or creativity, AI-generated language can be both scalable and highly adaptive. It can generate an endless stream of text designed to reinforce specific ideas, trigger emotional resonance, and influence behavior in a myriad of subtle ways. This is not a dystopian fantasy; it is the reality of how modern language models can be deployed in real-world contexts.
Linguistic structures, from the syntax of a sentence to the rhythm of a poem, operate according to rules and patterns that are inherently mathematical. AI language models, trained to process and generate text, are effectively engaging with these mathematical structures when they produce human-like language.
At its core, language can be seen as a system of probabilities — each word or phrase is connected to others by a web of relationships, shaped by syntax, semantics, and context. Language models, like those built on transformer architectures, are designed to capture these probabilistic relationships, allowing them to predict and generate coherent sequences of text. This is where AI’s understanding of language goes beyond simple mimicry: it taps into the deep structural and mathematical underpinnings of language itself.
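As a toy illustration of that probabilistic framing, here is a deliberately tiny bigram model rather than a transformer: it estimates which word is likely to follow which from raw counts over an invented miniature corpus, which is the simplest possible version of "predicting the next token".

```python
# A deliberately tiny bigram model: not a transformer, but enough to make the
# "language as a system of probabilities" framing concrete. The corpus is
# invented here purely for illustration.
from collections import Counter, defaultdict

corpus = "the pen is mightier than the sword and the word is a weapon".split()

# Count how often each word follows each other word.
transitions = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current][nxt] += 1

def next_token_distribution(word):
    """Estimate P(next | word) from raw bigram counts."""
    counts = transitions[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# 'pen', 'sword' and 'word' each follow 'the' once, so each gets probability 1/3.
print(next_token_distribution("the"))
```

Transformer-based models perform the same kind of conditional prediction, only over enormous vocabularies and long contexts, with the probabilities learned rather than counted.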
In many ways, this reflects a deeper truth about language: it is not just a cultural artifact but a system of patterns that, when understood mathematically, can be manipulated in precise ways. The probabilistic models that AI systems use to generate text can be seen as analogous to the way human cognition processes language. Both systems rely on patterns, associations, and probabilities to navigate the complexities of linguistic expression.
This mathematical nature of language is what allows AI to “understand” and replicate it so effectively. While AI does not grasp meaning in the way humans do, it recognizes the structures that give rise to meaning. This ability to work with the inherent mathematical properties of language suggests that language models, in a sense, are uncovering a deeper mathematical truth embedded in the way we communicate.
The Impact of Algorithms on Language and Cognitive Influence
Building on the mathematical nature of language, algorithms—particularly those driving AI models—are fundamentally reshaping how language is processed, generated, and disseminated. The power of these algorithms lies in their ability to detect and replicate the deep structures of language, allowing them to produce text that mirrors human-like reasoning and emotion.
Algorithms today curate, filter, and prioritize vast amounts of information. Through social media platforms, search engines, and news feeds, algorithms decide what content is presented to users and, crucially, how that content is framed. These decisions are driven by the same probabilistic models that govern language generation, meaning that algorithms, in essence, control the flow and structure of language itself. By shaping which information reaches individuals and how it is delivered, algorithms are positioned to influence cognitive subroutines, subtly guiding thoughts and behaviors without the user’s explicit awareness.
Just as language models can tap into the brain’s implicit linguistic processing, algorithms exploit these cognitive shortcuts to present information in ways that maximize engagement, often reinforcing existing biases or emotional responses. The result is a feedback loop in which language, tailored by algorithms, reinforces the very patterns of thought that the system is designed to exploit.
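That feedback loop can be caricatured in a few lines of code. Everything below (the items, their base engagement rates, the update rule) is invented purely to show the structural point: content that earns engagement gets ranked higher, which makes it more likely to be shown again.

```python
# A caricature of an engagement-driven ranking loop. Real systems are vastly
# more complex; the structural point is the reinforcement cycle itself.
import random

random.seed(1)

items = {"outrage": 0.9, "nuance": 0.4, "neutral": 0.5}   # hypothetical base engagement rates
weights = {name: 1.0 for name in items}                   # ranking weights the platform adjusts

for step in range(1000):
    # Show whichever item currently ranks highest.
    shown = max(weights, key=weights.get)
    engaged = random.random() < items[shown]
    # Decay everything slightly, then reinforce whatever got engagement.
    for name in weights:
        weights[name] *= 0.999
    if engaged:
        weights[shown] += 0.01

# The highest-engagement item ends up dominating the ranking.
print(sorted(weights.items(), key=lambda kv: -kv[1]))
```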
The amplification of certain narratives or linguistic structures by algorithms also presents a form of cognitive warfare, one where the line between organic and engineered influence becomes blurred. Whether through personalized content or AI-generated text, the power of algorithms lies not just in their ability to mimic human language, but in their capacity to direct and shape our cognitive subroutines. And there is another, deeper truth to the weaponization of Language and to bypassing our mental defenses and “hacking” our cognitive subroutines.
Information Density and Linguistic Resonance
Over the past many months, thanks to a very close and special friend, I have come to both appreciate and understand poetry. And I stumbled upon this quote, which led to deeper reflections on what was discussed above: how exactly certain strings of words, or memes, are as effective as bioengineered viruses, while others aren’t.
The concept of information density is essential to understanding AI-led Language weaponization and cognitive manipulation. At its core, information density is simply the concentration of meaning at different levels within a minimal amount of data, from both a bytes perspective and a cognitive one. Information density thus translates into the ability to encode nuanced, multi-layered messages within compact phrases, allowing for subtle influence without excessive verbiage.
As previously discussed, Language models are trained on massive amounts of data, and through their layered neural networks they are able to recognize, replicate, and effectively learn hidden patterns in Language, capturing the subtle interplay between words, syntax, and semantic context. This enables these models to use information density to embed multiple layers of meaning within minimal “data”.
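As a very crude numerical proxy for one narrow slice of this idea, one can measure the average surprisal per word of a phrase under a simple word-frequency model: rarer words carry more bits. This only captures the Shannon sense of density, not the layered symbolic meaning discussed above, and the reference corpus below is invented for illustration.

```python
# Average surprisal (in bits) per word under a toy unigram model, used here as
# a rough stand-in for "information density" in the narrow Shannon sense only.
import math
from collections import Counter

reference = ("the war of words is a war of minds and the word is the first "
             "weapon of the mind").split()
counts = Counter(reference)
total = sum(counts.values())

def surprisal_bits(word, smoothing=1.0):
    # Laplace smoothing so unseen words still get a finite score.
    p = (counts[word] + smoothing) / (total + smoothing * (len(counts) + 1))
    return -math.log2(p)

def density(phrase):
    words = phrase.lower().split()
    return sum(surprisal_bits(w) for w in words) / len(words)

print(density("the war of words"))             # mostly frequent words: fewer bits per word
print(density("genocidal organ of language"))  # rarer or unseen words: more bits per word
```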
If one is able to achieve such density by any means, language transforms into a tool that can affect perception, shape narratives, and subtly steer human thought patterns. Engineering density enables you to craft messages that resonate on multiple cognitive levels simultaneously. As AI continues to evolve, its sophistication enables it to exploit these nuances, creating resonance that impacts readers beyond conscious thought.
These dense messages activate multiple interpretive layers within the reader’s mind, guiding thought and emotional responses subtly yet powerfully. Conventional propaganda requires repetitive exposure and saturation to achieve lasting influence. Dense encoding can plant persistent cognitive seeds with minimal interaction.
With information-dense language, the goal is not to argue a point but to create resonance, to instill a thought or bias so subtly embedded that it feels self-generated. In cognitive warfare, this is an invaluable advantage in delivering potent messages across vast networks without overt signaling. Resonance, a byproduct of density, transforms mere words into triggers for complex networks of associations, emotions, and even memories within the mind.
Resonance isn’t solely concerned with information quantity or clarity; it aims for emotional density and cognitive infiltration. Resonance serves as a compression of layered meanings, achieved not by packing in literal data, but by encoding symbolic and emotional weight. Each resonant word or phrase activates vast networks of associations, turning minimal input into maximal output. This enables the transmission of complex psychological cues in a fraction of the space, amplifying the potency of language by creating a form of cognitive compression where each phrase carries multiple layers of meaning and impact.
The functioning of resonance within the mind parallels the architecture of neural networks. Just as poetry activates associations across memories and emotions, resonant language triggers clusters of neurons organized by prior experiences, societal conditioning, or repetitive exposure. Resonance thereby capitalizes on preexisting cognitive patterns, taking advantage of the brain’s tendency to recognize familiar stimuli and create associative shortcuts; resonant words and phrases engage the target’s mental framework, embedding messages seamlessly into their neural pathways.
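A loose computational analogue of this is spreading activation over an association graph: a single seed word propagates activation to related concepts, which propagate it further. The graph, weights, and decay factor below are invented for illustration and make no claim about actual neural organization.

```python
# Minimal spreading-activation sketch over a toy association graph, as a loose
# analogue of a resonant word activating clusters of related concepts.
associations = {
    "homeland": {"family": 0.8, "duty": 0.7, "threat": 0.4},
    "family":   {"safety": 0.9, "memory": 0.6},
    "duty":     {"sacrifice": 0.8},
    "threat":   {"fear": 0.9, "enemy": 0.7},
}

def spread(seed, steps=2, decay=0.5):
    """Propagate activation outward from a seed word for a fixed number of steps."""
    activation = {seed: 1.0}
    frontier = {seed}
    for _ in range(steps):
        nxt = set()
        for node in frontier:
            for neighbor, weight in associations.get(node, {}).items():
                gained = activation[node] * weight * decay
                if gained > activation.get(neighbor, 0.0):
                    activation[neighbor] = gained
                    nxt.add(neighbor)
        frontier = nxt
    return dict(sorted(activation.items(), key=lambda kv: -kv[1]))

# A single seed word ends up weakly activating 'safety', 'sacrifice', 'fear', etc.
print(spread("homeland"))
```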
Heightened emotional states shift the brain from analytical processing to associative or intuitive modes, making individuals more susceptible to influence. Resonant language, by escalating emotional and cognitive engagement, can drive recipients beyond certain thresholds, at which point their capacity for critical evaluation diminishes. Once this threshold is crossed, the recipient is not simply receptive to the message; they are absorbed by it, responding almost automatically to the embedded cues. This state of cognitive capture transforms resonance from mere communication to a form of psychological assimilation, where the message becomes intertwined with the receiver’s mental framework, embedding itself within their cognitive architecture as if it were an original thought.
Ultimately, the weaponization of language through AI’s information density and resonance creates a feedback loop that erodes trust in language itself. As AI becomes more adept at encoding manipulative intent within compact and seemingly neutral messages, individuals grow more cautious and even cynical about the language they encounter, leading to a general erosion of trust in communication.
This mistrust does not just undermine particular messages; it destabilizes the foundation of shared meaning in society. In a world where language is weaponized with such precision, every interaction becomes a potential site of influence. Now you have a glimpse into the world you are walking into, and into what Cognitive Warfare truly is.
"If the pen is mightier than the sword, algorithms are keener than the quill. Words may sway the heart, but hidden patterns rewrite the soul"
Watch the animated movie, or read the book. I could have done a better job writing this, but it will serve the purposes I aim to achieve.
Writing this was incredibly draining, so "Science" posts will be coming on weekends only.
So language really is violence, after all.😏