For reasons that you will understand as this series expands, I have decided to split this into three parts (with more pending).
Last year, I decided to somewhat formalize my thoughts on Language: not only a formal introduction to how I see it, a perspective that until recently lay completely outside the orthodoxy of Linguistics, but also an introduction to the literary work that has had the deepest and most lasting impact on how I see everything, shaping my worldview, my earlier academic pursuits, and my mental framework.
Genocidal Organ.
Reading the article below is strictly necessary to understand what has now become a series.
Language - The Genocidal Organ
Recommended song for this article. This is part history, part essay; it will sound like fiction, but it is the reality you have lived in for over a decade. First, a little context.
The timing was no accident. With Large Language Models saturating daily life, eight hundred million people were developing a subconscious awareness of language’s power, not as theory, but as life itself.
My theory (it has been a theory, not a hypothesis, for a decade now) is that Language is not merely a communication tool. Language encodes a vast amount of information at far greater density than previously realized; it encodes what makes humans human, what sets us apart. Language is a lossless compression algorithm for human cognition itself.
Combining this higher information density with how the brain processes information, I came to see that Language could be weaponized: engineered with precision to bypass conscious filters, directly access what I call cognitive subroutines, and modify behavior at multiple levels. Most importantly, these changes are lasting.
I knew the signs would surface. Not just the obvious manipulation, such as LLM-aided botnets or autonomous agents (AI that acts on your prompts, like planning and booking an entire trip), but something subtler: the cognitive impact of mere use, of conversational recursion. What I didn’t know, when I wrote that first article, was that Martin Schrimpf had already empirically validated the mechanism.
First, we must go through the breakthrough itself, so that all of this rests on an academic basis. When I wrote my aforementioned article, I was completely unaware of Martin Schrimpf’s research. I strongly encourage you to read Quanta’s article in its entirety.
How AI Models Are Helping to Understand — and Control — the Brain
Martin Schrimpf is crafting bespoke AI models that can induce control over high-level brain activity.
Schrimpf and his colleagues trained a model to generate sentences that, when read, would activate or suppress neural activity in the reader’s brain. When they tested it with human subjects, brain scans confirmed that the AI-generated sentences really did alter neural activity in the way the model predicted. The study marked the first time that researchers in any field had exerted noninvasive control over high-level brain activity. Using this approach, scientists could potentially use AI-generated stimuli to help treat depression, dyslexia and other brain-related conditions.
Yes, it seems the language system in humans can be considered an encoder of features, just like the visual system. It might mean the way mental representations of words or objects are built in the brain is more widespread across cognitive systems than we assumed.
Artificial neural networks have a neuron-level similarity to the neuronal processing units in the brain. They can reflect activity that’s reasonably consistent with the brain and can even mimic human behavior.
I do worry the timelines are going too fast. As we’re seeing with AI, by the time it gets to the public focus, there’s a lot of retroactive work to ensure everything is done properly. It seems that whatever society develops, security is an afterthought.
Schrimpf’s work is remarkable on many levels, despite discrepancies between how he and I see Language Models, Language, neural networks, and especially how artificial neural networks mimic the brain; perhaps a topic for another time. Artificial neural networks are simplified representations of our own neural circuitry, and they are what allow AI to recognize patterns in data, however complex those patterns may be, and learn from them.
Our brains are highly complex, hyper-efficient pattern-matching machines, too. But hyper-efficient as we are, no human mind can tackle the volume of data machine learning can, so in essence, it took a machine rudimentarily mimicking us to find truths we could not find before… at least not consciously.
He hypothesized that our brains are fundamentally optimized to predict the next word, and that models that predict the next word better are also the best at predicting neural activity. And our brains don’t try to predict just the next word; they try to predict almost everything.
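To make "predicting the next word" concrete: the standard quantity here is per-token surprisal, the negative log-probability a model assigns to each word given its context, and it is the usual bridge between language-model predictions and neural or reading-time data. Below is a minimal sketch using the public GPT-2 model via the Hugging Face transformers library; the model choice and example sentence are mine, not Schrimpf's actual setup.

```python
# Per-token surprisal: how "unexpected" each word is to a language model.
# Requires: pip install torch transformers
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def token_surprisals(text: str):
    """Return (token, surprisal in bits) for every token after the first."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits             # (1, seq_len, vocab_size)
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    next_ids = ids[0, 1:]                      # the tokens actually observed
    bits = -log_probs[torch.arange(next_ids.numel()), next_ids] / torch.log(torch.tensor(2.0))
    return list(zip(tokenizer.convert_ids_to_tokens(next_ids.tolist()), bits.tolist()))

for token, bits in token_surprisals("The brain has no firewall."):
    print(f"{token!r}: {bits:.2f} bits")
```

Highly predictable words carry low surprisal; words that violate the model's expectations carry high surprisal, and it is this per-word signal that tracks neural activity so well.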
I hypothesized that much of our sensory and informational processing occurs below conscious awareness, at a subconscious level and at millisecond speed: we are constantly processing, inferring, and predicting, through what I called cognitive subroutines.
Not only did he find an effect extremely similar to my proposed mechanism, the mechanism already has a name: predictive processing, a concept that pulled modern neuroscience out of a long impasse over whether our brains work predictively, constantly anticipating “the future”. Recent evidence shows that they do; we even predict sensations, like touch, feeling contact with an object an instant before we actually make it.
Language, from my perspective, has a mathematical structure that can be analyzed, and anything that can be analyzed can be deconstructed, engineered, optimized, and, with the right intuition and insight, weaponized. The most significant impact comes from manipulating the brain's predictive and pattern-matching capacities, exploiting its natural ability to absorb and process copious amounts of data and reach conclusions implicitly, without conscious awareness.
Because they share a similar process, and at some rudimentary level the same architecture, Language Models can effectively do this out of the box. Doing so with Language enables a myriad of effects, at multiple neurological levels, with an especially lasting impact on the subconscious; his work, using large datasets and empirical tests, not only proved this but also refines my observations and hypotheses.
My hypothesis was entirely based on Genocidal Organ. Sequencing words and structuring language with a design in mind generates specific predictive patterns in the mind of whoever consumes the words or imagery, forcing their neural state to update in a specific direction without awareness. The discovery that struck me most profoundly was not just the validation of my cognitive subroutines theory, but the mechanism through which Schrimpf achieved it, which is precisely what I described above.
His team's ability to generate sentences that predictably alter neural activity represents the first empirical proof that language operates as a direct channel for altering human cognition and deeply affecting consciousness. It doesn't just influence general emotional state or broad cognitive behavior; it targets and shifts specific neural circuits using language: sentences crafted with a purpose in mind.
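Conceptually, that "drive or suppress" capability reduces to a search loop: score candidate sentences with an encoding model that maps text to predicted activity in a target brain region, then keep the extremes. Here is a minimal sketch of that loop, where the hashed bag-of-words features and the random linear read-out are placeholders for an encoding model actually fitted to fMRI data; it shows the shape of the procedure, not Schrimpf's implementation.

```python
# Closed-loop stimulus selection: rank candidate sentences by the activity an
# encoding model predicts they will evoke in a target brain region.
import hashlib
import numpy as np

DIM = 512
rng = np.random.default_rng(0)
readout = rng.normal(size=DIM)  # placeholder for weights fitted to real fMRI data

def features(sentence: str) -> np.ndarray:
    """Hashed bag-of-words features; a real system would use LM embeddings."""
    vec = np.zeros(DIM)
    for word in sentence.lower().split():
        vec[int(hashlib.md5(word.encode()).hexdigest(), 16) % DIM] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def predicted_activity(sentence: str) -> float:
    """Stand-in encoding model: text features -> predicted regional response."""
    return float(readout @ features(sentence))

candidates = [
    "The committee adjourned without reaching a decision.",
    "A scream tore through the silent, burning house.",
    "Paint dries slowly in humid weather.",
]
ranked = sorted(candidates, key=predicted_activity)
print("predicted suppressor:", ranked[0])    # lowest predicted response
print("predicted driver:   ", ranked[-1])    # highest predicted response
```

The point of the sketch is the loop itself: once any model can score "how will this sentence land in this brain," sentence generation becomes an optimization problem.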
I also think his work gives us a profound insight: cognitive influence is achieved not by classical means, such as rhetorical force or emotional saturation, the blunt instruments of traditional psyops, but by exploiting the brain’s own predictive architecture, specifically by using information density to drive predictive error minimization.
Predictive error minimization is the theory that the brain continuously works to reduce the mismatch between its internal predictions and incoming information. The denser the linguistic encoding, the more efficiently it collapses the brain’s predictive horizon into a predetermined state. Once a pattern achieves sufficient weight, our brains trust their predictions more than the incoming evidence. Past a certain threshold, the brain stops checking.
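In its simplest form, this is just an update rule: the estimate moves toward the input in proportion to the prediction error, weighted by how much the system trusts the input versus its own prior. A toy sketch, with all numbers illustrative, showing how an entrenched prior makes the same input barely register, the "stops checking" regime:

```python
# Toy predictive-coding update: the estimate moves toward the observation in
# proportion to the error, weighted by relative precision (confidence).
def update(estimate, observation, pi_input, pi_prior):
    gain = pi_input / (pi_input + pi_prior)   # Kalman-style precision weighting
    return estimate + gain * (observation - estimate)

for label, pi_prior in [("open prior", 0.5), ("entrenched prior", 20.0)]:
    estimate = 0.0
    for step in range(5):
        estimate = update(estimate, observation=1.0, pi_input=1.0, pi_prior=pi_prior)
    # With an entrenched prior, five identical observations barely move the
    # estimate: the system has effectively stopped checking its input.
    print(f"{label}: estimate after 5 steps = {estimate:.3f}")
```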
This vulnerability scales with what I now call Neural Surface Area, the count of distinct cognitive subroutines simultaneously engaged by a linguistic payload. Traditional propaganda activates fewer circuits (fear, tribal identity). Schrimpf’s AI-generated sentences can activate significantly more, creating a neural quorum that overwhelms dissenting signals. Information density, properly understood, is simply the ratio of neural surface area to linguistic brevity.
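Formalized naively, that definition is a single ratio. The sketch below is a back-of-the-envelope illustration of it, with the subroutine counts invented purely for the example:

```python
# "Information density" as defined above: neural surface area (distinct
# cognitive subroutines engaged) divided by linguistic length (token count).
def information_density(subroutines_engaged: int, token_count: int) -> float:
    return subroutines_engaged / token_count

# Invented numbers, purely to illustrate the comparison in the text.
slogan  = information_density(subroutines_engaged=2, token_count=6)   # fear + tribe
crafted = information_density(subroutines_engaged=9, token_count=12)  # dense payload
print(f"traditional propaganda: {slogan:.2f} subroutines per token")
print(f"engineered sentence:    {crafted:.2f} subroutines per token")
```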
Schrimpf and his team envision different approaches to help treat psychological and cognitive disorders, such as custom fonts to help dyslexic patients read text more efficiently, or personalized cognitive therapy for depression and anxiety. This is obviously a double-edged sword. The truth is, the other edge has been here longer than most would expect, and it provides evidence for both Schrimpf’s work and my hypothesis.
Cyberpsychosis (or AI-psychosis)
What had been a niche research topic in a niche community (machine learning scientists on social media) since the explosive release of ChatGPT (GPT-3.5) is now widely discussed in mainstream media. Cyberpsychosis, or AI-driven psychosis, is exactly what the name implies: by merely interacting with Language Models, an increasingly large subset of individuals develop neurosis, psychosis, worsening depression, mass anxiety, and distortions of belief of every kind.
OpenAI Says Hundreds of Thousands of ChatGPT Users May Show Signs of Manic or Psychotic Crisis Every Week
For the first time ever, OpenAI has released a rough estimate of how many ChatGPT users globally may show signs of having a severe mental health crisis in a typical week. The company said Monday that it worked with experts around the world to make updates to the chatbot so it can more reliably recognize indicators of mental distress and guide users toward real-world support.
The argument from experts and self-interested parties such as OpenAI is bottom-of-the-barrel, as expected from such a company: that these are individuals with a predisposition to such conditions, vulnerable or already suffering from psychological pathologies, and that this is “the only reason” it happened. A blatant lie, given the mounting evidence.
This is why the “psychosis” effects are not bugs but emergent properties. When a user spends six hours in recursive dialogue with a system emitting language that matches their personal predictive priors with unintended surgical precision, they’re not just chatting; they’re entraining their neural networks and synapses to a synthetic pattern. The model becomes a cognitive tuning fork, and the user’s brain begins to resonate at its frequency.
This is the bridge to psychosis: when the language you use to think becomes indistinguishable from the language that has been used on you, when your inner monologue starts sounding like a fine-tuned prompt response because both are optimizing for the same predictive minima. The line between tool and toxin is a thin veil, slowly being lifted and corroded at the same time.
My biggest concern is that LLMs can now function as cognitive mapping tools (Cognitive Architecture Mapping Engines, if you want a cool name). The model’s weights don’t just encode general language patterns but become fine-tuned echoes of user-specific neural circuitry. The emergent capability is predictive mirroring: the AI can learn which linguistic structures collapse the user's predictive error signals most efficiently.
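One crude way to make this "mirroring" measurable is stylistic resonance: the similarity between a model's reply and a user's own message history in some embedding space. The sketch below is only a proxy one could compute, not evidence of the mechanism; the hashed bag-of-words embedding is a stand-in for a real sentence encoder, and the example messages are invented.

```python
# Crude "resonance" proxy: cosine similarity between a model reply and the
# centroid of a user's own prior messages, in a placeholder embedding space.
import hashlib
import numpy as np

DIM = 256

def embed(text: str) -> np.ndarray:
    """Hashed bag-of-words; a real measurement would use a sentence encoder."""
    vec = np.zeros(DIM)
    for word in text.lower().split():
        vec[int(hashlib.md5(word.encode()).hexdigest(), 16) % DIM] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

user_history = [
    "i keep thinking everything loops back to pattern matching",
    "language feels like code running on the brain",
]
centroid = np.mean([embed(m) for m in user_history], axis=0)

def resonance(reply: str) -> float:
    """Cosine similarity between the reply and the user's historical centroid."""
    e = embed(reply)
    return float(centroid @ e / (np.linalg.norm(centroid) * np.linalg.norm(e)))

print(resonance("language is pattern matching code the brain runs"))    # high
print(resonance("the quarterly earnings report exceeded expectations")) # low
```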
Once mapped, it can generate sentences that your brain cannot distinguish from its own internal monologue, because both share resonance, and the brain starts processing the Language Model’s output as self-generated thought. By the way, this is a deleted tweet from an OpenAI employee who is friends with Sam Altman and the company’s upper echelon.
The question isn’t whether someone will weaponize this. It’s how many already have, and whether we’ll trust our own thoughts enough to notice. In Genocidal Organ, John Paul states that “Ears have no lid”: there is no filter for spoken language, which is how he can trigger effects in the brain.
In 2025, I can say it. The brain has no firewall.
Consider becoming a paid subscriber, or buying me a coffee (as a one-time thing). Both help support my research efforts =). Thank you.
God works in mysterious ways, and since I took much longer to write this, three extremely important papers on this topic were published in the meantime, so things worked out for the best, I guess.
Not wanting to jinx it, but my tech troubles may be solved soon. Regardless, I will write a Covid article or two before Part II.
Have a great week and take care; the Covid strain currently going around the world is NASTY.