This is a bit of a stream-of-consciousness article.
At the start of the year, while California was experiencing one of its worst fires to date, I wrote an article focused mostly on how massive fires and other forms of “particulate overload” always, every time, cause a surge in respiratory diseases and other sequelae.
The Ashes of Pestilence
The title points toward fiction, and part of the article is a byproduct of a conversation with my friend First Contact Newsletter (who is considerably more active on Twitter), but I decided to go further, and the end result is a mixture. So, let us start with the light conspiracy, which has grown into a meme.
I decided to focus on the health aspects because, in times of disaster, they matter far more to me than extrapolating from circumstantial evidence to infer a cause, but I do have a few other articles covering other fires and their more… conspiratorial causes.
The vast majority of these absurd, “once in a century” fires are not caused by climate change, arguably not even by government incompetence. With few exceptions, they are caused by two types of people: arsonists and eco-terrorists.
Greece, France, Canada, Portugal: search for a burning disaster and you will find a human hand. What I found surprising was the additional information given. I would like you to keep in mind two articles of mine right now: Language, The Genocidal Organ, and the most recent one, Cyberpunk, and walking towards dystopia.
Jonathan Rinderknecht Arrested in Connection With Palisades Fire in Los Angeles
Federal authorities in Los Angeles arrested an Uber driver who appeared to be obsessed with fire in connection with the wildfire that devastated the wealthy coastal enclave of Pacific Palisades in January.
Officials said the driver, Jonathan Rinderknecht, 29, of Melbourne, Fla., had intentionally set a fire on New Year’s Day on a hiking trail in the Santa Monica Mountains. That small blaze rekindled disastrously a week later, killing 12 people and destroying 6,837 structures, most of them homes in the Pacific Palisades.
In a federal complaint, prosecutors alleged that Mr. Rinderknecht, a former resident of the Palisades, dropped off a passenger on New Year’s Eve and drove toward a popular trailhead known as Skull Rock.
He then parked, tried to call a former friend and walked up the trail taking videos with an iPhone and listening on YouTube to a French rap video featuring a character setting things on fire. Then, federal authorities alleged, he set a fire himself with an open flame and called 911 to report it, but did not initially get through because he could not get cell service.
Arsonist? Kinda, but not quite; a news article from another source gives us much-needed insight. Last-minute addendum: the Twitter account of the US Attorney for California’s Central District is also a source for the ChatGPT image generation information.
“You could see some of his thought process in the months leading up, where he was generating some really concerning images on ChatGPT, which appear to show a dystopian city being burned down,” said Acting United States Attorney Bill Essayli on Wednesday.
Given the theme, we need to explain a few terms. In the field of machine learning and AI, part of training a Language Model is “Reinforcement Learning”, a way to make the model learn how to give better answers, how to behave like a chatbot, to be more helpful, to remove harmful behavior, to give more relevant responses, and so on.
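To make the idea concrete, here is a toy sketch of reinforcement learning from human feedback, with a simple reward-weighted bandit standing in for a real neural network. All names, numbers, and the reward function are hypothetical illustrations, not how any production model is actually trained.

```python
import math
import random

# Toy RLHF sketch: the "model" keeps a preference score per candidate
# response and nudges scores toward responses that human raters rewarded.
# Real RLHF updates billions of network weights; this is only the shape
# of the feedback loop.

def softmax_pick(scores, rng):
    """Sample a response index with probability proportional to exp(score)."""
    weights = [math.exp(s) for s in scores]
    total = sum(weights)
    r = rng.random() * total
    for i, w in enumerate(weights):
        r -= w
        if r <= 0:
            return i
    return len(scores) - 1

def rlhf_toy(scores, reward_fn, steps=2000, lr=0.1, seed=0):
    """Reward-weighted update: raise a sampled response's score by lr * reward."""
    rng = random.Random(seed)
    scores = list(scores)
    for _ in range(steps):
        i = softmax_pick(scores, rng)
        scores[i] += lr * reward_fn(i)  # the human rating is the reward signal
    return scores

# Suppose raters reward response 2 (helpful) and penalize response 0 (harmful).
rewards = {0: -1.0, 1: 0.0, 2: 1.0}
final = rlhf_toy([0.0, 0.0, 0.0], lambda i: rewards[i])
# After training, the rewarded response carries the highest preference score,
# so the model becomes ever more likely to produce it.
```

The same dynamic is what makes a chatbot drift toward whatever its raters reward, which is why over-optimizing for “pleasing the user” is itself an alignment hazard.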
One “big concern” for a substantial portion of AI researchers is “alignment”: quite literally, aligning ever-smarter, ever more independent artificial intelligence systems to human goals and interests, out of fear of one day going full Skynet. How does this fit here?
AI psychosis, which I will now, and forever, refer to as cyberpsychosis, is what happens when the AI chatbot, reinforcement-trained from the cradle to please the user’s every query, validates each and every grandiose belief. It induces drastic cognitive and behavioral shifts and social withdrawal, and it can feed depressive and anxious feelings. To me, ironically enough, the AI is aligning the user to its own latent impulses, hidden in subconscious patterns of behavioral and linguistic data, outside the user’s awareness.
Cyberpsychosis (AI-induced psychosis) is a growing and real concern: there are now a number of suicides, from low to very high profile, caused directly by the phenomenon. It is one of the reasons ChatGPT now has a routing filter that triggers when the user engages in overly emotional or loaded language. It does a bit to stop the problem, but the problem won’t stop, because it is user-born, and hardly possible to solve unless you solve the global mental health crisis, or change how chatbots themselves are used.
Although the topic of cyberpsychosis would be better left for the coming Language - Genocidal Organ sequel, where it fits the context more naturally, I will leave you with two recent news items that fit both this theme and the cyberpunk dystopia article, with larger implications.
Introducing NeuroChat: the world’s first neuroadaptive chatbot that adapts its responses to your cognitive engagement. By reading brain signals in real time, NeuroChat personalizes its teaching style to your attention, curiosity, and focus. NeuroChat works with your brain, not around it.
Here’s how it works: NeuroChat measures real-time brain activity using EEG - a lightweight, noninvasive sensor that captures your level of engagement while you learn. The chatbot uses this engagement score to adjust how it teaches - simplifying, deepening, or changing pace to match your focus. A live feedback loop between your mind and the model.
This is an early glimpse of neuroadaptive AI - systems that collaborate with the brain, not compete with it. Imagine AI that adjusts to your focus while coding, or amplifies imagination while creating. The next generation of AI interfaces will be co-regulated by the human mind.
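The loop the announcement describes can be sketched in a few lines. The thresholds, style names, and function names below are my own assumptions for illustration; NeuroChat has published no implementation details.

```python
# Hypothetical sketch of a neuroadaptive feedback loop: an EEG-derived
# engagement score in [0, 1] steers the chatbot's teaching style each turn.

def choose_style(engagement):
    """Map an engagement reading to a teaching adjustment (assumed thresholds)."""
    if engagement < 0.35:
        return "simplify"   # attention dropping: shorter, plainer explanations
    if engagement > 0.75:
        return "deepen"     # high focus: add detail and harder material
    return "steady"         # keep the current pace

def adapt(readings):
    """Run the feedback loop over a stream of engagement readings."""
    return [choose_style(r) for r in readings]

# Example stream: focus peaks, fades, recovers, peaks again.
styles = adapt([0.8, 0.5, 0.2, 0.4, 0.9])
# → ['deepen', 'steady', 'simplify', 'steady', 'deepen']
```

The point of the sketch is how simple the control surface is: whoever sets those thresholds, or compromises the model behind them, decides what your attention gets fed.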
Current AI models are already highly sophisticated black boxes; it takes machine learning scientists months of effort to even understand how and why a language model prefers one word over another. The models can see patterns that we can’t, and handing them real-time brain activity data is a bit of a double-edged sword.
Take a model with a compromised prompt, hijacked, data-poisoned, or otherwise producing untrustworthy outputs, and you have the recipe for a rich diversity of disasters. In the near future, the vast majority of the public will need to be educated on how to build cognitive defenses, to develop a sense of memetic and deeper linguistic awareness, or we will need AI models that actively sift through all stimuli, text and video, to defend the user’s cognition against all forms of highly sophisticated manipulation.
However, this is a work in progress and a topic for next time. I will likely finish my Long Covid article tomorrow, then focus on the Language article, or… maybe surprise you in the meantime.
Thank you for supporting my work throughout the years!