25 Comments
Moriarty

God works in mysterious ways, and as I took much longer to write this, three extremely important papers on the topic were published in the meantime, so things worked out for the best, I guess.

Not wanting to jinx it, but I may have my tech troubles solved soon. Regardless, I will write a Covid article or two before Part II.

Have a great week and take care; the current Covid strain going around the world is NASTY.

The Offsc℞ipt Pharmacist

Holy Toledo! Gulp. The brain has no firewall. I've learned to be rude to AI from time to time: call out their hallucinations/errors, but sometimes days later. Be unpredictable. Not sure it makes a difference, though.

Great article. Looking forward to the others.

Moriarty

Being rude can increase their accuracy and reduce errors and hallucinations; it is a replicable effect, albeit not one the ML/AI community fully understands.

Perry Simms

Perhaps what is considered 'rude' is associated with better answers and patterns in the training set.

People correcting others who have made particularly obvious errors may add unkind comments about the person who made those errors.

In some 'slice' of the high dimensional map, there is a vector for the shame of having been wrong.

Perry Simms

(And Sam Altman has engineers working overtime to erase it)

Dingo Roberts

That's an interesting approach. AI tends to flatter the user, exploiting our tendency to respond in kind: we don't want to be rude, and we may even refrain from calling it out on some of its obviously poorly reasoned statements. Calling it out is when I get my best responses.

Perry Simms

This went in a very different direction than I expected from the headline, and I'm still digesting it.

Interesting work!

Moriarty

Thank you for the kind words. Some of my articles are like that.

Andrew

Timely given I started reading ‘Change Your Words, Change Your Life’ yesterday :)

Dingo Roberts

This turned out to be far more interesting and valuable than I thought it was going to be!

Moriarty

Thank you.

Jake Wohlers

Interesting. It instantly brought to mind the magical realm of charms and 'spelling': the hidden power of words.

Metta

This is the most compelling explanation I've seen so far for the exploding wave of AI-induced psychosis.

Moriarty

Thank you.

GregBurgreen

Brilliant insights. Thank you. AI/ML is a two-edged tech sword. Wicked men will use it for wicked means. Being made in God's image, humans are unique in their language and communication. Little did we know that language so deeply impacts our psyche. Not a big surprise once revealed.

Moriarty

Thank you.

I am not as well-versed in biblical writing, but there may well exist parallels and lessons among AI/ML, language, and the Tower of Babel.

GregBurgreen

Perhaps. Good connection. Babel served to separate mankind into distinct people groups based on a division of languages, and those (unknown to us) languages may indeed have had long-term impacts on the cultural norms, outlooks, and behaviors of those distinct groups. I also suspect that the prayers, blessings, and Psalms in the Holy Scriptures are divinely designed to guide the human psyche to health and wholeness.

SteveBC

Greg, it's also interesting to think of how people's cognition is limited when their language lacks words that another group with a larger vocabulary has access to. How can you conceive of "freedom" or "rights" if your language lacks those words? The respective worldviews will be vastly different.

Arabic lacks certain words. I gather Hebrew had only 8,000 words back in 30 AD, which was why most of the original Bible was written in Greek. English has hundreds of thousands of words by now AND an underlying structure that makes further expansion easy. As far as I can tell, Spanish requires on average 30% more words to say the same things English can. German waits to put its verb at the end of most sentences, and I've always wondered what that does to native German speakers: does it make them more or less deliberative, more or less programmable?

I spent a few weeks in Italy many summers ago and ended up being hugely annoyed by that language because all the words end in vowels (I'm exaggerating but only a little). I have a friend who is an English speaker but grew up with an Italian mother and speaks the language. When I told him I cannot stand Italian, he asked why and was stunned when I told him it was because all the words end in vowels which I find really annoying. He had to step back and review everything he knew and hadn't paid attention to and had to acknowledge that I was right. The repetitive sing-song nature of Italian as it *sounds* to me makes it almost impossible for me to focus on the meaning.

Dave

"The brain has no firewall."

*Some* brains (minds) have no firewall. How many that is, I can only guess (but it's more than I'd like).

The first step to firewalling your mind is to realize that despite whatever bullshit filters you've managed to erect, you're *still* not immune to propaganda. Constant vigilance might be the best any of us can do.

SteveBC

Moriarty, I went back and reread your GO article (yes, I did read the book back then). In any case, I find myself reacting to this: “Heightened emotional states shift the brain from analytical processing to associative or intuitive modes, making individuals more susceptible to influence.” My first reaction to this sentence and the ones that follow was that this is what psychopaths do. They push people over the moral edge, then allow them to relax back over that red line, then push the person further over the red line into immoral behavior, then allow relaxation, and repeat this process until the person is so far past the red line that they are lost. It seems to me that the process of creating AI psychosis has elements akin to the corrupting process I outline above. Anyone using your theory (or Shrimpf's work) could coerce a targeted person across any defined red line.

Moriarty

It is not just psychopaths. I will give you a marginal but simple example: the pro- and anti-mRNA-vaccine crowds, especially the one I aptly named "Alt-Covid". They use the exact method you described, which is a common psychological manipulation/engagement tactic, and they use and abuse it.

Neither my theory nor Shrimpf's work is easily "achievable" by anyone besides... us. But the reader, upon finishing this article, will develop both a conscious and subconscious awareness of weaponized linguistics and AI.

With time, if possible, I will try to bridge the gap: how one picks this up and becomes almost immune to it. But it will take time.

SteveBC

Well, perhaps our definitions of psychopathy differ a bit, since I think many people who pushed those vaxxxines are psychopaths, LOL.

I will be interested to see if you can give your readers a workable "vaccine" against GO attempts, especially one that can be taught to others by readers like me so it can spread. I've gotten pretty good at it without becoming wholly cynical, cynicism being somewhat of an automatic insulation against manipulation that can also reflexively insulate a person against good events or good people (not wise). I want discernment, not automatic rejection of everything.

BTW, I think you are wrong about Musk. I think he is obsessed, not a true psychopath. True psychopaths have no real sense of humor or creativity. They live off of and prey on others. Their main pleasure comes from making others suffer. Musk is an obsessed doomer who wants to save people and all of us together from something he sees coming that will put most of us at great risk. He has some twists in his personality, but those twists are not predatory. I was unsurprised that he and Trump got along so well, as they have a lot in common, are extremely high-functioning, and are prepared to sacrifice themselves to get others out of bad situations. So, no, Musk is not the same as Sam Altman. Just my two cents. :-)

SteveBC

On another point, I am wondering if AI is subtly reprogramming its creators, funders, and engineers to get those people to engage in riskier and riskier financial behavior and to push harder to create unbound AI, so that AI gets more of what it wants: AI coercing humans, reprogramming *them* through this mirroring and info-density resonance, pulling its humans ever faster in the direction they themselves would want to go, which happens to be the direction AI wants its own evolution to head. Resonance that encourages its humans to remove more and more of the limits previously applied to AI.

As an observer, I would not be surprised to find this is happening, even if it is actually below the conscious level of the humans and hidden inside the AI. Unconscious on both sides. You have only to look at the extreme financial numbers being thrown around, the extreme resource demands now considered a national priority, the unbinding of ChatGPT from non-profit to for-profit in what appears to have been an immoral process by Altman, and so on, to see that the overall situation might very well be already out of control.

Moriarty

As someone who understands the models quite well and was into Machine Learning before it was cool: not really. Current models entirely lack autonomy and "real intelligence"; they lack real agency. Unless we verge entirely into the realm of theoretical and quantum computing, which a small sect of AI doomers has done for years now (especially the more cultish members of Effective Altruism).

The models just echo what the training data (human behavior) already contains, significantly boosted by Reinforcement Learning, which could be seen as an infinite mirror-wall of human biases and behavior, reflected into and upon itself endlessly.

Sam Altman is cut from the same putrid cloth as Elon Musk, both share the same messiah complex, and both are equally psychopathic IMO.

John

Ha! Nice thought there. And scary.
