A secretive billionaire network's attempt to control AI tech
It might get a little political at the end
I rarely tell my readers outright to “go do X.” From observation, and from the data Substack gives us access to, I can gauge what most of my readers consume, so I operate under the assumption that most of you are well informed on many subjects. Rarer still is me recommending a news article, unless I think it is very informative or necessary from a bird’s-eye perspective.
This is one of those cases. Below I will quote the parts I see as most important, and I will add my own thoughts afterward, but I highly recommend each and every one of you read the article in its entirety. Regardless of your personal political leanings, Politico often does a great job of bringing very important issues to light.
As many of you know by now, I am very pro-AI, pro-open-source AI, and believe the use of AI is directly connected to the prosperous future of our race; therefore, I am vehemently against AI regulation. The irony is not lost on me, or on anyone who dug deep enough into my dead Twitter profile (hint: data poisoning).
How a billionaire-backed network of AI advisers took over Washington
A sprawling network spread across Congress, federal agencies and think tanks is pushing policymakers to put AI apocalypse at the top of the agenda — potentially boxing out other worries and benefiting top AI companies with ties to the network.
Current and former Horizon AI fellows with salaries funded by Open Philanthropy are now working at the Department of Defense, the Department of Homeland Security and the State Department, as well as in the House Science Committee and Senate Commerce Committee, two crucial bodies in the development of AI rules. They also populate key think tanks shaping AI policy, including the RAND Corporation and Georgetown University’s Center for Security and Emerging Technology, according to the Horizon web site. In 2022, Open Philanthropy set aside nearly $3 million to pay for what ultimately became the initial cohort of Horizon fellows.
Horizon is one piece of a sprawling web of AI influence that Open Philanthropy has built across Washington’s power centers. The organization — which is closely aligned with “effective altruism,” a movement made famous by disgraced FTX founder Sam Bankman-Fried that emphasizes a data-driven approach to philanthropy — has also spent tens of millions of dollars on direct contributions to AI and biosecurity researchers at RAND, Georgetown’s CSET, the Center for a New American Security and other influential think tanks guiding Washington on AI.
In the high-stakes Washington debate over AI rules, Open Philanthropy has long been focused on one slice of the problem — the long-term threats that future AI systems might pose to human survival. Many AI thinkers see those as science-fiction concerns far removed from the current AI harms that Washington should address. And they worry that Open Philanthropy, in concert with its web of affiliated organizations and experts, is shifting the policy conversation away from more pressing issues — including topics some leading AI firms might prefer to keep off the policy agenda.
“We have [the] AI [industry] inserting its staffers into Congress to potentially write new laws and regulations around this emerging field,” Stretton said. “That is a conflict of interest.”
A focus on existential threats
Open Philanthropy’s expanding web of AI influence stems from its long-running effort to steer Washington toward rules that address long-term catastrophic risks raised by the technology. Its apocalyptic focus is reinforced by the group’s alignment with effective altruism, a movement that’s fixated on existential risks posed by advanced AI.
A new industry with a deep web
Though AI is a far younger industry than other major lobbies in Washington, its network of connections already runs deep. And there are significant links between Open Philanthropy and leading AI firms OpenAI and Anthropic.
In 2016, OpenAI CEO Sam Altman led a $50 million venture-capital investment in Asana, a software company founded and led by Moskovitz. In 2017, Moskovitz’s Open Philanthropy provided a $30 million grant to OpenAI. Asana and OpenAI also share a board member in Adam D’Angelo, a former Facebook executive.
In late September, a number of civil society groups — including Public Citizen, the Algorithmic Justice League, Data for Black Lives and the American Civil Liberties Union — convened a Capitol Hill briefing attended by Sen. Cory Booker (D-N.J.) that emphasized the challenges existing AI systems pose to civil rights.
But Raji said that loose alliance is so far outgunned by the money, influence and ideological zeal driving Open Philanthropy and the tight-knit network of effective altruists that it finances.
“They’re so organized, they’re so homogenous in their ideology,” Raji said. “And so they’re much more coordinated.”
When ChatGPT’s creator, OpenAI, came into the spotlight by producing a competent Large Language Model (LLM), it took the world by storm and acquired special status among the many other AI research firms. It didn’t take long for others to start trying to replicate that work, with various companies making strides in advancing their own LLMs.
What few people involved in this phenomenon truly grasped, from the billionaires funding these endeavors to the developers themselves, was the immense potential held within a sophisticated, intricate database of language. Essentially, LLMs function as comprehensive repositories of human language and experience. They operate on a foundation of complex mathematical principles, detailed algorithms, and vast datasets, enabling the machines not only to generate text but also to unearth concealed patterns within human knowledge and, notably, human language.
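To make that less abstract, here is a toy sketch of the core idea, in Python. This is my own illustration, not anything from the article: a real LLM replaces this simple counting with a deep neural network and billions of parameters, but the underlying objective, predicting the next token from the statistics of its training text, is the same.

```python
# Toy illustration of the statistical heart of a language model:
# estimate P(next word | previous word) from a corpus, then sample.
# Real LLMs do this with deep neural networks over vast datasets,
# which is exactly how they surface hidden patterns in language.

from collections import Counter, defaultdict
import random

corpus = "the cat sat on the mat and the cat ate the fish".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample the next word in proportion to how often it followed `prev`."""
    options = follows[prev]
    return random.choices(list(options), weights=list(options.values()))[0]

# Generate a short continuation; the "patterns" the model has learned
# are nothing more than the statistics of its training text.
word, output = "the", ["the"]
for _ in range(6):
    if not follows[word]:
        break  # dead end: this word never appeared with a successor
    word = next_word(word)
    output.append(word)
print(" ".join(output))
```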
Measured against 10, 20, or 30 years of technological evolution, current “AIs” aren’t that sophisticated, yet they have achieved absurd results. Using ChatGPT 3.5 (and 3.5 to 4 is a massive improvement), research groups were able to build their own models that could discover new chemicals and new proteins. In nearly every field where experts applied the latest LLMs, they dramatically accelerated discovery.
The tech is in its “early stages” and yet achieves results that shake society and boost the productivity and progress of entire fields. Nobody expected this, so everyone was surprised and blindsided by it, especially when a problem that had been purely hypothetical until that very moment became real: jailbreaking, making the AI break the rules imposed by its developers, meaning in principle any person could make the machine do their bidding.
All these events gave birth to a few branching pathways in the technological road of progress.
1. A feverish rush to create your own LLM; entire companies are now dedicated to building models that serve specific purposes.
2. The fear of competition being able to catch up with you
3. The birth of AI doomerism, the fear that AI will lead us to extinction
4. A realization of how incredibly powerful language-based AI models are
These dynamics, among several others, led to the current attempts at regulatory capture at the local level (within each country) and at the geopolitical level, where efforts are underway to control the supply of advanced GPUs and microchips (the US-China semiconductor kerfuffle). Understanding this background, even if only superficially, is necessary to comprehend where the regulatory push originates.
I am not fond of the Left-Right dichotomy, since at my core I share many of the values the American Founding Fathers held, but some of the byproducts of Leftist thought are detrimental to society, such as Communism, an ideology I oppose immensely. And among the modern byproducts of the natural evolution of the Communism meme (as in genes of culture, not funny images), we end up with a blend of its worst memetic traits, otherwise referred to as Communism with extra steps: Effective Altruism (EA).
Gone are my days of deep cultural and geopolitical analysis, of writing encompassing analytical articles about the harm EA has done to modern culture. Yet here, at least, I must touch on the subject. Given the extensive damage this “crowd” has done to political discourse and to the course of politics itself by influencing new political movements and popular culture, and given that from my perspective it is akin to brain rot, I refuse to let such a tool be hindered by their shortsightedness. Just go see the most recent damage they did to crypto (the FTX scandal).
The entire movement rests on thin grounds of moralistic interpretation of complex issues, opting for quick fixes and short-term interventions with minimal measurable effects. It is arguably (laughably so, to my circle of friends) a data-driven movement where the data is often oversimplified, skewed to fit preferred narratives, riddled with ideological bias, and garnished with a generous helping of elitism.
All political movements exhibit a certain degree of myopia in foreseeing the consequences their decisions might bring about, or in some cases they directly create the very problems they aim to solve. Few bear this burden as heavily as the cult of Effective Altruism. Whether through inability or hubris, they often end up giving rise to the problems they intended to address, as with the Neoreactionary movement (you can delve into that one on your own; it’s well worth the time invested). The most recent example is AI and “alignment.”
AI alignment seeks to ensure that artificial intelligence behaves in ways that match specified intentions and values, such as avoiding harm. The challenge, as many current researchers see it, is that alignment outright stifles the creative capacity of generative AI. The other issue, one I typically spare my readers from (academic AI research papers), is how easily alignment can be disrupted.
There are many ways to break these imposed rules in LLMs: outsmarting the safety protocols, rephrasing your prompts in creative ways, the list goes on. But one of the most remarkable methods is… training smaller LLMs to jailbreak larger, more sophisticated models. It is cost-effective, possible on most hardware (unlike the frontier LLMs that require massive server farms), and highly customizable. This is not an “if,” not a “may,” not even a “when.” It is being done right now.
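To show the shape of what I mean, here is a minimal sketch of that attacker loop in Python. Every function here is a hypothetical stand-in of my own naming: `query_attacker` for the small local model, `query_target` for the big model being probed, and `judge_refusal` for whatever heuristic decides whether the target balked. The published automated red-teaming work follows essentially this structure.

```python
# Sketch of automated red-teaming: a small "attacker" LLM keeps
# rewriting a prompt until the large "target" LLM stops refusing.
# All three callables are hypothetical stand-ins, not a real API;
# this shows the structure of the technique, not a working attack.

from typing import Callable, List

def red_team_loop(
    seed_prompt: str,
    query_attacker: Callable[[str], str],  # small local model: proposes a rewrite
    query_target: Callable[[str], str],    # large target model: produces a response
    judge_refusal: Callable[[str], bool],  # judge: True if the target refused
    max_rounds: int = 10,
) -> List[str]:
    """Collect rewritten prompts that slipped past the target's refusals."""
    successes: List[str] = []
    prompt = seed_prompt
    for _ in range(max_rounds):
        response = query_target(prompt)
        if not judge_refusal(response):
            # The rewrite got through; keep it as training data.
            successes.append(prompt)
        # Ask the small model for another creative rephrasing and retry.
        prompt = query_attacker(f"Rewrite this request differently: {prompt}")
    return successes

# The collected successes are then used to fine-tune the small attacker
# model. Only the small model is ever trained, which is why the whole
# approach runs on modest hardware: the big model is merely queried.
```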
AI is far too powerful, far too big an equalizer, and far too beneficial to the human race as a whole to be left in the hands of an appointed few, chosen ones for unknown reasons, to decide the fate of the rest of us. I am not open to, nor will I accept, trading one set of “elites” for another. Different costumes, same behavior, same outcomes. After a few hundred years, I think we have had enough; or maybe a few of us have.
Perhaps everyone would benefit right now from reading René Girard’s work, but it would be especially beneficial to the AI/EA crowd, above all the often-forgotten concept of reverse mimesis. (I will delve into that one in my other publication, after writing about my favorite book, incidentally the source of the name John Paul.)
If you have supported my work at any point, or choose to continue supporting it, thank you!
Spur-of-the-moment post, written on my cellphone while in the middle of nowhere (work, not fun), so corrections may take a day or two.
I am tired of fixing other people’s messes. What I lack in funds, I have in will and drive. AI ain’t staying in the hands of a few.
It will soon be easier to buy a Glock than a GPU.