It was only a matter of time before the culture wars came to AI.
Since the release of ChatGPT in late 2022, Elon Musk has railed on Twitter against what he has called “Woke AI.” He has specifically criticized OpenAI, the developer of ChatGPT, for features designed to prevent the chatbot from parroting racism and sexism.
Now, the billionaire is courting AI researchers with a proposal to start a new AI company to rival the developer of ChatGPT, the tech news site The Information reported on Wednesday.
“The danger of training AI to be woke—in other words, lie—is deadly,” Musk tweeted in December.
It’s true that large language models (LLMs)—the tech that ChatGPT is based on—have difficulties telling the truth, often confidently asserting false information. But in his recent statements, Musk has appeared to conflate AI’s truthfulness problem with the largely separate efforts inside AI companies to make their LLMs less racist and sexist.
The racism and sexism of unfiltered LLMs stem from the large quantities of internet data that the AIs were trained on. But a narrative appears to be developing in right-wing corners of the internet—now amplified by Musk—that racism and sexism are desirable features in AI, and that efforts to rid AIs of those biases are yet another form of “censorship” by powerful liberal forces. Influencers on the political right have drawn parallels between these efforts and the measures taken by social media companies to reduce hate speech and toxicity on their platforms.
If Musk does follow through with his rumored plans to start an AI company, it wouldn’t be his first rodeo. He was a member of the founding team of OpenAI, established in 2015 as a rival to what Musk and his co-founders saw as a dangerous concentration of AI expertise in the hands of for-profit tech companies. OpenAI began as a nonprofit that aimed to make its research open and accessible to all. But when it started making progress with large language models, it changed that approach, arguing that the technology was too dangerous to release publicly. Musk stepped away from OpenAI in 2018 amid what he later said were disagreements over its approach. OpenAI has since transitioned from a nonprofit to a for-profit company, arguing that selling its services is the only way to reach the scale necessary to cover the costs of developing cutting-edge AI.
Musk has also voiced alarm at the fast-rising power of AI. “I am a little worried about the AI stuff,” Musk said at a Tesla investor event on Wednesday. “I think it’s something we should be concerned about. […] It’s quite a dangerous technology and I fear I may have done some things to accelerate it.”
But if Musk believes his role in founding OpenAI has accelerated the development of dangerous technology, he does not appear to believe that starting another AI company would have the same effect. On Tuesday, ahead of the investor meeting, Musk tweeted a meme suggesting that “BasedAI”—the rumored name for his new venture—would sweep away both “Woke AI” and “Closed AI.” (The latter appears to refer to the practice by tech companies of keeping the most racist and sexist versions of AI chatbots away from public eyes.) The expression “based” originated in hip hop slang as a term of respect, signifying that a person is authentic to their true self. But it has since been co-opted by right-wing online communities, where it is used to praise people unafraid to voice controversial opinions.
Igor Babuschkin, a researcher whom Musk reportedly approached about his plans to start a new AI company, told The Information that Musk’s objective isn’t to build a chatbot with fewer safety features than ChatGPT: “The goal is to improve the reasoning abilities and the factualness of these language models.”
Still, some AI researchers who spoke with TIME for this article said they were worried that by talking about AI in the language of the social media culture wars, Musk could end up warping the dynamics of a field where cooperation is so crucial—especially as the technology becomes more powerful. “By calling out measures that are put in place to protect users as [instead] part of a ‘larger liberal conspiracy,’ Musk is undermining the work of actually making these products better and more useful,” says Rumman Chowdhury, Twitter’s former AI ethics lead. “What I find ironic about this tactic is, it serves nobody but himself and his cronies who believe in these more conservative tactics. There is very little to no tangible value to humanity writ large, and there is even no good business reason to be doing what he’s doing. The intent behind it is purely ideological and political.”
Other AI safety experts also questioned Musk’s apparent opposition to “Closed AI.” “If anyone could make a nuclear weapon in their basement for $10,000—let me just say I’m glad we don’t live in that world,” says Michael Cohen, an AI safety researcher at the University of Oxford’s Future of Humanity Institute. “The idea that you can fix the dangers of uncontrolled AI by giving more people AI that they cannot control is ludicrous.”
As with all things Elon Musk, take his supposed plans with a pinch of salt. “Elon is a lot of bluster and posture,” says Chowdhury, who briefly worked under the billionaire at Twitter before he fired her. “The last thing we should be doing is assuming what he says is actually what is going to happen.”