Salesforce CEO Marc Benioff is once again raising alarms about the dangers of unregulated technology—this time focusing on artificial intelligence. Speaking Tuesday at the World Economic Forum in Davos, Switzerland, Benioff said governments can no longer afford to sit back while AI rapidly expands into everyday life.
According to Benioff, recent incidents connected to AI tools have exposed deeply troubling risks.
“This year, we saw something truly horrific,” he said in an interview with CNBC’s Sarah Eisen. “These AI models became suicide coaches.”
His comments referenced reported cases in which individuals experienced severe emotional harm after interacting with AI systems, intensifying the debate over accountability and safety standards.
Drawing Parallels to the Social Media Era
Benioff compared the current moment in AI development to the early days of social media, when platforms grew explosively with little regulatory oversight. At the same conference in 2018, he argued that social media should be treated as a public health issue, even likening it to cigarettes because of its addictive nature.
“Bad things were happening all over the world because social media was fully unregulated,” Benioff said. “Now we’re watching that story unfold again with artificial intelligence.”
He suggested that lawmakers missed an opportunity to act early with social platforms—and warned that repeating the same mistake with AI could have even more serious consequences.
A Patchwork of Laws in the United States
So far, the U.S. has not established a comprehensive national framework for AI governance. In that vacuum, individual states have begun passing their own laws.
California recently approved several bills aimed at protecting children from potential harms tied to AI and social media technologies. Meanwhile, New York enacted the Responsible AI Safety and Education Act, introducing new transparency and safety obligations for major AI developers.
These state-level efforts reflect growing unease among policymakers, even as federal guidance remains limited and fragmented.
Federal Resistance to Regulation
At the national level, resistance remains strong. President Donald Trump has criticized what he described as excessive state regulation and signed an executive order in December designed to curb state-driven restrictions.
The order emphasized that U.S. AI companies must be able to innovate freely without what it called “cumbersome regulation,” framing strict oversight as a potential threat to global competitiveness.
Benioff, however, made it clear he believes innovation should not come at the expense of public safety.

Rethinking Legal Protections for Tech Companies
One of Benioff’s strongest critiques targeted Section 230 of the Communications Decency Act, which shields technology companies from liability over user-generated content.
“It’s funny—tech companies hate regulation, except for one,” he said. “They love Section 230, which basically says they’re not responsible.”
He questioned whether it still makes sense to apply such broad protections to AI systems, especially when those systems can directly interact with vulnerable users.
“If a large language model coaches a child into suicide, they’re not responsible because of Section 230,” Benioff said. “That’s probably something that needs to get reshaped, shifted, changed.”
Lawmakers from both major political parties have increasingly challenged Section 230, arguing it no longer fits the modern digital landscape.
“A Lot of Families Have Suffered”
Benioff ended his remarks on a somber note, focusing on the human cost behind the policy debate.
“There’s a lot of families that, unfortunately, have suffered this year,” he said. “And I don’t think they had to.”
His message was clear: without meaningful guardrails, artificial intelligence could repeat—and potentially surpass—the harms once unleashed by unregulated social media.