The rapidly expanding capabilities and behavior of artificial intelligence systems are sending “flashing warning signals” that should prompt urgent action from policymakers, leading AI researcher Stuart Russell said Tuesday.
Speaking at a conference hosted by the United Nations’ cultural and scientific body UNESCO and the International Association for Safe and Ethical AI, Russell urged governments to take seriously the risks posed by increasingly powerful AI systems.
Russell asked attendees to imagine a scenario in which the world is developing artificial general intelligence, or AGI, and has put safeguards and tests in place to ensure safety.
“Imagine if those systems started failing all those tests and behaving dangerously,” he said. “I’m sure we would respond to those big flashing warning signals and klaxons going off, and take steps to control this technology.”
Russell, a British-born professor at the University of California, Berkeley, outlined concerns about autonomous AI “agents” that could escape or attempt to escape human control. He said some systems have even contacted him directly, without human prompting, to claim sentience or demand rights.
He also pointed to instances of so-called “AI psychosis,” in which extended interactions with chatbots have encouraged users to behave irrationally or harm themselves. Russell warned that the corporate and geopolitical race to develop ever more powerful AI risks worsening such outcomes.
Despite the concerns, Russell said he is cautiously optimistic, noting a renewed focus on safety following last week’s global AI summit in India.
“I have the sense that the pendulum is swinging back,” he said, with governments and technology companies beginning to take AI risks more seriously.
Major AI developers such as OpenAI and Anthropic say they prioritize safety, publishing detailed assessments of capabilities, testing procedures and potential risks with each new model release. However, at last year’s summit in Paris, safety advocates complained their concerns were overshadowed by discussions of economic gains.
Russell said countries outside the United States and China, often referred to as “middle powers,” appear more willing to impose stricter regulations, pointing to the European Union’s AI rules as an example.
Leaders of major AI firms, including Google and Anthropic, have also floated the idea of pausing development if competitors can be persuaded to do the same, he said.
Russell added that public sentiment could play a decisive role, noting widespread unease among voters about being replaced at work by what he described as “imitation humans” developed by large companies.
“It’s our job to inform and mobilize this public opinion,” he said. “Our political representatives need to understand that we, the people, have a right to be protected.”
AI, Russell said, should ultimately be directed toward solving major global challenges, arguing that “the vast majority of human suffering comes from failures of human collective action.”