
Warnings against AI rise as UN, safety index reports reveal widening global risks

AI (Artificial Intelligence) smartphone app ChatGPT surrounded by other AI apps in Vaasa, Finland, June 6, 2023. (AFP Photo)
By Newsroom
December 05, 2025 03:22 AM GMT+03:00

Governments and tech companies face growing scrutiny over the pace of artificial intelligence development.

Two new reports released this week warn that AI could deepen global inequality and create severe safety risks if countries and companies fail to act.

A United Nations assessment and a separate safety index show fast adoption, uneven preparedness, and weak oversight across regions and firms.

UNDP warns of rising AI inequality risks

The UN Development Programme (UNDP) said unmanaged AI could undo decades of development gains. Its report warned that adoption is moving faster than many countries’ ability to keep up.

Philip Schellekens, UNDP’s chief economist for Asia and the Pacific, said capability is the “central fault line” of this era, adding that countries with stronger infrastructure and governance will gain from AI while others fall behind.

The Asia Pacific region carries the sharpest risks. It is home to more than half of the world’s population, yet only 14 percent of people in the region use AI tools.

About 3.7 billion people remain outside the digital economy and one quarter of the population is offline. Gender gaps remain large in South Asia, where women are up to 40 percent less likely than men to own a smartphone.

AI-driven growth is possible. UNDP said AI could raise annual GDP growth by around 2 percentage points and increase productivity in sectors including health and finance. ASEAN economies could gain nearly 1 trillion dollars over the next decade.

Yet structural challenges persist, with 1.3 billion workers in informal jobs, 770 million women excluded from the labour force, and 200 million people living in extreme poverty.

Asia Pacific faces deep digital, gender divides

The report said women and young people face the highest exposure to AI-driven job disruption.

Jobs held by women are almost twice as exposed to automation as those held by men, and employment among workers aged 22 to 25 is already declining in high-exposure roles.

Meanwhile, bias remains a central problem. Credit models trained mainly on urban male borrowers have misclassified women entrepreneurs and rural farmers as high-risk, shutting them out of financial support.

Rural and Indigenous communities also face exclusion because they are often absent from the data used to train AI systems.

The digital divide continues to shape outcomes. More than 1.6 billion people in the region cannot afford a healthy diet and 27 million young people remain illiterate.

Many countries depend on imported models that do not reflect their languages or cultural contexts, limiting the value of AI in essential services. Digital skill shortages slow progress even as interest in AI rises across the region.

Europe struggles with uneven AI preparedness

The UNDP report noted that only a limited number of countries have comprehensive AI regulations despite growing use in areas such as flood forecasting and credit scoring.

It warned that by 2027 more than 40 percent of AI-related data breaches could stem from misuse of generative AI.

European and North American findings showed similar divides. Countries such as Denmark, Germany, and Switzerland ranked among the top performers in AI readiness.

Albania and Bosnia and Herzegovina fell far behind Western Europe. Kanni Wignaraja, UN Assistant Secretary-General and UNDP’s regional director for Asia and the Pacific, said the widening gaps are not inevitable and stressed that many countries remain “at the starting line.”

Report finds no control plans for advanced AI systems

A separate report by the Future of Life Institute (FLI) found that the world’s largest AI companies are failing to meet their own safety commitments.

The 2025 Winter AI Safety Index evaluated eight major firms: Anthropic, OpenAI, Google DeepMind, xAI, Meta, DeepSeek, Alibaba Cloud, and Z.ai. Reviewers said none had produced a testable plan to maintain human control over highly capable systems.

Stuart Russell, a computer science professor at the University of California, Berkeley, said companies claim they can build superhuman AI but cannot demonstrate how to prevent loss of control. He noted that firms admit the risk could be as high as “one in three.”

The index measured companies across six categories including risk assessment, current harms, governance, and information sharing. It found progress but said implementation remains inconsistent.

How did the world's largest AI companies rank?

Anthropic, OpenAI, and Google DeepMind scored the strongest overall but each showed weaknesses.

  • Anthropic faced criticism for ending human uplift trials.
  • Google DeepMind improved its safety framework but still relies on evaluators who receive financial compensation from the company.
  • OpenAI was faulted for ambiguous safety thresholds and lobbying against state-level AI safety rules.

An OpenAI spokesperson said safety is a crucial part of how the company builds and deploys its systems. "We invest heavily in frontier safety research, build strong safeguards into our systems, and rigorously test our models, both internally and with independent experts," the spokesperson told Euronews Next in a statement.

"We share our safety frameworks, evaluations, and research to help advance industry standards, and we continuously strengthen our protections to prepare for future capabilities,” the statement added.

The remaining firms showed more mixed results.

  • xAI released its first structured safety framework, though reviewers said it lacked clear mitigation triggers.
  • Z.ai allowed uncensored publication of external evaluations but has not released its full governance structure.
  • Meta launched a frontier safety framework with outcome-based thresholds but reviewers called for clearer methodology.
  • DeepSeek still lacks basic safety documentation.
  • Alibaba Cloud contributed to national watermarking standards but needs stronger performance on fairness and safety benchmarks.

FLI president Max Tegmark told Euronews Next that AI remains “less regulated than sandwiches” in the United States and noted continued lobbying against binding safety standards. He said public concern over superintelligence is rising.

A petition organised by FLI in October urged companies to slow development. Thousands of public figures signed it, including Steve Bannon, Susan Rice, religious leaders, former politicians, and prominent computer scientists.

Tegmark said the broad support reflects growing fear about economic and political instability if advanced systems develop without oversight.

He warned that uncontrolled superintelligence could displace entire workforces and leave societies deeply dependent on government support, a fear that cuts across ideological divides.
