Artificial intelligence has joined the list of pressing global challenges world leaders and diplomats will address this week at the United Nations’ annual high-level meeting.
Since the launch of ChatGPT about three years ago, AI’s rapid progress has stunned the world. Tech firms continue to race ahead with more advanced systems, even as experts warn of dangers ranging from engineered pandemics to mass disinformation and urge stronger safeguards.
The U.N.’s recent adoption of a new governance structure marks its most significant attempt yet to rein in AI. Earlier international efforts — including three summits hosted by Britain, South Korea, and France — produced only non-binding pledges.
Last month, the General Assembly approved the creation of two bodies: a global forum and an independent scientific expert panel. The move is seen as a milestone in shaping international AI governance.
On Wednesday, the U.N. Security Council will hold an open debate on the responsible use of AI, including compliance with international law and its role in peace processes and conflict prevention. The following day, Secretary-General António Guterres will launch the Global Dialogue on AI Governance during the annual meeting. The forum will serve as a platform for governments and other stakeholders to share ideas and strengthen cooperation. It is scheduled to convene formally in Geneva in 2026 and in New York in 2027.
Meanwhile, recruitment will begin for 40 experts, including two co-chairs from developed and developing nations, to join the new scientific panel. The body is being compared to the U.N.'s Intergovernmental Panel on Climate Change, whose scientific assessments inform international climate negotiations, including the annual COP conferences.
Chatham House researcher Isabella Wilkinson called the creation of the new bodies “a symbolic triumph” and “the world’s most globally inclusive approach to governing AI.” But she cautioned that the mechanisms might remain “mostly powerless,” pointing to the U.N.’s slow bureaucracy compared with the speed of AI’s development.
Ahead of the gathering, a group of prominent AI specialists urged governments to establish "red lines" for the technology by the end of next year, setting minimum global safeguards against the most serious risks. The group includes senior staff from OpenAI, Google DeepMind, and Anthropic. They are pushing for a binding international agreement, citing earlier treaties banning nuclear tests and biological weapons as precedents.
“The idea is simple,” said Stuart Russell, an AI professor at the University of California, Berkeley. “As with medicines and nuclear plants, developers should be required to prove safety before gaining market access.”
Russell suggested U.N. oversight could mirror the International Civil Aviation Organization, which coordinates global safety standards among national regulators. Rather than fixed rules, he argued for a flexible “framework convention” that can adapt to rapid advances in AI.
Source: Agency