
World leaders agree to launch network of AI safety institutes

Ten nations along with the European Union have committed to establishing additional artificial intelligence (AI) safety institutes aimed at coordinating research on machine learning standards and testing.

This decision was reached at the AI Safety Summit held in Seoul, South Korea, where global leaders convened virtually.

The initiative will facilitate collaboration among researchers from publicly funded institutions, such as the UK’s AI Safety Institute, to exchange insights on the risks, capabilities, and limitations of AI models.

“AI is a hugely exciting technology…but to get the upside, we must ensure it’s safe,” UK prime minister Rishi Sunak said in a press release.

“That’s why I’m delighted we have got an agreement today for a network of AI Safety Institutes.”

Signatories to the new AI Safety Institute network include the EU, France, Germany, Italy, the UK, the United States, Singapore, Japan, South Korea, Australia, and Canada.

The UK established the world’s first AI Safety Institute last November with an initial investment of £100 million (€117.4 million). Since then, countries such as the United States, Japan, and Singapore have created their own AI safety institutes.

According to a November 2023 press release from the UK government, the mission of the UK’s AI Safety Institute is to “minimise surprise to the UK and humanity from rapid and unexpected advances in AI.”

The European Union has passed the EU AI Act and is preparing to establish its AI office.

The European Commission had announced that the head of this new office would be appointed following the law’s approval.

Ursula von der Leyen, President of the European Commission, stated at last year’s AI Safety Summit that the office should have a “global vocation,” aiming to work with similar organizations worldwide.

Leaders at the Seoul summit also endorsed the Seoul Declaration, emphasizing the importance of “enhanced international cooperation” to create human-centric, trustworthy AI.

At this week’s AI Safety Summit, 16 major tech companies, including OpenAI, Mistral, Amazon, Anthropic, Google, Meta, Microsoft, and IBM, agreed to safety commitments. These include setting risk thresholds for AI and ensuring transparency.

The UK government, which co-hosted the event, called this agreement a “historic first”. France will host the next summit on safe AI use.
