Google, OpenAI, and Microsoft promise to never destroy humanity with their Artificial Intelligence

Google, OpenAI, Microsoft and 13 other technology giants have signed up to a new charter, promising never to create an Artificial Intelligence model capable of causing damage to humanity

By Aaron Brown

Published: 21/05/2024, 19:30 | Updated: 13/06/2024, 10:13

Historic agreement was signed at a two-day AI summit attended by Elon Musk and Eric Schmidt

In an unprecedented move, 16 of the world's biggest technology companies have signed a charter pledging never to develop AI systems that pose an extreme threat to humanity, such as helping users build weapons of mass destruction.

The development comes on the opening day of the AI Seoul Summit, with companies from the US, China, Europe and the Middle East each agreeing to publish safety frameworks setting out how they will measure the risks of their frontier AI models, such as the risk of the technology being misused by bad actors. The event took place days after OpenAI unveiled its latest ChatGPT model and Google confirmed several AI updates.

Amazon, Google, Microsoft, Meta and OpenAI are among the long list of companies that have signed up to the Frontier AI Safety Commitments, which commit these firms to ensuring that bad actors cannot misuse Artificial Intelligence (AI) models.

In the most extreme circumstances, the companies have committed to “not develop or deploy a model or system at all” if they cannot ensure the risks remain below the threshold outlined in the framework.

Speaking about the announcement made at the two-day AI Seoul Summit, Prime Minister Rishi Sunak said: “It’s a world first to have so many leading AI companies from so many different parts of the globe all agreeing to the same commitments on AI safety.

“These commitments ensure the world’s leading AI companies will provide transparency and accountability on their plans to develop safe AI. It sets a precedent for global standards on AI safety that will unlock the benefits of this transformative technology.”

The UK announced the world's first AI Safety Institute at the AI Safety Summit held at Bletchley Park in November last year, set up to carry out research and voluntary evaluation and testing of AI models; a number of other countries have since announced their own domestic institutes.

The newly signed “Seoul Statement of Intent toward International Cooperation on AI Safety Science” will see the network of institutes share research, including details about models they have studied, with the aim of advancing global understanding of the science around artificial intelligence.

In addition to world leaders, the virtual AI Seoul Summit was attended by key figures from leading technology and AI firms, including Elon Musk, former Google chief executive Eric Schmidt, and DeepMind co-founder Sir Demis Hassabis.

Microsoft Copilot demoed in the current version of Windows 11

Windows 11 already boasts a number of ChatGPT-powered features, including a first-generation version of Windows Copilot that uses generative AI to put together comprehensive answers to questions, summarise emails, and create how-to tutorials for your PC (Image: Microsoft)

Technology Secretary Michelle Donelan said: “AI presents immense opportunities to transform our economy and solve our greatest challenges — but I have always been clear that this full potential can only be unlocked if we are able to grip the risks posed by this rapidly evolving, complex technology.

“Ever since we convened the world at Bletchley last year, the UK has spearheaded the global movement on AI safety and when I announced the world’s first AI Safety Institute, other nations followed this call to arms by establishing their own.

"Capitalising on this leadership, collaboration with our overseas counterparts through a global network will be fundamental to making sure innovation in AI can continue with safety, security and trust at its core.”
