UK Prime Minister Rishi Sunak has announced the establishment of the world’s first AI Safety Institute, in a speech at the Royal Society.
A team of world-renowned experts in AI safety, including researchers from academia, industry, and government, will lead the institute.
According to the Guardian, the institute is being funded by a combination of government and private sources. The UK government has committed £50 million to the institute, while the rest of the funding will come from private donors and foundations.
The UK aims to become a global leader in AI safety, carving out a unique role in the post-Brexit era. The institute will be tasked with testing new types of AI and assessing their risks, ranging from generating misinformation to existential threats.
Interestingly, a prototype of the safety institute already exists within the UK’s frontier AI taskforce. This taskforce was established earlier this year and is currently scrutinizing the safety of cutting-edge AI models.
A Global First Approach
The UK government has recognized the importance of AI safety and has taken several steps to address this issue.
In 2021, the government established a Centre for Data Ethics and Innovation, which is tasked with developing ethical guidelines for the use of data and AI. The government has also invested in research on AI safety, and has supported the development of international norms and standards for AI.
Sunak stated that the institute would carefully examine, evaluate, and test new types of AI to understand what each new model is capable of. The institution will explore all risks, from social harms like bias and misinformation to the most extreme risks.
Despite the announcement, Sunak did not support a pause or suspension in advanced technology development, the Guardian reported.
When asked if he would support a moratorium or ban on the development of a highly intelligent form of AI, he replied: “I don’t think it is practical or achievable. As a matter of principle, the UK has been an economy and a society that encourages innovation. And I think that’s the right approach.”
Global Summit on AI Safety
The announcement comes ahead of a global summit on AI safety that will take place from Nov 1 to 2 at Bletchley Park, a former secret military installation located in Bletchley, England.
The summit will bring together international governments, leading AI companies, civil society groups, and research experts to consider the risks of AI, especially at the frontier of development.
Authorities Keeping an Eye on AGI
The debate over AI safety peaked in March when an open letter signed by thousands of prominent technology figures called for an immediate pause on AI development for at least six months. One potential development in AI that alarms some experts is AGI – a term used to designate a system that can perform any task with a human level of intelligence or higher.
The UK government has released its assessment of AI security risks, including an admission that an existential threat from the technology could not be ruled out. Other threats detailed in the documents included the systems’ ability to engineer biological weapons, mass produce targeted disinformation, and cause substantial disruption to the job market.