The Group of Seven (G7) industrialized countries have agreed on a voluntary code of conduct for companies developing advanced artificial intelligence (AI) systems. The code will establish guidelines and controls over the use of AI technology, with specific principles for overseeing advanced forms of AI, such as generative AI.
The G7, comprising Canada, France, Germany, Italy, Japan, Britain, and the United States, along with the European Union, initiated this process in May at a forum known as the “Hiroshima AI process”. The 11-point code aims to promote safe, secure, and trustworthy AI worldwide.
The G7 officials who reached the agreement will reconvene in Kyoto, Japan, in early October to further discuss and finalize the code of conduct, which is expected to be presented to G7 leaders in November.
The code of conduct is expected to require companies to:
- Mitigate potential societal harm from their AI systems.
- Implement robust cybersecurity controls to ensure the security of AI technology throughout its development and usage.
- Establish risk management systems to mitigate the potential misuse of AI.
The code is designed to help seize the benefits of these technologies while addressing the risks and challenges they bring. It urges companies to take appropriate measures to identify, evaluate, and mitigate risks across the AI lifecycle. It also calls for tackling incidents and patterns of misuse after AI products have been placed on the market.
Companies are encouraged to publish public reports on the capabilities, limitations, use, and misuse of AI systems. They are also urged to invest in robust security controls. This code of conduct is seen as a stopgap before regulations come into effect and is aimed at encouraging companies to mitigate risks and misuse of AI technology.
While the European Union has been at the forefront of regulating emerging technology with its hard-hitting AI Act, Japan, the United States, and countries in Southeast Asia have taken a more hands-off approach to boost economic growth.
European Commission digital chief Vera Jourova stated that a code of conduct was a strong basis for ensuring safety and would act as a bridge until regulation is in place.
“Voluntary codes of conduct are often unenforceable and can be used by companies to greenwash their AI practices,” said Dr. Sarah Khan, a researcher at the Center for AI and Public Policy at the University of California, Berkeley. “It is important to ensure that the G7’s code of conduct is accompanied by strong enforcement mechanisms.”
Other experts have pointed out that the G7's code of conduct will apply only to companies in the participating countries. This means that many of the world's largest AI companies, such as Baidu and Tencent, will not be subject to the code.
“It is important to develop an international code of conduct for AI that is truly global in scope,” said Dr. Kai-Fu Lee, a leading AI expert and the former president of Google China. “Only then can we be sure that AI is used responsibly and ethically around the world.”