The Biden administration on Friday secured voluntary commitments from executives at seven technology companies to take steps to ensure the safe development and deployment of generative AI systems, addressing concerns raised since ChatGPT’s release last fall.
The companies are Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI. OpenAI is the developer of ChatGPT; Microsoft has invested in OpenAI and built its technology into the Bing search engine.
“These commitments, which the companies will implement immediately, underscore three fundamental principles: Safety, security and trust,” Biden said in announcing the news at the White House, flanked by the company executives.
The commitments, which don’t come with any government monitoring or enforcement, include:
- Internal and external security testing of AI systems before release
- Information sharing on risk mitigation across industry, government and academia
- Cybersecurity and insider threat safeguards to guard against leaks of proprietary and unreleased model weights
- Mechanisms such as watermarks that let consumers know when content is AI generated
- Reporting on systems’ “capabilities, limitations, and areas of appropriate and inappropriate use.”
- Prioritizing research on potential societal risks, including “harmful bias and discrimination, and protecting privacy.”
- A commitment to developing AI systems aimed at tackling major societal challenges such as cancer and climate change.
The administration said it had consulted on the voluntary commitments with a number of other nations, including Australia, Brazil, Canada, Chile, France, Germany, India, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, New Zealand, Nigeria, the Philippines, Singapore, South Korea, the UAE and the UK.
In May, OpenAI CEO Sam Altman told a Senate hearing that government intervention was necessary to provide adequate safeguards against improper use of generative AI. Altman proposed the creation of a U.S. or global agency, the Associated Press reported.
A statement organized by the Center for AI Safety went a step further. Hundreds of signatories, including Altman, other tech luminaries, politicians and academics, signed it in late May: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”