
The White House on Friday announced safeguards for sharing, testing, and developing new artificial intelligence programs.
Seven companies, including Amazon, Google, Meta, and Microsoft, have committed to following the guidelines. The commitments are an initial step toward regulating AI while Congress considers further legislation. Governments around the world, including the European Union, are also weighing how to regulate AI systems.
What do the safeguards require? They include security testing, carried out in part by third parties, to guard against biosecurity and cybersecurity risks, as well as analysis of potential societal harms.
The companies will also report vulnerabilities in their systems and use digital watermarks to identify AI-generated content. The commitments do not address concerns about privacy, copyright, or AI's effect on the workforce.
This story originally appeared in WORLD. © 2023, reprinted with permission. All rights reserved.