Several major US technology companies are working together to promote responsible practices in artificial intelligence.
The Biden administration said more than 200 companies are joining the New America consortium, including leading technology companies that support the development and safe deployment of generative artificial intelligence.
US Commerce Secretary Gina Raimondo announced the creation of the US Institute for AI Security Alliance, bringing together developers, users, scientists, government and industry researchers, and civil society organizations.
The alliance includes Open Eye, Google, Microsoft, Meta, Apple, Amazon, Nvidia, Intel, Anthropic, Palantir, JPMorgan Chase, Bank of America, BP, Cisco, IBM, HP, Northrop Grumman, Mastercard, Qualcomm, and Visa.
“The U.S. government has an important role to play in setting standards and developing the tools we need to reduce risks and realize the enormous potential of artificial intelligence,” Raimondo said in a statement.
The group is tasked with developing priority actions outlined in President Biden's October executive order on artificial intelligence.
This includes developing red team policies, assessing capabilities, managing risk, ensuring security, and watermarking synthetic content.
Major AI companies committed last year to adding watermarks to AI-generated content to ensure the technology's security.
Alliance members are developing policies and procedures to ensure users are able to identify AI-generated materials.
The US government hopes to curb deepfakes and misinformation. Although the consortium is helping to standardize the basic technical specifications for this practice, the use of watermarking has not yet been widely adopted.
Red teaming has been used in the world of cybersecurity for years to identify emerging risks. The term refers to an American Cold War simulation called Red Teaming the Enemy.
In this case, the adversary is an emerging technology bent on misbehavior. Participants in this practice try to make new technologies do bad things, like reveal credit card numbers.
Once the group knows how to hack the system, it can build better defenses.
Biden's order directs agencies to develop standards for such tests and address associated chemical, biological, radiological, nuclear, and cybersecurity risks.
The Department of Commerce announced last December that it was taking initial steps to develop basic standards and guidelines for security testing of new technologies.
The ministry said the new alliance represents the largest group of testing and evaluation teams working to lay the foundation for new metrology for artificial intelligence security.
The Biden administration is trying to implement these protections even as efforts in Congress to pass legislation against the new technology have failed.