A coalition of 20 technology companies has signed an agreement to help prevent AI-generated deepfakes from disrupting key 2024 elections in more than 40 countries.
Companies including OpenAI, Google, Meta, Amazon, Adobe and X have joined the agreement to combat deceptive AI-generated content that could mislead voters.
The signatories to the deal, which targets the fraudulent use of artificial intelligence in the 2024 elections, include both the companies that create and distribute AI models and the social platforms where deepfakes are most likely to spread.
Signatories include Google, Adobe, TikTok, Snapchat, Microsoft and Meta, among others.
In 2024, more than four billion people in over 40 countries will go to the polls to elect their leaders and representatives, more voters than in any other year in history.
At the same time, the rapid development of artificial intelligence has brought new opportunities and challenges to the democratic process.
Society as a whole must seize the opportunities presented by artificial intelligence and take new steps together to protect elections and electoral processes in this extraordinary year.
The group described the agreement as a set of commitments to use technology to combat harmful content generated by artificial intelligence and designed to mislead voters.
The signatories agree to make the following commitments:
- Develop and implement technologies, including appropriate open-source tools, to mitigate the risks posed by misleading AI-generated election content.
- Assess models covered by the agreement to understand the risks they may pose of producing misleading AI-generated election content.
- Seek to detect the distribution of such content across their platforms.
- Address such content appropriately when it is detected on their platforms.
- Build cross-industry resilience to misleading AI-generated election content.
- Provide public transparency about how each company deals with deepfakes.
- Continue engaging with a diverse range of global civil society organizations and academics.
- Support efforts to increase public awareness, media literacy, and societal resilience.
The agreement applies to AI-generated audio, video and images, and covers content that deceptively fakes or alters the appearance, voice or behavior of political candidates, election officials and other key stakeholders in democratic elections, as well as content that gives voters false information about when, where and how to vote.
The signatories said they are working together to develop and share tools to detect and combat the spread of deepfakes, and plan to conduct education campaigns and provide transparency to users.