
Meta Commits to Combating Deepfakes Ahead of Australian Election

In the lead-up to Australia’s federal election in May 2025, Meta Platforms, the parent company of Facebook and Instagram, has announced a series of measures to combat misinformation and deepfakes on its platforms. These initiatives aim to safeguard electoral integrity and ensure that users receive accurate information during the election period.
Enhanced Fact-Checking and Content Moderation
Meta has retained its independent fact-checking program in Australia, collaborating with reputable news agencies such as Agence France-Presse (AFP) and the Australian Associated Press (AAP) to review and verify content flagged as potentially misleading. Cheryl Seeto, Meta’s Head of Policy in Australia, stated, “When content is debunked by fact-checkers, we attach warning labels to the content and reduce its distribution in Feed and Explore so it is less likely to be seen.” By limiting the visibility of debunked posts in users’ feeds, this approach aims to curtail the spread of false information.
Addressing Deepfake Content
Deepfakes, hyper-realistic videos, photographs, or audio generated with AI, pose a significant threat to information integrity. Meta has committed to removing any deepfake content that violates its policies, or labeling it as “altered” and reducing its distribution. Users will also be prompted to disclose when they post or share AI-generated content. Seeto emphasized the importance of transparency, noting, “For content that doesn’t violate our policies, we still believe it’s important for people to know when photorealistic content they’re seeing has been created using AI.”
Consistency with Global Efforts
Meta’s strategy in Australia aligns with its efforts to prevent misinformation during recent elections in countries such as India, Britain, and the United States. Despite scaling back fact-checking programs in the U.S., Meta has maintained these measures in Australia, reflecting a tailored approach to the distinct challenges posed by each region’s information landscape.
Regulatory Pressures and Compliance
The Australian government has proposed regulations that would impose fines on social media companies failing to curb misinformation, with penalties reaching up to 5% of their global revenue. These measures underscore the importance of proactive efforts by platforms like Meta to monitor and manage content effectively, thereby avoiding potential financial and reputational repercussions.
Challenges in Deepfake Detection
The proliferation of deepfakes presents ongoing challenges. A study involving Australia’s Commonwealth Scientific and Industrial Research Organisation (CSIRO) found that existing detection systems struggle to identify deepfakes reliably in real-world conditions, as opposed to controlled benchmark settings. This finding underscores the need for more adaptable and resilient tools to detect deepfakes and mitigate their impact.
As Australia approaches its federal election, Meta’s initiatives to combat misinformation and deepfakes are crucial steps toward preserving the integrity of the democratic process. By enhancing fact-checking collaborations, implementing transparent content labeling, and aligning with regulatory expectations, Meta aims to provide a safer and more informed environment for its users.