Meta has announced plans to introduce technology capable of identifying and labeling images generated by artificial intelligence (AI) tools developed by other companies. The technology will be rolled out across its platforms, including Facebook, Instagram, and Threads.
Meta currently labels images generated by its own AI systems, and it aims to expand this labeling to cover images produced by other companies’ AI tools in the coming months. The company hopes the initiative will build momentum across the industry for combating AI-generated fakery.
However, despite Meta’s efforts, some experts remain skeptical that such tools will work. Prof. Soheil Feizi, director of the Reliable AI Lab at the University of Maryland, voiced concerns about how easily such systems could be circumvented. He noted that while detectors can be trained to flag specific AI-generated images, simple image processing, such as resizing or re-compressing a picture, can let people evade them; the detectors are also prone to false positives, flagging genuine images as AI-generated.
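To illustrate the sort of “lightweight processing” Prof. Feizi describes, the following Python sketch applies a resize-and-recompress pass to an image. Transformations like these can degrade the invisible watermarks and statistical fingerprints that many detectors rely on. This is a hypothetical illustration only; the file names and parameter values are assumptions, not details of any actual detector or Meta system.

```python
# Hypothetical sketch of "lightweight image processing" of the kind
# experts warn can defeat AI-image detectors. File names and parameter
# values are illustrative assumptions, not drawn from any real system.
from PIL import Image


def lightweight_process(in_path: str, out_path: str) -> None:
    img = Image.open(in_path).convert("RGB")

    # Downscale then upscale: resampling disturbs the pixel-level
    # patterns (e.g. invisible watermarks) a detector may key on.
    w, h = img.size
    img = img.resize((int(w * 0.9), int(h * 0.9)), Image.LANCZOS)
    img = img.resize((w, h), Image.LANCZOS)

    # Re-encode as JPEG at moderate quality; lossy compression washes
    # out high-frequency statistical fingerprints.
    img.save(out_path, "JPEG", quality=85)


if __name__ == "__main__":
    lightweight_process("generated.png", "processed.jpg")
```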
Audio and video content
Meta has acknowledged that the technology will not extend to audio and video content, even though these formats are among the chief concerns around AI fakery. Instead, the company plans to rely on users to label their own audio and video posts, and it may impose penalties on those who fail to do so.
In an interview with Reuters, Sir Nick Clegg, Meta’s president of global affairs, conceded the limitations of the company’s current approach, particularly in detecting text generated by tools such as ChatGPT.
The Oversight Board, an independent body funded by Meta, recently criticized the company’s policy on manipulated media as “incoherent” and “lacking persuasive justification.” The critique came in a ruling on a video of US President Joe Biden that had been edited to create a misleading impression. While the video did not violate Meta’s current policy on fake media, the board recommended that the rules be updated.
Sir Nick acknowledged the shortcomings of Meta’s existing policy and agreed that it needed updating to address the evolving landscape of synthetic and hybrid content.
Since January, Meta has required political advertisers to disclose when their ads use digitally altered images or video, a step toward greater transparency around manipulated content.