
AI Transparency: OpenAI Joins the Coalition for Content Provenance and Authenticity

News

OpenAI, the American artificial intelligence (AI) research organization that developed and launched ChatGPT in 2022, officially joined the Coalition for Content Provenance and Authenticity (C2PA) on Tuesday, May 7. Announced in a press release published on the C2PA’s website, OpenAI joins fellow steering committee members Adobe, BBC, Intel, Microsoft, Google, Publicis Groupe, Sony, and Truepic. 

 

The credibility and ethics of AI-generated content have raised many concerns, mostly about the authenticity and legality of producing content with AI tools. This covers content generated entirely by AI as well as original content edited using AI tools. 

 

The C2PA hopes to mitigate the unchecked spread of content created with AI tools by improving how the circulation and distribution of AI-generated content are monitored. OpenAI’s strategic move positions the organization as part of a solution that could diminish the spread of disinformation. The decision is also politically timely, as AI-generated content often proliferates during election seasons. 

 

C2PA’s objective is simple: to increase transparency around generated content. The coalition has developed a technical standard for tamper-resistant metadata that helps consumers identify AI-generated content. Under the standard, content carries difficult-to-tamper metadata that records how it was truly created, whether it was generated entirely by an AI tool or merely edited with one. Authentically captured content also carries provenance metadata under the C2PA standard.  

 

“People can still create deceptive content without this information (or can remove it), but they cannot easily fake or alter this information, making it an important resource to build trust,” explained OpenAI. 
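To illustrate how such metadata travels with a file: C2PA manifests are typically embedded in JPEG images as JUMBF data carried in APP11 marker segments. The sketch below is a hypothetical helper, not OpenAI’s or the C2PA’s code; it only checks whether a JPEG appears to contain such a segment. Real validation must parse the JUMBF boxes and cryptographically verify the manifest’s signatures.

```python
def has_c2pa_segment(data: bytes) -> bool:
    """Rough check for a C2PA manifest in a JPEG byte stream.

    Walks the JPEG marker segments looking for an APP11 (0xFFEB)
    payload that mentions the C2PA JUMBF label. Illustrative only:
    presence of the label is not proof of a valid, signed manifest.
    """
    if not data.startswith(b"\xff\xd8"):  # must begin with SOI marker
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:  # lost sync with marker structure
            break
        marker = data[i + 1]
        if marker == 0xD9:  # EOI: end of image
            break
        # Segment length (big-endian) includes the two length bytes.
        length = int.from_bytes(data[i + 2:i + 4], "big")
        payload = data[i + 4:i + 2 + length]
        if marker == 0xEB and b"c2pa" in payload:  # APP11 segment
            return True
        i += 2 + length
    return False
```

A tool like this could only tell a moderator that provenance data is present; as OpenAI notes, the metadata can be stripped, so its absence proves nothing about how the content was made.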

 

Image from Artnet News

 

Improving Transparency Around Digital Provenance

OpenAI’s decision to officially partner with the C2PA comes as no surprise, as the American research organization has consistently spearheaded initiatives to develop AI technology in ways that preserve the integrity of human creators. In the first quarter of 2024, OpenAI began integrating Content Credentials into all images produced by its AI image generator DALL•E 3, both in ChatGPT and through the OpenAI API. 

 

In addition to adopting Content Credentials as a standard in its products, OpenAI is also accepting applications to test its DALL•E 3 image detection classifier through its Researcher Access Program. The classifier helps content creators and moderators predict whether a piece of content was created with an AI tool. 

 

According to OpenAI, the classifier has shown strong performance: it correctly identified 98% of DALL•E 3 images, while incorrectly flagging less than 0.5% of non-AI images. Lapses remain, though. The classifier still cannot reliably identify images created by other AI generator tools the way it does images created by DALL•E 3. 
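The reported rates also show why base rates matter when interpreting a detector’s flags. The sketch below is illustrative arithmetic, not OpenAI’s methodology: the 1% share of AI images in the assumed pool is a hypothetical figure chosen for the example.

```python
def flagging_precision(tpr: float, fpr: float, base_rate: float) -> float:
    """Fraction of flagged images that are genuinely AI-generated.

    tpr: true-positive rate (share of AI images correctly flagged)
    fpr: false-positive rate (share of non-AI images wrongly flagged)
    base_rate: assumed share of AI images in the overall pool
    """
    true_flags = tpr * base_rate          # AI images that get flagged
    false_flags = fpr * (1 - base_rate)   # non-AI images that get flagged
    return true_flags / (true_flags + false_flags)

# With the reported 98% / <0.5% rates and a hypothetical 1% base rate:
print(f"{flagging_precision(0.98, 0.005, 0.01):.1%}")  # prints 66.4%
```

Even with these strong rates, roughly a third of flags in that scenario would be false alarms, which is why OpenAI frames the classifier as a research tool rather than a definitive arbiter.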

 

“Our goal is to enable independent research that assesses the classifier’s effectiveness, analyses its real-world application, surfaces relevant considerations for such use, and explores the characteristics of AI-generated content,” said OpenAI. 

 

The organization also stresses how crucial it is for the leading entities in the industry to join the movement. 

 

“While technical solutions [like the above] give us active tools for our defenses, effectively enabling content authenticity in practice will require collective action.”