OpenAI takes steps to boost AI-generated content transparency
OpenAI is joining the Coalition for Content Provenance and Authenticity (C2PA) steering committee and will integrate the open standard’s metadata into its generative AI models to increase transparency around generated content.

The C2PA standard allows digital content to be certified with metadata proving its origins, whether it was created entirely by AI, edited using AI tools, or captured traditionally.

OpenAI has already started adding C2PA metadata to images generated by its latest model, DALL-E 3, in ChatGPT and the OpenAI API. The metadata will also be integrated into OpenAI’s upcoming video generation model, Sora, when it launches more broadly.

“People can still create deceptive content without this information (or can remove it), but they cannot easily fake or alter this information, making it an important resource to build trust,” OpenAI explained.

The move comes amid growing concerns about the potential for AI-generated content to mislead voters ahead of major elections in the US, UK, and other countries this year. Authenticating AI-created media could help combat deepfakes and other manipulated content used in disinformation campaigns.

While technical measures help, OpenAI acknowledges that enabling content authenticity in practice requires collective action from platforms, creators, and content handlers to retain the metadata as it travels to end consumers.

In addition to C2PA integration, OpenAI […]
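For readers curious what the provenance data looks like in practice, the rough sketch below shows one way to inspect C2PA metadata embedded in an image. It is not an OpenAI-provided workflow: it assumes the open-source c2patool command-line utility from the Content Authenticity Initiative is installed and that it prints a file’s manifest store as JSON when given a path, and the filename dalle3_output.png is a placeholder.

```python
import json
import subprocess
from typing import Optional


def read_c2pa_manifest(image_path: str) -> Optional[dict]:
    """Return the C2PA manifest store embedded in an image, if any.

    Assumption: the `c2patool` CLI is on PATH and emits the manifest
    store as JSON when invoked with a file path. If no C2PA data is
    present (or it was stripped), the tool exits non-zero and we
    return None.
    """
    result = subprocess.run(
        ["c2patool", image_path],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        return None
    return json.loads(result.stdout)


if __name__ == "__main__":
    manifest = read_c2pa_manifest("dalle3_output.png")  # placeholder file
    if manifest:
        # The manifest records provenance claims, such as which tool or
        # model produced the asset and whether it was later edited.
        print(json.dumps(manifest, indent=2))
    else:
        print("No C2PA provenance metadata found.")
```

Because the metadata is cryptographically signed, tampering with it invalidates the manifest, which is what makes it hard to fake or alter, though, as OpenAI notes, it can simply be removed.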