OpenAI doubles down on safety as Sora questions grow with ‘Safety Bug Bounty’

OpenAI’s public messaging this week has centered on safety, provenance and stronger safeguards across its products. In the space of three days, the company published updates on Sora safety, teen protections and a new safety bug bounty program.

What OpenAI is saying

On March 23, OpenAI published Creating with Sora safely, setting out the protections it says are built into Sora 2 and the Sora app. The company highlighted visible and invisible provenance signals, C2PA metadata, consent-based likeness controls, internal tracing tools, age-related safeguards and content filtering designed to reduce harmful or misleading use.

This indicates that OpenAI wants Sora to be seen not just as a creative tool, but as a product wrapped in layers of trust and accountability measures. This message addresses the central issues in AI-generated media, especially as synthetic video becomes harder to distinguish from real footage.

The bug bounty push

On March 25th, OpenAI launched a public Safety Bug Bounty program focused on abuse scenarios and AI-specific risks, including prompt injection, agentic harm, data exfiltration and platform integrity issues. The company said the program is meant to complement its security bug bounty by rewarding reports that may not qualify as classic cybersecurity flaws, but still pose real safety risks.

A day earlier, OpenAI also published a separate teen safety update aimed at helping developers build safer AI experiences for younger users. Together, those two posts reinforced the company’s message that responsible deployment is becoming just as important as raw model capability.

The bigger read

Taken together, OpenAI’s public line is about tightening controls as its tools become more widely used. Even with outside reporting raising questions around Sora, the company is publicly stressing safer deployment, auditability and trust across the board.