OpenAI's promised Media Manager, a tool designed to let creators control whether their content is used in AI training, has yet to materialize, raising questions about the company's development priorities amid escalating copyright disputes. The delay comes despite a self-imposed deadline and growing concern about the use of copyrighted material in training AI models.

The Media Manager, announced in May, aimed to identify and manage copyrighted text, images, audio, and video across multiple sources, reflecting creators' preferences. This tool was intended to address criticism and shield OpenAI from IP-related legal challenges. However, sources familiar with the matter suggest that the project has not been a major internal focus, with some former employees even questioning if any significant work was done on it.

Adding to the uncertainty, a key member of OpenAI’s legal team working on the project transitioned to a part-time consultant role. Furthermore, communication with external collaborators regarding the tool has been limited, indicating a possible slowdown or reevaluation of the project. Consequently, OpenAI has missed its deadline to have the tool in place by 2025 without providing any updates or revised timelines.

The urgency for a solution stems from the fact that AI models learn patterns from vast training datasets, which can include copyrighted works. As a result, models sometimes produce near-copies of existing content, prompting lawsuits from artists, writers, and news organizations alleging unauthorized use of their work. Although OpenAI has struck some licensing deals and offered ad hoc opt-out methods, critics have dismissed these measures as inadequate.
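One opt-out OpenAI does document is a robots.txt directive that blocks its GPTBot web crawler. A minimal sketch of such a file, placed at a site's root, might look like this (the rules shown are illustrative, and this only affects future crawling, not material already collected):

```text
# robots.txt — ask OpenAI's GPTBot crawler not to index this site
User-agent: GPTBot
Disallow: /
```

Because this mechanism operates per-site and per-crawler, it does little for creators whose work is rehosted elsewhere, which is part of why such methods have been called inadequate.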

While Media Manager promised a comprehensive approach to content management, experts are skeptical it can deliver. Identifying every copy of a work across the internet, handling altered versions of that work, and ensuring legal compliance across diverse jurisdictions all remain open challenges. Some experts also argue the tool would unfairly shift the burden of policing AI training onto creators, many of whom may not even know it exists.

Despite these concerns, OpenAI maintains that its models create transformative works and has invoked "fair use" in its ongoing legal battles. That defense rests in part on the company's assertion that training capable AI models without copyrighted material is impossible, underscoring its commitment to its current training methodology. The outcome of those legal battles remains uncertain, and so, should the fair-use defense fail, does the future of Media Manager.