
The New AI Warnings in Movies Are Basically Useless

by thenowvibe_admin

The warnings started showing up unexpectedly in the end-credits crawl of several major movies this summer: all-caps screamers listed just beneath the usual Hollywood legalese cautioning against unauthorized copying, distribution, or exhibition. “THIS WORK MAY NOT BE USED TO TRAIN AI,” blares one such warning attached to Universal’s animated-heist hit The Bad Guys 2. “ALL RIGHTS IN THIS WORK ARE RESERVED FOR PURPOSES OF LAW IN ALL JURISDICTIONS PERTAINING TO DATA MINING OR AI TRAINING, INCLUDING BUT NOT LIMITED TO ARTICLE 4(3) OF DIRECTIVE (EU) 2019/790.”

Little known outside of entertainment-law and European-bureaucrat circles, Directive 2019/790 is the European Union’s 2019 copyright directive; its Article 4(3) lets rights holders opt their works out of text and data mining, though the provision is basically inapplicable in the U.S. But the intent of that message — which also appears in the credits of fellow Universal Pictures blockbusters Jurassic World Rebirth and June’s “live-action” adaptation of How to Train Your Dragon — is crystal clear. At a time when AI companies such as Perplexity, OpenAI, Nvidia, and Meta are attempting to digest the entirety of the internet — harvesting vast terabytes of digital content from books, image databases, social-media platforms, and sometimes even film libraries to train machine-learning and generative-AI models — studios are trying to claw back a semblance of control over their intellectual property.
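Article 4(3)’s opt-out is generally understood to require a machine-readable reservation for content published online. In practice, one common signal is a robots.txt file that names known AI crawlers, and Python’s standard library can parse one. A minimal sketch (the robots.txt contents and URL here are hypothetical; GPTBot and CCBot are real crawler user agents used by OpenAI and Common Crawl, and — much like the credits warnings — compliance with such files is entirely voluntary):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt expressing an AI-training opt-out:
# named AI crawlers are disallowed sitewide, everyone else is allowed.
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())
# parse() alone leaves can_fetch() pessimistic; modified() marks the
# rules as freshly loaded so lookups consult them.
parser.modified()

print(parser.can_fetch("GPTBot", "https://example.com/trailer.mp4"))   # False
print(parser.can_fetch("Mozilla", "https://example.com/trailer.mp4"))  # True
```

The asymmetry this sketch exposes is the article’s point: the file can express the reservation, but nothing in the protocol forces a scraper to honor it.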

Despite a recent, first-of-its-kind victory for the AI company Anthropic (namely, a ruling that training the language model behind its chatbot Claude on millions of digitized books qualifies as fair use), the jury remains both literally and figuratively out as to whether scraping other kinds of data, including movie content, into the vast, insatiable maw of machine learning constitutes actual copyright infringement. But Hollywood’s back-lot chieftains are increasingly at pains to deliver a legal brushback, anticipating a time when government policy is established — and the suing can begin in earnest.

“There is no law on this yet. The law is unsettled on whether scraping is fair use,” says Darren Trattner, an entertainment attorney who is a prominent voice for writers, directors, and actors in the AI landscape. The end-credits warning tag “represents an attempt by the studios to tell the AI companies that ignorance is not a defense for copyright infringement: ‘Don’t tell us you didn’t know or weren’t warned about legal liability if your computer sucked in our entire movie.’ While it’s not legally enforceable, such a warning will undoubtedly play well for the studios in front of a judge or jury.”

So far, Universal remains the only studio deploying a so-called “do not train” tag in its movies’ credits. But streamers have been quietly burying similar machine-learning saber rattles deep within their terms of service agreements. Among them: Disney+ (“You agree that you will not … engage in any of the foregoing in connection with any use, creation, development, modification, prompting, fine-tuning, training, benchmarking or validation of any artificial intelligence or machine learning tool, model, system, algorithm, product or other technology”), Paramount+ (“We reserve the right to prevent third parties from text and data mining of Content and any information on the Service”), and Universal’s corporate sibling, Peacock (“You may not use any Content for the purpose of directly or indirectly training, developing or improving a software tool or service, including any artificial intelligence tool, model, system or platform”).

According to sources familiar with institutional thinking on the matter, such protective language represents less a policy shift than a perfunctory industry response to a technological change that could imperil Hollywood IP. One insider offers an antecedent: When camcorders first emerged in the early 1980s, studios responded to the new piracy threat by slapping on disclaimers warning of criminal prosecution for anyone who videotaped their movies in theaters. (Universal declined to comment for this story.)

But according to AI rights-tracking experts, proving that your copyrighted material has been ripped off by data scrapers — or establishing that data has been scraped at all — is in most scenarios impossible. While it is baldly illegal for an AI engine to duplicate and distribute copyrighted material, feeding machines mass quantities of proprietary IP merely so they can learn from it exists in a gray area of the law.

Dan Neely is the co-founder and chief executive of Vermillio, a tech start-up that works for movie studios, record labels, and in partnership with the heavyweight Hollywood talent agency WME to monitor and protect against unauthorized uses of generative AI (such as shutting down pornbot deep fakes). As he explains it, Universal’s “do not train” disclaimers serve to address the legal uncertainty around artificial intelligence as the studio’s “public way of reserving its rights.” But if an AI company is training, for example, an AI animation engine by feeding its servers every animated movie ever produced, it becomes exceedingly difficult to prove the characters, settings, and aesthetic scraped from The Bad Guys 2 are more than a marginal influence on whatever the platform spews out. And even that can only come up for legal scrutiny once the AI makes it out of beta testing. “If someone has built an animation engine and it exists inside New Animation Studio XYZ, no one ever sees the underbelly of how that animation is actually made,” Neely says. “As soon as the system starts outputting content, then you can start interrogating. The training process in and of itself is a hard thing to enforce.”


It is perhaps not a coincidence that Universal’s “do not train” tags began showing up at multiplexes this summer. In June, in what is already being regarded as a landmark showdown between Hollywood and the burgeoning generative AI powers that be, Walt Disney Co. and NBCUniversal teamed up to sue the widely used artificial intelligence image-generator Midjourney. The suit claims the service (which reported a robust $300 million in revenue last year) “functions as a virtual vending machine generating endless unauthorized copies of Disney’s and Universal’s copyrighted works” including such protected properties as characters from The Simpsons, Star Wars, and the Marvel Cinematic Universe. It also characterizes the company’s data-scraping methods as “bootlegging.”

But in a 43-page response filed earlier this month in U.S. District Court for the Central District of California, Midjourney fired back, claiming not only that the platform is operating within the boundaries of fair use but that the studios benefit from the “use of Midjourney and other generative AI tools.” Further, the company dinged Universal and Disney for ineffectively policing the rights around Yoda, Chief Wiggum, and Wolverine: “Plaintiffs could have — and should have — followed Midjourney’s Digital Millennium Copyright Act notice and takedown procedure, codified in its Terms of Service, by identifying specific images they believe are infringing and providing the URLs where they are displayed.”

Last month, the Trump administration unveiled America’s AI Action Plan, focused on enhancing U.S. leadership in the machine-learning space and removing perceived barriers to AI innovation while charting a “decisive course to cement U.S. dominance in artificial intelligence.” The EU, meanwhile, has taken a much more aggressive approach to regulating AI and providing copyright protections. Hence Directive 2019/790’s all-caps shout-out in the Universal end-credits tags: For now, it remains the only legal bulwark anywhere through which Hollywood can attempt to exert oversight of data scraping.

Lauren Oliver is the co-founder and CEO of Incantor AI, a new content-creation platform that deviates from standard generative AI by training its servers only with licensed data (i.e., datasets of known and trackable provenance, with none of the unauthorized content or IP infringement that typically gets hauled in through the prevailing scraping methods). Unlike the unstructured, open-source, multiple-format, world-is-not-enough style of data ingestion employed by companies like EleutherAI, Cerebras, or Databricks, Incantor can operate on a relatively discrete data set while also tracking data rights and providing royalties to its licensees. (Clients to date include major podcasters and indie-movie distributors, which are using Incantor to translate their voice content into different languages; the Hollywood talent agency Verve is advising Incantor as it explores showbiz applications for the tech.)

Oliver is reluctant to speculate whether “do not train” tags will have a chilling effect on data scraping despite the “move fast and break things” pace and proliferation of artificial intelligence development. But she understands the warnings’ sudden appearance in the credits crawl (in tandem with the Midjourney lawsuit) as both a sign of the times and a harbinger of things to come. “Look, there’s a lot of people in Silicon Valley who actually believe and are gleeful about the fact that AI can permanently topple Hollywood,” Oliver says. “That you can remove the power from the gatekeepers and in somewhat dystopian fashion be able to create your own movie that’s perfectly customizable to you with the push of a button.”

The 2019/790 warning “signifies that the industry as a whole is waking up,” Oliver says. AI developers like DeepSeek and Google, with its Gemini models, “would not have AI models if they had not scraped every piece of media available in the world right now.”

Neely, for his part, is not hopeful about the efficacy of such warnings. He points out that while Universal may want to curtail AI engines’ capacity to data scrape its movies, the studio already let the genie out of the bottle — widely disseminating its films’ characters, settings, and animation styles via online trailers and social-media posts — thereby foiling its own future legal wrangling. “YouTube contains hundreds of thousands of hours of your content,” he says. “You have influencers post your content, and you give them clearance to do so. So what’s really interesting about this is, when the studio circles back around and decides to sue a platform, the platform says, ‘We scraped it from YouTube.’ And YouTube says, ‘Go talk to the creator.’ And the creator says, ‘Go talk to the studio.’ And the studio is back talking to themselves because they unintentionally granted the right for their content to be used on these platforms.”
