New York Enacts First-of-Its-Kind AI Disclosure Law
New York Governor Kathy Hochul recently signed S.8420-A/A.8887-B, a first-of-its-kind law requiring conspicuous disclosure when an advertisement includes a “synthetic performer.” Under the statute, a synthetic performer is a digitally created or modified asset generated using artificial intelligence (AI) or other software that is intended to give the impression of a human performer who is not an identifiable natural person.
The disclosure requirement applies to any person, firm, corporation, or association (or their agents or employees) who, for a commercial purpose, produces or creates an advertisement for a product or service that is placed before the public in New York, where that person has actual knowledge that a synthetic performer is used. Although the statute does not mandate specific disclosure language, the requirement that disclosures be “conspicuous” suggests they must be clear, understandable, and presented in a manner consumers are likely to notice, rather than buried in fine print or end-card legal copy.
The law applies broadly, but includes several notable carve-outs. Advertisements or promotional materials for expressive works (such as movies, television programs, streaming content, or video games) are exempt, so long as the use of a synthetic performer in the advertisement is consistent with its use in the underlying expressive work. Audio-only advertisements are also excluded, as are situations in which AI is used solely to translate the language of a human performer.
The statute further provides that media outlets and platforms that publish or distribute advertisements cannot be held liable under the law solely for carrying a non-compliant advertisement.
Violations may result in civil penalties of $1,000 for a first violation and $5,000 for each subsequent violation. The law takes effect 180 days after enactment.
New Executive Order Seeks to Rein in State Artificial Intelligence Laws
President Trump also recently signed an executive order entitled “Ensuring a National Policy Framework for Artificial Intelligence,” which states its purpose as sustaining and enhancing the United States’ global AI dominance through a minimally burdensome national policy framework, while curtailing states’ authority to regulate the use of AI. Here’s what the Executive Order requires:
Within 30 days, the U.S. Attorney General must establish a Department of Justice (DOJ) “AI Litigation Task Force” dedicated solely to challenging state AI laws that the Trump administration believes conflict with federal AI policy, including laws that improperly regulate interstate commerce or are otherwise unconstitutional.
Within 90 days, the Commerce Department must publish an evaluation of existing state AI laws, identifying those it considers onerous, inconsistent with the Executive Order’s policy goals, or appropriate for referral to the DOJ for legal challenge. At a minimum, the evaluation must identify state laws that require AI systems to alter or suppress truthful outputs, as well as laws that mandate disclosures or reporting by AI developers or users in ways that may violate the First Amendment or other constitutional protections.
States with AI laws identified as onerous may be deemed ineligible, to the maximum extent permitted by law, for remaining non-deployment funding under the federal Broadband Equity, Access, and Deployment (BEAD) program. Federal agencies must also review discretionary grant programs to determine whether funding can be conditioned on states not enacting conflicting AI laws or, where such laws already exist, agreeing not to enforce them during the grant period.
Following the Commerce Department’s evaluation, the Federal Communications Commission must consider whether to adopt a uniform federal reporting and disclosure standard for AI models that would preempt inconsistent state requirements.
Within 90 days, the Federal Trade Commission must issue a policy statement explaining the circumstances under which state laws that require AI models to modify truthful outputs may be preempted by federal laws prohibiting unfair or deceptive acts or practices.
The Administration must also prepare a legislative proposal establishing a uniform federal AI framework that would preempt conflicting state laws, while expressly preserving state authority in areas such as child safety, AI infrastructure, and state government procurement and use.
Taken together, the Executive Order reflects a marked shift toward federal intervention in the AI regulatory landscape. While its ultimate legal effect remains uncertain, it signals a concerted strategy to limit state-by-state regulation and move toward a single federal approach to AI oversight.
Disney Becomes Sora’s First Major Content Licensing Partner
Disney recently entered into a three-year licensing and strategic partnership that will make it the first major content owner to license its intellectual property for use on Sora, OpenAI’s short-form generative AI video platform. Under the partnership agreement, Sora will be able to generate short, user-prompted videos drawing from a curated set of more than 200 characters and elements from Disney, Pixar, Marvel, and Star Wars, including costumes, props, vehicles, and iconic environments. Selected fan-inspired videos will also be curated and made available to stream on Disney+.
The agreement extends well beyond a traditional content license. Disney will also become a major OpenAI customer, deploying OpenAI’s APIs across new products and experiences, including for Disney+, and rolling out ChatGPT internally for employees. In addition, Disney will make a $1 billion equity investment in OpenAI and receive warrants to purchase additional equity, signaling a long-term strategic alignment between the companies. The license expressly excludes the use of talent likenesses or voices, reflecting careful boundary-setting around rights of publicity and performer protections.
At the same time Disney entered into this sweeping partnership with OpenAI, it has taken a more aggressive enforcement posture toward other generative AI companies. Disney (alongside Universal) sued Midjourney over alleged copyright infringement tied to the unauthorized generation of protected characters and has separately raised objections and issued demands to Google regarding the use of Disney IP in connection with its Gemini AI products.
The contrast is instructive. Disney appears willing to embrace generative AI where its use is permissioned, contractually governed, and economically aligned, while reserving litigation and enforcement for AI developers it believes are exploiting copyrighted works without authorization. Taken together, these parallel strategies suggest a future in which rights holders play an increasingly active role in shaping the generative AI ecosystem by rewarding licensed use and aggressively challenging unlicensed appropriation of their intellectual property.