
    SAG-AFTRA’s AI Deal Shows that Hollywood Still Values Human Actors



    There’s a contingent of actors who will see anything short of an outright ban on AI and synthetic performers in Hollywood as a failure on the part of their union. But after going on strike in 2023 to establish baseline rules about compensation and consent when it comes to AI, the guild’s goal in 2026 was to make sure studios valued human actors more than any AI creation.

    SAG-AFTRA’s new tentative agreement with the AMPTP, reached back on May 2, may not include a headline-grabbing new protection that would prevent a studio from using a synthetic actor if it chooses. But the guild did accomplish the mission of proving that real people on screen still matter.


    As was the case with the Writers Guild in its agreement reached with the AMPTP, SAG-AFTRA’s new contract doesn’t scale back any of the wins received in 2023. The technology has improved immensely in the last three years, so there’s understandably pressure on studios to loosen some restrictions, but that didn’t happen. It’s a sign that, hey, maybe studios still do care.

    “The fact that the studios aren’t pushing for more exemptions is a sign that Hollywood still relies on real people,” said Ray Seilie, a trial attorney who specializes in AI issues with Kinsella Holley Iser Kump Steinsapir. “The studios don’t view real people as dispensable right now. They view them as the necessary core to the production of films. If they didn’t believe that, you would see a lot more pushback.”

    SAG-AFTRA’s published summary of the 2026 minimum bargaining agreement includes 12 different provisions related to AI. They cover rules around digital replicas, the security of those replicas, a penalty for using a synthetic performer rather than a real actor, and even notice if a studio licenses an actor’s data to a third party for AI training. All of that is icing on the cake for a deal that also includes minimum raises of 3 percent compounding each year, new protections for background actors, new rules around casting and vertical micro dramas, and the merger of the SAG pension plan and the AFTRA retirement plan with new funding, all while agreeing to a contract term that lasts four years instead of three.

    Labor attorney Maria Rodriguez with McDermott Will & Schulte told IndieWire she was “very impressed” at this “thorough” and “robust” deal that touches on numerous areas and doesn’t leave out any members. On the AI side, she argues that the guild built upon the prior AI contract in small, but meaningful ways.

    “They’re being more specific,” Rodriguez said. “And I think part of it is you learn as things evolve. The contracts are always going to be evolving along with our experience and how AI is used, or should be used, or can be used.”

    For instance, Rodriguez noted how the new contract narrowed the definition of what is an acceptable use of a digital replica to alter a “scripted” performance. The old contract left some wiggle room in what could and couldn’t be done in the edit room, but it now specifies that the “script” refers to the material that was actually handed to the actor and not something that was written in later.

    It also acknowledges how AI tech has evolved. You don’t need a full scan to make a digital replica today, so the new terms specify that a digital replica made even without the actor’s help still counts the same way as a complete scan. The same goes for scans that the producers didn’t make themselves on that movie but come from a third party; the payment is still the same. While neither of these things necessarily change how the studios might use or think about AI, it closes some important loopholes.

    The summary also outlines similar protections for background actors and, crucially, even for minors. It seems incredible that a stipulation barring a digital replica of a child actor from being used for simulated sexual activity wasn’t already in place, but it is now.

    “I think it’s really buttoned down,” Rodriguez said. “They go into even strike protection. Digital replicas won’t be allowable to replace striking actors, which I think is also a protective feature.”

    There’s also attention paid to the security of digital replicas, acknowledging that the studios need to take measures to protect these scans and data from hacks, leaks, or unauthorized use, even for background actors. Seilie would argue that this measure, along with the guidelines around valuing humans over synthetics, was probably not hard for studios to accept. They too want to protect their IP when it comes to security, and they too don’t know how these technologies will evolve. OpenAI’s Sora, for instance, doesn’t even exist any longer.

    The contract also says that a studio must demonstrate a “significant additional value” for using a synthetic performer, language vague enough that Seilie believes it wouldn’t be hard for a studio to clear that bar if it wanted to. But if these guidelines read more like loose guidance than enforceable contract provisions, Seilie argues that may be by design.

    “I suspect that these provisions are designed to be open ended, vague, and subject to further negotiation, specifically because both sides, frankly, want to leave open the possibility that technology requires them to revisit whatever arrangements they have right now,” Seilie said.

    A lot of the provisions specifically mention that the two sides will agree to bargain again later, and Rodriguez says having a procedure for future talks as the use cases evolve is key. It’s also worth noting that this is a minimums agreement, the kind that protects the background actors and stunt performers most at risk of being replaced. The real developments will happen once A-list stars begin negotiating with studios on what is acceptable with their likeness.

    But one aspect still left unsettled is training data for third-party models. Actors don’t have the right to tell studios what to do with their intellectual property, and this contract doesn’t give them that power. What it does do is require the studios to notify the actor if their likeness has been licensed to a third party for use in an AI model. That may not sound like much if you have no legal recourse to stop the studios from doing it. But it does prevent someone from secretly licensing your performance or likeness to a company like OpenAI or Anthropic.

    “Transparency is valuable,” Seilie said. “Hollywood runs on public perception. The fact that an actor will now have the power to say, ‘I didn’t want the studio to do this, but I don’t have any power over them’…you can imagine there’s a potential PR use. There’s some value there, because OpenAI is not going to be transparent unless it has to be.”
