AI has striking writers freaked out. Can Hollywood avoid nightmare scenarios?

Originally appeared on LA Times | written by Ryan Faughnder

The rapid rise of generative artificial intelligence technologies has Hollywood scribes freaked out that automation is coming for the writers room.

And for good reason.

While large language models such as ChatGPT lack anything approaching J.A.R.V.I.S.-level sophistication, it’s not hard to imagine such applications spitting out story outlines and generic dialogue for humans to punch up.

The AI nightmare scenario has loomed over the ongoing writers’ strike, producing some of the cleverest picket signs (“Wrote ChatGPT this”). There’s some concern among scribes that the technologies are just impressive enough to fool executives into thinking they’re more powerful than they actually are.

The Writers Guild of America said it proposed regulations of AI usage in work covered by its contract with the studios: for example, that AI can’t be used as source material. The studios instead offered annual meetings to discuss technological advancements, according to the WGA. The studio alliance has argued that writers are already protected by language in their existing contracts.

The potential for AI in creative industries goes far beyond the written word, though, extending to voice dubbing, language translation and storyboarding. The innovations present issues for both studios and creatives.

Walt Disney Co. Chief Executive Bob Iger last week told analysts that the innovations represent “some pretty interesting opportunities for us,” while also predicting that the technology would be “highly disruptive” and “could be difficult to manage, particularly from an IP management perspective.”

To better understand the broader issues of AI and its uses in Hollywood, I spoke to Dan Neely, a Chicago-based entrepreneur in the space who works with major studios.

His paper three years ago on the power of generative AI formed the basis for the startup Vermillio, which helps copyright holders get paid from authenticated AI images derived from their IP. The company recently worked with Sony Pictures for an AI-based marketing initiative for “Spider-Man: Across the Spider-Verse,” allowing users to create their own Spider-Man avatars from selfies.

This conversation is edited for length and clarity.

We’re seeing exponential growth in the number of images and words being generated from AI, and all that material is based on something that already exists. Why do copyright holders need a third party to track the uses of their IP that are being created through artificial intelligence?

In the past, it was relatively straightforward. It was a piece of music that you and I could tell if it was from a certain person or not. It was a scene from “Minions,” and we can say, “It’s absolutely a Minion.” It’s easy for us to determine. But in the generative space, that’s just not how this technology works. The machine is now taking 1s and 0s — which you and I don’t read — as training facts. And when it uses those things, we need a way to determine the lineage.

So the basic question here is: Who owns those 1s and 0s that are being fed into the machine to create derivative works?

There is no federal law that protects data. There are state laws in New York. But copyright law doesn’t protect facts, and these 1s and 0s are considered to be facts. And so this is a very weird, murky moment, and that’s why you see people suing each other.

Consumers are going to care about it coming from the authentic place. They’re going to care that when they used Spider-Man, they knew it came from Sony. They know, as a fan of that thing, that it’s going to be an authentic version. And that’s why we need authenticated AI.

In other words, as a fan, you don’t want the digital equivalent of a guy in a knock-off Spider-Man suit walking down Hollywood and Highland. This conversation reminds me of the copyright problems that came up in the early days of hip-hop and sampling, when there wasn’t much of a structure for licensing any of that stuff. Are there parallels here?

Absolutely. If you think about what happened with sampling, people have done that for a while. This is the first time it can be done en masse, though. It’s the sheer volume of what can be created in this context. It’s a scary proposition, but it’s a massive opportunity for anyone who holds intellectual property. You just have to kind of turn your head a little bit to say, “It’s going to be a different way in which we monetize our assets.”

So if someone wants to write an outline of a horror novel that is basically using all the collected works of Stephen King to spit something out, King and his publisher should be compensated for that.

That’s right.

Our belief is that most creators — and we can talk specifically about writers — are going to want their own engines. I think about the writers room, for example. The writers room is limited in that there are a certain number of writers that can be in there, but if I can turn that into an unlimited writers room to give me inspiration, move me in a different direction and test something out, that is actually pretty interesting.

And it’s also not new. If you think about painting, right? Da Vinci had plenty of assistants that worked with him on his stuff. Now it’s a machine that’s assisting in the process, but it can explore things and give me inspiration from different angles. And as long as it’s done in an authenticated way, that should be OK.

This is the part where people start to get worried about automation taking people’s jobs. And that’s a big part of the AI conversation right now. But your view seems to be that creatives need not worry so much. Why is that?

The first reason is that human genius is always going to be human genius. When you read something and I read something, our emotional reactions to those things are going to be different because of your experience versus mine. That’s really hard to teach a machine. Emotion is really complex. I don’t think that’s ever going to happen.

The second thing is, if you think about it in terms of assistants, I might need a way to explore something. Say, I’m having writer’s block and I have this moment where I can’t really figure out where to take this character. But if I can interact with a machine that is very good at giving me 3,000 options, those are great things for me personally as I think about a creative process.

If AI becomes widely adopted, one of the emergent skills is going to be the ability to feed the right prompt into the machine to get the best results. So, for example, my character is stuck at a fork in the story. I can ask the machine to give me five options of where I can take it.

I don’t think that’s any different from the way it happens today when you have input from other people. But you may get a different thing that you’ve never thought of, or the people you usually check in with might give you something different.

Look at what we’re seeing in the visual art space with the artists who have started creating generative versions of their artwork. The value of their original artwork — the pieces they drew themselves, painted themselves — is going up exponentially. And the reason for that is because, again, human touch is going to get more and more valuable.

So when I think about what’s going on with the fear in Hollywood, I actually think those who are at the top of that game are going to be more and more valuable.

Right. The flip side of that is that maybe there are people who might be threatened if they’re not at that level. You’ve got the Greta Gerwigs of the world, where you know there’s no replacing them with a robot, but there are some steps that could be automated.

There are things that are going to get automated, but I don’t think it’s any different from things that have been automated when we think about CGI or virtual production.

So you see it as another tool.

That’s right.

Right now, we’re kind of in an unregulated universe where they’re trying to figure this stuff out, but the technology is already being used. So is the answer just that studios and rightsholders are just going to have to sue the hell out of everyone for a while?

I think some people are going to take that stance, but it gets really hard. If someone’s made something that looks like Yoda, it’s easy to say, “That’s our IP.” But when you have no idea that that’s where those 1s and 0s came from, that’s really complex.

When you look at everything that happened with the art community when the generative engines first came out, they were up in arms. Some companies sued around that topic. The same thing is going to happen in writing. But let’s all get around the table and say, “Hey, we know it’s authenticated and we know everyone can get paid in the right way.” It shouldn’t be an issue.

How else do you see this technology being used?

You’ll see more and more pitches that involve this stuff. If you’ve got a great script and you want to bring it to life as you’re pitching it, these are great tools to be able to do that stuff.

So it’s like using AI to do storyboarding, previz and that kind of stuff, right? Or you get to hear something in Samuel L. Jackson’s voice if you’re writing a script with a certain actor in mind. You could do something like that in a pitch meeting.

Yeah. Talent is going to get to a place where they’re also going to want their own engines, because there isn’t enough time in the world to do all the things that come at them. When they do a brand commercial, for example, they don’t necessarily have to show up to do all of it.

So Morgan Freeman could do voiceover for every commercial in perpetuity.

Yes, every car commercial is now Morgan Freeman!
