The new frontier of artificial intelligence is text-to-video and, though it may be a few years before a blockbuster is produced entirely by AI, it seems incredible to be writing those words barely a year after the first text-to-image generative models were launched.
The first generative models to
produce photorealistic images exploded into the mainstream in 2022 — and soon
became commonplace. Tools like OpenAI’s DALL-E, Stability AI’s Stable
Diffusion, and Adobe’s Firefly flooded the internet with jaw-dropping images.
Runway, a startup that makes generative video models (and the company that co-created Stable Diffusion), released its latest model, Gen-2, whose quality is striking, says Will Douglas Heaven in a 2024 trends piece for the MIT Technology Review. “The best clips aren’t far off what Pixar might put out,” he gushed.
Runway and other GenAI developers are building a world where anyone can generate a video without doing any actual production.
“No lights. No camera. All Action” is
the company’s tagline for Gen-2.
“Hyper-realistic AI generation is the goal for many
competing companies,” suggests Conner
Carey in a SproutVideo roundup of AI tools.
Midjourney V5, for example, was released in 2023 with stunning results,
according to Benj Edwards at Ars
Technica. Google has premiered Lumiere, which “looks to be one
of the most advanced text-to-video models yet,” rates Matt Growcoot at PetaPixel.
Just as the arrival of the internet
led to an explosion of user-generated content posted to social media,
generative AI will accelerate the creation of video content online. Some
predict that as much as 90% of online content will be AI-generated by 2025.
Alexandra Suich Bass, writing for The
Economist, says AI will transform every aspect
of Hollywood storytelling and predicts it will be used to tell new types of
stories.
“As storytelling becomes more
personalized and interactive, films will change and so will gaming, an industry
where people can choose their own adventures more easily than moviegoers can.
The amount of entertainment available will also balloon.”
Film historian David Thomson has
compared GenAI to the advent of sound. “When movies were no longer silent, it
altered the way plot points were rendered and how deeply viewers could connect
with characters,” notes Suich Bass. Meanwhile Cristóbal Valenzuela, who runs
Runway, says AI is more like a “new kind of camera,” offering a fresh
“opportunity to reimagine what stories are like.” Perhaps both are right.
There is already one filmmaker
claiming to have made the first feature-length film from a single long-form
prompt.
At the end of last year, artist Dan Sickles released a new version of the classic black-and-white documentary Man With A Movie Camera. Made in 1929 by Dziga Vertov, it captured a day
in the life of Russia’s citizens and used a number of groundbreaking
techniques.
Sickles has used AI to generate 480 unique
iterations of Vertov’s original film in what he calls a homage to — and
interrogation of — the original masterpiece, TV Tech’s Phil Kurz reports.
Each iteration of Man With AI Movie Camera was generated from a prompt created by the artist that describes Vertov’s original film shot for shot, with timings matched to the original frames. Each generation draws on a data set curated by the artist to give that iteration a distinct aesthetic while retaining the length and essence of each shot, mirroring the original film.
The series was created using Stability AI’s open-source tools (Dreamstudio, ClipDrop and Stable Audio) and is being sold online via the NFT marketplace SuperRare. It grossed more than $25,000 when the first sale went live in mid-December. Each of the works in the series will be revealed individually throughout 2024.
Sickles said his project “serves as a
model for how AI can function as an equitable public good for creative
production.”
Outside of artworks and experiments,
could these new video-generating AI tools actually serve up a feature film that
might give Marvel a run for its money?
It’s no surprise that top studios are
taking notice. Paramount and Disney are both exploring the use of generative AI throughout their production pipelines.
In fact, AI raises bigger questions about the future of stories and the nature of collective storytelling. For example, asks Alexandra Suich Bass, will GenAI simply imitate previous hits, resulting in more derivative blockbuster films and copycat interpretations of pop songs that lack depth, rather than original stories and art forms?
And as entertainment becomes more
personalized, will there still be stories that become part of humanity’s
collective consciousness and move large numbers of people, who can talk about
them together?