In the past year, algorithms have become far better at generating illustrations, art, and photorealistic scenes. The pace of development is unrelenting, which means that this year we should expect AI-generated video tools. The implications are as exciting as they are challenging for the creative community.
Wired has sketched a tour through the recent history of AI and its ability to churn out convincing, commercially viable illustrations, photographs, and paintings. Will Knight, the magazine’s senior writer, primes us to expect higher-quality AI-made images and perhaps the emergence of AI video generators in 2023.
Researchers have already demonstrated prototypes, although their output is so far relatively simple. Stability AI (maker of Stable Diffusion), Midjourney, Google, Meta, and Nvidia are all working on the technology.
“AI elicits a special kind of anxiety for the film
and TV industry’s creative classes,” Joshua
Glick, Associate Professor of Film and
Electronic Arts at Bard College, writes in another article for Wired.
“The question is whether
feature-length films made by text-to-video generators will eliminate the
skilled labor of screenwriters, graphic artists, editors, and directors.”
Glick is doubtful that Hollywood
studios will launch a major lineup of AI-generated features any time soon. More
importantly, he doesn’t think audiences are ready for AI-generated feature
narratives either.
“Even as text-to-video software
continues to improve at an extraordinary rate, it will never replace the social
elements crucial to the product Hollywood makes and the culture that surrounds
both gaudy blockbusters and gritty dramas alike,” Glick thinks.
He believes the human influence on the creation of film and TV shows is what makes storytelling on screen tick, and it is something AI can’t (yet) emulate.
A more pressing concern, he notes, is that studios will use algorithm-driven predictive analytics to greenlight only those projects they believe are sure to make money, leading to less diversity of form, story, and talent.
AI has already made its way into the creation of
filmed stories, most notably in VFX. The crowd-simulation software Massive, originally developed at Weta Digital (now Wētā FX), has helped effects artists capture the seemingly “unfilmable,” especially at the macro scale. Beginning with the digital hordes of orcs and humans that populated the Battle of Helm’s Deep in The Lord of the Rings: The Two Towers (2002), Massive has since been responsible for expansive collections of lifelike entities, from the shiver of sharks in The Meg (2018) to the swarms of flying demons in Shang-Chi and the Legend of the Ten Rings (2021).
There are further examples of AI use, ranging from the “synthetic resurrection” of iconic actors to performance-capture-assisted CGI characters. AI tools are so useful that “synthetic imagery
is sure to become central to preproduction,” Glick believes. For instance,
“screenwriters will be able to use AI-generated imagery for their pitch decks
to evocatively establish the mood and feel of a project and position it within
a larger genre.”
Likewise, concept artists will
benefit from the back-and-forth tweaking of prompts and visual outputs as they
flesh out a film’s narrative arc in the early stages of storyboarding.
Generative AI might also expand the “previs” process of transforming flat
images of material environments and character interaction into 3D
approximations of scenes.
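To make that prompt-iteration loop concrete, here is a minimal sketch, assuming the open-source Hugging Face diffusers library, a Stable Diffusion checkpoint, and a CUDA-capable GPU; the checkpoint name, prompt, and seed values are illustrative assumptions, not part of any studio workflow. It renders a few reproducible variants of the same concept frame so small prompt edits can be compared side by side.

```python
# A minimal sketch of prompt iteration for concept art, assuming the
# open-source `diffusers` and `torch` packages and a CUDA-capable GPU.
import torch
from diffusers import StableDiffusionPipeline

# Illustrative checkpoint; any compatible text-to-image model would do.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# A hypothetical pitch-deck prompt; the artist tweaks the wording between runs.
prompt = "rain-soaked neon alley, lone detective in a trench coat, cinematic wide shot"

# Fixed seeds keep each variant reproducible, so a small change to the
# prompt can be compared against the same starting noise.
for seed in (7, 21, 42):
    generator = torch.Generator("cuda").manual_seed(seed)
    image = pipe(prompt, num_inference_steps=30, generator=generator).images[0]
    image.save(f"concept_seed_{seed}.png")
```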
Extending this further, the use of AI in film and TV production might require a new set of skills for guiding AIs toward desired results. Glick envisions a broader reframing of authorship as VFX
supervisors, computer scientists, concept artists, engineers, and animators
“become increasingly responsible for the movements and expressions of the
characters on screen, as well as the look and feel of the world they inhabit.”
Far from ushering in the death of
cinema, AI can help film the “unfilmable” and make cinema more collaborative:
“Never before has an amateur or
seasoned professional been able to build such an elaborate project on such a
small budget in such a short amount of time.”
This theme is taken up by Rex Woodbury,
who writes about all the ways AI is set to disrupt industries in his article on
Substack.
“Generative AI is the most compelling
technology since the rise of mobile and cloud over a decade ago,” he declares.
“We’re at an AI inflection point… underpinning a Cambrian explosion in
innovation.”
He draws a direct line from tools like Adobe Premiere and Final Cut for editing, smartphones and GoPro cameras for action shots, drones for aerial shots, and YouTube and TikTok for publishing and monetizing video, to AI as the next innovation that will democratize the creative industries.
“Just as AI amplifies creativity, AI
amplifies productivity,” he says. “We see this in the tools that give writers
and marketers superpowers, like Jasper.ai, Copy.ai, and Lex, [to help
brainstorm ideas].”
He predicts that generative AI will soon collide with other maturing technologies, such as VR and AR, and imagines text prompts that generate immersive, three-dimensional virtual worlds.
“Within the lifetime of someone born
today, we’ll see every part of human life, work, and society reinvented by AI,”
he theorizes.
It’s also likely that AI will help evolve an “internet of me,” of which TikTok represents only the first rung: customized content created just for me and you.
“The world is shifting to
personalization, and AI is the fuel on the fire. All of a sudden, a ‘1-on-1’
experience is replicable at scale — and today’s AI applications are still
rudimentary compared to those we’ll see in the coming years. Think of every
Craigslist category — education, books, home decor. Each one is ripe for
reinvention.”
None of these writers dismisses the very real ethical and legal issues surrounding AI’s spread through society. But it’s more a matter of figuring out how to live with AI than of banning it outright. That genie has long left the bottle.
“Leaps forward in technology often
walk a fine line between deeply-impactful and dystopian,” Woodbury says. He
lists the major ethical issues we need to work out, among them:
• Who is responsible for AI’s mistakes?
• Who is the creator of an AI work? Is it the AI? The developers? The person who wrote the prompt? The people whose work was used to train the model?
• How do we determine what’s human-made vs. machine-made? Where does the line that separates the two even exist?
• How do we get rid of AI bias?
• How do startups differentiate themselves and build a moat?
• Where will value accrue in the ecosystem, and how should value creation be distributed?
• Will AI be a net job creator or a net job destroyer? How do we retrain workers who are displaced by AI?
That’s a massive list. Perhaps we need an AI for that.