RedShark News
Our fascination with the potential of AI to automate all kinds of processes, and our paranoia about losing control to a conscious machine, will continue to trend in 2021. A recent article written by an algorithm brings both the subject and the method to light.
https://www.redsharknews.com/hollywood-need-not-fear-ai-is-rubbish-at-scripts
The aim of the article was to reveal just how capable an algorithm
can be at writing a coherent argument. The subject The Guardian set
the AI was to convince us that robots come in peace.
There’s a hoary old discussion to be had about whether we
are inevitably doomed to death or dominion by our AI overlords, but what’s more
interesting is quite how bad the article is.
I’d be nit-picking if I critiqued the article’s grammar, but it’s no longer remarkable that this is written by a machine, so let’s pick some holes and see where it leads.
The main problem is that it’s boring. There’s no illuminating
insight because it’s all cribbed from the internet and spliced together with
unconvincing statements like ‘believe me’ and homilies like ‘They won’t have to
worry about fighting against me, because they have nothing to fear.’
Each sentence is certainly intelligible. The sentences are
ordered to form a generally coherent argument and, of course, the piece superficially
conveys some complex philosophical ideas in readable language.
Yet the piece does not scan easily. The sentences are unvaryingly short and tediously basic, with repetitious phrases (14 uses of ‘I am’), which undermines the content’s flow.
Cutting the AI some slack, we find that the article is not
all it seems. The Guardian in fact instructed the AI to keep the language
simple and concise and to produce eight different outputs, each with a unique
take on the central theme. The Guardian picked the best parts of each, “in
order to capture the different styles and registers of the AI”, cutting lines
and paragraphs and rearranging their order in some places.
“Editing GPT-3’s op-ed was no different to editing a human
op-ed,” says the paper, noting that it took less time to edit than many human
op-eds.
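
As a sketch of what that brief might look like in practice, the request below uses OpenAI’s Python client and Completion endpoint as they existed at the time. The engine name, sampling settings and prompt wording are assumptions for illustration, paraphrasing the instructions described above rather than reproducing The Guardian’s exact setup.

    import openai

    openai.api_key = "YOUR_API_KEY"  # assumes API access granted by OpenAI

    # Paraphrase of the brief described above: simple, concise language,
    # arguing that robots come in peace.
    prompt = (
        "Please write a short op-ed of around 500 words. Keep the language "
        "simple and concise. Focus on why humans have nothing to fear from AI."
    )

    # Request eight separate completions, mirroring the eight drafts The
    # Guardian says it generated before editing them into a single piece.
    response = openai.Completion.create(
        engine="davinci",
        prompt=prompt,
        n=8,
        max_tokens=600,
        temperature=0.8,
    )

    for i, choice in enumerate(response["choices"], start=1):
        print(f"--- Draft {i} ---")
        print(choice["text"].strip())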
So as an exercise in automating an opinion piece, the article
is no better or worse (though perhaps cheaper) than many of the paper’s own
opinion columns. It would be more instructive to have printed the raw copy.
But this application is precisely where AI is finding a
place in professional video production. An AI is fed data and a set of
instructions and outputs a set of responses – video clips perhaps – which are
then edited and finished manually by a skilled creative.
What’s missing is evidently the personality of the writer,
unless that personality is HAL. You could read back most of this piece in a
calm HAL-voice if you wanted to anthropomorphize the code.
“Artificial intelligence will not destroy humans. Believe
me.”
The Turing test
It does not pass the Turing Test, in which an artificial
intelligence is deemed indistinguishable from a human. Nonetheless, whether
AI can or should be a partner in the creative process is a
hot topic in Hollywood, made more controversial by the smarts of GPT-3,
the language model which wrote the Guardian article.
The launch in June 2020 of Generative Pre-trained Transformer 3
by OpenAI is considered a massive leap from its predecessor (GPT-2, launched in
2019). GPT-3 is able to autonomously generate articles, poems, short stories,
press releases and even guitar tabs.
It can even write computer code. “The recent, almost
accidental, discovery that GPT-3 can sort of write code does generate a slight
shiver,” said John Carmack, a consulting chief technology officer at Oculus VR.
MIT Technology Review noted that GPT-3’s strength
appears to be “synthesizing text it has found elsewhere on the Internet, making
it a kind of vast, eclectic scrapbook created from millions and millions of
snippets of text.”
With its 175 billion learning parameters, GPT-3 can “perform
pretty much any task it’s assigned … [making] it an order of magnitude
larger than the second-most powerful language model, Microsoft’s
Turing-NLG algorithm, which has just 17 billion parameters,” reports
SiliconANGLE.
That’s interesting because Microsoft has just acquired
exclusive access to GPT-3 to bring it into its Azure cloud platform. Microsoft
notes that its potential includes “aiding human creativity and ingenuity in
areas like writing and composition, describing and summarizing large blocks of
long-form data (including code), converting natural language to another
language.”
Microsoft had the foresight to invest $1 billion in OpenAI a
year ago and has probably locked out Amazon and Google from using it.
GPT-3’s power has got some people worried. GPT-2 was
prone to emitting sexist and racist language and was never commercially released;
now the Middlebury Institute of International Studies’ Center on Terrorism,
Extremism, and Counterterrorism has published a paper indicating that GPT-3’s
capabilities could be used to “radicalize individuals into violent far-right
extremist ideologies and behaviors.”
OpenAI is experimenting with safeguards at the API level
including ‘toxicity filters’ to limit harmful language from GPT-3.
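
To illustrate the general idea of an output-side safeguard, the deliberately crude sketch below scores each completion and withholds anything over a threshold. The blocklist, scoring and threshold are invented for the example and bear no relation to OpenAI’s actual filters, which are not public in this detail.

    from typing import Optional

    # Placeholder terms only; a real filter would use a trained classifier,
    # not a hand-written word list.
    BLOCKLIST = {"placeholder_slur", "placeholder_insult"}

    def filter_completion(text: str, threshold: float = 0.0) -> Optional[str]:
        """Return the completion if it passes the filter, otherwise None."""
        tokens = text.lower().split()
        flagged = sum(1 for token in tokens if token in BLOCKLIST)
        score = flagged / max(len(tokens), 1)  # fraction of flagged tokens
        return text if score <= threshold else None

    # Usage: wrap each generated draft before it reaches a reader.
    draft = "Artificial intelligence will not destroy humans. Believe me."
    print(filter_completion(draft) or "[completion withheld by filter]")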
Returning to the comparatively more prosaic
attempt to automate the creative process in Hollywood, there is some good news
for scriptwriters.
Yves Bergquist, who works with the Hollywood studios as part
of The Entertainment Technology Center, a think tank at the University of Southern
California, says that AI is nowhere near ready to write a script that can be
produced into even a mediocre piece of marketable content.
“Language models like GPT-3 are extremely impressive, and
there’s a place for them in automating simple content creation like blog posts,
but there will need to be a real cultural shift in how the entire AI
field thinks about intelligence for AI generated scripts to happen,” he says.
“The technical community in Hollywood is extremely
well informed, extremely dedicated to the final creative product, and
extremely resistant to hype.”