Wednesday, 1 February 2023

This Is What’s Next for AI: Content, Coding… and Western Civilization?

NAB

On New Year’s Eve, OpenAI president and co-founder Greg Brockman (@gdb) tweeted: “Prediction: 2023 will make 2022 look like a sleepy year for AI advancement & adoption.”

The best-known AI tools are those from OpenAI, such as the image generator DALL-E 2 and the text generator ChatGPT, but the technology is advancing so quickly that by the time one industry has grasped the implications of the latest development, another has emerged to leapfrog it in sophistication.

“Now it’s beginning to head toward video, and then it’ll go 3D,” Mark Curtis, head of innovation at Accenture’s Interactive division, tells Patrick Kulp at Adweek. “We’ve had to continuously rewrite this trend over the last month and a half because new stuff was coming up. And I worry that everything we’re going to say is going to be irrelevant by February.”

While imagery and text were the big leaps forward in 2022, there are many other areas where machine learning techniques could be on the brink of industry-transforming breakthroughs, including music composition, video animation, code writing, and translation.

“It’s hard to guess which dominoes will fall first, but by the end of this year, I don’t think artists will be alone in grappling with their industry’s sudden automation,” says Vox’s Kelsey Piper.

She predicts we’ll soon have image models that can depict multiple characters or objects and consistently handle more complicated modeling of object interactions (a weakness of current systems).

“I doubt they’ll be perfect, but I suspect most complaints about the limits of current systems will no longer apply.”

Piper also anticipates better text generators, ones that provide better answers to nearly every question you ask them. That may already be happening: Microsoft is reportedly planning to integrate ChatGPT into its Bing search engine.

“Instead of providing links in response to search queries, a language model-powered search engine could simply answer questions.”
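
To make that shift concrete, here is a minimal sketch in Python of the two paradigms side by side. Every function in it is a stub invented for illustration; it is not Bing's or OpenAI's actual API.

```python
# A hypothetical sketch of the shift Piper describes: a conventional search
# engine returns ranked links, while a language-model-powered one composes a
# single answer. All functions are stubs invented for illustration; none of
# this is Bing's or OpenAI's real API.

def classic_search(query: str) -> list[str]:
    """Old paradigm: return a ranked list of result URLs (stubbed)."""
    return ["https://example.com/a", "https://example.com/b"]

def llm_answer(query: str, sources: list[str]) -> str:
    """New paradigm: feed the query plus retrieved pages to a language
    model and return one synthesized answer (the model call is faked)."""
    prompt = f"Using these sources {sources}, answer: {query}"
    return f"[model-written answer, from a prompt of {len(prompt)} chars]"

query = "Why is the sky blue?"
print(classic_search(query))                     # links out
print(llm_answer(query, classic_search(query)))  # one direct answer
```

In practice, systems along these lines typically ground the model in retrieved pages rather than asking it to answer from memory alone, which is one way they try to limit fabricated answers.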

Marketers also say 2023 will be the year that brands and agencies get serious about how synthetic content can be deployed to serve bottom lines and augment human creativity.

 “The things that agencies should be doing is beyond experimenting with this; they should be calculating now what it means for their business,” Curtis tells Kulp.

Generative AI, he added, “is a tool humans will use to kickstart creative thinking or to create the base level of something, which they then adapt continuously, or to move more quickly. …It is not an answer to everything, but it does radically shift the economics of a lot of what we do in creativity.”

Agency BBDO has experimented with the technology and agrees that the ad industry should be thinking more about the various ways generative AI could revolutionize how creatives do their jobs.

“In my mind, it doesn’t appear that many of the people commenting on this have even used the tool,” Zach Kula, group strategy director, tells Adweek. “If they did, it would be obvious it’s not even close to replacing creative thinking. In fact, I’d say it exposes how valuable true creative thinking actually is. It puts the difference between original creative thought and eloquently constructed database information in plain sight.”

Experts say it’s likely that technologies like voice cloning, synthetic imagery and generated copy could align in the next year to let marketers create fully realistic-seeming videos out of whole cloth with AI.

According to Kulp, those capabilities could make it easier for marketers to produce targeted, personalized video ads aimed at different segments at scale.
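
As a rough sketch of what that pipeline might look like, here is a hypothetical Python outline in which one creative brief fans out into one tailored ad per audience segment. Every generate_* function is a stand-in for a generative service (script text, cloned voiceover, synthetic video); none of them correspond to real product APIs.

```python
# A hypothetical outline of the segment-scale personalization Kulp describes.
# Each generate_* function is a stand-in for a generative service; none of
# them are real product APIs.

SEGMENTS = ["new parents", "retirees", "college students"]

def generate_script(product: str, segment: str) -> str:
    # A text model would write ad copy tailored to the segment here.
    return f"30-second pitch for {product}, written for {segment}"

def generate_voiceover(script: str) -> bytes:
    return b"audio"  # a voice-cloning model would render narration here

def generate_visuals(script: str) -> bytes:
    return b"video"  # an image/video model would render footage here

def assemble_ad(product: str, segment: str) -> dict:
    script = generate_script(product, segment)
    return {
        "segment": segment,
        "script": script,
        "audio": generate_voiceover(script),
        "video": generate_visuals(script),
    }

# One creative brief fans out into one tailored ad per audience segment.
ads = [assemble_ad("example sneaker", seg) for seg in SEGMENTS]
print([ad["segment"] for ad in ads])
```

The economic point is in the loop at the bottom: the marginal cost of one more segment-specific ad drops toward the cost of a few model calls.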

Alongside the possible upsides, generative AI carries a host of risks that any marketer needs to be aware of, including the potential for accidental copyright infringement or plagiarism. Brands are already preparing defenses against fake content such as auto-generated user reviews or defamatory material generated at scale.

Within five years, 80% of enterprise marketers will establish a “dedicated content authenticity function” to root out AI-generated misinformation, according to industry analyst firm Gartner. The firm also projects that 70% of enterprise CMOs will list “accountability in ethical AI” among their top concerns as regulations and risks mount.

In fact, 2023 will be marked by a tightening of regulations around AI. In the US, Microsoft (an investor in OpenAI), GitHub and OpenAI are being sued in a class action that accuses them of violating copyright law by letting Copilot, GitHub’s code-writing service, regurgitate sections of licensed code without providing credit.

In Europe, the EU’s proposed AI Act could limit the type of research that produces AI tools like GPT-3, experts have warned. According to a TechCrunch article by Kyle Wiggers, so could more local efforts, like New York City’s AI hiring statute, which requires that AI and algorithm-based tech for recruiting, hiring or promotion be audited for bias before being used.

“Next year will only bring the threat of regulation, though — expect much more quibbling over rules and court cases before anyone gets fined or charged,” Wiggers says. “But companies may still jockey for position in the most advantageous categories of upcoming laws, like the AI Act’s risk categories.”

Seen in that light, Brockman’s tweet is alarming: the technology is advancing rapidly while rules and ethical safeguards fail to keep pace with it.

“I think a slow, sleepy year on the AI front would be good news for humanity,” Wiggers says. “We’d have some time to adapt to the challenges AI poses, study the models we have, and learn about how they work and how they break.

“And… we might have time for a more serious conversation about why AI matters so much and how we — a human civilization with a shared stake in this issue — can make it go well.”

Synthetic content generators are going to seem trivial in comparison with the broader ambition of AI: to effectively mimic human intelligence. A human-level AI would be what Max Roser, founder and director of Our World in Data, describes as a machine, or a network of machines, capable of carrying out the same range of tasks that humans can.

Not so long ago the stuff of science fiction, such a development is now forecast to arrive much sooner than once expected.

According to a number of experts and surveys, including the Metaculus forecasting community and research by Ajeya Cotra of the nonprofit Open Philanthropy, there is broad agreement that the timeline for achieving human-level AI is shorter than a century, and many forecasters expect it substantially sooner than that.

In Roser’s article, the majority of those who study this question put at least a 50% chance on transformative AI systems being developed within the next 50 years. In that case, Roser says, it would plausibly be the biggest transformation in the lifetimes of our children, or even in our own.


