Friday 29 March 2024

GenAI Is Good for Artists, So What’s the Problem?

NAB

OpenAI is voluble about its mission to deliver all the benefits of AI to humanity, but is non-committal at best on whether it should be paying creators for the work its machines are trained on.


Quizzed on this, Peter Deng, OpenAI's VP of consumer product and head of ChatGPT, told SXSW, "I believe that artists need to be a part of that ecosystem, as much as possible. The exact mechanics I'm just not an expert in. But I also believe that if we can find a way to make that flywheel of creating art faster, I think we'll have really helped the industry out a bit more."

Generative AI, then, should be viewed as a definite plus for the creative community, who should all be thankful to it for speeding up their process and quit moaning about being compensated.

Asked point blank whether artists deserve compensation, Deng avoids a direct response.

“How would I feel if my art was used as inspiration [for an AI]? I don’t know,” he said. “I would have to ask more artists. I think that, in a sense, every artist has been inspired by artists that have come before them. And I wonder how much of that will just be accelerated by AI.”

Nothing to see here then, creative community. Move along.

Deng’s main message in the provocative hour-long moderated debate was that AI and humanity are going to “co-evolve,” so get used to it.

“I actually believe AI fundamentally makes us more human,” Deng declared. “It’s a really powerful tool. It unlocks the ability for us to go deeper and explore some of the things that we’re wondering about.”

“Fundamentally, our minds are curious and what AI does is lets us go deeper and ask those questions.”

In his example, someone learning about Shakespeare might struggle to get past the language or understand the play’s context. But they could boost their appreciation of the text by quizzing an AI.

In a similar way, Deng imagines everyone having a personal AI that they could interact with for any number of reasons, such as bouncing around ideas, solving problems or answering questions.

In this sense, AI is an evolution of the printed encyclopedia, of Wikipedia, or of an internet search engine.

“We are shifting in our role from being the answers and the creators to more of the questioners and the curators,” he said. “But I don’t think it’s a bad thing. If you take a step back, what’s really interesting about AI is that it gives us this tool, this new primitive that we can start to build on top of.”

The calculator is another analogy. Instead of spending time doing arithmetic, we can now think about higher-level mathematical problems. Instead of spending time recalling every single fact, we have Google or databases where knowledge resides, allowing us to ask higher-level questions.

“The level of skill that humanity has just keeps on getting pushed up and up and up with every sort of big technology. Since AI is such a foundational technology we’re going to be able to push our skill level up and up.”

Kids, he suggests, could use AI to program, learning how to code even before they learn how to write.

You can’t really argue with this sort of vague and optimistic approach to AI. It’s Deng’s job, after all, to promote OpenAI’s development.

He goes on to talk about how the mission of the company inspired him to join it from his previous role at Meta. He claims to want to help create “safe” artificial general intelligence, or AGI, which is the next level of the technology that OpenAI is working on. He wants to “distribute the benefits to all of humanity.”

Deng said, “I’ve never seen a technology in my lifetime that’s this powerful, that has this much promise. Just to be a part of something that’s going to be so beneficial to humanity if we get it right. And I just want to not mess it up.”

However, interviewer Josh Constine, the former editor-at-large of TechCrunch and now a venture partner at early-stage VC firm SignalFire, is no fool. He does ask Deng probing questions, such as whether bias in training data sets is a concern and what OpenAI is going to do about it.

Deng essentially says it’s up to the user to decide, seemingly absolving OpenAI of responsibility.

“My ideal is that AI can take on the shape of the values of each individual that’s using it. I don’t think it should be prescriptive in any such way.”

Constine tries to get Deng to agree that giving AI a standard set of ethical values must be a good thing for all of mankind: an AI which is not just superintelligent but also “empathetic.”

Deng ducks the topic with more platitudes. “The beautiful part of humanity is that different parts of the world have different cultures and different people have different values. So it’s not about my values that I want to instill. I would just hope that we’re able to find some way to take the world’s values and instill them.”

Later in the interview he gives this revised approach: “How do we find ways to instill the values that we have and [impart that] learning to AI so that AI can kind of be a part of our coevolution?”

Would Deng, hypothetically, trust an AI to defend him in court?

“[If] I were ever to be falsely accused of a crime I would absolutely like to have AI as a part of my legal team. One hundred percent.” AI would act as an assistant to the legal counsel, “just listening to the testimony and in real-time, cross-checking the facts and the timelines, being able to look at all the case law and the precedent, and to suggest a question to a human attorney. I think there’s absolutely human judgment involved. But that level of sort of super power assistant is going to be really powerful.”

That said, Deng wouldn’t yet trust AI for everything. Just as one might gradually come to rely on a car’s autonomous functions, it will take time to build up trust in the machine. A key part of the evolution for Deng and OpenAI is real-world learning. OpenAI argues that the reason it releases ChatGPT and other large language models into the world is to test, trial, adapt and improve them through constant iteration outside the lab. Deng argues this makes the AI better for humans in the long run.

“I think that the path of how we get there, the repeated exposures and experiencing of it, is a huge part of the coevolution. We’re not developing AI and keeping it in the lab. We’re trying to make it generally accessible to other people, so that people can try it out and can gain that literacy, and can get a feeling for what this technology can do for you.”

Literacy, or education about how to use and work with AI and understand its potential threats, weaknesses and strengths, is, he says, very important. He advocates education schemes that do this and says OpenAI and its investor Microsoft are already paying for some of these programs.

One way to ensure AI remains a tool for mass use and mass literacy is to make it free. Deng commits to the idea that a version of ChatGPT will always be free.

“There should always be a free version. Absolutely. That’s part of our mission — to distribute the benefits to all of humanity. It just so happens that it costs a lot to serve right now.”

He says enterprise users are paying to use OpenAI tools at a price “commensurate with their use,” but some of that value is able to trickle down.

OpenAI wants to push the boundaries of the tech, “but also make sure that we’re developing it in a very safe way,” he claims. “And the way that we build product on the inside is very much a combination of multiple people with multiple different perspectives on what could be.”

Pushed on whether there is a threat from deepfakes and other AI-generated information in this election year, Deng agrees that it does matter. He points to OpenAI’s support of content credential initiatives like C2PA. But will this matter in the longer term? He is not so sure.

“In the future, I don’t know if people will care,” he said. “Walking down the street here in Austin, I’m not sure how much we care that a billboard ad was created using Photoshop or not. Or indeed what tools were used to create that content. I don’t know how people will care [about AI generated content] in future but I do know that if people will care, then it will be corrected for.”

In other words, let the market decide.

Having warmed his subject up with some easy lobs, Constine gets down to the meat of the questioning. Where does Deng stand on how fast AI development from OpenAI and others should proceed? Should AI development be slowed in order for all its implications for society and industry — and regulatory guardrails — to catch up?

“I’m somewhere in the middle. With any new technology, there’s going to be really positive use cases and there’s some things that we need to really consider. My personal viewpoint is the way that we actually figure out what those challenges are and how we actually solve them is to release at a responsible rate in a way that gives society a chance to absorb and make sure we have the right safeguards in place.”

He adds, “I don’t think that AI will be safely developed in a lab by itself without access to the outside world. Companies are not going to be able to learn how people want to use it, where all the good is, and also what are all the areas that we need to be very cautious about [without release in the wild].”

Constine probes: if an AI makes a mistake, who is responsible? Should that AI model be changed or pulled back? Should the engineer be held liable? Should the company?

Deng reiterates that releasing product is the best way of seeing the good and the bad.

“AI will make mistakes, but it’s important that we release it so that the mistakes that are made are ones [for which] we’ve already baked in some of the mitigations [safety features]. That iterative deployment is my best bet of how we can kind of advance this technology safely.”

 

