SMPTE has called on the Media & Entertainment industry to be more active and vocal in the debate about developing ethical AI systems. Doing nothing, or not doing enough, is not an option because “failure may come at a high human cost,” the organization says.
“The time to discuss ethical considerations in AI is now, while the field is still nascent, teams are being built, products roadmapped, and decisions finalized. AI development is no longer just a technical issue; it is increasingly becoming a risk factor.”
This call for action forms a substantial part of the “SMPTE Engineering Report: Artificial Intelligence and Media,” which was produced alongside the European Broadcasting Union (EBU) and the Entertainment Technology Center (ETC). The report was the result of a task force on AI standards in media that began in 2020. Since then, it has become clearer to everyone that AI will transform the media industry from pre-production through distribution and consumption.
“I believe that AI will continue to see exponential growth and adoption throughout 2024,” said SMPTE President Renard T. Jenkins. “Therefore, it is imperative that we examine the overall impact that this technology can have in our industry. That is why the progressive thought leadership presented in this document is so important for us all.”
The report begins with a technical understanding of AI and machine learning, followed by the impact these technologies will likely have on the media landscape. The report then moves on to examine AI ethics and ends by discussing the role that standards can play in the technology’s future.
The report describes today’s AI as “disruptive, vague, complex and experimental” — all at once. “It is difficult to understand, and easy to load up with fears and fantasies,” the report reads.
“This is a dangerous combination. The convergence of corporate hype, fledgling methods, biased datasets, and the urgency to productize, are all fertile grounds for failure,” it continues.
“Learning through failure is generally a good way of testing and improving tentative tech like AI — except when models are put in a position to make decisions about policing, hiring, synthetic conversations, or even content recommendation and personalization.
“Then, failure may come at a high human cost.”
Organizations must examine the downside risk of deploying underperforming and unethical AI systems, especially because, in most cases, ethical and technical requirements are the same.
“For example, unseen bias is as bad for model performance as it is discriminatory. Model transparency is not just an ethical consideration: it is a trust-building instrument.”
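The report’s point that unseen bias is also a performance problem can be made concrete with a simple subgroup audit. Below is a minimal sketch, not a method from the report: the column names (group, label, pred) and the toy data are hypothetical, and a real audit would use domain-appropriate slices and metrics.

```python
# Minimal sketch of a per-group performance check, the kind of audit the
# report's argument implies. Column names and data are hypothetical.
import pandas as pd

def per_group_accuracy(df: pd.DataFrame, group_col: str = "group") -> pd.Series:
    """Accuracy per subgroup; large gaps flag bias that also hurts overall performance."""
    correct = (df["label"] == df["pred"]).astype(float)
    return correct.groupby(df[group_col]).mean()

# Toy example: a model that silently underperforms on one subgroup.
df = pd.DataFrame({
    "group": ["a", "a", "a", "b", "b", "b"],
    "label": [1, 0, 1, 1, 0, 1],
    "pred":  [1, 0, 1, 0, 1, 0],
})
print(per_group_accuracy(df))  # group a: 1.0, group b: 0.0 -- a red flag
```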
SMPTE urges the M&E industry to bring its own voice “and nearly 150 years of success marrying human and technological genius” to the debate.
“Media holds a substantial and powerful place in our society as the mass distributor of human narratives and social norms. Media must bring this unique voice and hybrid human/machine culture to AI development and the debate on AI ethics.”
The report explains how Media & Entertainment companies collect and process large amounts of consumer data, which increasingly means they must comply with a growing list of legal regimes and data governance requirements. It also points to a substantial opportunity to use computer vision in virtual production and post-production processes.
SMPTE suggests that the quality and diversity of training sets (for example, “how color correction can affect representation of minorities”) and the use of deepfake technology are “critical areas” where ethical considerations are paramount.
The media industry’s history of sophisticated legal practice around likeness rights, royalties, residuals, and participations is a “substantial advantage in navigating issues related to computational derivatives of image and content,” it writes.
The paper argues for a standards-based approach to verification and identification, not only of the image (e.g., format and technical metadata) but also of the talent itself and the authenticity of content.
“Persistent, interoperable, and unique identifiers have aided media supply-chains in the past, and could well help with labeling and automating the provenance of authentic talent in the future age of AI in M&E,” it states. Such work is ongoing, including at the Coalition for Content Provenance and Authenticity (C2PA).
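As a rough illustration of what a persistent, content-bound identifier could look like, here is a generic sketch. It is emphatically not the C2PA manifest format; the URN scheme and field names (creator, derived_from) are invented for the example.

```python
# A generic illustration (not the C2PA specification) of a persistent,
# interoperable identifier: derive a unique ID from the content bytes and
# bind it to a small provenance record. All field names are hypothetical.
import hashlib
import json

def content_identifier(data: bytes) -> str:
    """Content-addressed ID: identical bytes always yield the same identifier."""
    return "urn:example:sha256:" + hashlib.sha256(data).hexdigest()

asset = b"frame or essence bytes would go here"
provenance = {
    "id": content_identifier(asset),
    "creator": "Example Studio",   # hypothetical creator/talent assertion
    "derived_from": None,          # ID of a parent asset, if this is a derivative
}
print(json.dumps(provenance, indent=2))
```

Content addressing is one simple way to make an identifier persistent and interoperable: any party can recompute the same ID from the same bytes.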
“At a minimum, requirements for data and model transparency would go a long way towards reinforcing trust in computational methods and help convert those in the industry still reluctant to use statistical learning to optimize human processes.”
Around the corner, the development of conversational agents (chatbots) creates serious ethical risks, especially as the industry looks to create highly immersive and personalized experiences in the metaverse.
“Bias is the model-killer,” SMPTE contends. “Black box algorithms help no one. Intellectual and cultural diversity is critical to high performance. Product teams must broaden their ecosystem view.”
There’s a call for ethical considerations to be embedded in all aspects of digital product design and development. Seeding ethics at the product level is essential if bias is to be understood as a complex ecosystem of inputs, features, models, outputs, and outcomes, it says.
“Any organization’s output, products, and decisions (deliberate or not) inherently fit its culture and values. This is why AI ethics is high-stakes: it deploys an organization’s culture and values on a large scale,” the report argues.
“Because they shape society at scale and have a history of taking the public interest seriously, media companies have a distinct responsibility to move forward with their AI ambitions, with full awareness of these applications’ ethical considerations. They should ensure that all aspects of their development (including data collection), deployment, and end-uses, support the law as well as their own values regarding privacy, justice, tolerance, and human rights.”
The AI Ethics Pipeline
The entire value chain of AI development, from product design to data collection to model deployment, should be secure, transparent, explainable and auditable, says SMPTE.
In contrast, black box machine learning frameworks are “ethically and statistically dicey. They foster sloppiness in data science teams and mistrust for those already suspicious of machine models. What cannot be explained should not be deployed in a decision-making environment.”
The report continues: “In a world where organizations are often too suspicious or too enthusiastic, only secure, transparent, explainable, and auditable machine models can scale resiliently. Additionally, all stakeholders deserve transparency, each in their own language, across different points of view and technical sophistication.”
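One way to read the “auditable” requirement is that every model decision should leave a reconstructable trace. The sketch below assumes a model exposed as a plain Python callable and an append-only JSON-lines log; both are illustrative choices, not anything the report specifies.

```python
# A minimal sketch of the "auditable" requirement: a thin wrapper that logs
# every prediction with its inputs, model version, and timestamp so each
# decision can be reconstructed later. The predict() interface and the
# JSON-lines log format are illustrative assumptions, not the report's spec.
import json
import time
from typing import Any, Callable

class AuditedModel:
    def __init__(self, model: Callable[[Any], Any], version: str, log_path: str):
        self.model = model
        self.version = version
        self.log_path = log_path

    def predict(self, features: Any) -> Any:
        output = self.model(features)
        record = {
            "ts": time.time(),           # when the decision was made
            "model_version": self.version,
            "features": features,        # what the model saw
            "output": output,            # what it decided
        }
        with open(self.log_path, "a") as f:  # append-only audit trail
            f.write(json.dumps(record) + "\n")
        return output

# Usage: wrap any callable model; every call leaves an auditable record.
scorer = AuditedModel(lambda x: int(sum(x) > 1.0), version="v0.1", log_path="audit.jsonl")
print(scorer.predict([0.4, 0.9]))  # prints 1; one JSON line appended to audit.jsonl
```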
Ethics, it says, should be part of Quality Assurance for any and all computational systems.
“AI is still an ungoverned technical frontier. Everything around it, from roadmapping to modeling to seeding in company culture, is complex and challenging. Mistakes will happen. Organizations must communicate comprehensively and with humility about their journey to approach and implement processes around ethical AI, for the benefit of all.”
With technical standardization of AI still in its infancy, there is an imperative on the media industry to provide language and frameworks to support its development, SMPTE urges.
“AI is an emerging technology, and AI ethics is an almost entirely blank slate. Examples of successful, organization-wide implementation of ML transparency and trustworthiness are extremely rare.”
But this should be motivation to try harder, SMPTE says. “Transparency is not just key: it is a perennial concern.”
The report warns of “model drift,” where, as the world changes, the problem changes, the data changes, and model performance suffers. “There is no longer a fit between the model and the system, or behavior that it is representing. Only transparent and auditable models can catch model drift before it causes damage.”
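The report prescribes no detection method, but a common statistical approach is to compare the live input distribution against the distribution seen at training time. A minimal sketch, assuming a single numeric feature and a conventional 0.05 significance threshold:

```python
# Hedged sketch of one common way to catch model drift: compare live inputs
# to the training distribution with a two-sample Kolmogorov-Smirnov test.
# The single-feature framing and the 0.05 threshold are simplifying
# assumptions; the report itself names no specific technique.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
train_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)  # distribution at training time
live_feature = rng.normal(loc=0.6, scale=1.0, size=5_000)   # the world has shifted

stat, p_value = ks_2samp(train_feature, live_feature)
if p_value < 0.05:
    print(f"Drift suspected (KS statistic {stat:.3f}): retrain or investigate.")
else:
    print("No significant drift detected.")
```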