Friday 20 January 2017

Big data is driving AI across media

KNect365 TMT
Device access to cloud AI APIs is the game changer, providing highly powerful and scalable real-time capabilities for processing data.
The concept of ‘machines thinking like humans’ has moved beyond theory thanks to the increase in – and increased access to – massive amounts of unstructured multimedia, combined with the low cost of high-powered, specifically cloud-based, computing. Essentially, if it weren’t for big data there would be no artificial intelligence.
“There’s been a huge influx of data with everyone feeding sound, text and imagery over social media, which has accelerated our ability to try to find ways to process and understand it,” says Ian Hughes, analyst at 451 Research. “Traditional processing and analytics are too slow since they don’t scale, so research has been pushed into AI as a way of dealing with data.”
Adoption is bound to grow as all media experiences become fully connected and new products are developed to make the user experience more convenient, relevant and satisfying. Voice recognition is an early example.
“Speech-to-text services could become a ‘table stake’ in multi-language video markets providing automated subtitling over any video content,” reckons Nagra senior director product marketing Simon Trudelle.
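To make that concrete: an automated subtitling pass could be as simple as posting a programme’s audio to a cloud recognition service and writing the timed transcript out as subtitles. The sketch below does exactly that against a hypothetical endpoint; the URL, parameters and JSON response shape are illustrative assumptions, not any vendor’s real API.

# Minimal subtitling sketch: post audio to a hypothetical cloud
# speech-to-text endpoint and emit SRT. The endpoint and response
# format are assumptions for illustration, not a real provider's API.
import requests

STT_ENDPOINT = "https://api.example.com/v1/speech:recognize"  # hypothetical

def srt_timestamp(seconds):
    # Format seconds as an SRT timestamp: HH:MM:SS,mmm
    ms = int(round(seconds * 1000))
    h, ms = divmod(ms, 3600000)
    m, ms = divmod(ms, 60000)
    s, ms = divmod(ms, 1000)
    return "%02d:%02d:%02d,%03d" % (h, m, s, ms)

def transcribe_to_srt(audio_path, api_key, language="en-GB"):
    # Post the raw audio to the (assumed) recognition endpoint.
    with open(audio_path, "rb") as f:
        resp = requests.post(STT_ENDPOINT,
                             params={"language": language},
                             headers={"Authorization": "Bearer " + api_key},
                             data=f)
    resp.raise_for_status()
    # Assumed response: {"segments": [{"start": s, "end": s, "text": ...}]}
    lines = []
    for i, seg in enumerate(resp.json()["segments"], start=1):
        lines.append(str(i))
        lines.append(srt_timestamp(seg["start"]) + " --> " +
                     srt_timestamp(seg["end"]))
        lines.append(seg["text"])
        lines.append("")
    return "\n".join(lines)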
“You can imagine that AI systems will be able to analyse, by facial recognition or object detection, the actual content for metadata gathering,” says Paul Turner, VP of enterprise product management at Telestream. “Given that metadata is key to automated workflows, this could vastly expand our capability to ‘mine’ content for other purposes.”
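As a flavour of what such content mining might look like, here is a minimal sketch that samples frames from a video and logs timecodes where faces are detected, using OpenCV’s stock Haar-cascade detector. A production system would use far stronger models for face recognition and object detection; the sampling rate and file handling here are placeholder choices.

# Minimal sketch: mining timecoded metadata from video with the stock
# OpenCV face detector. Real systems would use stronger detection and
# recognition models; this only records where frontal faces appear.
import cv2

def face_timecodes(video_path, sample_every_s=1.0):
    # Haar-cascade frontal-face model shipped with opencv-python.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0  # fall back if FPS is unknown
    step = max(1, int(fps * sample_every_s))
    metadata = []
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % step == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = cascade.detectMultiScale(gray, scaleFactor=1.1,
                                             minNeighbors=5)
            if len(faces) > 0:
                metadata.append({"time_s": frame_idx / fps,
                                 "faces": len(faces)})
        frame_idx += 1
    cap.release()
    return metadata

# e.g. face_timecodes("episode.mp4") -> [{"time_s": 12.0, "faces": 2}, ...]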
Some 75% of Netflix’s usage is driven by recommended content, and those recommendations are themselves developed with data – reducing the risk of producing content that people won’t watch and surfacing content that consumers are eager for. This groundbreaking use of big data and basic cognitive science in the content industry has shown others its potential.
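For readers curious about the mechanics, the core idea behind such recommendation rails is collaborative filtering: unseen titles are scored for a viewer by their similarity to titles that viewer has already watched. The toy sketch below shows item-based filtering on an invented ratings matrix; Netflix’s actual system is vastly more sophisticated.

# Toy item-based collaborative filtering. The ratings matrix is
# invented; a real service would learn from millions of viewers.
import numpy as np

# rows = users, cols = titles; 0 means "not watched"
ratings = np.array([[5, 4, 0, 1],
                    [4, 5, 1, 0],
                    [1, 0, 5, 4],
                    [0, 1, 4, 5]], dtype=float)

# Cosine similarity between title columns.
norms = np.linalg.norm(ratings, axis=0)
sim = (ratings.T @ ratings) / np.outer(norms, norms)

def recommend(user, top_n=2):
    # Score unseen titles by similarity to the titles this user rated.
    seen = ratings[user] > 0
    scores = sim[:, seen] @ ratings[user, seen]
    scores[seen] = -np.inf  # never re-recommend watched titles
    ranked = np.argsort(scores)[::-1]
    return [i for i in ranked if scores[i] > -np.inf][:top_n]

print(recommend(0))  # title indices, best first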
“The world’s biggest content owners are going direct to consumers,” says Trudelle. “With Netflix and Amazon now in the top ten content creation companies worldwide, it could drive a paradigm shift in the media industry. With a growing stock of videos available, just relying on manually managed catalogues or curated lists to create TV or SVOD services has already started reaching its limits.”
The use of AI relies heavily on massive volumes of unstructured data – and a lot more has become available now that video-enabled consumer devices are connected. Capturing and managing TV/video platform data so it can be exploited by advanced predictive algorithms is becoming a key focus for the media industry.
“With early data mining you needed to be experienced in statistical analysis to take advantage of the technology – now we are able to put some AI tech into the hands of those who don’t have a PhD to make it work,” says ThinkAnalytics founder/CTO Peter Docherty. “We can also take advantage of elastic computing power to build and scale models and begin to develop more industry-specific tools.”
Voice assistants such as Amazon’s Echo and Google Home record user voices in order to function, a logical extension of which is to have cameras on smart TVs and STBs relay information back to the operator about who is watching to improve individual profiling, content serving, ad targeting and automated product insertion (using tools like Mirriad).
This may appear more intrusive than the way in which Google or Amazon appropriates data from web searches, for example, and it opens up a debate about how much data consumers may be willing to part with for perceived benefit or service discounts.
According to Bloomberg, Google, Amazon and Microsoft are already aggregating voice queries from each system's user base to educate their respective AIs about dialects and natural speech patterns.
“The advent of cloud-based apps and APIs means 2017 will be about personalisation,” says IBM’s EU Cognitive Solutions & IoT executive Carrie Lomas. “It’s not just about knowing age and gender but knowing a consumer’s emotional response to products marketed to them. Cognitive computing enables media and brands to personalise their approach in a frictionless way.”
At Kudelski (Nagra’s parent), principal data scientist Pietro Berkes says machine learning (ML) algorithms are used to assist human decisions in all its core businesses including helping operators understand the behaviour of subscribers, predict churn and optimise their catalogue.
Its security division uses ML methods for “privacy-preserving user behaviour modelling and intrusion detection.” ML algorithms are also applied to help infrastructure operators better manage peak traffic situations and to detect and prevent fraud in deployed systems, he says.
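Of those applications, churn prediction is the most standard: it is usually framed as a supervised-learning problem, training a classifier on historical subscriber features labelled with who left, then scoring current subscribers for risk. A minimal scikit-learn sketch, with invented feature names and file paths (not Kudelski’s actual method), might look like this.

# Minimal churn-prediction sketch with scikit-learn. The feature
# names and CSV layout are invented for illustration; any real
# operator dataset would differ.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

FEATURES = ["viewing_hours_30d", "days_since_last_play",
            "support_tickets", "tenure_months"]  # hypothetical columns

df = pd.read_csv("subscribers.csv")  # placeholder path
X_train, X_test, y_train, y_test = train_test_split(
    df[FEATURES], df["churned"], test_size=0.2, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)
churn_risk = model.predict_proba(X_test)[:, 1]  # P(churn) per subscriber
print("AUC:", roc_auc_score(y_test, churn_risk))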
Realistically, it may still take several years before new AI APIs become widely available and adopted by the traditional content creation and distribution value chain. “It’s really a new mindset that players need to have which asks ‘What if there were a cloud AI API doing this?’” suggests Trudelle.
That cultural reluctance may also inhibit AI’s adoption in production, although similar benefits of sifting an overwhelming mass of data apply. Video from observational documentary shoots, for example, regularly achieves shooting ratios of 100:1, swamping editorial teams. Auto-assembly and even auto-edit packages are available to package and polish raw multimedia, though instances of their use in professional content creation are rare.
“Making one cut of a promo was fine when TV was the only distribution medium, but today that’s not good enough,” says Oren Boiman, co-founder and CEO of Magisto, an AI-based editing software developer that claims 80 million users.
“If you are creating a trailer for a newspaper website, Facebook, Snapchat, YouTube or Instagram, each one should be formatted differently,” argues Boiman. “You might target by gender, age or by country. With so many variants of the same source media required to optimise every impression online, doing so manually is extremely inefficient.”
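Even at the crudest level, producing those variants means re-rendering one master into several aspect ratios and durations. The sketch below batch-renders platform versions with the ffmpeg command-line tool; the sizes and duration caps are illustrative guesses, not the platforms’ actual delivery specs.

# Minimal sketch: rendering per-platform variants of one master clip
# with ffmpeg (must be on PATH). The specs below are illustrative,
# not the platforms' real delivery requirements.
import subprocess

VARIANTS = {
    "instagram": {"size": "1080x1080", "max_s": 60},   # square
    "snapchat":  {"size": "1080x1920", "max_s": 10},   # vertical
    "youtube":   {"size": "1920x1080", "max_s": 120},  # widescreen
}

def render_variants(master="master.mp4"):
    for name, spec in VARIANTS.items():
        w, h = spec["size"].split("x")
        # Scale to cover the target frame, then centre-crop to it.
        vf = ("scale=%s:%s:force_original_aspect_ratio=increase,"
              "crop=%s:%s" % (w, h, w, h))
        subprocess.run(
            ["ffmpeg", "-y", "-i", master, "-t", str(spec["max_s"]),
             "-vf", vf, name + ".mp4"],
            check=True)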
Other examples of creation aided by AI are springing up. At last year’s Cannes Lions ad festival, a promo produced by Saatchi & Saatchi was reportedly scripted and directed by AI. Even the casting was done by a program that examined electroencephalogram (EEG) brain data from actors and matched them to the emotions it had detected in the song and its singer.
IBM challenged an ad agency to rate a video made by Watson alongside one made to the same brief by another team at the same agency. “They couldn’t decide which one was which but said they preferred the one by Watson,” says Lomas, stressing that creatives are still necessary to shape video and create campaigns “but AI techniques can take away the heavy lifting”.
IBM also demonstrated how Watson could be trained to examine patterns in trailers and horror movies to help create the official trailer for Fox feature Morgan, slashing typical production time from weeks to a few hours.
“It’s a common belief that AI will replace mechanical roles with creative ones,” says Berkes. “While that might be broadly true, the reality is more complex. The output of tasks like creative filtering and automatic editing of movies must ultimately be evaluated by humans.”
Since ML systems need very large amounts of high-quality data to achieve optimal performance, “data collection and curation requires substantial organizational efforts,” he says. “The global shortage of ML experts represents one of the most important difficulties for companies wanting to enter the AI market.”
In response, the generic role of the ML expert will be complemented, and in part replaced, by more specialised roles. According to Berkes, these include ML engineers, who adapt known methods to new sets of data and tune model parameters, and data scientists, who support the engineers with big data cluster architectures and with the visualisation and evaluation of results. Software engineers will also be needed to productise the resulting models.
The biggest losers in an AI ecosystem will perhaps be those in ‘assisting’ roles that can be replaced with automatic systems, rather like the way the traditional role of the ‘in-betweener’ in animated movie production has gradually been replaced by computer rendering techniques.
Even further out, say 2040, where will AI be? “We will have a completely different method of interacting with computers,” says Telestream’s Turner. “The ability to communicate with common language will be a paradigm shift that is as dramatic as the adoption of the mouse was.”
