Friday 28 June 2019

Audible AR will totally change the way we interact with our devices

RedShark News
Audio has often played second fiddle to the visual, yet, to paraphrase George Lucas, sound is more than half the picture. Certainly, when it comes to the immersive information and entertainment experiences promised by ubiquitous high-capacity broadband, the focus has been on what we might see rather than what we might hear. That could be about to change, and here’s why.
Personal voice assistants such as Alexa, Siri and Google Assistant are emerging as the biggest interface revolution since the iPhone popularised the touchscreen. By 2023, we will speak rather than type more than half of our Google search queries, predicts Comscore.
At the same time, one of the fastest-growing categories of body-worn, internet-connected sensors is wearable audio technology. Known as ‘hearables’, these devices are likely to harness machine learning to ascertain our preferences, habits and behaviour in order to engage with us more personally, on a one-to-one level.
So rapid is development in this area that the worldwide market for smart in-ear 'hearables' will be valued at over $73 billion by 2023, according to Futuresource Consulting.
“The thirst for technology integration, notably voice assistants, exhibits potential to build a unique class of innovative hearable products,” believes analyst Simon Forrest.
Coupled with location awareness via on-board GPS, spoken directions will become “an essential skill for hearables”, suggests Forrest, taking them well beyond the ‘command and control’ voice interface we have today and making them capable of guiding users through step-by-step spoken instructions.
The basic use cases are health monitoring, of pulse or stress levels for example, and assisted hearing.

Applied scenarios

One scenario, envisaged by Poppy Crum, Chief Scientist at Dolby Laboratories and an Adjunct Professor at Stanford University, is where you’re trying to follow a football match on TV while in the kitchen cooking. Your hearables know there’s a problem because they’ve detected an increase in your mental stress, based on changes in your blood pressure and brain waves, and will automatically increase the volume of sounds coming from the direction of the TV.
Similar audio amplification and directionality could be applied to help you hear your dinner companion in a restaurant, or a friend in a club.
Hearables can even figure out exactly whom you are trying to hear by tracking your attention, even if you can’t see the person directly.
“We’ve all been at a party where we heard our names in a conversation across the room and wanted to be able to teleport into the conversation.” Soon we’ll be able to do just that, says Crum.
Adaptive noise cancellation technology, integration of voice assistants and addition of smart user interfaces all stem from developments in wireless technology.
Wireless earbuds such as Apple AirPods or Bose Sleepbuds show how advances in miniaturisation and battery technology have enabled small, lightweight devices that weren’t possible just a decade ago. Bose recently introduced Bose Frames which have directional speakers integrated into a sunglasses frame.
All of these new features improve the listening experience for consumers and help to reduce dependence on the smartphone for simple controls (such as pausing or playing music, asking for weather or navigation information, or adjusting volume).
How about ditching all the complicated menus-within-menus and buttons on your digital camera and simply asking your personal voice AI to ‘record rapid burst 4K, stop at 3GB, save as JPEG and RAW and give me HDR options’?
That wouldn’t work if you’re taking close-up snaps of easily disturbed wildlife – but it’s as hands free as you’re likely to get. And over time, as the voice AI understands more of your personal photography preferences with natural language processing, you and your AI will develop a shorthand. You’ll be creating images together.

Voice assistants, evolved

Amazon, Google and others are working on ways to evolve assistants from a voice interface that completes basic tasks to one that can handle complex conversational style comprehension.
Efforts are being made to stitch together voice assistant applications under one operating system so that the user need only interface and converse with one wherever they are.
Skip forward a few years and you can readily imagine a scenario like the one played out in Spike Jonze’s 2013 film Her, in which the lead character Theodore falls in love with his voice-driven OS, Samantha. Samantha would pass the Turing Test, her artificially intelligent relationship with Theodore indistinguishable in his mind from the real thing.
A new concept of ‘audible AR’ could evolve, presenting opportunity for 5G hearables that overlay spoken information to augment the real-world environment in real-time.
Science fiction? Not for Poppy Crum. She is working toward audio technology that is “truly empathetic” and calls the ear the biological equivalent of a USB port.
“It is unparalleled not only as a point for ‘writing’ to the brain, as happens when our earbuds transmit the sounds of our favourite music, but also for ‘reading’ from the brain,” she says.
Today’s virtual assistants rely on the cloud for the powerful processing needed to respond to requests. But artificial neural network chips coming soon from IBM, Mythic and others will allow much of that intensive processing to be carried out in the hearable itself, eliminating the need for an internet connection and allowing near-instantaneous reaction times.
“Voice assistants will no longer remain quiescent until summoned by the user,” says Forrest. “Instead they will intelligently interject at optimum moments throughout the day, influencing the user’s thoughts and behaviour.”
He suggests that the race is on to identify and monetise services that do not necessarily rely on screens. “Advertisers will be quick to harness opportunity to speak to wearers, conveying precisely timed and relevant information based upon geolocation,” he says.
A whole new world of audible applications will develop alongside visual ones, presenting digital enhancement of the soundscape.
Rather than layer the world with visual information, audible AR offers a ‘layered’ listening experience.
Crum thinks future hearables will use software to translate fluctuations in the electrical fields recorded in our ears, drawing on decades of research that has helped scientists draw insights into a person’s state of mind from changes in electroencephalograms (EEGs).
By that stage we may not need to talk to our AI at all since it will be reading our minds before we’ve even processed the thought.

Wednesday 26 June 2019

Tokyo Olympics 2020 to be “The Most Digital Ever” and Put Cameras in Orbit

Streaming Media

International Olympic Committee broadcaster Olympic Broadcasting Services is planning for a rocketing increase in digital viewing and for remote cloud production at the next Olympics.
http://www.streamingmediaglobal.com/Articles/Editorial/Featured-Articles/Tokyo-Olympics-2020-to-be-The-Most-Digital-Ever-and-Put-Cameras-in-Orbit--132730.aspx
The Olympic Games Tokyo 2020 is set to be the "most digital ever," according to its host broadcaster.
Olympic Broadcasting Services (OBS) is briefing the digital teams of rights holders to prepare to connect with an online audience that will be larger than ever and to deliver enhanced digital coverage of the games.
"The future of content delivery is multi-media, multi-platform, personalised, mobile and social," says Raquel Rozados, OBS director of broadcaster services. "To stay relevant and continue our mission of serving the rights holders, and to help them captivate their digital audiences, our focus needs to be on the digital arena."
Demand for content related to the Olympic Games has increased because of the expanded coverage from various digital and social media around the world.
OBS estimates that the amount of programming required by broadcasters during the 2016 Summer Olympics in Rio de Janeiro was ten times more than the volume required at the 2004 event in Athens.
"We started thinking of ways to address the issue," says OBS CEO Yiannis Exarchos. "This is where our paths crossed with Alibaba to explore how we can leverage cloud technology to make the work of broadcasters easier and more efficient."
OBS Cloud
Among the innovations will be the OBS Cloud, a cloud platform built and managed in partnership with Alibaba Cloud, a unit of Chinese e-commerce giant Alibaba Group Holding, which will likely deliver the largest scale live remote production in history.
A cloud broadcast solution would help broadcasters "work remotely and not have to bring so much equipment or so many people" to Tokyo, Exarchos says.
The OBS Cloud will include a specific selection of cloud services, in optimised configurations, for broadcasters to use as the building blocks for their sports production workflows, before and during the games.
The bundling of secured connectivity options between the International Broadcast Centre in Tokyo and OBS Cloud regional data centres is intended to offer easy-to-implement and cost-effective ingress and egress. 
"Our overriding aim is to give the broadcasters the opportunity to retain any part of their production infrastructure, or access to content, even after the Games and for as long as they wish without any interruption or the need for re-installations, re-ingesting," says OBS chief technical officer Sotiris Salamouris.
Alibaba has teamed with Olympics sponsor Intel to develop a sports AI platform for use in the run-up to and during the Games.
Instead of using wearable sensors, the 3D Athlete Tracking Technology uses information from multiple standard cameras which provide different angles of the athletes as they train, processed in Alibaba’s cloud. The AI applies pose modelling techniques and other deep learning algorithms to the video to extract 3D mesh representations of athletes in real-time. These digital models will provide coaches with intricate biomechanical data for use as training tools.
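The geometric core of any such multi-camera system is triangulating 2D keypoints, detected separately in each calibrated camera view, back into a 3D position. Below is a minimal Python sketch of that single step using the standard direct linear transform; the camera matrices and the detected joint are illustrative placeholders, not details of Intel’s or Alibaba’s actual pipeline.

import numpy as np

def triangulate_point(proj_mats, points_2d):
    """Recover one 3D point from its 2D detections in several calibrated
    cameras using the direct linear transform (DLT).
    proj_mats : list of 3x4 camera projection matrices
    points_2d : list of (x, y) pixel coordinates, one per camera
    """
    rows = []
    for P, (x, y) in zip(proj_mats, points_2d):
        # Each view contributes two linear constraints on the homogeneous 3D point X.
        rows.append(x * P[2] - P[0])
        rows.append(y * P[2] - P[1])
    A = np.stack(rows)
    # Least-squares solution: right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # de-homogenise

# Illustrative setup: two cameras one metre apart, both looking down the z-axis.
K = np.array([[1000, 0, 640], [0, 1000, 360], [0, 0, 1]], dtype=float)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

# A joint (say, a knee) reported by a 2D pose model in both views.
true_point = np.array([0.2, 0.1, 5.0, 1.0])
observations = [(P @ true_point)[:2] / (P @ true_point)[2] for P in (P1, P2)]
print(triangulate_point([P1, P2], observations))  # ~[0.2, 0.1, 5.0]

In a real pipeline this step would run per joint, per frame, with the 2D detections supplied by a deep pose-estimation network and the triangulated points fitted to a body model to produce the kind of mesh the article describes.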
"This technology has incredible potential as an athlete training tool and is expected to be a game changer for the way fans experience the games, creating an entirely new way for broadcasters to analyse, dissect and re-examine highlights during instant replays," says Navin Shenoy, EVP and general manager, Data Center Group, Intel.
Having already trialled 5G for contribution links and VR at the Winter Games in 2018, Intel is working with Japan’s NTT DOCOMO to provide 5G-based experiences in 2020. Expect 360° 8K video streams that may showcase live action across high-resolution devices at Olympic venues.  
What is not clear is whether there will be a standard host production in 4K UHD. Rio was covered largely in HD, with some 4K and some experimental 8K. Japan’s NHK will be all over the games with its Super Hi Vision 8K format, and it would make sense for OBS to deliver a UHD 4K High Dynamic Range main coverage since rights holders like the BBC can take this feed for delivery on online platforms like iPlayer. OBS has not however made such a decision public.
Olympics Now a Digital-First Show
The Rio Games were covered by more than 250 digital platforms and featured double the hours of coverage as TV (218,000 hours versus 81,500 hours).
NBC alone exceeded 2.5 billion live streaming minutes, over 1 billion more than all previous Olympic Games combined.
More than 9 million hours of content were streamed on the Olympic Video Player, with as many as 1 million daily unique viewers for live streaming and on-demand video. There was record social media engagement too, with more than 4 billion social media impressions (a metric for the number of times IOC posts have been viewed) and 14.6 million Facebook fans, nearly double that of London 2012.
2020 Games from Space
Sometime between March and April 2020, a specially commissioned satellite will be released from the International Space Station to orbit the earth during the games and provide a perspective on the event from space.
The Tokyo 2020 G-SATELLITE will contain a cubicle housing models of the anime robots Gundam and Zaku and seven cameras which will record and transmit their movements. An electronic bulletin board will display messages from the robots.

BARB Rolls Out Meters to Improve SVOD Viewing Measurement

Streaming Media
UK ratings agency BARB is looking to close a loophole in its cross-platform measurement by introducing meters into the homes of its panel members.
http://www.streamingmediaglobal.com/Articles/News/Featured-News/BARB-Rolls-Out-Meters-to-Improve-SVOD-Viewing-Measurement-132729.aspx
The BARB reporting panel is made up of 5,300 homes (including 200 broadband-only homes) that are representative of household type, demographics, TV platform and geography. There are more than 12,000 people living in these homes.
However, tracking the streaming of VOD services and viewing across devices such as smartphones has been a blind spot that BARB has sought to close.
The router meters, or Focal Meters, which BARB has commissioned Kantar to install, will be attached to the broadband routers in panel homes and are designed to track streaming activity by any member of the household on any device, with their consent.
Among other things, the meter is intended to provide greater insight into unidentified viewing, which is TV set viewing that BARB cannot identify and which accounted for 19% of total TV set use in 2018.
A significant portion of unidentified viewing is believed to comprise viewing of SVOD and online video services. Subject to further evaluation, router meters are anticipated to facilitate the reporting of aggregate-level viewing of these services, but will still not cover viewing of SVOD services like Netflix and Amazon Prime, which have not signed up to BARB.
The meters will also distinguish whether post-broadcast viewing was done through a tagged broadcast VOD (BVOD) service or via playback of a PVR recording. Currently, BARB can only make this distinction in panel homes with Sky; router meters will extend this capability to all panel homes.
At present, BARB has device-based census data for smartphone viewing. Router meters will also enable the demographic profiles of smartphone viewing to be reported.
However, the devices will only track video streaming activity from a designated list of BVOD, SVOD and online video services; other types of internet activity will not be tracked.  
Kantar will begin to install its router meters into new and existing BARB panel homes in October.
"Whether it is live streaming or watching on-demand, people around the UK are getting used to watching content that's been distributed through BVOD services and other online platforms," says Justin Sampson, BARB’s chief executive. "This is why a meter attached to the broadband router in panel homes is a vital capability for BARB to have."
BARB is also running a tender process for agencies to record audience data from the time the contract with Kantar ends in 2022. It is expected that router meters will form part of the winning solution.
A notable recent example of measuring non-TV set audiences reveals that 4.6 million people watched the June 3 episode of ITV2 reality show Love Island on a TV, and a further 1.3 million watched on a tablet, PC or smartphone.
Generally, BARB finds that viewing on non-TV devices adds around 1.3% to TV set viewing, but this does vary by genre and programme—and Love Island is the show with the highest levels of non-TV viewing, with an uplift of 29%.
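The Love Island figures illustrate how that uplift is derived: the non-TV audience expressed as a share of the TV set audience. A quick, purely illustrative check of the quoted numbers (just the arithmetic, not BARB’s published methodology):

tv_audience = 4.6e6       # watched the 3 June episode on a TV set
non_tv_audience = 1.3e6   # watched on a tablet, PC or smartphone

uplift = non_tv_audience / tv_audience
print(f"Non-TV uplift: {uplift:.1%}")  # ~28%, in line with the ~29% BARB quotes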

Craft leaders: Paul Greengrass, Director


IBC
Paul Greengrass, the director of 22 July, Captain Phillips and The Bourne Supremacy, has damned the nepotism and public school cliques which he says continue to dominate the UK’s creative industries.
“The entrenched networks that prevail in modern Britain have a very deep presence in film and television and will always privilege those in the industry,” Greengrass told the Sheffield International Documentary Festival. “Recent social advances have made the industry much more open, with far more women than when I was starting out, but the distance you have to travel from a working class background is profound.”
He said an industry “that is awash with money” was not doing enough to encourage diversity. “School kids do not think of film and TV as somewhere where they can go and make their mark. It’s still far, far too reliant on who you know.”
Greengrass speaks as one who broke the glass ceiling, rising from a working class background to A-list Hollywood director, but he attributes part of his career to luck and part to his own rage and frustration at being unable to break into the club at the BBC in the nineties.
“I tried to enter the world of drama and found it a tremendous clique,” he says. “I submitted scripts to the BBC on several occasions only to have them passed on to established screenplay writers [Michael Frayn, Alan Bleasdale] and I struggled with that a lot. It was an institution I wasn’t going to be invited to and I didn’t belong in.”
By his own account, Greengrass has always struggled to fit in. Born in Cheam and growing up in Gravesend on the Thames Estuary, “a tough place with a strong, rebellious anti-metropolitan identity”, he recalls “all the intense anxiety I felt as a young person – girls, socialising, insularity. I found institutions tremendously hard.”
Having been “quite a bolshie, arsey kid,” he was lucky, he says, to have found his métier at secondary school. “That’s where I learned that I had eyes that I could use. I’d found I could draw and paint and print. The art teacher encouraged me to use an old dusty Bolex camera. Suddenly I’d found what I was meant to do. The camera allowed me to speak at some level.”
There was nothing in his family background to suggest an affinity with moving images.
“I have a theory that my becoming a filmmaker is to do with being quite isolated as a child and my childhood experiences of cinema,” he says. “If you have these incredible powerful experiences in a dark space it creates a fugue state that mainlines to your cortex. I can recall those experiences more vividly than many others. Filmmaking is essentially a psychological effort to recover the intensity of childhood experiences - which is why all filmmaking is one of disillusionment and deep self-loathing.”
TV roots
The route into the industry for aspiring filmmakers like Greengrass in the 1980s was television, and with only three TV channels and no insiders to call on, that narrowed the choice down to ITV. He mailed his CV dozens of times requesting work experience, eventually landing a job as a researcher on the sports desk at Granada.
From there he gravitated to the hard-hitting current affairs series, World in Action. “You were taught that London was the enemy, that Manchester was where it was at,” Greengrass says. “Granada had that ‘we don’t care attitude’ which spoke to me. The attitude was ‘we make programmes that people want to watch. The BBC makes programmes they think you should watch.’”
World in Action was an eclectic mix of traditional documentary filmmaking (stemming from founding fathers John Grierson and Humphrey Jennings) and observational filmmaking “wedded to a strong journalistic ethos which gave it a political edge. Plus, it had this weird almost agitprop Private Eye quality to it in which they’d use gimmicks and devices to tell a story.”
Greengrass flourished, directing and producing stories on the Thatcher-era coal mining industry, on MI5 officer Peter Wright (with whom he co-authored the book Spycatcher, which the British government attempted to ban) and behind the scenes with Bob Geldof in the weeks leading up to the Live Aid concerts.
“[WIA] enabled me to shoot, cut, write, to tell a story, do it under pressure. There’s nothing so intensely terrifying as having to cut a WIA in two days.”
He believes the process of filmmaking will always tend toward a vacuum and that the essence of the job of the director is to not let it form.
 “You cannot stop, you must move forward. Indecision creates a toxicity which destroys the dream which you set out to achieve.”
As important, his decade working for WIA taught him how to make films for an audience. “You are having a conversation with someone. You’ve got to be clear and direct about what it is you want to say.”
Although he never admitted it to his colleagues at the time, Greengrass had a hankering to make fiction, and he tried and failed to transition into the format to the point of considering giving up on his career altogether.
It started well enough. His first film, for the just-launched Film4, was Resurrected (1989), an anti-war tale based on real events about a soldier listed as missing in action in the Falklands (David Thewlis) who turns up alive weeks after everyone thinks he’s dead. It was nominated for the main award at the Berlin Film Festival.
His next steps weren’t so assured. “I did learn the language of TV drama which is authorial storytelling and different to shooting docs - but I always felt that it wasn’t me,” he reveals. “I worked through my thirties but I never quite felt I was being true to myself.”
It became increasingly difficult as he struggled to translate the vision in his head to the screen.
“I’d see a film in my mind and shoot it using conventional film grammar. This means you are filming in the third person, but it’s not got the first person urgency to it, and it didn’t connect me to where I’d come from in terms of documentary filmmaking or connect with me emotionally in terms of the things inside me, which are attack and pace and drive.
“But I didn’t know how to bring those things together. I went through a couple of films feeling unbelievably frustrated.”
This included TV movie The One That Got Away, about an SAS raid in the first Gulf War based on Chris Ryan’s book. “I was on location banging my head against a Humvee thinking this is not me. Why am I seeing the film that I want to make in my script, but the filmmaking process just puts a wet blanket over it?
“It was a lack of courage, a lack of knowledge and a lack of breakthrough in finding a voice. I hope that’s the only truly disastrous film I’ve made. The sense of failure I felt about not imposing myself on that film gave me a rage.”
Take no prisoners
He took that anger into his next film, a docu-drama account of the racist murder of schoolboy Stephen Lawrence.
“I came to that film with some sense of crisis,” Greengrass says. “I’d reached a place where I thought I should give up. The rage and the frustration and fear that I felt at that point gave me a ‘take no prisoners’ attitude. On the first day on set I started to shoot in a way that I always do now, but then I backed off. I was terrified.”
This was the beginning of Greengrass’ now signature handheld, kinetic, micro-cut style that lends his movies the feel of reportage.
“You are almost gambling everything on the one moment. It only lives in that moment, that way, that shot. It is unsettling because your rushes don’t look like ordinary rushes – there is no safety about it.”
Without the encouragement of producer Mark Redhead, Greengrass might have remained stuck. “He said just ‘go for it’. So, we did.”
What Greengrass says he realised is that the screenplay was never going to be the template for the film he wanted to make.
“The screenplay is fundamental, but the film exists beyond it and you have to get to that point and the only way of doing that is by speaking in your own voice. Finding your voice as a filmmaker is something that is hard won, and you can only win it by trying and failing.
“In the end, it’s about moving toward being true to yourself in your choice of subject, in the way you handle that subject and the aesthetic choices you make to render that subject. All of those have got to come together.”
Bloody Sunday, his meticulously researched and frenetically paced recreation of the 1972 Derry massacre, shared first prize at the Berlin Festival in 2002 and caught the attention of producer Frank Marshall.
“I’d never thought about making a commercial movie. I’m not an obvious candidate but when they asked me to do Bourne, I remember going to see Doug Liman’s [franchise starter] The Bourne Identity. I thought, I know what to do with that.”
You would imagine that Marshall knew what he was getting with Greengrass but apparently, he was shocked when he saw rushes during the first week of shooting.
“I was sitting in the back of the theatre and I could see Frank jerking around – ‘why the fuck is he shooting the stuff like this – it’s horrendous.’”
Story through action
When it came to reshooting some scenes for The Bourne Supremacy the studio ordered Greengrass to shoot it both the way he wanted and in a standard locked-off way.
He says though that his experience with Hollywood executives contrasted favourably with clashes with the BBC.
“Studio execs are not scary. They want filmmakers to tell them how to do things.”
Another key aspect of Greengrass’ style is the ability to tell a story through action. This is most evident in the action sequences of the Bourne franchise, including Supremacy, The Bourne Ultimatum (2007) and Jason Bourne (2016).
In a scene in Supremacy, Bourne is trapped in a hotel room and recalls a horrific murder he committed in his past as an assassin. The next sequences show him escaping from the room, running across Berlin to evade SWAT teams and CIA goons, and eventually getting away on a metro train with a look of resolution on Matt Damon’s face.
“That is quite an accurate psychological state of the character’s mind because when you remember something that is deeply shaming you run away from it,” Greengrass explains. “He is being chased by his own demons and the sequence ends with a realisation that Bourne must face up to them. He cannot run anymore. He has to atone for what he has done.”
He is however dismissive of using shaky-cam simply for effect.
“Action pieces like car chases should all have a character root that is truthful, otherwise it is just eye-candy. You want the images you’re capturing to arise authentically out of the environment you’re shooting in – so, if you’re running, it’s going to feel like what it feels to run. It doesn’t work when you’re not showing detail or developing the moment. It only works when you can get action to enact character.”


Monday 24 June 2019

VVC and AV1 Show Efficiency Gains; MPEG Announces Third Next-Gen Codec

Streaming Media

BBC R&D finds that AV1 produces better low-bitrate quality than HEVC, but the codec picture will get even muddier in 2020 as MPEG fast tracks VVC, MPEG-5 EVC, and LCEVC.
http://www.streamingmediaglobal.com/Articles/Editorial/Featured-Articles/VVC-and-AV1-Show-Efficiency-Gains%3b-MPEG-Announces-Third-Next-Gen-Codec-132632.aspx

Streaming standards AV1 and Versatile Video Coding (VVC) are on track to outperform HEVC as the industry adapts to a multi-codec future.
The most recent tests by BBC R&D, comparing all three codecs side by side, verified claims by AV1 developer the Alliance for Open Media (AOM) that the codec’s computational complexity has been significantly reduced.
At the same time, VVC was shown to outperform both HEVC and AV1 in coding efficiency by up to 35% - a significant improvement on previous tests.
In results published earlier this month, BBC R&D found that VVC (a development of the Joint Video Experts Team and MPEG) performed 27% better than HEVC for HD sequences and 35% for UHD sequences.
AV1, on the other hand, performed very similarly to HEVC, with an average 2.5% loss over HD sequences and a 1.3% gain for UHD sequences.
Comparing AV1 to HEVC, BBC R&D found that AV1 could produce higher-quality decoded video than HEVC in low-bitrate scenarios, "which is highly desirable in the video coding field."
The time taken to process the videos through each codec was also tested. Increased processing time means increased complexity, and therefore more computational power is required.
BBC R&D found that the compression gains from VVC come at the cost of processing time. Encoding took around 6.5x longer than HEVC and decoding took 1.5x longer. AV1, on the other hand, takes about 4x as long as HEVC to encode, but is 8% quicker than HEVC to decode.
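Read together, the figures amount to a rough trade-off table: coding efficiency gained versus processing time paid, with HEVC as the baseline. The snippet below simply restates the UHD numbers reported above in one place; the values are the article’s, the normalisation to HEVC = 1.0 is ours.

# Relative performance versus HEVC (HEVC = 1.0), per the BBC R&D figures above.
# coding_gain: improvement in coding efficiency for UHD sequences.
codecs = {
    "VVC": {"coding_gain": 0.35, "encode_time": 6.5, "decode_time": 1.50},
    "AV1": {"coding_gain": 0.013, "encode_time": 4.0, "decode_time": 0.92},
}

for name, c in codecs.items():
    print(f"{name}: {c['coding_gain']:+.1%} efficiency vs HEVC, "
          f"{c['encode_time']}x encode time, {c['decode_time']}x decode time")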
In comparison to the lab's previous test results last year, "AV1 has hugely improved these processing times. This again highlights the focus of [AOM] of producing a codec optimised for streaming."
It is worth mentioning that the test models used by BBC R&D are intended only to provide an insight into the possible quality a codec can achieve, and they are not optimised for speed. Encoding and decoding will typically happen with optimised software or on hardware, where processing times will be far quicker. BBC R&D acknowledges this but adds that its data still gives a general idea of how the codecs compare with one another in quality and complexity.
Taking these results at face value then, VVC development can be expected to deliver hefty compression gains over HEVC by the time it is finalised next year.
MPEG is targeting a 50% bandwidth improvement over HEVC with VVC by 2020. This would enable live UHD-2 (8K) encoded content to be delivered at less than 50Mbps by around 2020-2022.
Of course, this being the world of compression schemes, life is not that simple.
Since VVC is an evolution of standards and technologies already used in HEVC and other codecs it will not come for free. There are concerns that VVC will be equally burdened by royalties and patent pool opacity as HEVC.
AV1 is also now being dogged by patent claims from companies including NTT, JVC, Orange, Toshiba, and Philips. A licence for AV1’s use in consumer displays (costing €0.32 per device) and in STBs and OTT devices (€0.11), administered by Sisvel International, was issued in March.
To counteract both of these, MPEG has fast-tracked development of two additional competing codecs.
MPEG-5 EVC is being designed with a royalty-free baseline profile.
A second "main" profile to MPEG-5 EVC adds a number of additional tools, each of which is capable, on an individual basis, of being either cleanly switched off or else switched over to the corresponding baseline tool.
"The main benefit of EVC will be a royalty-free baseline profile but with AV1 there’s already such a codec available and it will be interesting to see how the royalty-free baseline profile of EVC compares to AV1,” comments Christian Timmerer co-founder and CIO of Bitmovin in a blog following the 125th MPEG meeting.
Then, at the 126th MPEG meeting, LCEVC was announced. Low Complexity Enhancement Video Coding is now the third video coding project within MPEG addressing requirements going beyond HEVC. It will form part of the MPEG-5 suite.
Says Timmerer in another blog: "The new standard is aimed at bridging the gap between two successive generations of codecs by providing a codec-agile extension to existing video codecs that improves coding efficiency and can be readily deployed via software upgrade and with sustainable power consumption."
The coding efficiency target for LCEVC—to be at least as good as HEVC—has apparently been met, and the goal now is to achieve an overall encoding and decoding complexity lower than that of the codecs it is built on (AVC and HEVC).
With standardisation of LCEVC, MPEG-5 EVC, and VVC due in 2020 (about the time AV1 is set to mature), the race is set to come to a head.
Competition has to be healthy even if three of the runners come from one stable, but weighing the merits of each might require as complex an algorithm as the technology itself.
It is not compression efficiency alone that will determine the winner, but also the processing complexity at both encoder and decoder, power efficiency, and the business and financial calculations around each codec.

Collaborate to keep up

AV Magazine
As the AV sector takes on the ‘as a service’ model, procurement is changing, and providers need to work in partnership with the client to benefit both parties.

The conventional vendor-specific ‘room in a box’ no longer cuts it when the modern enterprise is looking for unified communications and collaboration.
“There has been a shift in the enterprise space in the last few years regarding AV and communications, from tactical to strategic,” says Byron Tarry, executive director at the Global Presence Alliance (GPA). “The modern workplace is moving towards Microsoft or Cisco Teams, and AV in the conference space is moving from ‘nice to have’ to an integral part of collaborative workflow.”
Ultimately, the customer is looking for “improved collaborative outcomes”, Tarry argues. “From an industry standpoint, if we start to realise we are not technology providers but human collaborative outcomes providers, then it opens up a whole different world of what the opportunity is and the role we as AV suppliers, integrators, vendors and consultants can potentially play to support that goal.”
GPA will formalise this at InfoComm, where it will promote its new Velocity Ecosystem for global enterprises. This is described as an “integrated and standardised portfolio of collaboration solutions”, which includes program planning, hardware/software and deployment and support, along with a strategic management and analytics dashboard. Developed in partnership with Crestron, Cisco, Logitech, Legrand AV, Domotz and LG, the aim of Velocity Ecosystem is to deliver a quick path for standardisation, simplification and scale worldwide.
“The starting point for this was asking ourselves if we could deploy 1,000 rooms in 90 days and we decided we couldn’t do it in the way we’ve traditionally done,” Tarry says. “Even with all the advantages of global alignment, we still felt it would be a tall order.”
He explains: “The effort that goes into room remediation alone would mean it would take years to deliver. But when you shift the lens in terms of what you’re trying to provide to customers, which is essentially to help organisations move faster and which, in turn, brings them competitive advantage, then you think about mitigating the risk of each individual space by working with vendors to innovate and design your way to a fresh solution.”
However, this hugely magnifies the complexity of procurement. From simple cost-based models where, all other things being equal, the vendor with the least expensive solution would win, the industry is slowly taking on the attributes of ‘as a service’.
Accountability shift
“In a world where ‘as a service’ is becoming prevalent, the challenge for procuring AV as a service is about a shift in accountability and risk from the consumer of the service to the provider,” he says. “Basically, it’s putting the onus on the provider to deliver results. That’s opposed to the prevailing capital expenditure model where (the industry) sells customers millions of dollars of kit with a service wrapped around it and if the customer doesn’t get what they want then the industry gets them to buy more kit and services.”
In the service model, the customer is willing to pay more if the provider can deliver better outcomes. However, it’s also extremely complex to measure, which is why part of the procurement package has to be about measurement and analytics for return on investment.
“New technologies now offer organisations insights into how and where they can drive efficiencies in existing AV setups, while advanced features such as energy-saving modes reduce the total cost of ownership in the long term,” says Carl Standertskjold, corporate segment marketing manager at Sony Professional Solutions Europe. “This is especially important at a time when budgets are limited and procurement teams need to have a firm understanding of a solution’s return on investment before authorising any purchases.”
He says the Internet of Things will also see AV solutions become increasingly connected, offering organisations two main benefits. “It enables them to collect valuable data on how and when these technologies are being used to spot patterns, understand user behaviour and challenges, and ultimately, help inform future procurement decisions,” says Standertskjold.
“On the other hand, the more connected an organisation’s AV solutions are, the easier it is to integrate new technologies into an existing setup without the need for a complete overhaul. This, again, helps streamline the procurement process.”
Tarry advises that partnership is needed for procurement on the industry and customer side to ensure innovation, optimisation and alignment of complex but highly strategic and business-critical services.
“Together, we must look for ways to minimise risk, create transparency and focus on common goals. It’s about putting a financial model in place that doesn’t create mistrust, yet has benefit for all parties,” he says.
One of the main bones of contention between parties is that AV is often engaged late on in a project, which results in tight timescales.
“The best and most effective processes are those with a defined forecast and defined roll-out, where accessibility to advance information supports a just-in-time operation,” affirms Guy Phelps, end-user account manager in the finance and legal team at NEC.
While he does not highlight any major issues with current policies, pressure points are building as more projects are created at the last minute, often as end users react to the previous quarter’s performance.
“The more advance notice gained from the end user with detailed specifications, the more slick and effective the process will become,” Phelps emphasises. “The AV industry is responding, but the end user needs to understand that as equipment and projects become ever more complex, and with roll-out processes based on ‘just-in-time’ in order to meet tight budgets, access to information to enable accurate forecasting is vital. It is essential that manufacturers, integrators and end users work very closely to ensure the best information is available to all parties in advance.”
However, Tarry contends that the AV sector needs to adapt further: “As a tech provider, we tended to say we were brought too late into the planning phase and that we were reliant on the space and construction parts aligning – but we can’t continually blame everyone else for that. We have to shift our perspective and change our pitch.”
With regard to the procurement of AV systems as part of a construction project, late AV systems involvement “forces projects down a two-stage tender route”, says Daniel Watson, senior consultant – AV and multimedia at PTS Consulting. “Communication between the main contractor/builder and the AV integrator is key, and as such this relationship often takes precedence over how suitable and/or capable the integrator is to deliver on the project.”
‘Procure AV earlier’
While single-stage tendering provides the project with greater cost certainty earlier on, the total cost of variations (such as changes to system designs) can be expensive.
“Given the speed of technology innovations and constant changes in user habits, this is a serious risk to a client,” Watson warns.
With the two-stage process, the client may enjoy greater flexibility but the AV systems’ cost is a moving target. What’s more, the full impact on other services (IT, building management systems, mechanical and electrical) is also unknown until the project end. According to Watson, this is often the root cause of the narrowing of commissioning windows on site as all trades are commissioning at the same time.
“If AV was taken into consideration earlier in the project lifecycle, and consultant practices were engaged earlier, much of the design development, tech trials and third-party integration requirements (IT and networks for example) would be completed upfront,” he insists.
Manufacturers are increasing specialist engineer resource engagements with integrators, especially at the commissioning stages. PTS Consulting reports that a number of manufacturers are providing commissioning services that can be specified by the consultant as part of the AV systems invitation to tender/specification package.
No ‘one-size-fits-all’
Ultimately, the key to successful investment into new technologies is to ensure high user adoption, so new solutions need to be rigorously tested and analysed to ensure they are intuitive to use, perform as intended and meet user expectations before being deployed.
Of course, decisions must also fit with the firm’s wider investment policy. “Once the decision has been made to invest in new technologies that meet the needs of users, it is essential that procurement, facilities, IT and AV managers work together to ensure new solutions they want to deploy are in line with the company’s wider AV investment strategy,” says Standertskjold.
That being said, there is no one-size-fits-all approach here. Every enterprise is unique, with specific needs, so suppliers need to collaborate closely with all AV integrators and managers in order to offer customised solutions that best achieve an organisation’s strategic aims.
By having these conversations between end users, integrators and manufacturers, AV suppliers can continue to have a finger on the pulse of the market and develop solutions in line with the requirements of modern enterprises.

Ready to play a role in what may be the Golden Age of Episodics?

copywriting for Sohonet
Everyone can see that the media and entertainment industry is undergoing a seismic transformation, but the huge bets being placed on its future in some quarters may not be universally shared. There are sizeable financial rewards at stake at the same time that there are snake pits to avoid. Preparing for the journey need not be done without a map.
Experienced facility chiefs and entrepreneurs alike are able to read the runes. It doesn’t take an analyst to divine where the trend toward exponentially rising content costs might lead. Netflix continues to lead the charge, ramping up its annual content spend above an incredible (and possibly unsustainable) $15 billion this year, and in doing so racking up 150 million subscribers worldwide and pulling rival content owners, broadcasters and SVOD players in its wake.

The bulk of this unprecedented spend is going not to feature film but to episodic TV, where consumer expectations for quality and production value just get higher and higher. Virtually every part of the pipeline, from production to VFX, sound mixing and editorial, is impacted. More demand, more content, more need for services. That demand is not going to abate, at least for the foreseeable future.


For example, high-end VFX was once the sole preserve of theatrical spectaculars like Avengers: Endgame or major episodic investments like Game of Thrones, but the tentpole stories now being commissioned for the small screen, like Amazon’s Lord of the Rings adaptation or Disney’s live-action Star Wars series (The Mandalorian, destined for Disney+), are likely to be VFX showstoppers on a par with anything we have seen to date.

Disney+ is just one of dozens of OTT services entering the fray, multiplying the number of outlets for high-grade digital storytelling. The ad landscape is splintered too as personalized and geo-specific ads follow content in targeting eyeballs across social channels from Facebook to Snapchat as well as continuing to cater for traditional broadcast and VOD offshoots.

Making, managing (and monetizing) all of this content at an affordable price, whether for features, episodics or commercials, is not possible without advances in technology and workflow. The primary tools at the disposal of facilities, and especially of the VFX facilities charged with accommodating the surge in demand, are cloud compute and storage and the connectivity in between, on top of which artificial intelligence and machine learning can be deployed to deliver even greater time and cost savings.


We believe there will be a continued drive to public cloud resources for compute and storage and, increasingly, for the creative applications used by the artists themselves. Over the next few years, we can expect continued improvement in average connectivity speeds which, combined with affordable software tools and the availability of a professional freelance workforce, will yield a revolution in post-production and VFX.

Gone will be the static business models based on fixed premises and large capital outlay, replaced by dynamic ‘VFX as a service’ facilities able to scale up production in the cloud within minutes and site themselves anywhere to take advantage of VFX tax credits and talented freelance labor. The editing room will be increasingly connected, and increasingly mobile, keeping editors near production, or near home.

Long heralded, this will be the era of the virtual workstation and a truly distributed workforce offering work-life advantages to freelance talent and studio heads alike while improving the speed and cost-effectiveness of the content creation itself.

Technology does need to continue to advance. For example, the management and collaboration tools in such a dispersed remote production environment need further refinement, but there’s no doubt this will happen.

And happen soon. The narrative arc we often hear is one that will take less than a decade. Indeed, we think that the continued explosion of file transfer at an individual contributor level will fuel a revolution in the distributed workforce in VFX and wider post-production by 2023.

No matter if you are a start-up digital boutique or a 700-seat international powerhouse, the importance of understanding these trends in order to capitalize on them shouldn’t be lost.

Thursday 20 June 2019

5G Technology Meets the Achilles Heel of Smartphone Hardware

Streaming Media
Today, video is the king of content demand, and it will remain so long into the future. NSR predicts that by 2022, 82% of all IP traffic will be video.
Video is also a prime mover for 5G, with upwardly revised predictions that 5G coverage will reach 45% of the world’s population by end of 2024.
When it comes to the consumer, the 5G emphasis has been on mobile. British operator EE, for example, has enhanced its multi-screen app BT Sport with 4K UHD and HDR timed to coincide with launch of its 5G network.
For other telcos, though, 5G means an opportunity to drive fixed line subscriptions to the home. Connect a 5G router to the set-top box or smart TV in the living room and deliver enhanced TV over the last mile.
Cable providers, too, can put 5G cells into street cabinets and cover the last 500 yards where replacing coax with fibre or enhancing it with DOCSIS 3 is a less viable option.
The other data-heavy application primed for 5G is gaming. It is arguably more of a game-changer than live video since real-time multi-player gaming isn’t possible, certainly over mobile, without it. It’s also nearly impossible to create a shared reality experience if the timing isn’t perfect—but 5G solves this.
Niantic, maker of Pokémon Go, is building a game that renders augmented reality with near-instantaneous latency of tens of milliseconds, meaning that in a peer-to-peer multiplayer AR game you can see where your friends actually are rather than where they were.
Synched with this is the potential of edge computing, in which logic is moved out of the device and into the cloud. After 20 years of CDNs, 5G can now put compute at the edge. If you can process more encodes and transcodes there, you can create thinner client apps. With extremely low latency, you effectively stream from the edge with less rendering on the device.
The concept of Niantic’s latest game, branded around Harry Potter, relies on edge compute to perform tasks such as arbitrating the real-time interactions of a thousand individuals playing in a tight geographic area.
But one thing is missing, and it could be the Achilles heel of 5G in its early days.
Battery life sucks. Or rather, data intensive apps like video games suck battery life.
One review of Samsung’s 5G-ready Galaxy S10 reports an hour-long video draining power by 9%—in HD and at half screen brightness. Gaming saps energy further, with the S10 losing around 21% an hour.
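Those drain rates translate directly into hours of use on a full charge; a back-of-the-envelope calculation using the review’s figures (illustrative only, and assuming the drain rate stays constant):

video_drain_per_hour = 0.09   # HD video, half screen brightness
gaming_drain_per_hour = 0.21

print(f"Continuous HD video: ~{1 / video_drain_per_hour:.1f} hours per charge")   # ~11 hours
print(f"Continuous gaming:   ~{1 / gaming_drain_per_hour:.1f} hours per charge")  # ~4.8 hours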
The lithium-ion batteries in current cellphones haven’t changed much over 30 years of consumer electronics, and the technology is nearing its limits.
There will likely be a pinch point between the development of more economical battery tech, possibly involving supercapacitors, and the migration of data to the cloud.
With storage and compute moved to the edge, your smartphone becomes a streamlined, slimline streaming device. But as it stands, 5G will strain the hardware in your pocket and the patience of newly converted subscribers.