Friday, 31 March 2017

The State of 4K and HDR 2017

Streaming Media

4K is making inroads, but it's the profound visual richness of high dynamic range video that will really revolutionize how people watch television. Streaming networks are leading the way.

http://www.streamingmedia.com/Articles/Editorial/Featured-Articles/The-State-of-4K-and-HDR-2017-117304.aspx
Amazon and Netflix began delivering 4K/Ultra HD (UHD) content 2 years ago, but 2016 ushered in a series of 4K broadcast launches beginning in Canada. Drawing on the expertise of BT Sport (the sports arm of U.K. telco BT and a pioneer of live 4K broadcasts since fall 2015), Rogers Media’s Sportsnet and Bell Media’s TSN each debuted live 4K services, starting with a Toronto Raptors versus Orlando Magic game from London’s O2 Arena in January. This was the first of more than 100 live 4K events (including Toronto Blue Jays home games and NHL games) produced by Rogers last year.
In April, AT&T launched DirecTV’s first dedicated 4K channel with broadcasts from the Masters Tournament at Augusta National and later that month began the first of regular live 4K baseball broadcasts via the MLB Network Showcase. Other events DirecTV delivered in UHD included NCAA Football, UFC fights, PGA tournaments, and the Country Music Awards.
CBS eschewed any UHD transmission of Super Bowl 50, opting for tried-and-trusted HD, although Japanese broadcaster NHK was at Levi’s Stadium to test 8K Super Hi-Vision. NHK repeated its tests at Super Bowl LI, although Fox again broadcast only in HD.
The Olympics provided a slightly wider UHD showcase. Customers of DirecTV, Dish Network, and Comcast were able to view certain events from Rio in 4K (provided they had 4K screens, of course).
NBC took the 83-hour 4K feed (a fraction of the 6,700 hours of total games coverage) produced by the International Olympic Committee in partnership with NHK, which shot natively in 8K and downconverted. Because of this workflow, none of the 4K coverage was live.
Comcast made UHD Olympics action available through broadband-connected UHD TVs from Samsung and LG, but only via its Xfinity Ultra High Definition app, not on a linear channel.
There were only around 70 channels globally outputting 4K/UHD content by year-end, according to Futuresource Consulting. These are a mix of 24-hour channels and occasional event-based channels. Even fewer operators are picking these channels up and broadcasting them because, in most cases, the business case doesn’t add up.
A low penetration of TVs with UHD resolution is one factor. Globally, this stood at just 5 percent by the end of 2016. “While higher in developed countries such as the U.S. (15 percent), this is still a low addressable base,” says Futuresource marketing analyst Tristan Veale. “Bearing in mind that 4K/UHD costs more to produce (or acquire) and distribute, in order for pay TV operators to make a profit either the extra costs need to be low, or they are significantly improving the service enough to be able to charge a premium to cover the costs.”
There are only a few circumstances where either one or both of these are actually true. One is that a broadcaster/platform owner produces its own content, and as such gains some efficiencies by shooting and producing in 4K/UHD and then outputting from this a 4K and HD feed (as Dome Productions is doing for Rogers).
A second circumstance is where distribution costs are low. Since the addressable base is low, the most cost-effective solution is delivery via IP, and this is improved further if the consumer is upsold to a higher-value broadband package by taking a double-, triple-, or quad-play service from the operator. “Broadband has a much higher margin than TV, and therefore this offsets the cost of producing in 4K/UHD,” says Veale.
A third scenario is where the content being recorded is of sufficient quality that it is imperative for future-proofing or reselling that the content be produced in 4K UHD. Examples include BBC’s Planet Earth II, which is likely to have a 10-year resell cycle, and major sports events like the Olympics.
“If we take those three criteria for launching a 4K service, we find that BT Sport and Rogers have all three while DirecTV has high-quality content,” says Veale.
However, there are other elements to 4K UHD outside of resolution, notably High Dynamic Range (HDR) and Wide Color Gamut (WCG), which make a significant visual impact on the consumer. Therefore, when they are added into the mix, the business case becomes a lot more attractive.

Bright Future for High Dynamic Range

HDR addresses the difference between the darkest and brightest parts of a picture and is considered a more profound upgrade than resolution, since most people don’t notice a difference in pixel density from a typical viewing distance of 8' to 10'.
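That viewing-distance claim can be sanity-checked with basic visual-acuity arithmetic. A rough sketch, assuming 20/20 vision resolves about one arcminute and using a 55" screen purely as an illustrative size:

```python
import math

ARCMIN = math.pi / (180 * 60)  # one arcminute in radians

def acuity_distance_ft(diagonal_in, horizontal_pixels, aspect=16 / 9):
    """Distance (feet) beyond which a single pixel subtends less than
    one arcminute, i.e. where extra resolution stops being visible."""
    width_in = diagonal_in * aspect / math.sqrt(aspect**2 + 1)
    pixel_pitch_in = width_in / horizontal_pixels
    return pixel_pitch_in / ARCMIN / 12  # small-angle approximation

# On a 55" screen, HD detail is exhausted at roughly 7 ft and 4K at
# roughly 3.6 ft, so a viewer at 8 to 10 ft gains little from the
# resolution bump alone -- which is why HDR matters more.
hd = acuity_distance_ft(55, 1920)
uhd = acuity_distance_ft(55, 3840)
```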
The move to 10-bit coding needed for recording, distributing, and delivering HDR according to UHD specifications improves color accuracy and precision almost as a by-product, and that improvement also makes a big visual difference.
“This is the main reason [we saw] limited activity from pay TV operators [in 2016],” suggests Veale. “They know that when they can distribute HDR to consumers, the visual impact is sufficient that they don’t need the best quality sports or similar content to be able to charge the consumer more.”
This piece of the puzzle is now in place. The International Telecommunication Union (ITU) ratified its standard for working with HDR in July in a move that will accelerate broadcaster UHD services globally.
Netflix and Amazon are ahead of the game. Most Amazon Prime and Netflix content is now produced in 4K, with Netflix adding 600 hours of 4K content by the end of 2016 and Amazon amassing over 175 hours of UHD plus HDR programming including car show The Grand Tour. Netflix recommends at least a 25Mbps connection for Premium subscribers to appreciate UHD HDR Originals like Marvel’s Iron Fist.
Hollywood studios have mastered around 100 titles in UHD HDR for SVOD and Blu-ray, a number predicted to triple in 2017 by Warner Bros. Worldwide Home Entertainment Distribution president Ronald J. Sanders. “There’s a concerted effort to match the growth and install base at home,” he said at CES, adding that Warner Bros. was “aggressively” going into its catalog to refresh titles with an HDR sheen.
While Hulu finally joined the major streamers in offering titles in UHD, including time-travel thriller 11.22.63, its library has no advertised HDR content. YouTube confirmed its support for HDR videos in November.
The lack of a broadcast standard for HDR, as well as complications in introducing it into workflows, prevented broadcasters from adding HDR to their UHD packages in 2016. Rogers announced its intention to do so and then withdrew.
Regionally, only Latin America’s biggest network, Globo, released a UHD and HDR project in 2016 when it offered flagship drama Dangerous Liaisons over its Globo Play VOD service.
If the Consumer Technology Association (CTA) is right, then 4K UHD TV, “this time, arm in arm with High Dynamic Range,” according to Steve Koenig, its senior director of market research, remains one of the consumer electronics industry’s fastest-growing segments.
The CTA projects shipments of UHD displays to reach 15.6 million units in 2017 (up from 10 million in 2016) and earn $14.6 billion in revenue in the U.S. Global 4K UHD sales are expected to jump from 53 million units last year to 82 million in 2017.
“Growth of the 4K UHD market continues to dwarf the transition to high-definition television,” Koenig explains. “Just 3 years since introduction, cumulative sales of 4K UHD displays are forecast to hit 15.6 million units, while sales of HDTVs reached 4.2 million units in their first 3 years on the market.”
Display size is growing in tandem with resolution. In 2016, 73 percent of 55"+ TVs shipped were 4K-ready, according to IHS Markit, which predicts that penetration for sets 50" and higher will reach 100 percent 4K by 2018. This year, some 78 million 4K TVs will sell, up 40 percent from 2016.
2017 will see a great deal more HDR activity as the addressable base of 4K UHD TVs with HDR widens. In North America, the penetration of HDR TVs will be between 10 percent and 14 percent at year-end, reckons Futuresource.

Is Dolby Set for HDR Monopoly?

Innovation is never straightforward. Various HDR flavors are being implemented at various stages of the production to distribution chain, filtering into retail and risking consumer confusion about the product.
The baseline standard is HDR10, which is nonproprietary, defines a 10-bit video stream, and is included in the Blu-ray Disc Association’s specification for UHD Blu-rays. It also aligns with UHD Alliance certification.
Complementing and competing with HDR10 is Perceptual Quantizer (PQ), designed by Dolby and marketed as Dolby Vision (which also bundles the Atmos surround sound format). The chief difference is that Dolby Vision can manage HDR from camera through post-processing, production, and on to final delivery. Dolby claims, with some justification, that it delivers superior contrast, brightness, and color. Dolby Vision delivers 12-bit color depth and is a future-proofed format that can play back on displays with peak brightness up to 10,000 candelas per square meter, a level no consumer display currently offers.
In addition, rather than providing one static brightness value for the end display as HDR10 does, Dolby Vision carries this metadata for every frame. This gives creators full control over the final image, a quality that is highly prized in Hollywood.
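Both HDR10 and Dolby Vision build on the same underlying transfer function, the SMPTE ST 2084 PQ curve, which maps absolute luminance up to 10,000 nits onto the signal range. A minimal sketch of the inverse EOTF (the encoding direction), using the constants from the standard:

```python
# SMPTE ST 2084 (PQ) constants, as defined in the standard
M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_encode(nits):
    """Inverse EOTF: absolute luminance (0-10,000 cd/m2) -> signal in [0, 1]."""
    y = nits / 10000.0
    return ((C1 + C2 * y**M1) / (1 + C3 * y**M1)) ** M2
```

The curve is deliberately perceptual: a typical 100-nit SDR peak already lands above half the signal range, leaving the upper code values for highlights.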
As a result, Dolby Vision has made considerable headway among U.S. studios. Lionsgate, Universal Studios, and Warner Bros. will be releasing Dolby Vision UHD Blu-ray Discs early in 2017.
A third variant, which has gained more ground in Europe thanks to the prevalence of public service broadcasters (PSBs), is Hybrid Log-Gamma (HLG). Developed by the BBC and NHK, it is considered easier to introduce into a live broadcast workflow: unlike Dolby Vision and HDR10, HLG works without additional metadata encoded into the video source. It is also backward compatible, displaying in standard dynamic range on receiving devices that aren’t HDR-capable. Consequently, this form is more useful to PSBs.
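HLG’s backward compatibility comes from its hybrid transfer curve, defined in ITU-R BT.2100: a square-root segment for dark and mid tones (close to conventional SDR gamma) joined to a logarithmic segment for highlights. A sketch:

```python
import math

# HLG OETF constants from ITU-R BT.2100
A, B, C = 0.17883277, 0.28466892, 0.55991073

def hlg_oetf(e):
    """Scene linear light in [0, 1] -> HLG signal in [0, 1].
    The square-root segment behaves like familiar SDR gamma, which is
    why an HLG feed still looks reasonable on an SDR-only display."""
    if e <= 1 / 12:
        return math.sqrt(3 * e)
    return A * math.log(12 * e - B) + C
```

The two segments meet smoothly at e = 1/12 (signal value 0.5), so no per-scene metadata is needed to reconstruct the picture.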
On top of those options, Dolby rival Technicolor has its own capture-to-display system called Advanced HDR.
It’s early days though, and wary of heading toward a dead end, content owners and display vendors are hedging their bets. Amazon and Netflix content is compatible with both HDR10 and Dolby Vision. Any TV with a Dolby Vision decoder will also be able to play back HDR10.
Panasonic supports HDR10 and HLG, but not Dolby Vision in its flagship TX-65EZ1002B, a 65" OLED. Sony’s 4K HDR TVs now support Dolby Vision, HDR10, and HLG. Also on board with Dolby Vision are TCL and Roku.
LG has gone whole hog in support of all four HDR flavors, including Technicolor’s, in its new top-of-the-line model. In return, Technicolor says it will use LG OLEDs exclusively as reference monitors for colorists working in facilities it owns.
Devices capable of playing back HDR are multiplying. Google’s Chromecast Ultra, Sony’s PS4 Pro, and Nvidia’s Shield TV streaming box are among the latest.
In Europe, the feeling is that HLG will assume priority in sports broadcasts with PQ being more popular for drama. In any case, the ITU standard recognizes both HLG and PQ and, crucially, enshrines the ability to convert between them.
“So if a Hollywood movie is delivered to a broadcaster in PQ, it can be converted to HLG for delivery, easily, and without damaging the output,” says Andy Quested, chairman of the ITU group responsible for its HDR standard.
Consumers need to be aware that the HDR label on a device doesn’t necessarily mean the representation of the image is really HDR.
“Some have a maximum light output of 1,000 nits, and some have 400 nits, which isn’t sufficient for HDR,” said Florian Friedrich, managing director at HDR testing services AVTOP and Quality.TV, speaking at CES. “It’s important that the TV can represent colors with luminance that are saturated at high levels, not just low levels.”

The number of HDR formats will likely be whittled down, and retail marketing could become an issue. Vendors don’t want the added expense of incorporating more bits into their displays, and studios don’t want to keep mastering multiple versions, as they currently must.

IP Production Comes of Age

Live sports continued to spearhead pay TV operator moves into 4K UHD with rapidly evolving IP and IT technology likely to prompt further investment.
“Probably the most significant shift in broadcast tech we’ve seen through 2016 has been the continued rise of the IP-enabled broadcast operations center,” says Rory McVicar, project manager of CDN EMEA at Level 3 Communications. “As this trend accelerates, Ethernet is increasingly being looked to as the common standard for broadcasters embracing OTT and multiscreen viewing.”
Vendors began the year aligned to different IP paths but gradually shifted behind the Alliance for IP Media Solutions (AIMS) aided by endorsements from Sony, Evertz Microsystems, and vendor trade body IABM.
What AIMS managed to demonstrate successfully last year was interoperability, with a showpiece working studio in HD at Belgium’s VRT being the year’s prime example. The actual impact on broadcasting of IP/IT has, however, been minimal in real terms.
“Most companies are either still in the planning stages or are yet to start formally thinking about IP, but there is definite forward momentum,” reports Adam Cox, senior analyst at Futuresource Consulting.
According to Futuresource’s Video Server Market Overview report, only 9 percent of video server ports were IP by the end of 2016 (up from 5 percent in 2015). The next key stage is the true separation of audio, video, and data signals along with synchronization information. This has been encapsulated as TR-03, which is being built into the new standard ST 2110, currently winding its way through SMPTE with ratification unlikely before 2018.
TR-03 itself is composed of a number of existing standards: RFC 4175 for video, AES67 for uncompressed audio, and SMPTE 2059 for clock synchronization. RFC 4175 is important as a means of reducing the volume of data transported since it will recognize and carry only active pixels, or the visible part of the video.
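The saving from carrying only active pixels is easy to quantify: an SDI raster includes horizontal and vertical blanking (for example, 2200x1125 total samples for a 1920x1080 active picture), so stripping it trims roughly 16 percent of the data on the wire. A back-of-envelope sketch using an illustrative HD raster at 10-bit 4:2:2, 60 fps:

```python
def gbps(width, height, fps, bits_per_pixel=20):
    """Uncompressed video data rate; 10-bit 4:2:2 averages 20 bits/pixel."""
    return width * height * fps * bits_per_pixel / 1e9

full_raster = gbps(2200, 1125, 60)   # 1080p60 incl. blanking: ~2.97 Gbps (3G-SDI)
active_only = gbps(1920, 1080, 60)   # RFC 4175 active picture: ~2.49 Gbps
saving = 1 - active_only / full_raster  # roughly 16% less data transported
```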
The work of the Advanced Media Workflow Association (AMWA) is also significant in providing for the ability to plug in a device and make it known to the IP network and then have an open way for that device to describe all of the things it is capable of doing. AIMS has adopted AMWA’s Networked Media Open Specifications (NMOS) protocol, which will possibly be incorporated into SMPTE 2110.
Beyond even this, AMWA has begun exploring how NMOS will work in practice. Ultimately, this will lead to new specifications that will allow the industry to truly embrace data center and cloud technologies and feel confident relying on another company’s platform, hardware, and servers.
The question facing broadcasters is not whether to invest in IP—the move is inevitable, and the benefits from cost savings to greater editorial flexibility are compelling. The real question is whether or not to invest now.
“The economics of IP today make more sense at the enterprise level and probably do not yet stack up for smaller projects,” admits Tim Felstead, head of product marketing at SAM. “The industry has to make a case for IP beyond pure return on investment. IP is not swapping one technology for another. It offers a whole new approach to market.”
This also requires a shift in business model among vendors from selling expensive black boxes on premises (a capital expenditure for customers) toward a revenue-based model based on operating expenditures.
In other words, media organizations are being encouraged to rent or subscribe to services—playout, for example—running virtually in a data center.
“True adoption of IP will come when IP architectures are embraced to bring about all the benefits IP can provide,” says Futuresource senior analyst Adam Cox. “This is the next step, but we’re not there yet, and most of the industry won’t be there in 2017 either.”

4K Live Streaming

4K content still represents a lower percentage of streaming content compared with HD and even lower-resolution video. For live, it’s still expensive from a computing standpoint, especially if you want to support 4:2:2, HDR, and high frame rates, notes Telestream CTO Shawn Carnahan.
“Today’s 4K OTT levels are small but rapidly growing,” reports Ian Munford, EMEA director of product enablement and marketing, media services for Akamai. “Clearly there are a range of on-demand 4K movies available through various SVOD streaming services, but we are seeing many more live streaming events, particularly sports, taking advantage of online 4K delivery.”
While there are regional technology and infrastructure differences, in Q3 2016 Akamai reported that global adoption of broadband services over 15Mbps (fast enough to receive 4K content) had increased 54 percent year over year to reach 22 percent.
The technical challenges to delivering live 4K OTT services center on improving the consistency and reliability of high bitrate 4K streams from ingest through to delivery—at scale. The challenge is multifaceted and requires different thinking throughout the workflow.
“If you can’t reliably ingest a live 4K video stream into a CDN, you can’t deliver a high-quality viewing experience,” explains Munford. “Likewise, if you can’t stream live 4K video consistently without buffering, then the viewer experience will be dreadful. Traditional streaming technologies use TCP as a transport protocol. This was designed to ensure reliability, but not deliver high-bitrate video, where bottlenecks in the internet may impact quality of experience.”
The combination of ingest acceleration and delivery acceleration has enabled the delivery of live 4K sporting events online. Munford believes we’ll see a maturing of live OTT technologies in 2017, “specifically ... in areas such as live origin services, live transcoding, and 4K delivery.”
Level 3 also thinks 2017 may herald the true beginning of the upward curve, with consumers expecting greater quality in their streamed media. “The actions of content providers will further stoke this growth,” says McVicar.
On Nov. 12, UFC.TV claimed the world’s first global delivery of a live event in 4K at 60 frames per second. The SVOD e-ticket cost $59.99.
“We were very excited to showcase this on such a big stage,” says Chris Wagner, EVP and co-founder at UFC digital partner NeuLion. “I don’t know any other service other than UFC that is global OTT with a digital ticket in 4K/60. I don’t know of anyone else who has done this.”
NeuLion delivered the 4K show from Madison Square Garden as an HEVC (H.265) stream packaged in MPEG-DASH.
This article appears in the March 2017 issue of Streaming Media magazine.

What a brain the size of a planet (on a chip) can really do

RedShark News

Nvidia and Intel are developing their highest-grade chips to drive autonomous cars, which will give a boost to the seemingly inevitable transformation of vehicles from transport cages into multimedia zones. What these tiny supercomputers can also be used for is perhaps more important.
http://www.redsharknews.com/technology/item/4450-what-a-brain-the-size-of-a-planet-on-a-chip-can-really-do


After trailing the chips last autumn and announcing plans to pop them into a self-driving prototype called BB8 (after Star Wars’ sentient robot), Nvidia has teamed with German firm Bosch to build a car with AI smarts.
Nvidia founder and CEO Jen-Hsun Huang said Bosch would build automotive-grade systems for the mass production of autonomous cars — expected to begin in five years’ time.
The chip powering this is called Xavier (after the X-Men character’s mind-bending powers) and is capable of 20 trillion operations per second from its seven billion transistors while drawing just 20 watts of power.
“This is the greatest [System on a Chip] endeavour I have ever known and we have been building chips for a very long time,” Huang said, announcing the development last year.
Now, as Marvin, Douglas Adams’ character from The Hitchhiker’s Guide to the Galaxy, might have said: “Here I am, brain the size of a planet, and you ask me to drive to Sainsbury’s.”
So what else can Xavier do? Well, it is rather good at computer vision. It has to be, in order to recognise objects, send data back to the cloud, and make instantaneous decisions about traffic or road conditions.
Xavier contains, for example, a computer vision accelerator which itself contains a pair of 8K resolution video processors. They are HDR, of course.
Aside from self-driving cars, computer vision or smart image analysis is the basis of gesture-based user interfaces, augmented reality and facial analysis.
The Computer Vision program at rival chipmaker Qualcomm is focused on developing technologies to enrich the user experience on mobile devices. Its efforts target such areas as Sensor Fusion (no, we’re not sure either), Augmented Reality, and Computational Photography.
As part of the effort, Qualcomm says new algorithms are being developed in areas such as advanced image processing and robust tracking techniques aimed at “real-time robust and optimal implementation of visual effects on mobile devices”.
Computer vision can, for example, be used to enable AR experiences where an object’s geometry and appearance is unknown. Qualcomm has developed SLAM (simultaneous localisation and mapping) technology for modelling an unknown scene in 3D and using the model to track the pose of a mobile camera in real-time. In turn, this can be used to create 3D reconstructions of room-sized environments.
Website Next Big Future runs this math: 50 Xavier chips would produce a petaOP (a quadrillion deep learning operations per second) for 1 kilowatt. A conventional petaflop supercomputer costs $2-4 million and uses 100-500 kilowatts of power. In 2008, the first petaflop supercomputer cost about $100 million.
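That arithmetic checks out as simple multiplication, taking the 20-trillion-operations and 20-watt figures quoted above:

```python
chips = 50
tops_per_chip, watts_per_chip = 20, 20           # figures quoted for Xavier

total_ops = chips * tops_per_chip                 # 1,000 TOPS = 1 petaOP/s
total_power_kw = chips * watts_per_chip / 1000    # 1 kW for the rack
ops_per_watt = total_ops * 1e12 / (total_power_kw * 1000)  # 1 trillion ops/W
```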
Nvidia is renowned for the GPUs which power photo-realistic computer graphics. Now the GPUs also run deep learning algorithms. Sure, we can have autonomous cars, but I wonder what a machine running AI and CG can do? In any case, I want and expect a Xavier in my smartphone soon.

Wednesday, 22 March 2017

Ten things AI can do for you

Knect365 for TV Connect
AI technologies are set to impact the media sector at all stages, from media production to delivery. Part of this revolution has already reached our homes: voice control with Apple TV, creative photo editing with Prisma, Snapchat lenses.
Perhaps the most successful to date has been Netflix’s use of big data and detailed analytics, giving the SVOD service “a significant advantage when it comes to the ability to accurately target viewers with highly targeted shows,” says Futuresource analyst David Sidebottom.
Increasingly, the interface for content discovery and wider Internet of Things in the home will be voice, via virtual assistants like Amazon Alexa in which voice biometrics and improved contextual understanding will become points of differentiation for AI platforms.
Here are nine other examples of AI’s development:
Automated lip reading
Consuming and creating visual content online poses challenges for people who are blind or severely visually impaired.
Facebook’s automatic alt (alternative) text generates an audio description of a still photo (not yet video) using object recognition technology based on a neural network that, according to Facebook, "has billions of parameters and is trained with millions of examples".
Facebook launched it on iOS screen readers for the English language and plans to add the functionality for other languages and platforms.
The hearing-impaired can benefit from automatic subtitles. Researchers at Oxford University’s Department of Computer Science developed AI system LipNet capable of discerning speech from silent video clips and scoring a 93.4% success rate versus 52.3% from professional lip-readers.
According to MIT Technology Review’s James Condliffe, LipNet analyses the whole sentence rather than individual words, enabling it to gain an understanding of context (there are fewer mouth shapes than there are sounds produced by the human voice).
Another Oxford University study, reported in New Scientist, trained Google’s DeepMind AI on a series of video clips featuring a broader range of language and greater variation in lighting and head positions. The AI was able to identify 46.8% of words correctly, trumping humans, who managed just 12.4%.
As Condliffe points out, it’s not hard to imagine potential applications for such software. In the future, Skype could fill in the gaps when a caller is in a noisy environment, say, or people with hearing difficulties could hold their smartphone up to ‘hear’ what someone is saying.
Automatic subtitles – cost saving
BBC R&D has road-tested technology that aims to automatically recover subtitles for short clips taken from TV programmes. Around 850 of the 1,000 clips used to create an app celebrating Sir David Attenborough’s ‘Story of Life’ feature subtitles recovered from the BBC archive without human intervention.
As the technology’s developer Mike Armstrong points out, subtitling more than 1,000 video clips from scratch would have been challenging for a team working on a tight budget.
Audio was extracted from each clip and passed through a speech-to-text engine to create a transcript. This was turned into a series of search strings that were combined with the clip metadata to locate a subtitle file for the TV programme which best matched the clip. Further processing then worked out where the clip came from in the programme and extracted the relevant subtitles.
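The matching step can be pictured as a fuzzy string comparison: slide the speech-to-text transcript across each candidate programme’s subtitle text, and the best-scoring window identifies both the source programme and the clip’s rough offset. A simplified illustration using Python’s difflib (the BBC’s actual implementation is not public, and the programme names and lines below are invented):

```python
from difflib import SequenceMatcher

def locate_clip(transcript, programmes):
    """Find which programme a clip's transcript came from, and roughly
    where, by sliding the transcript across each subtitle text."""
    words = transcript.lower().split()
    best = (0.0, None, 0)
    for name, subtitle_text in programmes.items():
        sub_words = subtitle_text.lower().split()
        for start in range(max(1, len(sub_words) - len(words) + 1)):
            window = " ".join(sub_words[start:start + len(words)])
            score = SequenceMatcher(None, " ".join(words), window).ratio()
            if score > best[0]:
                best = (score, name, start)
    return best  # (similarity, programme, word offset within programme)

# Invented example data standing in for archive subtitle files
programmes = {
    "ep1": "the female must make the decision of her life which male to choose",
    "ep2": "in the deep ocean light fades quickly and pressure builds",
}
score, name, offset = locate_clip("light fades quickly and pressure", programmes)
```

An imperfect speech-to-text transcript still scores highest against the right programme, which is why the approach tolerates recognition errors; the reported failures come from cases such as mismatched programme versions.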
Around 200 clips needed manual editing due to inconsistencies in the data, caused in some instances by the algorithm failing to match UK programme versions with international ones.
“The challenge of recovering subtitle files has been valuable in proving the effectiveness of the technique and provided it with its first public exposure,” says Armstrong.
Content manipulation
A number of recent developments in machine learning research will allow picture and movie content to be edited with Photoshop-like tools that manipulate conceptual elements of an image instead of individual pixels. It will soon be possible to directly edit facial expressions and facial features. Twitter account @smilevector may not be the most technically advanced example, but it is a demonstration of the options. The Neural Photo Editing tool is more advanced.
It will also be possible to remove unwanted objects with a technique called in-painting, using a simple point-and-click interface. There’s consumer software for this already, and an academic paper on the subject from researchers at Stanford.
Product placement
As digital video recorders and on-demand video proliferate, advertisers face challenges from viewers who skip over their commercials or who ignore traditional online ads. Product placement might hold the answer – but not in its traditional form which is complicated and time-consuming (and for which an advertiser might need to commit a year in advance). It’s also a gamble – what if the spot ends up on the cutting room floor, or what if it ends up being not to an advertiser’s liking?
London-based MirriAd deploys technology that places brands digitally into content in real time, using demographic data to target audiences. The technology includes a planar tracker able to recognise the lighting characteristics of specific zones in a video and embed an object (a drinks can in a fridge, for example) into the image, taking the lighting into account. Samsung used the technology to advertise its home appliances within fifty episodes of dramas streamed on Chinese OTT service Youku.
Security
Traditional cybersecurity approaches are typically reactive: as a new threat is discovered, new rules and countermeasures are added to the set of techniques available to the cybersecurity software.
“As attacks grow in complexity and scale (sometimes involving millions of compromised machines), AI is being deployed to discover new attacks without human supervision by identifying anomalies in the distributed traffic patterns (in terms of content, frequency, and synchronicity of traffic),” explains Pietro Berkes, Principal Data Scientist, Nagra Insight, Kudelski Group. “In contrast to the traditional methods, this AI approach allows reacting to attacks that had not been previously encountered.”
AI is also used to address fraud related to complex data, like video. In order to identify pirated video, algorithms need to match it to legitimate content, even if the video has been distorted (e.g. cropped, reshaped, or had its colours altered). NAGRA recently announced the launch of a media services offering that already makes use of video recognition technology to identify illegal streams.
Subscriber behaviour
Understanding subscriber behaviour is critical for many key activities: retaining customers, planning marketing campaigns and promotions, negotiating licensing rights and so on. In this context, according to Berkes, traditional business intelligence approaches quickly reach their limits, as the outcome of these activities does not depend on any one factor.
Take the fight against churn as an example. A predictive AI algorithm can take hundreds of features into account in order to compute the probability of churning: viewing patterns, purchase frequency, demographic data, devices used, and geographical area.
“AI can even help understanding why a user is churning, and suggest the most effective action to take in order to avoid churn, based on past experience with similar subscribers,” says Berkes.
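The mechanics are standard supervised learning: encode each subscriber as a feature vector and fit a classifier to historical churn labels. A toy sketch with a hand-rolled logistic regression on two features (the feature names and data are purely illustrative, not from any Nagra product):

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def fit_logistic(rows, labels, lr=0.5, epochs=2000):
    """Tiny stochastic gradient descent; rows are feature vectors, labels 0/1."""
    w = [0.0] * (len(rows[0]) + 1)  # last weight is the bias term
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x + [1.0])))
            for i, xi in enumerate(x + [1.0]):
                w[i] -= lr * (p - y) * xi
    return w

def churn_probability(w, x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x + [1.0])))

# Illustrative features: [hours viewed last month, purchases last quarter],
# scaled to [0, 1]. Low-engagement subscribers churned in this toy history.
history = [[1.0, 0.75], [0.9, 0.5], [0.75, 1.0], [0.1, 0.0], [0.05, 0.25], [0.15, 0.0]]
churned = [0, 0, 0, 1, 1, 1]
w = fit_logistic(history, churned)
```

A real deployment would use hundreds of such features and an off-the-shelf model, but the principle is the same: the fitted weights turn a subscriber’s behaviour into a churn probability that can trigger a retention action.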
CRM
Customer relationship management is key to industry sales and marketing, and a logical extension of the big data being hoovered up by organisations about customers is to have it processed by machine. Oracle and Microsoft are developing AI-assisted CRM software, but it is Salesforce that has the lead. Last September it announced Einstein – an AI that learns from CRM data, email, calendar, social, ERP, and IoT sources and delivers predictions and recommendations in the context of what the business is trying to do.
It got there with a team of 175 data scientists and a string of AI technology acquisitions including MetaMind. In sales, for example, expect Einstein features that provide insights into sales opportunities, offer predictive lead scoring, recommend connections and next steps, and automate simple tasks such as logging phone calls and other interactions with customers.
Enhanced compression
Techniques to beat traditional encoding and decoding will permit the transmission of high-quality media content even in regions with low internet and mobile bandwidth. Artificial neural networks (ANNs) are being used not only to build better compression methods but also to artificially clean up and increase the resolution of transmitted images (a technique known as ‘super-resolution’).
To become the number one provider of live video, Twitter is facing a non-trivial issue with content distribution: a large part of users connect from a mobile device, possibly with a low-bandwidth mobile connection (even more so in markets outside Europe and the US).
Twitter acquired Magic Pony Technologies in June 2016 for $150m to develop ways to reconstruct HD video from a low-definition, compressed stream. Super-resolution enables Twitter to transmit content in low definition (thus consuming less bandwidth) using standard encoders and decoders. The ANNs clean up the compression artefacts and upsample images to higher resolution without the result looking pixelated.
“The ANNs are trained with several hours of HD video and learn what typical images look like; more technically, they build a complex model of the statistics of natural images,” explains Nagra’s Pietro Berkes. “Given a corrupted, low-resolution image, they are able to remove all the defects that make it ‘atypical’ by comparing it to this statistical model.”
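The bandwidth argument reduces to simple arithmetic: pixel count, and roughly the uncompressed data rate, scales with the square of the linear resolution, so sending a quarter-resolution stream and upsampling on the device cuts the payload by about 75 percent. A rough sketch that ignores codec efficiency differences:

```python
def pixels(width, height):
    return width * height

hd = pixels(1920, 1080)       # what the viewer sees after super-resolution
quarter = pixels(960, 540)    # what actually crosses the network
reduction = 1 - quarter / hd  # 0.75: three quarters fewer pixels transmitted
```

The trained network’s job is to make up the difference, hallucinating plausible detail consistent with its statistical model of natural images.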
Twitter is also working to develop better image encoding methods to outperform common JPEG standards, as is Google.
“A significant advantage of AI over traditional codecs is that they can be focussed on a particular subset of content, like nature documentaries, and trained to apply a compression model that is specific to this content, potentially saving more bandwidth,” says Berkes.
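The pipeline Berkes describes – upsample a low-resolution frame, then remove the defects that make it "atypical" – can be sketched with a toy stand-in. In a real system a trained ANN replaces both steps; the function names and the 3×3 mean filter below are purely illustrative:

```python
def upscale_2x(img):
    """Naive 2x upscale: duplicate each pixel into a 2x2 block."""
    out = []
    for row in img:
        wide = [p for p in row for _ in (0, 1)]
        out.append(wide)
        out.append(list(wide))
    return out

def smooth(img):
    """3x3 mean filter, a crude stand-in for the learned clean-up pass."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            patch = [img[ny][nx]
                     for ny in range(max(0, y - 1), min(h, y + 2))
                     for nx in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = sum(patch) / len(patch)
    return out

low_res = [[0, 255], [255, 0]]          # 2x2 greyscale frame as transmitted
high_res = smooth(upscale_2x(low_res))  # 4x4 reconstruction at the receiver
```

The learned version differs in that the "smoothing" is not a fixed filter but a statistical model of natural images, so it sharpens plausible detail rather than blurring it.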
When systems go wrong
Microsoft trained its chatbot Tay on Twitter last year with disastrous results when it began spewing racist and sexist tweets, proving that AI systems are only as smart and benevolent as the training data used to teach them. They can also be a customer relations nightmare.
It’s why a group of luminaries, including entrepreneur Elon Musk and Facebook’s AI chief Yann LeCun, were among 2,000 signatories to a set of guidelines published earlier this year – the 23 Asilomar AI Principles – aimed ultimately at protecting humankind from rogue AI.
The guidelines dug into the ethics of AI and even included a principle aimed at averting a Terminator-like “arms race in lethal autonomous weapons."
Others were more prosaic. Principle 12, on personal privacy, states: "People should have the right to access, manage and control the data they generate, given AI systems' power to analyze and utilize that data."
Whether commercial organizations will take heed of this as they compete for business is moot. Just don’t mention SkyNet.

Why Mixed Reality Is the Future of Immersive Broadcasting

StreamingMedia

Intel and Microsoft are among those building tools for a merged reality video experience that could be streamed directly to the home.


Mixed Reality merges real and virtual worlds to produce environments where physical and digital objects co-exist and respond to users in real-time. There are some who believe this could be the future of entertainment.
Distinct from the full immersion of virtual reality (VR) and from augmented reality (AR), which superimposes a graphical layer over the real world, Mixed Reality—or merged reality—overlays synthetic content that is anchored to the real world and, importantly, interacts with it in real-time.
The BBC’s Research and Development division is investigating Multiplayer Broadcasting [http://www.bbc.co.uk/rd/projects/multiplayer-broadcasting] which blends live TV with the interactivity of online games by placing potentially hundreds of thousands of avatars alongside presenters in a shared virtual world. It sees this as “the next iteration of audience participation shows in a broadcast-VR enabled future.”
Next week, a pioneering example of the format will debut on Norway’s TV Norge. Producer FremantleMedia describes Lost in Time as an interactive mixed reality format which presents contestants playing "inside" a series of video games and incorporates audience play-along via second screens. In January, a StreamingMedia.com article described this in depth.
While the first iteration has been recorded, FremantleMedia and its co-producer, The Future Group, see the TV Norge production as a testing ground and a showcase for future developments, which include streaming to a VR app and real-time viewer interaction with the video game world and studio contestants.
As Lost in Time shows, audiences can view MR content on conventional flat screens but the real potential lies in the interactivity afforded by streaming to headsets.
Magic Leap is perhaps the most fabled prototype, but other VR/AR and holographic headsets will be released ahead of it.
Acer, for example, shipped a Windows headset to developers this month that can support both VR and AR experiences. Developers including the U.K.’s Rewind VR are creating content for Microsoft HoloLens. Metavision has released an SDK for its Meta 2 AR headgear, and later this year Intel will launch Project Alloy, a headset that uses Intel RealSense cameras to capture data about the user’s real-world environment.
According to Intel’s Sales Director, Steve Shakespeare, merged reality is more dynamic and natural than other virtual world experiences such as virtual reality, since it allows the user to experience a unique blend of physical and virtual worlds.
“In merged reality, viewers can seamlessly interact with and manipulate their environment in both physical and virtual interactions,” he says. “Similarly, while augmented reality overlays some digital information on the real world, it does not integrate the two in the way that merged reality does.”
The technical challenges are not small, though. In broadcast in particular, the biggest will be how to connect merged reality simulations live.
“At the moment, no networks currently exist that can deliver live mixed reality visuals due to the large amount of information that has to be transmitted,” Shakespeare says. “Significant investment in network infrastructure, such as 5G, will be crucial to creating live mixed reality in broadcast.”
Intel is working with key players in the industry, including the likes of Ericsson and ZTE, to make 5G a reality. The challenges beyond this lie in how developers analyse the full range of data that mixed immersive reality requires—analysing large volumes of data in real-time, as they would need for live broadcast, requires large amounts of computing power.
Intel is addressing this. “In the next year, we plan to include an i7 Kaby Lake processor and Movidius technology into our Project Alloy merged reality headset which will make vision processing seamless, and make the technology invaluable for the live broadcast environment,” explains Shakespeare.
“Secondly, with many merged reality devices, you still end up tethered to external sensors or cameras, which present serious logistical challenges when space is a crucial asset. This is why, in Intel’s Project Alloy headset, the Intel RealSense cameras are attached directly to the headset to allow you to move around the room freely.”
Shakespeare describes the RealSense camera as "seeing" like a human eye. It can sense depth and track human motion. As a result, the experience “becomes much more organic” for the viewer.
Project Alloy is constantly evolving to match the speed of the firm’s next generation technology development. “As we keep incorporating new generations of vision processing technologies, faster processors, and more nuanced sensors, merged reality will become increasingly specialized in intertwining the real and the digital world,” he predicts. “This will create a new generation of immersive broadcasting.”
Further on, Intel expects to introduce a range of additional sensory haptics technology to blur the boundaries between real and virtual even further.
“Our sense of touch is incredibly important, and we’re used to having it in every interaction we have,” says Shakespeare. “This will be crucial for creating the most immersive and natural experience possible.”
U.K. virtual reality and VFX tools developer Foundry is also investigating mixed reality and the virtual production techniques needed for MR content production.
According to Dr. Jon Starck, head of research, mixed reality means the content is connected with the environment around the viewer. This, he says, allows for a more immersive, interactive experience in which the viewer and presenter are essentially able to communicate and steer the delivered content.
“The future of mixed reality could evolve broadcast into a completely non-linear format, adapting content according to the interests or direction of the viewer,” Starck says.
Early stage examples of these formats are already operating with national broadcasters. The BBC showcased CAKE (Cook-Along Kitchen Experience) in 2016, an object-based broadcasting experiment in which customised video content adapted depending on the recipe preference of the viewer.  
However, when it comes to MR experiences, interactive non-linear media is still very much in the research and development stages. One of the main challenges the Foundry comes up against within this format is that of visual quality.
“If you’re incorporating real-time interaction—say of a television presenter or actor—it can be exceptionally difficult to deliver the level of quality that we are accustomed to when watching standard television,” suggests Starck.
There are many ways in which this process is being streamlined to create a more seamless interactive environment. One method is the use of digi-doubles – photo-real interactive content created using game engines. Foundry is able to create photo-real 3D digital character models that can be animated. Through this process, according to Starck, human scans are used to build photo-real models that are incorporated into the virtual or mixed reality, creating a more realistic outcome, both visually and through interaction.
Another method is Holoportation, which Microsoft describes as “a new type of 3D capture technology that allows high-quality 3D models of people to be reconstructed, compressed, and transmitted anywhere in the world in real-time. When combined with mixed reality displays such as HoloLens, this technology allows users to see, hear, and interact with remote participants in 3D as if they are actually present in the same physical space.”
According to Starck, Holoportation is a primary example of R&D in the area that could lead to the streaming of mixed reality content to the home.

Monday, 20 March 2017

ProAV: Ready to move forward

AV Magazine


The positive vibe expressed at ISE was no trade show bubble if respondents to AV’s examination of the market in Western Europe are anything to go by. Despite the macro-economic and political challenges and uncertainties that lie ahead, there is a strong and encouraging message among pro-AV kit vendors and service providers.
“If asked twelve months ago I would have been somewhat concerned since all the political and economic upheaval during 2016 at times threatened to have a significant impact,” says Colin Farquhar, ceo of Exterity. “Instead, the market in 2017 is buoyant and feels ready to move forward.”
Initially, the result of the referendum gave “a huge shock to companies and the stock market showed signs of negativity,” says Robin van Meeuwen, ceo of Crestron EMEA. “However, post Brexit and it’s business as usual. Companies have stuck to their plans and commitments which is good for everyone.”
“The underlying demand for AV is very strong and is unlikely to be impacted by economic or political factors,” agrees Rob Muddiman, EMEA sales director, ZeeVee.
Brexit talking points
Naturally the big talking point, not least because of the lack of clarity in potentially revised trade agreements, is the UK’s pending departure from the EU. The issue is one that affects any business which may benefit from a weakened pound on export while paying more for imported goods.
“As a UK-based manufacturer the exchange rate is a positive for us on the one hand and certainly makes us more competitive internationally,” says Farquhar. “The risk is that these benefits are cyclical and short term. At the same time the goods I require to manufacture my products are increased in cost and then we face the challenge of passing that cost on to the customer or accepting reduced margins.”
Price of course is one factor among many for most AV customers. “Value is just as important as price, as is the capability of the product you are delivering,” says Farquhar, who has promising sales forecasts for the year ahead.
DB Systems invested £1.5 million in new rental stock in 2016, ranging from 2.6mm LED to TOLED screens. “We can only see this investment strategy continue through 2017 and as we approach Brexit, companies will need to get out there to sell their product and service,” says Oliver Richardson, the company’s group sales and marketing director. “The meetings and events industry has a big role to play in Brexit Britain. The events industry in the UK is particularly strong with many UK-based companies playing critical roles working across the EU week in, week out.”
For DB, the most important debate around Brexit is the customs union and whether Prime Minister May will withdraw from the current arrangements to strengthen the UK’s negotiating hand elsewhere. Worth noting too is the upcoming Dutch national election (March), French presidential run-off (May) and September’s federal election in Germany.
“No-one can say exactly what post Brexit will bring but we hope that the UK government and EU will find a way to work prosperously going forward,” says van Meeuwen.
“Like most businesses, we hope that the UK government and the EU will reach a trade agreement quickly,” says Pierre Gillet, vice-president, international sales at US-headquartered BrightSign. “Essentially, we need to wait and see what the new regulations will be and what, if any, difference they make. We trade in a great many countries with very different import regimes and standards, including Russia for example. We are confident that we will be able to adapt to whatever new situation emerges from Brexit.”
Muddiman shares the view of many respondents: “I think everyone in the business world is hoping for agreement on the trading relationships to be reached quickly, and that there will be the minimum of friction in the new arrangements in terms of tariffs or other restrictions.”
Currency flux
That said, Brexit appears to have had very little impact on business to date. “We don’t foresee any major concerns moving forward, but we are wary and will continue to monitor the landscape closely as more details of Brexit’s implementation unfold,” says Pete Egart, vice-president EMEA, Daktronics.
With multiple offices and two manufacturing facilities located in Europe, Daktronics is “very well positioned to serve customers with local resources. This helps us limit the risk associated with any political fallout that may arise.”
Adds Gillet: “Obviously, we’ve seen the currency fluctuations but as the cost of the (OVP) player is normally a very small proportion of the overall budget for an installation this hasn’t made a huge difference.”
The large currency changes, which have seen the dollar strengthen 25 per cent against the pound and the Euro achieve near parity, are the most influential factor.
“These (changes) could well continue or even widen but, until we see how the negotiations take shape, it is almost impossible to calculate the real effects,” says Melinda von Horvath, vice-president, sales EMEA, for Peerless-AV. “Whatever happens, alongside our headquarters in the UK we are planning warehousing in mainland Europe and will continue to develop our already strong EMEA business with local staff in all major regions.”
The only impact distributor Maverick has seen is price pressure due to the weaker pound. “This has impacted on vendor pricing with many now showing material increases,” says Jon Sidwick, vice-president, Maverick Europe. “From a trading perspective it has had no noticeable impact on sales. Moving forward we know we already have the infrastructure to trade across multiple geographies in multiple currencies. We have one pan-EU ERP system and this is a massive advantage for our customers enabling us to quickly adapt to whatever the outcome of Brexit is.”
The evolution of the pro-AV market across Europe is closely linked to that of the local economy. Recently, investments in Italy and Spain have been put on hold or were kept to a minimum, reports Christophe Malsot, Crestron’s regional director South-West Europe and North-West Africa. “For a few months now, private sector companies, the tourism industry, administrations and governmental organisations have begun investing in new kit. In Switzerland and Germany, where the economy is robust, considerable pro-AV investments have been made. On the other hand, France is a stable market but is in a wait-and-see approach before the election.”
Crestron sees companies turning towards fast-growing AV markets which it marks out as the hospitality industry or multi-purpose stadiums.
“Paris, Geneva and Madrid are very dynamic cities in terms of AV integration, especially in the hotel industry, the retail market and showrooms,” reports Malsot.
Immersive demands
Barco spots demand for more immersive visualisation, more interactivity and more devices continuing beyond 2017. “This will internally drive higher resolutions and more integration across devices and technologies, including BYOD,” says Peter Pauwels, director, strategic marketing, ProAV. “The increased focus on virtual reality beyond the glasses will support that for sure. In a true visitor experience, the guests want to experience things together. This will drive 3D, higher resolution, immersive sound.”
In terms of innovation and the opportunity to push the technology, western Europe offers a wealth of opportunity.
“The European AV market is advanced by global standards and among the most creative in the world – certainly on a par with the US if not ahead sometimes,” is Gillet’s view. “There are a lot of very imaginative display walls going up, with twenty or more screens in random configurations. Europe is catching up quickly in interactive retail kiosk installations also, though there are probably more of these in the US.”
Collaboration in the corporate sphere is by far the largest growth area for Maverick, driven by the legitimisation of AV by companies such as Microsoft, Google and Cisco.
“For all sizes of organisation going forward, ensuring a strong collaboration system is a must-have, not a nice-to-have, and this is a huge opportunity for all verticals where communication is key,” says Sidwick.
Signage appears particularly strong with BrightSign and Daktronics suggesting the pro-AV market in Western Europe is in a state of expansion.
“There’s strong growth in sport venues, retail shopping centres and digital OOH applications,” says Egart, who attributes this in part to the internet and social media creating an environment “where today’s consumers expect to be engaged and entertained at levels far exceeding the past.”
For BrightSign, retail is “easily the biggest” vertical in Western Europe. “We are seeing more demand from education now, especially European universities which are following the US in making more use of on-campus signage. AV is also spreading beyond the corporate foyer and the boardroom on to the production floor.
“There is no doubt that the major capital cities are a driving force, both in terms of volumes of screens in use and in innovation. London is a pioneer in installing digital advertising hoardings on the underground network. Museums in Paris and Amsterdam, and fashion shows in Milan are also areas of strong innovation.”
Financial sector
At Exterity, demand in the financial sector remains strong, a trend which Farquhar puts down to organisations dependent on information, “especially in times of uncertainty,” looking to technology to help them distribute it more efficiently.
While there are reports of financial relocations from London to Berlin or Paris, from Farquhar’s perspective this is anecdotal. Another, possibly Brexit-related vertical with inflated interest in AV is government. “Logically it makes sense since these organisations across the EU are going through a lot of flux,” he observes. “They are reviewing how they operate as part of the whole process of Brexit and the need to communicate has never been higher.”
Verticals drive growth
Western Europe is one of the world’s leading markets in terms of technical innovation and creativity. Mobile device penetration is extremely high, providing the opportunity to address customers in a direct and very personal way. Look around any airport lounge or restaurant in the region, and most people will be looking at mobile devices. AV professionals in the region are becoming increasingly savvy in tying into that.
ZeeVee’s Muddiman spies growth in education and medical, particularly for AV-over-IP products “replacing proprietary matrices in more and more applications.” In every case, competitive pressures are driving demand, he says, “whether that’s universities having to compete by having better AV in learning rooms or retailers needing to find ways to engage visitors.”
“There is no doubt that AV-over-IP is being more readily adopted by IT integrators, however the more open minded and progressive AV integrators who are willing to invest will have an advantage,” urges Muddiman.
Peerless reports strong interest in retail and transportation, with LED its main focus in the short term, along with video walls and kiosks. “It will be interesting to see how the commercial applications for VR develop over time,” says von Horvath. “It’s an exciting place to be right now. Innovative collaborative products have the opportunity to transform the way people work, play, interact and learn.”


Friday, 17 March 2017

How 4G Will Merge Into 5G

The Broadcast Bridge
There will not be a giant single step to the fifth generation cellular network. Instead, there will be transition as operators upgrade and monetise existing 4G services. Meanwhile the standardisation process continues in parallel with multiple trials. With video a primary driver of traffic over mobile today and tomorrow, the bottlenecks in current technologies are beginning to show.
While momentum toward 5G gathers force, the near term story of network evolution is a reinvigorated 4G spec.
Variously marketed as 4.5G, 4.9G, LTE Advanced Pro or Gigabit LTE, the souped-up 4G is what analyst firm CCS Insight calls “incredibly disruptive.”
“Network operators see gigabit LTE as an opportunity to extend the return on their investments in 4G networks. This is going to be one of the hottest tech topics in 2017 as leading operators around the world upgrade their networks,” says CCS Insight analyst Ben Wood, who recently experienced the first commercial deployment of Gigabit LTE technology on Telstra's network in Australia.
At the end of last year Nokia introduced 4.5G Pro and announced plans for 4.9G, providing operators with the critical increases in capacity and speed that will be needed for future 5G operations.
As well as boosting capacity, 4.9G should lower latency to 10ms and increase throughput up to several Gbps.
On the radio side, the upgrade to 4G includes massive MIMO (pronounced "my-mo"), which stands for Multiple-Input, Multiple-Output. This method combines multiple antennas in devices and base stations to increase throughput dramatically without moving to 5G. Operators are also looking at carrier aggregation, another feature that can be implemented before 5G (though it will likely also be part of 5G). This combines multiple spectrum bands into a single connection, giving a device more throughput and capacity, and is already underway.

Nokia has released a massive MIMO adaptive antenna which can boost a cell's downlink capacity fivefold. The antenna uses 3D beamforming technology, whereby mobile signals are targeted directly at devices, rather than broadcast in all directions. 3D beamforming will also be part of the 5G spec.
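As a rough sanity check on the "Gigabit LTE" label, an idealised Shannon-capacity calculation shows how carrier aggregation and 4x4 MIMO multiply throughput. The carrier bandwidths and SNR figures below are hypothetical, and real links fall well short of this ideal:

```python
import math

def link_capacity_bps(bandwidth_hz, snr_linear, streams=1):
    """Idealised Shannon capacity; each MIMO spatial stream is modelled
    as an independent channel of the same bandwidth and SNR."""
    return streams * bandwidth_hz * math.log2(1 + snr_linear)

# Carrier aggregation: sum the capacity of each aggregated component carrier.
# Three hypothetical 20 MHz LTE carriers with illustrative linear SNRs.
carriers = [(20e6, 100), (20e6, 100), (20e6, 63)]
total = sum(link_capacity_bps(bw, snr, streams=4) for bw, snr in carriers)
print(f"{total / 1e9:.2f} Gbit/s")  # lands in the gigabit regime
```

The point of the sketch is the multiplication: more aggregated spectrum and more spatial streams scale capacity roughly linearly, which is how operators squeeze gigabit speeds from 4G-era radio.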
Early commercial rollout for 5G is expected mid-2020, the rough date by which ITU Radiocommunication (ITU-R) is expected to ratify a standard.
As part of that process, the 3GPP (a collaboration between telco associations to produce globally applicable standards) is working toward standardisation of a new access technology named 5G New Radio (NR). The first phase of the 5G NR specifications - 3GPP Release 15 - is expected to be completed next year. The second part - Phase 2, Release 16 - will be finalised in late 2019, allowing for commercial deployment from 2022 onward.
Accelerated timeframe
US chip maker Qualcomm feels the technology is at a point where there’s sufficient common ground to advance even these timeframes.
It is working with a number of other companies including Nokia, AT&T, NTT DOCOMO, Vodafone, Ericsson, BT, Telstra, Korea Telecom, Intel, LG, Swisscom, Etisalat Group, Huawei, Sprint, Vivo, and Deutsche Telekom to support the acceleration of the 3GPP 5G NR standard.

Its proposal is to use what is called non-standalone 5G NR signalling as part of 3GPP Release 15. This would use existing 4G LTE radio and core network technologies to advance large-scale trials and deployments from 2019, thereby making it less expensive for operators to make the transition to 5G NR.

Mobile video traffic is forecast to grow by around 50 per cent annually through 2022. By then it will account for nearly 75 per cent of all mobile data traffic, according to the latest Ericsson Mobility Report.
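Taken at face value, 50 per cent annual growth compounds dramatically; a two-line check (the five-year window is an assumption based on the forecast horizon) shows the implied multiplier:

```python
def compound(initial, annual_growth, years):
    """Volume after compounding a fixed annual growth rate."""
    return initial * (1 + annual_growth) ** years

# 50% annual growth sustained for five years multiplies traffic ~7.6x.
multiplier = compound(1.0, 0.50, 5)
print(round(multiplier, 1))  # 7.6
```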
Qualcomm’s announcement coincides with its own development of a modem capable of supporting 2G, 3G, 4G, and 5G on a single chip. According to Patrick Moorhead, writing in Forbes, this means that Qualcomm’s new X50 5G modems will be truly global 4G/5G modems that can even work in the few areas without 4G LTE coverage today. Because the modems allow for connectivity with 4G and 5G networks simultaneously, “companies will be able to deploy a single modem to support Gigabit LTE and 5G connectivity, which will be extremely important for any mobile solutions,” writes Moorhead.
There are dozens of trials already in the works. They include Telstra and Ericsson teaming to conduct a test of Telstra’s 5G networking during the 2018 Commonwealth Games on the Gold Coast.
5G is more than the radio piece though. The core network needs upgrading to support the extra traffic. And there are questions about which spectrum bands will be used and whether the industry can agree on a common band in different countries for harmonisation and roaming.
There is also a question of business model. CCS Insight predicts that most operators will struggle to find solid business cases in support of 5G any time before 2020. Everything from remote surgery to disaster recovery training, oil and gas exploration and automated cars are mentioned. Reaching rural areas currently under served by fast broadband is a key driver for telcos.
CCS Insight forecasts that there will be 550 million 5G subscriptions in 2022. North America will lead the way in uptake of 5G subscriptions, where a quarter of all mobile subscriptions are forecast to be for 5G in 2022.
According to analysts Ovum, more than 50 operators will be offering 5G services in close to 30 countries by the end of 2021. The majority of 5G subscriptions will be concentrated in a handful of markets - including the US, China, Japan and South Korea.
One of the 5G target specifications is a 1ms latency on the radio interface. This is a significant development for mobile wireless networking. Achieving this target specification opens a whole new world of use cases that have been out of the reach of wireless connectivity.
“Previously, applications sensitive to delay could not be deployed on a radio access network such as 3G HSPA or even LTE 4G networks, where the latency is measured in 10s of ms,” says Mark Gilmour, Director, Portfolio Strategy – Mobile at Ciena. “Achieving 1ms on the air interface allows applications such as immersive gaming, augmented reality, and autonomous vehicles to be realised.”
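The 1ms budget is tight even before any processing is counted: propagation delay alone caps how far away the serving compute can sit, which is one argument for pushing workloads to the network edge. A back-of-envelope sketch, assuming a fibre refractive index of roughly 1.47:

```python
C_VACUUM_KM_PER_MS = 299.792                    # speed of light in free space
C_FIBRE_KM_PER_MS = C_VACUUM_KM_PER_MS / 1.47   # light is ~32% slower in glass

def max_server_distance_km(budget_ms, km_per_ms=C_FIBRE_KM_PER_MS):
    """Furthest the serving endpoint can be if the entire latency budget
    were spent on round-trip propagation alone."""
    return budget_ms * km_per_ms / 2             # halve for the round trip

print(round(max_server_distance_km(1.0)))        # ~102 km over fibre
```

In practice radio access, switching and processing consume most of the budget, so the usable radius is far smaller, hence the push to host latency-sensitive applications close to the base station.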
Video burden and opportunity
Before we get there, though, there are still obstacles to consuming video content on mobile.
In its recent report The Future of Mobile Video, analyst firm Cartesian states that these challenges include poor battery life, data charges and network performance. When it comes to watching linear broadcast TV, 71% of respondents to Cartesian’s survey, all industry professionals, said data charges are the biggest barrier, ahead of battery life (67%) and network performance.
“The surprise is how low video performance is for users whether on-demand or live,” says Jean-Marc Racine, SVP at Cartesian. “People appear prepared to compromise on quality while watching video on mobile.”
He adds, “Video can be considered both a burden and an opportunity for the operator. The US is leading the way with operators investing in video convergence through acquisition, such as AT&T buying DirecTV and Verizon acquiring AOL and Yahoo. We believe more European operators will launch zero-rating services, following the popularity of services that provide free video streaming such as T-Mobile US’s Binge On. But operators need to take a considered approach, because zero-rating video attracts regulatory scrutiny on net neutrality grounds.”
Another Cartesian finding is that in developing countries video is more likely to be viewed on mobile than any other device in the home. This is linked to the overall cost of pay TV and advances in network infrastructure, says Racine.
The majority of survey respondents in developed countries (55%) cited television as the number one device for video in the home with other devices well behind: smartphone (17%), PC/Laptop (15%), and tablet (13%). However, in developing countries smartphones were ranked as the number one device in the home for watching video with 35% of respondents, followed closely by television (34%), then PC/Laptop (19%) and tablet (11%).
There was unanimity that mobile video watching in the home will increase over the next five years, with no significant change in the role of television.
When asked about the top three barriers to watching video in the home, 63% of respondents highlighted the audio and video experience on phones. Battery life is also a challenge (44%) but only 15% said storage limits are a barrier, reflecting the trend towards streaming. Surprisingly, given the availability of Wi-Fi in the home, 31% said mobile data charges remain a factor.
“There is clearly an appetite for increasing the role smartphones play in consuming video both inside and outside the home,” says Racine. “Across the ecosystem, players are looking to operators to take the lead in growing this opportunity. We are seeing operators employing strategies to improve their networks and enhance mobile video. The winners will be the ones who offer the market the best deal first.”