Friday, 30 June 2017

MPEG starts work on immersive standard

Broadcast

International standards committee MPEG has begun work on a new standard for future immersive content applications.
http://www.broadcastnow.co.uk/techfacils/mpeg-starts-work-on-immersive-standard/5119549.article?blocktitle=Latest-News&contentID=1151
The development of ISO/IEC 23090, nicknamed MPEG-I, is split into five stages, with the ultimate aim of creating an entirely new codec by 2022.
The first step is to agree an Omnidirectional Media Application Format (OMAF) that will enable panoramic video with 2D and 3D audio and various degrees of 3D visual perception.
“For some MPEG members, OMAF is the number one activity, since there is an urgent need for a standard,” said Mary-Luc Champel, standards director and principal scientist at Technicolor. “The work is in draft form and the goal is to make it public by the end of this year.”
While OMAF will build on existing compression standard High Efficiency Video Coding (HEVC) and streaming protocol Dynamic Adaptive Streaming over HTTP (DASH), MPEG has called for video test material, including from plenoptic cameras and camera arrays, to build a new codec addressing Light Field. This technique captures all the light rays at every point in space, travelling in every direction.
“If data from a Light Field is known then views from all possible positions can be reconstructed, even with the same depth of focus, by combining individual light rays,” said Champel. “Multiview, free viewpoint and 360° video are subsampled versions of this. Due to the amount of data, a technological breakthrough [in compression] is expected.”
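Champel's point, that views can be reconstructed by combining individual light rays, is the basis of shift-and-add refocusing. A minimal sketch, assuming the light field is stored as a 4D NumPy array of sub-aperture images (the array layout, function name and `alpha` parameter are illustrative, not from any MPEG specification):

```python
import numpy as np

def refocus(lf, alpha):
    """Shift-and-add refocusing over a 4D light field.

    lf    : array of shape (U, V, H, W), sub-aperture images indexed
            by angular position (u, v).
    alpha : relative depth of the new focal plane; 1.0 keeps the
            original focal plane unchanged.
    """
    U, V, H, W = lf.shape
    cu, cv = (U - 1) / 2, (V - 1) / 2
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # Each sub-aperture image is shifted in proportion to its
            # angular offset, then all views are averaged together.
            dy = int(round((u - cu) * (1 - 1 / alpha)))
            dx = int(round((v - cv) * (1 - 1 / alpha)))
            out += np.roll(lf[u, v], shift=(dy, dx), axis=(0, 1))
    return out / (U * V)
```

With `alpha = 1.0` the shifts vanish and the result is simply the average of all views; other values synthesise a different depth of focus. That heavy redundancy between views is exactly what a light field codec would need to exploit.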

Thursday, 29 June 2017

35mm: the format that refused to die

RedShark News

Seven years ago it looked like 35mm was a dead format. In the digital era, though, it still seems to thrive: Kodak's recent announcement that it has re-opened a lab at the UK's Pinewood Studios - by the looks of it, to support the Star Wars franchise - is part of an increasing amount of background chatter that's becoming difficult to ignore.
https://www.redsharknews.com/production/item/4701-35mm-the-format-that-refuses-to-die

There was a time – let’s call it 2010 – when celluloid had passed its sell-by date and digital was set to rule. Kodak had manufactured its last roll of 35mm (before barely surviving bankruptcy in 2012); Deluxe and Technicolor merged then shuttered their labs around the world, and even Martin Scorsese was making stereo 3D (a trick not so easily managed with dual sprockets).
But die-hard cineastes have had a last hurrah. Kodak has just re-opened a lab at Pinewood Studios in a deal which will last until 2022. That’s not coincidentally the timeframe for Disney’s planned 10 Star Wars films of which JJ Abrams’ The Force Awakens and Rian Johnson’s The Last Jedi (Episode VIII) are 35mm shows (Rogue One went suitably rogue and shot digital). They are all camped in Buckinghamshire.
As is James Bond. Kodak has taken space in the Ken Adam building and will be a cert to handle dailies for 007’s next outing given the success DP Hoyte van Hoytema made with 35mm for Sam Mendes’ Spectre.
While the lament that electronics could never hope to capture the magic of the chemical process was always a bit whimsical, many cinematographers have found it hard to match the aesthetic qualities of negative in digital form.
Even where digital is used, DPs regularly turn to old-style optics and anamorphics from Cooke or Panavision to give their work an idiosyncratic or romantic look, often with a flare effect or bokeh which can’t quite be recreated without them.
The cost argument was never quite proven either. Shooting video may allow a production to shoot as much as it wants without needing to finance each reel, but there were ‘hidden’ costs to digital in post production from data wrangling to storage.
Most importantly, film was championed by directors like Christopher Nolan and Quentin Tarantino. So successful was Nolan’s original Batman Begins (2005) that there was no way Warner Bros. wasn’t going to acquiesce to the director shooting the rest of the franchise, and whatever else he wanted, on the format.
That includes Dunkirk, Nolan’s new epic which is not shot with any 35mm at all. Having lensed sequences of his previous films including Interstellar on IMAX 70mm, this time the entire film is shot on 65mm negative. Seventy percent of the film was shot on IMAX which is 65mm 15 perf. (1.43:1). The rest was shot on 65mm 5 perf (2.2:1) on Panavision cameras (the negative is 65mm, the print is 70mm). Each day the rushes were sent back to Fotokem in LA for processing. Travelling with Nolan during principal photography was a 70mm projector on which he could view dailies.
While such a cumbersome set-up is not for everyone, his and other directors’ insistence on retaining film, at least as a choice for acquisition, forced the studios to force Kodak’s hand. In 2015 the firm reversed its decision to stop making 35mm film and promised a limited supply of stock to Hollywood for the foreseeable future.
Over the last few years, the format’s renaissance has been remarkable. Boyhood, Interstellar, The Grand Budapest Hotel, Inherent Vice, The Imitation Game and Foxcatcher were all 2015 Oscar-nominated pictures photographed on film. 2016’s awards front-runners Carol, Bridge of Spies, The Hateful Eight, Joy, Black Mass, Son of Saul, The Big Short and Steve Jobs were also, in whole or in part, 16mm or 35mm originated.
And the choice of shooting on 35mm or Super 16mm film had a clear impact on the 2017 cinematography Oscar race including La La Land, Fences, Jackie, Nocturnal Animals, Loving and Hidden Figures.
More surprisingly, given their heavy VFX content, major releases Jurassic World, Spectre, Batman v Superman: Dawn of Justice, Suicide Squad, Jason Bourne, The Mummy and Life also originated on film.
Not without some justification, then, “movies captured on film are winning nominations and awards at a disproportionately high rate,” claims Steve Bellamy, president of Kodak Motion Picture and Entertainment. “Film benefits from the world’s greatest motion picture artists using it, but the world’s greatest motion picture artists also make better movies because they use film.”
While shooting film is no guarantee of quality, it does seem to add a certain ‘kite mark’ of auteur intent to the filmmakers behind it.
Nolan, for example, has managed to get Dunkirk released two days early in cinemas with 35mm or 70mm projectors. Perhaps this high-profile endorsement will give a shot in the arm to the art of physical film projection, even if only temporarily (120 years of cinema projection is about to be upended entirely by so-called direct view TV panels sold by Samsung and Sony).
In response to this resurgence, Kodak also recently acquired a film-processing lab in Atlanta, where film is being processed for The Walking Dead, and it has built an entirely new film lab in New York.

Nor is Kodak the only lab in town. Cinelab in Slough has been flying the flag throughout the period when everyone seemed to be deserting the format, making a living by processing the dailies of a few film projects that were being commissioned but mostly by offering an archiving service. 
Let’s not forget that film is the only medium proven to retain visual media for a century – and, unlike optical disc or solid-state variants, you can ‘read’ it simply by shining a light through it.
For now, though, film is very much alive.

MAM moves to the cloud

Broadcast

Broadcasters and vendors are reappraising the monolithic media asset management system with a modular approach more suited to cloud business and production workflows.
http://www.broadcastnow.co.uk/techfacils/mam-moves-to-the-cloud/5119274.article
There is a widely held perception – rarely discussed – within the broadcast industry that despite all the marketing hype, media asset management (MAM) has struggled to deliver the expected benefits of time, (budget) savings and functionality. Even MAM systems vendors acknowledge that some implementations have not been a success.
“MAM systems weren’t built to make money, but rather save money,” says Cantemo chief executive Parham Azimi.
“MAM systems streamline workflows, enabling companies to ingest video content or access archived content, find the right assets, work with that content, and distribute it. Making that process simpler ultimately saves time, and therefore money.
“It is much the same as any business process system, where the firm generates its revenue elsewhere, but the system can minimise time spent on administration, which is time that can be spent generating revenue.”
It has always been difficult to calculate the return on investment for MAM, in part because the term itself is too broad to give a blanket answer.
“Whereas a well-executed MAM solution can increase operational efficiency and enable new distribution models and revenues, the precise savings, either forecast or in retrospect, are often unclear,” says Jeremy Bancroft, director at consultancy Media Asset Capital.
MAM is about managing content, and specifically where it is. It’s the modern version of knowing which shelf the tape is on, and the processes for moving it around. Yet at a typical broadcaster, the scope of a MAM could include supporting transmission (TX); archive and library management; production, news or graphics operations; or any combination of the above.
“In TX, MAM or content preparation applications, it is relatively easy to set a metric to measure ROI – the number of man hours required to get one hour of broadcast content ready for transmission or distribution,” Bancroft says.
“This is measurable pre- and post-MAM implementation, and we would expect a 30-50% improvement in operational efficiency as the result of an expertly specified and implemented MAM solution.”
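Bancroft's TX metric is simple to compute. As a rough sketch (the man-hour figures below are invented for illustration, not drawn from any real implementation):

```python
def efficiency_gain(hours_before, hours_after):
    """Percentage reduction in man-hours needed to get one hour of
    content ready for transmission, measured pre- and post-MAM."""
    return (hours_before - hours_after) / hours_before * 100

# Hypothetical figures: 10 man-hours per broadcast hour before the
# MAM, 6 afterwards, i.e. a 40% improvement, within the 30-50%
# range Bancroft quotes.
gain = efficiency_gain(10, 6)
```

The point of the metric is precisely that both numbers are directly observable, before and after deployment, so the ROI claim can be audited rather than taken on trust.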
A production MAM might shorten production timescales and provide “real savings” in the production cycle, he suggests.
“Archive and news MAM are much more difficult when it comes to creating an ROI, but these solutions can significantly improve production quality by providing choice to producers.”
Assessing ROI
Even then, many MAM systems take a long time to prove. Bancroft, who has been involved with this sector for almost 20 years, says some systems have only really started to deliver in the past five to eight years.
Ian Brotherston, chief executive of media services business TVT, says countless case studies and customer testimonials endorse the position that a well-implemented MAM can boost productivity.
“However, an analysis of media-processing costs, which include MAM, equipment, storage, networking, accommodation and people – which is often the largest cost – will reveal that it is often more beneficial to move to a service model than to do it in-house, no matter which MAM you have deployed,” he says.
Niall Duffy, chief marketing officer at digital workflow specialist Virtual AI and a former head of IT workflow at Sony, goes further.
“The concept of MAM either enabling new revenues or cutting costs is mainly a fallacy,” he says. “A MAM is the necessary glue in any large-scale file-based workflow, not a strategic broadcast system. It does not streamline or cut out manual resource.”
Deployment of MAM has reduced tape costs, he says, but added IT storage costs, “and in total, probably increased storage costs” because people now make more use of content.
“MAM systems have reduced the headcount and costs associated with VTR handling or content operations, but that was down to file ingest and transcoding replacing tape duplication and lines recording,” he says. “In reality, good process and process enforcement deliver cost-cutting, not MAM systems.”
If a MAM has failed to deliver, the blame is placed as much at the feet of buyers as it is vendors. Duffy argues that most broadcasters treat MAM as a technology project, not process re-engineering, in part because they are not very good at process re-engineering projects.
“Unless a technology deployment is firmly rooted in a business benefits context, it will never deliver on any promises made or expected,” he adds.
“With realistic expectations and a willingness to remodel processes, MAM and related technologies can deliver substantial benefits, but they don’t do it all by themselves.”
Tim Burton, managing director of IT systems integrator Magenta Broadcast, agrees: “If a project has not gone well, it tends to be because the user didn’t realise what they needed, and the result is something designed by committee: either too expensive or too bespoke. You can spend a long time delivering it and then not hit anyone’s sweet spot.”
Bancroft says he’s seen many cases of broadcasters being too prescriptive at the proposal stage. This leads to the rejection of all proposals as they cannot possibly meet the stated ‘must have’ requirements, “leaving the customer with the only option of either changing the specification or building their own solution”.
Shifting to the cloud
If decisions to invest in MAM are hard to judge now, what are broadcasters and media owners to make of shifting the process to the cloud?
“The cloud changes the options for video content providers,” says Azimi. “For example, it changes the possibilities of scale-on-demand that organisations can have. It also changes the way people can approach where to manage their content and how much content can be managed on a single system.
“It affects how it will be rolled out to users and how it will impact the workflow within an organisation. Crucially, without an overwhelming IT infrastructure, broadcasters no longer need to become an IT company to implement a MAM system within the organisation.”
Some cloud-ready systems are, however, standard MAMs that have simply been virtualised and put in the cloud. According to Azimi, this causes several problems and will never perform the same as a system built natively for the cloud.
“Virtualised solutions cannot take as much advantage of the elasticity of cloud offerings, nor can they benefit fully from automatic switching to another system in the event of failure,” he says.
“It’s not just the MAM functions that need to be cloud-based,” stresses Bancroft. “The solution needs to take advantage of processing in the cloud for rendering, transcoding, QC checking and so on.”
Howard Twine, director of software strategy at MAM and storage vendor EditShare, believes there are too many software solutions that are not cloud ready.
“MAM vendors are scratching their heads,” he says. “Due to the broad nature of MAM, there are so many components provided by different vendors that need to talk to one another.
“This can be challenging enough in a static and sterile facility. Add to this the changing nature of ad hoc hosted compute instances and you have all the ingredients for the perfect disaster cocktail.”
It seems the key is to view MAM as just one part of the puzzle, which is integrated with other systems and able to orchestrate and automate processes across facilities and locations.
Erik Åhlin, co-founder of API-based MAM platform Vidispine, contends that MAM will be reduced to a “comparably irrelevant software category” unless vendors can turn it into a production-based software-as-a-service.
“Whatever any video platform as a service looks like, it must be something other than just installing software on cloud and paying per month,” he says.
“The media industry must have higher expectations than that. Imagine setting up a new niche channel in a few hours with no Capex, no staff, no infrastructure and no software to run – and then measure the cost on the same terms as revenue.”
THE BROADCASTER VIEW
“MAM has meant different things to different people, with the consequence that functions from workflow and ingest to storage management and content processing were all wrapped under one single monolithic system,” says Tom Griffiths, director of broadcast and distribution technology at ITV.
“Broadcasters have been guilty of wanting such a monolithic MAM without understanding the challenges this represents in terms of cost, complexity and integration.”
UKTV director of operations Sinead Greenaway agrees: “Broadcasters have a history of over-architecting MAM, almost treating it as a panacea for every content metadata and image problem the business faces. MAMs haven’t kept pace with the rate of change. They become legacy almost as soon as they are plumbed into an organisation.”
ITV has pursued a more refined model, selecting multiple ‘best of breed’ MAMs and gluing them together with code written in-house. “For some companies, the overhead may not be justifiable, but having a software development team keeps everything under our control,” says Griffiths.
“If we feel a certain MAM is no longer useful, or another might perform better, we have the ability to swap it out.”
ITV is taking this approach to the cloud. It is in a “transitional” and “active research” phase, in which some of its asset management remains on premises (such as news, where it uses Avid) and others (for content delivery) are increasingly run from data centres.
“Cloud makes a clear distinction between classic MAM products and new vendors that have built systems from scratch in the cloud,” Griffiths says.
“The [latter] tend to be much more focused around a specific business challenge like managing rushes in a remote production or work in progress workflows for post.”
The cloud makes it easier to break MAM into different components “for workflow or human decision-making, automation, management of assets and the content catalogue, and content processing”.
“Another thing cloud enables is a change in the economics of production and delivery,” he adds. “We’re moving away from traditional software licensing towards more of a pay-as-you go model.”
Instead of a ‘super MAM’, UKTV also sought a more modular approach, beginning with storage in the cloud and handpicking MAMs for the workflow.
“Avid Interplay works well for post-production,” says Greenaway. “This interfaces with the MAM, which [outsourced post-provider] The Farm uses.”
“We need more modular MAM tools that can cater for the vagaries of all workflows,” she adds.
“Yet there are structural problems as an industry with moving MAM to the cloud. In theory, we can put MAM in the cloud, but in itself it’s not doing anything until we know how our workflow will work.
“That in turn demands a rethink about common standards and security. It still feels that cloud workflow is nascent and the industry is struggling under the weight of service provision.”
Vidispine clients will be doing this before the end of the year at large scale. “Ultimately, cloud is how the chief financial officer steals the ‘media supply chain’ agenda from the chief technology officer,” Åhlin says.
The trend is that more tasks are moving from in-house to specialist service providers where MAM is just part of the service. Broadcasters and media owners don’t want – and, in many cases, can’t afford – the capital expenditure to build media processing operations or the operating expenditure to hire, train and retain the people needed to scale these tasks.
“The simple economies-of-scale argument suggests that MAM will ultimately be moved to service providers who will deliver a fixed cost per asset processed model,” says Brotherston.
“It’s not just the broadcasting industry. Looking at enterprises that are desperately trying to reduce data centre sprawl by virtualising and using software-as-a-service for tasks such as customer relationship management or enterprise resource planning – the move to an as-a-service model makes a lot of sense and MAMs will have to adapt accordingly.”

Gearing up for the HDR revolution

Broadcast

The volume of TV and film content being made available with High Dynamic Range (HDR) is expected to increase significantly in the coming months with the emphasis on HD back catalogues as well as original Ultra High Definition (UHD) programming.
http://www.broadcastnow.co.uk/techfacils/gearing-up-for-the-hdr-revolution/5119509.article?blocktitle=Latest-News&contentID=1151
The technology required to upscale or convert content originally shot in standard dynamic range (SDR) to HDR is now being incorporated into production, distribution and display equipment, allowing broadcasters to re-master archive HD programming for future distribution.
UHD sports channels such as BT Sport, which produced live coverage of the Champions League final in HDR, are likely to implement HDR first, but ABI Research expects HD HDR channels to begin airing in 2018.
“We would expect all the content on these channels to be delivered in HDR to ensure TVs don’t have to switch modes, which can result in visual glitches,” said ABI managing director and vice president Sam Rosen. “This can be achieved through colour up-sampling,” he added.
Some of the latest 4K TVs from Sony, Samsung and LG support such conversion. Technicolor has demonstrated up-sampling of sports content, as well as commercials, to show that many of the benefits of HDR can be achieved even with legacy content, to create HDR channels.
At the same time, French audiovisual lab B-com says it is in discussions with various manufacturers about integrating its conversion technology into cameras, encoders, switchers and TV sets. Harmonic, Embrionix and Intel have already demonstrated it and B-com hopes to have commercial agreements signed by IBC.
According to Nicolas Dallery, B-com marketing and sales director, “Tier 1 European broadcasters” are currently testing it. “Our solution deals with this transition period where many broadcasters are not willing to invest in new UHD HDR equipment because they only invested in HD a few years ago. They don’t have the capex to afford another huge investment. Our solution deals with legacy content and legacy equipment.”
B-com’s technology, which is compatible with the HDR standards Perceptual Quantizer (PQ) and Hybrid Log-Gamma (HLG), works in real-time for live production where HD cameras can be upscaled to HDR alongside UHD HDR feeds.
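For reference, HLG, one of the two HDR transfer standards B-com supports, is defined in ITU-R BT.2100. Its OETF (the curve mapping normalised linear scene light to a signal value) can be sketched as:

```python
import math

# HLG OETF constants from ITU-R BT.2100
A = 0.17883277
B = 1 - 4 * A                  # 0.28466892
C = 0.5 - A * math.log(4 * A)  # 0.55991073

def hlg_oetf(e):
    """Map linear scene light e in [0, 1] to an HLG signal in [0, 1]."""
    if e <= 1 / 12:
        return math.sqrt(3 * e)
    return A * math.log(12 * e - B) + C
```

The curve is square-root-like in the shadows and logarithmic in the highlights, which is what lets an HLG signal remain broadly watchable on an SDR display, easing exactly the transition-period problem Dallery describes.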
ABI predicts that around 60% of TV households in Western Europe will have HDR-capable TV sets by 2020. HDR TV shipments – including HD HDR sets – will reach 245 million units globally in 2022, it adds.

Tuesday, 27 June 2017

Sky Takes Stake in V-Nova

Streaming Media


The compression specialist hopes Sky's backing will not only raise the profile of its Perseus technology but also lead to an outcome similar to Amazon's swoop for Elemental.

Sky has bought a minority stake in V-Nova, the London-based company behind the Perseus compression technology.
The investment of £4.5 million ($5.7 million) is being made through Sky subsidiary Sky Italia but complements previous tech and startup investments made by the group including in cross-platform network Whistle Sports, online video aggregator Pluto.TV, U.S. ad tech firm Sharethrough, and in Elemental Technologies nine months prior to the encoding firm's $500 million purchase by Amazon in 2015.
Sky Italia began working with V-Nova in 2015 and has used Perseus for contribution links during coverage of football matches and for IPTV delivery.
"This transaction is important for us given Sky's very successful record with companies like Roku and Elemental," said Guido Meardi, CEO of V-Nova. "Statistically speaking we hope for good things."
A stake in V-Nova will enable Sky "to look at Perseus in a deeper way and to look for additional use cases," said Meardi. "Our work with Sky Italia has not gone unnoticed in the group. We can't say more, other than that the whole Sky group is looking at Perseus for future service applications."
Eutelsat is another minority shareholder in V-Nova.

Perseus 2, the latest version of its flagship codec, launched in April. No deployments have been announced.
A key selling point is that Perseus 2 can be used in combination with other 'base' codecs including HEVC, VP9, and AV1.
Encoding/decoding processing-power efficiency and visual quality are improved in Perseus 2, with 100Kbps cited as sufficient to deliver mobile video to all consumers, 1Mbps claimed as the benchmark for "monetizable full HD mobile video", and just 6Mbps for "high quality UHD movie streaming and 4K 360/VR immersive experiences at scale," according to the company.
V-Nova has also taken steps to integrate the codec into standard encoding/deployment technologies, including HTML5 playback and encoding with FFmpeg.
"Although Perseus Plus was originally created for 14-year-old chips using a mixture of hardware and software available in the device, we can run purely in software on current-generation mobile phone chips and not consume any more power than the 'hardware' versions in the devices," says Fabio Murra, SVP product & marketing. "We can of course also still use hardware blocks to further reduce power consumption."
P-link, the firm's encoder and decoder for remote production and contribution, has been deployed by Mediapro (for an El Clásico match in 2015), by Eutelsat (to contribute UHD links of UEFA Euro 2016 matches to Rai customers), and by Sky.
"P-link is unique because it allows you to combine a lot of feeds with dynamic frame-by-frame multiplexing to make exceptional use of bandwidth," says Murra. "For example, you can keep the programme feed at very high quality and all other camera feeds at a lower quality to make effective use of bandwidth.
"Even over 1 gigabit you can do remote production/contribution of UHD, which is science fiction with any other low-latency system. There are a lot of players very interested in this."
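Murra's allocation idea, keeping the programme feed at high quality while the remaining cameras share what is left, can be sketched statically (the real system reassigns the split frame by frame; the feed count and share below are invented for illustration):

```python
def split_bandwidth(total_mbps, n_feeds, programme_share=0.5):
    """Divide a contribution link between one high-quality programme
    feed and (n_feeds - 1) lower-quality camera feeds, returning the
    bitrate given to each in Mbps."""
    programme = total_mbps * programme_share
    per_camera = (total_mbps - programme) / (n_feeds - 1)
    return programme, per_camera

# Over a 1 Gbps link with six feeds: 500 Mbps goes to the programme
# feed and 100 Mbps to each of the five camera feeds.
prog, cam = split_bandwidth(1000, 6)
```

The dynamic version would recompute the shares per frame based on content complexity, which is where the claimed bandwidth advantage comes from.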
The company acquired the full global patent portfolio of video imaging experts Faroudja Enterprises in January this year.
While not part of any current deployment, Faroudja's pre-processor and post-processor technology will be fed into future use cases.
"This is a very important technology for our roadmap," says Meardi. "Faroudja's technology has already been demonstrated to provide a bitrate reduction of 35-50% over existing compression techniques."
Meardi says V-Nova is working with SMPTE to ensure P-link works with file formats IMF and MXF for nonlinear production workflows.
"Where productions shoot with multiple high-resolution cameras and require mathematically lossless quality, this can quickly build into expensive and cumbersome storage. P-link has a strong advantage in those contexts."
The company claims to complement rather than compete with the established codecs AVC/H.264 and HEVC. It will also work with MPEG and encourage the group to build its next-gen codec for immersive media, MPEG-I.
"We are extremely supportive of ITU and MPEG," says Meardi. "Perseus can work as a standalone format especially for distribution applications where it works alongside other video encapsulation formats. We don't want to reinvent the wheel of encryption, audio, metadata management and watermarking or to create another transport stream. We want to focus on compression. Our focus is on Perseus to meet the needs of media today but we are not standing still. The work of MPEG is necessary for future applications such as 6DoF, for which HEVC will not be enough."
He adds, "People underestimate the amount of work necessary to really create a codec that can connect billions of users and can be integrated with other platforms. It requires a lot of cash."

Thursday, 22 June 2017

The Landing to return post hub to landlord

Broadcast

MediaCityUK digital enterprise incubator The Landing is to return its post-production department to Peel Media following a revision of its business plan.
http://www.broadcastnow.co.uk/techfacils/the-landing-to-return-post-hub-to-landlord/5119351.article?blocktitle=Latest-News&contentID=1151
Peel Media, part of Peel Group, is the landlord of the Salford site and owner of Dock 10. One possibility is that Peel will now lease the facility back to Dock 10, though this has not been confirmed.
Dock 10 had run The Landing’s post department, which includes 10 Avid suites and Avid Isis storage, since 2013.
The Landing chief executive Jon Corner said: “Given where we are heading as a leading UK innovation centre for tech start-up, scale-up and growth, there is little strategic alignment in continuing to operate a post-production floor. It makes sense, therefore, to return that space to Peel.”
The Landing is backed by the Department for Business, Energy & Industrial Strategy and Salford City Council. It recently secured a second round of funding until 2019 from the European Regional Development Fund and is providing space for start-ups in user experience design, cloud and both virtual and augmented reality.
“This is all in line with our ambition to be one of the UK’s leading tech incubators,” said Corner. “In 2016 alone, we contributed £89.2m GVA to the UK digital economy.”
The post-production department at The Landing, referred to originally as a content production hub, was set up to provide freelancers and SMEs in the North with open-access editing facilities.
Corner said: “Our focus going forward will lean more towards supporting growth tech enterprise, founders and product innovation.”

Tuesday, 20 June 2017

IMAX now have serious rivals in the 'premium large format' sector

Screen Daily
The competition is fierce, with Dolby, Ymagis and exhibitors muscling in on the market.
Premium large format (PLF) is a growing part of the market, with well over 2,500 screens in place at the end of 2016, according to IHS Markit.
Exhibitors such as Vue (VueXtreme) and Odeon (iSense) have launched their own PLF brands so as to avoid paying overheads to market leader Imax (though the future of iSense must be in doubt given Imax’s $25.7m [£20m] deal with Odeon/UCI parent AMC to convert 25 new sites in Europe).
The core PLF proposition combines premium projection and sound systems with comfort seating and large screens (15 metres and above). More recently this has also extended to high dynamic range (HDR) and immersive audio. This space is set to grow further as CinemaNext, the exhibitor services unit of French group Ymagis, rolls out Sphera.
This concept unites a Dolby Atmos-based audio system with EclairColor HDR technology from Ymagis subsidiary Eclair and LED lighting for ambience (around seats and the foyer), but uniquely it does not require a large-dimensioned screen. Instead, CinemaNext is targeting exhibitors with small- to mid-sized screens who may feel they lack the marketing clout to compete with Imax or PLF rival Dolby Cinema. The first handful of European Sphera installations are expected to be announced at CineEurope.
“We think you can deliver a premium immersive experience in a small theatre if it’s well done,” explains Till Cussmann, SVP at CinemaNext. “Existing premium models are very expensive and limited to those exhibitors willing to revenue share. Many exhibitors don’t have the resources for revenue sharing and are not necessarily best equipped to market their own premium concept. There’s a wide base for a less expensive turnkey solution combined with marketing expertise.”
The market’s dominant HDR format is Dolby Cinema, sold by Dolby as part of a PLF package that includes Atmos and Christie laser projectors. This reportedly costs around $562,000 (€500,000) per screen. In contrast, EclairColor will cost “below $56,000 (€50,000) for a small room and $90,000 (€80,000) for a large room”, according to Jean Mizrahi, Ymagis president and CEO.
Separately, CinemaNext is promoting EclairColor as a mass-market opportunity for cinemas to present films in high dynamic range. Developed with Sony projectors, the software system also works with some Barco projectors, and other vendors are expected to add compatibility by year end. Exhibitors with existing projectors from these brands will be required to pay a small fee for a firmware upgrade.
Currently only Eclair in Paris is equipped to perform mastering in EclairHDR, which adds an extra $22,400 (€20,000) per film to the DCP cost, but the company aims to license the technology to post-production system manufacturers to give productions a wider range of mastering options.
The trick with any new format is to get content made in it. Francois Ozon’s Cannes Competition title Amant Double is one of 35 titles already mastered in the format. La La Land was the first studio title released in EclairColor. “From the studio’s perspective, it doesn’t make sense to provide content in any format while there are no screens,” says Mizrahi. “Soon, we will have enough screens to make it attractive for them.”
Present on more than 50 screens in France, Germany, Italy and Tunisia, and with a pilot install at London’s Curzon Soho, CinemaNext expects 80 screens by year end. With around 300 committed, mostly between AMC and Wanda, Dolby has installed more than 70 Dolby Cinema screens, and has released or announced 75 films to be formatted for the concept. “With video platforms like Netflix investing massively into content and HDR, we believe cinema can’t be left behind,” says Mizrahi. “High dynamic range is a must. The improvement in quality is huge and the consumer proposition compelling when the additional outlay for studios and exhibitors is very limited.”

Monday, 19 June 2017

Sports look to data to enrich the fan experience

KNect365 for TV Connect series
Hungry for more content, sports fans are being fed more and more data from athletes and the field of play, writes Adrian Pennington, but finding the right approach needs a good game plan 
The value of live for sport is still pre-eminent, but consumption in and around the event must now be considered 24/7, 365 days a year. The always-on combination of mobile devices and social media means there’s no let-up in demand, and sports rights holders need to cater for it.
“Producers are working out how to create more inventory for their clients – the federations and sports governing bodies – to distribute,” says Tim Godfrey, Partnerships Director – Sport at ITN Productions. “We are a multimedia sports production business meaning that we cater for the new digital fan, who has all sorts of different devices and platforms to follow sport on. We need to generate and present footage suitable for each OTT and social channel.”
One way of doing this is to deploy new production techniques based on IP networks to generate and publish more content more economically to more platforms. Broadcast production equipment vendor EVS reckons only 10% of the total multi-camera footage captured at the average Premiership soccer match actually makes it to air.
Using IP production technologies enables more of this media to be stored, transferred, editorialised, packaged and distributed than using traditional methods. Costs can be cut by doing it remotely, as ITN Productions did by managing from London the host broadcast of the IAAF World Relays held in the Bahamas.
Aside from making more use of the existing audio-visual resource, the other area where sports are looking to add value is to extract more data from the event itself.
“It’s clear that media are starting to use statistics as a powerful form of storytelling in digital and social and especially in visual form,” says Carlo De Marchis, chief product officer at sports production specialist Deltatre.
“Numbers are a great way to tell a story. For example, knowing that Lionel Messi’s second goal in El Clásico (end of April) was his 500th for Barcelona,” he says. “Translating numbers into graphics is another key.”
Visually, the line superimposed live on a swimming pool to help indicate who is in the lead or show distance and speed to world record times is now standard. “If you watch a swimming race without it you feel something is broken and missing in the presentation,” says De Marchis.

Motorsport telemetry

Some sports use of data is more advanced than others, in part because they may be more data-oriented in the first place. A prime example is motorsport, where telemetry from cars (speed, split times) in Formula One or World Rally has been available for on-screen presentation for several years. The data is selective – often delayed in the live stream or published in highlight analysis – to prevent competitive advantage during a race.
While on-field data is often outsourced to specialised agencies like Opta Sports (owned by sports agency Perform Group) or IBM, MLB Advanced Media (MLBAM), the digital wing of Major League Baseball, has been able to lead due to its control of media production for all 30 teams.
Some of its data comes from optical capture (cameras and graphic analysis systems), some from radar capture and some is extrapolated from these data samples. It is extremely granular with pitching measurements for perceived and actual ball velocity, and spin rate.
Metrics tracking runners between bases are honed to lead distance, acceleration, maximum speed and home-run trot. There’s even data illustrating the speed of the base runner’s first step and route efficiency.
MLBAM is also exploring how WiFi or Bluetooth beacons can detect fans at a ballpark wearing a smartwatch. If a pitcher throws a fastball to end the inning, the fan could potentially use the watch’s glance action to review the speed and path of the pitch.
The speed and trajectory of balls played out by predictive analysis software has long been integral to the very rules in sports like tennis and cricket – but fans couldn’t interact with or share it.
That’s changing as it is increasingly possible to get fans more immersed in the action and more engaged in the sport by using data as the hook.
When Formula E attempted to break into the motorsport market it did so with a disruptive ethos, in line with its green credentials in a sport dominated by petrol heads.
As a way to drive interaction with millennials in largely untapped motorsport markets like China, Formula E controversially introduced Fanboost, which lets the fanbase influence a race: drivers who win a social media poll gain the ability to make overtaking or defensive moves – a first for a mainstream sport.
While such developments may lack credibility in the eyes of hardcore fans, for a sport looking to build from scratch it makes sense. Sports targeting similar demographics, such as drone racing and e-sports (with which Formula E is also tied via a driver-versus-fan e-sport competition), are natural fits for social and athlete interaction.
Data gathering is now moving from machines and objects to athletes, where the questions become more ethical. Data about an athlete’s heart rate, breathing rate, stress levels, body temperature or G-force is potentially medically sensitive.

Human data

Currently, such data is deemed private and there are limitations on placing sensors on athletes in sports like athletics. However, this could soon change.
“We want to innovate in this area to provide a more in-depth view of an event than ever before,” says Godfrey. “Federations and governing bodies are keen to innovate too, since they recognise that from a fan’s perspective such data increases the sport’s accessibility.”
ITN Productions is testing a number of different “medical grade” sensors and devices and says it would assign a specialist to a future production to assist in the interpretation and presentation of the data on screen.
“It is crucial that human data is measured and used in the right way to ensure its accuracy,” says Godfrey. “That means using medical grade equipment not commercial off-the-shelf devices. We will employ experts, but, ultimately, we would want our graphics operators and producers trained to understand the data.”
Heart-rate monitors have been fitted to members of Formula E race teams including drivers and team bosses to see how their stress levels rise and fall during the crucial opening lap of an ePrix.
Similar data could be gathered from athletes lining up at the start of a 100-metre race, Godfrey suggests.
“Clearly, athletes have to be comfortable wearing the sensors and sharing certain data,” he stresses. “There are questions about how it can be used commercially. However, there is a loosening of the boundaries in a lot of sports and the tipping point is on the horizon.”
The introduction of the 5G mobile network standard from 2019/20 will enable an even closer tie between a live event and the audience. The South Korean city of PyeongChang hosts the 2018 Winter Games, where there are plans to demonstrate real-time Augmented Reality and 360-degree experiences for hundreds of spectators in the stadia. AR is arguably more interesting from a live event perspective since it can overlay information and graphics on top of a real-world view. Olympics rights holder Eurosport, owned by Discovery Communications, plans to introduce AR around its coverage of the 2020 Tokyo Games.

Data as revenue stream

Data can not only help immerse the fan more closely in a sport, it is also crucial to sports teams and their sponsors. “Without it, their spend is blind,” says Elliot Richardson, co-founder of online soccer network Dugout.
Sports are waking up to the potential value of largely untapped data to which they hold the rights.
“By making your data more sophisticated, it will add value and exclusivity and therefore something monetizable as a revenue stream,” says De Marchis.
Data culled from the field of play can be used to carve out new sponsorship opportunities yet still enrich the game for spectators/consumers. An example, from August 2016, was driven by Perform Group for Major League Soccer sponsor Audi.
During a celebrity versus press football game in New York, individual players’ performance ratings, calculated from the Opta data, were displayed electronically on the front of the players’ shirts. Player performances were also ranked and displayed live on a large pitch-side screen.
Closer to home, Deltatre has worked with UEFA to embed watermarks in the audio track of Champions League match coverage. The experiment enables UEFA to link match action to additional relevant information or sponsor-driven content on the second screen.
For example, a goal by Ronaldo might offer the viewer links to view his previous Champions League goals, or a call to action for an Adidas e-commerce promotion.
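The underlying idea can be illustrated with a toy sketch. To be clear, this is not Deltatre's actual system: the FSK scheme, tone frequencies and segment length below are purely hypothetical. A short event ID is encoded as faint near-ultrasonic tones mixed into the broadcast audio, and a second-screen app recovers it by comparing tone power per segment with the Goertzel algorithm.

```python
import math

RATE = 48000           # samples per second
SEG = 480              # 10 ms per bit -> 100 Hz frequency resolution
F0, F1 = 18000, 19000  # near-ultrasonic tones for bits 0 and 1 (illustrative choices)

def embed(bits, amplitude=0.02):
    """Generate a faint tone sequence encoding the bits (would be mixed into broadcast audio)."""
    samples = []
    for b in bits:
        f = F1 if b else F0
        samples += [amplitude * math.sin(2 * math.pi * f * n / RATE) for n in range(SEG)]
    return samples

def goertzel_power(samples, freq):
    """Signal power at a single frequency bin, via the Goertzel recurrence."""
    coeff = 2 * math.cos(2 * math.pi * freq / RATE)
    s1 = s2 = 0.0
    for x in samples:
        s0 = x + coeff * s1 - s2
        s2, s1 = s1, s0
    return s1 * s1 + s2 * s2 - coeff * s1 * s2

def detect(samples):
    """Recover the bit sequence by comparing tone power per 10 ms segment."""
    bits = []
    for i in range(0, len(samples), SEG):
        seg = samples[i:i + SEG]
        bits.append(1 if goertzel_power(seg, F1) > goertzel_power(seg, F0) else 0)
    return bits

event_id = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical payload, e.g. a goal event code
assert detect(embed(event_id)) == event_id
```

A production watermark would of course be far more robust (spread-spectrum, resistant to compression and room acoustics), but the round trip above captures the principle: inaudible data in the soundtrack, decoded by the device's microphone.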
“The need to convert users into customers is becoming an integral part of the online video offer for sports rights-holders,” says De Marchis.

Why new technology could be hurting the film business

Screen Daily

Is the pace and range of innovation now stretching the supply chain to breaking point?

http://www.screendaily.com/features/why-new-technology-could-be-hurting-the-film-business/5119128.article?blocktitle=LATEST-FILM-NEWS&contentID=40562

Digital conversion is now largely complete across the world’s 163,000 screens, and manufacturers, studios and exhibitors are able to bring to market an unprecedented array of technologies.
Creative advances such as higher frame rates (HFR), high dynamic range (HDR) and 4K are feeding through into the distribution workflow and the exhibition sector. The trend for higher contrast, needed for HDR, is driving a new series of high-contrast projectors, as well as laser-illuminated projection. Exhibitors are also evaluating more experimental and speculative technologies, from immersive audio systems to motion seating and panoramic formats - often in combination.
While new technology offers exhibitors greater choice, there are growing calls for a pause in the process to allow for education, standardisation, workflow rationalisation and technology assessment.
“The industry needs to keep a watchful eye on technology proliferation to ensure it is of benefit, not a source of tension and a reason for decline,” warns David Hancock, head of film and cinema technology at analyst IHS Markit and president of the European Digital Cinema Forum (EDCF).
The surfeit of technology is adding to the complexity of the logistical operation behind every release: a major film can now have more than 500 versions going out of a lab, with a range of audio, image, premium, experiential and language versions adding to the possible permutations.
Theatrical mastering, versioning and delivery services claim their systems can handle whatever is thrown at them, but it is also clear the pressure to meet release schedules is tightening. “We’re running as efficiently as we can but the current system is reaching a limit,” says Matthew Aspray, COO at Motion Picture Solutions, currently creating 350 versions for the international release of Despicable Me 3.

Individual exhibitors might receive only a fraction of this number, but it is not uncommon to see 10 different versions of one title (factoring in iterations for 2D, 3D, dubbing, subtitles, 5.1, 7.1 and Atmos audio, HDR and versions for different cinema A/V configurations). This can create potential bottlenecks in delivery to cinemas regardless of whether this is by satellite, electronic network or physical hard disc.
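The arithmetic behind those permutations is straightforward: each independent mastering choice multiplies the version count. A short sketch (the factor lists below are hypothetical, not any studio's actual release plan):

```python
from itertools import product

# Hypothetical mastering factors for one title; real release plans vary widely.
factors = {
    "image": ["2D", "3D"],
    "audio": ["5.1", "7.1", "Atmos"],
    "dynamic_range": ["SDR", "HDR"],
    "language": ["EN", "FR", "DE", "ES", "IT"],  # dub or subtitle tracks
}

# Cartesian product of all choices = every distinct DCP version.
versions = [dict(zip(factors, combo)) for combo in product(*factors.values())]
print(len(versions))  # 2 * 3 * 2 * 5 = 60 versions, before any PLF-specific variants
```

Add a few more axes (frame rate, aspect ratio, territory-specific cuts, premium formats) and the 300 to 500 versions cited above follow quickly.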
Digital has also encouraged filmmakers to tinker with visual-effects shots up to the eleventh hour of distribution. “We might be working on versions for reels three, four and five of a movie while reels one and two are still being finalised,” says Aspray. “When your time is already squeezed to make 300 versions, this only adds more pressure.”

Investment quandary

While the supply chain struggles to keep pace, exhibitors charged with making investment decisions have their work cut out. According to Phil Clapp, CEO of the UK Cinema Association (UKCA) and president of the International Union of Cinemas (UNIC), the lack of objective information makes it hard for exhibitors to ensure their investments lead to tangible returns.
“New technologies are coming to market dizzyingly fast,” he declares. “Ten years ago, cinema operators were mainly required to invest in the upkeep of their 35mm projector, consumables such as xenon bulbs and maybe 5.1 and 7.1 sound. Investment was fairly predictable and evenly spaced. Now, even larger operators are challenged to understand how to monetise or operationally manage all the options. The impact on smaller operators is even greater since they don’t have the capacity to engage in research.”
“There is a lot of confusion about new technology and certainly a lack of education,” agrees Oliver Pasch, sales director at Sony Digital Cinema Europe. “Manufacturers must and do try to help exhibitors understand the purpose of each technology.”
Meanwhile, says Tim Richards, CEO of Vue International, “There’s an element of back-to-basics with seat and sound.” Vue’s flagship in London’s Leicester Square is undergoing a $7.7m (£6m) refurbishment, including the installation of VIP seating and Dolby Atmos.
Studios have reportedly recognised the problem. “They’ve acknowledged they need to express in one way or another how they see the technical future of theatrical exhibition,” says Pasch. At the heart of this discussion is whether to replace the Digital Cinema Initiatives (DCI) specification, which is the standard on which digital conversion is built. Originally agreed by the studios in 2005, this documents the route to create a digital cinema package (DCP), including file compression and content encryption, as well as presentation specifics such as image brightness and ambient-light levels.

DCI 2.0?

The original DCI — known as InterOp — was “always intended to be a temporary measure”, says Clapp. “It’s increasingly stressed by the range of innovation.” Anticipating this, the industry is in the process of switching to an updated version known as SMPTE DCP (after the standards body that ratified it, the Society of Motion Picture & Television Engineers).
SMPTE DCP is built on InterOp but accommodates formats such as Dolby Atmos immersive audio and high frame rates. Indeed, films mastered with these attributes cannot be played back without the projection technology in cinemas also being updated to the SMPTE specifications.
SMPTE is being rolled out, but not at the pace the industry would like, meaning both InterOp and SMPTE versions need to be mastered for every market. “We’re not saying SMPTE will make life simpler but it should ease some anxiety,” Clapp says. “Distributors and exhibitors need greater confidence going forward that permutations of technology will not lead to a situation where technology ‘X’ played back with projector ‘Y’ does not work.”
Over and above this is a concern DCI itself may be reaching its sell-by date. “We are introducing a plethora of technologies not foreseen in the DCI,” says Hancock. “While new cinema systems and digital cinema mastering formats use the DCI as a baseline, we’re now seeing far more technologies coming in over the top that are not yet standardised.” For example, there is no open standard for the playback of immersive audio formats such as Barco Auro, DTSX and Dolby Atmos, and no standards covering the amount of digital noise or speckle in laser-screened projection.

Non-standard formats

Emerging LED screens throw another spanner in the works. “It doesn’t make sense to certify a laser projector or an LED screen with DCI in either InterOp or SMPTE form since it was never written with these technologies in mind,” argues Pasch.
The increase of premium large-format (PLF) screens adds further versioning requirements to the mix. After a studio’s localised vendor, such as MPS, Deluxe or Eikon, has worked on a title, companies such as Imax, China Film Giant Screen (formerly DMAX), EclairColor and Dolby have additional work to do in order to create versions specific to their systems. “Instead of streamlining the whole process of mastering, distribution and playback, there’s a risk of divergence between premium cinema and the rest, leading to a two-tier cinema industry,” warns Hancock. “That was what the original DCI specification was aimed at preventing.”
It is not clear exactly what is needed to prevent technology overload. Clapp calls for an attempt “to provide a more stable foundation to introduce all these tech changes”, while Pasch suggests it may take the form of “a recommended best practice” led by the studios in concert with manufacturers. “The European cinema sector, with its multitude of languages and diverse structure, faces particular issues,” says UNIC CEO Jan Runge. “To ensure interoperability and access to films for all types of cinemas, the development of a new generation of technology standards will be a strategic imperative for the industry in the future.”

Not everyone agrees there is an issue at all. “Exhibitors have run into problems in the past by being complacent and not trying and testing new technology or investing in infrastructure,” says Richards. “Embracing new technology is part of the world we live in and it pays to be half-a-step ahead.”

Thursday, 15 June 2017

Mahon to chair Foundry board after joining C4

Broadcast 
Alex Mahon will become non-executive chairman of Foundry when she succeeds David Abraham as boss of Channel 4 in the autumn.
Until then, Mahon will continue as chief executive of the Soho-based visual effects tools developer, and will be part of the team in charge of recruiting her successor.
Mahon joined Foundry in 2015 after three years as chief executive of Shine Group. She currently serves as a senior non-executive director of Ocado, a non-executive director of the Edinburgh International Television Festival charity and chair of the RTS Programme Awards.
Before joining C4, she will also oversee the pending launch of a beta version of Elara, a cloud-based platform for post-production that combines storage infrastructure with creative applications like Nuke, V-Ray and Houdini.
With Elara, users would set up a virtual studio where production data would live alongside required creative tools and a dynamic render farm. Access to tools would be via web browser, with no local software or hardware required.
Cloud rendering, scalable storage and project monitoring and analysis tools will also feature. Foundry is majority owned by London-based private equity firm HgCapital, which acquired the firm for £200m in 2015.

GMG to follow Input deal with further acquisitions

Broadcast
Gravity Media Group (GMG) is planning further acquisitions in the wake of its purchase of London-based sports producer Input Media.
GMG chief executive John Newton told Broadcast: “We are always looking to add value to our business by offering the latest and broadest range of technologies in as many geographic markets as possible.”
The multimillion-pound deal, announced last week, is the first major acquisition the Hertfordshire-headquartered group has made since investment management firm TowerBrook Capital Partners took a minority stake in the company last September to fund future growth.
Existing subsidiaries include facilities providers Gearhouse Broadcast and Hyperactive. GMG bought French RF links specialist ACTIS in 2015 and owns 50% of German camera firm Spidercam in the Australia and New Zealand region.
“The major driver for us wanting to purchase a company like Input Media was to shift from being just a facilities provider to being a turnkey provider of production services,” said Newton. “We are now able to offer a complete solution to clients who don’t have all the facilities in-house.”
In particular, the deal opens the door to new commercial models for emerging technologies such as remote production, with Input Media overseeing productions from its London HQ and Gearhouse handling the on-site facilities.
“The proliferation of IP-based hardware is changing sports production by allowing us to do more on a remote basis,” said Newton. “However, it won’t change overnight and remote production is not right in all cases.
“It can benefit productions that are spread over large distances, but makes less sense where you want to capture the festivities of a large occasion. If producers are stuck in a bunker, it can lead to a cookie-cutter process.”
The economics of IP production do not yet work, he added. “What you save on not sending people to a venue you spend on connectivity, so it becomes a zero-sum game. Only when the cost of bandwidth and IP equipment comes down will IP allow you to do more remote production and to change working practices.”
GMG partners with BSI to produce coverage of motorsport V8 Supercars including in-car telemetry and on-board cameras.
Newton revealed that the firm is preparing to debut a real-time 360-degree feed from a Spidercam.

Friday, 9 June 2017

Getting Ready for Video Over 5G: How Should the Industry Prepare?

Streaming Media

With operators at risk of a ‘build it and they will come' approach, can a converged video contribution and delivery network monetize 5G?
The lack of a business case for the fifth-generation cellular network belies the recent fulsome pronouncements made by operators and chip makers about its potential as a game-changer.
Nokia, for example, talks about innovating “the global nervous system” which it describes as “a seamless web of interconnected intelligence that underpins our digital lives.”
Intel has said 5G will “enable new experiences across a variety of industries and categories including automotive, virtual reality, artificial intelligence, homes, buildings, factories, cities and infrastructure.”
Ericsson sees no impediment to 5G’s success. “Since 5G networks are designed to operate with ‘slices’, allowing traffic to be segmented according to specific requirements such as latency or bandwidth constraints, 5G can usefully be ‘all things to all people’ without significant compromise,” says Giles Wilson, CTO, head of portfolio & architecture, solution area TV & media, at Ericsson. “This is one of its clear advantages.”
Operators, chipset developers, and handset makers are jostling to set new performance benchmarks and to deploy upgraded elements of the 4G standard in advance of a global agreement on the technology, regulation, and market for 5G.
“This is undoubtedly the elephant in the room,” says CCS Insight principal analyst, operators, Kester Mann. “For all the proofs of concept and ‘world first’ demonstrations, the mobile industry appears little closer to establishing solid business cases to justify the significant investment required.”
Intel is not alone in attempting to define the main industry verticals that operators will want to monetize. Missing from the list is media and entertainment (M&E), something that mobile operator EE – owned by U.K. telco BT – is keen to put front and center of the global agenda.
“We are pushing the industry in the direction of M&E as a vertical alongside automotive, broadband, manufacturing, and e-health,” says Matt Stagg, who is responsible for video and content technologies at EE.
The initial goal is to define what M&E means in this space. “It is about convergence,” explains Stagg. “5G is more than mobile. It encompasses the contribution and distribution of all media and entertainment.”
That ranges from IPTV and 4K/8K video to outside broadcast cameras, live as well as linear video, video on demand (VOD) and caching, the CDN and virtual/augmented reality.
“Longer-term it is about looking at the potential of 5G as replacement for DTT [digital terrestrial television],” says Stagg. “We are at an early stage of defining this, but we believe convergence means using the same technology for all TV and video delivery mechanisms rather than using different formats. Because you get less fragmentation, the [data] flows end to end and you can streamline the process. We also include radio (and audio) in this to an extent.”
While 8K will be included in the ultimate 5G specification, Stagg is dismissive of any near-term broadcast applications using the network.
“Even 4K is limited because content is limited,” he says. Far more important from a monetization point of view is the ability to offer more video particularly around live events. He points out that only around 10 percent of all video captured at a live event (such as a football match) is actually distributed. The transition to IP and remote production architectures combined with distribution over 5G will permit far more of the in-game assets—such as streams from player cams—to be monetized.
EE argues that the technology should be disconnected from the application and that unicast, multicast, and broadcast should underpin all verticals.
“Convergence is not just for streamlining video,” Stagg says. “You need to broadcast information to cars that tells the [automated system] when a traffic light is red, for example. Sending the same data to each car is inefficient and adds latency and demand on the network.”
Reinvigorated 4G
This work is already in progress as operators worldwide look to deploy and promote upgraded versions of 4G, elements of which will evolve into the 5G network.
“You can’t afford to focus on 5G without evolving 4G—the two are inseparable,” says Stagg. “You can’t underinvest in 4G and then leapfrog into 5G. It’s not like 3G to 4G, which were completely separate systems. 5G will be built on top of the other from a radio perspective.”
CCS Insight calls a reinvigorated 4G – variously marketed as 4.5G, 4.9G, LTE Advanced Pro, or Gigabit LTE – “incredibly disruptive.”
“Network operators see Gigabit LTE as an opportunity to extend the return on their investments in 4G networks,” says CCS Insight’s Ben Wood.
The analyst firm says video will represent the main monetization opportunity for telcos in serving more data over a ramped-up 4G. “Given consumers’ insatiable appetite for connectivity on the go, expect telcos to sway consumers to sign up to bigger bundles,” suggests CCS Insight’s vice president, multiplay and media, Paolo Pescatore.
How Should Publishers Prepare?
“Ultimately, most of the content today is developed for the big screen and then adapted for smaller screens,” Pescatore points out. “Expect most, if not all, content and media owners to continue with this approach. However, some content owners are thinking about developing solely for mobile screens. We don’t foresee any significant changes that need to be undertaken for 5G; not only is it still too early to say, but no changes have previously been made for 3G or 4G.”
The way in which we use devices and interact with our surroundings will shift, just as it has previously with 3G and 4G. According to Futuresource Consulting, the targeted download rate is 20Gbps and the upload rate is 10Gbps. This means we will have fibre-like performance available on-the-go.
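A quick back-of-envelope calculation shows what those target rates mean in practice (the 20GB movie size and the 50Mbps 4G rate are assumptions for illustration only):

```python
def download_seconds(size_gb, rate_gbps):
    """Time to move size_gb gigabytes at rate_gbps gigabits per second (8 bits per byte)."""
    return size_gb * 8 / rate_gbps

movie_gb = 20  # assumed size of a feature-length UHD download

print(download_seconds(movie_gb, 20))    # at the 20Gbps 5G target: 8 seconds
print(download_seconds(movie_gb, 0.05))  # at a typical 50Mbps 4G rate: ~3200 seconds (~53 min)
```

That jump, from the best part of an hour to a few seconds, is what "fibre-like performance on the go" amounts to for video.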
“From a content perspective, we could see UHD downloads, access and live sharing or streaming across multiple devices, and higher-quality file formats (i.e., super-high-quality audio),” says Futuresource market analyst Tristan Veale. “Mobile video publishers don’t have to be concerned with tailoring video to a mobile audience, rather they can be concerned with monetizing the content which they are producing for larger screens as they have the access speeds to deliver it.”
Ericsson also foresees that higher access bandwidths, IP, and more capable devices will leave video publishers with less need to prepare content specifically for mobile. “Mobile delivery generally now uses the same delivery protocols, technologies, and video profiles as generic OTT [over-the-top],” says Wilson. “As we move to 5G, we will see this become even more ubiquitous and video delivered over mobile will include full HD and UHD profiles.”
Pre-5G Upgrades and Standardization
On the radio side, the 4G upgrades include massive multiple-input, multiple-output (MIMO) antennas, a way of combining multiple antennas into devices to increase throughput dramatically without moving to 5G. Operators are also looking at carrier aggregation, another feature that can be implemented before the 5G standard is ratified. This combines multiple spectrum bands to give a single device more throughput and capacity.
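A rough sketch of why carrier aggregation, higher-order modulation and MIMO multiply together towards the "Gigabit LTE" figure. Every parameter value below is an illustrative assumption (a back-of-envelope model, not a measured result or the 3GPP capacity formula):

```python
def peak_rate_mbps(carriers_mhz, bits_per_symbol, mimo_layers, efficiency=0.75):
    """Very rough LTE peak-rate estimate.

    Assumes ~1 Msymbol/s of usable symbol rate per MHz of spectrum
    (15 kHz subcarriers, ~14 OFDM symbols per ms), scaled by modulation
    order, spatial layers, and an overhead/coding efficiency factor.
    """
    symbol_rate_msps = sum(carriers_mhz)  # aggregated carriers simply add
    return symbol_rate_msps * bits_per_symbol * mimo_layers * efficiency

# Three aggregated 20 MHz carriers, 256-QAM (8 bits/symbol), 4x4 MIMO
print(peak_rate_mbps([20, 20, 20], bits_per_symbol=8, mimo_layers=4))  # 1440.0 Mbps
```

Even with generous rounding, the model shows how the headline ~1Gbps claims come from stacking the three techniques rather than from any single one.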
Sixty-eight 4.5G networks were commercially deployed worldwide in 2016, and the number of networks deployed this year is expected to reach 120, according to Chinese handset brand Huawei.
Nokia, for one, has released a massive MIMO adaptive antenna as part of its 4.5G upgrade; it can boost speeds up to 1Gbps, which is the nominal 5G target. The antenna uses 3D beam-forming, where mobile signals are targeted directly to devices, rather than broadcast in all directions. 3D beamforming will form part of the 5G spec.
Early commercial rollout for 5G is expected mid-2020, the rough date that ITU Radiocommunication (ITU-R) is expected to ratify a standard.
As part of that process, the 3GPP (a collaboration between telco associations to make a globally applicable standard) is working toward standardization of a new access technology named 5G New Radio (NR). The first phase of 5G NR specifications—3GPP Release 15—is expected to be completed next year. The second part—Phase 2 Release 16—is expected to be completed in late 2019, allowing for commercial deployment from 2022 onwards.
U.S. chip maker Qualcomm feels the technology is at a point where there’s sufficient common ground to advance even these timeframes. It is working with a number of other companies including Nokia, AT&T, NTT DOCOMO, Vodafone, Ericsson, BT, Telstra, Korea Telecom, Intel, LG and Swisscom, Etisalat Group, Huawei, Sprint, Vivo, and Deutsche Telekom to support the acceleration of the 3GPP 5G NR standard.
Its proposal is to use “non-standalone” 5G NR signalling as part of 3GPP Release 15. This would adopt existing 4G LTE radio and core network technologies to advance large-scale trials and deployments from 2019, therefore making it less expensive, it is claimed, for operators to make the transition to 5G NR.
Huawei, which received the Outstanding Contribution for LTE Evolution to 5G award at Mobile World Congress in March, has made large-scale 5G NR field tests and 5G high- and low-frequency hybrid field tests. These apparently show that continuous coverage and super-ultra-large capacity can be satisfied simultaneously, and that a single-user peak of 25Gbps can be achieved. In addition, Huawei teamed with Deutsche Telekom to perform a millimeter-wave high-frequency test procedure and achieved a peak rate of 70Gbps—an industry first.
Fixed Wireless Cable Substitute
Qualcomm’s announcement coincides with its own development of a modem capable of supporting 2G, 3G, 4G, and 5G on a single chip.
Rival chip maker Intel is also making a big play for the 5G market after missing the boat on 4G LTE. Its new 5G modem incorporates 3GPP 5G NR—including low-latency frame structures, advanced channel coding, and massive MIMO. It says its goal is to support early trials “and to lay a foundation enabling accelerated development of products that will support the 3GPP NR specification and help drive global adoption of the 3GPP 5G standard.”
The idea is that 5G will first be delivered as fixed wireless access to trial venues and households, as a replacement for cable and fibre optic services.
“This will be extremely interesting for providing next-generation, two-way, IP-based TV services to a wider range of consumers whilst minimising capital spend/acquisition costs for the operators,” says Ericsson’s Giles Wilson.
After all, as Veale points out, “consumers of broadband data care little how the internet is connected to them but only that they get speed and reliability of service.”
Cable giant Liberty Global is dismissive. Keynoting Cable Congress, CTO Balan Nair said the idea made “no sense,” arguing that the use of higher frequencies is challenging and that significant investment would be required to make the technology viable.
Predictions for Rollout
Cisco’s latest Mobile Visual Networking Index forecasts that by 2021, 5G will account for 0.2 percent of connections (25 million), but 1.5 percent of total traffic. Cisco also estimates that by 2021, a 5G connection will generate nearly 30GB per month, which is 4.7 times more traffic than the average 4G connection and 10.7 times more traffic than the average 3G connection.
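Working backwards from Cisco’s multipliers, the implied per-connection monthly averages for the older generations are straightforward to derive (a quick illustrative calculation based on the figures quoted above, not Cisco’s published breakdown):

```python
# Back out the implied monthly averages per connection from the
# figures quoted above: 5G ~ 30GB/month, 4.7x the 4G average,
# 10.7x the 3G average.
gb_5g = 30.0
gb_4g = gb_5g / 4.7    # implied average 4G connection
gb_3g = gb_5g / 10.7   # implied average 3G connection
print(round(gb_4g, 1), round(gb_3g, 1))  # → 6.4 2.8
```

In other words, Cisco’s forecast puts the average 4G connection at roughly 6.4GB per month and the average 3G connection at roughly 2.8GB by 2021.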
According to analyst firm Ovum, more than 50 operators will be offering 5G services in close to 30 countries by the end of 2021. The majority of 5G subscriptions will be concentrated in a handful of markets, including the U.S., China, Japan, and South Korea.
The GSMA expects 5G to have 1.1 billion connections by 2025. Futuresource believes early rollouts will occur from 2019 (possibly late 2018). The pre-5G upgrades paving the way for 5G’s introduction will advance network bandwidth, flexibility, and capacity, and will see MNOs and OTT players able to provide faster services to more people and devices. Futuresource expects this to be particularly relevant in two ways: first, for the ways in which we access content on the go, and second, for the way connectivity will switch between devices, access technologies, and channels. “We’ll have a better indication of how services are planning when we see these implemented,” says Veale.
AT&T is rolling out 5G this summer, beginning in Austin and Indianapolis. These trials will showcase peak speeds of 400Mbps, still some way short of the target 5G speed of 1Gbps.
Verizon is planning to get 5G to households in 11 U.S. markets by the summer, again as a trial. Five of these networks are being built with Ericsson to demonstrate the feasibility of fixed wireless access.
In Belgium, service provider Proximus has apparently reached 70Gbps speeds—some 100x faster than 4G—in partnership with Huawei.
“The question is how a theoretical maximum measures up in practice,” cautions Declan Lonergan of 451 Research. “Tests in a controlled environment are one thing, but when you have multiple users using the same network with interference and high demand, you can expect bottlenecks in 5G as much as 4G.”
Arguably, it is extreme low latency rather than greater speed that is the headline feature of 5G. “That could become a key marketing tool for promoting 5G,” says Kester Mann, analyst at CCS Insight.
The first operators to launch are likely to be those in mature Asian markets—principally South Korea and Japan—and the U.S., finds CCS Insight. “Certainly providers in these territories are showing a greater urgency to be first to market and lead on 5G deployment,” says Mann. “In Europe, Germany and the U.K. are probably at the forefront of 5G. Deutsche Telekom comes across as the most upbeat and bullish about the technology. BT will seek to draw on its significant R&D, EE’s strong 4G investment, and its role in the academic research consortium linked to the University of Surrey.”
The U.K. government has voiced ambitions for the country “to be a world leader in 5G” and set aside £1 billion to trial 5G networks.
“If you follow the logic of converged networks, then anywhere with a greater installed fibre base will have a time-to-market advantage,” says Stagg. “South Korea will not lose its mobile leadership in a 5G world.” The country’s SK Telecom plans major 5G public trials for the 2018 Winter Olympic Games.
Pipe Dream?
In Europe, EE points to planning and policy (rather than spectrum) as the main obstacles to 5G growth. “It cannot remain this difficult to build the infrastructure. We need planning and policy at EU government level to recognise the flexibility which network operators require.”
While some efforts to launch early or ahead of standardisation are commendable, the lack of genuine use cases leads CCS Insight analyst Kester Mann to question whether the technology will be economically viable.
5G is the centrepiece of technology demos for companies like Ericsson, here showing off a 5G trial at Mobile World Congress.
“Operators appear little closer to identifying solid business models that will justify the huge investment required to purchase spectrum and deploy networks,” he argues. “Indeed, we are beginning to see a consensus that the ‘build it and they will come’ approach will ultimately prevail. Discussion that applications such as remote surgery could be a reason to deploy early networks suggests that the industry is getting worryingly ahead of itself.”
5G can be used to pipe video to connected cars—and Futuresource expects media services to cars to “bloom”—but the move to mass-scale autonomous cars, which also require 5G connectivity (with far larger bandwidth requirements), is a decade or more distant.
“Initially, personal devices will move beyond smartphones, and wearables will become more prominent,” says Veale. “Networks will adapt to demand and capacity, which will introduce new business models for consumers, B2B, and mission-critical offerings. Throw quantum computing into the mix, and the speeds of command and control and data processing that [it] could make possible, and it is not just devices that will dramatically change but human behaviour also.”
[This article appears in the Summer 2017 issue of Streaming Media Magazine European Edition as "Getting Ready for Video Over 5G."]