Friday, 30 January 2015

Grass Valley sees “clear advantage” as it pushes 6x ultra motion workflows into NAB


Sports Video Group
Grass Valley is pursuing a path of cooperation and competition with EVS as it attempts to challenge the Belgian company’s supremacy in slow-motion production. The strategy is spearheaded by the LDX XtremeSpeed camera, which can capture at six times real-time speed, in combination with the K2 Dyno S replay controller and K2 Summit 3G servers.
Having worked with EVS to integrate the LDX XS and Dyno into an EVS file-sharing environment, Grass Valley is also keen to highlight what it sees as clear advantages for operators in adopting a full suite of its systems for live studio or outdoor broadcasts.
“6x is the sweet spot for sports applications,” said Olaf Bahr, GV’s senior product marketing manager, in a webinar evangelising the systems ahead of NAB to an audience of potential buyers.
In comparing the Dyno’s specifications to the market-leading competition, Grass Valley noted that its kit supports two 6x Super Slow-motion channels in and two channels out, both capable of independent playlists with mix effects, where competing systems are not. While the input signals must be the same format, 1080i and 720p can be mixed on output.
“We can record three additional HD cameras on our server while recording one 6x Super Slow-motion: our competitors can only do one,” he stated. “We write our own code for keyframes but this is not available with other servers. We use keyframes to build playlists and there is no playlist for use with clips in other systems. Our 0-200% variable speed playback is smoother than other replay systems and this includes smooth playback at 17% and below [the value at 6x speed]. No other T-Bar controller is built for that. What’s more, this is all packed into 2RU, not 6RU.”
A basic LDX XS camera chain lists at $160k, and for this you get what Klaus Weber, senior product marketing manager for cameras, described as the “highest sensitivity and dynamic range on the market” and “the first ultra motion camera with a straightforward workflow.”
“The AnySpeed function of the Dyno enables full use of this dynamic range at any playback speed,” explained Bahr. “Other systems require the rendering of frames and playback at a fixed speed. Therefore neither full real-time speed nor super slow-motion speed can ever be exceeded. This is a major obstacle to telling a replay story in a timely manner.
“For every second of recorded content it takes six times as long to show the replay at 6x. With AnySpeed, operators need only show the desired part at 6x and then ramp up to higher speeds, including 100%.”
He illustrated this with an example of a defensive football play that occurs at one end of the field and leads to a scoring play at the other end. With Grass Valley’s system this can be shown, he suggested, by replaying the defensive action at 6x (17%), the long passes and quick runs at 90%, and then slowing back down to 6x for the assist and score.
“Using any other means would require nearly 40 seconds to show the entire play at 17%, and no producer would commit to that,” stated Bahr. “With AnySpeed the same play can be shown in a matter of seconds from a single camera. Smooth T-bar control over the video provides a consistent delivery of frames without jumping or stuttering.”
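To put numbers on Bahr’s claim: 6x capture plays back at roughly 17% of real time, so a play of around seven seconds takes about 40 seconds to replay in full slow motion. A quick sketch of the arithmetic (the segment durations are invented for illustration):

```python
# Illustrative arithmetic for the AnySpeed example above.
# Playback at "6x super slow motion" runs at 1/6 of real time, i.e. ~17%.
# The segment durations are invented for a ~7-second football play.

segments = [
    ("defensive action",   2.0, 1 / 6),   # (label, real seconds, playback rate)
    ("long pass and runs", 3.5, 0.90),
    ("assist and score",   1.5, 1 / 6),
]

total_real = sum(d for _, d, _ in segments)
fixed_slow = total_real / (1 / 6)                      # whole play held at 6x slow motion
anyspeed   = sum(d / rate for _, d, rate in segments)  # ramped playback

print(f"whole play at 1/6 speed : {fixed_slow:.1f} s")   # 42.0 s, i.e. "nearly 40 seconds"
print(f"ramped, AnySpeed-style  : {anyspeed:.1f} s")     # ~24.9 s
```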
LDX XtremeSpeed camera
The LDX XS camera, announced last April, already has rental-house customers in the US, UK, Ireland, Japan and the Netherlands, including VER in the US, Creative Video in the UK, Germany’s Top Vision and Broadcast Solutions of Singapore.
There were several technical nuts that Grass Valley engineering felt they had to crack to come up with the solution.
This included devising a new CMOS-based imager, the Xensium-FT, technology that is common to the entire LDX series, including the LDX HiSpeed 3x super slow-motion cameras. Ultra motion capture necessitates reduced exposure time, and light sensitivity has been a bugbear of previous systems.
“The imager offers the same high sensitivity and dynamic range in all progressive formats. That’s important if you look to future 4K UHD applications (which are progressive only),” Weber said.
Another limiting factor of CMOS imagers is the rolling shutter which, under low light conditions and fast pans or action, “will not deliver good results.” The LDX XS therefore sports a global shutter and records 360 frames a second.
This was achieved by adding two additional transistors to each pixel, making five in total per pixel. “We can separate the exposure period from the readout period so we can start and end exposure for all pixels at the same time and generate a global shutter,” explained Weber.
Noise is further reduced by reading each pixel twice – once at the beginning and again at the end of each exposure.
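Reading each pixel twice and differencing the reads is, in broad terms, the correlated double sampling technique used in many CMOS sensors; Grass Valley has not detailed its implementation, but a toy sketch (with invented noise figures) shows why it helps:

```python
import numpy as np

# Toy model: each pixel is read once at the start and once at the end of the
# exposure; the offset noise common to both reads cancels when differenced.
rng = np.random.default_rng(0)
pixels = 1_000_000
signal = 500.0                                    # charge accumulated during exposure

reset_offset = rng.normal(0, 50, pixels)          # offset present in both reads

def read_noise():
    return rng.normal(0, 5, pixels)               # independent noise per readout

read_start = reset_offset + read_noise()
read_end   = reset_offset + signal + read_noise()

dual_read = read_end - read_start                 # offset cancels out
print(np.std(read_end - signal))                  # ~50 : single read, offset dominates
print(np.std(dual_read - signal))                 # ~7  : dual read, only readout noise left
```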
Artifacts caused by flickering artificial light, which are exacerbated when cameras capture at six times the prevailing light frequency, have restricted ultra motion units to mainly daytime applications.
The LDX system employs AnyLightXtreme for claimed flicker-free Ultra Motion. An algorithm compares each pixel with that of two frames before and two frames afterwards, extracts information from the five different exposure times and adjusts the output accordingly. “It doesn’t 100% eliminate flicker but it will be non-visible in most cases,” said Weber.
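Grass Valley has not published the algorithm, but the general idea of correcting a frame from a five-frame temporal window can be sketched as follows (the modulation and correction strategy here are illustrative only):

```python
import numpy as np

def deflicker(frames: np.ndarray) -> np.ndarray:
    """Correct each frame's global brightness against a 5-frame window."""
    out = frames.astype(np.float64).copy()
    means = frames.reshape(len(frames), -1).mean(axis=1)
    for i in range(2, len(frames) - 2):
        target = means[i - 2:i + 3].mean()   # estimate from two frames either side
        out[i] *= target / means[i]          # rescale the centre frame towards it
    return out

# Synthetic ultra-motion clip whose global brightness oscillates frame to frame
# (a stand-in for mains-frequency flicker; the modulation is made up).
rng = np.random.default_rng(1)
base = rng.uniform(0.2, 0.8, (64, 64))
flicker = 1 + 0.3 * np.sin(np.arange(30) * 2 * np.pi / 6)
frames = base[None] * flicker[:, None, None]

corrected = deflicker(frames)
print(frames.mean(axis=(1, 2)).std(), corrected.mean(axis=(1, 2)).std())
```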
Perhaps the most important advance to Grass Valley’s cause is the immediate delivery of footage from the camera straight to output, without having to wait for the camera to transfer clips from an internal storage device. The LDX XS transfers acquired footage in real-time to an external server for instant on-air use. Recording and replay occurs simultaneously.
“Any time an application requested faster than 3x, a double-action workflow had to be used, which is a very time-consuming and expensive process,” explained Weber.
GV’s solution was to design a completely new transmission system using 10Gb fibre (XF Fiber TX) between the camera adapter and the camera base station (XCU). “It can send 6x images without any compression or additional latency over standard SMPTE fibre to a base station for output,” explained Weber. “You can now choose images in Ultra Motion but keep the live workflow.”
The camera is also designed for physical configuration into existing set-ups. The LDX range uses the same XCU cradle, which means users can swap camera position to Ultra Motion “in just a few seconds,” said Weber. A compact version of the LDX XS can be mounted in specialist positions such as on gimbals. The cameras can be run at 3x or normal speed if desired.
EVS integration is facilitated via an XT Access unit, which sits between the LDX camera feeds and/or Dyno and the XT2/XT3 servers. Folders can be created on the XT Access for pushing to EVS, and a similar workflow can occur in reverse, where pages created on the EVS server will be seen on the Dyno.
“There’s a level of working with EVS and we continue to have conversations with them,” explained Bahr. “We’d like to work with them more and I think they would like to work with us more.”

Blending real and virtual elements on set: part 1

The BroadcastBridge

https://www.thebroadcastbridge.com/content/entry/1258/blending-real-and-virtual-elements-on-set-part-1


Not so long ago Augmented Reality (AR) was considered a gimmick in studio presentation, but it is now becoming an integral part of many productions. AR elements are becoming part of the workflow, with integration into NRCS systems, triggered by the switcher or controlled by studio automation systems. Continuous improvements in computing technologies and camera tracking methods will also contribute to the future growth of AR usage during production. This three-part report looks at the challenges in working with real and virtual elements. For part one we turned to the insight of Orad Hi Tec Systems’ studio and tracking product manager, Haim Halperin.
A key challenge is to ensure that all the elements look natural and appear an inherent part of the hard set. “From a design point of view it is important to design the AR elements with the hard set in mind,” says Halperin. “From a technology standpoint, blending real and virtual elements requires high-performance rendering of the graphics while the cameras are tracking.
“One of the best approaches to designing with AR in mind is to create elements that look natural, visually appealing and work well with the set’s motif. As AR elements are immersed into the scene they are an excellent tool to captivate the audience.”
When does AR / VR not work?
Halperin: The short answer is when it's not properly planned. “To be effective, accurate tracking is essential,” says Halperin. “Care must be given to the virtual elements to ensure that they are properly placed within the studio, and that the elements themselves serve their purpose. It is also essential that the elements are designed in a way that they can best be used: transparent, realistic, multi-dimensional and so on. Orad's experience in the field enables us to ensure that the broadcaster maximizes their use of virtual/augmented reality.”
How do you best prepare talent to work with AR?
Halperin: The best way to prepare the talent is to include them in the process itself and to provide them with the capability to rehearse with the elements in the set, ensuring that the talent treats the virtual elements as part of the real set. In addition, by being part of the process, the talent can offer suggestions on additional elements, placement of the elements, and so on.
How important is lens calibration?
Halperin: Accurate lens calibration is essential for accurate placement of AR elements in the studio. As part of the lens calibration process, it is important to take into account lens distortion and lens aberration for precise placement. The Orad lens calibration process takes all those elements into account for both digital and analog lenses, 16x9 and 4x3 aspect ratios, and ENG and box lenses.

Transcoding Trends for 2015

Streaming Media

What are operators and MVPDs looking for from today's transcoding solutions? Automation, flexibility, and future-proofing are high on the list, and the old hardware vs. software debate continues.

http://www.streamingmediaglobal.com/Articles/Editorial/Featured-Articles/Transcoding-Trends-for-2015-101789.aspx


Ten years ago, it was perhaps idealistic but not entirely unreasonable to hope that the fragmentation in delivery codecs, formats, and DRM schema would be resolved by 2015. Instead, the landscape has gotten even more complex, as the number of connected devices and operating systems has proliferated.
So what are operators looking for when they make investment decisions to refresh their content transcoding capabilities? There are variables in product scope, content scope, financial considerations, and infrastructure capacity to consider. Here are the thoughts of select vendors in this space, gathered ahead of new product launches at NAB in April.

“Today's file interoperability isn't perfect, and every system will have to cope with toxic files, pathological bitstreams, missing assets, and a whole host of operational issues in an elegant fashion,” says Bruce Devlin, chief media scientist at Dalet Digital Media Systems. “They have to do this while simultaneously providing configurations for very low-level encoding and wrapping controls to make the files just right for the enormous range of non-standard delivery specifications that are out there in the wild.”


Emphasis on Automation

His prescription is a transcoding system that can create all of the output formats that an operator's new business model requires, but with a level of automation that allows them to do this without a huge increase in staff.
From a file-based perspective, operators require access to flexible resources, says Tony Jones, head of technology, TV compression, for Ericsson. While core processing can deal well with day-to-day processing requirements, he suggests, operators will also have times when they gain new libraries of content, and at these times Ericsson views the cloud and other pay-as-you-go options as more appealing to operators. “Businesses are increasingly looking to the cloud to manage peaks in content processing, and also to avoid the hefty opex costs tied to on-site provisioning,” says Jones.
For linear content, Ericsson believes operators’ perspectives have changed a great deal over the last twelve months. “A year ago operators were primarily concerned with getting services up and running; now operators are having to pay storage and peering costs every time that content is viewed, and encoding performance is starting to matter much more,” he says. “By reducing the bitrates for linear content or the size of the files captured, the CDN costs, peering costs, and storage costs can all be brought down. Now more than ever, operators are focused on ways to reduce costs per view.”
Time and time again, transcoder vendors argue that scalability, reliability, and future-proofed technologies are the keys to unlock operator wallets.
Flexibility typically has many parameters, outlines Chris Knowlton, streaming industry evangelist for Wowza Media Systems. “A transcode solution must be compatible with their existing infrastructure, integrate with their existing workflows, and be fully accessible via API to allow tailored management and automation.”
For Wowza, reliability includes predictable performance, round-the-clock operations, and redundancy for high-priority content. Scalability includes scaling up to take advantage of more powerful hardware, scaling out across geographies and both on-premises and cloud compute instances, and a licensing model that scales with usage.
“The solution also needs to be future-proof,” says Knowlton. “Most operators need support for transcoding and trans-rating into multi-bitrate H.264 video and AAC audio, which can be packaged into common traditional and adaptive streaming formats. For those in emerging markets with legacy technology in many devices, support for H.263 video is sometimes still a key requirement. With streaming codecs, formats, and devices in constant flux, operators can’t risk buying a solution today that won’t support their needs for tomorrow, such as HEVC video, 4K resolution, MPEG-DASH streaming, and whatever comes next. As technologies evolve, it’s much better to have a software-upgradeable solution that they know won’t need a truck roll for every update.”
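As an illustration of the multi-bitrate H.264/AAC output Knowlton describes, the sketch below defines a hypothetical rendition ladder and the kind of throughput-based selection an adaptive player makes; the renditions, bitrates and safety margin are assumptions, not a vendor recommendation:

```python
# A hypothetical multi-bitrate H.264/AAC ladder of the kind described above.
LADDER = [
    # (label, width, height, video kbps, audio kbps)
    ("240p",   426,  240,   400,  64),
    ("360p",   640,  360,   800,  96),
    ("720p",  1280,  720,  2500, 128),
    ("1080p", 1920, 1080,  5000, 128),
]

def pick_rendition(throughput_kbps: float, safety: float = 0.8):
    """Return the highest rendition whose total bitrate fits the measured throughput."""
    budget = throughput_kbps * safety
    usable = [r for r in LADDER if r[3] + r[4] <= budget]
    return usable[-1] if usable else LADDER[0]

print(pick_rendition(4000))   # -> ('720p', 1280, 720, 2500, 128)
```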
John Riske, director of product marketing for media at Brightcove, knows that operators will find an encoding bottleneck an unacceptable impediment to business growth. He says the buying criteria for operators should centre on solutions that scale automatically to meet demand, without any capacity planning or other operational friction.
“Another key to avoiding operational friction and expense is to avoid encoding errors and any manual intervention in the process,” says Riske. “Content providers also need instant access to the state-of-the art for video formats and codecs so that they can deploy content as quickly as possible. Formats like HLS and DASH are evolving rapidly, and content providers are looking for solutions that get them what they need in a timely manner, without added expense or a long upgrade cycle.”

HDMI Dongles, Adaptive Bitrates Create New Challenges

A companion trend that Ericsson raises is a rise in higher-end profiles, such as 720p50 or 720p60 full-frame-rate HD. This has been driven by consumer use of HDMI dongles (such as Chromecast or the Roku Streaming Stick) with 50- or 60-inch TVs, as well as the improving capabilities of high-end tablets.
“Use of high-resolution profiles on consumer TV screens means that image quality is becoming more important and, particularly when it comes to valuable content such as sports, the step from 25/30 frames per second to 50/60 frames per second makes a dramatic difference to the viewing experience,” says Ericsson's Jones.
He contends that as the number of people viewing content via adaptive bitrate (ABR) formats increases, the importance of multiscreen services to those consumers, and their expectations of them, increase. “As consumers start using the ABR service as their primary source of TV viewing on larger screens than mobile devices, we’re seeing expectations grow more demanding, and greater pressure put upon TV service providers than before,” says Jones. “This trend is particularly pronounced in the U.S., where use of HDMI dongles has penetrated the most.”
With ABR becoming a more common consumption method, particularly on big screens, greater emphasis is being placed on the video quality provided by encoding/transcoding solutions.
“Many operators are combining ABR and broadcast encoding/transcoding systems (as these are the most stable with respect to standards and configuration) and leveraging a separate ABR packaging and origin stage to manage the volatility of standards and devices on the consumption side,” observes Tom Lattie, VP, market management & development, Video Products at Harmonic.
A main preoccupation for transcode vendors at NAB 2015 will be multiscreen, as operators tug in opposite directions when it comes to hardware or software encoding and transcoding. It's a perennial theme, but two fundamental differences remain—hardware encoding has lower latency, and software encoding can more readily be tweaked for higher quality.
“For operators trying to reach any screen, a few hundred milliseconds of extra latency at the encoder is rarely an issue, while the need for quality increases as the video capabilities of our playback devices continue to improve,” contends Knowlton for Wowza.
He argues that software-based encoding is more flexible in three key ways. First, operators can continue to upgrade the encoding software as standards evolve and algorithms become more efficient, thus providing a level of future-proofing not typically available with hardware. Second, software typically provides much more granular control, allowing one to fine-tune the look or playback characteristics of the streams as needed. Third, software doesn’t tie you to a specific location, allowing you to spin up transcoder instances wherever you need them, whether on-premises or in the cloud.
Of course, there is a hybrid solution, which is to run transcoding software on computers containing video acceleration chipsets or GPUs from companies such as Intel and NVIDIA, respectively. Wowza sees significant improvements in these technologies making it possible to combine the benefits of software encoding with performance and quality similar to hardware encoders.
“With cloud infrastructure providers, such as Amazon Web Services, providing hardware-based video acceleration in some of their compute instances, it’s getting easier to get great transcoding results without buying hardware,” says Knowlton.

Thursday, 29 January 2015

The future of immersive TV


Broadcast
Virtual reality TV is edging closer, with producers, manufacturers and broadcasters trialling the technology for music, sport, natural history and even news.
Facebook’s $2bn (£1.3bn) purchase of Oculus last March returned the decades-old technology of virtual reality (VR) to the spotlight, but what has galvanised interest among film and TV producers is the recent release of Samsung Gear VR, co-developed with Oculus.
Gear VR and similar headsets, like Google Cardboard, turn smartphones into mobile VR devices for less than £150. VR consultant KZero estimates there will be 171 million VR users worldwide by 2018. Today’s 2 billion smartphone owners make for an enticing distribution prospect, but VR producers are itching for that breakthrough moment.
“It’s like waiting for DVD players to come out, having struck the DVDs,” says Atlantic Productions chief executive Anthony Geffen. “We’re waiting for devices to get our content out there on a regular basis, which in turn will drive the business model.”
Atlantic has transferred its expertise in stereo 3D production to the new medium, creating a series of VR shorts with distribution intended for VR headset brands.
Samsung, Oculus, Sony and Google all have their own VR content stores and are funding content.
Samsung, for example, has commissioned Red Bull Media House, music brand Boiler Room and The Walking Dead producer Skybound Entertainment to create VR films for its Milk VR platform.
Over the past two weeks, the Sundance Film Festival has shown nine VR films in a section devoted to innovative film-making. Oculus’s own film-making division, Story Studio, staffed by CG artists from ILM and Pixar, screened its animated short Lost. VR trailers have also been made to accompany studio releases including How To Train Your Dragon 2.

However, this is not yet a gold rush. In the current Wild West of experimentation, there are many technical and creative issues to solve, not least the type of content to which the intimate, interactive yet potentially claustrophobic format is best suited.
Sky is testing VR on a range of content including drama (Critical; Fortitude), comedy (Trollied) and entertainment (Got To Dance). Last year, it sent a team to the New Zealand set of The Hobbit to shoot a 90-second VR promo coinciding with the release of Peter Jackson’s latest film.
Involvement in virtual reality is handy PR for the broadcaster, which likes to be seen as a technology pioneer. For now though, it is playing down any suggestion that the tests are more than toes in the water.
“These are three- to five-minute pieces intended as workflow concepts,” explains Jens Christensen, founder and chief executive of Jaunt, the US producer and tech developer behind the tests. Sky has invested £400,000 in Jaunt and taken a seat on its board.
Jaunt’s work includes coverage of Sir Paul McCartney and Jack White concerts, WWII-themed The Mission and horror short Black Mass.
“All these are fairly short pieces,” says Christensen. “We don’t feel it’s time for long-form yet. This is a time to develop best practice.”
Jaunt’s aim is to scale up content production. “Our technology is now at a stage where we can record and produce VR experiences in a fairly quick turnaround – days not weeks,” he says.
Christensen dubs these experiences ‘cinematic VR’ to differentiate them from computer game VR, but there are strong parallels between the two, not least in the blurring of narrative with interactive storytelling. The catch is that conventional film and TV production skills don’t easily translate to the 360-degree medium.
“Every rule you take from film into VR has to be relearned the hard way,” argues Mike Woods, who runs Framestore’s VR Studio. “No one has established what those rules are. You can literally do anything to anyone when you’re in a virtual room with them.”
Among the conundrums are: should a VR story be told from a fly-on-the-wall – the way most filmed content is told today – or first-person point of view? To what extent should the user interact with the action? How is audio best used to direct attention to part of the scene? How do you bring other senses into play? And how can you prevent nausea?
“The basic film language that has been finely honed over decades won’t work in VR,” says Henry Cowling, creative director of UNIT9, which launched a VR division last autumn. “The popularity of VR for narrative content is going to live or fall on how we solve these storytelling problems.”
There are questions for actors too. Christensen likens the transition to that of theatre to cinema in the 1920s. “There was a period of overacting as performers adjusted to the intimacy of the camera,” he says. “With VR, the actor’s performance needs to be even more subtle, as the viewer is right there in the scene with them at all times.”
Woods says Framestore encourages its clients to break from thinking of VR as a marketing tool or second-screen support: “Our big goal is to make content that stands in its own right.”
The facility has established a 40-strong VR department on the back of expertise in real-time rendering engines for games, and has completed a dozen projects including trailers for Gravity, Game Of Thrones and Interstellar.
“We are used to a model where writers and directors come to us to solve issues, but we’re finding that model is being turned upside down,” says Woods. “There are no writers and directors who know more about VR than we do. We are technically in pole position to make these photorealist experiences.”
Framestore has even coined new job titles for core production skills. These include a virtual director of photography to advise on grading a scene “no matter where you choose to look”, and a UX (user experience) specialist. “They need to understand how people react to things and how narrative cues are best delivered,” says Woods.
Because VR relies so heavily on specialist technical skills – motion capture is another important building block – producers feel that working through the issues internally delivers the best results.
Atlantic fused 3D production with VFX wing Zoo to form Alchemy and even builds its own 360-degree camera rigs; Drive Worldwide, which acquired VR tech firm Figure Digital last June, is to launch VR-focused R&D lab Drive Technologies; and UNIT9’s ambitions include an expanded SFX capacity, 3D and motion-capture facilities.
“The cost of VR production is at least 50% above the norm,” says Drive chief executive Ben Fender, who believes it will be 18-24 months before VR will have any kind of deep impact. “Every project requires a different way of thinking from script to production pipeline."
Woods says VR production is not hugely different to conventional setups, albeit that capture is more time-consuming. “There is a lot of data to process but the workflow is not dissimilar and the mark-up is not astronomical,” he argues. A two-minute CGI spot is roughly equivalent to the cost of a two-minute VR experience.

While travel and nature programming seem obvious fits for VR’s immersive potential, Christensen suggests that news reporting too could be given fresh objectivity. “You could deliver a live feed from a VR camera in a particular hot spot, such as a riot, to give people the feeling of being present,” he suggests.
Development is already under way in this area. Project Syria by Nonny de la Peña, research fellow at the University of Southern California’s Annenberg School of Journalism, uses news footage and stills to recreate an attack in Aleppo to give viewers a sense of the effect of the conflict on those living there. “There are parallels with large format film-making,” says DJ Roller, co-founder of NextVR.
“You are not going to dive deep underwater or climb Everest, but VR can trick your senses to give you the experience of being there.”
For example, the company recently filmed a Coldplay concert to be viewed on the Samsung Gear VR headset, and US basketball league NBA has trialled NextVR’s live VR recording system, which is based on stereo 3D compression technology.
This week, the company staged what it claimed was the first live VR test broadcast when a reporter was ‘transported’ from Michigan to California’s Laguna Beach.
Roller claims to have had interest from sports bodies and broadcasters in the UK. “Live VR experiences can put the viewer in the front row,” he says.
Others are experimenting with putting rigs onto players. “We’re exploring ways of miniaturising the kit by using multiple mobile phone-type cameras,” says Fender.
During last year’s Commonwealth Games, BBC R&D live-streamed 360-degree video to Oculus Rift headgear as part of a series of tests that also included music performances.
Natural history could be next. “Live VR has particular challenges around synchronising the visual and audio elements and the sheer data rate,” says Graham Thomas, section leader, immersive & interactive content, BBC R&D. “The jury is out on the extent to which VR will become more than niche.”
From a consumer point of view, strapping hardware to one’s face may seem even more antisocial than wearing 3D glasses, but the ability to interact with friends within virtual worlds is considered key to mainstream adoption.
Christensen is not convinced that avatars – graphical representations of users displayed in-vision – are the answer, but says using voice to communicate and share a VR experience is already compelling in VR gaming. Many are looking to Oculus owner Facebook to integrate a social experience within VR.
“I would hesitate to say VR will eclipse 2D media, but it is a richer experience,” says Cowling. “If I were a TV producer, I’d be looking to learn the new skills. Even if all the productions for it are short-form, it will still be a significant platform and will still drive a significant audience.”


Atlantic: Testing Virtual Worlds

Atlantic Productions, already established as a 3D TV pioneer, has mined its back catalogue to create three VR documentary shorts.
 
CG sequences from David Attenborough’s First Life (2011) have been reworked to create a 12-minute, 360-degree 3D VR experience, a clip of which was used by Samsung as a promotional tool at CES 2015. Featuring a new Attenborough narration, the film places the viewer among creatures of the ocean millions of years ago.



Another short, Micro Monsters VR, revisits assets from Atlantic’s 2013 Attenborough-fronted 3D series and will be available on the Oculus platform. “Because the storytelling is already so strong in 3D, we decided to build a VR world that puts the viewer inside the undergrowth to interact with the insects,” says chief executive Anthony Geffen.
“That is the added value of this experience.” Note will be taken of audience response to the release, with a view to creating new VR episodes.
A third short, still in production, is based on the laser-scanning techniques used to create digital 3D models of the Egyptian pyramids, originally commissioned for National Geographic series Time Scanners.
“It’s a way of freeing up worlds and building on assets we already have,” says Geffen. The indie has conducted VR tests on sport and drama and shot footage from submersibles thousands of feet below the sea for a VR experience to accompany a TV special later this year.
“You don’t have to create an entire project in 360 degrees,” says Geffen. “2D linear content looks fantastic in a VR headset, and combining 2D footage with VR sequences is one way of extending the length of VR beyond short-form.”

Wednesday, 28 January 2015

On-Premises, In-Cloud or Hybrid – Where To Locate Transcoding

The Broadcast Bridge

https://www.thebroadcastbridge.com/content/entry/1230/where-to-locate-transcoding


One of the prevailing technology narratives across the industry is the wholesale transition of processes from dedicated machinery housed on site and managed internally to remotely located servers that offer greater scalability and efficiency. Yet the answer is rarely as simple as transitioning an entire process to the cloud, and nowhere is this more apparent than in transcoding/repurposing. Here, select transcoding vendors share advice with operators on where to place their investment. Broadly speaking they come down on the side of hybrid solutions, but there are nuances.
There are a number of variables to consider when deciding where to locate a transcode farm, such as live vs. file-based workflows, content security requirements, costs of transport links, and how the content will eventually be consumed.
Remi Beaudouin, product marketing director, ATEME: In our view, a hybrid approach is the right architecture. It is clear that off-premises processing will play a key role in the near future as it provides operators with (almost unlimited) capex-free processing resources, on demand. However, to say the entire linear video head-end will migrate right away to the cloud is a different matter. It remains pertinent for operators to keep some on-ground processing, both technically and financially speaking. Having said that, even in a hybrid approach, the on-premises processing workflow must evolve to provide more flexibility and scalability.
Tom Lattie, VP market management & development, video products, Harmonic: If a customer is working with high-quality studio masters, they may be required to house the assets in a secure location. On the other hand, if the source assets are mezzanine quality, it is more viable to transcode in the cloud as security requirements may be less restrictive and the file sizes will also be smaller, making the economics and time of transferring to the cloud more reasonable.
Furthermore, if you’re hosting the final assets in the cloud, the economics of cloud transcoding become stronger because the transcoding, repurposing, hosting, and final distribution can all be co-located in the cloud, minimizing transfer times and costs. Hybrid models for file transcoding are starting to gain traction, especially when the workflow orchestration is hosted in the cloud, with the processor-intensive transcoding workloads happening on-premises or in the cloud. With orchestration in the cloud, creating new workloads in different hosting environments and geographically different locations is easier as the “brains” are already in the cloud. Hosting transcoding on-premises has two primary benefits for the operator: it addresses asset security concerns, and it can improve the economics. The ability to capitalize and depreciate, over time, compute infrastructure for baseline transcoding workloads can lead to significant cost savings versus a pure OPEX cloud solution.
Live transcoding workflows are trickier to burst, or run in the cloud. Signal acquisition is still very much a brick and mortar proposition, and pumping these streams into a cloud-based operation can be expensive and bandwidth-intensive. For cloud transcoding to make sense, the delivery network must also be cloud-based. If the end point of distribution is across the Internet, then transcoding, packaging, and hosting in the cloud can make sense. If the end point of distribution is across the operator’s private network, then the economics of transferring into the cloud and back down need to be evaluated closely.
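Lattie’s rules of thumb can be pulled together into a toy routing decision; the fields and thresholds below are invented for illustration and do not describe any vendor’s product:

```python
from dataclasses import dataclass

@dataclass
class Job:
    live: bool               # live stream rather than a file
    studio_master: bool      # full-quality master with strict security requirements
    size_gb: float           # source asset size
    internet_delivery: bool  # end point is the open internet / a cloud-hosted origin

def place_transcode(job: Job) -> str:
    """Toy routing logic paraphrasing the trade-offs described above; an
    illustration, not any vendor's product behaviour."""
    if job.live and not job.internet_delivery:
        return "on-premises"   # signal acquisition and private-network delivery
    if job.studio_master:
        return "on-premises"   # security keeps masters in-house
    if job.internet_delivery or job.size_gb < 50:
        return "cloud"         # mezzanine files or cloud-hosted delivery
    return "hybrid"            # orchestrate in the cloud, place heavy work case by case

print(place_transcode(Job(live=False, studio_master=False, size_gb=20, internet_delivery=True)))
```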
Tony Jones, head of technology, TV compression, Ericsson: Even partial outages to the service carry serious consequences for operators, both from a financial perspective and from a reputation perspective. Most broadcast services need 99.995 to 99.999 percent service reliability – especially when their platform is advertising-funded – which data centres are currently not delivering. From a linear perspective, we are still seeing operators following an on-premises and capex model and taking a 24/7 transcoding approach. This gives operators peace of mind when it comes to performance of their service.
On the file-based side however, and also for services that cover one-off events, operators are looking to the cloud for flexibility and lower costs – but often trading off performance by doing so. These operators need to be constantly on their guard for problems with the service, and invest significant energy into planning for the eventuality that their service will go down.
For file-based operators the decision between cloud and on-premises ultimately comes down to the sort of commercial hardware operators are looking to use. Going down the route of getting a CPU server to run encoding from the cloud, for example, is more flexible, but becomes expensive if it is used heavily. Consideration also needs to be given to data traffic, since if the content is not processed in the same location where it is stored, then large media files must be moved across networks.
John Riske, director of product marketing for media, Brightcove: Certain operations are better suited to in-house processing – standards conversion, editing, colour management and other features are best handled before encoding for distribution. To the extent that encoding/re-encoding is part of these processes, it can make sense to keep encoding in-house.
However, if multiple renditions of a file are being created and are ultimately destined for some kind of distribution end point (CDN, broadcast head-ends, etc.), the reasons for keeping encoding on premises are decreasing every day. Cloud-based solutions offer many of the same features as on-premises solutions, and with rapidly decreasing costs for storage, bandwidth and compute, there’s an attractive ROI to moving more of the workload to the cloud. With video creative workflows still traversing the two alternatives above, you can start to see where a hybrid approach can be useful.
Chris Knowlton, streaming industry evangelist, Wowza Media Systems:  Any way you look at it, the decision to go with cloud vs. on-premises vs. hybrid really comes down to one thing: cost. Cloud transcoding allows an operator to get started with any-screen delivery faster, and with minimal capital expense. Over the long run, on-premises might be more cost-effective, assuming you have sufficient space, IT staffing, and a transcoding solution that continues to evolve with market changes through software updates. A hybrid solution provides the most flexibility, allowing fixed on-premises costs for predictable workloads and quick access to additional capacity in the cloud when needed.
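Knowlton’s point about cost can be made concrete with some back-of-envelope figures; every number in the sketch below is an illustrative assumption:

```python
# Back-of-envelope version of the cost trade-off; all figures are assumptions.
onprem_capex     = 120_000   # transcoding servers, amortised over three years
onprem_opex_year = 30_000    # power, space, share of IT staffing
cloud_per_hour   = 2.50      # cost per transcoding instance-hour

hours_per_year = 6_000       # expected annual transcoding workload

onprem_annual = onprem_capex / 3 + onprem_opex_year   # 70,000
cloud_annual  = cloud_per_hour * hours_per_year       # 15,000

print(f"on-premises: ${onprem_annual:,.0f}/year, cloud: ${cloud_annual:,.0f}/year")
# With these numbers the break-even point is 28,000 instance-hours a year;
# below it the cloud wins, above it owning the hardware starts to pay off.
```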

Dock10 to launch content management service for rushes

The Broadcast Bridge

https://www.thebroadcastbridge.com/content/entry/1228/dock10-to-launch-content-management-service-for-rushes


Dock10 is to launch a content management service for producers looking to store and reuse rushes for recurring series, downstream sales or new programming. Based on dual Avid Interplay asset management systems, one for content in production and one for assets in deeper archive, Dock10 believes it is better placed than some facilities to handle large volumes of rushes thanks to its investment in a data centre.
“We think rushes management is key for indies and the biggest revenue generator for post in this area,” said Emma Riley, head of business development, Dock10 in Manchester, UK. “We invested in asset management because we realise that all the rich metadata that is added in post can get lost when content is moved onto archive on LTO.”
As part of the service Dock10 will advise producers on what rushes to retain or discard. “A production company should consider the use of all content it archives, and this decision should drive the resulting archive codec and workflow, and even whether to keep it at all,” she explained. “We believe there is a balancing act between production requirements and company assets to be had by every indie.”
Pricing will be per production and based on both the amount of data and the duration of storage. “With rushes seen as assets rather than by-products, the old commercial model of pay-once-to-put-on-a-tape is beginning to fall down,” she explained. “At some point the media will need to be outgested or paid for as a company overhead rather than a production cost. Our proposal is to move the model from an overhead to a production cost.”
The facility will also arrange to store content in a form that guarantees playback on edit systems, arguing that “archiving rushes in their raw state gives you a few years of guaranteed interoperability at best before you could find yourself transcoding.”
She adds: “The DPP's AS-11 format is a good bet for HD master programmes, but for rushes we need to look for something different. Many of our productions are shot higher than HD, such as Cucumber for Channel 4 which was shot in 5K, so we would look to recommend their raw rushes be transcoded to a higher than HD codec to maintain the resolution they acquired them in. We’re looking at the Avid codecs of DNxHD or DNxHR, however with archive as a hot topic, we expect some rivals to be announced this year that might change our minds.”
The service, which goes live in April, will be open to indies who do not post programming through Dock10. Existing clients BBC Children's (such as Blue Peter) and Red Productions will be among the first users.

Monday, 26 January 2015

BBC Gets Audience Involved in Taster


Streaming Media

The BBC is embracing online feedback from audiences to help it shape new content formats by offering viewers a chance to comment on and rate works in progress.
Much like Amazon Studios’ pilot season, BBC Taster, which launched today, corrals a number of experimental ideas with the aim of letting audiences direct which go further in development.
The launch forms part of a wider strategy by the broadcaster to shift more of its emphasis online. This includes the forthcoming premiere of 25 shows on iPlayer, shows which would previously have been aired first on linear channels.
In a statement, Danny Cohen, director of BBC TV, said: “We’ve always pushed the boundaries with our creative programming and innovative digital services. These two worlds are coming together and opening up new possibilities for telling stories. BBC Taster will help ensure we stay at the forefront and better serve audiences now and in the future. It’s an exciting opportunity for our world-class production teams to take more creative risks online, try their ideas out and put them in the hands of audiences.”
The background, of course, is increased competition from Netflix and YouTube as a result of the proliferation of mobile devices and consequent viewing of content away from the TV.
Tablets are in 44% of UK households, and 61% of UK adults own a smartphone—including 88% of 16-to-24-year-olds, according to Ofcom. BBC Online receives as much traffic from mobiles as it does from PCs. Social media also plays an increasingly important role among youth audiences, notes the BBC, with 75% of British 16-to-24-year-olds claiming to use social networking sites.
At launch, BBC Taster features ideas from online service BBC iPlayer, News, Radio 1, Natural History, Drama, Current Affairs and Arts.
BBC iPlayer Shuffle, for example, is described as "a continuous video player" that will serve up content based on a viewer's previous searches; KneeJerk is a platform for improvised comedy taking its cue from the week's trending tweets, GIFs, and Vines; and Body Language offers to mash up poetry with video and stills.
Another idea, long in gestation at BBC Future Media, is to invite audiences of BBC World Service radio to crowd-curate its archive of 36,000 programmes by tagging content they either like or dislike. There is also interview material that has not made it to broadcast, and behind-the-scenes footage of a travel documentary series repackaged so viewers can better select which part they wish to view on demand.
Last week the BBC's governing body, the BBC Trust, launched a six-month evaluation of the closure of linear channel BBC3. The BBC previously announced its intention to move the entire channel online in order to save costs and, it argued, to better serve the channel's youth-oriented audience.
Independent production companies Hat Trick and Avalon have tabled a bid to take the channel off the BBC's hands and run it at an increased budget of £100m a year, up from the current £81m.
The corporation is also set to debut 30 to 50 hours of programming from BBC1, BBC2, BBC4, and its children’s channels on iPlayer in advance of transmission.
“The competitive environment for BBC iPlayer is set to become significantly more challenging as major global VOD providers such as Netflix and Amazon establish a foothold in the market,” said the BBC in a statement.
“In order to compete (and thus retain our ability to deliver public value) BBC iPlayer must continue to improve, taking fuller advantage of the characteristics of the internet.”

NEP Carries Super Bowl Responsibility Over The Line

Broadcast Bridge

https://www.thebroadcastbridge.com/content/entry/1210/nep-carries-super-bowl-responsibility-over-the-line

There are select sports which attract an audience far beyond that of the immediate game or fan base. The Super Bowl is one such event. And all things being equal, it is on track to exceed last year’s record 111.5 million viewers to become the most viewed telecast not only of 2015 but of all time in the US.
It's an event that dominates the US TV schedule and dwarfs rival events like the FIFA World Cup domestically. Super Bowl XLIX will also be aired in 128 countries in 25 languages, racking up further eyeballs, even if the event overseas does not rate on a par with a World Cup or Olympic Games. All in, it's a lot of responsibility for NBC and NEP, the host producer and host outside broadcaster respectively, of this year's pictures.
“The Super Bowl has become more of an event outside of what happens on the field because it's a time when family and friends get together,” says Mike Werteen, co-president of NEP's US mobile division. “In the business, everybody compares their production to the scale of the Super Bowl because it is the most watched event in the US.”
NEP may send more resources to other events, but no event entails quite the same amount of pressure as the Super Bowl. It will have 27 trailers at the University of Phoenix Stadium in Glendale, Arizona by the end of next week, servicing domestic clients NBC, CBS (which hosts in 2016 with NEP as partner), DirecTV and ESPN, as well as providing the host feed direct to NFL Films, which will transmit the pictures to international rights holders. NEP will have 70 engineers, drivers and specialist technical managers on-site.
All the domestic and world camera feeds for the pre-match, match and post-match coverage will be fed into a central distribution hub (NEP's ESU truck) and made available for any broadcaster in the compound to access. One key discussion is around slow-motion coverage. NBC, says Werteen, is seeking greater clarity in replays. “There are two ways that can be done; one is by increasing the frame rate, and there are a significant number of HFR cameras at the game. The second is to use 4K UHD and integrate that into the 1080i feed.
“Since every play is so important anything that enables the camera operator to stay wider so as to not miss any potential action, and then to have that available as a replay option, is important,” he explains.
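In practice the “4K into the 1080i feed” idea comes down to cutting a native HD window out of the UHD frame, so the operator can stay wide while the replay remains full resolution. A minimal sketch (ignoring interlacing and colour processing):

```python
import numpy as np

def cut_hd_window(uhd_frame: np.ndarray, centre_x: int, centre_y: int) -> np.ndarray:
    """Cut a native 1920x1080 region out of a 3840x2160 frame around the action."""
    h, w = 1080, 1920
    top  = int(np.clip(centre_y - h // 2, 0, uhd_frame.shape[0] - h))
    left = int(np.clip(centre_x - w // 2, 0, uhd_frame.shape[1] - w))
    return uhd_frame[top:top + h, left:left + w]

uhd = np.zeros((2160, 3840, 3), dtype=np.uint8)      # placeholder UHD frame
replay = cut_hd_window(uhd, centre_x=2900, centre_y=600)
print(replay.shape)                                  # (1080, 1920, 3)
```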
“The sheer technical requirement, the logistics, the number of credentialed staff has grown significantly over the past decade. The domestic game coverage, the world feed and the half-time show – each is an individual production and all have grown in size and scale.”
NEP parked its first truck at the venue on January 12. It is providing a range of services for the shoulder programming for a number of content aggregators through the week leading up to the game weekend.
Planning has understandably been in the works for some time. “It was not coincidental that we built NBC a new truck [ND1] for NBC's Sunday Night Football coverage in the year that they are in charge of the host feed for the Super Bowl,” says Werteen. “We knew this was going to be an enormous year for them. Discussions about the level of Super Bowl coverage began two years ago.”
NEP's new flagship ND1 was on the road last summer and is planned to take the lead in Super Bowl coverage for 2016 (CBS host) and 2017 (Fox host). It is actually designed as four interconnecting trailers which can be configured according to a client’s requirement. Its HD infrastructure is connected to a Grass Valley Kayenne 9 ME mixer, Calrec Artemis and Apollo audio consoles and 100 channels of EVS recording on XT3s.
If anything says the event is not about pure sport, it's the half-time extravaganza, where coverage turns from sport to live pop concert. NEP is in charge of output for this too. Advertisers are expected to pay $4 million for a 30-second commercial in the slots leading up to the half-time break.
It's an exceptionally busy period for the company. This week it is in Aspen covering the Winter X Games for ESPN, and it will send a further complement of trucks to Scottsdale, Arizona for CBS's coverage of the Phoenix Open PGA tournament, which runs simultaneously with the football game.

Friday, 23 January 2015

EE to test LTE Broadcast at Wembley; aims to make service ‘better than TV’


Sports Video Group
http://svgeurope.org/blog/headlines/ee-to-test-lte-broadcast-at-wembley-aims-to-make-service-better-than-tv/

Mobile operator EE is lining up its next major trial of LTE Broadcast for this summer, and it is set to feature 4K video for the first time. The operator expects commercial in-stadium deployments of the technology to begin from Q1 2016. SVG Europe understands which event is being targeted, although EE will only publicly refer to the trial at Wembley as occurring ‘during a high-profile sporting event’.
“We know LTE works,” says Matt Stagg, principal strategist, EE. “What we want to do now is lead on how good you can make the experience. We’re looking to do a very feature rich trial at Wembley that will stretch the use of the technology to enhance the experience of watching live sports.”
Features of the trial will include multiple replays, multiple camera angles and realtime statistics.
“We want to push the broadcast industry,” says Stagg. “We don’t see this technology as one which can just be used to do the same only more efficiently. We see LTE Broadcast as being able to improve on broadcast by making it a superb experience that people will see and will have to have.”
Further deployments at other large sports events, such as the Tour de France and the London Marathon, are anticipated by EE. By the end of the year EE also plans trials of linear TV over LTE Broadcast, and it predicts that by 2017 LTE Broadcast will start to be commonplace.
LTE Broadcast uses evolved Multimedia Broadcast Multicast Services (eMBMS) to stream live and on-demand data more efficiently over an LTE/4G network than current one-to-one methods.
EE believes sports delivered over eMBMS will be a big driver for 4K mobile video. Furthermore, it believes eMBMS is the answer to ensuring premium live content is delivered at an economical cost to serve for live sporting events.
Further benefits, according to EE, include reduction in network demand during live events, the protection of other users from slow speeds, and guaranteed quality of experience for both HD and UHD.
“Everybody at the moment has done in-stadia tests and this is a solid application,” he adds. “But we see a bigger use case in delivering a feature-rich experience outside stadia. The issue is that there are spikes in the network around the streaming of live sports events in certain areas. A Saturday afternoon around Premier League football stadia, for example, exhibits high spikes in demand which has the potential to mean that no one gets a really great experience.
“LTE Broadcast can alleviate that, but at the same time there’s no need to have a broadcast version of what you can get today. Let’s make it better than TV.”
This can be achieved because LTE Broadcast can be used alongside interactivity achieved through unicast. “You could live stream the game over LTE and offer bespoke fan commentary, for example, multiple replays, camera angles, and all manner of other interactivity via unicast,” Stagg explains.
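The appeal of eMBMS is easiest to see in aggregate numbers; in the rough sketch below the viewer counts and bitrates are invented:

```python
# Rough aggregate arithmetic; viewer counts and bitrates are assumptions.
viewers     = 20_000    # concurrent viewers in one broadcast area
stream_mbps = 4.0       # one HD live stream
extras_mbps = 0.5       # per-user unicast extras (replays, stats, chosen angles)

unicast_only     = viewers * stream_mbps                # every viewer gets a copy
broadcast_hybrid = stream_mbps + viewers * extras_mbps  # one shared copy + extras

print(f"unicast only      : {unicast_only:,.0f} Mbps")      # 80,000 Mbps
print(f"broadcast + extras: {broadcast_hybrid:,.0f} Mbps")  # 10,004 Mbps
```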
Maintaining continuity
While LTE Broadcast has been proven at numerous trials, predominantly in sports arenas over the past year, there are some aspects that need further work.
“We’re not sure if anybody has cracked the mobility aspect,” he suggests. “What happens when you move from a broadcast area to a non-broadcast area, and vice versa? We need to maintain continuity – to swap between unicast and broadcast. That’s one area we haven’t tested.”
Another test is dynamic switching of streams within a cell. “One of the main advantages over previous iterations of mobile TV is LTE Broadcast’s ability to be able to switch between unicast and broadcast based on configurations that the operator has set. Once you see multiple people watching a live stream of an event you should be able to switch from unicast to broadcast to enable unlimited capacity for viewing the same live stream.”
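The switching Stagg describes is essentially an operator-configured threshold per cell; a toy sketch of the logic (the threshold value is arbitrary):

```python
def delivery_mode(viewers_in_cell: int, stream_mbps: float,
                  switch_threshold: int = 3) -> str:
    """Serve one broadcast copy once enough users in a cell watch the same
    stream, otherwise stay unicast. The threshold stands in for an
    operator-configured setting."""
    if viewers_in_cell >= switch_threshold:
        return f"broadcast, cell load ~{stream_mbps:.1f} Mbps"
    return f"unicast, cell load ~{viewers_in_cell * stream_mbps:.1f} Mbps"

print(delivery_mode(2, 4.0))     # unicast, cell load ~8.0 Mbps
print(delivery_mode(150, 4.0))   # broadcast, cell load ~4.0 Mbps
```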
There are a number of unique factors that need to be taken into account when applying LTE Broadcast to ‘sports on the go’. Factors like the time of day and the popularity of the event change the demand profile, but these are easier to predict than the score or the popularity of a match during knockout stages.
Last year EE demonstrated LTE Broadcast outside a stadium, and in the UK for the first time, at the Commonwealth Games. The two-year collaboration involved the BBC, Samsung and Huawei. Three live HD streams were sent successfully to select handsets. The test included integration of Google Maps, where users could click on a Google Map of the Games venues and receive specific streams from each venue. A new iPlayer app was developed exclusively for the demo.
The Wembley test will not be public but will involve 20-30 handsets. Smartphones equipped with LTE Broadcast chips have yet to be brought to mass market.
“There is a clear need to drive device density to realise the true potential of the technology,” says Stagg.
The largest LTE Broadcast trial to date was conducted by China Telecom and Huawei around the Youth Olympic Games in Nanjing last summer, at which 18,000 Huawei eMBMS-enabled devices were handed out free of charge to Games volunteers and members of the athletes’ village.
Consumer handsets
Aside from equipment upgrades to the network’s core and possible software upgrades to cell towers, the main technical impediment to LTE Broadcast rollout is the lack of eMBMS-capable consumer handsets.
4G LTE coverage is rapidly being rolled out by Vodafone, EE and other operators and will be ubiquitous across the UK by the start of 2016.
“We wouldn’t do a ‘big bang’ and turn everything Broadcast LTE at once. We’d go to specific areas around stadia such as around Wimbledon where all these spikes are.”
He adds: “Sports content owners and rights holders will have the opportunity to monetise content, network operators can drive up-take of current services and use these new experiences to differentiate their network.
“With this organic growth we can move toward other value add services over LTE Broadcast like public service broadcasting, mass software downloads, digital signage, traffic alerts, weather warnings, machine to machine communication. Anything where multiple devices require a download of the same content is broadcastable.”
There are standardisation issues that need harmonising. EE champions the Mobile Video Alliance, which it co-founded last year (and which Stagg chairs), with the aim of identifying, developing and advocating technologies that harmonise the delivery of AV content to mobile.
EE and Wembley
Meanwhile, EE struck a sponsorship deal with Wembley last year to help deliver the FA’s plans to make the stadium the most advanced connected sports venue in the world by 2016. It has begun a vast network upgrade programme at the stadium to ensure every visitor through the gates can stay connected, even with a 90,000-capacity crowd on event days.
It will shortly switch on its 4G+ network providing speeds of up to 300Mbps and then begin trials of 400Mbps network technology that the operator claims will make Wembley the fastest connected stadium in the world.
The vision to make Wembley the world’s best connected stadium goes beyond installing a world-leading network. Advanced mobile ticketing solutions are already in place, with EE customers also able to use contactless mobile payments at various concessions stands, and work is also ongoing with regard to more service and engagement innovations.
From early 2015 the iconic arch will ‘act as the digital heartbeat of the stadium’, says EE. The arch’s LED lights will react to significant moments such as goals scored and crowd noise, and fans will even be able to control them via social media.