Monday, 27 November 2017

NEP UK discuss the future of live VR and remote production

SVG Europe

Taking stock of the latest technological changes as 2017 draws to a close, SVG Europe found NEP UK’s Keith Lane (VP Client Services) and Simon Moorhead (MD) in expansive mood as they considered the present and future potential of VR, the outlook for the OB business as a whole, and the company’s current progress with IP workflow implementation.

Live VR has yet to take off as some predicted it might. What will it take for it to really go commercial? 

Keith Lane: We have been involved in productions where live VR has been used to bring an immersive viewpoint to the consumer. It does have an audience, but it can feel limiting because of the restriction on the number of angles and the amount of content that can be captured.
The technology of capture and delivery has improved and continues to do so, and as a result VR is being considered more seriously by rights holders and production teams. It needs to be complementary to the main broadcast but also add an enhancement to the coverage: an insight that allows the consumer to really experience the event from a different perspective and with additional content.
The momentum is picking up but it will still take time to reach the masses. The funding for this experience is often led by consumer tech businesses. The next big step is for a broadcaster or rights holder to establish a financial case that makes live VR a regular product, and to work out what the consumer wants, values and is prepared to pay for.
We are now constructing two IP-based trucks, available in Spring 2018, based around SAM infrastructure, Arista routers, Calrec sound desks and SAM vision mixers. Our IP-based trucks will also be capable of ST 2110 operation.
We are also in the process of constructing a fully redundant IP-based fly pack system capable of supporting 15-plus galleries and providing feeds to other rights-holding broadcasters.

Simon Moorhead: VR is undoubtedly a broadcast concept that has piqued audience interest; for gamers, being immersed in a virtual setting is surely the ultimate in participation and experience. The ability to harness live VR in a broadcast context is one of the most transformational advances for the audience: being fully immersed, virtually, in a live event, be it sport, music or entertainment, brings the viewer directly into the venue and closer to the performance.
To achieve this augmentation of the live event requires a robust video delivery connection that can cope with the bandwidth requirement, ensuring minimal latency and consistency of video relay, and this is where the broadcaster is at the mercy of the viewers’ data connectivity or, more specifically, the ability of the internet service provider to super-serve this new broadcast medium.
For these reasons, I see that broadcasters need to feel confident in the delivery platform, in the potential for mass consumption by the available audience, and in viewers’ willingness to either pay for this enhanced experience or accept it being interrupted by hard-coded advertising.

Is the OB business becoming increasingly polarised between major sports events requiring large trucks and sizeable crew at venue and a more slimmed down and remote style of production over IP? 
Lane: The type and scale of production will dictate to a degree the choice of equipment and manpower on site; as we see more flexible connectivity being made available the opportunity to do ‘remote/at home’ productions will increase.
Costs are increasing and budgets are getting tighter, but the need to maintain or even improve production output against those constraints sees us adapting what we provide. Remote productions aren’t new; the decisions are often based on what and how much of your operation you ‘remote’ away from the event. Now that high-bandwidth, low-latency networks are becoming more attainable, this will be the key that unlocks the workflow further.
The opportunity for broadcasters to decentralise their operation, where different parts of the production can be in different regions, will be high on the agenda for many of them. We need to be flexible in our offering. The OB business needs to stay agile and provide different production environments to clients, whether that be large trucks and fly packs for large events or small, quickly deployable units that reduce time, increase efficiency and provide discrete sources that can be passed on to the IP network if required.

Moorhead: In my personal experience, I don’t think so. At NEP in the UK, we very much cater for our clients’ specific requirements and deliver their technical/operational workflow. We work with our clients to tailor a bespoke solution which meets both their editorial and budgetary objectives. What continues to be a theme is that clients like to have an OB truck and/or flypack broadcast solution coupled with the full complement of crew required to deliver their sporting event.
Where possible, programme makers tend to prefer being as close to the action as possible, ensuring that they faithfully translate and broadcast every facet of the event, maximising the peripheral benefits of location production to give the audience an in-depth experience which is both immersive and editorially superior to that of a ticket holder in the crowd.
There is no doubt that location production allows the programme maker and their production teams to cast the net wide and deep, enabling the almost seamless capture and dissemination of the build-up to the event, breaking news and behind-the-scenes developments.
Whilst there has been investigation and discussion with some clients around the concept of remote production workflows, this is always balanced against the potential loss of the qualitative benefits associated with location production, as discussed above. In some circumstances, where geography and the associated costs of logistics significantly impact the quantitative/budgetary objective, remote is certainly becoming a workflow that is being closely examined and considered. However, where this relates to domestic event coverage, clients don’t (currently) seem to be sufficiently swayed by any potential economies over the qualitative benefits of location production.

What impact is the delay in standardising IP and HDR having on your equipment purchasing decisions? 
Keith Lane: When we approached the rebuild of our trucks and fly packs after the fire, we didn’t feel IP was mature enough for us to commit to. We needed to keep the flexibility and functionality that SDI has given us. Admittedly the number of sources required for UHD did make the sheer scale of the infrastructure challenging but doable.
Now that the first few standards in SMPTE ST 2110 have been agreed, we can see the adoption being much easier. The ability to carry audio, video and data, and to separate them out, now allows us to provide a workflow that isn’t restricted and is far more scalable, particularly when we move to HDR and HFR.
The next step for the UHD productions we do will probably be HDR. There is still a limited amount of equipment out on the market, although this has improved over the last 12 months. Our decisions on equipment to support both HDR and SDR were difficult to make at the time; monitoring and conversion are examples. There is now a better understanding of the requirement to support HDR and to produce an acceptable SDR image as well, but that workflow and equipment choice will still be a challenge in 2018.

Finally, how has the appetite for UHD developed during 2017 and how do you anticipate this will change looking ahead? 
Lane: UHD has continued to be a big part of our operations. We continue to provide facilities for Sky Sports’ EPL coverage, which is 124 games a season. The appetite for UHD is increasing and we covered a few additional events this year. We took the trucks down to the Allianz Stadium in Turin to cover Juventus v AC Milan in March; this was a joint production between Sky Italia, Juventus and Serie A.
The demand at the moment is really driven by the broadcasters who have a means to transmit UHD. That said, we are seeing more requests from rights holders and production companies who need to consider UHD for broadcast partners that require it.

Sigma Cine Zooms capture the post-apocalypse

VMI

Content marketing for VMI http://vmi.tv/case-studies/article/135

To survive in a world destroyed by global warming, a family must make a heartbreaking decision which could split them apart forever. This is the dystopian vision of writer-director Gagan Singh, whose short film Famished is being prepared for entry into film festivals worldwide.

Singh made the film with cinematographer Erick Alcaraz as part of their graduate degree in Practical Filmmaking at the Met Film School located at Ealing Studios.

It’s their second collaboration following Serbor, a relationship drama with a witchcraft twist.
For the post-apocalyptic world of Famished, the filmmakers wanted as naturalistic a look as possible. They shot for four days on location in rural Hertfordshire, away from urban life, and used as much natural light as possible. That included the sole use of candlelight for night-time scenes.

“We had to get the sense of the space our characters are trapped in, which is a cabin in the woods, and tried to use as little light as possible to suggest an isolated world where all normal systems no longer work,” explains Alcaraz. “Consequently, I needed a very fast lens but one which would be open enough to achieve depth. Once we’d established the setting, I could frame mainly on the actors’ faces since the fate of this family is primarily told through dialogue.

“I went to VMI thinking we needed Zeiss Master Primes or Cookes but when I got talking to Barry [Bassett] he suggested trying a brand new set of Sigma T1.5 Cine Primes and pairing them with the latest 4K Canon C700 camcorder which had also just arrived.”

The Sigma T1.5 Cine Primes and Sigma Cine Zooms are among the latest optics available from VMI. They are available in native PL, E or Canon EF mounts and are engineered for 4K with very high resolution and high contrast – perfect for the project’s 4K low-light recording.

“I found the look of the Sigmas amazing, to be honest. The lens is quite fast, the aperture opens all the way to F1.5 and it has a very quick response focus-wise. The lens is not very heavy, which is quite helpful bearing in mind the weight of the camera plus all the extra accessories. This enabled us to achieve a faster production schedule, which is so important when you are working on a tight budget. Also, the curvature of the lens gives a very nice contrast to the image, helping to enhance the colours and shaping the faces beautifully. So working with candlelight was the best decision I could’ve taken. I will definitely use them again.”

Alcaraz felt the C700 performed fairly well in low light. “I think this camera is in the same range as the Alexa. You need to give them both an extra push in the darkest areas in order to get the best out of the camera,” he says. “The camera needs to get levels of information in order not to crunch the blacks.”

When it comes to the camera’s ergonomics, Alcaraz found a perfect fit for the job. “The viewfinder is one of the best ones I’ve ever worked with. It’s so sharp and comfortable for the eye, as well as giving you full access to the menu of the camera without needing to change your position while shooting. The camera gets a bit heavy over time, but not as much as the ALEXA Classic in comparison. Also, the shoulder mount that the camera has rigged is a lifesaver. Very comfortable to use, with a great grip to it.”

The finished film will run around 11-12 minutes, suitable for film festival entry. Alcaraz is also in charge of postproduction, editing in Premiere and grading in Resolve.

The Met’s BA in Practical Filmmaking course prepares the next generation of smart screen creatives. It builds the creative and technical skills essential to succeed in today’s film and TV industries, with professional tutors and high-calibre guest speakers who teach the fundamentals of storytelling, production and screen business skills.

The course makes a virtue of real-life experience where aspiring filmmakers can build an attractive portfolio and put their knowledge into practice by working with external companies and creating projects for industry clients.

“Every time over the past year that I’ve needed equipment I’ve gone to VMI, not only because they have an amazing catalogue of camera gear but because the experience is not just about renting kit,” says Alcaraz. “Barry and the other guys are always there to give advice. They take the time to listen to what I need and will make suggestions instead of trying to make the rental bigger. It’s that care and attention which I value.”

Famished will not only help Singh and Alcaraz complete their degree; they also hope to use it as a calling card for a higher-budget, feature-length production.

VMI wish them the very greatest success.

Thursday, 23 November 2017

Metadata workflows: One format to rule them all

InBroadcast
IMF is coming to broadcast. What does this mean for post workflows, and will it achieve the simplicity it intends?
By April next year there will be a new file format to juggle with. This one is intended to be the one format to rule them all. SMPTE is working with the Digital Production Partnership (DPP) on joint development of an Interoperable Master Format (IMF) specification for broadcast and online.
The original SMPTE standard, ST 2067, developed in 2012, dealt with file-based interchange of finished multi-version audio-visual works. It covered multi-language requirements, including subtitles and closed captions, all of which can be handled within one large IMP – the Interoperable Master Package.
The DPP then took the lead in drawing up an IMF for broadcast, in co-operation with the North American Broadcasters Association (NABA) and the EBU.
For broadcasters there are two primary use cases: incoming, meaning buying content masters for further compliance processing; and outgoing, which is sales mastering. The goal is to implement a system that addresses the myriad metadata requirements of television and OTT while fitting into broadcasters’ sizable existing archives of content.
However, the jury seems to be out on how successful the original IMF has been. Paul Mardling, VP of Strategy at Piksel, says “extremely successful”, but others are less sure.
“The original vision for IMF was that movie studios would use it as the primary format for archiving content and delivering it to the supply chain,” says Dominic Jackson, Product Manager, Enterprise Products at Telestream. “So far there may be some archiving going on, but the use of IMF as a ‘live’ delivery format isn’t really here yet. We are mostly at the point where studios are distributing ‘test’ packages to ensure that recipients can handle them correctly.”
Dalet’s Chief Media Scientist, Bruce Devlin, also claims take-up within the Hollywood community. However, there is still a missing unifying aspect to IMF. He cites two major components behind this inability to truly make IMF a mass-market format.
“Each studio still has their own IMF delivery standards, both input and output specifications,” he says. “The broader content creative and delivery community feels that IMF is really more like 5-6 different flavours of a similar standard, since they have to make IMF Flavour A for Netflix, IMF Flavour B for a major studio and IMF Flavour C to feed their finishing/transcoding tool.”
The second missing piece, he feels, is the lack of tools able to handle IMF. “Componentized media workflows like IMF are very powerful and drastically simplify operations, but they are very complex in the back end. It requires a good platform and adapted management tools to enable simple, cost-effective solutions to ingest, manage, search/find/retrieve and transform IMF for the necessary workflows. And very few platforms have developed the data model and toolset.”
Such tools are expected to move out of testing into actual use cases and production over the first half of 2018. As they do, there shouldn’t be too much difference between the core workflows for IMF version 1 and the broadcast/online version but there are changes.
The principal difference is the need to transport ad break information to support stitching of assets from outside of the IMF container at playout.
“This opens the possibility of creative teams defining different break patterns and cut points in the asset depending on the required ad payload and playout length requirement,” explains Mardling. “There’s also the need to include the relationships between assets in different IMF containers to allow series and episode information to be transmitted within the container.
“There are effectively two approaches to IMF in the workflow – early or late stitching,” he says. “In an early stitching scenario the broadcaster flattens the IMF file to the required formats on receipt. This allows the rest of the broadcast workflow to remain more or less unchanged. In a late stitching scenario the IMF is processed through the workflow and in effect any edits are simply reflected in an updated CPL. Flattening does not need to occur until final playout, potentially on the fly.”
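To make the distinction concrete, here is a minimal sketch of what a late-stitch compliance edit amounts to, using a toy Python model of a CPL rather than the real IMF XML schema or any vendor’s API (all class and function names are purely illustrative):

    # Toy model of an IMF composition, for illustration only.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Segment:
        asset_id: str        # references a video/audio track file held in the package
        in_point: float      # seconds
        out_point: float

    @dataclass
    class CPL:               # Composition Playlist: the 'recipe'
        title: str
        segments: List[Segment] = field(default_factory=list)

    def late_stitch_edit(cpl: CPL, cut_start: float, cut_end: float) -> CPL:
        """A compliance cut as metadata only: trim the offending span out of the
        playlist. The essence files are untouched; flattening waits until playout."""
        kept = []
        for seg in cpl.segments:
            if seg.out_point <= cut_start or seg.in_point >= cut_end:
                kept.append(seg)                                  # unaffected segment
            else:
                if seg.in_point < cut_start:                      # keep the part before the cut
                    kept.append(Segment(seg.asset_id, seg.in_point, cut_start))
                if seg.out_point > cut_end:                       # keep the part after the cut
                    kept.append(Segment(seg.asset_id, cut_end, seg.out_point))
        return CPL(cpl.title + " (compliance v2)", kept)

Early stitching, by contrast, would render the updated composition to a flat file on receipt and feed that file into the unchanged downstream broadcast chain.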
Any new version of IMF needs to be backwards compatible with current implementations. However, the majority of broadcast archives currently consist of flattened files, often in multiple, difficult-to-track versions. According to Mardling, the principal issue will be the requirement to update tooling to work with IMF, and the risk that rendering technologies may become obsolete.
“I’d expect groups like the DPP to work on defining standard metadata sets for specific markets or groups of broadcasters,” says Jackson. “These metadata sets will also need to be extensible to include specific metadata requirements for individual broadcasters.”
If IMF for broadcast is going to work, broadcasters and the kit suppliers will be required to introduce new ways of handling files.
“Most of the existing tools that broadcasters use are not necessarily very IMF friendly,” says Jackson. “The asset management systems, transcoders, edit systems and the like are all likely going to need some upgrade to deal with IMF successfully. Just the implications of the fact that IMF is a group of files rather than a single one are going to be significant for broadcast supply chain, and that’s just the start of it.”
Codec issues
Deciding on a primary codec for the format seems too thorny an issue to be sorted. “ProRes in IMF seems to be here to stay but I doubt that will meet everyone’s needs,” says Jackson. “Codec choice is too challenging to agree on.”
Mardling believes the issue is a storm in a teacup: “As with other similar standards, shims will be required to ensure interoperability but this should not be a major impediment to adoption.”
Archive
The codec issue arises when broadcasters need to know that IMF will be backwards compatible with their archive.
“Given that IMF itself is a standardized format, and self-referential toward the essence storage, the main concern broadcasters should have with their archive is codec choice,” says Devlin. “Will my chosen codec be usable 10-30 years in the future? You would be hard pressed to agree that any codec designed for archive, beyond JPEG-2000, would answer that question as a yes.
“So, you have a trade off that occurs in broadcast archives that is different from studio archives. Whereas studio archives are generally more concerned with long term preservation, repackaging and resale, broadcast archives are generally more concerned with enabling fast turnaround workflows, with long term preservation a secondary concern.”
Additionally, for archival, IMF provides a mechanism for embedding or linking private metadata into compositions. This allows ‘helper’ metadata to be included in the archiving process for applications such as re-creating MAM records in business continuity applications. Devlin says IMF looks very attractive for archive applications and there is a special application App#4 for cinema archiving. That is currently being used as a template for gathering TV archive requirements.
Automation
One of the chief benefits of adopting IMF for broadcast/online is the promise of more automated (and therefore more efficient) workflows. Mardling cites the example of a current affairs programme containing content that needs to be cut or edited at the last minute for legal or compliance reasons. Rather than returning to the edit suite to produce multiple new versions, a simple change to a CPL can update multiple versions ready for playout.
Jackson remains unconvinced. “The one area where it brings obvious efficiency is for a broadcaster who has a need to archive multiple versions of a piece of content,” he suggests.
His suspicion is that many, possibly most, broadcasters will not adopt broad usage of IMF. “There is a small but vocal group of proponents, but I suspect ultimately that many will see IMF as introducing more issues than it solves.”

Related IMF (version 1) technologies
Ownzones Media Network has released Ownzones Connect, a platform which ingests, categorizes and stores multiple media files in one location in the cloud. It also includes dynamic creation of video jobs based on templates, and the introduction of smart agents for proactive metadata optimisation.
Version 5.9 of Rohde & Schwarz’s CLIPSTER offers a complete workflow, from mastering and versioning to merging and refining IMF packages. CLIPSTER can be used to arrange the various video and audio tracks, to create the master package, to generate any number of versions and to merge or split the packages.
Prime Focus Technologies supports IMF within CLEAR, including an IMF Player that provides the ability to preview, playback, review and distribute over a streaming proxy. This enables collaboration and decision making in the workflow using proxies without having to necessarily access the original IMF package in high-res each time a CPL has to be played back.
Telestream’s support for IMF is in its media processing product Vantage. It can ingest an IMF CPL as a master source input to create all appropriate outputs, and can create single-segment IMF Master Packages as an output.
Today, IMF packages are becoming simpler to create and interchange, but the lack of a fixed file or folder naming scheme is making management of IMF - especially with tens and hundreds of supplemental packages per title - more complex.
“Implementing a MAM and Orchestration platform such as Dalet Galaxy will allow you to scale and industrialize your operation and break the many IMF versioning workflows with greater ease and accuracy, and at a much lower cost of production,” suggests Devlin.
A testament to the veracity and robustness of the solution is recent Netflix compliance approval awarded to the Dalet AmberFin transcoding platform. This means that producers and facilities needing to create IMF packages for their Netflix targeted content can use Dalet IMF technology to get the job done.
Sidebar: IMF recap
Built on the proven success of formats such as DCP (Digital Cinema Package), IMF is a media packaging format that streamlines the assembly of multiple versions of a title for any downstream distribution. Instead of a single master file – like a QuickTime or XDCAM file – all IMF audio, video and text components are contained within the package as individual assets. IMF components are individually referenced by an editorial Composition Playlist (CPL). As an analogy, the assets are the ingredients and the CPL is the recipe.
An IMF can contain unlimited CPLs, each representing a unique combination of the files contained in the package – or cuts of a programme.
Instead of hundreds of separate versions of a master, users can create one IMP (Interoperable Master Package) with all the various media elements (soundtracks, subtitles, graphics, technical metadata etc.) and endless variations described by the CPLs.
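To make the ingredients-and-recipe analogy concrete, here is a toy sketch (illustrative Python only, not the actual IMF XML) of one package carrying several versions without duplicating any essence:

    # Illustrative only: one package, many versions, no duplicated essence.
    imp = {
        "assets": {                    # the 'ingredients', stored once
            "video_main": "video_2398.mxf",
            "audio_en":   "audio_en_51.mxf",
            "audio_fr":   "audio_fr_51.mxf",
            "subs_de":    "subs_de_imsc.xml",
        },
        "cpls": {                      # the 'recipes', one per version
            "uk_broadcast": ["video_main", "audio_en"],
            "fr_broadcast": ["video_main", "audio_fr"],
            "de_ott":       ["video_main", "audio_en", "subs_de"],
        },
    }

    # A new territory or cut is just another entry under "cpls";
    # no new flattened master has to be rendered or stored.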
The IMF format also includes an Output Playlist (OPL). The OPL holds the IMF package’s technical instructions and can incorporate additional modifications like sizing, audio channel mixing and transcoding profiles. Additional OPL functionality for broadcast is still in development, with a range of companies, including Prime Focus Technologies, working to create the specification and tools.
Essential workflow steps for IMF for broadcast
First, broadcasters (and their suppliers) have to start by re-architecting their media handling around the essential components of video, audio, captions/subtitles and metadata, rather than around an interleaved asset.
“Componentized media enables all the nice versioning and space saving capabilities of IMF, but does require that you have the ability to match up the components during your WIP phase to associate them back to the project at hand,” explains Devlin.
“In an interleaved media workflow, as has been the norm in broadcast since the dawn of the file-based age, your main challenge is just passing a single file around between disparate processes, users and, possibly, external vendors. This can be accomplished via watch folders, file acceleration services and even a sneakernet of USB keys (as in, ‘Joe, go run this USB key over to edit bay 2’), generally without the need for a higher-level management system, since the single file can be fairly self-descriptive and tracked through a linear process.”
In a componentized media workflow, Devlin says, broadcasters have the same challenge of connecting processes, but also the need for a management system that can manage multiple files in a project, assign those files out to parallel media stages, and reassemble them for IMF packaging and downstream transformations.
“Finally, IMF workflows will nearly always be implemented using some sort of orchestration engine. To create an IMF composition, you need to obtain metadata from IMF Track Files, metadata from business systems and metadata from MAMs and put them all in a CPL. This aggregation of metadata from multiple sources needs to be flexible and be subject to automatic QC. This can be done at scale using an orchestration system, but would be painful if performed manually.”
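A hedged sketch of what that aggregation-plus-QC step could look like, using purely hypothetical field names rather than any real MAM or orchestration API:

    # Hypothetical orchestration step: gather metadata from several sources,
    # run basic automatic QC, and only then assemble the CPL record.
    def build_cpl_record(track_file_meta, business_meta, mam_meta):
        record = {
            "edit_rate": track_file_meta.get("edit_rate"),   # e.g. "25/1"
            "duration":  track_file_meta.get("duration"),    # in frames
            "title":     business_meta.get("title"),
            "series_id": business_meta.get("series_id"),
            "rights":    business_meta.get("rights_window"),
            "mam_id":    mam_meta.get("asset_id"),
        }

        # Automatic QC: refuse to emit a CPL with missing or inconsistent metadata.
        missing = [key for key, value in record.items() if value is None]
        if missing:
            raise ValueError(f"CPL build blocked, missing metadata: {missing}")
        if mam_meta.get("duration") not in (None, record["duration"]):
            raise ValueError("Duration mismatch between track file and MAM record")

        return record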

Sunday, 19 November 2017

BBC on course to produce live sport entirely in the cloud

Sports Video Group

The BBC’s move to ramp up coverage of sport online is part of a comprehensive strategy to shift more – and eventually all – of its live event production onto software and into the cloud.
“We are at the point of transition from a place where IP is being used for contribution but with conventional gallery production, to a world where everything will be in the cloud,” says Tim Sargeant, Head of Production Systems and Services, BBC North and Nations (whose remit includes BBC Sport).
In what BBC Director General Tony Hall dubbed “a reinvention of free-to-air sports broadcasting”, the broadcaster has announced plans to produce and distribute an additional 1,000 hours a year of sport online.
These will be predominantly niche sports accessed via the BBC Sport website and BBC iPlayer, but more content from the BBC’s portfolio, including the Winter Olympics, will also be given an IP treatment.
This is in line with moves since 2012 to build out the Corporation’s IP capacity. “It would simply not be possible to deliver 1000 hours of additional sport on budget without either the internet infrastructure or the IP production techniques we are now able to use,” says Sargeant.
The core infrastructure of encoders and networking capacity to points of presence and onward distribution with content delivery networks (CDNs) was built to deliver 2500 hours of IP content during the 2012 Games.
“The long term plan was always to build on this,” explains Henry Webster, Head of Media Services in the Corporation’s Design and Engineering Platform group. “We’ve not had to hire in capacity.”
The area of focus since 2012 has been distribution at volume. “2012 was by far the biggest streaming effort we’ve ever done, but since then there’s been enormous and continuous growth,” notes Webster. “Our biggest peak to date was around the Euros (the England v Wales game last summer saw 2.3 million unique browsers watching online against 9.3m viewing the TV broadcast) and we expect to be breaking those records again next year.”
Wimbledon is another highly popular piece of live streamed content for the BBC but the World Cup from Russia is likely to smash streaming records for the BBC and most other rights holders.
The hub of the BBC’s live stream infrastructure is dubbed Video Factory. This is packed with Elemental encoders for transcoding contributed live streams into a single Adaptive Bit Rate (ABR) set. Separately, the streams are packaged into varying formats for different devices, mostly using HLS and MPEG-DASH, with some Flash to service platforms still using it. The BBC is also working on an internal delivery network project, again to increase capacity and to manage cost.
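For a rough idea of what that involves, the sketch below shows an illustrative ABR set and the packaging split; the rungs and bitrates are invented for the example and are not the BBC’s actual ladder:

    # Illustrative ABR ladder: each contributed live stream is transcoded once
    # into a set of renditions the player can switch between...
    abr_ladder = [
        {"height": 1080, "fps": 50, "video_kbps": 5000},
        {"height": 720,  "fps": 50, "video_kbps": 3000},
        {"height": 720,  "fps": 25, "video_kbps": 1500},
        {"height": 540,  "fps": 25, "video_kbps": 800},
        {"height": 288,  "fps": 25, "video_kbps": 300},
    ]

    # ...and the same renditions are then packaged per device family,
    # rather than being re-encoded for each platform.
    packaging = {
        "hls":   "Apple devices and most mobile browsers",
        "dash":  "connected TVs and desktop browsers",
        "flash": "legacy platforms still using it",
    }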
The packaging process is performed in the Amazon Web Services (AWS) cloud, with the encoding operation split between AWS and on-premise. “A lot of the capacity we built for the London Olympics was on-premise kit, but increasingly we’re doing live events entirely in the cloud,” adds Webster. “A lot of work is being done to ensure our origin servers can cope with spikes in load back from the CDN.”
Streamlining costs
A key part of the equation is rollout of remote production to further streamline costs. “There are a variety of routes for this,” explains Sargeant. “We’ve already covered some early rounds of rugby league with just a single camera which is then IP contributed back to base where we add BBC commentary. It’s a very light touch production and the production standard and technical requirement is lower than you might find on BBC One. It won’t have 14-camera switching, the graphics will be fairly modest and, because it’s an online offer, we feel viewers don’t mind if there’s a holding card at half time rather than lots of rich analysis.
“We are at the point of transition from a place where IP production is contributed back to base and passed through a relatively traditional gallery with comms and graphics and switching towards a scenario where all of that takes place in the cloud. We are gradually increasing the number of workflows and processes we operate in the cloud to allow us to do those very basic production activities, then to perhaps switch a couple of cameras and mix an additional audio signal, and on to a world where all conventional production tools are software running in the cloud controlled by web browser.”
This technology is common to pretty much anything the BBC streams over iPlayer, notably multicam live events Springwatch and Glastonbury.
There are two other production scenarios. One is use of 3-4 cameras with a very low cost on-site production capacity and local switching before passing into the IP network. Another instance is where the BBC will enhance the live stream already being captured by sports federations for social media platforms or their own websites. The British Basketball League (BBL) is an example where the BBC might take some existing production and overlay its own commentary.
Currently, the BBC’s live stream sports efforts are destined for its own platforms. Discussions are taking place about the merits of distribution to non-BBC platforms – such as Facebook or YouTube.
“The goal is to advance technology that scales on demand and to move away from a world of defined 24-48 channels,” says Webster. “It’s a world where we flex capacity up and down in an ad hoc way.”
Does that mean scope to deliver a 4K stream? “At the moment there’s not a massive demand for that but it’s fair to say that most of our technology is internet and software based and therefore agnostic to resolution,” he responds. “It’s very straightforward for us to scale to deal with higher bit rates and larger frame sizes. IP opens up the possibility of delivering higher rez if that is an option.”
Since all that’s required for switching is a web browser and internet connection there is less need for a dedicated physical sports production centre. “We are in a hybrid world where some events will still come through the gallery in Salford and others where production will be more self-operating and not dependent on a particular centre,” says Sargeant.
Sports coverage produced and delivered over IP currently includes rounds of the FA Cup, ATP World Tour Finals, Women’s Ashes from Australia, Trampoline Gymnastics World Championships and Scottish International Open Bowls and Women’s Soccer League.
Streaming challenges
It should be noted, of course, that live streaming is notoriously bedevilled by a number of issues including buffering and latency.
“There are a bunch of things we don’t have good control over, such as people’s last-mile connection,” admits Webster. “We do have control over ensuring we have the capacity to deliver against those large numbers, and we work with multiple CDNs to ensure that we have redundancy and that we scale to meet the demand.”
In announcing the sports plan, BBC DG Hall admitted the BBC has been forced to evolve as a result of the budget for live sport being slashed. “As we have shown time and time again, we will not stand still…not if we want to meet the changing demands of sports fans, not if we want to remain relevant in the media’s most competitive marketplace.
“While we’re privileged to be funded by the licence fee, it’s no secret we don’t have the same deep pockets as those we must now compete against but we have unique qualities that are essential for those sports who want to ensure their events are available to – and able to inspire – the widest possible audience.”
In January, Hall outlined a strategy to “totally reinvent” the iPlayer by 2020 to increase its reach and become a “must-visit destination”.

The arrival of ultra resolution capture

InBroadcast

The number of film and TV projects acquired in part in 8K resolution is growing but a sharper image is not the first or only benefit to cinematographers.
With 4K production still expensive and bandwidth throttling the throughput of Ultra HD content into both cinemas and the home, camera technology continues to push to higher resolutions. 8K cine cameras from RED, Sony and Panavision have raised the bar for the spatial attributes of an image, at the very least. Meanwhile, Japanese public broadcaster NHK is preparing to start domestic satellite transmissions of its Super Hi-Vision system in December 2018, having helped fund an end-to-end 8K production ecosystem.
8K broadcasting is currently considered a Japanese luxury and is unlikely to be exported to other countries any time soon (if at all). While the pictures are undoubtedly pin sharp – especially when played on smaller OLED screens – work needs to be done on editorial grammar to make the content more compelling than mere pretty wallpaper.
For high end recorded content, though, the arguments for use of 8K are more compelling, and not just in the way you’d think.
“The flexibility of having that extra resolution, whether you are downsampling or reframing/cropping, or simply stabilizing footage is too powerful to ignore on a pure image quality level,” says Phil Holland, a DIT (X-Men: First Class) turned DP. “When filming in 2K there's literally detail and color transitions that ‘aren't there’ that you can 100% see when captured in 8K.  There's no amount of upscaling that can create detail that's not captured during that specific moment.  When you see upscaled content, while the algorithms are impressive, you are just evoking a perceptual response to mostly edge detail.”
Holland says he was handling 4K film scans in the late 1990s and has been finishing in 8K lately, working from material captured on the RED Weapon with the Helium S35 sensor. “Subtleties, like a deep red velvet pillow's texture, are something often lost in lower resolution capture,” he says. “Beyond that, it's again the flexibility of the format and what you can do with higher resolution material. Some people punch in for digital zooms, reframe, crop, etc. With more capture resolution than your desired output resolution you have the ability to do that while the image still holds.”
Timur Civan, a New York-based DP with spots for Nike and Samsung on his CV, agrees that 8K is not about a sharper picture but a softer, more organic one. “Think of it like this,” he says. “The Apple Retina displays (on iPhone, iPad, MacBook) display more resolution than your eye can see. The edges of text, hard lines and high-contrast edges are rendered below visual acuity, thereby making them appear more natural, lifelike and frankly easier to look at.
“There’s been a lot of argument about 8K and what that means versus 4K or 6K or RED versus the Alexa versus Sony. I think many people haven't had a chance to try 8K, and see what that actually does to your image.”
He says the most fun he’s had is shooting vintage lenses on 8K and letting the sensor soak up all the character: “Small subtle blooms, smeared edges, smeared highlights are not just flaws, but now a fully rendered part of the image.”
One episode of the science-fiction TV drama series Electric Dreams, produced by Amazon Studios, Sony Pictures and Channel 4, was largely shot in 8K on a RED Epic with the Helium sensor in order for the DP to accommodate the look of vintage anamorphic lenses and still deliver a 4K 16:9 master. The team calculated that if they shot compressed at 8:1 the total data would be little different from shooting 4K ProRes.
“What seemed so unrealistic in shooting 8K for a fast turnaround TV project with lots of set-ups suddenly became very doable,” says DP Ollie Downey. “An 8K resolution was a creative choice since it allowed us to take a few steps back and use older lenses to warp and bend the image.”
PostProduction
The chief benefit of recording 8K today is being able to perform basic post corrections in a bigger information space. Whether that’s grading, post zooms, crops, VFX or stabilization, getting all the image-degrading work done in the ‘big space’ of 8K means a 4K output (or lower) will still be supersampled.
“If your pixels are so small they can't be ‘seen’, you can't see their noise either. The magic of downscaling cleans up an image significantly,” says Civan. “Even for a 1080p output, shooting 8K means you get two extra stops of noise protection. With the 6400 ISO noise performance of the Monstro, I can realise a 1080p finish for a commercial or broadcast shooting at 25,600 safely.”
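Civan’s two-extra-stops figure follows from simple averaging arithmetic, assuming uncorrelated per-pixel noise and counting each halving of noise amplitude as roughly one stop:

    import math

    # Downscaling 8K (7680 x 4320) to 1080p (1920 x 1080) averages blocks of pixels.
    pixels_averaged = (7680 // 1920) * (4320 // 1080)      # 4 x 4 = 16
    noise_reduction = math.sqrt(pixels_averaged)           # noise falls by a factor of 4
    stops_gained    = math.log2(noise_reduction)           # = 2.0 stops

    print(pixels_averaged, noise_reduction, stops_gained)  # 16, 4.0, 2.0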
Molinare’s Commercial Director, Richard Hobbs, reports a rise in the number of films acquiring and posting in 4K, as that gives producers increased opportunities for future-proofing and sales. “In some instances, we’re doing camera tests at 8K even if delivery may be in 2K, for productions to discuss the possibility of archiving camera rushes at much higher resolutions for future use,” he says.
Studios are also exploring the prospects of producing VFX at higher resolutions. Technicolor recently delivered its first 8K-rendered visual effects piece. The Marvel Studios blockbuster Guardians of the Galaxy Vol. 2 was shot by cinematographer Henry Braham on large-format Panavision Primo 70 prime lenses with the RED Weapon 8K Dragon VistaVision as the best way to render the complexity of the final images.
DPs shooting RED suggest an 8K data footprint works well using its compressed REDCODE RAW format. “You can get about an hour of 8K footage per terabyte shot,” Holland reckons. “Most of my days on set in the last 18 months we've filmed 1-8TB, which is very manageable. The cameras are capable of recording scaled proxies via intermediate codecs like Apple ProRes and Avid DNxHR, which makes the potentially time-consuming task of creating dailies something you can do on the fly if you explore that workflow. 8K is where things get heavier if we're talking uncompressed DPX sequences, and that's a big hurdle for some studios to undertake.
“At literally all of the post houses I've worked with in the last year or so, I was the first person to bring 8K into their facility,” says Holland. “And in reality, the workflow isn't that difficult if you're targeting 4K or 2K output.”
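Holland’s terabyte-per-hour figure translates into a fairly modest sustained data rate, as a back-of-envelope check shows:

    # Rough check of the 'about an hour of 8K per terabyte' figure.
    terabyte_bytes   = 1e12
    seconds_per_hour = 3600

    avg_rate_mbytes = terabyte_bytes / seconds_per_hour / 1e6   # ~278 MB/s
    avg_rate_gbits  = avg_rate_mbytes * 8 / 1000                # ~2.2 Gb/s

    print(round(avg_rate_mbytes), round(avg_rate_gbits, 1))     # 278, 2.2

That is well within what a single modern SSD or 10GbE link can sustain, which is why compressed-raw 8K dailies are manageable where uncompressed DPX sequences are not.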
Lens manufacturers are getting on board with the larger format size and there is a great deal of cinema glass at all sorts of budget levels that covers VistaVision in particular. The Tokina Cinema Vista Primes were the first new set designed for the format. Cooke offers its S7/i primes, Leica has its large-format Thalia primes, and there are the Zeiss CP.3s, Sigma Art lenses and many others which work very well with VistaVision and Full Frame 35 formats.
“I've even adapted and modified some vintage and modern glass from Leica, Mamiya, Olympus, and the more recently released Zeiss Otus primes,” reports Holland. “It's been a wonderful and exciting world to explore from a cinematography perspective, as it was creatively and technically challenging to find lenses I liked for each project. When I began primarily working in 8K VistaVision I certainly heard a lot of ‘you can't pull focus for that format or resolution’, but seriously, this format has been around for many years and we have more advanced focus ‘helpers’ on camera and on set than we did before, so for me that was never part of the hurdle. In fact, on my first 8K VV project I shot at T1.5-T2.8 just to showcase some of the appeal of the larger format size in relation to the depth of field.”
RED is leading the way in pushing sensor resolution, recently adding the Monstro, a full-frame sensor for Weapon cameras. This combination captures 8K full-format motion at up to 60fps, produces 35.4-megapixel stills, and delivers data speeds of up to 300 MB/s.
Sony has launched the 6K Venice and continues to market the 8K F65 CineAlta. Venice houses a 36x24mm full-frame sensor and is compatible with anamorphic, S35mm and full-frame PL mount lenses. Future firmware upgrades are planned to allow the camera to handle 36mm wide 6K resolution.
Panavision’s Millennium DXL Camera outputs 4K proxy files – ProRes or DNx – and large-format 8K RAW files. At the core of the camera is a proprietary image mapping process called Light Iron Color, developed by LA post facility Light Iron, which provides a unique, cinematic look.  SAM’s Rio finishing system is one of the few software tools able to work with the files and the Cinematic Image Mapping Controls.
Of course, a focus on resolution alone is largely dismissed by creatives, since what matters is applying the appropriate look for the story. Dynamic range is considered a more perceptually noticeable tool for delivering greater contrast and colour depth.
This is exacerbated by the widening gap between the image quality which can be captured and the quality of the final image on display. While an increasing amount of content for feature films and TV is acquired in 4K, only a few titles will be mastered for 4K theatrical projection or home entertainment distribution.
According to Holland the argument about which characteristic is more important – resolution or HDR – simply misses the point. “It's about improving the overall image quality.  That is what is important when it comes to getting our carefully captured images to the discerning and deserving eyes of the audiences who experience our efforts.  Motion pictures have always been where science, technology, and art have met and over the years there's been great efforts to improve the visual fidelity to produce a more immersive image for the audience.  That's in my opinion part of the core of how we advance forward in our industry and in our medium.”
When it comes to shooting 8K Civan says resolution at this point isn't about resolution anymore.  “The effective pixels are so small, and so good, that they act less like digital dots and more like paint.”

Broadcast 8K
NHK’s 8K system encompasses cameras (Ikegami’s SHK-810 handheld system and Sony’s UHC-8300 system camera, plus others from Hitachi), switchers (from NEC), Lawo consoles for 22.2-channel audio and giant LCD displays (from LG).
Belgian codec developer Image Matters is backing a European project intended to skip 4K and help broadcasters leapfrog directly from HD to 8K. Called 8K SVIP, the collaboration between Belgian and Czech manufacturers and researchers intoPIX, Cesnet, Image Matters and AV Media aims to give broadcasters the tools to migrate from HD and 4K, and to develop advanced transport technologies to manage 8K signals over SDI and IP.
intoPIX has already been selected by NHK as the compression technology for broadcast of Super Hi-Vision. Using the TICO 8K codec, the roughly 48 Gb/s bitrate of an uncompressed 8K stream (60Hz, 10-bit, 4:2:2) is reduced to fit down a single 12G-SDI cable with a claimed latency of less than 0.2 milliseconds.
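A back-of-envelope check of those numbers (active picture only; SDI framing and blanking account for the gap up to the quoted 48 Gb/s, conventionally carried as four 12G-SDI links):

    # Active-picture payload of 8K at 60Hz, 10-bit, 4:2:2, before blanking overhead.
    width, height, fps = 7680, 4320, 60
    bits_per_pixel = 10 * 2       # 10-bit luma plus 10 bits of shared chroma (4:2:2)

    active_gbps = width * height * fps * bits_per_pixel / 1e9
    print(round(active_gbps, 1))  # ~39.8 Gb/s of active video

    # A lightweight compression of roughly 4:1, such as TICO, brings the signal
    # under 12 Gb/s, i.e. within the payload of a single 12G-SDI link.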
German developer Cinegy has also been teasing audiences at tradeshows with 8K recording and playback demos, highlighting the performance advantages of its NVIDIA GPU accelerated video codec, Daniel2. Cinegy is now in public beta with Daniel2 codec-based applications.
Company CTO Jan Weigner has claimed that Daniel2 can decode up to 1,100 frames per second at 8K, translating into more than 17,000 frames of full HD decoded per second. “It is the world’s fastest professional video codec, leaving any other codec light years behind.”
Cinegy is already touting the performance at 16K of this technology.


Monday, 13 November 2017

Mixed reality: mixing photo real CGI with live footage, in realtime

RedShark News 
New ultra-realistic mixing of live footage with CGI is now being made possible by what was once a dedicated gaming engine.
FremantleMedia, producer of The X Factor, the ‘Idols’ reality-TV show format and The Apprentice, may have another hit on its hands — one which may have cracked open a whole new media experience. It’s basically integrating advanced virtual TV studio technology with games engines and mobile distribution. It merges the physical with virtual worlds. To put it another way, this is genuine Mixed Reality.
Game show Lost in Time is the first concept developed by FremantleMedia using the technology platform it built together with Oslo-based The Future Group (TFG) at a cost of US$42m.
The programme premiered in Norway earlier this year and has just had its first international sale to an Emirates broadcaster which will adapt it for distribution across 20 countries in the Middle East and North Africa.
In the game itself, contestants compete in different challenges in a green-screen studio show against the backdrop of six virtual worlds (the Wild West, the Ice Age, the Medieval Age, the Jurassic period and so on). The contestants and props are real, but everything else you see on screen is visual effects, to a standard which the makers claim was previously only possible on Hollywood movie budgets. Even better, the VFX are real-time capable in a full multi-cam setup.
The big departure from traditional game shows, though, isn’t just the graphics. What’s unique is that viewers watching at home are also placed into the same virtual environment and are able to participate in exactly the same story as those on TV and to compete against studio contestants via a mobile or tablet app.
At the moment, VR headsets aren’t distributed widely enough to justify a primetime viewing slot for a live show. That’s why TFG’s content officer Stig Olav Kasin says it developed the games for iOS and Android devices, giving a large global audience the chance to compete and engage with the content. “However, once VR headsets are more widespread, it will open up a new world of possibilities for the TV industry to blend the best elements of gaming and traditional TV,” he says.
Signs of this are already evident. In partnership with the Turner-IMG ELeague, TFG injected CG characters into the broadcast of the Street Fighter V Invitational esports event last spring. Characters from the game struck fighting poses on the studio set, viewable by studio audiences on adjacent screens and by Turner’s TV audience. It took a couple of months to produce the characters, but the resulting animations were displayed live, combined with the physical sets and presenters, without post production.
TFG was at it again for the ELeague’s Injustice 2 World Championship, broadcast on TBS, Twitch and YouTube (which began last month and continues until November 10) from Atlanta, Georgia. Among the 3D character animations presented to viewers at home as if interacting with the studio audience was Batman. This promises to be the first of a wider deal to augment more superhero characters from the Warner Bros stable in mixed reality.
TFG co-founder Bård Anders Kasin was a technical director at Warner Bros during the making of The Matrix trilogy when he came up with the initial idea for the mixed reality platform.

A new Frontier

The technology platform underlying TFG’s MR format was developed with Canadian broadcast gear maker Ross Video and is being marketed as a standalone software application by Ross.
Branded Frontier is promoted as an advanced form of a virtual set for the creation of photorealistic backgrounds and interactive virtual objects.
At its heart is the Unreal gaming engine, from Epic Games, used as the backdrop renderer of scenery through features such as particle systems, dynamic textures, live reflections and shadows and even collision detection. This works in tandem with Ross’s XPression motion graphics system, which renders all the foreground elements.
Of course, games engines were never designed to work in broadcast. Unreal, or the Unity engine, is superb at rendering polygon counts, textures or specular lighting as fast as possible on a computer. They do not natively fit with broadcast signals, which must be locked to the fixed frame rates and timing of SMPTE timecode. However, when it comes to rendering performance, game engines are a real step ahead of anything in a conventional broadcast virtual set.
It’s the difference between a game engine drawing a frame in a few milliseconds whenever it can and broadcast video that has to be delivered at a steady 25 to 50 frames a second.
What TFG and Ross have done is to rewrite the Unreal code so that the framerates output by the games engine’s virtual cameras and those recorded by robotic studio cameras match. They have succeeded in putting photorealistic rendering into the hands of broadcasters. The virtual worlds are created in advance with features like global illumination, real-time reflections and real-time shadows, and are rendered live, mixed with live action photography.
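Conceptually, the change is to stop the engine rendering frames as fast as it can and instead lock each rendered frame to the studio reference and the tracked camera. The sketch below is a simplified, hypothetical illustration of that idea in Python, not the actual Unreal or Frontier code:

    import time

    FRAME_RATE   = 50              # broadcast output, e.g. 1080p50
    FRAME_PERIOD = 1.0 / FRAME_RATE

    def run_frame_locked(read_tracking_data, render_frame, output_to_mixer):
        """Render exactly one engine frame per broadcast frame, in step with the
        tracked studio camera, instead of free-running."""
        next_deadline = time.monotonic()
        while True:
            camera_pose = read_tracking_data()   # robotic/tracked camera position
            frame = render_frame(camera_pose)    # virtual camera mirrors the real one
            output_to_mixer(frame)               # keyed against the green-screen feed
            next_deadline += FRAME_PERIOD
            wait = next_deadline - time.monotonic()
            if wait > 0:
                time.sleep(wait)                 # hold for the next genlocked frame slot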
Even this would not be possible without the performance of GPU cards from companies like NVIDIA.
According to Kasin, the biggest challenge now for content creators is developing an MR storytelling structure suitable for TV. “Viewers are used to a linear 2D story,” he says. “Creating a unified experience where people can move around freely like in a game simply isn’t possible, or at least no one has cracked it yet,” he says. “The danger for content creators is that they fall into the trap of making the same stories we have today, simply with new VFX.”
He advises show creators not to get overly focused on the new technical possibilities — “a trap into which many 3D productions have fallen” — but to remember that a good story needs real human drama and emotion.
Engagement is one thing, but Fremantle is also promoting the format’s advertising potential. Product placement could simply be ‘written into’ backdrop animations designed to mimic the virtual environment (think of a Pepsi logo styled to fit a saloon in the Wild West). Commercials could also be created in Unreal Engine so that viewers need not feel they are leaving the show's virtual universe.
Nolan Bushnell, the founder of games developer Atari Corp. and a consultant to the TFG project, claims that the fusion of gaming with TV can "bring a standard construct for new kinds of entertainment."
Other format sales of Lost in Time are pending, while TFG believes the tech’s potential has barely been explored. “What we are producing now is like the first smartphone,” says Bård Anders. “There will be a natural progression of this technology.”

Thursday, 9 November 2017

Phenix Wants to Stream to a Billion Viewers—And Says It's Halfway There

Streaming Media 

Eschewing the term "P2P" for the friendlier "peer-assisted," Chicago startup Phenix claims it can offer unprecedented scale and unmatched latency

The technology hasn't yet been built to live stream next February's Super Bowl—reliably—to 110 million viewers, should the NFL's usual TV audience all switch online. Or has it? Streaming systems developer Phenix reckons its platform has the unique capability to deliver global, high-quality, synchronous viewing at broadcast scale today.
"We are good to go and at unprecedented in scale," says Dr. Stefan Birrer co-founder and CEO of the Chicago-based startup.
"We dream of streaming the Olympics opening ceremony to a billion people worldwide in real-time, a task that is not possible with other streaming technologies," says Birrer.
In a recent article for Streaming Media, executives from several streaming and CDN companies concluded that the internet, as it currently stands, is incapable of allowing for mass-scale live streaming.
Not so, according to Birrer. Phenix claims to be the only company offering a genuine real-time streaming experience to customers at scale, operating under the mantra that if it's live, it's too late.
While most services rely on either HTTP Live Streaming (HLS), which scales but adds delay, or WebRTC, which is fast but hard to scale, Phenix's platform offers the total package—high quality, less than half a second of latency, and a potentially unlimited number of concurrent users.
Its suite of technologies, built from scratch, is called PCast. It includes algorithms designed to eliminate unnecessary idle server capacity and to provision resources only when demanded. A "Flash Crowd Elasticity" feature provisions resources "in seconds" to handle large crowds joining popular streams "without interruption to the platform", and an Interactive Transport Protocol optimizes packet delivery for each unique connection environment (3G, 4G, LTE, Wi-Fi) and the available bandwidth.
"Our platform uses a scale-out architecture, where capacity grows linearly with added resources, and load is distributed to nodes with available capacity," explains Birrer. "Our microservices layer is designed to deal with various load levels to prevent overloading components at any time. Each of our 13 points-of-presence (POPs) can handle several million concurrent users, and the system can scale even further using CNAME or anycast IP load balancing."
Birrer adds, "APIs allow our customers to build any type of application involving interactive broadcasting, whether that's one-to-many broadcast, a few-to-many groupcast, or group chat. These APIs are enterprise-ready, WebRTC standard-compliant, and support all mobile devices (iOS and Android), computers (desktop, laptop, tablet), and web browsers."
PCast runs on the company's own cloud infrastructure, hosted on the Google Cloud Platform, but Phenix is seeking other vendors so it's not reliant on Google (as good as it is).
On top of this server-assisted, CDN-like platform, Phenix creates scale with a Proximity Multicast technology claimed to decrease operating bandwidth costs by up to 80%. The company used to be called PhenixP2P but, aware of peer-to-peer's negative connotations (illegal file sharing for one), it prefers the term peer-assisted delivery and has dropped the "P2P" from the name.
Birrer knows a thing or two about P2P. He has an M.S. and Ph.D. in computer science from Northwestern University with a special focus on researching P2P streaming algorithms. Also on the Phenix team is lead scientist Fabian Bustamante, a professor of computer science at Northwestern.
"Using peer-assisted delivery, applications require less bandwidth from the backend and consequently reduce the total cost for delivering content," says Birrer. "Peers utilize nearby peers to offload some or all of the bandwidth demands from the back-end to the network. Our peer-assisted technology scales naturally with the number of participating peers."
Prior to setting up Phenix in 2013, Birrer worked for Chicago software and financial management company SempiTech where he designed and developed a social video chat application for Rabbit Inc and worked on applications for trading and messaging systems—experience that gave him the insight to develop a video streaming technology that would have a latency in the milliseconds.
"The internet's next wave is realtime. Back when we started Phenix, the delay was in the minutes," he says. "Quality of video has improved since, but lag is still a major issue. There needs to be a better way when the younger generation wants everything now, and not a live stream sports experience ruined because friends or neighbours gets the result a minute before you."
In a similar scenario to the Mayweather vs. McGregor Showtime stream, Birrer suggests some issues could have been allayed with a technology such as Phenix's, which is "aware" when the platform is being overloaded.
"If you project for 4 million users and you have 5 million coming on board in a few minutes, the platform must be able to react to this and scale capacity accordingly. Most systems are not aware of the limits. A fundamental approach to our platform is that with advanced monitoring and control algorithms, Proximity Multicast instantly adapts to changing network conditions."
Each POP is certified for 300,000 concurrent streams today, he says. This, though, could readily be scaled to 5 or even 8 million per data center if an event like the Super Bowl should come calling. In that instance, Phenix would set up 30 POPs (in the U.S.), each scalable up to 5 million users, he says.
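A quick sanity check on the Super Bowl scenario, simply multiplying the figures Birrer quotes:

    pops           = 30            # planned U.S. points of presence for such an event
    users_per_pop  = 5_000_000     # quoted scalable capacity per POP
    total_capacity = pops * users_per_pop

    print(f"{total_capacity:,}")   # 150,000,000, comfortably above a ~110m TV audience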
"Because we built our stack from the ground up, we have a very deep knowledge about it and we can do some options that many other can't. You can have two encoders doing the same thing in parallel and exchange one signal for another if one stream fails. The goal is end-to-end redundancy."
It has 10 paying customers, some for two years, including a "billion-dollar Asian company", and another 20 to 30 testing proofs of concept. These include companies in social media, news, and esports, as well as a sports broadcaster in Europe. No names were provided.
Phenix raised $3.5 million during its Series A funding round last April, bringing the total raised above $5.5 million since launch.
Coming soon to PCast is edge insertion of advertising to avoid ad blockers. The company is also looking to expand into international markets, notably China.
"Even though we already deliver a super-fast stream we are still looking to make it faster and cut out a couple hundred milliseconds. We want to build technology that can stream the Olympic opening ceremony globally. That is the type of ambition we have."