Thursday, 13 April 2017

Steve Shaw: Chromatic Scaler

British Cinematographer
While the industry is energised by products and pipelines capable of generating and displaying high dynamic range images, there are voices keen to return the focus to the essence of visual storytelling.
“HDR can hurt,” says Steve Shaw, with characteristic candour. “Personally, I don’t need searing highlight brightness, since that’s not where the story is being told. A good story needs to be presented accurately. A picture in which the colours are slightly too cool, for example, evokes a different emotional response in the viewer from the one intended, and that difference is crucial to me. You can get as much artistic intent off an SDR screen as an HDR monitor so long as colour is managed accurately.”
He holds a similar view of higher resolution. “All I’m seeking is for colour to be managed from beginning to end of the workflow chain as consistently and accurately as possible so that a cinematographer’s vision is faithfully rendered when anybody views their work anywhere."
Shaw is one of the most respected figures in the global production community. He has insider knowledge of the emergence of high-end digital post production, understands what it’s like to be an artist for hire and has driven colour science to a new level. His company, Light Illusion, is built on Shaw's ability to understand and communicate complex technologies to the cinematographer and colourist, and to translate their needs into technology in turn.
It all started by happenstance growing up in Newbury. The teenage Shaw was all set to continue education at sixth form college en route to university when an engineering job came up at local start-up Micro Consultants.
"I attended an interview just for the experience, and they offered me a full-time job," he recalls. The company enrolled Shaw on a training programme, learning to solder circuit boards on machines for converting video signals from analogue to digital and back again.
Perhaps only company founder Sir Peter Michael had an idea that this technology would be the foundation for one of the most successful companies the industry has known. In 1975, Quantel, as the company was now known (derived from Quantised Television), released the first all-digital framestore, the DFS 3000, followed by a DVE, the DPE 5000.
"I began as a test engineer for the DPE 5000 which was capable of picture in picture. The Paintbox was still on the drawing board," he says. "I began to spend more and more time on the road, increasingly in America providing engineering support."
Over the decade from the release of graphics system Paintbox in 1981, Quantel became the worldwide industry standard for commercials production. Effects compositing system Harry and Henry, the first multi-layer compositor, were iconic badges which facilities would market at a premium to clients. Shaw, as a product manager, was instrumental in devising their road maps and launches.
"We were flying producers and DPs back and forth to Cannes," he recalls. "Everybody was making very good money, including our clients. Quantel's success was based on that partnership. Quantel's equipment was just so advanced, not just in television, but in the world of digital engineering. Quantel was fantastic to work for. I was very lucky to have been part of that."
After 17 years with the firm, it was time to move on. When the late Andrew Christie, chief executive of one of London’s most renowned post houses, Complete Video, asked Shaw to set up a special film VFX division, he made the move.
"Since I knew the kit inside out I knew how to get the most out of it," he says. "By default and with no real intent, I became a creative.”

Everything was shot and delivered on film, and the digital VFX process was extremely slow by today's standards, but for its time the new division, Men In White Coats (MIWC), was doing ground-breaking work on projects including Captain Jack, Lost In Space and Elizabeth.
MIWC was "phenomenally successful in a very short space of time" but Shaw wasn't happy. "You realise after a year that you are always in the same dark room doing pretty much the same work. Having spent the best part of the previous two decades running around the world doing all sorts of work on a company Amex card, life as a VFX company owner and operator began to lose its sheen."
Leaving MIWC in 1999, Shaw joined the board of telecine maker Cintel as technical director. "I made a mistake," he admits. "I should have done more research. The guys had a plan and really wanted to turn the company's fortunes around but the market just wasn't responding."
During a short spell at intercoms kit specialist Trilogy he met David Bush, founder and MD of Cinecittà Digital, a new facility and division within the Cinecittà Film Studios complex.
"He gave me a chance to get back to the coal face of digital creation," says Shaw. "I realised that this is what I love. So I got on a plane to Rome."
Working as a freelance colourist and imaging consultant (as Digital Praxis), Shaw helped Bush start digital intermediate division D-Lab, which later served as a technical test bed for Quantel's DI platform, iQ.
"Essentially, this was the start of the rest of my career. I was once more bouncing around Europe troubleshooting and consulting for digital film post production."
Light Illusion is an evolution of Shaw's consultancy work framed around advanced calibration tools for colour management.
"I may have come up with the concept for LightSpace CMS based on my own needs as a colourist, but I'm no coder," he says. "I'm just the front man. When it comes to maths and algorithms then I work with a fabulously talented team of people and development partners."
An advanced colour mathematics engine lies at the heart of LightSpace CMS, SpaceMatch DCM and SpaceMan ICC. Combined, this software offers colour-critical management for all workflows, from digital intermediate grading systems to paint and graphics systems, on both Mac and PC workstations.
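Light Illusion's engine is proprietary, but the basic operation its calibration LUTs feed is well understood: a 3D LUT maps measured display behaviour to target colours, and each pixel is corrected by trilinear interpolation across that lattice. A minimal sketch in Python (illustrative only, shown here with an identity LUT rather than real calibration data):

```python
import numpy as np

def apply_3d_lut(pixel, lut):
    """Apply a 3D LUT to one RGB pixel in [0, 1] via trilinear interpolation.

    `lut` has shape (N, N, N, 3): a lattice mapping input RGB to output RGB.
    """
    n = lut.shape[0] - 1
    # Scale the pixel into lattice coordinates and find the surrounding cell.
    pos = np.clip(np.asarray(pixel, dtype=float), 0.0, 1.0) * n
    lo = np.floor(pos).astype(int)
    hi = np.minimum(lo + 1, n)
    frac = pos - lo  # fractional position inside the cell

    out = np.zeros(3)
    # Blend the 8 corners of the cell, each weighted by proximity.
    for corner in range(8):
        idx, weight = [], 1.0
        for axis in range(3):
            if corner >> axis & 1:
                idx.append(hi[axis]); weight *= frac[axis]
            else:
                idx.append(lo[axis]); weight *= 1.0 - frac[axis]
        out += weight * lut[idx[0], idx[1], idx[2]]
    return out

# Identity 17-point LUT: output equals input, so the pixel passes unchanged.
grid = np.linspace(0.0, 1.0, 17)
identity = np.stack(np.meshgrid(grid, grid, grid, indexing="ij"), axis=-1)
print(apply_3d_lut([0.25, 0.5, 0.75], identity))  # ~[0.25 0.5 0.75]
```

In a real calibration workflow the lattice entries would come from probe measurements of the display rather than an identity grid.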
"I'm talking with cinematographers and colourists all the time and while we disagree on some things, which I view as a healthy stimulation, we all understand the concept and importance of colour," he says. "This is more important today than at any time, because of the explosion in different capture formats and variety of monitor technology. Back in the day you might have had a Sony or Barco Grade 1 CRT using phosphors you could rely on for picture reference. These days there are many different types of displays, using vastly different technologies, making the job of colour management and display calibration more critical than ever."

Wednesday, 12 April 2017

Sony claims 10-year life span for new SSD

RedShark News

With data accumulation advancing all the time, reliable and long-lasting storage options that don’t require encasing in kryptonite are an essential part of the armoury. Enter Sony with a new solid-state drive claimed to last a decade for the average user.
http://www.redsharknews.com/production/item/4491-sony-claims-10-year-life-span-for-new-ssd

There are in fact two new SSDs in its G Series range of drives, dockable to camcorders and DSLRs. These are the SV-GS96 with a 960GB capacity and the SV-GS48 with a 480GB capacity.
The 480GB drive is rated to reach 1200 terabytes written (TBW) while the 960GB SSD achieves up to 2400 TBW, using Error Correction Code (ECC), a way of detecting and correcting bit errors in memory.
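Sony doesn't detail its ECC scheme, but the underlying idea — store extra parity bits so a flipped bit can be located and flipped back — can be illustrated with the classic Hamming(7,4) code (a textbook sketch, not Sony's implementation):

```python
def hamming74_encode(d):
    """Encode 4 data bits as 7 bits, with parity bits at positions 1, 2 and 4."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    """Locate and flip a single-bit error; the syndrome is its 1-based position."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:
        c[syndrome - 1] ^= 1  # flip the corrupted bit back
    return [c[2], c[4], c[5], c[6]]  # recovered data bits

word = hamming74_encode([1, 0, 1, 1])
word[5] ^= 1                      # simulate a bit flipping in a flash cell
print(hamming74_correct(word))    # [1, 0, 1, 1] -- error found and fixed
```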
Sony calculates that if data is fully written to the drive an average of five times a week then the 960GB model will last you a good ten years. In comparison, you’d need to buy three 300TBW drives to record the same material in the same timeframe, Sony says.
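The arithmetic behind that claim is easy to check — writing the full capacity five times a week for ten years comes to roughly the rated TBW:

```python
def years_of_life(tbw, capacity_tb, full_writes_per_week=5):
    """Endurance in years, given a TBW rating and a weekly write load."""
    tb_written_per_year = capacity_tb * full_writes_per_week * 52
    return tbw / tb_written_per_year

print(years_of_life(2400, 0.96))  # SV-GS96: ~9.6 years
print(years_of_life(1200, 0.48))  # SV-GS48: ~9.6 years
```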
It further claims that “while other SSDs have a tendency for data write speeds to suddenly drop after repeated re-write cycles”, the new drives have been engineered to prevent sudden speed decreases, while ensuring stable recording of 4K video without frame dropping.
It has tested this and states that when paired with an Atomos Shogun Inferno, the G Series SSD was able to record 4K 60p (ProRes 422 HQ) video stably.
If the drive is removed prematurely, or if the recorder’s power is suddenly cut off, Sony’s data protection technology kicks in to keep the data secure.
Apparently shock-resistant even when dropped from shoulder height, the drive has a connector that will stand up to 3000 removals and insertions, according to Sony's tests.
Available next month, expect to pay £430 for the SV-GS96 and £230 for the SV-GS48.

Monday, 10 April 2017

Offspring take wildlife night filming to new levels

VMI
The extreme low-light, 4 million ISO Canon ME20F-SH has been used by Offspring Films to capture the behaviour of one of the world’s fastest mammals at night for the first time.
The elephant shrew, with a recorded speed of 28km/h, was filmed in the wild in Kenya by Offspring for One Wild Day (working title) for a major broadcaster. The 3 x 60-minute natural history series explores how certain animals survive and thrive in different habitats over a 24-hour period.
For the episode set in the African savannah, producer/director Anwar Mamon and DP Mark Payne-Gill selected the ME20F-SH, a full-frame HD camera with a sensitivity of over ISO 4 million (+75dB).
“Obviously, telling the story of an animal’s life over 24 hours means filming at night,” explains Mamon. “The traditional way of doing this is to use infrared (IR) imaging, which gives you a look which is quite cold. In contrast, for this series the commissioning channel wanted to give the show a warmth, and that meant making the animals look natural by capturing as much colour as possible.”
The elephant shrew – or sengi, as the creature is colloquially known – moves rapidly at ground level, making tunnels in the grass as it uses its extended nose to hunt for insects. “The tunnels are about three fingers’ width and provide a roadmap for judging where they might go and therefore where to place the camera,” says Mamon. “We put the camera on a slider so we could move between tunnels.”
Footage was recorded onto a Convergent Design Odyssey 7Q recorder with a 5.6" TV Logic monitor used as a makeshift viewfinder.
“VMI had supplied us with the ME20 before so it made perfect sense to work with them again,” says Mamon. “While we knew exactly what we wanted they delivered it all for us just as we expected.”
Offspring and Payne-Gill had first used the camera to film small primates called tarsiers in Indonesia last year, so had a good working knowledge of just how far to push it before introducing too much noise.
“We found, in tests, that the ME20 produced excellent results up to 45dB (approx. 140,000 ISO), after which noise became noticeable but, with noise reduction, it would still produce incredible results,” says Payne-Gill.
The team were able to use very low soft key lighting, provided by an Aladdin 'A' light and Eyelight LEDs, without affecting the creature’s behaviour. “As a result we only needed to shoot between 18 and 21dB (approx 50,000-70,000 ISO),” says Payne-Gill. “The camera wasn't even having to work hard to give amazing noise-free images.”
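Gain in dB and ISO are related logarithmically: every +6dB of gain doubles sensitivity, i.e. one stop. A quick sketch, assuming a base of roughly ISO 800 at 0dB (an assumption, but one consistent with Canon's quoted +75dB ≈ ISO 4 million):

```python
def gain_db_to_iso(gain_db, base_iso=800):
    """Convert video gain in dB to approximate ISO (6dB of gain = one stop)."""
    return base_iso * 10 ** (gain_db / 20)

for db in (45, 75):
    print(f"{db}dB ~ ISO {gain_db_to_iso(db):,.0f}")
# 45dB ~ ISO 142,262; 75dB ~ ISO 4,498,731
```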
The use of a little supplementary lighting meant he was able to shoot at f2.8 and f4 on macro lenses.
Other programmes in the series explore jungles and deserts. For the deserts programme the team deployed the ME20F-SH to film an even smaller animal, the kangaroo rat, and in the Namibian desert they recorded wild elephants at night.
“At times we were shooting during a full moon, which is bright enough to see detail on the animals and in the sky,” says Mamon. “We also filmed elephants with less than a full moon and, despite not being able to light such a large area, the camera still performed well.”
The offline is being performed at Bristol’s Films@59 on Avid, with the online in Autodesk Flame and the grade on Lustre. Noise reduction is done in Avid Symphony using Neat Video.
TX is later this year.

Thursday, 6 April 2017

70 manufacturers back AIMS as it wins IP standards fight

Broadcast

The Alliance for IP Media Solutions (AIMS) has won the battle to establish the industry standard for the transport and management of video over IP, with rivals Evertz and Sony declaring that they will now drop their rival protocols.
http://www.broadcastnow.co.uk/techfacils/70-manufacturers-back-aims-as-it-wins-ip-standards-fight/5116606.article?blocktitle=Latest-News&contentID=1151

One year ago, disparate groups of manufacturers had allied behind competing standards aimed at helping to shift TV production from SDI to IP.
But AIMS will now head into this month’s NAB trade show as the clear winner, with 70 manufacturers, broadcasters and other related parties backing its open standards-based approach.
“There is still some disparity in the industry but by weight of numbers, AIMS has won the argument,” said Tim Felstead, head of product marketing at SAM.
Evertz and Sony – both also members of AIMS – developed parallel strategies for video over IP technologies, branded ASPEN and the Network Media Interface (NMI) respectively.
But Nicolas Moreau, product marketing manager for IP live production and workflows at Sony, said that NMI will now be “dropped” in favour of the incoming SMPTE standard 2110 (ST 2110). Evertz is also throwing its weight behind ST 2110.
“The fundamental elements of ASPEN have been harmonized into ST 2110,” said Mo Goyal, Evertz’ director of product marketing. “All vendors are on the same page. The industry would like to see a standard we can interoperate around and as one of the major players, we’re leading the charge.”
ST 2110, which could be ratified as early as this summer, defines a means of transporting and synchronising audio, video and metadata. Unlike the existing SMPTE 2022-6 standard, which treats all the signals as one stream, each element in ST 2110 is split into separate components.
Benefits include greater flexibility and efficiency in managing audio streams and subtitles, and in handling High Dynamic Range video.
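Conceptually, the difference is one multiplexed stream versus one stream per essence. The toy sketch below (plain UDP with hypothetical multicast addresses — real ST 2110 streams are RTP, timestamped against a common PTP clock) shows the essence-splitting idea:

```python
import socket

# SMPTE 2022-6 thinking: one stream carrying the whole embedded SDI signal.
# ST 2110 thinking: independent streams per essence, sharing a common clock.
ESSENCES = {
    "video": ("239.0.0.1", 5004),   # hypothetical multicast groups/ports
    "audio": ("239.0.0.2", 5006),
    "meta":  ("239.0.0.3", 5008),
}

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for name, (group, port) in ESSENCES.items():
    payload = f"{name}-packet".encode()
    sock.sendto(payload, (group, port))  # each essence routed independently
```

Because each essence travels on its own stream, a device that only needs audio can subscribe to the audio group alone, with no de-embedding required.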
Developed in tandem with ST 2110 is the Networked Media Open Specifications (NMOS). Although not part of the standard, this method of enabling different IP devices to identify each other has garnered widespread support.
“We are committed to interoperability for IP live production and in that sense, the networking interface technology we used in the past will evolve into SMPTE 2110 and NMOS,” said Moreau.
Products compatible with ST 2110 and NMOS are expected to debut at NAB, which runs from 22 to 27 April in Las Vegas.

Wednesday, 5 April 2017

Slave to the algorithm

Digital Studio ME

Artificial Intelligence is popularly the subject of dystopian visions from Blade Runner to The Terminator. Even starting this article with the phrase ‘AI has begun to enter mainstream consciousness’ is loaded with extra-curricular meaning. The technology has emerged from theory into the sunlight, impacting everything from robotic bank tellers to self-driving cars. Media is no exception.
Data, specifically metadata, has been the currency of media organisations for some time, and essentially all AI does is take this to another level. Recent extraordinary advances have been possible thanks to technical and intellectual breakthroughs that have allowed the development of very large Artificial Neural Networks (ANNs: computational models inspired by how the brain works), coupled with the availability of a huge quantity of data to train them.
To illustrate the scale of the progress, the performance of object recognition algorithms on the benchmark ImageNet database went from an error rate of 28% in 2010 to less than 3% in 2016, lower than the human error rate on the same data.
Equity funding of AI-focused start-ups reached a record high in the second quarter of 2016 of more than $1 billion, according to researcher CB Insights.
Most R&D is being driven by computing and web giants Microsoft, Google, Facebook and Amazon who are best positioned to hoover consumer data on everything from buying habits to exercise regimens. Banks of their machines can be fed vast amounts of multimedia for processing and organising by algorithms for object, voice and facial recognition, emotion detection, speech to text or any programme we want to through at it.
Google CEO Sundar Pichai has said the company’s shift to AI is as fundamental “as the invention of the web or the smartphone.” He went so far as to suggest that we are evolving from a mobile-first to an AI-first world.
In reality, most AI apps are productised machine learning (ML) applications for which the term AI is misleading.  ML can be understood as ‘learning machines’ as distinguished from AI, or ‘machines that think’. AI is a branch of computer science attempting to build machines capable of intelligent behaviour, while Stanford University defines ML as “the science of getting computers to act without being explicitly programmed”.
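Stanford's definition is easiest to see in code. In the minimal scikit-learn sketch below (toy data invented for illustration), no rule separating the two classes is ever written down; the model infers it from examples:

```python
from sklearn.linear_model import LogisticRegression

# Toy training data: hours of viewing per week -> did the subscriber churn?
hours   = [[1], [2], [3], [10], [12], [15]]
churned = [1, 1, 1, 0, 0, 0]   # 1 = churned, 0 = stayed

model = LogisticRegression().fit(hours, churned)   # the "learning" step
print(model.predict([[2.5], [11]]))                # -> [1 0]
```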
You need robust ML before you get to AI, of course, and currently there are few true mainstream AI applications outside of autonomous cars.
IBM prefers to talk about augmented intelligence.  “It’s an approach which asks how AI supports decision making and demands a societal change in how we look at technology,” says Carrie Lomas, IBM’s cognitive solutions and IoT executive. “Through personal devices like tablets to all manner of items with sensors, the industry as a whole is taking in lots of data and combining it with different types of information to enable a genuinely new understanding of the world.”
IBM’s cognitive computer system Watson is a set of APIs or building blocks which can be combined for different software applications by third parties.
For example, IBM has combined its Alchemy Language APIs with a speech-to-text platform to create a tool for video owners to analyse video – forming IBM Cloud Video. It is able to scan news and social media in real time to understand how people are talking about a company, which topics are important, and how people feel about them.
Some 75% of Netflix’s usage is driven by recommended content that was itself also developed with data – reducing the risk of producing content that people won’t watch and proposing content that consumers are eager for. This ground-breaking use of big data and basic cognitive science in the content industry has shown others its potential.
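Netflix's real system is vastly more sophisticated, but the kernel of collaborative filtering — score unseen titles by what similar viewers watched — fits in a few lines (toy data, illustrative only):

```python
import numpy as np

# Rows = viewers, columns = titles; 1 = watched. (Invented data, not Netflix's.)
watched = np.array([
    [1, 1, 0, 0],   # viewer 0
    [1, 1, 1, 0],   # viewer 1
    [0, 0, 1, 1],   # viewer 2
])

def recommend_for(viewer, matrix):
    """Score unseen titles by how often similar viewers watched them."""
    sims = matrix @ matrix[viewer]          # overlap with every other viewer
    sims[viewer] = 0                        # ignore self-similarity
    scores = sims @ matrix                  # weight titles by viewer similarity
    scores[matrix[viewer] == 1] = -1        # don't re-recommend seen titles
    return int(np.argmax(scores))

print(recommend_for(0, watched))  # -> 2: viewer 1 is most similar and watched it
```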
“The world’s biggest content owners are going direct to consumers,” says Nagra senior director product marketing Simon Trudelle. “With a growing stock of videos available, just relying on manually managed catalogues or curated lists to create TV or SVOD services has already started reaching its limits.”
The use of AI relies heavily on massive volumes of unstructured data – and a lot more has become available now that video-enabled consumer devices are connected. Capturing and managing TV/video platform data so it can be exploited by advanced predictive algorithms is becoming a key focus for the media industry.
Voice assistants such as Amazon’s Echo and Google Home record user voices in order to function, a logical extension of which is to have cameras on smart TVs and STBs relay information back to the operator about who is watching to improve individual profiling, content serving, ad targeting and automated product insertion.
This may appear more intrusive than the way in which Google or Amazon appropriates data from web searches, for example, and it opens up a debate about how much data consumers may be willing to part with for perceived benefits or service discounts.
According to Bloomberg, Amazon, Google and Microsoft are aggregating voice queries from each system’s user base to educate their respective AIs about dialects and natural speech patterns.
As if to circumvent criticism, Amazon, Facebook, Google, IBM and Microsoft have formed the non-profit Partnership on AI to advance public understanding of the subject and conduct research on ethics and best practices.
“The advent of cloud-based apps and APIs means 2017 will be about personalisation,” says IBM’s Lomas. “It’s not just about knowing age and gender but knowing a consumer’s emotional response to products marketed to them. Cognitive computing enables media and brands to personalise their approach in a frictionless way.”
End-users will benefit from the increasing role of AI, in particular in interacting with the media. According to Pietro Berkes, principal data scientist, Kudelski Group, “Virtual assistants will understand their preferences and respond to vocal command, facilitating content discovery from multiple sources.  As traditional media becomes increasingly connected, AI will enable content providers to interact with end-users. AI assistants will help consumers select personalised camera angles for sport events and they will deliver automatic summaries of latest news and missed TV shows.”
Adoption is bound to grow as all media experiences become fully connected and new products are developed to provide more convenience, relevance and satisfaction to the user experience.
At Kudelski, ML algorithms are being used to assist human decisions in all its core businesses including helping operators understand the behaviour of subscribers, predict churn and optimise their catalogue.
Its security division uses ML methods for “privacy-preserving user behaviour modelling and intrusion detection.” ML is also applied to help infrastructure operators better manage peak traffic situations and to detect and prevent fraud in deployed systems.
New ANN techniques are being developed to beat traditional encoding and decoding algorithms for video. “They will allow the transmission of high quality media content even in regions with low internet and mobile bandwidth,” he says. “ANNs are being used not only to build better compression methods but also to artificially clean up and increase the resolution of transmitted images (known as ‘super-resolution’).” Magic Pony Technology, acquired by Twitter last June for $150m, is able to reconstruct HD video from a low-definition, compressed stream, for example.
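Magic Pony's models are proprietary, but the published SRCNN architecture (Dong et al., 2014) captures the super-resolution idea: upscale naively first, then let a small convolutional network restore the detail the interpolation cannot. A skeletal, untrained PyTorch sketch:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SRCNN(nn.Module):
    """Three conv layers: feature extraction, non-linear mapping, reconstruction."""
    def __init__(self):
        super().__init__()
        self.extract = nn.Conv2d(1, 64, kernel_size=9, padding=4)
        self.map     = nn.Conv2d(64, 32, kernel_size=1)
        self.rebuild = nn.Conv2d(32, 1, kernel_size=5, padding=2)

    def forward(self, x):
        x = F.relu(self.extract(x))
        x = F.relu(self.map(x))
        return self.rebuild(x)

# Upscale a low-res luma frame with plain interpolation, then refine it.
# (Untrained here -- in practice trained on pairs of low/high-res patches.)
low_res = torch.rand(1, 1, 270, 480)                     # e.g. quarter-res frame
coarse = F.interpolate(low_res, scale_factor=4, mode="bicubic")
detail = SRCNN()(coarse)                                 # (1, 1, 1080, 1920)
```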
Associated Press uses an automated algorithm to cover earnings reports for thousands of companies; Yahoo Sports creates personalised articles for Fantasy Football fans. Both companies use the services of Automated Insights.
What else might AI do? As an aid to accessibility AI can automate description of photos and movie scenes for the blind. Facebook, Microsoft and Google have this in the works. Automatic subtitles for the hearing-impaired can be derived from speech recognition and lip reading.
Video from observational documentary shoots regularly reaches shooting ratios of 100:1, swamping editorial. Auto-assembly and even auto-edit packages like Antix and Magisto are available today to package and polish GoPro and mobile phone video, though instances of their use in professional content creation are rare.
A documentary assembled by the Lumberjack AI system is hoped to be presented to the SMPTE-backed Hollywood Professional Association (HPA) by 2018, and the system has already helped create Danish channel STV's 69 x 10-minute episodes of semi-scripted kids series Klassen.
Other examples of Watson being used to inspire human creativity include Grammy-winning music producer Alex da Kid, who used Watson to inspire his break-out song, ‘Not Easy’.
The trailer for the Fox film Morgan was assembled using Watson.
A number of recent developments in ML research will allow picture and movie content editing with Photoshop-like tools that edit conceptual elements of an image instead of individual pixels. One such example is a Neural Photo Editing tool, designed to directly edit facial features.
While automating previously manual processes will inevitably lead to a loss of some human roles, other roles will open up. Since ML systems need very large amounts of high-quality data to achieve optimal performance, data collection and curation require substantial organisational effort. “The global shortage of ML experts represents one of the most important difficulties for companies wanting to enter the AI market,” reckons Berkes.
Realistically, it may still take several years before new AI APIs become widely available and adopted by the traditional content creation and distribution value chain. “It’s really a new mindset that players need to have,” suggests Trudelle. “It’s one which asks ‘What if there were a cloud AI API doing this?’”

Tuesday, 4 April 2017

VMI launches storage and media card rental service

Broadcast
Kit hire firm VMI is launching a service that will specialise in the hire of media cards and hard drives.
VMEDIA will rent out CFast2 and XQD cards, RED Mini Mags and solid state drives (SSD) for Convergent Design Odyssey and Atomos Shogun monitor-recorders plus associated readers and transfer devices for users in the London area.
VMI managing director Barry Bassett told Broadcast that he is responding to production demands driven by the move to 4K.
“As people embrace higher quality with 4K, the media demands are significantly greater than they used to be,” he said.
“A large production may suddenly need a very generous quantity of extra media. Other clients might require one specific SSD to cater for a last minute job, but everyone is under pressure to meet deadlines. This fluctuation has previously meant it was hard to judge exactly where and when the need might fall.”
The service includes delivery of the media by an electric-powered BMW that will be driven daily into central London.
Since the vehicle incurs no congestion charge, Bassett says he can keep costs to £25 each way, “equivalent to the cost of a bike courier”.
A drop-off point for returning media will be made available in Fitzrovia.

The future of SMPTE 2110 and beyond

Sports Video Group Europe

A year ago the broadcast equipment manufacturing industry was at loggerheads, in pursuit of incompatible paths to the common goal of transitioning video from SDI to IP. As we head toward NAB, it’s fair to say that there’s been a remarkable sea change. By and large the industry has aligned behind a set of standards which should deliver unprecedented interoperability and therefore accelerate adoption of video over IP.
http://www.svgeurope.org/blog/headlines/the-future-of-smpte-2110-and-beyond/

“There’s no doubt anymore that AIMS – or perhaps it’s better to say SMPTE 2110 – has won the race,” says Felix Krückels, director of business development, Lawo. “Open standards are a must in our niche industry. I would say that NMI stays only until we have an agreed UHD codec, and ASPEN is more or less dead already today.”
“The industry is coalescing because you need a rock-solid foundation for how next-generation facilities and studios will be built,” says Grass Valley CTO, Production, Chuck Meyer. “A key to that is interoperability since this is the only way the industry can develop and grow at scale.”
Particular attention is trained on SMPTE 2110 which may optimistically be ratified as a standard this summer, though some think it more likely in 2018.
“My feeling is that there’s enough vested interest now that it will happen sooner rather than later,” says Tim Felstead, head of product marketing, SAM. “The longer it drags on the worse it is for everybody.”
His only fear is that one company may scupper it within SMPTE much in the way Russia might use its veto at the UN.
Trade group AIMS has the backing of 70 members, including Evertz and Sony, which are both sidelining their own video over IP variants to back ST 2110.
Evertz developed ASPEN for customers needing to move into an IP world faster than the slower cogs of standards bodies were allowing. Now not only a member of AIMS but an active participant in the development of 2110, Evertz says ASPEN will morph into 2110 and, over time, its reason to exist will cease.
“I don’t see ASPEN dying away immediately,” says Mo Goyal, Evertz’ director of product marketing. “There still needs to be a transition point for those facilities running ASPEN, which has been proven to work for facilities at scale. There is a requirement to have an upgrade path to 2110 and our position on this is via firmware on the device.”
For Sony, Nicolas Moreau, product marketing manager for IP live production and workflows, says: “The future of live IP production will be based on ST 2110 plus NMOS and the legacy of our solution.”
These legacy elements of NMI include encryption and the Sony codec LLVC. It will be showing its developments on the AMWA and AIMS stand at NAB.
What is in 2110
Like 2022-6, ST 2110 defines a transport and timing protocol for A/V and metadata, but unlike 2022-6 the key concept is to split the signals into independent essences. This approach, encapsulated in TR-03, devised by the VSF, is better suited to a production environment than a composite stream as, for example, it makes audio processing much easier since no de-embedding and re-embedding is required.
TR-03 itself is composed of a number of existing standards, including AES67 for uncompressed audio, RFC 4175, which defines uncompressed video over RTP, and SMPTE 2059 for clock synchronisation.
“ST 2022 is fine for simple OB and studio set-ups which are self-contained, but if you look into more complex broadcast centres, transmission and playout, then at that point you need to process video and audio separately, as well as handling subtitling information,” says Gearhouse Broadcast systems integration manager Martin Paskin. “You don’t need all that to be carried in a payload. Moving to active essence-only content we end up with better use of data.”
The compression question
ST 2110 will define uncompressed transport of 3G HD-SDI over IP. The question is at what point compression is necessary. Since last year the industry has witnessed the introduction of 25G interfaces with 40G, 50G and 100G interfaces coming.
“Suddenly the need for compression is less,” says Felstead, “though not gone entirely.”
AIMS adheres to the idea that compression at the heart of production should be minimised if not kept out altogether.
“Any compression causes latency and artefacts when ideally you want picture and audio as clean as possible,” says Felstead.
However, in contribution links or remote production where bandwidth is too costly, then having the option of compression is considered useful.
“Even if it is possible to do 4K uncompressed today you have to consider the cost [of bandwidth], especially when you have a perfectly working visually lossless codec,” says Moreau. “It’s true that 4K uncompressed will save the hassle of having to choose a codec but you have to consider the reality. We may soon have 200G switches but you have to weigh the cost of using them. Even a 40G interface may be too expensive today.”
One candidate, VC2, is already a published standard in SMPTE. Sony has submitted its LLVC codec, Fraunhofer HHI offers an ultra-low latency video encoder compliant with the H.264 baseline profile. Others back the TICO scheme developed by IntoPix. AIMS does not express a preference although individual members will (Grass Valley is a TICO adherent, for example, while SAM and Lawo favour VC2).
“We’re not far away from 4K 120p, which requires 24Gb/s, and even 8K, for which we would probably want to use a mezzanine codec,” says Meyer. “The prevailing idea is to use a lightweight codec of 6:1, 4:1 or 3:1, for example, as a way for customers to scale and future-proof workflows as video formats and framerates evolve.”
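The arithmetic behind those figures is straightforward. The sketch below computes raw active-picture bitrates for 10-bit 4:2:2 video (blanking and packet overhead excluded, which is why 4K 120p lands near 20Gb/s here against the roughly 24Gb/s of two 12G-SDI links):

```python
def video_gbps(width, height, fps, bits=10, samples_per_pixel=2.0):
    """Uncompressed active-picture bitrate in Gb/s.

    `samples_per_pixel` is 3.0 for 4:4:4 video, 2.0 for 4:2:2.
    """
    return width * height * fps * bits * samples_per_pixel / 1e9

for name, (w, h, fps) in {
    "1080p60 (3G)": (1920, 1080, 60),
    "2160p60 (4K)": (3840, 2160, 60),
    "2160p120":     (3840, 2160, 120),
}.items():
    raw = video_gbps(w, h, fps)
    print(f"{name}: {raw:.1f} Gb/s raw; at 4:1 -> {raw / 4:.1f} Gb/s")
```

At a 4:1 mezzanine ratio, even 4K 120p drops comfortably inside a 10G or 25G interface, which is the scaling argument Meyer is making.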
A clue can be found at the JPEG committee (a joint working group of the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC)). It has formally agreed to develop a low-latency, lightweight image coding system known as JPEG XS, for which the baseline is TICO. JPEG XS is being designed to support increasing resolutions (such as 8K) and frame rates in a cost-effective manner.
Lawo says it prefers one type of codec, but expects to see variations of it: UHD in low latency mode, all formats in high compression mode for contribution and remote production, plus very high compression in low latency for monitoring.
Registration and discovery
To scale systems it is necessary to have the ability to plug in a device and make it known to the IP network, and then have a common way for that device to describe all of the things it is capable of doing.
The work of the AMWA's Networked Media Open Specifications (NMOS) project is significant here. Inside that is the registration and discovery mechanism IS-04 which, de facto, all AIMS members agree to support and incorporate in their product roadmaps. It will not, though, be part of SMPTE 2110.

“It will become an essential part of an IP system, but it’s one of those strongly recommended practices rather than necessary as a standard,” says Meyer.
“IS-04 is an essential part of the IP system,” affirms Felstead. “You don’t want to plug all these devices into a router network or a network of routers and have to type in a bunch of IP addresses in any sort of manual sense to get them registered on your control system.
“What shouldn’t be standardised is the control system for devices or networks,” he argues. “When you get down to device level control it’s a value add on the part of the manufacturer. Its sophistication and how well it works are points of differentiation between suppliers. If we try to put a common control mechanism into a standard like SMPTE 2110 we run the risk of sanitising everything and it smothers innovation.”
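In practice, IS-04 registration is a simple REST exchange: a node POSTs a JSON description of itself to the registry, then heartbeats to stay listed. A heavily abbreviated Python sketch (the registry URL is hypothetical and the resource fields are trimmed far below what the spec's schema actually requires):

```python
import uuid
import requests

REGISTRY = "http://registry.example.com"  # hypothetical registry address
node_id = str(uuid.uuid4())

# Register this device's Node resource so the control layer can discover it.
requests.post(f"{REGISTRY}/x-nmos/registration/v1.2/resource", json={
    "type": "node",
    "data": {
        "id": node_id,
        "version": "1441700172:318426300",  # TAI seconds:nanoseconds
        "label": "Camera 1",
        "href": "http://192.168.0.10/x-nmos/node/v1.2/",
    },
})

# Then heartbeat every few seconds so the registry knows the node is alive.
requests.post(f"{REGISTRY}/x-nmos/registration/v1.2/health/nodes/{node_id}")
```

The point Felstead makes holds in the sketch: the node announces itself; nobody types its IP address into a control system by hand.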
Going forward
AMWA has begun exploring how NMOS will work in practice. Ultimately, this will lead to new specifications which will allow the industry to truly embrace data centre and cloud technologies and feel confident relying on another company’s platform, hardware and servers.
“The industry needs to look at bringing 4K into the conversation and how we carry and leverage higher bandwidth interfaces. How does 2110 map to 25G, and how do we move forward to incorporate 4K and 8K?” says Goyal.
Sony notes that some aspects of live production – such as the multiviewer – are on the verge of virtualisation, while vision mixers will take more time purely because of the amount of data they must handle.
However, obtaining true virtualisation and remote production is arguably more of a battle to find bandwidth and processing power than it will be about standards.
“2110 is not a great determining step toward virtualisation and the cloud for production, but it is a great determining step for taking SDI and putting it over IP,” says Felstead.
A quick fix?
While Grass Valley will be among vendors announcing 2110 compatible product at NAB, many manufacturers are waiting for the standards to settle before they put in an IP core.
“While we can build a full 4K IP studio today it will take up to five years for systems integrators to be able to build one with the same ease and effectiveness as they can with SDI,” says Paskin.
The move to IP had been grinding along too slowly for an industry pent up with demand, such that last year a number of US-based OB firms, including NEP Group and Game Creek Video, called on manufacturers to deliver 12G SDI kit as a viable alternative to 4x3G for transporting 4K.
“This was [conceived] as a temporary solution but it’s now actually coming to the fore,” says Paskin. “People are used to SDI and are able to work with it quickly.”
FOR-A is among the vendors to have brought out 12G equipment. Elsewhere, IntoPix has worked with Japanese developer Village Island to launch the VICO-4 encoder, which takes in quad 3G-SDI and converts it into a single 3G-SDI to reduce cabling.