Tuesday, 18 May 2021

Looks Like We’ve Arrived at a Golden Age of Documentaries

NAB Amplify

The golden age of drama has shifted to a golden era for documentaries. Across every platform, nonfiction storytelling has never been in higher demand.

https://amplify.nabshow.com/articles/did-we-somehow-arrive-at-a-golden-age-of-documentaries/

“We are in a golden age of documentary filmmaking,” says Dan Coogan, producer of the 2018 Best Documentary Oscar winner Icarus. “There has never been as great storytelling in nonfiction film as there is today.”

That’s great news for documentary filmmakers, who, after decades of funding struggles, are seeing the checkbooks open as streamers fuel demand.

The sub-genre of natural history was arguably the first to benefit from streaming platform interest. Blue-chip shows of the caliber of the Sir David Attenborough-fronted The Blue Planet are expensive enterprises years in the making and consequently the preserve of specialists at the BBC Natural History Unit, Discovery and National Geographic.

Competition among streaming platforms including Disney+ (home of Nat Geo) and Discovery+ means wildlife docs have never been in higher demand.

Netflix’s My Octopus Teacher, about an eight-limbed creature and her human companion, won the Oscar for best documentary this year. The film became a sleeper hit, earning a Directors Guild of America nomination, then an Oscar nod, a BAFTA award and was named top documentary at the Producers Guild of America Awards.

“Audiences have reappraised the documentary genre,” says Lia Devlin, head of distribution at Altitude Films, whose slate includes David Attenborough: A Life on Our Planet. She told Variety, “They are treated very much now as feature films and a solid entertainment format.”

Joe Berlinger, who has directed Metallica: Some Kind of Monster and Conversations With a Killer: The Ted Bundy Tapes, says that until Netflix came along, doc filmmakers had spent decades knocking on the door of the entertainment industry to no avail.

One reason for the change in fortunes is that doc makers are adopting techniques from scripted movies. “Scripted filmmakers and unscripted filmmakers, for the last couple of decades, have been borrowing ideas from each other and therefore, the documentary has become much more creative,” Berlinger said.

Look no further than this year’s awards darling Nomadland for evidence of the intertwining of fact with fiction. Chloé Zhao could have told the story of America’s roving RV community as a straight documentary but felt she could get to the heart of the matter by blending a narrative with professional actors. There’s little doubt the story has reached a far wider audience as a result, which is itself overwhelming evidence of the audience’s appetite for true-life stories.

Similarly, David France’s Welcome to Chechnya, about the horrific experience of the LGBTQ community in Chechnya, used VFX techniques more commonly associated with Marvel blockbusters to keep the identity of its witnesses secret while retaining the integrity of their accounts. It was the first documentary ever longlisted for an Academy Award for visual effects.

For documentary makers, there is a realization that it is possible to make money out of feature docs, partly because of the growing number of platforms. “People are coming into the market because there are more sources of funding,” reckons Mandy Chang, commissioning editor for BBC Storyville, in Variety. “It is just about understanding how to tap into it.”

Sundance is no longer just a venue for studios to snap up indie features. Festivals like this are a showcase for docs too. Time, Garrett Bradley’s film about Sibil Fox Richardson fighting for the release from prison of her husband, was acquired by Amazon Studios after Sundance and went on to make the Oscars’ documentary feature shortlist.

Docs picked up by studios benefit from marketing punch, and the right story can break out virally. The New York Times Presents: Framing Britney Spears stoked a wave of international publicity and awareness during its exclusive run on FX and Hulu.

That said, the biggest feature tentpoles dwarf the takings of documentary films at the box office. Avatar, Avengers: Infinity War, Star Wars: The Force Awakens and Black Panther each grossed well over $1 billion worldwide. The top unscripted film released in cinemas is Michael Moore’s Fahrenheit 9/11 at $119 million, with March of the Penguins second at $77 million (per Box Office Mojo), although the ratio of budget to box office is likely to favor the documentary.

The eye-opening $25 million check reportedly paid by Apple TV+ for Billie Eilish: The World’s a Little Blurry is an anomaly. That fee went to Eilish herself, while the production, by Eilish’s record label Interscope, had an estimated budget of between $1 million and $2 million.

 

Monday, 17 May 2021

Computers Can See… But They Don’t Have Vision

NAB Amplify

AI engines are getting pretty good at accurate image recognition but fail spectacularly in understanding what it is they are looking at. An approach used for natural language processing could address that.

https://amplify.nabshow.com/articles/computers-can-see-but-they-dont-have-vision/

In a shoot-out between humans and the AI smarts of Amazon AWS Rekognition, Google Vision, IBM Watson, and Microsoft Azure Computer Vision, the machines came out on top.

On a pure accuracy basis, Amazon, Google and Microsoft scored higher than human tagging for tags with greater than 90% confidence in a test completed by digital consultancy Perficient Digital, as reported at ZDNet.

However, in a machines versus humans rematch, the engine-generated descriptions matched up poorly with the way that we would describe the image. In other words, the study concluded, there is a clear difference between a tag being accurate and what a human would use to describe an image.

A couple of years on, Steve Teig, CEO of edge-AI accelerator chip company Perceive, says advances in natural language processing (NLP) techniques can be applied to computer vision to give machines a better understanding of what they are seeing.

So-called attention-based neural network techniques, which are designed to mimic cognitive processes by giving an artificial neural network an idea of history or context, could be applied to image processing.

In NLP, the Attention mechanism looks at an input sequence, such as a sentence, and decides after each piece of data in the sequence (a syllable or word) which other parts of the sequence are relevant. This is similar to how you are reading this article: Your brain holds certain words in memory even as it focuses on each new word you’re reading, because the words you’ve already read combined with the word you’re reading right now lend valuable context that helps you understand the text.

Applying the same concept to a still image (rather than a temporal sequence such as a video) is less obvious, but Teig says Attention can be used in a spatial context here. Syllables or words would be analogous to patches of the image.
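To make the analogy concrete, here is a minimal, self-contained sketch of that idea (not Teig’s or Perceive’s implementation): a toy image is cut into patches, each patch is projected to a small embedding, and standard scaled dot-product attention lets every patch weigh its relevance to every other patch. The patch size, embedding width and random projections are arbitrary placeholders.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each query (patch) attends to all keys (patches) and mixes the values."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                      # patch-to-patch relevance
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # softmax over the other patches
    return weights @ V, weights

# Toy 64x64 "image" split into 8x8 patches, each flattened to a vector.
rng = np.random.default_rng(0)
image = rng.random((64, 64))
p = 8
patches = image.reshape(64 // p, p, 64 // p, p).transpose(0, 2, 1, 3).reshape(-1, p * p)

d_model = 16                                             # size of each patch embedding
W_q, W_k, W_v = (rng.normal(size=(p * p, d_model)) for _ in range(3))
out, attn = scaled_dot_product_attention(patches @ W_q, patches @ W_k, patches @ W_v)
print(out.shape, attn.shape)                             # (64, 16) embeddings, (64, 64) attention map
```

In a trained network those projections are learned, so the attention map ends up expressing which regions of the image matter to each other, the kind of aggregation into more useful concepts that Teig describes below.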

As outlined by Teig using an example of computer vision applied to an image of a dog, “There’s a brown pixel next to a grey pixel, next to…” is “a terrible description of what’s going on in the picture,” as opposed to “There is a dog in the picture.”

He says new techniques help an AI “describe the pieces of the image in semantic terms. It can then aggregate those into more useful concepts for downstream reasoning.”

Interviewed by EE Times, Teig said, “I think there’s a lot of room to advance here, both from a theory and software point of view and from a hardware point of view, when one doesn’t have to bludgeon the data with gigantic matrices, which I very much doubt your brain is doing. There’s so much that can be filtered out in context without having to compare it to everything else.”

This matters because current NLP processing from the likes of Google is computationally intensive. Deep learning language models like Generative Pre-trained Transformer 3 (GPT-3) require 175 billion parameters, amounting to hundreds of gigabytes of model data.

If you want to do this at the network Edge, to fuel next-gen applications over 5G, then think again.

“It’s like… I’m going to ask you a trillion questions in order to understand what you’ve just said,” Teig says. “Maybe it can’t be done in 20,000 or two million, but a trillion — get out of here! The flaw isn’t that we have a small [processor at the Edge]; the flaw there is that having 175 billion parameters means you did something really wrong.”

That said, this is all evolving very fast. He thinks that reducing Attention-based networks’ parameter count, and representing them efficiently, could bring attention-based embedded vision to Edge devices soon.

 


Monitoring OTT: The new challenges monitoring ‘all IP’ brings

copywritten for TAG Video Systems 

Performing automated analysis of video and data on thousands of signals while keeping costs down requires sophisticated Adaptive Monitoring, explains TAG Video Systems

p50   http://europe.nxtbook.com/nxteu/lesommet/inbroadcast_202105/index.php?startid=50#/p/50  

Satellite, cable, and telco operators are increasingly using OTT delivery to supplement and even replace traditional media delivery methods, but monitoring doesn’t get any easier. To maintain a high quality of experience for their customers, operators need a way to monitor hundreds, sometimes thousands, of channels without compromising real-time error detection. In most cases, the immense scale of their service offerings makes continual visual monitoring of all streams impossible.

In contrast, SMPTE ST 2110-based IP workflows are relatively straightforward. ST 2110 enables the transport of independent streams of video, audio and metadata, with as many as you want associated with a program. Aside from monitoring the fundamentals of IP transport such as jitter and packet loss, ST 2110 requires the raw essence of each component to be synchronized. Taking just one example, audio timing is vital: for lip sync, of course, but imagine doing a live mix from The Proms, when audio from every mic in the orchestra pit has to be phased accurately.

“More streams (essences) mean more monitoring points,” says Alain Beauvais, Director of Service and Support, TAG Video Systems. “Plus, we also have to keep an eye on the Precision Time Protocol to synchronize clocks in the network so everything stays in time and locked.”

Paul Briscoe, Chief Architect, TAG Video Systems, says, “It is more complex than an SDI workflow, but because ST 2110 mirrors legacy wiring the leap can be made. OTT, however, is an order of magnitude more complex and a far bigger departure from the monitoring broadcast engineers are used to.”

OTT complexity

OTT delivery is built on a complicated processing infrastructure with many moving parts. The distributed infrastructure supporting the end-to-end delivery chain often includes third-party systems and solutions. Scaling of resources for short-term events poses further challenges, including management of associated peaks in monitoring requirements.

The capability to encode with Adaptive Bit Rates (ABR) means the operator can send more than one variant of the stream to optimise the user experience. This means monitoring, detection, and alarms are needed all along the OTT delivery chain, from the camera to the consumer’s TV, tablet, or smartphone.
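To see why ABR multiplies the monitoring load, consider a single HLS channel: every rung of the bitrate ladder is listed as its own variant in the master playlist, and each variant is a separate stream to probe. The sketch below is a generic illustration using the standard #EXT-X-STREAM-INF tag with an invented playlist; it is not TAG’s software.

```python
import re

# Invented HLS master playlist for one channel: each EXT-X-STREAM-INF entry
# is an ABR variant, and each variant needs its own monitoring point.
MASTER_PLAYLIST = """#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
low/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2500000,RESOLUTION=1280x720
mid/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=6000000,RESOLUTION=1920x1080
high/index.m3u8
"""

def abr_variants(playlist_text):
    """Yield (bandwidth_bps, resolution, uri) for each variant stream."""
    lines = playlist_text.strip().splitlines()
    for i, line in enumerate(lines):
        if line.startswith("#EXT-X-STREAM-INF"):
            bandwidth = int(re.search(r"BANDWIDTH=(\d+)", line).group(1))
            resolution = re.search(r"RESOLUTION=([\dx]+)", line)
            yield bandwidth, resolution.group(1) if resolution else "?", lines[i + 1]

variants = list(abr_variants(MASTER_PLAYLIST))
for bw, res, uri in variants:
    print(f"monitor {uri}: {res} at {bw / 1e6:.1f} Mbps")
print(f"{len(variants)} monitoring points for one channel")
```

Multiply those variants by every channel, packaging format and CDN path and the scale of the monitoring problem becomes clear.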

Because it often involves different formats and streams, various CDN and cloud vendors, and even multiple workflows within a single operation, each OTT implementation is unique, with unique monitoring requirements. No two deployments demand the same monitoring solution. Added to which, there is the perennial challenge of keeping costs low and service quality high.

“Then what happens when the industry advances to the point where they want to enable a live streamed sports event and give the viewer additional live camera angles to select from?” says Beauvais. “Each of those versions now requires monitoring to meet different bandwidths at different bitrates.

“Your operator can receive an alarm and won’t know where it is or what it means or how to solve it. Everything can get out of control. Whenever video or data is manipulated, whether it's being moved between facilities or run through a process such as encoding or transcoding, error detection is essential.”

Bringing back control

One option for operators is to monitor the video signal across many different points, but this approach can rapidly soar in expense, particularly as the channel count grows. In conventional monitoring deployments, the cost of licenses and compute power for full-time monitoring would place a ceiling on the number of points that could be monitored.

Fortunately, help is at hand. As operational workflows across the media industry have evolved to resemble true IP or IT workflows, the presence of errors across the delivery chain can be determined with sophisticated software solutions that minimize the need for human intervention.

“If you know any operator who can stay awake constantly checking a thousand monitors on a screen, sign them up for life,” says Briscoe. “The fact is, operators don’t need to look at each and every stream all the time. They just need to make sure all streams are being probed and monitored and that problematic streams are automatically brought to their attention.”

Using thresholds set by the operator or triggered by an API command from external devices monitoring the overall ecosystem, the monitoring software should automatically ensure optimal monitoring of all streams at all times and full visualization of errors whenever there is an alarm.

Adaptive Monitoring

Here’s how it works. In full monitoring mode, the input source is fully decoded and can be displayed in real time in a tile of the multiviewer’s mosaic output while being continually probed, analysed, and alarmed for errors. This is the mode typically used to keep an eye on premium or popular channels, as well as any problematic channels. Each and every frame of the video is decoded to create a smooth and continuous picture, and this requires a great deal of CPU resources. However, many aspects of monitoring simply don’t require real-time video, and full-time decoding isn’t necessary. If, for example, the picture goes to black, a second or a fraction of a second delay isn’t catastrophic.

“When operators begin to use Adaptive Monitoring, they see an immediate impact on CPU usage,” says Briscoe. “Instead of dedicating 100% of CPU power for full monitoring at one point, operators can opt for ‘light’ or ‘extra-light’ monitoring and use a fraction of the resources. They have the agility to balance CPU resources against their need to monitor streams in real time.”

Because 80% or 90% of the information about the nature of the stream can be gathered without visual analysis, errors of this type don’t call for full decoding. That’s where light and extra-light monitoring modes offer a better, more efficient, and more economical option. In these modes, the input source is continually probed and analyzed for errors, but video is not fully decoded and cannot be displayed in real time in a tile of the mosaic output.

Operators can keep their eyes on the most important live streams, knowing that other streams are being continuously monitored for any issues. The rule is ‘monitoring by exception’. If one of those video streams violates predetermined thresholds, it can be immediately decoded and routed, along with associated metadata, to the platform’s integrated multiviewer mosaic for closer inspection and troubleshooting.
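As a rough illustration of that ‘monitoring by exception’ rule, and emphatically not TAG’s actual API, the toy state machine below probes every stream in a light mode and escalates a stream to full decode only when an operator-set threshold is breached. The mode names, probe readings and thresholds are all invented for the sketch.

```python
from dataclasses import dataclass

LIGHT, FULL = "light", "full"   # invented labels for the monitoring modes

@dataclass
class StreamMonitor:
    name: str
    mode: str = LIGHT

    def evaluate(self, readings, thresholds):
        """Compare probe readings against operator thresholds; escalate on breach."""
        breached = [metric for metric, limit in thresholds.items()
                    if readings.get(metric, 0) > limit]
        self.mode = FULL if breached else LIGHT
        return breached

thresholds = {"packet_loss_pct": 1.0, "black_frames": 0, "silence_secs": 5}
monitors = [StreamMonitor(f"channel-{i}") for i in range(1, 4)]
latest_probes = [
    {"packet_loss_pct": 0.2},                      # healthy: stays in light mode
    {"packet_loss_pct": 3.5, "black_frames": 12},  # breached: escalated to full decode
    {"silence_secs": 2},                           # within limits: stays in light mode
]

for monitor, probe in zip(monitors, latest_probes):
    alarms = monitor.evaluate(probe, thresholds)
    status = f"{monitor.name}: mode={monitor.mode}"
    print(status + (f", alarms={alarms}" if alarms else ""))
```

In a real deployment the escalation would also route the decoded stream and its metadata to a multiviewer tile, as described above; the sketch only shows the threshold logic.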

“With the freedom to implement different monitoring modes within a single deployment, operators can take advantage of automated and adaptive resource allocation to get the most out of their available server resources,” says Briscoe. “While Adaptive Monitoring is invaluable in optimizing monitoring using on-premises hardware, it yields even greater benefits for cloud-based operations.”

Moving away from physical hardware, operators no longer need to scale their equipment and infrastructure to support maximum channel capacity—or leave hardware unused during non-peak times. Whether processing takes place on-premises or in the cloud, Adaptive Monitoring ensures that if the system detects a problem on a channel, that channel is automatically switched to full monitoring mode.

The right probing, monitoring, and multiviewing software, combined with Adaptive Monitoring and cloud-based processing resources, allows operators to move toward a more economical pay-per-use model in which they can scale instances to match their needs.


 

Immersive real-time production

InBroadcast

Affordable virtual production techniques and technologies are enabling augmented and mixed reality presentations in broadcast  

InBroadcast p22 http://europe.nxtbook.com/nxteu/lesommet/inbroadcast_202105/index.php?startid=22#/p/22

The pandemic has prevented many creative directors from making content in the traditional way, forcing them to explore alternatives. This has coincided with a coming of age for the next generation of virtual set technology, with increasingly powerful photorealistic real-time rendering.  The key drivers towards adoption are space, budget, and participant location. 

“Increased virtual set utilisation has taken on two forms,” says Liam Hayter, Senior Solutions Architect, NewTek. “The first is bringing remote participants via chromakey into a virtual environment, especially where presenters and guests alike are unable to travel to a studio. The second is for a set extension from smaller studio spaces, where budget and space are at a premium but the look and feel of a full studio environment is desired.”

He adds, “AR/MR particularly come into their own for remote or at-home audiences, where a live event space can equally be expanded. Whether this is broadcast or streamed is immaterial. Combining camera and robotics tracking data with on-stage or on-set video walls and virtual set extensions provides visually engaging, dynamic content.”

Demand for virtual sets is soaring across all markets, including broadcast, corporate communications, advertising and events. Using LED screens rather than greenscreen enables realistic reflections and refractions that actors and camera operators can easily see and explore.

“Allowing the performer or presenter to see what’s happening around them instead of just a blank screen located behind them, creates a more natural flow to a production,” says Lanz Short, Technical Solutions Manager, disguise. “[LED volumes] also remove the need for chroma keying, which can be challenging when working in a small studio, and people can wear any colour clothing when on set.” 

disguise’s xR workflow has undergone intensive field testing and refinement in collaboration with creative and technical partners. The recent public release of the software is claimed by disguise to “significantly reduce the barrier of entry to cutting-edge productions, unlocking the power to create any world, from one location.”

The r18 release will also debut a cluster rendering feature for delivery of photorealistic scenes to large scale content displays and support for the ACES colour management standard.  “It unlocks the limitations of a virtual production studio by scaling out real-time content up to an unlimited capacity,” Short says. 

Disguise technology has been incorporated by White Light for its SmartStage system. When the pandemic hit, White Light saw a surge in demand for corporate and education use. It has managed more than 500 hours of live corporate presentations from a SmartStage at the Mermaid centre in Blackfriars since spring 2020.

Technical Solutions Director Andy Hook says, “With this technology you can broadcast to a remote audience, globally, in a more compelling way than traditional video conferencing. Everyone is turning into a broadcaster.”

White Light has made a broadcast version for Eurosport, including for this summer’s Tokyo Olympics. This features ‘teleportation’, in which athletes are filmed live at the venue and displayed in full-body 3D as if standing next to the presenter in the virtual studio. The technique has also attracted corporates keen to ‘teleport’ CEOs and keynote speakers into the virtual space.

Enabling this are remote-controlled pan-tilt-zoom cameras (and ‘virtual’ PTZ cams). The application of tracking data to PTZ cams looks set to democratise the use of AR for broadcast.

Panasonic’s virtual production studio concept pulls together a number of leading technologies, including its own PTZs. The AW-UE150 PRO PTZ remote camera is claimed to be the first robotic camera on the market to provide position data notification (PTZF). Its ability to be used in virtual sets is only one of many features that make it an ideal choice for robocam needs. It offers a wide angle of view, 4K/60p video, and versatile video outputs.

To be able to incorporate realistic VR or AR studio sets in your live productions, accurate camera positioning data is imperative. Panasonic says its AW-UE150 PRO PTZ camera features the FreeD protocol, which provides the option to output PTZ and Iris information via Serial (RS 422) and IP (UDP) directly to your tracking system. FreeD is a protocol that sends camera positioning data directly from the camera to a virtual reality production system and is supported by vendors with VR/AR and virtual set solutions including Brainstorm eStudio, The Future Group Pixotope, Vizrt Viz Virtual Studio, Ross Xpression and Zero Density.   

“It enables productions to incorporate realistic virtual studio sets and elements into their live video workflow without the need for additional sensors or encoders,” Panasonic says.
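For a feel of what camera positioning data over IP looks like, the sketch below listens for FreeD packets on UDP and unpacks the widely documented 29-byte ‘Type D1’ message (pan, tilt and roll as signed 24-bit values with 15 fractional bits, position in 1/64 mm units). The port number and the exact byte layout should be treated as assumptions to verify against the camera or tracking vendor’s documentation; this is illustrative, not a reference parser.

```python
import socket

def s24(b: bytes) -> int:
    """Signed 24-bit big-endian integer from three bytes."""
    v = int.from_bytes(b, "big")
    return v - (1 << 24) if v & 0x800000 else v

def parse_freed_d1(pkt: bytes):
    """Unpack a FreeD 'Type D1' packet (assumed 29-byte layout; verify against docs)."""
    if len(pkt) < 29 or pkt[0] != 0xD1:
        return None
    return {
        "camera_id": pkt[1],
        "pan_deg":  s24(pkt[2:5])  / 32768.0,   # 15 fractional bits -> degrees
        "tilt_deg": s24(pkt[5:8])  / 32768.0,
        "roll_deg": s24(pkt[8:11]) / 32768.0,
        "x_mm": s24(pkt[11:14]) / 64.0,          # position in 1/64 mm units
        "y_mm": s24(pkt[14:17]) / 64.0,
        "z_mm": s24(pkt[17:20]) / 64.0,
        "zoom_raw":  int.from_bytes(pkt[20:23], "big"),   # lens-specific raw counts
        "focus_raw": int.from_bytes(pkt[23:26], "big"),
    }

if __name__ == "__main__":
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 40000))   # port is an assumption; set to match the camera config
    while True:
        data, sender = sock.recvfrom(64)
        fields = parse_freed_d1(data)
        if fields:
            print(sender[0], fields)
```

A virtual set engine consumes exactly this kind of per-frame pan/tilt/zoom data to keep the rendered background locked to the physical camera move.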

Many vendors incorporate Unreal Engine into their virtual set solutions, but Zero Density claims to be the first. That was back in 2016, when ZD released its disruptive product Reality Engine, a real-time node-based compositor, and Reality Keyer, its proprietary keying technology.

Since then, Reality has transformed broadcast and powered numerous live events, esports, commercials and episodic TV. A flagship user is Turkish culture and arts channel TRT2, which went live with the system in 2019. Switching from one virtual design to another with an entirely different light setup takes less than a minute, as each program can be saved as a graph to be loaded when needed.

While Unreal Engine is the renderer, Reality’s live production toolset offers photorealism, sophistication and ease of use for virtual studio and AR. The software’s pipeline is designed to achieve the perfect blend of the virtual world with the physical, and is effective at handling tracking data, keying and intuitive control tools, according to ZD. 

Released in 2019, StypeLand is a rendering solution from stYpe now gaining traction among broadcasters and film companies as it builds on the immense rendering capabilities of Unreal Engine and adds a framework for live work and post production. 


In 2020, stYpe introduced GreenKiller, its proprietary chroma keyer, which seems to be the cause of most of the buzz around StypeLand. GreenKiller excels at preserving natural shadows, reflections and hair detail. Another major version of GreenKiller arrived in March this year, which according to the company “makes GreenKiller and StypeLand one of the most wanted green screen workflows in the industry.”

It elaborates, “StypeLand is now no longer just a plugin for Unreal for using stYpe products, but even the clients using other camera tracking products are using StypeLand and GreenKiller in their workflows.”

All of the components typically required in a broadcast setup, such as camera tracking, AR, VR, LED wall control over nDisplay, set extensions, redundancy engines with disaster recovery, and centralized control of all engines and scenes through a single PC, tablet or mobile phone, are integral to the StypeLand workflow.

Brainstorm is a leading manufacturer of real-time 3D graphics and virtual studio solutions. Its InfinitySet provides AR and virtual set applications in combination with technology like PTZ cameras. On that note, InfinitySet can receive the video and tracking information from Sony BRC X400 and X1000 cameras and render the virtual scene or the AR objects in real-time, using photorealistic rendering and Unreal Engine. 

Florida-based MIG, for example, has been using a Brainstorm system for live streamed, remotely produced events this past year. Its configuration is composed of an InfinitySet +Track with Unreal Engine running on an HP Z4 workstation and a camera-tracked jib with Stype RedSpy, plus an Ultimatte 12 chromakeyer. A second phase of the installation includes a second InfinitySet workstation, allowing MIG to take the system on the road and produce virtual sets and productions on location.

Brainstorm is also coordinating AdMiRe, a multi-industry R&D project to develop and validate mixed reality solutions for TV audiences.

Francisco Ibáñez, R&D Project Manager at Brainstorm, explains, “Currently, TV audiences can only interact with the programmes they are watching through social networks or broadband hybrid TV. AdMiRe will develop a solution to enable home audiences to virtually join the TV show they are watching and interact with presenters and guests in the television studio. The solution will also provide content creators with tools that radically improve the integration of the presenter within hyper-realistic virtual environments and facilitate their interaction with synthetic elements.” 

Aximmetry is an all-in-one graphics solution for virtual sets including its own chroma keyer. The Hungarian developer says there’s no need to buy separate modules and extensions for 2D graphics, real-time LED wall control, video wall display, virtual product placement, projection or mixed reality projects since its software includes them all. 

“Aximmetry’s highly flexible interactive graphics programming interface enables users to create broadcast-quality content even with just one fixed camera and a gamer’s PC, by constructing interactive scenes and effects using virtual lights, virtual camera movements and AR,” the firm says.

Opening and running Unreal Engine projects within Aximmetry is also supported; the UE4 rendering engine is embedded into Aximmetry’s own user interface.

Content created in Aximmetry can be live-streamed directly to YouTube or Facebook. It also offers solutions for handling real-time audience participation via second screen devices. For more complex productions a Broadcast Edition of the software can integrate any camera tracking device, is capable of receiving depth information and offers unlimited SDI ports and 4K-SDI. 

Users include Twenty Studios in Stockholm, HÍR TV in Hungary and central Europe’s Tematic Media Group. 

Lisbon-headquartered wTVision is a real-time graphics provider whose technology can be used in conjunction with a virtual set or for on-screen graphics tied to a sports field of play. In the former camp, it makes it possible for news producers to integrate official election data in real time. For the recent parliamentary elections in El Salvador, journalists at Canal 10 were able to show results as they were coming in with AR graphics or control relevant information with the help of an interactive touchscreen.

In the latter category, wTVision joined forces with Mediapro Mexico to deliver tied-to-the-field virtual graphics for the first matches of the Scotiabank Concacaf League in Costa Rica. The company’s AR³ Football software includes virtual graphics for team and sponsor logos, broadcast during live games and integrated into the field of play.

wTVision also designs 360-degree virtual sets, mixing them with different virtual graphics and live videos inputs to create visual impact. 

Reckeen makes video production and streaming technologies with Reckeen 3D Studio its newest project. It is a multi-channel mixer that combines four video sources with computer graphics and media. Each of these video sources can be pre-processed with built-in chromatic keys and then placed in a virtual 3D scene, creating a virtual studio or an advanced composition combining CG and video. The system works in two independent modes – Reckeen 3D and Reckeen Lite – enabling users to customize the production process. 

“The Reckeen 3D Studio package contains all the necessary modules to produce 3D TV content along with all the benefits of today’s real-time generating and editing of 3D graphics,” the company explains. “The main advantage of the 3D package is a possibility to freely operate four independent virtual cameras. You can set the camera at any angle, at any distance from the studio’s objects, while maintaining appropriate positioning of the on-air talent via four independent chromatic keys.” 

WASP3D offers a broad range of real-time 3D broadcast graphics solutions for virtual sets, eSports, elections and news. Its broadcast TV graphics workflow is designed to streamline production and enhance visual quality for publishing across all media platforms. 

WASP3D points out that multiple virtual cameras, whether static or animated, can be set up within the scene and operated by using its software, to save the production money on manual camera operation. Its software can help convert limited size spaces “to an infinite 360-degree HD virtual set environment” by creating “high polygon realistic virtual sets” designed to capture minute details like mirror reflections, shadows and cloth movement. 

Multi-video window simulations can be integrated to add guests from virtual meeting applications like Zoom and Microsoft Teams. Additional live camera inputs are enabled through NDI integration.

 

Saturday, 15 May 2021

Transform your home into an oasis for health and well-being

Copywritten for Logical Solutions

Nothing has been more important over the past year than our health and well-being. Never before have so many of us concentrated on monitoring our state of health or been on alert for any changes in our surroundings. Individually, among our families, and collectively, our awareness of responsibility for taking care of ourselves and our environment has been heightened – for good.

https://logicalsolutions-av.co.uk/2021/04/01/transform-your-home/

That goes for AV integrators too. Alongside the ever-present focus on lighting, audio, video, and security there is a new demand to augment the Smart Home with electronic health technologies.

Even as we emerge into the light from lockdown, we are likely to spend more of our time at home than before the pandemic. For many of us, the future of the work space will be a hybrid one spanning the office and the home office. We owe it to ourselves to balance our working and our family lives by tuning our habitat to enhance our wellness.

That’s why being able to provide solutions which can improve the mental and physical wellbeing of homeowners will be integral to the thinking of Smart Home professionals going forward.

The particular technology I wanted to highlight in this post deals directly with air purification. That’s front of mind, I feel, when all of us are thinking more about what is in the air we breathe.

DARWIN, made by Delos, a tech company headquartered in New York, is a ‘wellness intelligence platform’. It combines a suite of software, algorithms, and integrated hardware to filter contaminants and pollutants from the air, expel odours and remove toxins, pollen, harmful chemicals and pathogens.

The concentrations of many pollutants indoors exceed those outdoors, with both short- and long-term health impacts, and DARWIN combats this too, with air quality readings recorded in real time. It’s part of a wider set of wellness sensors under the DARWIN brand that can also monitor and control water contamination and circadian rhythm lighting.

What’s interesting is that Delos has now partnered with smart home experts Crestron to integrate DARWIN into Crestron controllers. For instance, when DARWIN detects an issue with air quality, it communicates with a Crestron thermostat to trigger a fan speed adjustment, moving larger amounts of air through the home and remediating the problem via the air purification system.

You can see all the key data about air quality in each room, such as CO2 levels, on a touchpanel or mobile app. Armed with this, homeowners can make important decisions about the air quality where they live to ultimately improve their health, their energy levels and sleep patterns.

Air quality technologies are a gamechanger for individuals with respiratory diseases. Personally, the benefits of DARWIN resonate with me because I suffered from asthma and bronchitis as a child; having this technology back then may have made my conditions easier to manage. Unfortunately, DARWIN is currently only available in North America, but I hope either it or a similar offering will be available in Europe soon.

Thursday, 13 May 2021

5 minutes with top colourist Walt Biljan – REDLAB

 Sohonet

We had the pleasure of sitting down with leading colourist and partner at REDLAB, Walt Biljan, and attempted to fit in everything from working on Schitt’s Creek to dealing with Covid and performing colour grades and VFX finishing reviews remotely using ClearView Flex.

https://www.sohonet.com/our-resources/blogs/5-minutes-with-top-colourist-walt-biljan-redlab/

One of the great pleasures of working at REDLAB is hosting creative talent like Eugene Levy, Catherine Reitman and her father, the acclaimed filmmaker Ivan Reitman. Producers, directors and showrunners like them choose to post at Toronto’s foremost high-end boutique in part because the facility’s owners are also its senior artists.

“There’s a beautiful vibe to the building when we have filmmakers of this calibre in our corridors and suites,” says Walt Biljan, colourist and partner at the Toronto shop. “It has been tremendously sad this past year not to have had that opportunity – but hopefully things are turning a corner now.” 

Formed in 2008 and still privately owned by a group of Canadian filmmakers and friends, REDLAB offers three elite grading studios, full audio post and VFX packaging for film, TV and commercials.

In Toronto, it competes directly with Deluxe and Technicolor and has posted multiple seasons of Emmy®-winning comedy Schitt’s Creek, CBC crime drama Coroner, popular CBC comedy Workin’ Moms (created by and starring Catherine Reitman) as well as feature films like Wander, Resident Evil: Retribution and Lucky Day. REDLAB’s thriving commercials division works for clients including Gatorade, Google and Starbucks.

Covid-19 response

When the pandemic hit, the majority of its 40-strong staff, including all technicians, were asked to work from home. For a while, Biljan and his fellow partner-artists worked isolated in their suites, receiving and sending out media on drives.

“That was unsustainable until we found ClearView Flex,” Biljan says. “To be honest, we’d trialled remote solutions over the last decade without any real success. Everything we tried struggled with latency and communication or poor-quality images and bad compression.”

They first came across ClearView Flex at NAB in 2019 and tried it for the first time last April. 

“It was a revelation not least because our clients were surprised and happy about how good the remote experience was. We had some DPs who were sceptical about any kind of remote review but there were no doubts once we’d set them up with ClearView.”

He explains, “While I’m colour correcting, they are seeing changes on their screen in real-time and we are having an open dialogue on Zoom. They see the output on their monitor. It’s a really efficient workflow.”

Calibration checks

To ensure the sessions run smoothly, Biljan arranges a short call with the client ahead of time to calibrate the remote monitor. He sends the remote monitor a signal that returns a report of what the client is seeing at the other end and he uses calibration charts for fine tuning.

“I’m ascertaining that they can see differences in the blacks or that they can see whites,” he says. “I don’t mind if they see the whites slightly warmer or cooler – our visual perception is after all subjective – but I do care that they are seeing the right contrast levels.

“I am very familiar with what the correct calibration should be on a Mac so if they are using an Apple device such as putting the ClearView signal through their home TV display via Apple TV, we can be confident in what we are both seeing. Alternatively, I’d recommend they use one of the latest LG smart TVs such as the LG C8 or C9 series which is what we use as client monitors in our suite. If they have a different model TV, that’s okay since I can still talk them through calibration. 

“If they want more solid science then we can courier out an iPad Pro which we have precisely calibrated and checked at REDLAB.”

During lockdown, REDLAB performed colour grading sessions and VFX finishing on the latest season of Workin’ Moms and Coroner, as well as the indie feature Wander starring Tommy Lee Jones and directed by April Mullen.

Biljan also graded a season of programming for YesChef, an online channel which delivers cookery masterclasses from culinary experts like Nancy Silverton and Francis Mallmann. With the director and producers of the show located in Israel, in all probability the grading sessions would have been remote even without Covid travel restrictions.

Remote is here to stay

“I think the world was changing before Covid and that the crisis has sped it up,” Biljan says. “As we look to the future, I can see Apple, Amazon and Netflix execs preferring to review content remotely far more than previously. They don’t have the time to travel, and nor do they need to when the remote experience is so good. 

“A lot of commercials sessions were already becoming remote and I think ad agencies will want to remain remote or working from home. Feature film creatives are always squeezed for time. DPs and producers are travelling a lot or overseas, and often the only chance they have to watch the project is as a posted file, but now they have the ability to get involved via a live stream.”

Adapting to remote life is generally easier when there’s already a good relationship between client and artist. It’s not so much the session itself but the interpersonal touches that create a strong working partnership which can go missing in enforced lockdowns.

“With new clients working remotely is more challenging but I can’t say we’ve had any problems,” Biljan says. “It’s just harder to try to get to know each other on the phone. I can’t take them to dinner afterwards or hang out and talk film which are the type of things I love to do when meeting new people.

“I think everyone is stir crazy working from home and as things open up there will be a big bounce back to suites. It is slightly more effective to have people in the room but not everyone can get to the room. There will be a lot more emphasis put on work from home now that technologies like ClearView Flex are so strong.”

“It’s a great product, a staple here at REDLAB and one we will use long after the pandemic.”

 

Green Rock cements hybrid cloud facility

copywritten for BASE Media Cloud, published in Broadcast

The pandemic panicked much of the post industry into remote workarounds. Now, as vaccine rollouts bring back some normality, companies are grappling with the long-term structure of work and office life.

https://www.broadcastnow.co.uk/broadcast-network/green-rock-cements-hybrid-cloud-facility/5159526.article

“The pandemic has drop kicked the industry 5-10 years into the future,” says Niels Stevens, Snr Solutions Consultant Pro Video and Broadcast Workflows, Adobe. “Some companies already moving in the direction of cloud are setting themselves up for a full cloud-based workflow. Others who have made large investments in hardware are having to segue into a hybrid model as they go towards remote.”

These ideas and the technology underpinning them have been spearheaded by BASE Media Cloud since its launch in 2015, its founders having been evangelists of cloud-based post production for at least a decade.

“Back around 2010 when I owned and operated a post production facility in London and Pinewood Film Studios, we were talking about how to move editing to the cloud”, comments BASE Media Cloud Founder and Managing Director, Ben Foakes.

“My now CTO (Damon Neale) and I sat on my facility sofa drinking tea, figuring out how we could build out a cloud-based facility using Adobe products for offline and then running high-end finishing workflows on-prem. Fast forward to 2021, and thanks to tech partnerships with BeBop, iconik, IBM and Adobe, we are now able to bring this idea to reality”.

BASE Media Cloud has worked with UK-based post facility and creative agency Green Rock since both companies were early stage, and has most recently designed a brand new, end-to-end, cloud-based facility for the team at Green Rock.

“Our journey to cloud predated the pandemic,” explains Simon Green, founder and CEO of Green Rock, a production and post production agency based in London and LA. “I had my first conversations about a completely virtual post facility over a decade ago. Distributed collaborative post production is no longer a dream. It is now a reality.”

With clients including ITV, NatWest, Netflix and XPRIZE, Green Rock is among the first UK facilities to transition away from bricks and mortar into a completely cloud-based shop.

“For us, cloud infrastructure means less capital investment in on-premise solutions and a more flexible approach to team working,” Green says. “These issues have magnified in recent months as more and more clients have begun requesting a more agile way of working.

“We are always looking to create the future first and get that advantage for us, and our partners. We are therefore looking to move completely out of our Soho facility and to use cloud to connect clients and colleagues with media and creative tools. It frees us up to scale at a global level like we have only imagined.

“We have major brands and broadcasters keen to work with us because they can see that we are not limited to the physical suites we have on site. We can start to build a facility using cloud resources that enables us to bill and be billed by the hour.”

Green Rock is well on the way to achieving this with technology partner BASE Media Cloud, leveraging an integrated cloud-based platform with SaaS products from Adobe, BeBop and iconik.

“What we’ve designed together with Green Rock and are rolling out for them will allow their US and UK teams to collaborate remotely via the iconik cloud MAM,” says Ben Foakes, Founder & Managing Director, BASE Media Cloud. “On-prem storage remains for high-performance tasks, with burst capacity enabled in the cloud.

“There are no longer big fat PC towers under the desk,” Foakes adds. “It will be an entirely virtual workstation environment enabling Green Rock to re-invent its editing strategy.”

BASE Media Cloud acts as the cloud-agnostic storage hub into which media applications such as iconik, with integrations into the Adobe Creative Cloud suite, are plugged. Access to applications is by simple login, secured with multi-factor authentication. Remote workstations are powered by BeBop Technology, running on a choice of public clouds, including AWS, Azure or Google Cloud Platform, depending on the customer’s preference and budget.

“In essence this means your data becomes centralised in the cloud, your workstations run in the cloud but your users can be anywhere in the world,” Foakes says.

The evolution of post

Anecdotally it seems that from VFX boutiques to global broadcasters the industry is coming out of the pandemic seeking a more formalised and long-term strategy.

“At the start of COVID, everyone was in a rush,” Foakes says. “The broadcast and post community moved to nomadic working and had to quickly spin up reactive solutions such as using virtual private networks (VPNs) or running remote desktop or PCoIP sessions. These were temporary solutions because no one really had time to design for it.

“Now we’re seeing a huge wave of interest in reducing the size of premises but not moving 100 percent to the cloud. It’s about changing the ratio.”

Post production has evolved: from the days of physical film cutting, through tape-to-tape, and the transition at the end of the 1980s to nonlinear, file-based editing, things have now gone pure digital.

“The virtual hybrid cloud has arrived as part of the eventual move towards full virtual,” Foakes said.