Sunday 28 February 2021

Mars Rover landing: How NASA sent high quality video from another world

RedShark News

Major sports events like the Super Bowl, World Cup Final or the Olympics have long been the pinnacle of global televised viewing and of TV tech innovation. Over the next few years, that status could be eclipsed by a new genre of live broadcast events from space. 

https://www.redsharknews.com/mars-rover-landing-how-nasa-sent-high-quality-video-from-another-world

As nation states and commercial enterprises bid to control tickets to the moon and to colonise Mars, we’re entering a fevered new age of galactic travel and exploration. 

Most of it will be televised. That’s evident at NASA, which increases the number of cameras it carries on each successive mission. 

The recent live coverage of the Perseverance landing was part of NASA’s PR effort to justify the $2.4 billion it took to build and launch the thing. Landing and operating the rover during its prime mission alone is estimated to cost a further $300 million. 

The cameras and onboard microphone “can be considered a ‘public engagement payload,’” say NASA. “They are likely to give us a good and dramatic sense of the ride down to the surface!” 

As if that wasn’t enough to grab people’s attention, the space agency dubbed the rover’s entry and descent to the crater ‘Seven Minutes of Terror’. One wonders how it will up the ante when it comes to billing the human landing, which it is under presidential orders to deliver by 2033. 

Part of this sensationalism stems from the blackout in transmitting data from the craft to Earth.  

The delay, inconsequential between Earth and the Moon but anywhere from four to 24 minutes over the 225 million km (140 million miles) to Mars, means a lag in the ‘live’ broadcast during which apparently not even NASA’s ground control is sure of the outcome.  
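For the curious, the numbers behind that range are easy to check: the lag is simply distance divided by the speed of light. The short Python sketch below is illustrative only and uses approximate figures (Mars sits between roughly 55 million and 400 million km from Earth depending on where the two planets are in their orbits).

```python
# Rough one-way signal delay between Earth and Mars.
# Distances are approximations, not mission data.
C_KM_PER_S = 299_792  # speed of light, km/s

def one_way_delay_minutes(distance_km: float) -> float:
    """One-way light-time in minutes for a given distance."""
    return distance_km / C_KM_PER_S / 60

for label, km in [("closest approach", 55e6),
                  ("average distance", 225e6),
                  ("furthest apart", 400e6)]:
    print(f"{label}: ~{one_way_delay_minutes(km):.0f} min")
# -> roughly 3, 13 and 22 minutes, which is where the
#    'four to 24 minute' broadcast lag comes from.
```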

Another reason for not broadcasting the live stream in anything like real time is that its quality would not be deemed suitable for telly. 

“We probably could do it today, but definitely not in HD,” Stephen Townes, Chief Technologist for the Interplanetary Network at NASA’s Jet Propulsion Lab told Forbes.

Perseverance is able to send data directly to NASA’s Deep Space Network (DSN) antennas on Earth. However, at between 800 bits per second and 15,625 bps to a 70m DSN antenna, that’s not going to cut it for HDTV, which requires at least 8 Mb/s (UHD requires over 57 Mb/s). 

Today, that’s not the primary way of getting data back to Earth. Perseverance can send data from the surface to a Mars orbiter at up to 2 Mb/s, for onward relay to Earth. There’s still a delay, but 2 Mb/s is about the same bandwidth required for a stable stream from Netflix. 
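To put those rates in perspective, here is some back-of-the-envelope arithmetic (the clip size is an assumption, not a NASA figure) comparing how long a single minute of 8 Mb/s HD video would take over each link.

```python
# Illustrative transfer times for one minute of 8 Mb/s HD video.
# The clip size is an assumption: 8 Mb/s x 60 s = 480 megabits.
CLIP_BITS = 8_000_000 * 60

for label, bps in [("direct to 70m DSN antenna", 15_625),
                   ("surface-to-orbiter relay", 2_000_000)]:
    seconds = CLIP_BITS / bps
    print(f"{label}: ~{seconds / 3600:.1f} hours")
# -> about 8.5 hours at 15,625 bps versus roughly four minutes
#    over the 2 Mb/s relay, which is why the orbiter route is preferred.
```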

“Once it is on the Mars Reconnaissance Orbiter [the signals] can be sent to Earth at 500 kilobits-per-second (kb/s) up to 3 to 4 Mb/s depending on the distance between Mars and Earth,” said Townes.   

Ye cannae change the laws of physics, but you can bend them a bit.  

NASA is currently upgrading from radio to optical communications using a system it calls Deep Space Optical Communications (DSOC). Working with data encoded in photons and beamed over laser light is set to vastly increase the data rate.  

“It’s a very significant step in demonstrating the viability of optical communication at Mars distances,” Townes explains. NASA has demonstrated optical communication of up to 622 Mb/s from the Moon, but Mars at its closest range is over 150 times farther away from Earth, which makes communicating from Mars 22,500 times harder. 
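That “22,500 times harder” figure is the inverse-square law at work: received signal power falls off with the square of distance. A quick sketch, using approximate distances (the Mars figure is an assumption for a close approach):

```python
# Inverse-square scaling of received signal power with distance.
moon_km = 384_400             # average Earth-Moon distance
mars_closest_km = 57_600_000  # approximate close-approach Earth-Mars distance

ratio = mars_closest_km / moon_km
print(f"Mars is ~{ratio:.0f}x farther than the Moon")
print(f"The same transmitter delivers ~{ratio**2:,.0f}x less power")
# -> ~150x the distance, hence roughly 150^2 = 22,500x harder.
```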

Right now, using a 5 metre telescope in California as a receiver, NASA expect to get around 50 Mb/s using DSOC. With a larger ground telescope of around 10m diameter, it could support 200 Mb/s. 

Eyes in the sky 

Perseverance is carting 25 cameras to the Red Planet – the most ever flown in the history of deep-space exploration. Twenty-three are part of the spacecraft and two are on the Ingenuity Mars helicopter. 

Of the 23, nineteen are fitted to the rover. They are all assembled from easily available commercial hardware, according to NASA, and include nine colour cameras for engineering and three for entry, descent, and landing (two of which are in colour and dedicated to ‘public engagement’). Another two colour cameras have a zoom and are mounted on the mast that stands up from the rover deck. These are aligned to capture stereo 3D images, “providing a 3-D view similar to what human eyes would see, only better,” NASA claims. 

There are various scientific and navigation cameras too, the most intriguing of which is the SuperCam that fires a laser at targets smaller than 1mm and vaporises the rock into plasma. An onboard spectrograph records all this to reveal the chemical composition of the mineral. 

If there is anything in those sponge-like rock formations revealed by the rover then it had better watch out. That’s the trouble with tribbles. 

Next Giant Steps 

Space watching via channels like NASA TV is already routine but it’s going to become a cinematic event over the next decade as the number of missions and the ambition of them rockets upwards.  

Ingenuity, the Mars-Copter, will shortly begin a series of flights filming over the Martian landscape, and ExoMars, a Russian and European Space Agency program, plans to land another rover on Mars in 2023. There are also probes due to launch to Jupiter and Saturn’s moon Titan in the late 2020s, and space tourism is being launched by SpaceX, Virgin Galactic and Russian agency Roscosmos. 

Tom Cruise, of course, plans to make an actual blockbuster in orbit on the ISS, in a film to be funded by NASA. 

But ratings will go into stellar overdrive with manned missions back to the lunar surface and to Mars, which should lift off by 2030.  

Even if you think it’s all being filmed on a lot in Burbank these seat-of-the-pants thrill rides will be must-watch moments. 

Friday 26 February 2021

VR? AR? Today, It's All About XR

Streaming Media

VR is dead, long live VR! That about sums up the reality check the industry is having to spin about next-gen immersive video. The GSMA, which represents the interests of mobile network operators worldwide, admits that virtual reality (VR) continues to suffer from "post-hype realities," a hype that the GSMA's annual Mobile World Congress (MWC) played a role in building. At the same time, the GSMA points to the current mobile ecosystem as a "laboratory" for AI and immersive reality: "Whereas the smartphone wars centred on the app economy, the new battleground is in AI development and a push towards immersive reality." 

https://www.streamingmedia.com/Articles/Editorial/Featured-Articles/VR-AR-Today-Its-All-About-XR-145251.aspx

Analysts predict the global augmented reality (AR) and VR market will grow. Prescient & Strategic Intelligence charts a yearly 42.9% increase, topping $1.274 billion in 2030, largely based on an uptick in the sales of tablets and smartphones. 

Futuresource Consulting expects that the market will have remained relatively flat in 2020, with just fewer than 18 million VR and AR headsets estimated to have shipped. It says this is mainly driven by smartphone VR viewers, which account for more than half of all VR shipments. The market mix is expected to rapidly develop, however, with Futuresource Consulting forecasting that console VR headsets, PC headsets, and all-in-ones (such as Oculus' Quest 2) will enjoy solid growth through to 2024.

In short, VR has failed to take flight, but its time will come again. This article reviews what happened, what's going on now, and what's next for immersive video.

VR 1.0 Stalls

Facebook lit the fuse for VR in 2014 when it acquired Oculus for $3 billion. VR tech and production companies multiplied, with 2017 being the pinnacle. Jaunt raised $100 million from Disney, Google, and Sky, among others, before being bought by Verizon in 2019. Since then, though, things fizzled. Nokia sidelined its OZO division. Google shuttered its VR platform, Daydream. NextVR hit a market value of $800 million before selling to Apple in 2020.

"The VR market was overhyped and did not deliver meaningful results besides CG-based VR games," says Thierry Fautier, president of the Ultra HD Forum and VP of video strategy at Harmonic. "VR failed to take off because the technology was not convincing, due to severe issues like motion sickness and the ‘screen door effect,' and the quality was not good enough (4K vs. the 16K probably needed). We now see a second wave of VR enabled by 5G and new HMDs [head-mounted displays] with 8K capability."

The hype cycle linking VR and AR to 5G deployment began in 2017. In April that year, Korea Telecom and Verizon claimed the world's first end-to-end 5G network in a demonstration in which a Verizon employee appeared as a hologram on a monitor at Korea Telecom's headquarters. 

"If the 5G network is commercialized, 3D hologram video calls will be available as one of the representative 5G-based services," a Korea Telecom spokesperson told The Korea Herald. "Through a complete hologram video call, users will be able to meet a person in a remote area in a real size in real time."

At MWC 2018, South Korea's SK Telecom debuted 8K "social virtual reality" and "hologram AI." In September that year, Vodafone demonstrated a live holographic appearance of the England women's soccer captain over 5G. In February 2019, U.K. mobile operator EE showcased its 5G network by putting a hologram of "digital supermodel" Shudu on the red carpet at the BAFTA Awards. At MWC 2019, ZTE and China Telecom teamed up to show 5G 8K VR panoramic live streaming from a traveling bus. 

For years, the industry has been building the hype around 5G and, in particular, its "game-changing" capability to stream video games and AR/VR experiences in real time. After all, 5G is the first network infrastructure that can deliver the speed (100Mbps) and latency (10 ms) required by VR and AR mobile applications. But despite heavy investment from Apple, Microsoft, Google, and Facebook, VR/AR is far from mainstream. 

"The lack of a suitable supporting network is one of the factors that has contributed to a slower than expected uptake in mainstream adoption of VR," says Sol Rogers, CEO and founder of immersive content creator REWIND, in an article he wrote for Forbes. "5G will usher in the next era of immersive and cloud-connected experiences. Faster, more uniform data rates, lower latency and lower cost per bit will ensure it."

VR headset ownership has dipped from peaks in 2017 in Australia, the U.K., and the U.S. Whereas 13% of U.S households owned one in 2017, by 2019 that number had fallen to 8%. According to the GSMA, "Expensive, clunky hardware and continued challenges with dizziness have not helped, but limited content libraries beyond gaming and lack of edge infrastructure are also to blame." 

Sports leagues and broadcasters are among those that continue to see VR as a means of boosting viewer interest. The challenge will be to convince pay-TV customers to spend more for VR broadcasts when they are already being pulled in the opposite direction toward cheaper and more-flexible streaming packages. 

Even the $1.2 billion market size predicted for VR/AR in 2030 is dwarfed beside the existing $1 billion-plus market value of esports or the $1.52 trillion in total global mobile revenues anticipated by the GSMA by 2025.

"XR [extended reality, the catchall term for VR, AR, and mixed reality, or MR] has been a disappointment insofar as it's not met the hype that has often surrounded it, but these expectations were, in hindsight, largely misplaced," says Stephen Mears, a research analyst at Futuresource Consulting. "Too many technological developments needed to be made and coincide with one another for XR to really take off, and there's still a few steps to go. Notably, improved visuals must be combined with spatial audio technologies, and likewise the SoCs [systems on a chip] must be further developed for specific XR use cases."

AR is a little different because it is inherently more suitable for on-the-go mobile computing. Google, Microsoft, and Magic Leap quickly realized that AR's biggest early-days potential lies in B2B. Even Lenovo has launched AR platform ThinkReality for manufacturing.

"[Consumer AR] is hard, as the technology cannot deliver the consumer expectation at a price consumers find agreeable," Fautier says. "Google Glass was a failure because the experience was flawed and [the] price was high. The Magic Leap experience was not great, the ecosystem incomplete, as it required a special piece of hardware, and the price was high."

Earlier this year, Gartner removed AR from its list of emerging technologies, considering the tech to have matured and to be ready to move into the enterprise space.

VR 2.0 = XR

The term XR is now in vogue. You can't get a clearer description of its components than that provided by futurist and strategic advisor Bernard Marr: 

  • VR: A fully immersive experience where a user leaves the real-world environment behind to enter a fully digital environment via VR headsets.
  • AR: An experience where virtual objects are superimposed onto the real-world environment via smartphones, tablets, heads-up displays, or AR glasses.
  • MR: A step beyond augmented reality where the virtual objects placed in the real world can be interacted with and respond as if they were real objects.

Qualcomm is a leading promoter of an XR future. The chipmaker is explicitly linking XR with 5G and has corralled a number of mobile operators to commercialize a head-worn "XR viewer" that can be connected to a 5G smartphone and powered by Qualcomm silicon. "Mobile XR has the potential to become one of the world's most ubiquitous and disruptive computing platforms of the next decade, just as the smartphone of this decade has become," says a Qualcomm blog post. "At some point in the future, we see the convergence of the smartphone, mobile VR headset, and AR glasses into a single XR wearable. … XR glasses could replace many other screens in your life—even big ones like the TV in your living room." 

China Mobile—the world's largest wireless network operator—Deutsche Telekom, EE, KDDI, Orange, SoftBank, Verizon, Vodafone, Telefónica, and NTT DOCOMO are also on board. "We believe that the next decade will see a transition into a heads up society where eventually glasses will be an alternative to smartphones," says Sean Seaton, SVP of group partnering and devices at Deutsche Telekom, in a press release. 

Qualcomm supports Nreal, an XR viewer that connects via USB-C to an Android device, with all of the processing, storage, and wireless network connectivity needed for XR taking place on the phone instead of the headset. Brands including iQIYI, 3Glasses, and Panasonic are expected to follow next year with gadgets bearing a "Qualcomm XR Optimized" badge. Qualcomm mentions enterprise applications such as how "workplace meetings can be revolutionized through holographic telepresence with virtual collaboration platforms." 

Exploring XR

Meanwhile, Deutsche Telekom is teaming up with Orange to develop consumer XR content. The first partnership is with Montreal-based Felix & Paul Studios to provide "the ISS experience." Morgan Bouchet, director of digital content innovation and head of XR at Orange, says, "[We] have been moving forwards with projects to develop platforms that deliver XR/5G, and have been confronted with a common challenge: finding high-quality interactive and immersive content. Supply is still limited in this market, with designs being few, far between, and expensive. Together, we are better equipped to compete with larger players, especially [Google, Apple, Facebook, and Amazon], in order to purchase premium content."

The two companies are also looking into XR in sports and TV. Daniel Aslam, Deutsche Telekom's global partnering and XR business development manager, states, "We are exploring XR [Immersive] Media use cases where for example the TV interacts with mixed reality glasses. This we will continue in 2021 in a co-development project with Orange." 

"Growth in XR in the short-term will be driven by gaming and B2B, particularly in education and training," says Futuresource Consulting's Morris Garrard. "Improvements in productivity in the workplace will help bring XR into consumers' homes."

Perhaps the biggest (non-Qualcomm) XR product will be Apple's glasses, rumored to launch in mid-2021, possibly with an 8K spec.

"Ultimately, there will be compelling use cases for standalone (untethered) XR headsets and wearables, most likely in enterprise AR initially," says Mears. "However, the industry must ensure a seamless consumer experience and, crucially, software and content developers need to be brought on board to make sure that there is something actually worth doing with these devices when they're developed."

VR and AR require bandwidth between 20Mbps and 30Mbps and can therefore be consumed in a stadium without an HMD, which Harmonic believes will be a popular use case. Multi-view, multicam, and multi-game HD video, which will also require 20Mbps, has been demonstrated at the RG Lab with France Télévisions.

"Before 2025, we will see in-stadium applications on mobile devices," predicts Fautier. "For VR or 2D, the applications are multicam, wide-angle camera for live and catch-up action. For AR, we envision score and statistics overlay and point cloud for volumetric capture. We believe multi-view and replay of action in the stadium can now be deployed at scale with 5G."

XR Applications

Nor does XR need to be considered a purely visual opportunity. For example, there is capacity for audible AR, in which the world is described to the wearer through a voice assistant. Simon Forrest, principal analyst at Futuresource Consulting, posits that this may take the form of an advanced hearable product that's able to use location-based sensors and spatial awareness to deliver precisely timed, contextually aware information that's relevant to what the user is doing. "This has consequences on the advancement of AI in voice assistants and how internet search operates because all results must be spoken as the top answer, each and every time," he says.

Beyond headline speeds, the ultra-low latency in 5G effectively brings edge devices and cloud services far closer topologically, blurring the boundaries over where a compute resource can be placed. Furthermore, the opportunity for network slicing—effectively reserving guaranteed bandwidth for specific applications—will present mobile networks with advanced capability. 

"This opens up new applications that demand real-time response from cloud services, with XR one vertical that is able to take full advantage," says Forrest. "Nevertheless, we're still around 5 to 7 years away from blanket 5G service coverage in most developed regions, and certainly, devices won't be ubiquitously connecting at gigabit speeds, since this would necessitate a massive densification of 5G network infrastructure."

Forrest concludes that XR products cannot simply rely on high-bandwidth mmWave. Centered on 26 GHz, mmWave adds the "super data layer," providing ultra-high bandwidth service of up to 10Gbps over very short distances. "Therefore, 5G should be considered as one part of the overall solution for XR," he says. "In tandem, SoCs are being developed with neural network accelerators on board. This innovation is affording opportunity for edge-AI, massively improving the compute capabilities available on XR devices themselves. Combining this with low-latency 5G connectivity then allows cloud-based compute to run alongside. Ultimately, 5G enables a balance of compute between edge and cloud, with capacity to execute in near real-time."

For in-home XR experiences, 5G will be "essentially meaningless," says Forrest, with Wi-Fi 6 far more important in ensuring low-latency, high-quality XR experiences if the content being consumed or used is being directly streamed, such as sports. Moreover, the Wi-Fi 7 (802.11be) standard, presently under specification and scheduled for commercial launch in 2024, is expected to offer peak connectivity speeds of 30Gbps and lower latencies, competing directly with 5G mmWave. Forrest points out that this will necessitate fiber-to-the-premises in order to connect that massive local bandwidth to the internet backbone.

New codecs such as Versatile Video Coding (VVC) and Essential Video Coding (EVC; MPEG-5) will help reduce video bandwidth for broadcast/push VR applications, and these codecs will be used in tandem with 5G and Wi-Fi 6 connectivity. But video frames aren't necessarily the best technology for XR, and there are alternatives under development. One such innovation is in volumetric capture techniques: Video can be captured using multiple cameras, and then a "point cloud" is generated, which describes the 3D [x, y, z] coordinates and color of each pixel in the scene. 

"From thousands of points, a set of 3D objects are created, and these can be rendered in the headset; coupled with the increase in local compute performance, this offers potential for new XR products based around graphical rendering techniques, rather than just video frames," says Forrest.
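As a rough illustration of what such a point cloud looks like as data, each captured point is just a 3D position plus a colour, and a frame is a large collection of them. The sketch below is purely illustrative (the field names and layout are assumptions, not any particular codec's format):

```python
# Minimal sketch of a point-cloud frame: position plus colour per point.
# Purely illustrative; real formats (such as those targeted by MPEG V-PCC
# and G-PCC) add more attributes, spatial structure and heavy compression.
from dataclasses import dataclass

@dataclass
class Point:
    x: float  # scene coordinates, e.g. metres
    y: float
    z: float
    r: int    # 8-bit colour channels
    g: int
    b: int

frame = [
    Point(1.20, 0.85, 3.40, 200, 180, 160),
    Point(1.21, 0.85, 3.41, 198, 179, 158),
]
print(f"{len(frame)} points in this (tiny) frame")
```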

BT Sport is exploring how to capture volumetric video and deliver interactive immersive experiences over 5G, both within sports stadiums and to augment the live broadcast at home. Ideas include streaming a real-time virtual volumetric hologram of a boxing match onto a viewer's coffee table, simultaneously with the live feed. It is all at the proof-of-concept stage, but is part of a nearly $40 million U.K. government-funded program to develop applications that will drive 5G take up. 

6DoF Free Viewpoint

Companies within MPEG are actively defining algorithms for the compression of point clouds, such that capture and transmission are standardized. These are MPEG V-PCC and MPEG G-PCC for video and graphics respectively, both of which seek to capture 3D volumetric data and reduce bandwidth significantly.

"Beyond 2030, we anticipate there will be 6DoF [6 degrees of freedom] light field applications that will require high computation on the client side, as well as new display technology," Fautier says. "Those instances will [be suitable for] location-based environments. A second application is the free viewport application to render a game. While that application is not in real time today, we expect it to be."

In 2019, Canon used its Free Viewpoint Video System to provide highlight videos at the Rugby World Cup. The company says that the system "makes use of multiple cameras positioned around a stadium … and the high-resolution feeds are combined and converted into 3D spatial data, generating a comprehensive three-dimensional map of the action moment by moment."

Another application, which requires specific software in the 5G network, is FeMBMS (Further evolved Multimedia Broadcast Multicast Service). This new mobile broadcast mode in 3GPP Release 17 addresses distribution of popular content up to a radius of 60 km. Harmonic expects to see the first deployment at the Paris Olympics in 2024. If that is conclusive, it could be widely deployed after 2025. 

"Mobile broadcast is the wild card, as it has failed in 4G," Fautier says. "However, if the QoE is much better in situations where streaming is congested, then FeMBMS will be successful. MNOs will be able to monetize their network to content providers, broadcasters, or event organizers (like the IOC or FIFA)."

If XR is to fly, it needs a disruption in display technology, according to Qualcomm. Attributes of these "glasses" include delivering field of view for both immersive VR and useful AR (completely opaque for VR, yet at least about 85% transparent for AR), driving high dynamic range (HDR) of at least Rec. 2020 gamut, and refreshing at a minimum of about 120Hz (per eye). It also has to cost less than $100. 

Qualcomm also says that 5G-enhanced mobile broadband is required for XR mass adoption. "XR video will be the killer use case for 5G," it states. Target attributes include latency down to 1 ms and a uniform experience—even at the cell edge. Qualcomm predicts that over the next decade, 5G XR should advance to attain speeds of 200Mbps to 5,000Mbps, with interactive, real-time, 3D Free-Viewpoint, 6DoF, 8K/90 fps–120 fps HDR video.

Spatial Computing Frontier

XR devices are the gateway toward the real prize targeted by tech giants like Magic Leap, Microsoft, Apple, Google, and Nvidia: the next-gen internet, which is theorized as 3D and tactile, requiring a new human-machine interaction designed around voice and gesture.

Spatial computing, as defined by Magic Leap, "allows digital content to interact with its physical surroundings and people to interact with content seamlessly, without the limits of a screen." The company is bringing this forward by running spatial computing on the 5G infrastructure of partners such as Japan's NTT DOCOMO.

Nvidia's bid for a controlling share of this online future is called Omniverse. Targeted initially at the enterprise, Omniverse "fuses the physical and virtual worlds to simulate reality in real time with photorealistic detail," according to president and CEO Jensen Huang in a recent keynote. "Cloud native and photoreal, with path tracing and material simulation, the Omniverse allows designers and artists and even AIs to connect in a common world. This is the beginning of the Star Trek Holodeck, realized at last." 

Thursday 25 February 2021

Florentine Films Finishes Hemingway Remotely with ClearView Flex

copy written for Sohonet

Influential documentarian Ken Burns has made many landmark films in his four-decade career, including the Academy Award-nominated Brooklyn Bridge, The Civil War and The Vietnam War. His latest work is Hemingway, a three-part, six-hour series made for PBS with producer-director Lynn Novick, exploring the life and work of the legendary writer.

https://www.sohonet.com/our-resources/blogs/florentine-films-finishes-hemingway-remotely-with-clearview-flex/

Like all of Burns’ work it is produced by Florentine Films, the production company he set up in Walpole, New Hampshire with fellow filmmakers Elaine Mayes, Lawrence Hott and Roger Sherman. Each member works independently, but releases content under the shared banner of Florentine Films. 

“We’ll never run out of stories,” says Daniel White, post-production supervisor, Florentine Films. “We’ll have half a dozen in the hopper at various stages of production. I’ll be leapfrogging from film to film to film. There’s always something going on.”

In February 2020, White was working on a restoration of Burns’ marathon 1994 documentary Baseball when Covid started to shut things down.

“We had to very quickly figure out a solution to be able to continue coloring remotely with Technicolor PostWorks, New York. Right away we tested ClearView but it just wasn’t quite ready for what we needed. Within a month Sohonet had ramped it up including with 5.1 audio and we tested it again.”

White sent LG C9 55-inch monitors to PostWorks for it to calibrate to professional standard. The facility shipped them to the homes of Florentine’s post team who, in the meantime, had equipped themselves with the latest version of Apple TV and upgraded to the fastest local internet speeds.

“With ClearView, everyone who works at Florentine Films was working remotely on the project,” White says. “Suddenly, we were able to stream directly from the post house to do color and titling sessions. We were pretty impressed at being able to do it remotely. Usually, we’d send our editors and producers in and out of the edit facility to sit with the online editor. The travel would be expensive and disruptive to our families.” 

While White was in his basement office in Keene, New Hampshire, colorist Jack Lewars was at home in Brooklyn.  “We were streaming remotely and doing color grades,” says White. “It looked terrific and worked smoothly – essentially like a Netflix stream.”

A similar workflow applied to the show’s sound mix. Re-recording mixer Joshua Berger had a ClearView box at his stage at Harbor Picture Company in Manhattan, streaming to Hemingway’s sound editors working remotely from home.

“We had one of our dialogue editors in Ithaca, New York listening to streams played out of Harbor in Manhattan. When we did our mix playbacks we had picture and sound coming through in sync and at cinematic quality via ClearView Flex. We pay a lot of attention to detail on both the picture and sound side and being able to accomplish all of this hundreds of miles away from our mixer and colorist was remarkable.”

The pandemic made access to the archive of Hemingway’s manuscripts, correspondence, scrapbooks and photographs housed at the John F. Kennedy Presidential Library in Boston a little tricky. As a result, some of the material requiring a 4K scan came in at the last minute, but the project still finished on time and on budget. 

“ClearView Flex improved by leaps and bounds during the crisis. More facilities started using it. Without it, I’m not sure where we’d be.”

White has already moved on to other Burns projects on subjects as diverse as Muhammad Ali, the American Buffalo and the Holocaust. Even as the pandemic eases the most immediate concerns of working in a facility, remote solutions have opened up a flexible work environment that won’t be reversed. 

“Now that we have these tools, in the future we won’t need to travel as much,” says White. “If we’re setting looks it would be nice to sit in the room with your colorist directly or to point to something on a screen for your online editor. Many people will want to do their final review in the facility too. But for two thirds of the process, including titles or graphics sessions, you can do it from the comfort of your couch.”

The regular workflow for White and the production team is to have Zoom open on one screen to chat while sound and picture are streamed from the sound house or finishing facility. 

“I can tell the online editor at PostWorks to ‘pause right there’ or ‘can you back up 5 seconds’ or ‘let’s look at that graphic again’. I can ask the colorist to darken this or that area or make that parchment paper a little bit warmer.

“By contrast, for all of us to schedule something where we all had to be in Manhattan at the same time and figure out hotels and travel and food took a lot of time and expense. 

“Just being able to text each other ‘let’s do a Zoom at 11am tomorrow’ and set up a ClearView session very quickly has more than paid for itself in just one person’s travel costs alone.” 

 

Wednesday 24 February 2021

SMPTE 2110: Is it fit for the future of broadcast?

IBC

As live production moves to the cloud, can SMPTE Standard 2110 evolve to retain broadcast quality standards or should the industry wholeheartedly embrace web technologies? 

https://www.ibc.org/trends/smpte-2110-is-it-fit-for-the-future-of-broadcast/7301.article

Broadcast production is at a crossroads and CTOs have a decision to make: should their new studio or mobile facility be built using SMPTE standard 2110 or something that might be more suitable for the cloud computing age?

It’s a problem that has surfaced in recent months as large-scale live production — the area of premium broadcast programming for which 2110 was principally designed — has shut down or resorted to using less conventional technologies to keep on air.

For many, ST 2110 still represents the bedrock of professional production and a relatively risk-free way to segue the industry’s legacy SDI base into IP. Others see an existential crisis in which broadcast engineering-based standards are a cul-de-sac, and believe that if traditional players are ever to innovate on a par with internet-first streamers they need to change the narrative.

“Broadcast TV must adapt to online entertainment formats faster than the online entertainment formats make broadcast irrelevant,” says Johan Bolin, Edgeware’s chief product and technology officer.

“A growing number of broadcasters are asking themselves whether they really need to continue building the broadcast stack along conventional lines, or whether now is the time to embrace web and other technologies more generally in production.”

For many the issue boils down to the engineering mindset. If your starting point is to build a perfect pipeline where all the important performance indicators like frame sync are under full control and can be guaranteed, then this will inevitably fail when working in the cloud.

A STOP2110 website is blunt. It lambasts the standard as a “train wreck”, “worse than SDI” and “old school hardware engineering combined with design by committee.” The site doesn’t suggest any genuine alternative and its author lacks the courage to go public, but it raises the question: why has a dry international specification drawn such ire?

The value of ST 2110
From the analogue era through SDI, the industry has used baseband video simply because it’s the highest quality. Without processing, uncompressed media also offers the lowest latency, which matters in live production where you want to interact with your studio and your performers.

When it came to devising a means to migrate the industry into IP, these fundamentals were sensibly maintained. Standard 2110, for which SMPTE and co-developers VSF, EBU and AMWA have been awarded a Technical Emmy, reinvents SDI by providing for uncompressed video and precision timing.

It differs significantly from SDI in splitting audio, video and metadata into separate streams (or essences). Instead of having to worry about running the correct type of cable and signal to various locations, broadcasters have far greater versatility to be a responsive studio business.

“The idea that you could separate the flexibility that you needed from the cabling you laid down was considered a goal worth achieving,” says Bruce Devlin, SMPTE’s Standards Vice President.

Since standardisation in 2017, ST 2110 interfaces have been added to core equipment from cameras to multiviewers, enabling broadcasters to build new facilities entirely in IP. BBC Wales’ new facility in Cardiff is one example.

It achieved its aim of unifying the industry around a common suite of IP specs and allows broadcasters to migrate as fast as their investment allows by keeping one foot in the SDI camp.

However, the rocketing rise of OTT streaming and the advance of cloud computing, exacerbated by Covid-19, have put the future of 2110 under scrutiny — even at SMPTE itself.

“It is not really that 2110 is the wrong standard, it’s that the means of content consumption has started to change rapidly,” Devlin says. “The global pandemic accelerated this when live sports and stage events, all the stuff that 2110 is dedicated to, almost vanished overnight.”

Cloud-based workstations using PCoIP, and low-cost, low-bandwidth video transmission, have become the norm. Business teleconferencing tools, smartphone cameras and webcams are in routine use in at-home scenarios for both production crew and on-air talent. ST 2110 was not designed for this.

What’s more, the audience has begun to accept what the IABM calls ‘Covid Quality.’

“The use of off-the-shelf collaboration tools may not be ideal, but it keeps the media factories running,” it finds. “Audiences started to accept glitches, streaming issues and for that matter more often than not poor video and audio quality; our expectations for more 4K UHD in 2020 turned into Covid Quality.”

PTP meets floppy timing
It’s not as if things will go back to normal when the pandemic passes. Remote production links contributed over the internet were advancing anyway. Now they are entrenched. Cloud computing and cloud services are becoming ubiquitous.

“We’re having to find ways to use the 2110 ecosystem to connect nano-second accurate studio environments with remote operations over the internet or in the cloud where floppy timing exists,” Devlin says.

The Joint Task Force on Networked Media (JT-NM), which coordinates SMPTE 2110 and the wider development of a packet-based network infrastructure, is investigating ways to connect the Wide Area Network of a production plant with tools, applications and facilities outside of the studio.

However, current cloud connections are not up to the quality standards required for low latency live streaming media. Therefore, SMPTE says research into quality-enhancing technologies, such as retransmission or Automatic Repeat reQuest (ARQ), is crucial to improving the network infrastructure required to deliver broadcast-quality transmissions. 

“The JT-NM says we still need 2110 accuracy within a facility, but we don’t necessarily need 2110 perfection between two facilities or between an OB truck and a facility,” says Devlin.

“It’s finding a way to take the gold-plated excellence of 2110 together with parts of the ecosystem which are less gold plated and using them both to produce better in a Covid world.”

One option is to compress media to get it to and from the main production plant or into and out of cloud. The leading scheme is ISO standard JPEG XS, a mezzanine compression that squeezes the bits sufficiently to save on bandwidth but not hard enough to destroy the quality needed for manipulation, like chroma keying, in production. Crucially for live production, JPEG XS exhibits extremely low latency. It is already mapped into the 2110-22 ecosystem and products are launching with JPEG XS capability.

The BBC also expects ‘hybrid’ architectures to evolve, and its R&D team is looking at how to ensure interoperability. In a blogpost, it says: “ST 2110 isn’t naturally suited to deployment in a cloud, so we expect ‘hybrid’ architectures to evolve, and will be looking at how to ensure interoperability in these. This is likely to include work on ensuring that media identity and timing information is preserved, including where we need to go through compressed channels, such as for contribution from remote studio.”

ProAV interoperability
Also in the works is a proposal to standardise the interoperability of products within the Pro AV sphere. The Internet Protocol Media Experience (IPMX) would encompass many technologies being used by at-home productions, such as robotic PTZ cameras and webcams, as well as video conference codecs.

IPMX is based on 2110 and promoted by the Alliance for IP Media Solutions (Aims), which is chief cheerleader for ST 2110 in broadcast. This makes sense since, according to Aims, a quarter of its members sell into both broadcast and AV markets.

The move also recognises that both AV and broadcast are undergoing a transition to IP. The benefits are similar for both industries, such as bi-directional cabling and reduced space. The gear used to produce and distribute content for giant screens at music venues, for digital signage or for esports events is also sold into broadcast. And in many cases the quality of AV content exceeds that of broadcast.

The elephant in the room when talking to SMPTE, Aims and ST2110 supporting vendors like Imagine and Sony is the widely used video over IP transport scheme NDI. Developed by NewTek and owned by Vizrt, NDI is a live production protocol considered a non-starter by backers of 2110 because its heavy compression is considered unsuitable for broadcast and its proprietary nature incompatible with open standards.

Innovating production
However, these arguments are precursors to the wider challenge of evolving production to deliver truly personalised, interactive media.

This is generally considered the future of ‘TV’ and is tantalisingly in reach thanks to high-speed high bandwidth technologies like 5G. In comparison, the production of content itself remains in the dark ages and ST 2110 is considered by some to be part of the problem.

“Fundamentally, if TV is to transform it must overcome the brick wall between production and distribution,” says Bolin. “These two domains are separated and 2110 is not the solution.”

He argues that while the upstream process in TV is all about creating content, the downstream process attempts to reach as wide an audience as possible, whether through satellite, cable or DTT and now the internet.

“Upstream has worked with the same production processes and tech stacks for five decades but the growth of the internet has forced broadcasters to increasingly work with internet-based technologies downstream.

“Yet it is incredibly difficult today for viewers to contribute video upstream. This is by design. It is not a consequence of the technology. It is how we have designed the technology.”

Bolin says he wants to see technology that “not only allows” but “encourages the industry to mix and blend downstream and upstream processes” to enable TV formats more tailored to the viewer or concepts that allow viewers to contribute to the live programme.

BBC in the cloud
These are not the thoughts of one maverick vendor. The BBC is thinking along identical lines.

Having started to use IP in production centres like BBC Wales fitted with 2110, it now says, “the content-making capacity and equipment in these facilities is still mostly fixed during the design and fitout stages, meaning large changes can only occur during a re-fit. The business operating model of a current generation IP facility is also fairly inflexible, with large capital expenditure required upfront.”

Those are alarming statements given that they could equally apply to SDI, the prison from which ST 2110 promised escape. Content still has to be created using traditional broadcast equipment in a physical production facility.

BBC R&D is therefore investigating how it can apply the cloud computing technologies which run iPlayer to its production operations.

“R&D are working with colleagues from around the BBC to join up these two areas, enabling broadcast centre-style production operations to occur within a software-defined cloud environment,” it states.

“We think the benefits of this will be huge, making our physical IP facilities even more flexible, and enabling us to deploy fully virtualised production systems on demand. Ultimately, this will help the BBC make more efficient use of resources and deliver more content to audiences.”

Edgeware is making similar explorations. “The idea is to take web-based technologies and the tech stacks and concepts from games development and esports and incorporate those into the TV stack,” says Bolin.

“Broadcast has always been about guaranteed bitrates and guaranteed framerates and guaranteed no drift in time. Video on the internet is about accepting its imperfections, accepting that you will have drift and you will have a problem guaranteeing perfect bit rates. The onus is on the industry to build solutions that mitigate these imperfections.”

No matter how perfect the upstream there will always be imperfection in the downstream. That’s true with SDI or 2110 since the source is always degraded in some form during distribution. Bolin says the industry should prioritise innovation in production and work with internet’s concept of best effort distribution.

Indeed, there are a number of protocols for smoothing loss, jitter and latency, such as MPEG-DASH, RIST and SRT, which do mitigate the internet’s deficiencies.

“We should facilitate innovation rather than seek perfection,” he says. “Today’s best effort is pretty darn good.”

 

 


What the Heck Is (n)K Resolution?

Copy written for AVID

Leaps in video resolution are unrelenting. Acquisition continues its inexorable march from 4K to 8K—and toward a time when pixel counts will be virtually unlimited. This trend is set to unleash new creative possibilities, spurred on by advances in technology and by consumers' desire for more immersive, photorealistic experiences. Avid calls this trend (n)K resolution, and it describes a future in which creatives are no longer constrained by the number or quality of pixels in a screen.

https://www.avid.com/resource-center/what-the-heck-is-nk-resolution

So, what is driving resolution independence, and what can you expect to see from it down the line? Let's break it down.

MOVING TOWARD RESOLUTION INDEPENDENCE

"Within Avid, we've been following a philosophy of resolution independence," says Shailendra Mathur, vice president of architecture at Avid Technology, in a Z by HP report titled Reshaping Creativity. "That's why we call it (n)K resolution. Any aspect ratio, any resolution. We've gone from SD to HD to UHD, and now we're at 8K. That trend is going to continue."

There are many reasons for acquiring video at the highest resolution, including banking a master copy for sale when the format's market (e.g., the install base of screens) catches up. From VFX to frame resizing, high-end content is routinely produced in post using high resolution and bit rates. Acquiring at the highest resolution produces a better-quality output—even if the end device plays back a lower resolution and bit rate.

The industry's adoption of UHD formats is following a similar trajectory to the transition from SD to HD, though at an accelerated pace thanks to digital-first platforms like YouTube—and the momentum is on track to continue through 8K to 16K, 32K, and beyond.

"The biggest driver is the demand by humans for even more immersive visual content," says Thomas Coughlin, digital storage analyst and author of the annual Digital Storage for Media and Entertainment Report. "Other drivers are computing, networking, and storage technologies that can support the creation and use of ever higher resolution content."

THE DESIRE FOR HIGHER FIDELITY CONTENT

Jeremy Krinitt, senior developer relations manager at NVIDIA, agrees. "There's a strong desire among people to experience content in higher fidelity. This has driven higher resolution requirements, but it's also driving technologies like HDR that can more accurately display colors," he says. "Ultimately, all of this is in the service of storytelling. Whether something is recorded on an old webcam or on the latest 8K camera, it needs to be able to serve the storytelling purposes of the person creating the content."

In Japan, 8K broadcasts have already made the air, a library of 8K resolution content is available on sites like YouTube, and the flagship screens/flat panels of major consumer electronics brands are now 8K. However, the creative demand for super resolutions is targeting emerging immersive applications.

"While flat image resolution may reach a limit, 360° content requires higher resolution, driving the resolution and image quality requirements even further," says Coughlin. "Volumetric computing capture and display technology will require the use of even more captured content."

Another factor impeding the breakout of consumer VR is the bottleneck in both resolution and the ability to deliver high fidelity to all parts of the viewing experience, including peripheral vision. VR requires wrapping the participant in a photorealistic experience with a minimum of 8K resolution content delivered to both eyes.

RESHAPING REALITIES WITH VOLUMETRIC VIDEO

"Viewing through two eyes is the natural thing to do, and stereoscopic VR takes us into the next level," says Mathur. "It's a wholly immersive experience." Mathur believes we won't be satisfied with entertainment "until we can offer an alternate reality that matches how our senses work."

This vision is in the early stages of being built by computing giants Apple, Microsoft, Google, and NVIDIA. Otherwise known as spatial computing, it conceptualizes a next-generation, 3D version of the internet that seamlessly blends the physical world with the digital in an extended reality.

"When you create things spatially, you can explore them as either a virtual reality or an augmented reality experience," says Nonny de la Peña, founder and CEO of Emblematic Group, in the Z by HP report. "I think that the idea of the separation between [AR and VR] technologies is going to go away."

As Reshaping Creativity observes, spatial computing offers gesture control—currently only practical in VR—in the 3D world, allowing users to interact with virtual interfaces and objects by reaching out and touching them. Resolution, along with key image attributes like HDR and high frame rate, is central to this future.

8K PRODUCTION HAS ARRIVED

The ecosystem to produce 8K has arrived. Feature films like the 2020 Netflix release Mank are part of a growing number of productions being shot in 8K, in part for production and in part for archive. RED, ARRI, Sony, and Blackmagic all have cameras capable of acquisition well beyond 4K, with Blackmagic reaching 12K; the release of more cameras able to record at these higher resolutions is inevitable.

Higher resolutions are entering the mix beyond television and film, too. The first applications will be in digital out-of-home advertising and large entertainment venues, such as the MSG Sphere being built in Las Vegas.

Experiencing images at higher resolutions whets the appetite for pushing visual limits even further. Techniques that capture volumetric video of a 3D space may help create content for VR head-mounted displays, and eventually for free-standing holography, such as those being developed at Light Field Lab.

"I have heard talk that something like the holodeck from Star Trek could require more than 520K video," jokes Coughlin.

Yet NVIDIA CEO Jensen Huang says in this IBC article that the combination of cloud-native and photorealistic tools with path tracing and material simulation, powered by NVIDIA GPUs and AI algorithms, could bring that holodeck to life.

(n)K RESOLUTION IS ON ITS WAY

None of this will be easy. It is predicated on continued advances in compression technology with AI solutions, cloud storage, 5G edge computing processors, and networking bandwidth.

According to Krinitt, it's not just about processing: innovations will rest on higher efficiency and capabilities from networking technology. This has implications for CTOs and IT teams looking to future-proof their infrastructure.

"Since resolution and other important video requirements, such as bits per pixel, will drive ever higher storage capacities, anticipating this need and building for this level of scale will be an important element in future-proofing post-production and archiving architectures," says Coughlin.

It's quite a vision for the industry. Resolution independence opens new possibilities for creatives to tell stories at whatever combination of resolution, color gamut, dynamic range, frame rate, and even dimension they wish, automatically scalable up or down to the viewer's screen, environment, or pleasure. (n)K resolution is on its way—and it might be here sooner than you think.

 

Monday 22 February 2021

Powering Extreme E’s Remote Live Production

copy written for BASE Media Cloud 

A multi-cloud distribution platform from Base Media Cloud and Veritone helps off-road racing series Extreme E store, manage and share assets with multiple global partners.

https://www.broadcast-sport.com/2021/02/22/feature-powering-extreme-es-remote-live-production/

Imagine a Red Bull air race on the ground. There are certain gates that teams need to pass through, but how they get through them is down to the skill of male and female drivers on terrain that varies from desert to deforested jungle to deserted glacier.

That’s the premise of all-electric rally-style Extreme E, the progressive FIA-backed SUV racing series which launches next month.

With 30 percent of the planet’s CO2 emissions coming from transport, Extreme E exists to showcase the performance of electric vehicles, and to accelerate their adoption.

As such it needs to marry urgent environmental messaging with as lean a production footprint as possible.  That’s particularly challenging for a live broadcast given that the locations are remote and infrastructure-free.

“We want to shine the spotlight on the climate crisis that we’re facing all over the world through the lens of an adrenaline filled action sport,” explains Dave Adey, head of broadcast and technology for Extreme E. “We’re employing remote production with minimal production staff on site and no spectators at the track, so for us content and fast turnaround is imperative.”

There are four constituent elements to the Extreme E production designed by production partners Aurora Media Worldwide and North One. All race camera sources, including drones and onboards, are uplinked from a lightweight TV compound on site. Car telemetry is managed by Barcelona-based Al Kamel Systems, with AR and VR overlays from NEP in the Netherlands. Everything is backhauled to the gallery in London for production of live coverage across each race weekend, plus highlights shows, a 30-minute race preview and 300 VOD films for digital.

Given the scale of production, Extreme E needed a system that would allow it to manage content, including the ability to upload from anywhere into a centralised, secure storage location. It also needed to be able to manipulate, search, view and download content, and to give this functionality to its authorised media partners.

“We need to find any of the content instantly so the user interface needs to be intuitive and the metadata schema rich but precise,” Adey says. “Once you find the clip you want to be able to view it with a proxy version online. We then may want to manipulate that content or create clips or transcode to different file formats. The system we chose had to do all of this and more.”

Extreme E chose to use a sports multi-cloud Digital Media Hub (DMH) comprising a cloud-native storage and content distribution platform developed and managed by Base Media Cloud with Veritone’s AI-powered asset management system.

After transmission, all live programming and all the rest of the content, including VTs, highlights and digital, is uploaded to the DMH for rights holders to search, view and use.

“The DMH provides a dual purpose: to make content easily available to rights holders; and provide a rich suite of assets that rights holders can use to enhance their own content,” explains Adey.

 “A key benefit of a cloud-native solution is that the distribution of content is much more cost effective. I don’t have to put up a satellite feed to do a highlights program. Instead, we can create those programs in London, upload them into our content management system and make them immediately accessible via accelerated download for any of our rights owners and media partners around the world.

“It’s also really important that we have very high and very clear, environmental credentials which the multi-cloud sports media solution from Base Media Cloud and Veritone gives us.”

More than 70 broadcasters have bought rights to Extreme E including Discovery, Sky Sports, Fox Sports, BBC, ProSieben Maxx, Disney ESPN and TV Globo. The series launches in April in the deserts of Saudi Arabia and will continue in Senegal, Greenland, Brazil and Tierra del Fuego. 

 

Thursday 18 February 2021

Remote Collaboration is a Fact of Life as Post Production Finds a Home in the Cloud

copyritten for Sohonet 

Chuck Parker, CEO, Sohonet, on Storytellers gaining confidence in remote workflows, and how the technology will ultimately get better and cheaper over time, resulting in a "new normal" that is just as effective but with better work-life balance.

https://www.sohonet.com/our-resources/blogs/remote-collaboration-is-a-fact-of-life-as-post-production-finds-a-home-in-the-cloud/

The collective scramble that our industry colleagues, from Avid editors to VFX artists, were forced to undertake in the early stages of the pandemic has given way to a universal acceptance and relatively standardized mode of remote working. With health and safety protocols likely to remain in effect for many months to come (and warnings that we'll be social distancing and wearing masks well into 2022), it's clear that we'll be in a hybrid work scenario for some extended period of time. The practice of putting the customer into the darkened room while the artist drives the session remotely to preserve pandemic protocols is likely to work into our industry's "muscle memory" in 2021. 

By the end of 2021, the industry will have experienced 21 months of remote collaboration. What began as a necessity will most likely remain in place even as teams are allowed to travel or return to the office. We are forecasting the bulk of 2021 to be remote, with artists and creative execs traveling to special darkened rooms sporadically and often alone except for those joining remotely.  

Far from diminishing, this trend will continue as people realize that such tools solve practical problems for the content creation process and improve everyone's quality of life. Remote collaboration is beneficial to artists. Talented creatives no longer have to live in expensive cities like New York, LA or London to access work. Any location which meets your family's needs and work-life balance is on the table. Remote collaboration enables the work to move to you and, while we yearn for togetherness, raises the question of whether we will ever return to a single creative suite with the number of physical participants and frequency we once did.

Post production moves to the cloud

The VFX end of our industry began the move to cloud at scale in 2015, driving improved rendering costs and time efficiencies and introducing new workflows.  In 2020, the pandemic kicked remote use cases for creative tools into overdrive, resulting in more post production processes moving to the cloud, boosting remote distributed collaboration (i.e. lots of team members in lots of different places).

However, while there are many, many more artists using remote tools hosted with public cloud providers, there are still at least two major hurdles for our industry to clear before the physical trappings of our existence ebb away. First, the simple economic hurdle has to be solved. Thousands of industry participants have already invested in creative workstations and other tech gear deployed in machine rooms and data centers all over the world. That "sunk capital" problem will likely take 18-30 months to work itself out, arriving at a future where the majority of new purchases follow a SaaS model rather than the currently most common capital expenditure (capex) model.

Second, we need to solve the technical challenges of "critical review output". That means providing the same video and audio fidelity and "over the shoulder" responsiveness that our industry demands of its in-suite experiences from the cloud-delivered equivalents. This is not a simple problem, because video and audio fidelity requirements demand large streaming payloads, which in turn create more challenges for latency and contention on the network — after all, there is a public internet between the artist and their cloud-hosted tool set, duplicated to every viewer of their live stream. Pushing the output of those tools to dispersed artists and creative execs around the world, at scale and so that it works right every time, won't be easy.

So while the timing may be harder to predict, our industry's direction of travel is certainly accelerating towards a future where Storytellers will continue to gain confidence in the remote workflows they are forced to utilize today. Those workflows will get better and cheaper over time, resulting in our "new normal", which is likely to be every bit as effective as the old way of physically being together, but with the work-life balance benefits of being where life needs you to be at that point in time.