Tuesday, 6 September 2022

Electric Air Mobility ready for lift off

TechInformed

article here 


Twenty years ago, industrial designer Stephen Tibbitts wrote a proposal to NASA for a grant to develop an electric-powered vertical take-off and landing aircraft. Nobody wanted to listen, and without funding he was forced to park the idea. Now the space and aeronautics agency is a prime mover in the bid to develop eVTOLs, and Tibbitts’ company Zeva Aero is one of a handful to have successfully flown a full-scale craft.

The Advanced Air Mobility (AAM) sector is poised for take-off. Morgan Stanley Research believes such aircraft could be common by 2040, projecting a total addressable market of $1.5 trillion. Venture capital and public money are pouring into eVTOL projects to fund start-ups like Zeva Aero and the ‘flying taxi’ prototypes of automotive giants like Hyundai. Morgan Stanley’s more bullish forecast places the market at $2.9tn.

Tibbitts thinks even this is conservative. “If you project out, AAM could be bigger than automotive. It is no longer science fiction.”

“Essentially, we’re seeing a convergence of macro factors like decarbonisation and increasing urbanisation with the maturity of technology that enables Advanced Air Mobility,” explains James Richmond, head of AAM for design and consultancy Atkins.

“We’ve talked about the idea of these new types of aircraft for a long time, but the technology is now maturing to make it a reality – and these aircraft are zero-emission, electric-powered vehicles that can play a role in tackling big societal challenges like the need for more sustainable aviation. Now is the moment when there is both a ‘push’ and a ‘pull’ in terms of technology and market demand respectively.”

Electric technology advances

The vertical take-off and landing concept has been around for decades, mainly in military use. Like those aircraft, an eVTOL takes off and lands vertically, but at cruising altitude it flies much like a plane. The vertical launch means an eVTOL doesn’t require a long runway, and the craft offers carbon-free air travel. Flights would launch from special ‘vertiports’, hundreds of thousands of which will be needed to facilitate point-to-point journeys.

“At a foundational level, the technologies that enable successful eVTOL operations, such as batteries, are at a place today fit to build an economically viable business,” says Adam Goldstein, CEO and founder at eVTOL maker Archer Aviation. “VCs who are attuned to the space understand this timeline, as well as the promise of the eVTOL industry, and the many benefits this next generation of transportation will bring.”

While the end goal is mass-market consumer transport, the two initial target markets are business-oriented.

In a 2021 paper NASA states that “eVTOL aircraft will have the potential to become an essential tool to Public Service agencies around the world in applications such as firefighting, public safety, search and rescue, disaster relief and law enforcement.”

In 2018 NASA funded an industry contest aimed at accelerating UAM (urban air mobility) development. It has since teamed with Elroy Air to develop the world’s first automated VTOL aerial cargo system (Elroy’s craft is a hybrid-electric VTOL, combining batteries with conventional fuel, which gives it longer range and greater payload until all-electric technology catches up).

“Fire departments are responsible for regions that are very spread out and they want vehicles that can carry a medic with a full life-saving kit,” Tibbitts says. “They want 120 miles of range to fly out and return. Our target is a vehicle that will fulfil that need.”

Zeva has a ship-to-shore model in development called Z2 with interest from the US Navy. “The Navy has an RFP out for a compact VTOL that doesn’t require a launch or capture system and can haul 200lb ship to shore,” he explains. “The other category is rich people. Our compact design can land on almost any boat without modification. It doesn’t have to be mega yachts.”

UAM for business travellers

The pitch to the business traveller is about time, speed and efficiency.

“From JFK to Manhattan by road can take two to three hours,” says James Bircumshaw, UK and EMEA infrastructure manager at AAM infrastructure group Skyports. “By air it’s six minutes. Traditionally the only other way is by helicopter, and these are prohibitively expensive for the mass market, including most business travellers.”

Skyports is a recipient of the UK Government’s Future Flight Challenge, which provides grants to help accelerate AAM. This includes building a new test vertiport outside London. Separately, the company last year acquired a public heliport close to Canary Wharf and plans to develop it as a potential vertiport for eVTOL aircraft.

“This will not replace the bus or train. The Heathrow Express is going to be the quickest route from Heathrow to central London. But Heathrow to Canary Wharf? That’s 90 minutes on public transport or two hours by car. eVTOL will start with premium customers and eventually get to a price point that is mass transportation.”

Lead times for craft certification run to years, and building operational aviation-grade infrastructure takes even longer, but with the first eVTOLs on track to be certified by 2024-25, developers like Skyports say they need to invest now or risk missing out.

This business case is being backed by concrete orders. California-based Archer has a $1 billion order with an option for an additional $500 million of aircraft from United Airlines as part of the airline’s “commitment to decarbonisation” according to Goldstein.

The partnership, says Goldstein, “will enable United customers to travel to and from airports in a sustainable manner,” while helping Archer accelerate its own development roadmap.

Last month, Archer received the first $10m pre-delivery payment from United, one of the first of its kind in the industry.

The four-year-old company listed on the New York Stock Exchange in February 2021 at a valuation of $3.8bn. It is focused on urban air mobility, where VCs are convinced that eVTOL aircraft can help overcome overcrowding, pollution, and aging transportation infrastructure. The developer has already signed partnerships with Los Angeles and Miami, “two cities in the heart of the overcrowding crisis.”

“We’re focused on improving urban mobility, easing commuter congestion, and making it possible for passengers to experience new parts of their surroundings made accessible by the speed and range of eVTOL travel.”

Archer’s ‘Midnight’ craft has 12 rotors: six tilting rotors in front of the wing, which are used in both hover and cruise, and six fixed rotors used only for hover and the transition phase of flight.

“This design is key to our production aircraft’s ability to fly at 150mph for distances of up to 100 miles, enabling intra-city mobility as well as longer range trips to the areas surrounding cities,” Goldstein explains. “It will have a payload of over 1,000 pounds and carry four passengers, plus a pilot.”

Remote region connectivity

Other eVTOL companies working on UAM are building wingless multirotors with ranges of 10-15 miles, with design differences reflecting the aircraft’s varying purposes. Zeva’s Zero craft, for instance, is uniquely saucer-shaped and will fly one passenger/pilot over 50 miles at 160mph.

Tibbitts thinks the urban-first approach is fraught with difficulty. “The hurdles to clearing regulation in any city are horrendous. I think the FAA [Federal Aviation Administration] is very open to discussion and to getting the rules in place, but I don’t see this as short term.”

He points instead to use cases for eVTOL in remote parts of the world “that are never going to have infrastructure like roads. Indonesia comprises 18,000 islands. Norway has a similarly dispersed geography, or the Amazon basin. That is where eVTOL shines.”

Bircumshaw disagrees and says the focus of most of the industry is on UAM. “eVTOL companies have raised millions if not billions of dollars and they are not going to return that investment by operating a fleet of five vehicles for search and rescue work. They have to be operating in LA or Dubai, markets where they can get thousands of movements through every day.”

Aviation tests, safety and certification

eVTOL manufacturers are currently accruing the thousands of hours of flight testing necessary for certification.

According to Richmond, aviation authorities are already developing regulation in their jurisdiction – “in the UK, the CAA has already established a route to certification – as are the likes of the FAA in America and the EASA for the EU.”

Tibbitts says Zeva is certified to fly today as an experimental aircraft, provided the pilot has a licence and flies over non-populated areas during the day.

Bircumshaw concedes there are ‘massive safety concerns’, which is why eVTOLs are undergoing rigorous tests.

“These vehicles are being certified to the same safety level as an Airbus A320 commercial jet. There are no shortcuts. They will be safer than helicopters.”

Archer says it is on track to certify Midnight for commercial use by the end of 2024. Once FAA Type Certification has been obtained, it plans to launch the first [business consumer] flights in 2025.

By 2026 up to half a dozen different eVTOL craft could be certified for use, predicts Bircumshaw. “We need to be ready,” he says.

Skyports’ main European test site, at the Cergy-Pontoise airfield outside Paris, is timed to open for the 2024 Olympics.

“We want to do very advanced demos from Charles de Gaulle to the Olympic village. To do that we are spending the next two years in extensive tests with vehicle manufacturers and a number of other partners to make the ecosystem viable.”

 


Apple TV’s Earth At Night In Colour

 written for VMI

article here

Apple TV’s nature docuseries explored life after dark in ways not previously possible using new camera technology and the expertise of cinematographer Mark Payne-Gill.

Over a 30-year career, MPG (as he is known) has gained a reputation as one of the world’s leading wildlife and lighting camerapersons, with credits including Planet Earth (BBC NHU/Discovery Channel); Frozen Planet (BBC NHU/Discovery); the long-running BBC show The Sky at Night; and six series of Stargazing Live (BBC Two).

His experience and interest in shooting natural history and astronomy made MPG the perfect candidate to be the technical lead for Offspring Films’ ambitious concept to film an entire series at night, under just moonlight conditions.

Mark Payne-Gill using the Canon ME20F-SH super low light camera with an astrograph lens

“I love the idea of pushing boundaries and no one had attempted to shoot a whole episode let alone a series at night using just the moon as the lighting source.”

The default for night shoots is to use infrared light, but this would counter the series’ unique selling point of being in colour. An alternative is to flood the night scene with generator-powered lamps, which have a detrimental effect on the natural behaviour of the wildlife subjects.

“However, new ultra low light sensitive cameras had just come to market which I thought could be used to capture incredible night scenes in real-time,” Payne-Gill says.

These included the Canon ME20F-SH full-frame Full HD camera and the Sony α7S II.

“The main problem was how to shoot in the dark with long lenses. The fact is, you can’t use conventional daytime lenses, which are too slow for moonlight. Tests showed we would need lenses at least as fast as T2.8, which is at the limit under full-moon conditions. If we could use a T2.0 that was better, since it meant we could shoot either side of the full moon, and with a T1.4 / 1.5 we could shoot five days either side as the moon waxes and wanes yet still has enough light to illuminate a scene.”
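
To put rough numbers on that stop arithmetic (an illustration, not a calculation from the article): the light a lens transmits scales with the inverse square of its T-number, so the gain in stops between a slower and a faster lens is

$$\text{stops gained} = 2\log_2\!\frac{T_{\mathrm{slow}}}{T_{\mathrm{fast}}}, \qquad 2\log_2\frac{2.8}{2.0} \approx 1 \text{ stop}, \qquad 2\log_2\frac{2.8}{1.4} = 2 \text{ stops}.$$

In other words, a T1.4 gathers roughly four times the light of a T2.8, which is what buys those extra shooting days either side of the full moon.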

From his astronomy work MPG knew that astrograph lenses would offer the speed but, being designed to operate as telescopes, would they work with regular cameras?

“I needed to squeeze as much out of each lens as we could. I sourced a T2.8 400mm and a T3 600mm, which had that crucial bit of extra reach in the dark. The downside was the sheer bulk of these lenses. Attached to small camera bodies they are like buckets. A T3 900mm lens weighs over 20kg, and since we’d need to travel on foot for large parts of the location work that was a non-starter.”

Weighing up weight against light-gathering capacity, MPG settled on an Officina Stellare (RH Veloce) 600mm T3 lens. He then tested both cameras with the lenses, recording an owl in a studio lit to replicate moonlight conditions.

For reference, he also tested a Panasonic VariCam LT with dual ISO, a Canon C700 also with dual ISO, an ARRI Amira and RED Gemini.

“We invested a lot of time in the grade to do the whole test justice and the grade highlighted where the images stood out and where they started to fall apart.

“The α7S had too many artefacts in the image when you cropped in. When freeze-framed, the texture of the feathers was pixelated and there was a red amp glow in the corner of the frame (an artefact that all extreme low light cameras seem to suffer from, caused when the sensor heats up). That could cause issues in post.

He continues, “The resolution of the ME20 images held up really well. You could see detail of the feather texture with no breakdown of images at the edge. I also liked the noise structure. It was more filmic whereas the α7S had a clinical video look which was not as pleasing to the eye.”

On location for the series MPG took with him the 600mm astrograph and a set of Sigma PL Super Speed Primes. Both the ME20 and the Sigma set were supplied by VMI.

“The kit was hired from VMI by Offspring but I knew Barry from the time we’d tested the Phantom VEO. I knew he and the VMI team would deliver everything we needed.”

Payne-Gill also took a RED Gemini for shooting from evening into night. Not only does this camera have decent light sensitivity, but it enables onboard RAW recording, negating the need for an external recorder (which, after all, the cameraperson would otherwise have to carry).

Offspring made 12 half-hour episodes over two seasons, for which Payne-Gill was lead technical expert and the main lighting cameraperson on four of the episodes.

“The shoot had so many variables and was a real challenge because of the tight turnaround. Cloud and bad weather can limit the ten-day window of filming because, of course, when the moon disappears we have no light source. Also, where we are on the planet had a bearing. The latitude of a crew position on earth in relation to the moon’s elevation can vary wildly. You can be at the equator and have the moon overhead all the time, but in Finland in October the moon never gets that high, so your illumination is diminished. And that can mean your lens choice is compromised.”

The series was the first to film Peregrine Falcons at night. It also delivered some rare sightings, like hippos trekking through the grasslands and two cheetahs playing and hunting together.

Most significantly, the filmmakers recorded tiny tarsiers hunting in Sulawesi, Indonesia, capturing this behaviour on camera for the first time.

“Tarsiers are tiny, they move very fast and they live under a dense rainforest canopy with very little moonlight coming through. It wasn’t possible to use moonlight alone, so we had to replicate it to at least give us a chance to capture beautiful images of these remarkable primates.”

Using any sort of big noisy generator with lights instantly shrinks the creature’s pupils to a pinprick, and that goes against the whole point of the shoot. “They need their pupils to be as wide as possible to gather light and see their prey.”

Instead, Payne-Gill set his battery-powered lighting rig to match the illumination of a full moon and filmed the animals natural and unaffected, with pupils that you can see are large and wide in the footage.

“They were comfortable that there was no threat to start hunting in front of us,” he says. “The key is fast lenses, which were just as integral to the success of this production as the camera.”

 


Friday, 2 September 2022

AI Is Going Hard and Going to Change Everything

NAB


The best AI systems, from DALL-E 2 and DeepMind’s AlphaFold to OpenAI’s GPT-3, are now so capable — and improving at such fast rates — that the conversation in Silicon Valley is starting to shift.

article here 

Fewer experts are confidently predicting that we have years or even decades to prepare for a wave of world-changing AI; many now believe that major changes are right around the corner, for better or worse.

“We all need to start adjusting our mental models to make space for the new, incredible machines in our midst,” says Kevin Roose, a technology columnist and the author of Futureproof: 9 Rules for Humans in the Age of Automation.

Take Google’s LaMDA, the AI that hit the headlines when a senior Google engineer was fired after claiming that it had become sentient.

Google disputed the claims, and lots of academics have argued against the engineer’s conclusions, but take out the sentience part, “and a weaker version of the argument — that state-of-the-art language models are becoming eerily good at having humanlike text conversations — would not have raised nearly as many eyebrows,” Roose says, in an article for The New York Times.

It seems as if AI models targeting all sorts of applications in different industries have suddenly hit a switch marked turbo-charge.

“AI systems can go from adorable and useless toys to very powerful products in a surprisingly short period of time,” Ajeya Cotra, a senior analyst with Open Philanthropy, told Roose. “People should take more seriously that AI could change things soon, and that could be really scary.”

There are plenty of skeptics who say claims of AI progress are overblown and that we’re still decades away from creating true AGI — artificial general intelligence — that is capable of “thinking” for itself.

“Even if they are right, and AI doesn’t achieve human-level sentience for many years, it’s easy to see how systems like GPT-3, LaMDA and DALL-E 2 could become a powerful force in society,” Roose says. “AI gets built into the social media apps we use every day. It makes its way into weapons used by the military and software used by children in their classrooms. Banks use AI to determine who’s eligible for loans, and police departments use it to investigate crimes.”

In a few years, it’s likely that the vast majority of the photos, videos and text we encounter on the internet could be AI-generated. Our online interactions “could become stranger and more fraught,” as we struggle to figure out which of our conversational partners are human and which are convincing bots. And “tech-savvy propagandists could use the technology to churn out targeted misinformation on a vast scale,” distorting the political process in ways we won’t see coming.

Roose outlines three things that could help divert us from this dystopian future.

First, regulators and politicians need to get up to speed.

“Few public officials have any firsthand experience with tools like GPT-3 or DALL-E 2, nor do they grasp how quickly progress is happening at the AI frontier,” he says.

If more politicians and regulators don’t get a grip “we could end up with a repeat of what happened with social media companies after the 2016 election — a collision of Silicon Valley power and Washington ignorance.”

Second, Roose calls on big tech — Google, Meta and OpenAI — to do a better job of explaining what they’re working on, “without sugar coating or soft-pedalling the risks,” he says.

 “Right now, many of the biggest AI models are developed behind closed doors, using private data sets and tested only by internal teams. When information about them is made public, it’s often either watered down by corporate PR or buried in inscrutable scientific papers.

“Tech companies won’t survive long term if they’re seen as having a hidden AI agenda that’s at odds with the public interest.”

And if these companies won’t open up voluntarily, AI engineers should go around their bosses and talk directly to policymakers and journalists themselves.

Third, it’s up to the news media to do a better job of explaining AI to the public. Roose isn’t excluding himself from criticism either.

Journalists too often use lazy and outmoded sci-fi shorthand (Skynet, HAL 9000) to translate what’s happening in AI to a general audience.

“Occasionally, we betray our ignorance by illustrating articles about software-based AI models with photos of hardware-based factory robots — an error that is as inexplicable as slapping a photo of a BMW on a story about bicycles.”

Cotra has estimated that there is a 35% chance of “transformational AI” emerging by 2036. This is the sort of AI that is so advanced it will deliver large-scale economic and societal changes, “such as eliminating most white-collar knowledge jobs.”

Roose says we need to move the discussion away from a narrow focus on AI’s potential to “take my job,” and rather to try to understand all of the ways AI is evolving for good and bad.

“What’s missing is a shared, value-neutral way of talking about what today’s AI systems are actually capable of doing, and what specific risks and opportunities those capabilities present.”

We need to do this in a hurry.

 


When Your Kubrick AI Isn’t HAL… It’s an AI Kubrick

NAB

AI is already so proficient at copying a particular artist’s work it won’t be long before filmmakers need to protect themselves from plagiarism.

article here 

There could even be a need right now to copyright camera moves, editing choices, color palettes, lighting schemes, or compositions because there is nothing to prevent an AI from entirely generating a new gangster movie in the style of Martin Scorsese or a sci-fi film that looks and feels like it has come from Stanley Kubrick.

On the other hand, there will no doubt be some in Hollywood calculating that if an AI could perfect a hit movie without having to pay for the fuss, the time, all the micro-decision making and the risk that human talent brings, it’s a price worth paying.

This is not idle speculation; the debate about artistic infringement by algorithm has become a hot one in the art world.

Swedish artist Simon Stålenhag is among those sounding a warning. “AI basically takes lifetimes of work by artists, without consent, and uses that data as the core ingredient in a new type of pastry that it can sell at a profit with the sole aim of enriching a bunch of yacht owners,” he tells Wired’s Will Knight.

Stålenhag’s style was recently used to create images on the text-to-image AI Midjourney by academic Andres Guadamuz in an apparent attempt to draw attention to the legal issues surrounding AI-generated art.

Stålenhag was not amused. In a series of posts on Twitter, he said that while borrowing from other artists is a “cornerstone of a living, artistic culture,” he dislikes AI art because “it reveals that that kind of derivative, generated goo is what our new tech lords are hoping to feed us in their vision of the future.”

The dawn of a new era of AI art began in January 2021, when OpenAI announced DALL-E, a program that used recent improvements in machine learning to generate simple images from a string of text.

In April this year, the company announced DALL-E 2, which can generate photos, illustrations, and paintings that look like they were produced by human artists. This July, OpenAI announced that DALL-E would be made available to anyone to use and said that images could be used for commercial purposes.

 “As access to AI art generators begins to widen, more artists are raising questions about their capability to mimic the work of human creators,” Knight says.

Digital artist David OReilly, for instance, tells Knight that the idea of using AI tools that feed on past work to create new works that make money feels wrong. “They don’t own any of the material they reconstitute,” he says. “It would be like Google Images charging money.”

But it’s not clear if the legal framework is strong enough to protect an artist’s work from AI-generated imitation.

In a blog post, Guadamuz argued that lawsuits claiming infringement are unlikely to succeed, because while a piece of art may be protected by copyright, an artistic style cannot.

 

Lawyer Bradford Newman tells Knight, “I could see litigation arising from the artist who says ‘I didn’t give you permission to train your algorithm on my art.’ It is a completely open question as to who would win such a case.”

In a statement to Wired, OpenAI defended DALL-E 2, saying that the company had sought feedback from artists during the tool’s development.

“Copyright law has adapted to new technology in the past and will need to do the same with AI-generated content,” the statement said. “We continue to seek artists’ perspectives and look forward to working with them and policymakers to help protect the rights of creators.”

Painted art, like motion pictures or literature, evolves and builds upon everything and everyone that has gone before it. Imitation could be homage or pastiche. Brian de Palma’s work, including Body Double and Dressed to Kill, was heavily influenced by Alfred Hitchcock. Hitchcock’s film Psycho has been recreated shot-for-shot by Gus Van Sant, in color. The infamous shower scene in Psycho is credited to Hitch, but may have been designed by Saul Bass. There is no clear line between imitation as flattery and straight out plagiarism.

As AI grows sophisticated enough to produce longer-form narrative video, including deepfake or CG actors, the dividing lines will increasingly blur.

Short film The Crow shows just how far text-to-video has come.

 One worry for Hollywood is that while AIs might make certain types of production cheaper to churn out, the same technology could easily be in the hands of anyone. DALL-E 2 and Midjourney, for example, are simple enough to operate by just typing (or saying) a series of simple words.

On the other hand, there is an argument that AI art should not be given the same equivalence as art generated by a human. It should matter, the argument goes, that a piece of content that purports to mean something has been created by people who have actually lived the experience.

If AI video is inevitable, perhaps the AI owner, the AI prompters, and all the artists whose work went into the AI’s training data should be credited in the titles?

Watch This: “The Crow” Beautifully Employs Text-to-Video Generation

NAB


Sooner or later an AI, or several of them, is going to make an entire narrative film from script to screen. A step closer to that inevitable day has been provided by computer artist Glenn Marshall.

article here 

Marshall’s works are entirely created through programming and code art. In 2008 he won the prestigious Prix Ars Electronica for a music video he created for Peter Gabriel — unique in that it was created entirely out of programming and algorithms. He also created an AI-generated Daft Punk video.

The Crow is a finalist for The Lumen Prize, considered to be one of the most prestigious digital arts awards in the world, and is also eligible for submission to the BAFTA Awards.

“I had been heavily getting into the idea of AI style transfer using video footage as a source,” Marshall told The Next Web. “So every day I would be looking for something on YouTube or stock video sites, and trying to make an interesting video by abstracting it or transforming it into something different using my techniques. 

“It was during this time I discovered Painted on YouTube — a short live-action dance film — which would become the basis of The Crow.”

Marshall fed the video frames of Painted to CLIP, a neural network created by OpenAI.

He then prompted the system to generate a video of “a painting of a crow in a desolate landscape.”

Marshall says the outputs required little cherry-picking. He attributes this to the similarity between the prompt and underlying video, which depicts a dancer in a black shawl mimicking the movements of a crow.

“It’s this that makes the film work so well, as the AI is trying to make every live action frame look like a painting with a crow in it. I’m meeting it half way, and the film becomes kind of a battle between the human and the AI — with all the suggestive…”
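
For readers curious about the mechanics: CLIP itself does not generate images. It scores how well an image matches a piece of text, and a CLIP-guided generator repeatedly nudges its output to raise that score. Below is a minimal sketch of that scoring step using OpenAI’s open-source clip package; it illustrates the general technique rather than Marshall’s actual pipeline, and the frame filename and prompt are placeholders.

import torch
import clip  # OpenAI's open-source CLIP package (pip install git+https://github.com/openai/CLIP.git)
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)
# Placeholder frame and prompt -- illustrative only
image = preprocess(Image.open("frame_0001.png")).unsqueeze(0).to(device)
text = clip.tokenize(["a painting of a crow in a desolate landscape"]).to(device)
with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
# Cosine similarity: higher means the frame matches the prompt more closely.
# A CLIP-guided generator would adjust the image to push this score upward.
image_features = image_features / image_features.norm(dim=-1, keepdim=True)
text_features = text_features / text_features.norm(dim=-1, keepdim=True)
print((image_features @ text_features.T).item())

In Marshall’s case each frame of the source dance film is the starting point that the generator drags towards the prompt, which is why the result reads as a tug-of-war between the live action and the AI.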

Marshall says he’s exploring CLIP-guided video generation, which can add detailed text-based directions, such as specific camera movements.

That could lead to entire feature films produced by text-to-video systems. Yet Marshall believes even his current techniques could attract mainstream recognition.

Deep learning is not coming to Hollywood. It is already here.

 


Thursday, 1 September 2022

Encoding.com Talks Telestream Acquisition and Powering M&E VOD and Broadcast Workflows

Streaming Media

A few months after being acquired by Telestream, VOD media processing specialist Encoding.com claims that the combined service is now set to revolutionize cloud media processing workflows for Media & Entertainment.

article here

“What is exciting for us is that we’re bringing together the market leader in cloud based video processing in Encoding.com and the market leader in providing very powerful video processing tools for many if not most of the M&E companies in Telestream,” says Jeff Malkin, previously president at Encoding.com and now VP Cloud Revenue at Telestream. 

“Encoding.com have been powering hundreds of thousands of workflows over the past decade for large M&E companies,” Malkin told Streaming Media. 

“We have processed billions of videos and are doing a trillion API requests a year. Over the last 14 years we have evolved into a very mature and highly scalable platform," Malkin adds. "We’re now utilizing over 50 different media processing engines underneath our API to drive a growing suite of microservices like transcoding, ABR, broadcast packaging, DRM, captioning and subtitles conversion, and complex audio management. Our approach has allowed our customers to integrate our API once and have their workflows future-proof. 

“The acquisition means we can now start to add and replace engines specifically from Telestream. What that means for customers is that by adding Telestream’s IP to the Encoding.com platform we can support a much larger variety of post and broadcast workflows.” 

Before the acquisition, Encoding.com was powering OTT workflows for streaming services including those of Hollywood studios. This included ingesting mezzanine source video and transcoding it before adding ABR packaging, DRM, and DAI (dynamic ad insertion) triggers, and dealing with caption conversions.
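
As a rough illustration of what “integrate our API once” looks like in practice, here is a hedged sketch of a VOD job request: a JSON query posted over HTTP that names a mezzanine source and the output to produce. The field names follow the general shape of Encoding.com’s public JSON API (userid/userkey credentials, an AddMedia action, a source and a format block) but are illustrative and should be checked against the current documentation rather than treated as exact.

import json
import requests

# Hedged sketch of a VOD transcode job submission; field names are
# illustrative of the request shape, not copied from current docs.
job = {
    "query": {
        "userid": "YOUR_USER_ID",   # account credentials (placeholders)
        "userkey": "YOUR_API_KEY",
        "action": "AddMedia",        # submit a new VOD processing job
        "source": "https://example.com/masters/feature_mezzanine.mov",
        "format": {
            "output": "mp4",         # could equally be an ABR/HLS package
            "bitrate": "2400k",
            "size": "1280x720",
            "destination": "s3://example-bucket/outputs/feature_720p.mp4",
        },
    }
}
response = requests.post("https://manage.encoding.com",
                         data={"json": json.dumps(job)}, timeout=30)
print(response.text)  # a job ID on success, or an error payload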

“In the last couple of years we started supporting broadcast workflows but we weren’t doing it so well,” Malkin admits. “Before the acquisition many of the engines we were utilising were open-source engines which made it very difficult to support broadcast workflows.” 

Telestream, by contrast, has “already solved the difficulties in powering content preparation workflow for broadcast. So by adding Telestream Media Framework engines into our platform customers can now power OTT and more complex broadcast and post workflows. We are combining the best in cloud-based media processing but using the same trusted engines that have been driving on-premises video workflows for many years.” 

For example, one large US media company uses Encoding.com primarily for broadcast, “But now with Telestream Media Framework we can support the entirety of requirements in the cloud. I anticipate the pace of broadcast migration to 100% cloud processing will significantly accelerate. The same workflows we support on prem we can now support in the cloud.” 

Malkin also predicts an acceleration of workflow convergence. “In most media companies the OTT side of the house and digital video supply chains grew up separately to broadcast. We will soon be launching [early 2023] a complete new set of technology called Broadcast HLS that allows us to utilize a lot of the HLS content we are producing for OTT workflows and modify it for broadcast workflows. That allows a customer to remove the duplicate workflows and all the hardware and software they are managing for dedicated broadcast workflows.” 

Does this signal a move into live workflows for sports? Not yet, but there are hints this is on the roadmap. 

“We have the capability and technology to support live linear workflows but we’ve chosen not to support that right now," Malkin says. "It’s a different beast. Supporting live linear is fairly simple from a transcoding and processing perspective but complex from an operational standpoint needing 24/7 NOCs in place.” 

Several years ago, Encoding.com did engage in live production but pulled the plug a year later “for strategic and business decision making,” says Malkin. “Focusing on VOD has been a competitive advantage for us. VOD content preparation, transcoding, and packaging is very complex. Other competitors out there try and support VOD, live linear and provide analytics and a player and ad serving. When you start supporting multiple components and workflows it is difficult to be the best in world at any one. We chose to concentrate on VOD processing in the cloud.  

“Who knows how things may change. Now we have a lot more resources behind us with the Telestream acquisition… I wouldn’t be surprised if we started supporting that in the near future.” 

Most of the processing it does is with Amazon, but Encoding.com supports multiple cloud platforms including Azure, GCP, and OpenStack for private cloud implementations. 

Since being acquired by private equity firm Genstar Capital in 2015, Telestream has expanded its presence with a series of acquisitions; those in the last three years alone include Tektronix’s video business, PandaStream, EcoDigital, ContentAgent, Sherpa Digital Media, and Masstech. 

Ernest Russell, Senior Product Marketing Manager at Telestream says, “Every other company that we acquired became a product of Telestream when we incorporated it under the Telestream family. Encoding.com is different. It is best in class, cloud native and we are clear that we would not take away the Encoding.com brand.” 

The brand and logo are now badged ‘Encoding.com--powered by Telestream’. 

Malkin says that soon after the acquisition he and his business partner Greggory Heil (founder and CEO of Encoding.com, now in a senior role at Telestream) realized the full strategic value of combining the two companies. “At Encoding.com we are at the orchestration level. We developed the engines ourselves to operate in the cloud but we were not operating at the data layer. Telestream operates at the data layer with proven tools that customers are already comfortable with. Vantage was already being used by the customers I was going after. I was selling against Vantage on prem to customers. While customers understand the value of cloud – ease of deployment, efficiency of dynamically spinning up and down instances, SaaS cost models – they were weighing that versus all the powerful features that Vantage had that weren’t in the cloud. 

“Now the same tools that power Vantage on prem are being incorporated into Encoding.com cloud. We will leverage the framework to add new tools and make an unrivaled cloud solution.” 

Encoding.com claims to be the fastest cloud encoding platform available, and even guarantees it with queue-time SLAs. It claims to be 61% faster than on-premise hardware and 100 times faster than FTP. The queue time is as fast as 18 seconds, again with guaranteed SLA.  

A "Ludicrous Mode" accelerates this further by processing HD content at speeds up to 20% of real-time and UHD up to 30% of real-time.  

“Anyone transcoding Dolby Vision longform content knows it can take 48 hours to process just one hour,” Malkin says. “Now we’ve added Dolby Vision support for Ludicrous to slash the time to just a few hours.” 

In a press statement Dan Castles, CEO at Telestream, said, “Over Encoding.com’s 13-year history, the company has generated significant traction powering video supply chains for leading streaming platforms, content distributors, and web-based VOD platforms. Being cloud-native from inception, the technology fits perfectly within the strategic direction at Telestream to offer our customers the ultimate flexibility to meet their most demanding workflow needs across cloud, on-prem or hybrid environments. This acquisition, together with Telestream’s 25-year heritage of continuous media workflow innovation, cements our leadership position across the entire VOD cloud media processing ecosystem.” 

 

Getting the Right Look for Netflix’s The Tinder Swindler

 PostPerspective

Audiences were amazed by the depth of deception exposed in the Netflix true-crime story The Tinder Swindler. The film follows the victims of perpetrator Shimon Hayut, aka Simon Leviev, who posed as a billionaire diamond mogul on dating apps. He met multiple women and conned them out of thousands of dollars.

article here

“These women fall into a trap thinking that someone is in love with them,” says cinematographer Edgar Dubrovskiy. “Shimon Hayut is brutal, telling these women for months that they will buy a house together and have kids.”

The Tinder Swindler was produced for Netflix by Raw TV, the British indie that is credited with delivering cinematic production value to tough documentary stories.

Dubrovskiy and director Felicity Morris chose to acquire on RED, making this the first Netflix show of any kind shot on the Komodo camera system. It was mastered in HDR in a predominantly ACES color-managed Dolby Vision workflow.

The doc’s central story is told in interviews with Hayut’s victims, filmed in Sweden, Norway, Amsterdam, London and the United States. Interview footage was rounded out with stock footage, archive footage and dramatization, plus graphics of social media posts. Post production went through Molinare, where senior colorist Ross Baker handled the grade in FilmLight Baselight 4.

“From the first scene, it was clear Felicity and Edgar wanted to embrace a romantic and evocative world,” says Baker. “They achieved this with the soft warm tones from the interviews, using very minimal lighting. Edgar opted to just use practicals in the interviews to light the scene. The Komodo handled this with very little noise.”

Dubrovskiy sent Baker SDR stills as references for an approach that would distinguish the look between the victims and the journalists. Baker translated the SDR stills into an HDR grade.

“Using Molinare’s proprietary streaming software, MoliStream, we shared the output with Edgar and explored our options, discussing the look live,” Baker says. “Here, we delved into how we could treat the images in different ways and how this would be perceived in each dynamic range.”

As many documentary filmmakers are experiencing HDR in production for the first time, it can be a massive learning curve for all involved. Baker says the extra dynamic range can sometimes be too much as it natively appears in ACES and HDR. “Edgar liked the IPP2 roll-off that you see from the RED in SDR when you apply the soft tone-mapping and low-contrast options. This played nicely to the desired cinematic style required by Felicity. As these options are not available in ACES, I created a curve that would give us the same result while allowing a little extra headroom in the highlights. I created a custom grade stack that allows me to work with SDR content and push it to 1,000 nits (if desired) without breaking the images apart.”
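
For illustration, a generic soft highlight roll-off of the kind such a custom curve implements (not Baker’s actual curve) leaves the grade untouched below a knee point $k$ and compresses everything above it towards a chosen peak $M$, say 1,000 nits:

$$y(x) = \begin{cases} x, & x \le k,\\[4pt] k + (M-k)\left(1 - e^{-\frac{x-k}{M-k}}\right), & x > k. \end{cases}$$

The curve is continuous with slope 1 at the knee, so mid-tones keep the SDR look, while highlights roll off smoothly and never exceed $M$, which is the kind of extra headroom described above.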

Dubrovskiy shot 6K RAW to fit Netflix deliverables, with the data overhead allowing Molinare’s finishing team to punch into the image as required.

“Blending the Komodo footage alongside a wide range of sources is always going to be tricky as image sensor quality and lens choices play a massive part in the final look,” Baker says. “I’m fortunate to have worked on many documentaries that have used lots of different media sources, so I’m able to draw on years of experience on what works.

“A big part of making the sources work together without compromising the ‘hero’ camera is to understand the difference in contrast and chroma and try to align them all together. It’s never plain sailing but with the Baselight you have the tools at hand to make the adjustments needed.”

For example, the social media posts and text messages, which assist in driving the story, shouldn’t jump out of the edit. “The graphics are predominantly bright white screens that could be jarring. Controlling the luminance and softly vignetting with a small amount of grain helped to maintain consistency with the interviews and reconstructions.”

Baker adds, “This was my first time working with Edgar, and he is a very creative and technically minded DP. He was very passionate that the right look be achieved.”