Wednesday, 20 November 2019

The next generation of immersive sees the light

IBC
In Steven Spielberg’s movie Ready Player One there’s a shot of actor Tye Sheridan putting on virtual reality (VR) headgear which transitions imperceptibly from real to virtual cameras as the shot moves to an extreme close-up. In Gemini Man, Will Smith’s digital double is among the most realistic yet created for the screen.
Both made use of a Light Stage facial scanning system at Google, and they are just two of a number of breakthrough applications led by Paul Debevec, a senior scientist working in the company’s immersive computing wing.
A pioneer in image-based rendering who directed the experimental short The Campanile Movie in 1997 using photorealistic animation techniques adopted by the makers of The Matrix two years later, Debevec was named one of the top 100 innovators in the world aged under 35 by MIT in 2002. He has been working with Google since 2015 as well as being an adjunct professor at the USC Institute for Creative Technologies in Los Angeles.
IBC365 caught up with Debevec at the VIEW Conference for visual effects in Turin where he presented Google’s latest efforts to capture and process light fields for a more realistic sense of presence in VR.
“Filming in 360 degrees only captures one perspective on how different materials react to light,” he says. “Light fields can give you an extremely high-quality sense of presence by producing motion parallax and extremely realistic textures and lighting.
“We need to replicate how the world reacts to you as you move your head around and there are clues to this with how light bounces off surfaces in different ways.”
VR at a crossroads
It is not, however, a great time to be in consumer VR. The BBC has just disbanded the team it created to make VR content, the Disney and Sky-backed VR tech venture Jaunt was recently sold to Verizon, and Google has halted sales of its Daydream View smartphone headsets.
Debevec believes VR is still “on the incline” but admits it was hyped out of proportion.
“So over-hyped that [Google] pulled me and my group out of our environment at the University. For a moment it looked like VR had potential as a new and interesting media and that it would become a platform that, if you were not on it, you would miss the boat. That kind of mindset gets a big tech company to throw people and resources at something.”
He says the main concentration in the tech industry now is on augmented reality (AR) but flags that it’s another instance “where the VPs and execs see it both as an opportunity with great potential and a risk that they’d miss the boat if they don’t get involved.”
There is a quality problem with VR which Debevec is trying to solve.
“Users are presented with a stereo view in any direction. If your head moves, the whole image comes with you. In effect, your whole perceptual system is attached to the world and that causes nausea.”
He says: “If you want to create a great virtual experience that takes advantage of 6 degrees of freedom (6 DoF), we need to record not just two panoramas but an entire volume of space that is able to be explored interactively as you move your head around.”
Light field is the answer. It’s a means of capturing the intensity and direction of light emanating from a scene and using that information to recreate not only the volume of the space but subtle light changes, shadows and reflections.
A very brief history of light field
The idea goes as far back as motion picture’s founding father Eadweard Muybridge who, in 1872, recorded subjects moving sequentially in still images.
A hundred years later, another array of cameras was used to take images of a subject simultaneously, combined into a time-slice and used to create synthetic camera movements.
Deployed first on film in Wing Commander and Lost in Space then, ironically, on The Matrix, virtual camera techniques have become increasingly sophisticated.
“Light field rendering allows us to synthesise new views of the scene anywhere within the spherical volume by sampling and interpolating the rays of light recorded by the cameras on the rig,” he says.
Under Debevec’s direction, Google has built a number of light field camera arrays. These include a modified Odyssey Jump called Oddity which consists of 16 GoPros revolving in an arc and triggered to take photographs synchronously.
“Absolutely the key concept of light field rendering is that once you record all the rays of light coming into that sphere (scene) you can use the pixel values and the RGB values of each image to create images from different perspectives and views where you never actually had a camera,” he explains.
“By sampling or interpolating information from the hundreds of recorded images, you can synthetically create camera moves moving up and down forward and back – every view you might want to view in a VR headset with 6 DoF.”
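As a rough illustration of that interpolation step (a minimal sketch in Python, not Google’s pipeline; the camera layout, image arrays and helper names are hypothetical), a new viewpoint can be approximated by blending the images from the cameras nearest to the requested position, weighted by how close each one is:

    import numpy as np

    def synthesize_view(images, cam_positions, target_position, k=4):
        # Crude novel-view approximation: blend the k nearest cameras,
        # weighted by inverse distance to the requested viewpoint.
        # Real light field rendering resamples individual rays; this
        # blends whole images only to illustrate the interpolation idea.
        cam_positions = np.asarray(cam_positions, dtype=float)
        dists = np.linalg.norm(cam_positions - target_position, axis=1)
        nearest = np.argsort(dists)[:k]
        weights = 1.0 / (dists[nearest] + 1e-6)
        weights /= weights.sum()
        out = np.zeros_like(images[0], dtype=float)
        for w, idx in zip(weights, nearest):
            out += w * images[idx]
        return out

    # Toy usage: 16 'cameras' on a circle, each holding a dummy 4x4 RGB image.
    angles = np.linspace(0, 2 * np.pi, 16, endpoint=False)
    positions = np.stack([np.cos(angles), np.sin(angles), np.zeros(16)], axis=1)
    images = [np.full((4, 4, 3), float(i)) for i in range(16)]
    view = synthesize_view(images, positions, np.array([0.5, 0.5, 0.0]))

Real light field renderers work per ray rather than per image, but the principle of weighting recorded samples by their proximity to the desired view is the same.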
Test shoots included one aboard the flight deck of NASA’s space shuttle Discovery at the Smithsonian Institution’s Air and Space Museum.
Google focused on static scenes first, partly so it could work with relatively inexpensive camera rigs and also to perfect techniques required to create the best image quality.
When light field camera maker Lytro folded last year, with Google in pole position to acquire its assets, it was Debevec who decided not to pursue development of its technology.
Rather than camera arrays, Lytro had built single body video cameras with dozens of micro-lenses including a cinema camera that was the size of a small car.
“That should be in a museum,” Debevec says. “The main drawback of Lytro’s system was that its spatial resolution was decimated by the lens array. If they had an 11-megapixel sensor, the output would only be 1K x 1K images.”
Light field video experiments
When Google turned to video, it retained the camera array arrangement but needed even higher-quality machine learning algorithms to generate interpolations.
This is what Google’s computer vision experts have advanced with a machine learning process it calls DeepView.
“DeepView gives quite high-quality view interpolations using an ML technique,” he explains. “It’s not depth maps plus geometry but a volume with RGB-alpha output.”
In a first test, the team modified the Oddity rig into one called Iliad, using its 16 GoPros to generate 100 RGB-alpha depth layers. With this data, they were able to generate synthetic camera moves around such ephemeral elements as smoke and fire, as well as recreating realistic reflections and specular highlights.
“It’s not completely artefact free but it blew our minds,” Debevec says.
Its latest light field camera array is its largest yet. The Sentinel comprises 47 4K action sports cameras capable of capturing a 120 x 90-degree field of view.
One application is as an aid for postproduction effects including camera stabilisation, foreground object removal, synthetic depth of field, and deep compositing.
“Traditional compositing is based around layering RGBA images to visually integrate elements into the same scene, and often requires manual artist intervention to achieve realism especially with volumetric effects such as smoke or splashing water,” he says. “If we use DeepView and a light field camera array to generate multiplane images it offers new creative capabilities that would otherwise be very challenging and time-intensive to achieve.”
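The RGB-alpha layer output is what makes this kind of deep compositing straightforward. Below is a minimal sketch, assuming a back-to-front stack of RGBA planes of the general kind DeepView produces (the shapes and layer count are illustrative, not Google’s actual format): each plane is laid over the accumulated result with the standard ‘over’ operator, and a synthetic element can be spliced in between planes before the stack is flattened.

    import numpy as np

    def flatten_mpi(layers_back_to_front):
        # Collapse a back-to-front stack of (rgb, alpha) planes into one image
        # using the 'over' operator: each plane partially covers whatever has
        # been accumulated behind it.
        rgb0, _ = layers_back_to_front[0]
        out = np.zeros_like(rgb0)
        for rgb, alpha in layers_back_to_front:
            a = alpha[..., None]
            out = rgb * a + out * (1.0 - a)
        return out

    # Toy multiplane image: 8 planes of 4x4 RGB + alpha, ordered back to front.
    h, w = 4, 4
    layers = [(np.random.rand(h, w, 3), np.random.rand(h, w)) for _ in range(8)]
    frame = flatten_mpi(layers)

    # A graphic or other element can be inserted as an extra plane between
    # two depths before flattening -- the essence of deep compositing.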
At its offices in Playa Vista, Google has also built a larger volumetric light stage capable of scanning the whole human body, not just the face. It’s one of a number of such capture stages springing up around the world. Hammerhead VR operates one based on Microsoft technology in London. Paramount and Intel have built one in LA covering 10,000 sq ft, the world’s biggest, ringed with 100 8K cameras.
At Google, experiments continue with DeepView, including recording light fields of Google staff performing various simple movements, then using machine learning to render them into entirely new scenes, complete with detailed illumination that matches the new environment.
There are problems, though, in building the technology out to capture larger volumes.
“We wish we could take you all around a room in a light field but we’d have to move the camera to different parts of the room then find a way of linking images captured from each position. Just managing the amount of data is still daunting at this point. We’d have to ask machine learning to step in and help us.”
He is sceptical of holographic displays, although he believes the technology will advance.
“Any solution to this needs to have an extremely high pixel density,” Debevec says. “We may have hit the limit of human vision for conventional displays, so is there enough market to create 1,000 pixel-per-inch (PPI) displays, let alone 5,000 and 10,000 PPI displays, that will allow you to use the pixel surplus to output arrays of light in all directions?”
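To put rough, purely illustrative numbers on that ‘pixel surplus’ (these are not figures Debevec quoted): if a conventional display already saturates the eye at around 1,000 PPI, a 10,000 PPI panel behind a lens array could in principle trade the extra density for directional views, roughly a 10 x 10 grid of ray directions per perceived pixel.

    # Back-of-envelope: how many view directions a pixel surplus could buy.
    # Assumed numbers for illustration only.
    panel_ppi = 10_000        # hypothetical dense panel
    perceived_ppi = 1_000     # spatial resolution kept for the viewer
    surplus = panel_ppi // perceived_ppi
    views_per_pixel = surplus * surplus   # directions packed under each lenslet
    print(f"{surplus} x {surplus} = {views_per_pixel} ray directions per perceived pixel")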
Editorially too, Debevec thinks there’s a lot of learning to do for VR to become as compelling an experience as cinema.
“We need to figure out how to tell stories in immersive filmmaking. Once you fill a user’s whole field of view so that they can see everything all the time and you take away the director’s ability to zoom in on close-ups, you are giving them a lot of extraneous information.
“It would be like reading a novel where between any line there would be a whole paragraph describing what is going on in the rest of the scene. You would lose the story, become confused. The danger for immersive media is that it doesn’t focus the user’s attention.”

Friday, 15 November 2019

Behind the scenes: War of the Worlds

IBC
Peaceful Edwardian England turns into a chaotic warzone and then an apocalyptic world of red weed, in the new mini-series of HG Wells’ classic.
Peter Harness (Jonathan Strange & Mr Norrell for TV) adapted the novel into three parts in Mammoth Screen’s production for the BBC, directed by Craig Viveiros (And Then There Were None).
Relocating the landscape from Surrey to the North of England, the production shot in and around Liverpool including at a former oil-blending plant turned make-shift studio on the Birkenhead docks.
At the time of the story’s turn-of-the-twentieth-century setting, Britain is the world’s superpower, but the invasion begins to sap the life from the seat of empire.
The contrast between the industrial power of man and the creeping alien world is reflected in the organic sound design of the title sequence which sets the tone for the series.
“Craig wanted the world to feel vibrant, full and powerful,” says Tony Gibson, sound effects editor, Molinare. “We talked a lot about getting an alien feel to the world slowly creeping in with the insect sequences before we fully reveal the alien sounding red world.”
Gibson combined research into the period with his own library of sounds accumulated from years of drama work.
“The pre-invasion sections have a sense of hubris, of an Empire at its peak but on the verge of starting to fall,” says re-recording mixer Dan Johnson (Molinare).
Some characters talk of Britain’s military and technological superiority and this is represented by the sound of machinery and the technology of the time. These include sounds of trains and printing presses, cameras, a giant telescope, a gramophone and, later, machine guns.
The Martian technology, on the other hand, is alien and unknowable. “It had to be uncanny but not in a synthetic way – things are stranger when they are almost familiar but different in some way,” Johnson says.
The black smoke had to be tangible, thick, suffocating and enveloping. “In a way you are trying to use sound to compensate for the lack of some of the senses such as touch, smell, taste. The occasionally exaggerated use of sound helps to fill in these gaps in the viewer’s experience.”
The red vegetation is intended to sound almost creepily alive – half-plant, half-animal. “Adding the sounds of crowds and people helps to enhance the idea that appalling things are happening.”
Dialogue editor Filipa Principe (Molinare) recorded many hours of crowd ADR to reflect the horror on screen.
As the story unfolds, large scale scenes of destruction and mayhem are contrasted with much smaller domestic, intimate scenes.
“It was really important to highlight this – as the contrasts really enhance the nature of these scenes,” Johnson says.
That entailed controlling the dynamics so that scenes of terror and destruction sounded loud without fatiguing the audience. One way this was achieved was by building in a slightly quieter section before something that was supposed to be deafening.
“Our volume perception is based on comparing to what was there before so if everything is constantly loud then it can lead to nothing being perceived as loud,” explains Johnson. “We made extensive use of high-quality, controlled distortion to simulate what happens when sounds are loud in real life. For example, the tripod roar is distorted to mimic the sound of the air itself distorting and being unable to contain the amount of sound.”
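One common way to get that kind of controlled, loudness-mimicking distortion (a generic sketch, not necessarily the specific processing Molinare used) is soft clipping, which gently squashes peaks and adds the harmonics we associate with overloaded air or equipment, rather than hard-cutting them:

    import numpy as np

    def soft_clip(samples, drive=4.0):
        # Soft-clip an audio buffer (floats in roughly -1..1). Higher 'drive'
        # pushes more of the signal into saturation, adding harmonics that
        # read as 'loud' even at modest playback levels.
        return np.tanh(drive * samples) / np.tanh(drive)

    # Example: a 440 Hz tone at 48 kHz, driven into gentle saturation.
    t = np.linspace(0, 1.0, 48_000, endpoint=False)
    tone = 0.8 * np.sin(2 * np.pi * 440 * t)
    distorted = soft_clip(tone, drive=6.0)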
Viveiros and Johnson maintained the illusion of volume by doing much of the final mix on small TV speakers. “If the sound didn’t feel loud on those then we had to keep working until it did,” he adds.
The Martians
As expected, the Martians and their tripods were one of the main creative talking points.
Their construction, sound and movement had to appear not only ahead of their time for the Edwardian period but light years beyond our own.
“In the artwork of the 1900s there was a lot of Meccano style riveted steel,” Viveiros says. “These were monsters born out of the industrial revolution but we wanted something that also had a life of its own, that could regenerate.”
The producers tasked Realtime UK, a Lancashire based facility renowned for video game cinematics, to create the VFX.
From an audio point of view the producers wanted the tripods to sound natural and not too robotic. There were clues in the script. For example, Ogilvy, the astronomer, talks about something in the mysterious capsule that lands on Horsell Common as sounding like clockwork. Gibson created effects that sound like giant clockwork “with a slight, uncanny twist to them.”
Viveiros briefed that the tripods’ voices should mimic whale song in being able to communicate with each other over great distances.
“We sampled a library of whale calls and morphed these with some didgeridoo, elephant calls and some fog horns to convey their range of emotion,” Gibson explains. “Dan worked wonders bedding this sound into the show and allowing the range for them to be heard at full volume.
“The tripods are covered in a graphite-like substance that is always breaking and regenerating as they move and they needed to sound massive. We put this together from ice sheets cracking, building destruction libraries and some ingenious Foley work.”
The Martians themselves were designed with a membrane over their mouths leading the sound team to select a clicking / rattling effect recorded using a bottle of water and software called Dehumanizer.
“This allowed us to integrate other animal sounds like seals and pigs into their screeches and chatter,” Gibson says. “Once again we collaborated with the Foley department to get a leathery and slimy sound for their movement.”
Foley focus
Hackenbacker (Ciaran Smith, Foley mixer; Stuart Bagshaw, Foley editor; and artists Ruth Sullivan and Paula Boram) worked on Foley. Almost all of it was featured in the final mix.
“I often think of Foley as the audio equivalent of focus, subtly drawing the audience’s attention to something on screen,” observes Johnson.
And in a not so subtle way it helps to bring a level of detail, focus and reality to what is often a highly constructed scene. For example, when Stent, another astronomer, touches the capsule, the Foley squelch allows the audience to imagine what it would feel like if they had touched it themselves.
Red grade
Director of photography James Friend worked with Molinare senior colourist Andrew Daniel to develop a “classy and low contrast look for the project”, in particular testing mixtures of lighting and grading for the ‘red world’ scenes.
“In the end we decided to sort of meet each other half way to allow room for manoeuvrability in the grade,” says Daniel. “We had the idea that we could make the show feel like a black and white film that had been colourised. It gave the series a feeling that it had existed for a long time and we were only just discovering it.”

Industry Innovators: Bill Warner, Avid

IBC
Necessity is the mother of invention and all that, but there are only a handful of people equipped to turn a problem into an opportunity. Faced with the torture of editing video using existing linear technologies, 28-year-old Bostonian Bill Warner channelled his frustration into launching the company that has dominated digital editing for the quarter-century since.
Author Russell Evans declared Avid’s breakthrough “the biggest shake-up in editing since Méliès played around with time and sequences in the early 1900s.”
That ignores the strides made by Sergei Eisenstein, Charlie Chaplin or any of the dozens of editors and filmmakers working before 1980 but there’s no doubt that Avid shook up editing technology and put a whole new set of creative storytelling possibilities into the hands of a lot more people.
Put it this way: pre-Avid, film editing relied on the century-old technique of cutting and splicing frames of celluloid together using flatbed systems like the KEM, Steenbeck and Moviola.
This slow and clunky approach (which nonetheless demanded a discipline many older editors find lacking in modern NLE) was actually a lot more non-linear than the prevailing videotape editing technique, which by the 1970s involved playing back master footage from one machine and copying select takes onto another.
This was the linear wall confronting Warner in 1984. He was a marketing manager at 3D graphics workstation manufacturer Apollo Computer when he decided to make a series of ‘how it works’ videos to assist the company’s sales team.
“I went to the local postproduction house armed with $3,000 to edit the first one and was bitterly disappointed,” he relates. “I assumed that something like Avid existed when in fact there was this giant disconnect between the idea of digital editing and the actuality of computer editing at that time using tape decks to perform frame-accurate linear editing.”
The seed was sown. “My initial strategy was to wait for technology to advance,” says Warner. “Seventy-five videos of increasing complexity later, it was clear nothing was going to give.”
In 1987, he quit the job and set up Avid in his garage.
Accident and opportunity
History may have taken a different turn had Warner not suffered an accident aged 18 that severely damaged his spinal cord and left him using a wheelchair and, eventually, crutches.
While in rehab, the teenager had the idea to help other paraplegics take back control by designing a ‘whistle switch’ to perform functions like turning lights on and off, changing TV channels and dialling the telephone.
He started the Bionic Control Corporation to market the device, which helped him get into MIT. He wrote his MIT thesis on improving a handcycle for people with disabilities (an interest that saw him start-up New England Hand Cycles to manufacture them a few years later).
It was his experience at Boston-based 3D CAD company Computervision and at imaging hardware developer Lexidata, plus his time at Apollo, that gave him the keys to build Avid.
“What is interesting is how all the pieces of the puzzle came together,” he says. “People forget how hard this was. Nobody had shown motion video in a computer for anything longer than 30 seconds without specialised hardware.”
At Apollo, Warner had helped win a major contract with the General Motors-owned Electronic Data Systems for a workstation capable of displaying high-resolution video using third-party graphics boards.
The processor Apollo engineered was nicknamed Giraffe “because that’s how far we were sticking our necks out.” It formed the basis of the workstations on which Warner would demonstrate the first Avid prototypes.
“To do something like this you have to get people to imagine what is possible,” Warner says. “If we resign ourselves to living in a world where we have to show the real thing in order for them to believe then we’re all in trouble.”
The Avid originals
Warner’s friend, engineer Eric Peters, and Greg Cockcroft, a college graduate who had clicked with Warner at a business meeting for Apollo, joined him in the endeavour as CTO and “chief problem solver” respectively.
They worked out how to scan, compress and digitise video and display the result on screen.
The proof of concept which convinced investors to put $500,000 into initial development was based on a series of still images displayed rapidly to simulate video.
“For the first time you could see what looked like live video, but there was no sound,” Warner says.
For all the ingenious soldering and coding, it was the efforts they made to talk with editors which laid the foundation for success.
Peters says: “We began by taking more than two years to study the art and craft of editing. We made no products during this time, only prototypes, which we showed to hundreds of working editors, in every corner of the industry, from feature films to commercials to music videos to infomercials and industrials. We built a lot of prototypes and tried a lot of models.”
“We got really good at listening to customers,” Warner adds. “Eric would go out and listen to 100 editors, write down copious notes of all their problems and go back and solve 97 of them.”
Avid’s first employee was software engineer Jeffrey Bedell, who was the primary author and architect of the original Media Composer code. Fellow software engineer Joe Rice and Tom Ohanian designed the UI for the Avid/1 Media Composer, shown behind closed doors at NAB 1988 and officially debuted a year later, this time running on Apple Macs.
With its three processors working in parallel, the Avid/1 could simultaneously handle full-motion colour video at 30 fps and two channels of 44.1kHz, 16-bit CD-quality sound. The main editing window simulated the familiar source and record monitors of a traditional editing system. A timeline window displayed a map of the edited sequence. Priced between $50,000 and $80,000, the Avid/1 integrated all of the monitors and tape recorders that were previously needed to get from one place to the next in the video editing process. It did so on a PC-based platform and in a visual way that let editors click directly on an image.
Most importantly, it provided random access. “Linear editing meant that if you changed your mind, you lost your work. It was really painful. Non-linear meant you could keep building without losing your previous selections.”
Warner, Peters, Cockcroft and Bedell are the patent holders of the ‘method and apparatus for manipulating digital video data’ at the heart of the machine.
By the end of 1989, the company posted revenues of $1 million. By the time it went public in 1993 revenues had jumped to $112 million. That was the same year that Lost in Yonkers, directed by Martha Coolidge, became the first studio feature to be edited (by Steven Cohen) on the system.
By 1995 dozens of feature film editors had switched to Avid, away from celluloid, a move cemented when Walter Murch accepted the Academy Award for editing 1996’s The English Patient, which he cut on the Avid.
Success, resignation, invention
By that time, Warner had resigned from the board.
“I am a starter, not a person who will scale a company up,” he says.
He has since designed the first speech-based electronic secretary, a precursor to Siri and Alexa that was sold to Orange in 2000, and set up FutureBoston, designing high-resolution mapping systems that combined past, present and future maps as layers long before Google Maps. He continues to focus on open-source designs of mobility tools for people with disabilities and has angel-invested in more than fifty companies.
Avid was awarded an Emmy in 1993 for the Media Composer and a technical Oscar in 1999 for its success in transforming the editing process in filmmaking.
One of the creative changes enabled by NLE is in the number of cuts per show. Film and TV shows of yesteryear can appear slower because the average shot length (calculated by dividing the length of a film by the number of shots) is longer than in more modern productions. Figures from online database Cinemetrics illustrate the gradual decrease in ASL: 1930 classic All Quiet on the Western Front has an ASL of 9.2 seconds; Chaplin’s The Great Dictator of 1940, 14; Don’t Look Now in 1973, 5.8; and the original Terminator from 1984, 3.9. Edgar Wright’s Baby Driver has an ASL of just 1.6 seconds.
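The calculation itself is trivial (a quick sketch; the runtime and shot count below are invented for illustration):

    def average_shot_length(runtime_minutes, shot_count):
        # ASL in seconds = total running time divided by number of shots.
        return runtime_minutes * 60 / shot_count

    # Hypothetical example: a 113-minute feature cut into 1,700 shots.
    print(round(average_shot_length(113, 1700), 1))  # -> 4.0 seconds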
Warner was recently inducted into the US Patent Office’s National Inventors Hall of Fame, which puts him alongside such luminaries as Thomas Edison.
“Clearly my work has nothing on Edison – but I guess I did enough!” he says. “I am proud of Avid having played a foundational role in the creation of digital NLE.”

Wednesday, 13 November 2019

Craft leaders: Peter Greenaway, filmmaker

IBC
Cinema should evolve beyond the medium of storytelling. Speaking to IBC365, Peter Greenaway – filmmaker, artist and provocateur - casts his critical eye over Hollywood
Peter Greenaway would hate to be classified and indexed like one of the subjects of his films A Zed and Two Noughts and Drowning by Numbers. The maverick filmmaker, who became synonymous with British arthouse cinema in the 1980s, would prefer to be recognised for his own uncompromising evolution as painter, director, curator, art historian, theorist, video installation artist – even as VJ.
Above all, he relishes being a provocateur.
Greenaway’s anti-Hollywood sentiments are well documented and designed to court controversy. His main attack is that all film and TV is reliant on the medium of text rather than forging a genuinely independent and visually inspired language.
Calling for all scriptwriters to be given the sack - or shot, as he said in one interview - is mischievous shorthand for his belief that cinema is selling itself short by being rooted in storytelling.
When challenged that his distaste for cinema actually stems out of love for its possibilities, Greenaway agrees.
“That’s an accurate description of basically how I feel having played with cinema for about 40 years,” he tells IBC365.
“Despite the evidence, cinema is not very visual and is really a literary medium,” he contends. “Nobody seems to make anything without writing a script. Most cinema is some form of illuminated text. I would argue that we’ve yet to see any piece of cinema worthy of the name.”
The 77-year-old is impatient for cinema to fulfil its promise as a breakthrough medium for human expression, finding it stuck in the same formula throughout its short history.
“We’ve had 8000 years of lyric poetry, 5000 years of theatre, 400 years of the novel. Text has played so many games. Perhaps I am being unfair since cinema is still infant but does anyone need multi-screen adaptations of Jane Austen?”
Conservative format
Long before Martin Scorsese and Francis Coppola scorned Marvel movies as anti-cinema, Greenaway was condemning Harry Potter and The Lord of the Rings as cynical money-making exercises.
He feels the similarity between their argument and his to be superficial, instead chastising these cinematic legends for being part of the problem.
“Scorsese makes the same films as DW Griffith in 1910,” Greenaway says. “His films are structured the same way and are presented, organised and formatted the same way. He is working in very much a conservative format.”
I ask if he has ever been emotionally moved by a film but he ignores this. It doesn’t matter since, in his view, cinema needs to be ripped from its comfort zones of story and form whether that film is Casablanca or Apocalypse Now. He even invokes the Bible to make his case.
“In Genesis, it says ‘In the beginning was the word’. Not true,” Greenaway declares. “In the beginning was the image because how can He send anything out without an image?”
He continues: “If you believe cinema was born in 1895 [when the Lumiere brothers patented a movie camera and projector] then you’ll know that in the same year HG Wells, Johann Strauss and Van Gogh were alive.”
In contrast to the huge changes in artform since then “in literature (post-Borges), music (post-Stockhausen) and painting (post-impressionism),” he contends that the dial on cinema has barely budged.
“Other art forms have traded in old rituals and come up with so many extraordinary things but cinema is intent on recording narrative,” he insists. “There are very few filmmakers who have a great and profound sense of visual literacy.”
I counter that cinema language has evolved further than he gives it credit for and that there are cinematographers pushing the art of working in light, form and colour. Greenaway asks me to name some and I suggest Vittorio Storaro (The Conformist) and Roger Deakins (No Country for Old Men).
“Cinema has always worked with light,” he dismisses. “Charlie Chaplin worked with light. No-one is taking responsibility for cinema as a form of visual intelligence.”
He continues: “As much as I admire Deakins - his cinematography on Blade Runner 2049 is to be respected - I don’t regard that as a film and Deakins, like everyone, works to a script."
With some irony then, Greenaway is accepting the Lifetime Achievement Award for Directing given by Polish cinematography festival Camerimage later this month.
He reserves a special place for Sergei Eisenstein, the Russian filmmaker who shaped the language of film editing, and Last Year at Marienbad, a 1961 film by French director Alain Resnais which perhaps comes closest to Greenaway’s own experiments with heavily stylised non-linear form.
“Eisenstein was a shining light and one of the very few really cinematic film directors,” says Greenaway, who made biographical drama Eisenstein in Guanajuato in 2015.
Documentary films and attempts at realism get short shrift too. “What’s wrong with the notion of experiencing the world for what it is rather than fixing it in time? Do you want artforms to be about reality when the human imagination is the most extraordinary apparatus?
“I’m not against literature. I’m not against narrative,” he insists. His favourite book is The Bridge of San Luis Rey, a 1927 novel by American author Thornton Wilder. “I just don’t think they belong in cinema.”
He is fascinated by the form rather than the meaning of text, having made calligraphy central to his 1996 feature The Pillow Book.
It is painting, though, which he prides as the highest non-narrative artform – albeit one that cinema can yet emulate and surpass.
“I am an archivist”
“I suppose it all began a long time ago in adolescence when I was first conscious of the ephemerality of everything,” the Welsh-born polymath explains. “There is nothing in my background or family or education related to painting but it occurred to me that drawing and painting was an attempt to somehow fix the ephemeral, to make it permanent. It is an archival pursuit. I could be described as an archivist.”
After art school in London he “tried and failed to make some sort of living as a painter,” then turned to the art of the moving image.
“I started writing articles as a journalist about the connection between the 450,000-year history of painting and the 120-year history of cinema,” he says, concluding that cinema was “desperately lacking in experimentation and needed a radical rethink of its theories and practice.”
It is the attempt to shoulder this responsibility himself which has been Greenaway’s lifelong project.
“Cinema can fuck with everything and is part and parcel of other art forms but I have always felt that it is lacking,” he says. “I am trying to invent a cinema which is present-tense, multi-screen and non-narrative. People find it difficult to imagine that. Maybe my demands are too high.”
He gave “incredibly simple and rudimentary” storylines as a concession to audiences in his best-known works The Draughtsman’s Contract (1982), The Belly of an Architect (1987) and The Cook, The Thief, His Wife and Her Lover (1989), but it is the scientific organisation of these films which is striking. He uses taxonomy, maps, grids, numbers, diagrams, symbols, quotations and codes to break what he would describe as cinema’s formal rules.
Acclaimed for their visual acuity, his films are often derided as pretentious, fixated banally on themes of sex and death, ironically called out for their formalism and judged cold and devoid of humanity.
Such criticism misses his coruscating take on Thatcher-era politics in ‘The Cook, The Thief’ and the wit on display, notably in The Draughtsman’s Contract.
There are elements of the cinema that Greenaway says he does appreciate. These include the use of sound (and silence), an experience of colour and performance, “a certain choreography” and, in particular, how all of these are “interwoven, connected, contradicted.”
More recently he has embraced technology as a way of breaking the theatrical frame and says the film industry ought to have settled on 60 frames a second as a more ideal speed than 24fps.
“Cinema is a mechanical art,” he says. “Theatre technology is constantly changing. Like James Cameron playing with 3D. All these digital technologies I enjoy and have used, but they are still a retinal phenomenon that doesn’t fundamentally change the syntax and vocabulary of cinema. They’ve been used for certain profit and entertainment but have not metamorphosed cinema in any new direction.”
Multimedia art
His first major multimedia work was 2003’s The Tulse Luper Suitcases, which spread across three films, a website, two books, DVDs and a touring exhibition.
His partnership with Dutch artist Saskia Boddeke has yielded several operas and music theatre performances with Greenaway writing librettos. Their latest multimedia installation, Artuum Mobile, features his sculptures and paintings.
“I’ve never stopped painting or working on installations,” he says. “I’m still making films and still playing with museum-ology, the notions of gallery and replication.”
Greenaway has previously stated his desire to be euthanised when he reaches 80 – just three years away. It’s one reason why he lives in Amsterdam.
“That was an intellectual argument,” he dismisses. “But can you tell me who did anything valuable after the age of seventy? Darwin, Einstein, Picasso had all done their best work by the time they were fifty. After fifty we are all fiddling in the dark or repeating what we’ve done before. My best ideas were 30 and 40 years ago and I’m still trudging through them.”
He suggests, with characteristic impishness, that old people only exist beyond the age of 80 “out of a sense of selfishness”, unable to contribute, using up valuable resources.
So, what does motivate him? “Curiosity,” he says. “And closing the gap between desire and practice in every part of life.”
Mostly he means bridging his conceptualisation of a purer cinematic expression with the ambition of achieving it.
“I am looking for a retina cinema in the sky.”

Tuesday, 12 November 2019

BBC and Sky Collaborate to Fend off Common Foe

StreamingMedia

The BBC and Sky have signed a new deal that will see the two broadcasters collaborate across content and technology.
Sky Q customers will have access to the iPlayer app and BBC digital interactive red button, while the BBC is to experiment with Sky's AdSmart addressable advertising platform to run personalised promotional content.
The deal fits the wider drive by UK broadcasters to defend their position against the rising tide of streaming services.
"It's a sign of the times," says Paolo Pescatore, analyst at PP Foresight. "The tie-up builds on Sky's approach to aggregating a wide range of services. More importantly for the BBC, it will enhance its quest to break down the barriers to adoption of BBC services on other platforms. It's a win for consumers."
The collaboration means Sky users will now be able to access the iPlayer app directly through a specific button on the main menu page. 
Using AdSmart data, the BBC will trial targeted trailers to viewers watching BBC channels through Sky+ or Sky Q boxes—about 13 million households. The promos would be shown during breaks between programmes when watching BBC channels live.
In a press statement, Sky said the two broadcasters were also exploring other ways they can work together, including the possibility of making audio/radio app BBC Sounds available on Sky and NOW TV.
Addressable linear is a bigger and more strategic prize for broadcasters than SVOD ads, and Sky holds the key.
According to Enders Analysis, in the context of dwindling linear viewing and rocketing online video ad spends, "the adoption of Sky AdSmart and similar services on YouView and Freeview could take addressable TV ads from a sideshow to a pillar of revenue."
A study released by Sky in August suggested addressable TV cuts channel switching by half (48%) and boosts ad engagement by more than a third (35%). It has created 17,000 campaigns for 1,800 advertisers since AdSmart launched five years ago. It is expected to reach 60% of UK households by 2021.
In September, Sky signed PSB Channel 4 to the platform.
"All eyes are now on the remaining sole PSB yet to support AdSmart," said Pescatore.
ITV has resisted. The commercial broadcaster is using its own addressable technology in partnership with Amobee to allow advertisers to target the 30 million viewers signed up to its catch-up service, ITV Hub.
However, all of this is a drop in the ocean compared to the grip which Facebook and Google have over the UK ad market. According to eMarketer, the U.S. tech giants will command 68.5% of the £14.56bn ($19.41bn) UK digital ad market this year, a figure expected to surpass 70% by 2021.

BritBox to the Rescue

Last week, ITV, BBC, Channel 4 and Viacom-owned Channel 5 launched SVOD service BritBox. It costs £5.99 ($7.60) a month for HD and multiscreen and contains mostly archive programming from the UK broadcasters.
It carries a different content catalogue from the BritBox service that BBC/ITV launched in the US and Canada two years ago and which has amassed 650,000 users. 
Samsung will feature BritBox as a ‘Recommended App’ on its smart TVs, and mobile operator EE, owned by BT, has signed as the exclusive mobile partner for the SVOD.
Deltatre is providing the user experience platform for BritBox (as it already does for the US version); Irdeto has the security piece; and LoginRadius is responsible for customer relationship management and the BritBox access management platform. Akamai is providing the CDN.
According to Enders, ITV's investment in the service is "modest when compared to its global competitors—up to £25 million in 2019, £40 million in 2020 and declining thereafter—but it is a prudent low-risk entry."
While BritBox is being pitched as complementary rather than a rival to Netflix and Amazon, there are questions about whether the British consumer is prepared to pay again for content they may feel they've already paid for via the licence fee.
The obvious criticism, notes BBC media editor Amol Rajan, is that it contains a lot of repeats, which "means the user experience and a high quality of curation will be vital."
He adds: "The broadcasters have been in negotiations for years, because they have very different needs and interests. The BBC wouldn't want to damage iPlayer too much. ITV, Channel 4, and Channel 5 need to keep their advertisers on side.
"That they have all reached agreement despite these different priorities shows they feel they simply have to make a big, joint play in the streaming market."
Sky—now part of Comcast—has also made moves to shore up its audience. Last month, it began trialling live news streaming on Amazon's Twitch and separately agreed to extend its content deal with HBO.
In March it struck a deal with NBC Universal to bring AdSmart to U.S. clients.
Additionally, Sky has signalled its intent to challenge BT Sport for rights to the UEFA Champions League. Bids for the three years of rights to the popular soccer tournament from 2021/22 are being submitted today. BT Sport paid over £1.2 billion for exclusive rights to the last round.

Monday, 11 November 2019

Jackinabox Explains Touchscreen Vision Mixer

BroadcastBridge
Jackinabox is a unique flyaway gallery/PPU designed by John Surdevan and Sam Gardner, respectively a multicamera director and a computer vision software engineer. After working together for years and looking at how things could be improved, they began to customise their own suite of live production kit that harmonises industry standard systems with their own hardware and software. John Surdevan explains to The Broadcast Bridge how they have blended Blackmagic Design gear with the Raspberry Pi Compute Module to create a multi-touch vision mixer to help them and other production teams work more creatively, faster, smarter and more economically.
Jackinabox is one part production company, one part R&D lab and one part hire shop, where all its kit is available to hire as standalone units (vision mixer, ISO recorder, live streamer) or as a fully configured PPU.
Over the course of working together the pair began to tweak vendor products with additional hardware and software, turning a standard vision mixer set-up into a multi-touch tool.
“This is more like conducting than using a traditional console,” says Surdevan. “For me, as a director, it’s beneficial in terms of what I can do in a live cut. Without it I would feel compromised. It’s basically brought lots of extra features that don’t exist anywhere else.”
Lift the lid on the integrated 5U flight case design and up pops a 24" HD touch screen. Simply plug in the power, cameras and outputs and it's ready to go, but the touch screen control interface is the revelation.
“It’s a totally intuitive and liberating experience,” Surdevan says. “By directly touching the multiview it influences quicker decision making and promotes creativity. This is especially effective with live music and unpredictable action. The mixer has only 1 line of delay (less than a frame), which is great for events with IMAG (big screens) with lip-sync requirements.”
The main hardware is a Blackmagic Design ATEM 4K vision mixer. This does all the heavy lifting in terms of vision mixing; the team take its multiview output and key a graphical user interface generated by a Raspberry Pi over the top of it using another Blackmagic ATEM (the original TVS). Touch screen commands are sent to the Pi, which then relays the instructions to the ATEM 4K to select sources, cut and perform a whole range of other functions.
Jackinabox haven’t customised the ATEMs themselves. The ATEMs are already designed to receive signals from other devices on a network, but rather than those signals coming from Blackmagic hardware/consoles or its desktop software, Jackinabox have written their own software that lives on the Raspberry Pi.
“This was written in C++ to make it really robust and it boots up quicker than the ATEM,” Surdevan says. “The Pi has a number of little services running on it. One is a relay to control the ATEM and listen out for its status. Another is a service that relays tally status. We also have our own little boxes that mount on camera hot shoes. They run on ‘Spark Core’ boards, which are like mini versions of the Raspberry Pi, and they connect over WiFi.”
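A minimal sketch of that relay pattern, in Python for brevity rather than the team’s C++ (the command names and the send_to_atem helper are hypothetical stand-ins for whatever protocol library actually talks to the switcher): a touch on the multiview is mapped to an input number and forwarded to the ATEM as a preview-then-cut.

    # Hypothetical relay running on the Pi: multiview touches in, ATEM commands out.

    MULTIVIEW_GRID = (4, 4)   # 16 sources laid out 4 x 4 on the touch screen

    def touch_to_input(x_norm, y_norm):
        # Map a normalised touch position (0..1, 0..1) on the multiview
        # to the 1-based input number shown in that grid cell.
        cols, rows = MULTIVIEW_GRID
        col = min(int(x_norm * cols), cols - 1)
        row = min(int(y_norm * rows), rows - 1)
        return row * cols + col + 1

    def send_to_atem(command, **params):
        # Placeholder for the network call to the ATEM (a real implementation
        # would use the switcher's control protocol or an SDK).
        print("ATEM <-", command, params)

    def on_touch(x_norm, y_norm):
        source = touch_to_input(x_norm, y_norm)
        send_to_atem("set_preview", input=source)
        send_to_atem("cut")

    # Example: a tap near the top-left of the multiview cuts to input 1.
    on_touch(0.1, 0.1)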
The Jackinabox Ingest system offers more than multicamera ISO record decks; the team have customised it so it is now possible to record 15 HD feeds, timeline-synced and ready to go.
“When combined with our ingest system it allows anyone on the team with a smart phone, tablet or laptop to see any of those return feeds,” Surdevan explains. “As a camera operator, you might want to see what another particular camera operator is doing so you’re not chasing the same shot – or a sound engineer can see during a sound check what guitar is being plugged ready for the next band. Everyone on your team has a matrix to see what camera feed they want - or all of them if they want to. There are lots of little advantages along these lines.”
Rapid Turnaround Editing
Designed to be paired with Jackinabox Ingest systems, the package makes it possible for an editor to be working with footage as it’s still being recorded.
“Seconds away from realtime, an editor can immediately get to work re-editing the vision mix (in Adobe Premiere, with Resolve to come), adding graphics, VTs, lower thirds and logo bugs,” he says. “It’s brilliant for concerts, conferences, or anything requiring a rapid turnaround. You can potentially complete the edit, get client approvals and be loading out before the cameras have even been derigged.”
Rather than waiting around for backups at the end of a job, the ingest servers also create a realtime backup to a 30TB NAS, so the team can leave site quickly without bringing any additional hard drives.
There is now a local web server running on the Pi which allows other devices on the local network to see the live multiview and cut the program/aux1/2/3 buses in the same way as the built-in touch screen does. This requires one of the ingest systems to be operational at the same time, since it pulls the multiview video from there.
Enabling Production
It’s difficult to know where to start with the benefits of the Pi during production because technically every command ripples up to the Pi and then down to the corresponding device. At the same time, if it wasn’t for the ATEMs, tally boxes or the ingest systems, it would be useless, Surdevan says.
“Some features enable jobs to be done with a specification that otherwise would be impossible,” Surdevan says. “Others are just so useful, fast... and it’s actually fun to use which encourages better decision making and less resistance.
“Cost savings can potentially be off the scale for the right project. As some of these features don’t exist anywhere, the only other way to do them would involve lots more crew, lots more editing time, big/heavy and extremely expensive equipment.”
One benefit is being able to use the multiviewer in a browser – so someone could control the mixer in exactly the same way as on the touch screen, but via a mouse or by touch on an iPad or even a phone.
Quicker and Better Production
Other benefits mean fewer crew are required, making productions more manageable. “So many things are just quicker and with a better result. Some things are so much quicker that what could take someone days can be done in seconds.”
Speed isn’t just important because you can make do with fewer crew, though. A production will always be better and fulfil its potential if things are set up properly, with time to adjust and perfect.
“In my experience of live OBs, no matter how much budget/time there is, there’s always something to tweak 5 mins before TX,” Surdevan says.
Jackinabox are about to implement a new feature, a cut logger that creates a project file for XML/EDL/Premiere/Final Cut/Avid. According to Surdevan, this will let an editor open a project at a later date and see the cut points at the relevant times (using timecode) once married up with the TX recording. This gets even cleverer when connecting all the other camera ISOs, “meaning tweaking an edit is completely painless whilst having full control."
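A simplified illustration of what such a cut logger might write (a sketch under assumptions; the bare-bones CMX-style EDL events and timecodes below are invented), so that an NLE can rebuild the live cut against the TX recording and the ISOs:

    # Hypothetical cut log -> bare-bones CMX-style EDL.
    cuts = [
        ("CAM1", "01:00:00:00"),
        ("CAM3", "01:00:12:05"),
        ("CAM2", "01:00:27:18"),
    ]
    end_tc = "01:00:41:00"

    def write_edl(cuts, end_tc, path="live_cut.edl"):
        # Each logged cut becomes one EDL event; in/out points come from the
        # timecode at which the director cut, so the events line up with the
        # TX recording and the camera ISOs.
        lines = ["TITLE: LIVE CUT LOG", "FCM: NON-DROP FRAME", ""]
        boundaries = [tc for _, tc in cuts] + [end_tc]
        for i, (reel, tc_in) in enumerate(cuts, start=1):
            tc_out = boundaries[i]
            lines.append(f"{i:03d}  {reel:<8} V     C        "
                         f"{tc_in} {tc_out} {tc_in} {tc_out}")
        with open(path, "w") as f:
            f.write("\n".join(lines) + "\n")

    write_edl(cuts, end_tc)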
The system has already helped deliver results for a broad range of OBs ranging from BBC hidden camera shows to music festivals and award ceremonies.
These include John Grant and the BBC Philharmonic (BBC 6 Music & Radio 3 / line cut); festivals Kendal Calling and Bluedot (streaming/IMAG); Paul Heaton and Jacqui Abbott at Castlefield Arena Manchester (Channel 4); An Audience with Robert De Niro (IMAG and records); UB40 - Birmingham Arena / DVD; Roy Chubby Brown / DVD and Man City Women’s Awards / record. They also streamed the Charlatans live to cinemas around the world.
“On a personal note, even though time and money are probably the most important features to producers looking at bottom-lines, my favourite feature is still the ability to just touch and cut to what I want.”
“As a director, feeling good and free by way of having something enjoyable to use, makes me think clearer which lets me react quicker. All of this results in a tighter cut and I think all the crew feed off this, especially camera operators - which again maximises results as everyone wants to maintain delivering the absolute best production values.”
There is also a range of even more radical features that the team are finishing now and hoping to sell as Intellectual Property to a major vendor.