Monday, 18 April 2016

Faster than a ray of light

Broadcast 
Lytro’s light-field cinema camera could revolutionise film-making, while developments in holographic display are bringing Star Wars-style technology closer to reality.
Despite swapping celluloid for silicon, the fundamentals of recording images haven’t changed since the invention of photography.
New advances in light-field technology, a concept first conceived in the 1840s, are about to change all that. Light-field imaging is a way to capture or project the individual rays of light in a scene.
Technically, a light field is five-dimensional: three spatial dimensions (X, Y, Z) plus two angular dimensions describing each ray’s direction, with an intensity recorded for every ray.
Record all of this with enough fidelity and in theory you have a hologram. While there is currently no commercially available display capable of showing holographic video (see box), there are potential applications in using light-field data flattened into 2D to create visual effects and other image manipulations in post.
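To make that parameterisation concrete, here is a minimal Python sketch of the five-dimensional function described above; the names and the toy intensity function are illustrative, not drawn from any Lytro API.

```python
# Minimal sketch of the 5D light-field (plenoptic) parameterisation:
# a ray is identified by a position in space (x, y, z) plus a travel
# direction (theta, phi), and the field assigns an intensity to it.
from dataclasses import dataclass
import math

@dataclass(frozen=True)
class Ray:
    x: float      # spatial position
    y: float
    z: float
    theta: float  # polar angle of travel direction (radians)
    phi: float    # azimuthal angle (radians)

def toy_light_field(ray: Ray) -> float:
    """Stand-in intensity function L(x, y, z, theta, phi).

    A real capture system samples this function with finite spatial
    and angular resolution; here we just invent a value."""
    return max(0.0, math.cos(ray.theta))  # brightest for rays heading 'forward'

# In free space, intensity is constant along a ray, which is why capture
# systems usually drop one dimension and store a 4D field instead.
sample = toy_light_field(Ray(x=0.0, y=0.0, z=1.0, theta=0.1, phi=0.0))
print(f"L(x, y, z, theta, phi) = {sample:.3f}")
```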
Moving rapidly from science fiction to experimentation, light-field has made it onto the agenda at the SMPTE Cinema Conference at NAB and at the MPEG and JPEG committees, where a new working group covering light fields and sound fields has just been established.
What’s more, the company that brought light-field to the mainstream in 2012, with a consumer stills camera that enables users to refocus after capture, claims to have developed the world’s first light-field cinema camera.
“Light-field is the future of imaging,” declares Jon Karafin, head of light-field video product management at Lytro.
“It is not a matter of ‘if’ but ‘when’.”

Light fields can be captured in two ways. One is to synchronise an array of cameras, each recording a different point within the same space; the other is to place a microlens array (MLA), comprising hundreds of thousands of tiny lenses, in front of a single sensor. Lytro has chosen the latter approach, building on a decade of software R&D, 18 months of intensive hardware design and $50m (£35m) in funding.
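As a rough illustration of the MLA approach, the sketch below decodes a toy plenoptic sensor image into directional ‘sub-aperture’ views; the layout and sizes are assumptions for illustration, not Lytro’s actual sensor geometry.

```python
import numpy as np

# Hypothetical decode step for a microlens-array (MLA) sensor. Behind each
# lenslet sits a small block of pixels, each seeing the scene from a slightly
# different direction. Regrouping pixels by their offset within the block
# yields one low-resolution "sub-aperture" view per direction.
LENSLET = 4                    # pixels per lenslet side (toy value)
raw = np.random.rand(64, 64)   # stand-in raw sensor image (16x16 lenslets)

def sub_aperture_views(raw: np.ndarray, n: int) -> np.ndarray:
    """Return an (n, n, H/n, W/n) stack of directional views."""
    h, w = raw.shape
    blocks = raw.reshape(h // n, n, w // n, n)  # split into lenslet blocks
    return blocks.transpose(1, 3, 0, 2)         # index by intra-block offset first

views = sub_aperture_views(raw, LENSLET)
print(views.shape)  # (4, 4, 16, 16): 16 directional views, each 16x16 pixels
```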
The company released the Illum stills camera in 2014, and last November announced Lytro Immerge, a video camera designed for virtual reality, which it has not yet shipped. These products are stepping stones to the company’s new system, the specifications of which are astounding.

Currently in alpha test and due to launch during the third quarter of this year, the Lytro Cinema Camera carries the highest resolution video sensor ever made at 755 megapixels (MP), capable of recording 300 frames per second (fps) through an MLA comprising more than 2 million lenslets.
The next highest resolution sensor publicly announced is a 250MP model being developed by Canon.
By contrast, HD equates to 2MP and 4K to 9MP. The resolution needs to be vast because the system records directional information for every point in the scene; according to Lytro, the effective output resolution will be 4K. “We are leapfrogging the leapfrog, if you will,” says Karafin. “We are massively oversampling the 2D to be able to change the optical parameters in post.
Everything that makes the image unique is retained but you can re-photograph every single point in the field of view.”
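Some back-of-envelope arithmetic on those figures gives a sense of the oversampling; the exact sensor layout is unpublished, so treat these as illustrative.

```python
# Rough arithmetic on the figures quoted above (illustrative only;
# Lytro has not published the exact sensor layout).
sensor_px = 755e6          # 755MP sensor
lenslets  = 2e6            # "more than 2 million lenslets"
output_4k = 4096 * 2160    # a DCI 4K frame is ~8.8MP

px_per_lenslet = sensor_px / lenslets   # ~378 pixels behind each lenslet
oversampling   = sensor_px / output_4k  # ~85x more samples than the output frame

side = px_per_lenslet ** 0.5
print(f"{px_per_lenslet:.0f} px/lenslet (~{side:.0f} x {side:.0f} directional grid)")
print(f"~{oversampling:.0f}x oversampling relative to a 4K frame")
```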
For example, the shutter angle and frame rate can be computationally altered in post. As studios move towards higher frame rate cinema (Ang Lee’s Billy Lynn’s Long Halftime Walk is shot at 120fps), Lytro thinks there’s a market for being able to render the same project at 24fps or 120fps for theatrical release and 60fps for broadcast, at the touch of a button.
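The sketch below shows the general temporal-resampling idea behind such re-renders; it is not Lytro’s published pipeline, and the function name and shutter model are assumptions. Consecutive frames from a high-rate master are averaged to approximate a longer shutter at a lower output rate.

```python
import numpy as np

def retime(frames: np.ndarray, src_fps: int, dst_fps: int, shutter: float = 0.5):
    """Re-render a high-rate master at a lower frame rate.

    frames:  (N, H, W) array captured at src_fps.
    shutter: fraction of the output frame interval the virtual shutter
             is open (0.5 corresponds to a 180-degree shutter)."""
    step = src_fps / dst_fps                  # source frames per output frame
    span = max(1, round(step * shutter))      # frames inside the open shutter
    out = []
    for i in range(int(len(frames) // step)):
        start = int(i * step)
        # Averaging the frames inside the shutter window synthesises motion blur.
        out.append(frames[start:start + span].mean(axis=0))
    return np.stack(out)

master = np.random.rand(300, 4, 4)        # one second of toy 300fps footage
print(retime(master, 300, 24).shape)      # (24, 4, 4): a 24fps theatrical render
print(retime(master, 300, 60).shape)      # (60, 4, 4): a 60fps broadcast render
```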
With plug-ins for The Foundry’s Nuke available at launch, the system has a route into facilities, where manipulations including depth-of-field refocus, tracking, stereo 3D and Matrix-style VFX can be created from the same raw information. Lytro admits the technology is “very challenging” and that work remains to be done on the software algorithms, compression system and hardware.
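For a flavour of how after-the-fact refocus works, here is the textbook shift-and-add method (Ng et al., 2005); Lytro’s production algorithms are proprietary and doubtless far more sophisticated.

```python
import numpy as np

def refocus(views: np.ndarray, alpha: float) -> np.ndarray:
    """Shift-and-add refocus over a (U, V, H, W) stack of sub-aperture images.

    alpha sets the virtual focal plane: each view is shifted in proportion
    to its offset from the central view, then all views are averaged, so
    objects at the chosen depth align (sharp) and others smear (bokeh)."""
    U, V, H, W = views.shape
    cu, cv = U // 2, V // 2
    acc = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            du = int(round(alpha * (u - cu)))
            dv = int(round(alpha * (v - cv)))
            acc += np.roll(views[u, v], shift=(du, dv), axis=(0, 1))
    return acc / (U * V)

views = np.random.rand(4, 4, 16, 16)   # toy sub-aperture stack
near = refocus(views, alpha=1.0)       # focus on a nearer plane
far  = refocus(views, alpha=-1.0)      # focus on a farther plane
print(near.shape, far.shape)
```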
Nor will it be cheap. Too large to be handheld, the camera needs to be physically mounted to a dolly or crane and be cabled to a server up to 100 metres away.
Video village
The server itself is powerful enough to crunch data at up to 300GB/s. Even so, it will sit in a ‘video village’ supervised by technicians. The camera requires “a non-standard optical format” and Lytro will offer the unit with custom focal lengths.
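That throughput figure is plausible on a back-of-envelope basis; the bit depth below is an assumption, as Lytro has not published one.

```python
# Back-of-envelope check on the 300GB/s figure.
pixels  = 755e6   # sensor resolution
fps     = 300     # maximum frame rate
bits_px = 10      # ASSUMED raw bit depth (unpublished)

rate_gbs = pixels * fps * bits_px / 8 / 1e9
print(f"~{rate_gbs:.0f} GB/s raw")   # ~283 GB/s, in line with the quoted figure
```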
The whole system, including operators, is being offered for rental. Cinematographers, already wary of virtual production technologies eating into their core crafts of camera position and light exposure, are likely to feel threatened.
Anticipating this, Lytro has incorporated a physical focus control and aperture check on the camera to give DoPs reassurance that the decisions they make on set are preserved down the line.
“There are those who swear by film as a capture medium, but for most cinematographers there is no right or wrong, just a tool that best meets the creative requirements,” says Karafin.
“Ultimately, this is a disruptive technology and we need the feedback of the studio, cinematographer and VFX community to make sure what we are creating meets their needs, as well as helping them understand the creative control this unleashes.”
Light-field-captured video may appear little different to the viewer until there are holographic screens capable of projecting a three-dimensional image.
Unsurprisingly, there are major developments here too, with companies including Zebra, Holografika and Leia3D among those with designs.
“We are not there today but we will cross that threshold,” says Karafin. “Holographic is the next generation of display technology and will truly change the way we all think about capturing native imagery.”

LIGHT-FIELD RIVALS AND HOLOGRAPHIC DISPLAY

Fraunhofer IIS
Fraunhofer researchers believe light-field technology could be a more efficient means of creating VFX in post. The institute has test-shot a live-action film from a 16-HD-camera array, but its main work has been in devising software to compute the data and route it to a conventional post environment.
A plug-in for Nuke will be available soon. “An MLA will not work so well for movies because the baseline difference between the images captured by one sensor is too small,” says Fraunhofer’s Siegfried Foessel.
“It’s good for near distance but not larger distances, and for this reason we work on camera arrays.”
Raytrix
Raytrix has designed a microlens array with three different lens types in front of a 42MP sensor to capture 2D video plus 3D depth information.
This outputs 10MP at 7fps. Raytrix markets to the robotics and science industries because co-founder Christian Perwass believes light-field systems are not suitable for capturing most film scenes.

“They are workable with close-up subjects like a face, but if you want to extract depth information for scenes 10-20 metres away, you might as well use standard stereo 3D cameras,” he says.
Visby Camera
The US start-up is in stealth mode until 2017 on a light-field VR system, the first part of which will be a codec.
Founder Ryan Damm explains: “The potential data stream with light-field is enormous, since you need to encode and decode a 5D image at every possible angle.
To make this workable, the encoded files should not be much larger than today’s large video files.”
Leia3D
A team of former HP researchers is developing a holographic display using diffraction patterns.
Due for launch next year, the prototype can process 64 views with enough depth that a viewer can move their head around the image. Samsung and Apple have filed patents on similar technology. “Our patent has priority,” says co-founder David Fattal. “We are talking with them.”

It has also invented ‘hover touch’, the ability to interact with a hologram without touching the screen, and may combine this with technology that gives a physical sensation of touching a hologram, based on miniature ultrasonic transducers developed by the UK’s Ultrahaptics. “Holographic reality will be on every screen, everywhere,” says Fattal.
