NAB
Software, cloud services, and data-driven digital tools are
changing the face of moviemaking, but the way cameras work has remained
essentially unchanged for a century. That may be about to change, as technical
innovation alters what it means to truly capture a scene.
https://amplify.nabshow.com/articles/how-do-you-get-all-the-on-set-data-ever/
“I think the camera of the future will be a flat camera,
about as thick as an iPad, that absorbs and times light and can calculate
depth,” Sam Nicholson, ASC, the CEO of virtual production company Stargate
Studios, says in an article for the Frame.io blog.
Since the invention of film, cameras have worked by focusing
light onto a flat plane (whether photochemical film or a digital image sensor)
that turns the action in front of the lens into a two-dimensional image. Even
3D camera systems work this way, just with multiple sensors.
But productions have started capturing a lot more data on
set than just that picture. As Frazer, who wrote the Frame.io piece, points
out, many cameras now have accelerometers to record data about tilts, pans,
and movements. There are also intelligent lens systems (from Cooke Optics,
among others) that record metadata for iris, focus, and focal length.
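To make that concrete, here is a minimal sketch of what one frame of such sidecar metadata might look like. The `FrameMetadata` class and its field names are hypothetical, invented for illustration; real systems (Cooke's /i protocol, for instance) define their own schemas.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class FrameMetadata:
    """One frame's on-set sidecar data (hypothetical schema)."""
    timecode: str
    focal_length_mm: float   # from the intelligent lens
    focus_distance_m: float
    iris_t_stop: float
    tilt_deg: float          # from the camera body's accelerometer
    pan_deg: float
    roll_deg: float

frame = FrameMetadata("01:02:03:04", 35.0, 2.4, 2.8, -1.5, 12.0, 0.1)
print(json.dumps(asdict(frame)))  # ready to travel alongside the image
```

Records like this, synchronized to timecode, are what hand post-production teams the camera's state for every frame.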
These new types of data are vital for post-production teams,
like VFX departments, whose job it is to convincingly mesh their digital
creations into the real-world scene that was captured by the camera.
“This is one of the key issues facing modern filmmakers,”
Frazer says. “How do you capture more, better information about the world
around your camera in a way that enables modern post-production techniques?
Bridging that gap between the real world and the digital world is something
that modern productions need to prioritize, because our workflows and tools
will continue to demand more and more data.”
He suggests that photogrammetry is the answer, although
there are many different routes to generating extra data about the real
world in front of the camera.
To Frazer, the fundamental advantage of photogrammetric
capture is that, by acquiring multiple images of a scene from multiple points
of view, cinematographers are no longer limited to a flat image with no real
depth information.
That result is achieved not with a single camera but with
camera/sensor arrays. The resulting data can be used in post to manipulate the
scene, including resetting the focal point or deriving entirely new (virtual)
camera angles. The approach also goes by the names volumetric capture, light
field capture, and computational cinematography, and another potential output
of the data is a hologram.
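To see why multiple viewpoints unlock depth, here is a minimal two-camera sketch using OpenCV's triangulation routine. Everything in it (the intrinsics, the one-meter baseline, the test point) is invented for illustration; the principle is that two flat, depthless observations of the same point from known positions pin down its full 3D position.

```python
import numpy as np
import cv2  # pip install opencv-python

# Two calibrated cameras from a hypothetical array, one meter apart.
# K holds shared (invented) intrinsics; camera 2 is offset along x.
K = np.array([[800.0, 0, 640], [0, 800.0, 360], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])               # at origin
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])   # 1 m baseline

# A test point 5 m in front of the rig, imaged by both cameras.
X = np.array([0.2, -0.1, 5.0, 1.0])
uv1 = P1 @ X
uv1 = uv1[:2] / uv1[2]
uv2 = P2 @ X
uv2 = uv2[:2] / uv2[2]

# Two 2D observations from different viewpoints recover the 3D point.
Xh = cv2.triangulatePoints(P1, P2, uv1.reshape(2, 1), uv2.reshape(2, 1))
print(Xh[:3, 0] / Xh[3, 0])  # ~[0.2, -0.1, 5.0]: depth recovered
```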
Photogrammetry itself, which specifically measures physical
objects and environments by analyzing photographic images, has long been
a staple of VFX.
Frazer cites its use by VFX teams on Quantum of
Solace (2008) to simulate Daniel Craig and co-star Olga Kurylenko in
skydiving free fall. A variation of the technique was also used to create the
animated holographic advertisements called “solograms” in the live-action
remake of Ghost in the Shell.
As multi-camera arrays become less expensive, photogrammetry
techniques will become more powerful.
Marc Côté, founder and CEO of software developer Real by
Fake, says photogrammetry (or computational cinematography) could allow an
editor not just to select the best take but also to dictate the precise
camera angle.
“In the Avid you could change the camera’s position to help
with timing or even create a new shot if you don’t have the right angle,” he
says. “Just imagine the Avid timeline with a window showing what you’re seeing
from a given camera angle that allows you to go into the shot and change the
angle.”
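What would going into the shot and changing the angle look like mechanically? If the scene has been captured volumetrically as a 3D point cloud, rendering a new angle amounts to projecting the points through a virtual pinhole camera whose pose the editor is free to move. A minimal sketch, with invented intrinsics and a random stand-in cloud:

```python
import numpy as np

def render_points(points_world, R, t, f=900.0, cx=960.0, cy=540.0):
    """Project a captured 3D point cloud into a virtual pinhole camera
    at pose (R, t); moving (R, t) is 'changing the angle' in the edit."""
    cam = points_world @ R.T + t      # world -> camera coordinates
    cam = cam[cam[:, 2] > 0]          # keep points in front of the camera
    u = f * cam[:, 0] / cam[:, 2] + cx
    v = f * cam[:, 1] / cam[:, 2] + cy
    return np.stack([u, v], axis=1)

# Nudge the virtual camera 10 degrees around the vertical axis.
theta = np.radians(10)
R = np.array([[np.cos(theta), 0, np.sin(theta)],
              [0, 1, 0],
              [-np.sin(theta), 0, np.cos(theta)]])
cloud = np.random.rand(1000, 3) * [2, 2, 1] + [-1, -1, 4]  # stand-in scene
print(render_points(cloud, R, t=np.zeros(3))[:3])
```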
That prospect would be alarming for a cinematographer, but not as alarming
as the changing face of the camera itself and of the core ability to record
light.
Nicholson, for example, thinks the acquisition of depth
information will become so important to filmmakers that cameras may eventually
be made without traditional lenses.
“Think about how small your cell phone lens is. Put a
thousand of them together, right next to each other on a flat plate, and now
you’re capturing 1,000 images, all offset a little bit and synchronized, and
using AI, you put them all together. Each frame is a 1,000-input photogrammetry
frame.”
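The simplest two-lens slice of that idea is classic stereo parallax: a feature's pixel shift (disparity) between two neighboring lenses on the plate encodes its distance. A minimal sketch, with invented focal length and lens spacing:

```python
import numpy as np

# Pinhole-stereo relation for two neighboring lenses in a flat array.
# Both values below are illustrative assumptions, not real hardware specs.
focal_px = 700.0     # lens focal length, expressed in pixels
baseline_m = 0.005   # 5 mm spacing between adjacent lenses on the plate

def depth_from_disparity(disparity_px):
    """Depth in meters from positive per-pixel disparity (pinhole model)."""
    return focal_px * baseline_m / np.asarray(disparity_px, dtype=float)

# A subject whose features shift 3.5 px between adjacent lenses:
print(depth_from_disparity(3.5))  # ~1.0 m
```

Scaled up to a thousand lenses, every scene point gets hundreds of such estimates to reconcile, which is where the AI fusion Nicholson describes comes in.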
He singles out the pixel-shift technology found in Sony’s
Alpha-series cameras as a hint of what comes next.
In pixel-shift mode, the camera takes a rapid series of
exposures, moving the sensor slightly between each one. Each movement is just
enough to shift the sensor’s color-filter array by a single pixel, which
improves color resolution in the final image by letting the camera gather red,
green, and blue light at each photosite instead of just one filtered color.
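A toy simulation of that combining step, assuming a standard RGGB Bayer layout and a perfectly static scene: four exposures, each offset by one photosite, together give every pixel a true red, green, and blue sample.

```python
import numpy as np

def bayer_channel(y, x):
    """RGGB pattern: returns 0 (R), 1 (G), or 2 (B) for a photosite."""
    if y % 2 == 0 and x % 2 == 0:
        return 0
    if y % 2 == 1 and x % 2 == 1:
        return 2
    return 1

def capture(scene, dy, dx):
    """One exposure with the sensor shifted by (dy, dx) photosites.
    Each photosite records only the channel its color filter passes."""
    h, w, _ = scene.shape
    raw = np.zeros((h, w), dtype=scene.dtype)
    for y in range(h):
        for x in range(w):
            raw[y, x] = scene[y, x, bayer_channel(y + dy, x + dx)]
    return raw

def combine(exposures, offsets, shape):
    """Merge the shifted exposures so every pixel gets real R, G, B."""
    h, w = shape
    out = np.zeros((h, w, 3))
    counts = np.zeros((h, w, 3))
    for raw, (dy, dx) in zip(exposures, offsets):
        for y in range(h):
            for x in range(w):
                c = bayer_channel(y + dy, x + dx)
                out[y, x, c] += raw[y, x]
                counts[y, x, c] += 1
    return out / counts

scene = np.random.rand(8, 8, 3)              # toy stand-in for a real image
offsets = [(0, 0), (0, 1), (1, 1), (1, 0)]   # the 2x2 pixel-shift cycle
exposures = [capture(scene, dy, dx) for dy, dx in offsets]
merged = combine(exposures, offsets, scene.shape[:2])
assert np.allclose(merged, scene)            # full color, no demosaicing
```

Because each scene point is sampled through every filter color across the cycle, the merged frame needs no demosaicing interpolation, which is exactly the resolution gain pixel shift delivers.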
“What if it actually looks for depth data when it shifts?”
Nicholson asks. “If it can shift back and forth fast enough, you could do
photogrammetry with a single chip and a single lens. It’s photogrammetry, but
on steroids.”
Still think this is for the birds? Consider another advance,
recently debuted by Apple. Object Capture is a photogrammetry tool that
stitches together images of an object to create a high-quality 3D model.
During the unveiling at the WWDC21 conference, Apple said
developers like Maxon and Unity are already using Object Capture to explore
entirely new ways of creating 3D content within their apps, including Cinema
4D and Unity MARS.
It seems photogrammetry is becoming more realistic, accurate
and, above all, easier.