Friday, 5 May 2023

Is this the answer to rendering’s failure to replicate vision?

AV Magazine

Never mind photorealism and optical fidelity in visualisation technology. FovoRender aims to overcome the limitations of linear perspective itself.


We hear a lot about ‘photorealism’ and ‘optical fidelity’ in cameras and 3D graphics engines – but what if they are not in fact perfectly representing the world as we see it?

A telling piece of evidence: artists have hardly ever used strictly accurate linear perspective to create pictures of 3D space, despite being rigorously trained in the method for centuries.

Why? Because other than in a few very special cases, linear perspective geometry simply cannot reproduce the kind of wide and deep visual space that we naturally experience.

It’s a problem that a tech start-up spun out of Cardiff Metropolitan University set out to tackle. In doing so, Fovotec claims to have developed a visualisation technology that will make it significantly easier for computer systems to replicate how we see objects, and even revolutionise how we interact with everything from the metaverse to architectural design and gaming.

Replicating vision

“The problem came to me twelve years ago when I tried to draw what I could see,” explains Robert Pepperell, co-founder of Fovotec and professor at the Cardiff School of Art and Design. “It seems straightforward. I’ve been an artist since I was a kid, I’ve taught in art schools, and kind of thought I knew what I was doing. But when I really started to think about it, I realised I couldn’t get everything I could see on to a rectangular sheet of paper.

“Even though I’ve been drawing all my life there were lots of things going on in my visual experience that simply weren’t being recorded when I made a drawing. That made me aware of the same issue in photography and film – anywhere where you make an image that represents the visual world you are compromising by making a choice about what you see, and what you don’t see.”

It dawned on Pepperell that something in human vision, fundamental to the way we interact with the world, was not being recorded in any of the images we make of those experiences.

“We can see our own bodies, our own nose, the frames of our glasses, in the periphery of vision that we don’t normally record. Then there’s the volume of space, this feeling that you are in the world and that it has a deep dimension to it. You tend not to get either from photographs or drawings. Everything is flattened out.”

It preoccupied Pepperell as an interesting art project – how far can you go with a piece of paper and pencil in recording those extra dimensions of the visual experience?

Other artists have grappled with the conundrum, and it has been examined by philosophers and vision scientists, but no one had come up with a practical solution.

Pepperell set himself the task, and it was at this point that creative technologist Alistair Burleigh, a senior research fellow at Cardiff Metropolitan University, came into the picture.

After graduating from Bristol University, Burleigh had set up the projection-mapping firm 3DWrap – expertise that he says proved very important when trying to map visual space.

“To date, there’s never been a really powerful fundamental solution to this problem,” says Burleigh. “We think we’re the first to deliver that solution. Doing that meant understanding how the brain processes and interprets images and objects.

“We started looking at this as a more technical problem around media in general. How can you make media emulate these additional aspects of human vision that it doesn’t normally capture?”

The pair undertook several years of lab experiments with people, different cameras, surfaces and lighting, first to prove that there is indeed more in our visual field than conventional imaging media were representing. Then they began to manipulate images computationally in software.

They asked why modern media – including virtually all photography and film – do not capture the full reality of human vision. The answer: everything uses linear perspective.

“Every stills camera, every cinema camera, every 3D engine uses linear perspective as the core geometry,” says Pepperell. “Linear perspective is not flexible enough or dynamic enough to capture the way people really see.”

The clue is in the name. Linear perspective is, by definition, linear. But we have known since at least the 17th century that human vision is non-linear.
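
For the technically minded, linear perspective is just the pinhole model: a point in space is projected onto the image plane by dividing by its depth. A minimal sketch in Python (our illustration, not Fovotec’s code):

    # Minimal pinhole (linear perspective) projection: a 3D point in
    # camera space (x, y, z), with z > 0, lands on the image plane at
    # focal length f by similar triangles. Straight lines in the scene
    # stay straight in the image -- the mapping is projective.
    def project_linear(x, y, z, f=1.0):
        return (f * x / z, f * y / z)

    print(project_linear(1.0, 0.5, 2.0))  # -> (0.5, 0.25)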

“Whether you are aware of it or not, the world you actually see is really quite curvy (a consequence of several things, including the physical shape of the eyeball). It was by doing hundreds of experiments in our lab, and exploring art history and the history of vision science, that we gradually worked out the basic structure of visual space.

“But then we found out something even more interesting. When we make images based on this complex geometrical structure – which we call ‘natural perspective’ because it’s based on natural human vision – people tend to say that they look deeper, more immersive and more real than images created using linear perspective.”
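
Fovotec has not published the geometry behind natural perspective, but a generic curvilinear mapping gives a flavour of the idea: instead of projecting rays with a pure tangent, peripheral angles are progressively compressed. A hypothetical sketch – the blend parameter k is entirely our own:

    import math

    # Hypothetical curvilinear camera model, for illustration only --
    # Fovotec has not published its natural-perspective geometry.
    # theta is the angle between a viewing ray and the optical axis.
    # k = 1 gives rectilinear (linear perspective): r = f * tan(theta);
    # as k -> 0 it approaches the equidistant mapping r = f * theta.
    def radius_curvilinear(theta, f=1.0, k=0.5):
        return f * math.tan(k * theta) / k

    for deg in (10, 45, 80):
        t = math.radians(deg)
        print(deg, round(radius_curvilinear(t, k=1.0), 2),
              round(radius_curvilinear(t, k=0.5), 2))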

It’s not as if linear perspective is broken. “Linear is a very elegant method of turning 3D space into a 2D image – the maths is simple – and it’s incredibly effective at what it does.

The problem is that it’s very, very limited. It does certain things brilliantly but there’s a lot it does badly or not at all.”

Most standard scene visualisations, for example, show only 30 per cent of our actual field of view. “There’s a whole mass of volume around that space you are not recording. There are inherent distortions in the image (a sphere, for example, will always be flattened on a 2D surface). If you try to record more of the image, the distortions are accentuated.”
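
To put rough numbers on it: the human visual field is commonly cited as spanning around 200 degrees horizontally, while a typical render camera covers 60 to 90 degrees – broadly in line with that 30 per cent figure. The geometric culprit is that, under linear perspective, the image half-width grows as f·tan(fov/2) and diverges as the field of view approaches 180 degrees. A quick check in Python:

    import math

    # Image half-width needed for a given horizontal field of view
    # under linear perspective: w = f * tan(fov / 2). It diverges as
    # fov approaches 180 degrees, which is why very wide rectilinear
    # images stretch unbearably at the edges.
    f = 1.0
    for fov in (60, 90, 120, 160, 175):
        w = f * math.tan(math.radians(fov) / 2)
        print(f"{fov:3d} deg -> half-width {w:6.2f}")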

Visual artists and cinematographers have long learned to counter this by, say, moving the camera back to get more content into the shot. But that’s not always a practical option. You could make the camera’s field of view wider with a wider-angle lens – “but you hit the inherent limits of linear perspective, to the point where it becomes unusable as an image,” says Pepperell. “You’ve got a lot more space, but not the kind of space you can relate to as a human being. That’s not how we see the world.”

Other tricks include using a fisheye lens – useful for some scenes, but rarely for a whole film, and hardly ever for a product visualisation where extreme realism is required.

Use in AV visualisation
FovoRender is the outcome of their work. Fovo (Field of View Opened) is a computer graphics process that implements natural perspective in 3D renderers. Fovotec is licensing it to 3D artists and users in the visualisation industries to create stills and animations, and using their feedback to iterate on development.

“It’s a very new concept to many people, so our main goal is to raise awareness. Sometimes people are surprised by the whole idea; mostly they are aware of the issue but didn’t think there was anything that could be done about it.

“We want to free the consumers of 3D content from the narrow letterbox viewports that linear perspective renderers currently impose on virtual worlds.”

Virtually every visualisation for architectural, retail, automotive or product design is done with linear perspective, so this is a prime market.

“Visualisation artists are really receptive,” says Pepperell. “For example, automotive designers and marketers or architects often want to illustrate quite cramped spaces and want a bigger field of view without stretching the image.”

Creative studio Lightfield London has used FovoRender to create architectural renders with higher visual impact and greater immersion. The R&D project Lightfield built with Fovotec was designed to mimic the “worst case scenarios” the studio faces when trying to visualise virtual spaces effectively, such as capturing large atriums and lobbies in a single shot.

For example, in standard Unreal Engine the only ways to render the full extent of a large atrium realistically are to manually pan or dolly the camera at a narrow field of view, or to place the camera far outside the building and clip away the intervening geometry – both very time-consuming.

With FovoRender’s enhanced wide fields of view, those techniques are no longer required: more space fits naturally into the same screen area, even in a single shot, with far less camera movement and setup effort.
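
To make the trade-off concrete (with hypothetical figures): under linear perspective, framing a wall of width W from distance d requires a horizontal field of view of 2·atan(W/2d), which balloons as soon as the camera moves inside the space:

    import math

    # Horizontal field of view needed to frame a wall of width W
    # from distance d under linear perspective (figures hypothetical).
    def fov_needed(W, d):
        return math.degrees(2 * math.atan(W / (2 * d)))

    print(fov_needed(30, 40))  # camera outside the building: ~41 deg
    print(fov_needed(30, 5))   # camera inside the lobby: ~143 deg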

“FovoRender can potentially be effective enough for still images to replace the need for some walkthroughs/stills,” says Robin Smith, 3D Artist at Lightfield.

Evolution of imaging
While Fovotec is talking with Epic Games, the maker of Unreal Engine, and also works with Unity, its approach fundamentally conflicts with the way all current 3D engines render.

“The things we do are deviating from linear perspective and rendering engines don’t like that,” says Burleigh. “To get FovoRender to scale, which means for it to be used on any imaging device, we need to get to the GPU level and optimise the hardware processes. That is the longer term goal. Everyone watching movies or playing games should be able to receive the benefit of this underlying new form of geometry.”
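
The conflict is structural. GPU rasterisers assume the camera transform is projective, so triangle edges stay straight on screen; a nonlinear projection can be approximated per vertex – conceptually like the sketch below, which is our illustration rather than Fovotec’s method – but long edges then need fine tessellation to follow the curve, one reason deeper GPU integration matters:

    import math

    # Conceptual per-vertex nonlinear projection (not Fovotec's
    # implementation), the way a custom vertex shader might apply it.
    # The rasteriser still draws straight edges between the warped
    # vertices, so long edges must be finely tessellated or the
    # curvature of the projection is lost between them.
    def project_nonlinear(x, y, z, f=1.0, k=0.5):
        theta = math.atan2(math.hypot(x, y), z)  # angle off the axis
        r = f * math.tan(k * theta) / k          # curvilinear radius
        phi = math.atan2(y, x)                   # direction in image
        return (r * math.cos(phi), r * math.sin(phi))

    print(project_nonlinear(1.0, 0.5, 2.0))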

The software can also be applied to digital photographic media, although depth information is crucial. “You can do things with flat content but it’s not as powerful unless you’ve got depth,” Burleigh says. “Even smartphones have depth capture of some kind, and in future we will see much more volumetric media generated by lidar or light-field systems.”
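
Why depth is the key ingredient: with a depth value per pixel, an image can be lifted into 3D and re-projected through a different camera model, whereas a flat image only permits 2D warps. A self-contained sketch (again our own illustration):

    import math

    # With a depth value per pixel an image can be lifted into 3D and
    # re-projected through a different camera model; without depth,
    # only flat 2D warps are possible. Illustration, not Fovotec code.
    def unproject(u, v, z, f=1.0):
        # invert the pinhole model: image point (u, v) at depth z
        return (u * z / f, v * z / f, z)

    def reproject_equidistant(x, y, z, f=1.0):
        theta = math.atan2(math.hypot(x, y), z)
        phi = math.atan2(y, x)
        r = f * theta  # fisheye-style radius, finite even at 90 deg
        return (r * math.cos(phi), r * math.sin(phi))

    print(reproject_equidistant(*unproject(0.5, 0.25, 2.0)))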

The team says its work is part of a wider development in imaging which will evolve into dynamic interaction via head tracking, voice command or gesture.

“Our experience of watching a movie or a game in future won’t be on a flat, static screen but will encompass a more volumetric experience, with images that interact with our behaviour. We see Fovotec as part of the evolution of viewing methods, bringing greater realism into the way images are captured, processed and displayed.”
