Thursday, 27 January 2022

How Canon's VR lens is becoming integral to the Metaverse experience

RedShark News

3D stills and virtual reality video will soon become as second nature to creators as regular digital photography. Affordable products like the new Canon VR lens are arming professionals and consumers alike with the tools to capture three-dimensional assets to populate the emerging spatial internet.


The Canon RF 5.2mm F2.8L Dual Fisheye lens was launched late last year and received further publicity at the Consumer Electronics Show at the start of 2022. It is an interchangeable lens designed for the EOS R5 and is listed at less than $2000. 

“A lens like this opens up the world for users to go from 2D to 3D stereoscopic 180 VR,” says Brandon Chin, Technical Senior Specialist at Canon USA. “It means the multi-purpose use of the R5 can now be exploited in a completely new medium to deliver imaging for future content creation purposes. You now have VR in your camera bag.” 

VR videos can be published immediately on apps like YouTube VR and viewed in headsets like Oculus.

“You can imagine recording a concert and instead of seeing it in a flat two dimensional way we’re now able to see it with depth and also look around with freedom to view in a way that’s not communicated through conventional 2D apps,” Chin said. 

Most previous methods of capturing stereoscopic imagery relied on two cameras and two lenses paired on a rig, an approach that was not only expensive and complex but fraught with challenges: first in aligning the optics, then again in aligning the files in post.

“The big difference is that this lens is two separate optical systems mounted as one single lens so all the alignment that would normally take a custom rig to achieve - this camera can do on its own.” 

The dual circular fisheye lenses on the front of the camera are mirrored by two circular displays (for left and right eye) on the back. Recording of both images, however, is made as a single file to a single card. 

“Because you are getting one file from one camera the post process is substantially more streamlined. Optically it is doing the job of two separate lenses.” 

He also points out that because Canon makes the lens, the sensor and the software for the process, the previous difficulty of matching components manufactured by different third parties is eliminated.

The image sensor records 8K DCI “as a maximum”, although the captured resolution per lens will be slightly less than 4K due to the two image circles being placed side by side on the sensor.
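The arithmetic behind that figure is straightforward: splitting one 8K DCI frame between two image circles leaves each eye at most half the horizontal pixel count. A back-of-envelope sketch (the exact usable circle diameter is Canon's own and not stated here):

```python
# Per-eye resolution estimate, assuming an 8K DCI frame (8192 x 4320)
# with the two fisheye image circles placed side by side.
frame_w, frame_h = 8192, 4320

# Each eye can occupy at most half the frame width.
max_circle_px = frame_w // 2
print(max_circle_px)  # 4096

# The usable image circle is a little smaller than that half-width,
# which is why each lens resolves "slightly less than 4K" horizontally.
```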

The file can be brought into post using one of two apps: the new EOS VR Utility standalone app for Mac and PC or EOS VR Plug-in for Adobe Premiere Pro. 

Both applications will convert the side-by-side circular images into a side-by-side equirectangular 1:1 image, which can be output to different file types and resolutions.
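Canon's utilities perform this conversion with calibrated lens profiles, but the underlying geometry can be sketched. Below is a minimal illustration in Python, assuming an idealised equidistant 180-degree fisheye and nearest-neighbour sampling; it is not Canon's actual algorithm, just the standard mapping such tools are built on:

```python
import numpy as np

def fisheye_to_equirect(fisheye, out_size):
    """Remap one circular 180-degree fisheye image (equidistant projection
    assumed) to a 1:1 equirectangular image, nearest-neighbour sampling."""
    h, w = fisheye.shape[:2]
    cx, cy, radius = w / 2, h / 2, min(w, h) / 2
    H = W = out_size
    # Longitude/latitude grids covering the 180-degree hemisphere.
    lon = (np.arange(W) / (W - 1) - 0.5) * np.pi   # -pi/2 .. pi/2
    lat = (np.arange(H) / (H - 1) - 0.5) * np.pi   # -pi/2 .. pi/2
    lon, lat = np.meshgrid(lon, lat)
    # Unit view direction for each output pixel (optical axis = +z).
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    theta = np.arccos(np.clip(z, -1, 1))   # angle off the optical axis
    phi = np.arctan2(y, x)                 # angle around the axis
    # Equidistant fisheye: image radius grows linearly with theta.
    r = theta / (np.pi / 2) * radius
    u = np.clip((cx + r * np.cos(phi)).astype(int), 0, w - 1)
    v = np.clip((cy + r * np.sin(phi)).astype(int), 0, h - 1)
    return fisheye[v, u]

# Tiny demo: a single bright pixel at the fisheye centre stays at the
# centre of the equirectangular output.
fe = np.zeros((200, 200))
fe[100, 100] = 1.0
eq = fisheye_to_equirect(fe, 101)
```

A production converter would add per-lens calibration, interpolation, and stereo alignment on top of this basic remap.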

If using the Premiere Pro plug-in, you can drop the converted clips into the timeline and do colour correction in the normal way.

The parallax between the dual lenses is of course fixed, but some slight adjustments to the alignment can be made in post.

The camera does not support live streaming VR natively but does have an HDMI port. Chin says he wouldn’t be surprised if someone in the market would go out and “build some sort of ingesting application that will allow people to see very high resolution 180-degree imagery.” 

Asked whether Canon would look to add further depth-sensing technology (such as LiDAR) to the system, Chin said Canon was looking for feedback from the market. The company is targeting adoption of VR across many sectors such as training, travel, sports, live events and documentaries. 

“Innovators in VR are trying to do things that are extremely challenging technologically. This is a great new area that is unexplored by us. We are receiving all that information and feeding back to Canon Inc (the manufacturer) how to best support it.”  

“We’re very excited about what the future holds for immersive content and all the ways metaverse will play into our lives.” 

Imagery for Canon’s new immersive VR video calling platform, Kokomo, was captured using this lens.

This video (https://youtu.be/573sAhUYATk) gives a complete introduction to the Kokomo app and how Canon wants the 3D experiences of VR to be combined with the ease of video calling.

Currently in development but due for launch this year, Kokomo will allow users to video call in real time “with their live appearance and expression, in a photo-real environment, while experiencing a premium VR setting in captivating locations like Malibu, New York, or Hawaii.” 

The app uses Canon cameras and imaging technology to create realistic representations of users, so calls “feel like you are interacting face-to-face, rather than through a screen or an avatar.” 

Mass 3D asset creation  

The creation of 3D assets is one of many bottlenecks in the way of growing the 3D internet, or metaverse. Some developers think it might be solved by the advent of mass-market LiDAR. Recent cell phones (such as the iPhone 12) contain LiDAR scanners, putting this technology in the average user’s pocket.

Rumors abound that the iPhone 13 Pro could contain a second-generation LiDAR scanner, which, combined with machine learning algorithms, could turn the stills we take every day into three dimensions almost overnight.

“Many experts think 3D snapping is as inevitable as digital photography was in 2000,” reports Techradar.

It’s not just still images, either. LiDAR could hold the key to user-generated volumetric video. As pointed out by AppleInsider, patents published by Apple in 2020 refer to compressing LiDAR spatial information in video using an encoder, “which could allow its ARM chip to simulate video bokeh based on the LiDAR's depth info, while still shooting high-quality video.”

3D media management platforms like Sketchfab and Poly.cam are based on interoperability standards such as glTF and already enable viewing and interactive manipulation of 3D models via a web browser.  
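Part of what makes glTF browser-friendly is that the scene description is plain JSON (with binary geometry buffers referenced separately), so any language can inspect it without special tooling. A minimal sketch in Python using an inline document; a real asset would be loaded from a .gltf file, and the mesh name here is invented for illustration:

```python
import json

# A minimal glTF 2.0 document, written inline so the example is
# self-contained. The "chair" mesh name is a made-up placeholder.
gltf_text = json.dumps({
    "asset": {"version": "2.0"},
    "meshes": [{"name": "chair", "primitives": [{"attributes": {"POSITION": 0}}]}],
    "nodes": [{"mesh": 0, "name": "root"}],
    "scenes": [{"nodes": [0]}],
})

# Because glTF is JSON, the scene graph can be walked with the stdlib.
doc = json.loads(gltf_text)
mesh_names = [m.get("name", "<unnamed>") for m in doc.get("meshes", [])]
print(doc["asset"]["version"], mesh_names)  # 2.0 ['chair']
```

This JSON-first design is what lets web viewers like those on Sketchfab fetch, parse, and render user-uploaded models directly in the browser.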

“LiDAR technology … now allows anybody with the latest iPhone to mass render the physical world, translate it into machine readable 3D models and convert them into tradable NFTs which could be uploaded into open virtual worlds very quickly populating them with avatars, wearables, furniture, and even whole buildings and streets,” says Jamie Burke, CEO and Founder, of London-based VC firm Outlier Ventures.  
