June 21, 2021
Niantic Researchers Reveal New Depth Technology to Transform Any Smartphone Into a Powerful 3D Mapping Tool
ManyDepth paper and GitHub repository now available

3D reconstruction of a benchmark street scene video, processed through Niantic's ManyDepth, now available in Lightship.

Niantic was originally founded to bring augmented reality (AR) technologies to the devices people actually use – including smartphones – rather than waiting for AR wearables that were years away. This focus on available devices has helped AR reach large audiences: Pokémon GO has been downloaded over one billion times and was named one of Apple’s apps of the year for 2020 for “reinventing play.” Tens of millions of Niantic Explorers around the world play our games, which also include Harry Potter: Wizards Unite and Ingress.

The success of our games led us to develop an advanced AR platform for third-party developers. Niantic Lightship dramatically simplifies AR development, enabling creative teams to focus on designing the games and experiences they want to build, rather than worrying about the enabling code, networks, platforms, and individual devices. We’re constantly updating Lightship with new and improved technologies to push AR forward.

To that end, the Niantic Research Team is excited to announce a major breakthrough in AR technology. At this week’s Conference on Computer Vision and Pattern Recognition (CVPR), we are presenting groundbreaking research that dramatically improves the depth-sensing capabilities of smartphones for AR and 3D mapping experiences. Moreover, instead of waiting years to commercialize the research, we are making this cutting-edge technology available right away to any AR creator through the Niantic Lightship ARDK.

[Image: ManyDepth comparison]

This new Niantic depth technology brings better and more immersive AR experiences to more devices, and simultaneously enables Niantic Explorers to collaboratively build the most accurate, up-to-date 3D map of the world. Let’s dive into what’s new here, and why it matters.

Using 2D cameras to perceive 3D depth

Smartphone cameras capture the world in 2D, not 3D, and very few phones include built-in LiDAR depth sensors. Prior to our work, there was no easy way to use a regular smartphone’s 2D camera alone for live, high-quality 3D mapping.

So the Niantic Research Team built a solution in software, training neural networks to perceive depth by inferring the 3D shape hidden in ordinary 2D photos. Using advanced machine learning techniques, the latest Niantic depth technology can estimate the depth of every pixel from a single 2D image. Better still, as the user moves their phone and gathers more images, the network refines the 3D depth map.
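To make this concrete, here is a minimal sketch of a convention common to this family of single-image depth networks: the network emits a value in (0, 1) per pixel, which is mapped to depth by interpolating in inverse-depth (disparity) space. The function name and the min/max depth defaults are illustrative assumptions, not Niantic’s shipped parameters.

```python
import torch

def disp_to_depth(sigmoid_disp: torch.Tensor,
                  min_depth: float = 0.1,
                  max_depth: float = 100.0) -> torch.Tensor:
    """Map a network's (0, 1) sigmoid output to depth by interpolating
    in inverse-depth (disparity) space, which gives nearby objects finer
    resolution than distant ones. Defaults are illustrative only."""
    min_disp = 1.0 / max_depth  # disparity of the farthest representable point
    max_disp = 1.0 / min_depth  # disparity of the nearest representable point
    disp = min_disp + (max_disp - min_disp) * sigmoid_disp
    return 1.0 / disp
```

Working in disparity rather than raw depth is a deliberate design choice: it spends the network’s output resolution on nearby objects, which matter most for AR occlusion and interaction.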

Our new research paper nicknames this innovation ManyDepth, because it creates high-quality depth maps – the key to mapping our world in 3D – from one or many images using a single RGB camera. ManyDepth efficiently overcomes challenges such as moving objects, scale ambiguity, and a fully static camera. It is also a world first: it enables multi-view depth networks to be trained effectively without any ground-truth depth data.
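At the heart of multi-frame approaches like this is a cost volume: for each of a set of candidate depth planes, features from a past frame are warped into the current view, and the quality of the match tells the network which depth is most plausible at each pixel. The PyTorch sketch below illustrates the general plane-sweep construction under simplified assumptions; the function name, tensor layouts, and bin count are our own illustrative choices, not the paper’s exact implementation.

```python
import torch
import torch.nn.functional as F

def build_cost_volume(target_feat, source_feat, K, K_inv,
                      T_target_to_source,
                      min_depth=0.1, max_depth=100.0, num_depth_bins=96):
    """Plane-sweep matching sketch. Inputs: (B, C, H, W) feature maps,
    (B, 3, 3) intrinsics K and its inverse, and a (B, 4, 4) relative
    pose. Returns a (B, num_depth_bins, H, W) volume of matching costs."""
    B, C, H, W = target_feat.shape
    device = target_feat.device

    # Pixel grid in homogeneous coordinates, shape (B, 3, H*W).
    ys, xs = torch.meshgrid(
        torch.arange(H, device=device, dtype=torch.float32),
        torch.arange(W, device=device, dtype=torch.float32),
        indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)], 0).view(1, 3, -1)
    rays = K_inv @ pix.expand(B, -1, -1)  # back-projected viewing rays

    # Candidate depths, sampled uniformly in inverse depth (common practice).
    inv_depths = torch.linspace(1.0 / max_depth, 1.0 / min_depth,
                                num_depth_bins, device=device)

    costs = []
    for inv_d in inv_depths:
        # Lift pixels to 3D at this hypothesised depth; move to source frame.
        pts = rays / inv_d                                     # (B, 3, H*W)
        pts_h = torch.cat([pts, torch.ones(B, 1, H * W, device=device)], 1)
        pts_src = (T_target_to_source @ pts_h)[:, :3]          # (B, 3, H*W)

        # Project into the source image and normalise to [-1, 1].
        proj = K @ pts_src
        z = proj[:, 2].clamp(min=1e-6)
        gx = proj[:, 0] / z / (W - 1) * 2 - 1
        gy = proj[:, 1] / z / (H - 1) * 2 - 1
        grid = torch.stack([gx, gy], -1).view(B, H, W, 2)

        # Warp source features to the target view; score the match per pixel.
        warped = F.grid_sample(source_feat, grid, padding_mode="zeros",
                               align_corners=True)
        costs.append((warped - target_feat).abs().mean(1, keepdim=True))

    return torch.cat(costs, dim=1)
```

A pixel whose true depth matches one of the planes warps to the right place and scores a low cost in that bin; the network reads the volume to pick the best hypothesis, and can fall back on single-frame cues where matching fails – for example, on moving objects or when the camera is static.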

“ManyDepth is a state-of-the-art innovation that addresses what was thought to be a forced choice in 3D reconstruction: classic triangulation over multiple frames versus instant-but-fragile single-frame inference with a neural network,” explained Gabriel Brostow, Niantic’s chief research scientist. “Our self-supervised software works as an alternative or complement to LiDAR, which means it’s ideal for situations where your phone or wearable needs to understand the 3D world around you.”
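“Self-supervised” here means no ground-truth depth is ever needed for training: the network’s depth and pose predictions are scored by how well they let one video frame be re-rendered from a neighbouring frame. Below is a minimal sketch of the standard photometric objective used across this family of methods, a blend of SSIM and L1; it is a simplification for illustration, not Niantic’s exact training loss.

```python
import torch
import torch.nn.functional as F

def ssim(x, y, C1=0.01 ** 2, C2=0.03 ** 2):
    """Per-pixel structural similarity over a 3x3 window."""
    mu_x = F.avg_pool2d(x, 3, 1, 1)
    mu_y = F.avg_pool2d(y, 3, 1, 1)
    sigma_x = F.avg_pool2d(x * x, 3, 1, 1) - mu_x ** 2
    sigma_y = F.avg_pool2d(y * y, 3, 1, 1) - mu_y ** 2
    sigma_xy = F.avg_pool2d(x * y, 3, 1, 1) - mu_x * mu_y
    num = (2 * mu_x * mu_y + C1) * (2 * sigma_xy + C2)
    den = (mu_x ** 2 + mu_y ** 2 + C1) * (sigma_x + sigma_y + C2)
    return (num / den).clamp(0, 1)

def photometric_loss(target, warped_source, alpha=0.85):
    """Score a depth/pose hypothesis by how well the source frame,
    re-rendered into the target view, reproduces the target image."""
    l1 = (target - warped_source).abs().mean(1, keepdim=True)
    ssim_term = ((1 - ssim(target, warped_source)) / 2).mean(1, keepdim=True)
    return alpha * ssim_term + (1 - alpha) * l1  # (B, 1, H, W) error map
```

If the predicted depth and pose are right, the re-rendered frame lines up with the real one and the loss is low; wherever it is high, gradients push the network to correct itself. No depth sensor is involved at any point.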

3D maps with limitless potential

Years ago, the Niantic Research Team turned a global-scale technology challenge into our mission: Build the most up-to-date 3D map of the planet Earth. Can you imagine what it would take to create a digital 3D map of the entire world? Where would you even start to gather all the technology, the map data, and people needed to dynamically map everything?

To meet this challenge, we began building custom tools, then expanded our mapping team by acquiring teams of brilliant researchers and developers, including Matrix Mill and 6d.ai. It was clear to all of us that building a massive, frequently updated map would require global collaboration, and that the same devices that could display 3D maps could also be used to build them – with the right software.

Niantic’s new depth technology enables common smartphones to create 3D map meshes for individual locations, so users can opt into contributing pieces to a world-scale map as easily as people share photos today. As Niantic Explorers play AR games, they can choose to dynamically expand and improve the global map everyone’s using for mixed reality experiences.
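As a rough illustration of how a single depth frame becomes map geometry, the snippet below lifts a per-pixel depth map into a 3D point cloud using a simple pinhole camera model. Everything here is a hypothetical sketch for intuition; it is not part of the Lightship ARDK API, and fusing many such clouds into a shared, textured mesh is a further (omitted) step.

```python
import numpy as np

def depth_to_points(depth: np.ndarray,
                    fx: float, fy: float,
                    cx: float, cy: float) -> np.ndarray:
    """Back-project an (H, W) depth map into an (H*W, 3) point cloud in
    camera coordinates, using pinhole intrinsics (fx, fy, cx, cy)."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))  # pixel coordinates
    x = (u - cx) / fx * depth
    y = (v - cy) / fy * depth
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)
```

Each frame a player captures contributes one such cloud; registered against the phone’s tracked pose, clouds from many players can be merged into the kind of shared, up-to-date 3D map described above.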

Given Niantic’s current lineup, you might correctly expect the new depth technology to aid AR mapping apps and games by blending virtual objects more convincingly into the real world. Beyond creating digital signs and guide arrows within public spaces, the technology will enable digital characters to scale the sides of buildings, bounce off walls, and hide around corners.

But that’s just the beginning. As our partners build apps with Lightship, you’ll also see the new Niantic depth technology empowering sports teams, musicians, and artists to augment physical stadiums and galleries with immersive, physics-based digital content, as creative agencies and brands bring new experiences to life through your phone’s screen.

Our research continues – join us!

We’re very excited to get the new Niantic depth technology into the hands of our developers around the world. Starting today, you can dive deeper into the ManyDepth research paper we’re presenting at CVPR this week, and if you want to test it yourself, preview our ManyDepth code on GitHub – the repository includes samples and instructions.

The Niantic Research Team is also presenting three other papers at CVPR 2021. If you’d like to learn more about how our team is creating solutions to real-world AR challenges, check out our work on cuboids (identifying complex 3D shapes in 2D images), depth estimation using wavelet decomposition, and panoptic segmentation forecasting. And if you’d like to be part of the Niantic Research Team responsible for cutting-edge innovations like this one, we’re hiring; visit our Careers page to apply today!

-The Niantic Research Team
