- This year, we have drastically increased our utilization of Machine Learning across the company (beyond our Computer Vision and AR capabilities)
- There have been significant launches in Pokémon GO and Trust & Safety, and in this piece we’ll explore how ML is essential for maintaining and growing our underlying maps
- We’ve begun exploring how to utilize Generative AI for internal and external applications; stay tuned for more updates
by Hugh Williams
Overview
Niantic is known as a leader in Computer Vision and AR – our research team has developed a number of computer vision advancements over the course of our history as a company, and we’ve been fortunate to publish and present at numerous conferences (see the research we presented at CVPR 2023). However, we use Machine Learning (ML) in many other ways across our company to make our games, systems, and products better. The focus of this post is primarily “classic” supervised ML, but we’re also actively prototyping generative models (e.g. LLMs) in a number of key areas.
Before diving into the details of our ML and AI initiatives, it’s important to note that our ability to train and deploy large-scale ML models is due to our investment in a strong data infrastructure. We have dedicated significant time to building the proper logging & telemetry, identifying reliable metrics, and curating labels so that our models are both accurate and useful.
As mentioned, ML is used across the company from upholding safety standards (see Camille Francois’ post on Our Approach to Safety) to refining game aspects in Pokémon GO and other titles. But our most long-standing ML initiative is scaling and efficiently maintaining our maps.
ML for Maps
Our maps are the foundation upon which everything we build at Niantic, so it’s of utmost importance that we ensure they are both accurate and up to date. Our Wayfarer program keeps our maps dynamically updated and growing as our players continually identify hidden gems in their neighborhoods, as well as update landmarks that have changed. It’s hard to keep pace with the fluid state of the world, so we employ ML to help. Specifically, we use ML models to help:
- Identify low quality wayspot nominations or edits to our map (e.g. blurry or violative photos, poor/inaccurate descriptions, incorrect locations)
- Flag duplicative wayspots
- Detect abusive behavior for our team to review and investigate
To tackle all of these tasks, we developed a collection of multi-modal deep learning models capable of synthesizing data from a myriad of sources. Each model’s architecture varies slightly, but at a high level we employ embedding services for each feature modality (e.g. image, text, etc.) and then pass this information to a fully-connected layer. We then train the models using labels provided by the Wayfarer community or internal Ops teams, depending upon the task. Needless to say, significant effort also went into data cleaning, with our teams manually reviewing hundreds of examples to ensure we properly internalize the challenges experienced by our Wayfarer submitters and reviewers.
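The late-fusion pattern described above (per-modality embeddings concatenated and fed to a fully-connected layer) can be sketched roughly as follows. The modalities, embedding dimensions, and single-score output head are illustrative assumptions, not Niantic's actual architecture, and the weights would in practice be learned from the Wayfarer/Ops labels rather than drawn at random:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical embedding dimensions -- the real embedding services and
# sizes are not described in the post.
IMG_DIM, TXT_DIM, GEO_DIM = 512, 256, 32
FUSED = IMG_DIM + TXT_DIM + GEO_DIM

# Stand-in fully-connected layer: randomly initialized here; in a real
# system these weights would be trained on labeled nominations.
W = rng.normal(scale=0.01, size=(FUSED, 1))
b = np.zeros(1)

def score_nomination(img_emb, txt_emb, geo_emb):
    """Fuse per-modality embeddings and return an eligibility score in (0, 1)."""
    fused = np.concatenate([img_emb, txt_emb, geo_emb])  # late fusion by concatenation
    logit = fused @ W + b
    return 1.0 / (1.0 + np.exp(-logit))  # sigmoid squashes logit to a probability-like score

# One nomination with mock embeddings standing in for the embedding services.
s = score_nomination(rng.normal(size=IMG_DIM),
                     rng.normal(size=TXT_DIM),
                     rng.normal(size=GEO_DIM))
print(float(s))
```

Concatenation-then-dense is the simplest fusion strategy; richer variants (attention over modalities, per-task heads) follow the same shape.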
These models are highly effective at filtering out ineligible nominations and edits. That benefits not only Niantic but also the Wayfarer community by:
A. Sparing Wayfarers from reviewing clearly ineligible nominations/edits.
Our Wayfarers want to review interesting, creative, and potentially exotic wayspots as opposed to spending their time downvoting blatantly invalid images (e.g. with watermarks, people’s faces, etc).
B. Shortening the turnaround time for Wayfarer Explorers.
Because models handle the poor-quality submissions, our Wayfarers have more time to review (and hopefully approve) the viable submissions from the community. Nominators therefore see quicker feedback on their new Wayspots, with a 3x reduction in turnaround time.
Regardless of the application, though, a critical prerequisite to productionizing ML models is performing both offline and online evaluations. For both our Maps models and our game-side ones, we carefully curate offline eval sets to estimate model metrics (e.g. precision, recall). Once our models are run on the offline eval sets, we can tune relevant decision thresholds to optimize for certain metrics and estimate impact prior to launch.
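Tuning a decision threshold against an offline eval set might look like the following sketch: sweep candidate thresholds and keep the one that maximizes recall subject to a precision floor. The labels, scores, and precision target below are made-up illustrations, not Niantic's data or operating point:

```python
import numpy as np

def pick_threshold(y_true, scores, min_precision=0.95):
    """Return (threshold, recall) for the threshold that maximizes recall
    while keeping precision on the eval set at or above min_precision,
    or None if no threshold qualifies."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores)
    best = None
    for t in np.unique(scores):           # each observed score is a candidate cutoff
        pred = scores >= t                # predictions at this threshold
        tp = np.sum(pred & (y_true == 1))
        fp = np.sum(pred & (y_true == 0))
        fn = np.sum(~pred & (y_true == 1))
        precision = tp / (tp + fp) if tp + fp else 1.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        if precision >= min_precision and (best is None or recall > best[1]):
            best = (float(t), float(recall))
    return best

# Toy eval set: label 1 = "ineligible", higher score = model says ineligible.
labels = [1, 1, 1, 0, 0, 1, 0, 0]
scores = [0.9, 0.8, 0.75, 0.7, 0.4, 0.95, 0.2, 0.1]
print(pick_threshold(labels, scores, min_precision=1.0))  # -> (0.75, 1.0)
```

The same sweep, run with different floors, traces out the precision/recall trade-off used to estimate launch impact.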
When our models do go live, we partner closely with our experimentation platform to validate the fidelity of our offline estimates. Experimentation isn’t always as simple as running an A/B test, so we often get creative by using geo-spatial or temporal testing techniques to root out the underlying impact of our models. We also continuously audit our models by holding out a percentage of predictions to be reviewed by humans so that we have a fresh assessment of live model performance.
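The continuous-audit holdout mentioned above can be sketched as deterministic sampling: hash a stable identifier so that a fixed fraction of predictions always routes to human reviewers. The `prediction_id` scheme and the 5% rate here are hypothetical; the post does not state how Niantic selects its holdout:

```python
import hashlib

AUDIT_RATE = 0.05  # illustrative 5% holdout; the actual rate isn't disclosed

def route_to_human_review(prediction_id: str, rate: float = AUDIT_RATE) -> bool:
    """Deterministically hold out a fixed fraction of predictions for audit.
    Hashing the ID gives a stable, roughly uniform bucket in [0, 1), so
    re-running the pipeline routes the same predictions to reviewers."""
    digest = hashlib.sha256(prediction_id.encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") / 2**64
    return bucket < rate

# Route a batch of mock prediction IDs.
held_out = [pid for pid in (f"pred-{i}" for i in range(10_000))
            if route_to_human_review(pid)]
print(len(held_out))  # roughly 5% of 10,000
```

Reviewer verdicts on the held-out slice give a fresh estimate of live precision, which can then be compared against the offline numbers.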
Generative AI
Niantic is a pioneer of cutting-edge technology, so naturally we’ve been investigating ways to utilize Generative AI (GenAI) models. Earlier this year we introduced Wol, the first GenAI-powered mixed reality character, who knows a lot about the Redwood forests of Northern California. More recently, we added GenAI modules to 8th Wall to make it even easier for WebAR developers to bring GenAI tools from OpenAI and Inworld.ai to their projects.
Many of the models we are exploring are still in their infancy, so we’re testing both enterprise-provided solutions and internally hosted models of our own. As for where we apply them, our strategy is three-fold:
- Internal scaling and efficiency
- Gameplay feature enhancement
- New experience development (e.g. Wol)
Please stay tuned as we begin to roll out prototypes in each of these areas both publicly and internally.
Going Forward
We strive to be a data-driven company where our players’ actions and opinions steer the direction of our products. That’s why we continue to leverage machine learning and explore the potential of Generative AI. As part of this exploration, it’s essential that we ensure this technology is utilized in ways that align with users’ needs and desires. The community we’ve built is truly unique, and we look forward to using ML and AI to better foster and support that community.
*Figure 1 source: The AI Hierarchy of Needs by Monica Rogati
Hugh Williams is a senior product manager for Machine Learning & AI at Niantic