A Multistep Approach to Upsampling for Planetary Scale
November 29, 2024
How can developers generate an entire planet in real-time? And what techniques make it possible to render these massive landscapes in a way that looks realistic without exhausting resources? At PLAYERUNKNOWN Productions, we’re building realistic, explorable environments at massive scale: a task that can quickly produce difficult-to-manage quantities of data. This is where a technique called “upsampling” comes in useful, helping us balance scale, visual quality, and performance.
This article discusses our approach to upsampling, and particularly our research into multistep upsampling, which layers data incrementally. We will explore the challenges of finding the right number of upsampling steps, discuss the machine learning (ML) techniques that help evaluate and enhance terrain quality, and detail how the team fine-tunes each stage to achieve realistic results. The process is both technical and creative, demanding a blend of expertise and experimentation.
And, as a tease of what is to come, you can look forward to getting hands-on with this technology in early December.
Author: Noel Lopez Gonzaga
Upsampling and Realism
Creating Earth-size environments challenges us to ensure the right level of detail and realism, sufficient for the player’s immersion, across a vast space. Generating high-resolution planetary data in a single process would place an overwhelming burden on any system, as a full planet’s dataset could demand thousands of terabytes of storage. Additionally, querying such vast amounts of data is challenging due to bandwidth limitations, especially when designing and creating multiple planetary variations, making this approach highly impractical. To overcome these challenges, we generate low-resolution planetary data and use a technique called "upsampling" to increase its detail and realism. This multistep approach lets us build a visually stunning, explorable world at an unprecedented scale without sacrificing real-time performance.
Upsampling is a technique that originated in digital signal processing and image enhancement, where it was first used to improve the resolution of audio signals and low-resolution images. In essence, much like the “enhance that” trope from TV and film, upsampling involves taking a low-resolution input and progressively adding finer details, often by interpolating new data between existing points or pixels. With the development of Generative Adversarial Networks (GANs) and other Artificial Intelligence (AI) models, upsampling has become an important technique in various fields—from medical imaging to satellite data analysis.
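To make the basic idea concrete, here is a minimal Python sketch (illustrative only, and far simpler than any production system) of upsampling a one-dimensional terrain profile by interpolating a midpoint between every pair of neighbouring samples. Interpolation alone only smooths; the ML-driven upsampling described below synthesizes plausible new detail instead, but the shape of the problem, low resolution in and higher resolution out, is the same.

```python
import numpy as np

# Minimal illustration of upsampling by interpolation (not production code):
# double the resolution of a 1D terrain profile by inserting the midpoint
# between every pair of neighbouring height samples.
def upsample_2x(profile: np.ndarray) -> np.ndarray:
    midpoints = (profile[:-1] + profile[1:]) / 2.0
    out = np.empty(profile.size * 2 - 1)
    out[0::2] = profile    # keep the original samples
    out[1::2] = midpoints  # interpolate new samples in between
    return out

coarse = np.array([0.0, 10.0, 4.0, 8.0])  # heights at four coarse samples
print(upsample_2x(coarse))                # [ 0.  5. 10.  7.  4.  6.  8.]
```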
From Low to High Resolution
Upsampling is an extremely important part of our approach, and a good deal of the team’s research has gone into how to manage it most efficiently. In particular, we have found that a multistep process offers clear advantages: each step of upsampling refines the planet’s surface, adding complexity while keeping system demands reasonable. Each stage introduces new data, transforming broad topographies into richly detailed landscapes that appear realistic but don’t require massive storage.
One of the core challenges has been choosing the right number of steps: essentially, finding a sweet spot between too few and too many, both of which reduce performance and quality. Finding that balance has been a process of iterative testing, gradually changing settings and training different models until we meet both our visual and resource-management goals. Our process currently takes around 20 steps, beginning with a low-resolution foundation, typically where a terrain area of one by one kilometer corresponds to a single pixel. This rough, low-data version of the planet provides the skeleton for everything that follows. Through upsampling, each subsequent step adds finer details until the resolution approaches a granular scale of just one centimeter per pixel. The current number of steps is equivalent to a reduction in storage requirements by a factor of 4^20, approximately 1 trillion times.
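For readers who want to check the arithmetic: if each step quadruples the amount of terrain data (doubling the resolution along each axis, which is the assumption behind the 4^20 figure), the saving from storing only the low-resolution base works out as follows.

```python
# Sanity-checking the storage figure above, assuming each upsampling step
# quadruples the amount of terrain data (2x resolution along each axis).
steps = 20
reduction = 4 ** steps
print(f"{reduction:,}")                     # 1,099,511,627,776
print(f"~{reduction / 1e12:.1f} trillion")  # ~1.1 trillion
```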
Real-Time Terrain Upsampling
One of the distinctive aspects of our upsampling process is that it’s executed entirely at runtime, generating terrain details on-the-fly in the vicinity of the player rather than loading pre-generated data from disk. This approach is essential for managing storage demands; storing pre-generated terrain data at the required level of detail would take many terabytes. Instead, by generating it in real time, we can achieve a much higher level of fidelity without overwhelming system resources.
This approach differs from the upsampling typically used in other games, where a scene rendered at low resolution is upsampled to the display resolution to save rendering cost. For our planetary environments, we upsample the terrain itself, creating a richly detailed and dynamic landscape. The upsampling models run directly on the player’s local GPU, allowing the terrain to unfold around them as they explore. This sequential upsampling approach brings the world to life in real time, letting players immerse themselves in an ever-evolving environment that responds to their movement.
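As a rough illustration of the idea (a simplified sketch in PyTorch under our own assumptions, not the actual pipeline), imagine a chain of small per-step networks, each taking a coarse heightmap patch near the player and returning one at twice the linear resolution, with a learned residual adding detail on top of a naive enlargement:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative sketch only: a chain of small per-step upsamplers, each
# refining a terrain heightmap patch around the player to twice the
# linear resolution. Architecture and sizes are placeholder assumptions.
class UpsampleStep(nn.Module):
    def __init__(self, channels: int = 1):
        super().__init__()
        self.refine = nn.Sequential(
            nn.Conv2d(channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, channels, kernel_size=3, padding=1),
        )

    def forward(self, patch: torch.Tensor) -> torch.Tensor:
        # Naive 2x bilinear enlargement, then a learned residual adds detail.
        up = F.interpolate(patch, scale_factor=2, mode="bilinear",
                           align_corners=False)
        return up + self.refine(up)

# Only the steps relevant to the player's current surroundings are run.
chain = nn.ModuleList(UpsampleStep() for _ in range(3))
patch = torch.randn(1, 1, 16, 16)  # a coarse heightmap patch near the player
with torch.no_grad():
    for step in chain:
        patch = step(patch)
print(patch.shape)  # torch.Size([1, 1, 128, 128])
```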
Using Machine Learning to Evaluate Realism
Generative Adversarial Network (GAN) models power the upsampling process. GANs consist of two neural networks: the generator, which in our case creates terrain detail, and the discriminator, which evaluates the realism of each output. Like a forger and an art critic locked in a duel, the generator crafts counterfeit landscapes while the discriminator hunts for telltale brushstrokes. Each round sharpens both the forger’s craft and the critic’s eye, the two evolving in a continuous cat-and-mouse game.
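A minimal, textbook-style version of this pairing might look like the sketch below; the tiny architectures and standard adversarial loss are placeholders of our own, not the studio’s models.

```python
import torch
import torch.nn as nn

# Textbook-style sketch of the generator/discriminator duel for heightmap
# upsampling. These tiny networks are placeholders, not production models.
class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, coarse):
        return self.net(coarse)  # coarse heightmap -> 2x heightmap

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 1, 3, stride=2, padding=1),
        )

    def forward(self, terrain):
        # One realism logit per heightmap in the batch.
        return self.net(terrain).mean(dim=(1, 2, 3))

gen, disc = Generator(), Discriminator()
bce = nn.BCEWithLogitsLoss()

coarse = torch.randn(4, 1, 32, 32)  # coarse input patches (toy data)
real = torch.randn(4, 1, 64, 64)    # matching "real" high-res terrain
fake = gen(coarse)

# The critic learns to score real terrain high and generated terrain low...
d_loss = bce(disc(real), torch.ones(4)) + bce(disc(fake.detach()), torch.zeros(4))
# ...while the forger learns to make its output score high.
g_loss = bce(disc(fake), torch.ones(4))
print(f"d_loss={d_loss.item():.3f}, g_loss={g_loss.item():.3f}")
```

In training, these two losses pull against each other: as the discriminator gets better at spotting fakes, the generator is forced to produce ever more convincing terrain.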
One advantage of a GAN over a diffusion model is that it is faster and more compact, enabling real-time operation and allowing the team to train, upgrade, and experiment with upsampling networks more efficiently. This adaptability is crucial in an industry where technology advances quickly and where even slight improvements in model performance can impact the player experience. Ultimately, the models we choose and how we tune them directly affect the performance players experience as they explore vast, detailed worlds.
Fine-Tuning
To make this evaluation process more measurable, we define certain quantifiable terrain properties—such as slope, stream distribution, and elevation variance—that serve as benchmarks for quality. These metrics give the GANs clear criteria for improvement, helping to ensure each stage of upsampling produces terrain that not only looks right but also behaves in a realistic way. They also give us extra levers to pull when balancing fine detail against overall coherence across a landscape.
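As a concrete, hedged example of what such metrics can look like (simple formulations of our own, not the exact benchmarks used in production), slope and elevation variance can be computed directly from a heightmap:

```python
import numpy as np

# Simple illustrative formulations of quantifiable terrain properties.
def slope_map(height: np.ndarray, cell_size: float = 1.0) -> np.ndarray:
    """Per-cell slope in degrees, from finite-difference gradients."""
    dy, dx = np.gradient(height, cell_size)
    return np.degrees(np.arctan(np.hypot(dx, dy)))

def elevation_variance(height: np.ndarray) -> float:
    """Spread of elevations across the patch."""
    return float(np.var(height))

terrain = np.random.rand(64, 64) * 50.0  # a toy 64x64 heightmap in metres
print(f"mean slope: {slope_map(terrain, cell_size=10.0).mean():.1f} deg")
print(f"elevation variance: {elevation_variance(terrain):.1f} m^2")
```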
In this regard, fine-tuning the parameters of each model requires a careful blend of mathematical rigor and intuitive adjustment. While certain parameters are set according to specific mathematical criteria, there are many instances where subtle, almost artistic tweaks make the generated terrain look more natural or cohesive. These tweaks are refined iteratively, each one moving us closer to a landscape that feels truly alive.
Looking Forward
As our upsampling project advances, we learn more about optimizing terrain generation at a planetary scale. The multistep approach lets us deliver environments that feel both expansive and immersive, and as new technologies emerge, we continue to adapt and refine our methods to push the boundaries of what’s possible to generate in real-time.
For those interested in a deeper dive into the technology and techniques behind our upsampling work, we’d love for you to join the conversation on our Discord. There, you can connect with other developers, players, and fans who share our enthusiasm for building new worlds and exploring the intersection of art and technology in gaming.
About the author:
Dr. Noel Lopez Gonzaga is the Lead AI Researcher at PLAYERUNKNOWN Productions, where he works on machine learning research initiatives in game development. He holds a PhD from Leiden University and an MSc from the Universidad Nacional Autónoma de México (UNAM), with a research focus on the structure of dusty environments surrounding supermassive black holes (SMBH). His academic interests include neural networks, data science, astronomy, numerical simulations, and physics. Noel's work has been published in prestigious journals, including Astronomy & Astrophysics and the Monthly Notices of the Royal Astronomical Society (MNRAS). Outside of work, he enjoys building Lego sets, cooking, Formula 1, and mezcal tasting.