At PLAYERUNKNOWN Productions, we spend a lot of time thinking about worlds: how to build them, how to bring them to life, and how to make them feel rich and consistent. That means pushing the boundaries of procedural generation and machine learning, and experimenting with techniques that might one day help us generate vast terrains, lifelike textures, or even entirely new ecosystems.
Author: Noel Lopez Gonzaga, Lead ML Researcher
Including work by: Pavel Nakaznenko
Before dreaming big, we need to start with some small experiments. In one of our latest experiments, we explored Rectified Flow Matching (RFM), a training method for the kind of generative models typically used to create images, text, and sounds. Generative models, such as diffusion models, often work by gradually transforming noise (random blobs) into meaningful data (like an image) over many small steps. RFM does it differently: it learns direct, nearly straight paths from randomness to the final data, which makes both learning and generation more stable, reliable, and fast.
To understand this method, we did not jump straight to creating mountains or forests. We began with blobs. Gaussian blobs, to be exact. And we trained a model to transform those blobs into something more structured: a star-shaped distribution.
It might sound abstract (and it is), but this offers a simple and clear window into how these methods work.
2D visualization of the initial and target distribution.
Why We Looked at RFM
One of the hardest parts of procedural generation is achieving control and managing the high dimensionality of parameters that artists need to tweak, while also keeping the iteration time short. We want worlds that feel organic and surprising, but also controllable and consistent. If you ask for a snowy mountain valley, you should not get a desert plateau by accident.
This is where RFM comes in. It is a technique for learning how to transform one distribution (say, “flat noise”) into another (“structured terrain”, for example). RFM achieves this by learning a straightened path between the two distributions. Think of it as guiding each data point along a carefully planned journey.
Visualization of individual points moving from the initial distribution into the target distribution
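At its core, the training objective is simple: pick a random time between 0 and 1, place a point on the straight line between a source sample and a target sample, and ask the network to predict the constant velocity of that line. Below is a minimal PyTorch sketch of that loss; the names (rectified_flow_loss, model) are illustrative and not taken verbatim from our notebook.

```python
import torch

def rectified_flow_loss(model, x0, x1):
    # Illustrative sketch, not the notebook's exact code.
    # x0: batch from the source distribution (e.g. a 2D Gaussian)
    # x1: batch from the target distribution (e.g. the star shape)
    # model(x, t): network that predicts a velocity at position x and time t
    t = torch.rand(x0.shape[0], 1)       # random time in [0, 1] for each sample
    xt = (1.0 - t) * x0 + t * x1         # point on the straight line from x0 to x1
    velocity = x1 - x0                   # a straight path has constant velocity
    return ((model(xt, t) - velocity) ** 2).mean()
```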
Why does that matter to us? Because stable and fast transformations mean more consistent results and quick iteration times, which, in turn, means we can start thinking about giving artists, designers, and modders knobs to turn, sliders to adjust, or paths to steer.
A Hands-on Prototype
We believe it is important to demystify the ML tools we use. If everyone understands what a model does and what its limitations are, then it is easier to set expectations and get the most out of it. This is why we created a tutorial notebook that walks through RFM step by step, using simple, concrete examples that you can run yourself.
In it, we:
Start with a simple Gaussian distribution.
Train a neural network to transform it into a star-shaped distribution.
Visualize the entire process so you can see the points gradually flowing from one shape to the other.
It is not terrain or a game-ready asset yet. But it is the "Hello World" of RFM, a minimal example that shows the heart of the method.
Snapshots at different steps. Top row: 2D visualizations of the distribution; bottom row: individual points
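For readers who prefer code to prose, here is a compact, self-contained sketch of how such a toy setup could look end to end: samplers for the two distributions, a small MLP that predicts velocities, and the training loop. The names and hyperparameters are illustrative rather than the notebook's exact ones; the tutorial linked at the end of this post has the real thing.

```python
import torch
import torch.nn as nn

def sample_gaussian(n):
    # Source distribution: a standard 2D Gaussian (the "blobs")
    return torch.randn(n, 2)

def sample_star(n, points=5, inner=0.5, outer=1.5, noise=0.05):
    # Target distribution: noisy points along the outline of a five-pointed star
    k = torch.arange(2 * points)
    angles = torch.pi / 2 + torch.pi * k / points
    radii = torch.tensor([outer, inner]).repeat(points)      # outer tip, inner corner, ...
    verts = torch.stack([radii * torch.cos(angles), radii * torch.sin(angles)], dim=1)
    edge = torch.randint(0, 2 * points, (n,))                # pick an edge of the outline
    frac = torch.rand(n, 1)                                  # position along that edge
    a, b = verts[edge], verts[(edge + 1) % (2 * points)]
    return (1 - frac) * a + frac * b + noise * torch.randn(n, 2)

class VelocityNet(nn.Module):
    # A small MLP mapping a 2D position plus a time to a 2D velocity
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, 2),
        )

    def forward(self, x, t):
        return self.net(torch.cat([x, t], dim=1))

model = VelocityNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(5000):
    x0, x1 = sample_gaussian(512), sample_star(512)
    t = torch.rand(512, 1)
    xt = (1 - t) * x0 + t * x1                               # straight-line interpolation
    loss = ((model(xt, t) - (x1 - x0)) ** 2).mean()          # same objective as the sketch above
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```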
But hold on, that is not the end of it. The technique does a good job of learning the full journey from noise to meaningful data, step by step. So it knows how to move a blob until it becomes a star (or an image). Once RFM has “mapped the roads”, the next question is, can we get there faster?
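To see why this can be slow, it helps to spell out what sampling actually does: start from the source distribution and take many small steps along the predicted velocities. Continuing the illustrative code above, a plain Euler integrator might look like this (the function name and step count are our own choices, not fixed by the method):

```python
@torch.no_grad()
def sample(model, x0, n_steps=100):
    # Follow the learned velocity field from t = 0 to t = 1 in small Euler steps
    x = x0.clone()
    dt = 1.0 / n_steps
    for i in range(n_steps):
        t = torch.full((x.shape[0], 1), i * dt)
        x = x + model(x, t) * dt
    return x

# The "full journey": 100 small steps from blobs to something star-shaped
star_like = sample(model, torch.randn(2000, 2), n_steps=100)
```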
This is where ReFlow enters the stage. ReFlow is a technique designed to find shortcuts. Instead of taking all the little turns, it finds a way to skip ahead and make the process faster. In practice, it compresses the many incremental steps of RFM into far fewer. In this example, we go from generating with 100 steps to generating with only one.
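Conceptually, ReFlow reuses the same training recipe but changes the pairing: every noise sample is coupled with the point the already-trained flow maps it to, and a second flow is trained on those couples. Because these new pairs already lie on nearly straight lines, a single big step can stand in for a hundred small ones. A hedged sketch, building on the code above:

```python
# Illustrative ReFlow sketch: retrain on (noise, generated sample) couples
# produced by the first flow, so the new paths become (nearly) straight.
reflow_model = VelocityNet()
reflow_opt = torch.optim.Adam(reflow_model.parameters(), lr=1e-3)

for step in range(5000):
    x0 = sample_gaussian(512)
    x1 = sample(model, x0, n_steps=100)     # where the first flow sends these exact points
    t = torch.rand(512, 1)
    xt = (1 - t) * x0 + t * x1
    loss = ((reflow_model(xt, t) - (x1 - x0)) ** 2).mean()
    reflow_opt.zero_grad()
    loss.backward()
    reflow_opt.step()

# After ReFlow, a single Euler step is often enough
one_step = sample(reflow_model, torch.randn(2000, 2), n_steps=1)
```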
One-step generation using ReFlow
The final result? Faster, more efficient sampling, which, in practice, could mean interactive tools for designers instead of slow offline processes.
Why This Matters to Us
So why are we, a game studio, digging into this? Because techniques like RFM strike a good balance between power, control, and speed, letting us generate content that is:
Stable: the transitions do not jump around randomly.
Interpretable: we can peek at intermediate steps, see how something evolves, and adjust along the way.
Efficient: once trained, ReFlow lets us generate quickly, which is vital when you want fast iteration in creative workflows.
For terrains, that could mean morphing landscapes smoothly between different areas. For textures, blending styles in controlled ways. For whole worlds, it could mean more responsive systems that react to player actions in real time.
A Peek Behind the Scenes
We have shared the notebook openly because part of our philosophy is to be open about our development. These experiments do not need to be locked away until they are production-ready. In fact, showing them early helps us connect with researchers, developers, and fellow explorers who might want to push in the same direction.
The visuals in the notebook (points flowing from circles to stars) are simple, but they are also satisfying. They show the math in motion, demystifying models so that we can understand the process behind them and make decisions on how we can modify and use them. They give a glimpse of how these abstract ideas could scale up to something much bigger.
What’s Next?
This is not the end of the story. RFM is a tool, and tools only become meaningful when they are applied. For us, that means asking: how can we bring this kind of fast and controlled transformation into the worlds we build?
We do not have all the answers yet, but the journey from Gaussians to Stars is a good first step. And like any good flow, we are excited to see where it takes us.
If you are curious, you can explore the full tutorial here: Rectified Flow Matching on GitHub.