Project Artemis by PLAYERUNKNOWN
PlayerUnknown Productions has ambitious goals for Artemis:
1. A true earth-sized, richly detailed environment
2. A living, breathing world that feels real and offers genuine interactivity
3. A world shared with many thousands of users
These three pose our main research challenges. They are nothing new; rather, they are familiar wishes. What is new is the opportunity to genuinely take them on, with a fresh perspective and a minimum of legacy considerations.
One of our Applied Research projects is dedicated to engine-tech innovations that support our research projects and, ultimately, Artemis. We named this technology “Melba”, to celebrate someone who worked on pioneering projects. The Melba team strongly believes it all starts with a simulation-minded approach.
We have already found an answer to the first research question: how to create a huge, high-fidelity environment on demand?
Our solution combines “streaming hierarchical (space) partitioning” with “Machine Learning Agents” running on the user’s system. It also introduced the first major workflow change: how do you design an environment when there is no map or level to work on?
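The hierarchical partitioning side of this can be sketched as a lazily refined spatial tree. The following is purely our illustrative reconstruction, not Melba’s actual implementation (names such as `Cell` and `refine` are invented): only the regions a user approaches are ever materialised, so an earth-sized world never needs to exist in memory all at once.

```cpp
#include <array>
#include <cstddef>
#include <memory>

// Hypothetical sketch of "streaming hierarchical (space) partitioning":
// a quadtree cell whose children are created on demand. A real engine
// would stream or generate the content of each cell here.
struct Cell {
    int depth = 0;                                   // refinement level
    std::array<std::unique_ptr<Cell>, 4> children;   // lazily created

    // Refine toward a target depth, e.g. as a viewer approaches this region.
    void refine(int target_depth) {
        if (depth >= target_depth) return;
        for (auto& c : children) {
            if (!c) {
                c = std::make_unique<Cell>();
                c->depth = depth + 1;
            }
            c->refine(target_depth);
        }
    }

    // Count live cells, i.e. how much of the world is materialised.
    std::size_t cell_count() const {
        std::size_t n = 1;
        for (const auto& c : children)
            if (c) n += c->cell_count();
        return n;
    }
};
```

Because cells are created only on demand, memory use tracks where users are, not the size of the world.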
To prove the solution, we created the “Prologue” tech demo. During its development we solved many integration challenges and learned a great deal; among other things, that available engine tech does not match our requirements very well.
Every engine makes assumptions about its use: the kind of games a developer wants to create, and how. That is the added value of an engine: functionality shared between different productions, so your project can benefit. However, if your requirements differ too much from the assumptions the engine makes, your development will not be helped but rather handicapped.
A decision needed to be made: either tone down our ambitions, or take on the challenge of developing more scalable technology that can support our research projects and is in line with Artemis' requirements. Given the unique opportunity, we chose the latter; it is not a challenge we take on lightly.
Philosophy
Software development, in general, has its roots in the single-core hardware era. Support for multiple cores is often added for specific purposes and as an afterthought, not with scalability in mind.
The main challenge in writing any non-trivial software lies in “complexity partitioning”. Software design quickly becomes too complicated for a single human mind to fathom in its entirety. So partitioning is required, either to solve parts in sequence or to apply multiple minds to separate parts in parallel, and then, hopefully, to integrate these parts into a working whole. Much of the software industry focuses on this.
A different partitioning
Melba’s way of partitioning is to separate data from processing and relate them only when required, for the duration of a processing step (data-A → processing → data-B). This way, any data can be input for any processing step, and different processes do not need to know about each other. Communication between processing steps always happens through data exchange; each process accesses a database independently. A processing context is always local, which is great for scalability. This is the essence of Data-Oriented Programming (DOP).
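The data-A → processing → data-B idea might look like the following in C++. This is a minimal sketch with invented types (`Positions`, `Velocities`, `Moved`), not Melba’s real data model: a processing step is a pure transform over the data it is handed, and knows nothing about any other step.

```cpp
#include <cstddef>
#include <vector>

// data-A: plain data, no behaviour attached.
struct Positions  { std::vector<float> x;  };
struct Velocities { std::vector<float> dx; };

// data-B: the output of one processing step, usable as input to the next.
struct Moved { std::vector<float> x; };

// A processing step: reads data-A, produces data-B, holds no state.
Moved integrate(const Positions& p, const Velocities& v, float dt) {
    Moved out;
    out.x.reserve(p.x.size());
    for (std::size_t i = 0; i < p.x.size(); ++i)
        out.x.push_back(p.x[i] + v.dx[i] * dt);
    return out;
}
```

Because the step owns no data and touches no globals, any two such steps can run concurrently on disjoint data without knowing of each other’s existence.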
Combining this with an inherently multi-threaded processing approach provides hardware-performance scalability as well. Melba’s threading strategy is completely lock-free. These are important concepts at the core of Melba.
Melba features an “Entity Component System” (ECS) where, during every simulation step, systems run over entities to update the data in their components. All available cores are always applied to this work. It is the system programmer’s task to implement algorithms in a parallel-friendly way.
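One common lock-free pattern that fits this description is shown below, purely as an illustration and not as Melba’s actual kernel: a system’s work is spread over all cores by letting threads claim fixed-size chunks of a component array through a single atomic counter, so no mutex is ever taken and no two threads touch the same entity.

```cpp
#include <algorithm>
#include <atomic>
#include <cstddef>
#include <thread>
#include <vector>

// Illustrative sketch of running one ECS "system" over every entity on
// all available cores, lock-free. The per-entity update here (add 1 to
// a health component) stands in for real system logic.
void run_system_parallel(std::vector<float>& health,
                         std::size_t chunk = 1024) {
    std::atomic<std::size_t> next{0};
    unsigned cores = std::max(1u, std::thread::hardware_concurrency());

    auto worker = [&] {
        for (;;) {
            // Claim the next chunk of entities; the atomic fetch_add is
            // the only synchronisation point, so there is no locking.
            std::size_t begin = next.fetch_add(chunk);
            if (begin >= health.size()) return;
            std::size_t end = std::min(begin + chunk, health.size());
            for (std::size_t i = begin; i < end; ++i)
                health[i] += 1.0f;   // the system update for this entity
        }
    };

    std::vector<std::thread> pool;
    for (unsigned t = 0; t < cores; ++t) pool.emplace_back(worker);
    for (auto& t : pool) t.join();
}
```

Since chunks never overlap, threads write to disjoint ranges of the component array, which is exactly the parallel-friendly shape a system implementation should aim for.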
Architecture
While a fresh start sounds great, in reality any new effort builds on top of earlier efforts. To make improvements, it helps to be able to revert past decisions that made sense under the circumstances of their time, but no longer do for Artemis. It is vital to select the right amount of rolling back: a balanced combination of introducing improvements while still benefiting from existing functionality.
As an example, the Melba team is rather small. We aim to stay relatively small to remain flexible. A small team also means being thoughtful in where to invest efforts. The most gain is to be had from a solid core architecture based on sound priorities.
It also means a custom physics implementation, for example, would not have a high priority right now. Good solutions already exist.
As existing libraries are often not designed with (scalable) parallel processing in mind, what is required is a way to integrate existing libraries without interfering with Melba's core concepts.
Another example is resource allocation. In an ECS, the default is one processing step over many data items (DOP, non-interleaved components), versus many operations on one data item (OOP, interleaved object members). ECS component memory is owned by the ECS kernel, which also controls its lifetime. In OOP, by contrast, data members are part of the object, and their lifetime is linked to the object instance.
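The two layouts can be contrasted in a short sketch; the names below are ours, chosen for illustration. On the OOP side, members are interleaved per object; on the ECS side, each component type lives in its own contiguous, kernel-owned array, and an entity is just an index into those arrays.

```cpp
#include <cstddef>
#include <vector>

// OOP-style: members interleaved per object, lifetime tied to the object.
struct GameObjectAoS {
    float x, y;   // position
    float hp;     // health
};

// ECS-style: non-interleaved component arrays owned by a store/kernel;
// an entity is just an index into these arrays.
struct ComponentStoreSoA {
    std::vector<float> x, y;   // position component
    std::vector<float> hp;     // health component

    std::size_t create(float px, float py, float php) {
        x.push_back(px); y.push_back(py); hp.push_back(php);
        return x.size() - 1;   // the entity id
    }
};

// A processing step that touches only the data it needs: with the
// non-interleaved layout, the hp array streams through the cache
// without dragging position data along.
float total_health(const ComponentStoreSoA& s) {
    float sum = 0.0f;
    for (float h : s.hp) sum += h;
    return sum;
}
```

A system iterating `GameObjectAoS` objects would pull `x` and `y` into cache even when it only needs `hp`; the component-array layout avoids that, which is one reason the ECS kernel owns the memory rather than the objects.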
Melba does support the STL, but mixing contradictory design philosophies can be a precarious undertaking.
After properly addressing this intersection where different worlds meet, and introducing some guidelines for ECS best practices, system implementations now fully benefit from both existing third-party libraries and Melba’s ECS foundations.
Scalability
Larger game environments, in higher detail, with more interactive content: this is many gamers’ wish. Yet as hardware capabilities have increased, it is primarily visual detail that has substantially improved; environment size and interaction detail have remained relatively stagnant in comparison. Combining larger environments, greater detail, and more interaction is widely regarded as unfeasible.