- The world can be split up into multiple streaming blocks
- Visibility can be computed for these blocks individually
- Blocks can be combined dynamically in the runtime to solve visibility
Umbra supports streaming by allowing the user to compute occlusion data for the entire world while also enabling queries that use only a subset of that data. The user divides the world into a number of streaming blocks, for each of which occlusion data is computed. At runtime, the engine determines which of these blocks need to be loaded into memory and combines them into an object on which visibility queries can be performed. Umbra queries then operate on this subset of the data rather than on the entire world. This is illustrated in Figure 1.
Figure 1: High-level overview of the streaming workflow in Umbra 3
There are a number of advantages in not requiring the entire world’s occlusion data to be at hand at all times. First, the runtime memory footprint decreases significantly, especially in large worlds where only a relatively small subset of the data is active at any given time. Second, it enables splitting up the occlusion data in a manner that allows for versatile storage and downloading scenarios. For instance, it may be a requirement for the game to be able to start with only a part of the first level ready, while the rest of the assets are still being downloaded – a requirement fairly common in modern games. Third, by switching between two different versions of the same streaming block, the user can implement semi-dynamic state changes in the scene geometry. Finally, this design enables game artists to work independently on different areas of the world, computing occlusion locally for these areas alone while still being able to combine the results seamlessly at as late a stage as possible – in the final game runtime.
The user may of course compute a single Tome for the entire scene and not care about streaming. However, it is also possible to divide the computation into multiple Tomes, a subset of which may be combined in the runtime into a TomeCollection, on which the visibility queries are performed. This process is illustrated in Figure 2 through Figure 6.
If there is a 1:n relationship between a Scene and its Tomes, you can specify a computation AABB in LocalComputation::Params (or in Task::create()) for each Tome to be computed. In other words, this allows a single Scene to be divided into multiple streaming blocks. On the other hand, it is perfectly fine to create a separate Scene for each streaming block and compute a single Tome per Scene – the result should be no different.
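The division into streaming blocks is often a simple uniform grid over the world bounds. The sketch below is plain C++ using a hypothetical AABB struct (not Umbra's own math types); it shows one way the per-block computation AABBs could be derived, with each result then set as the computation AABB of a separate Tome computation.

```cpp
#include <vector>

// Minimal AABB type for illustration only; in practice the bounds would be
// passed to LocalComputation::Params (or Task::create()).
struct AABB {
    float min[3];
    float max[3];
};

// Split a world AABB into nx * ny * nz equally sized computation AABBs,
// one per streaming block, laid out in x-fastest order.
std::vector<AABB> splitIntoBlocks(const AABB& world, int nx, int ny, int nz)
{
    std::vector<AABB> blocks;
    const int counts[3] = { nx, ny, nz };
    float size[3];
    for (int a = 0; a < 3; a++)
        size[a] = (world.max[a] - world.min[a]) / counts[a];

    for (int z = 0; z < nz; z++)
        for (int y = 0; y < ny; y++)
            for (int x = 0; x < nx; x++)
            {
                const int idx[3] = { x, y, z };
                AABB b;
                for (int a = 0; a < 3; a++)
                {
                    b.min[a] = world.min[a] + idx[a] * size[a];
                    b.max[a] = b.min[a] + size[a];
                }
                blocks.push_back(b);
            }
    return blocks;
}
```

Adjacent block AABBs share faces exactly, which is what allows the resulting Tomes to be stitched together seamlessly in the runtime.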
Figure 3: The world is split up into streaming blocks, for each of which a Tome is computed
Figure 4: In the runtime, the active Tomes are streamed in, based on the camera’s location
Figure 5: The loaded Tomes are combined into a TomeCollection
Figure 6: A visibility query is performed using the TomeCollection
Once all the computation jobs have been completed, a Tome is generated for the given streaming block. This generated Tome can then be used in the runtime or serialized into the game data.
In the runtime, the renderer first determines, according to its streaming scheme, which streaming blocks are required for the current camera position. The corresponding Tomes are then loaded into memory and combined into a TomeCollection object by calling TomeCollection::build(). Once Query::init() has been called with the TomeCollection, visibility queries can be performed using this collection of Tomes. Note that initializing the Query with a single Tome is still supported; in that case the Query internally builds a single-Tome TomeCollection and uses it when the query methods are called.
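The streaming scheme itself is up to the engine. A simple and common choice is distance-based residency: keep a block's Tome loaded whenever the block is within some radius of the camera. The self-contained sketch below illustrates that idea with hypothetical Vec3 and Block types; the renderer would then load the Tomes for the returned indices, call TomeCollection::build() on them, and pass the result to Query::init().

```cpp
#include <vector>

// Stand-in types for illustration; a real engine would track block bounds
// and Tome load state in its own streaming system.
struct Vec3  { float x, y, z; };
struct Block { Vec3 center; };

// One possible streaming scheme: a block's Tome should be resident whenever
// its center lies within 'radius' of the camera. Returns the indices of the
// blocks whose Tomes should currently be loaded.
std::vector<int> selectActiveBlocks(const std::vector<Block>& blocks,
                                    const Vec3& camera, float radius)
{
    std::vector<int> active;
    for (int i = 0; i < (int)blocks.size(); i++)
    {
        float dx = blocks[i].center.x - camera.x;
        float dy = blocks[i].center.y - camera.y;
        float dz = blocks[i].center.z - camera.z;
        if (dx * dx + dy * dy + dz * dz <= radius * radius)
            active.push_back(i);
    }
    return active;
}
```

In practice the radius would be chosen with some hysteresis so that blocks are not repeatedly loaded and unloaded when the camera hovers near a boundary.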
Building a TomeCollection is somewhat CPU intensive, so it is recommended to run the build asynchronously in a background thread, spread over a few frames, rather than blocking the render loop.
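One way to keep the build off the render thread is to launch it with std::async and poll the future once per frame, continuing to query against the previous collection until the new one is ready. The sketch below uses a stand-in TomeCollection struct; in a real integration, TomeCollection::build() would run inside buildCollection().

```cpp
#include <chrono>
#include <future>
#include <thread>

// Stand-in for the real Umbra type, for illustration only.
struct TomeCollection { bool built = false; };

// Worker-thread job; in a real engine this would call TomeCollection::build()
// on the currently loaded set of Tomes.
TomeCollection buildCollection()
{
    TomeCollection tc;
    std::this_thread::sleep_for(std::chrono::milliseconds(5)); // simulated work
    tc.built = true;
    return tc;
}

// Kick off the build on a background thread.
std::future<TomeCollection> startBuild()
{
    return std::async(std::launch::async, buildCollection);
}

// Called once per frame: returns true and swaps in the new collection when
// the background build has finished; otherwise the renderer keeps using the
// previous collection.
bool pollBuild(std::future<TomeCollection>& pending, TomeCollection& current)
{
    if (pending.valid() &&
        pending.wait_for(std::chrono::seconds(0)) == std::future_status::ready)
    {
        current = pending.get();
        return true;
    }
    return false;
}
```

Polling with a zero timeout keeps the per-frame cost negligible while still picking up the finished collection on the first frame after the build completes.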