Accurate occlusion threshold is one of the parameters to queryPortalVisibility. It's an innocent-looking detail, but it has a significant effect on culling quality and query times. I'll go over what exactly it does, and how it relates to the computation parameters. This article applies to Umbra version 3.4.
Here's the short version first. A tome contains several LODs (levels of detail) of visibility data. Accurate occlusion threshold specifies how far, in world units, the most accurate LOD is used. The LOD switch occurs approximately 2 * "accurate occlusion threshold" units away from the camera. A bigger value gets you more accuracy, but also increases query time. Since the query only has a constant amount of working memory available, a very big value can also cause memory to run out, in which case ERROR_OUT_OF_MEMORY is returned.
You'll want to set the parameter as large as possible within your time budget, but in practice finding the best value can be difficult. You can see the difference visually with the built-in visualizations: when you increase the value, occlusion buffer accuracy improves and DEBUGFLAG_PORTALS shows more portals. Increasing the value also reduces the visible object count, since culling becomes more accurate. The automatic setting (-1) means the smallest possible value is used.
Now for the detailed version.
Umbra's visibility data is internally a cells-and-portals graph. Cells are continuous volumes within which visibility is everywhere similar. Portals are rectangles in space that connect two cells.
Visibility data is further organized in tiles. A tile is shaped like an AABB (axis-aligned bounding box) and contains a cells-and-portals graph - that is, a tile consists of many cells.
There are many tiles in a Tome. Each tile represents a piece of a particular visibility data LOD. A tile at a less accurate LOD is bigger than one at a more accurate LOD. Tiles are organized hierarchically in a tree. The Tome is thus subdivided into tiles, with each less accurate LOD level using a coarser subdivision.
The selection of which LOD levels to use - i.e. which tiles to select - is performed at runtime using the "accurate occlusion threshold" parameter. The first switch between LOD levels occurs at approximately two times the "accurate occlusion threshold" distance. The doubling was originally an oversight, but fixing the interpretation now would break existing content. The algorithm has to select whole tiles at a time, and all siblings in the tree together - this further limits exactly where LOD switches can occur.
The most accurate LOD is computed using the parameters the user provides for computation, i.e. "smallest occluder", "smallest hole" and so on. Less accurate LODs are computed using less accurate computation parameters: the "smallest hole" and "smallest occluder" parameters both double for each less accurate LOD. This means the algorithm effectively closes bigger and bigger holes the farther away geometry is.
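The tile selection and parameter doubling described above can be sketched roughly as follows. This is a simplification with hypothetical names, assuming ideal uniform tiles; the real runtime selects whole sibling tiles at a time, so actual switch distances vary:

```python
import math

def lod_level(distance, accurate_occlusion_threshold):
    """Approximate LOD level used at a given camera distance.

    Level 0 is the most accurate LOD; the first switch happens at
    roughly 2 * accurate_occlusion_threshold, the next at 4x, and so on.
    """
    if distance < 2 * accurate_occlusion_threshold:
        return 0
    return int(math.floor(math.log2(distance / accurate_occlusion_threshold)))

def effective_smallest_hole(base_smallest_hole, level):
    """'Smallest hole' doubles at each less accurate LOD level.
    ('Smallest occluder' scales the same way.)"""
    return base_smallest_hole * (2 ** level)
```

For example, with a threshold of 100 units the most accurate data is used up to roughly 200 units; at 500 units the query would use LOD level 2, where a 1-unit smallest hole has effectively grown to 4 units.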
Below is another visualization similar to the one above. Each box is a tile. Suppose "smallest hole" is set to one world unit. The numbers on the boxes show how "smallest hole" increases within the frustum - the green boxes, the most accurate tiles, use one as their "smallest hole". "Smallest hole" doubles with each LOD level. "Smallest occluder" multiplies similarly.
The slider below allows increasing and decreasing the "smallest occluder" value. Try changing the slider with "accurate occlusion threshold" both at the automatic setting (-1) and at a valid value. Note that "smallest occluder" affects the automatic (minimum) value of "accurate occlusion threshold". Click to move the camera.
In reality, tiles are not as uniform as the visualizations on this page make them seem; they form more optimal units with respect to visibility. For visualization purposes this simplification is sufficient.
Another way to look at "accurate occlusion threshold" is through the kind of screen-space error it allows. Consider a function f(x) that gives the size in pixels of one world unit on screen at distance x.
Considering horizontal screen dimensions only:
f(x) ~= w / (2 * x * tan(fov / 2))
where w is viewport width in pixels, fov is the horizontal field of view.
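As a quick sanity check, f can be written out directly. A sketch with hypothetical names; fov is in radians here:

```python
import math

def pixels_per_world_unit(x, viewport_width, horizontal_fov):
    """Approximate on-screen size, in pixels, of one world unit
    at distance x, considering the horizontal dimension only:

        f(x) = w / (2 * x * tan(fov / 2))
    """
    return viewport_width / (2.0 * x * math.tan(horizontal_fov / 2.0))
```

For instance, with a 1920-pixel-wide viewport and a 90-degree horizontal field of view, one world unit at distance 10 covers about 96 pixels.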
Suppose you would like to guarantee that "smallest hole" projected on screen is no bigger than N pixels for all LODs besides the most accurate one. Consider the first LOD switch at 2 * accurate occlusion threshold, where smallest hole becomes 2 * smallest hole:
f(2 * accurate occlusion threshold) * (2 * smallest hole) <= N
Generally for any LOD switch s, where s is 1, 2, 3 ...:
f((2^s) * accurate occlusion threshold) * ((2^s) * smallest hole) <= N
Solving for accurate occlusion threshold, we get the following inequality, which doesn't depend on s:
accurate occlusion threshold >= smallest hole * w / (2 * N * tan(fov / 2))
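This inequality can be turned into a small helper that computes the minimum threshold for a target pixel error N. A sketch with hypothetical names; fov is in radians. Note that the 2^s terms cancel, so the bound is the same at every LOD switch:

```python
import math

def min_accurate_occlusion_threshold(smallest_hole, viewport_width,
                                     horizontal_fov, max_error_pixels):
    """Smallest 'accurate occlusion threshold' that keeps the projected
    'smallest hole' of every coarser LOD at or below max_error_pixels:

        threshold >= smallest_hole * w / (2 * N * tan(fov / 2))
    """
    return (smallest_hole * viewport_width /
            (2.0 * max_error_pixels * math.tan(horizontal_fov / 2.0)))
```

With a smallest hole of 1 unit, a 1920-pixel viewport, a 90-degree fov and a 4-pixel error budget, this gives a threshold of 240 units; at every LOD switch distance 2^s * 240 the doubled smallest hole then projects to exactly 4 pixels.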
Try it out here:
If you try a value from this inequality in practice, you might find that the query isn't fast enough or that ERROR_OUT_OF_MEMORY is returned. If this happens, consider increasing the "smallest occluder" parameter value.
Other ways to improve query times:
- Enable Umbra's internal object grouping, especially with high object counts. Testing individual objects for visibility often dominates the query's CPU usage.
- Run the query as one or several parallel jobs, and hide the cost by overlapping them with other work.
- Use a smaller "accurate occlusion threshold" value.
Solving for N instead, we can also compute the maximum pixel error for a given "accurate occlusion threshold":
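Rearranged, the same bound gives N directly. Again a sketch with hypothetical names; fov is in radians:

```python
import math

def max_pixel_error(accurate_occlusion_threshold, smallest_hole,
                    viewport_width, horizontal_fov):
    """Maximum on-screen size, in pixels, of the (doubled) 'smallest hole'
    at any LOD switch, for a given 'accurate occlusion threshold':

        N = smallest_hole * w / (2 * threshold * tan(fov / 2))
    """
    return (smallest_hole * viewport_width /
            (2.0 * accurate_occlusion_threshold *
             math.tan(horizontal_fov / 2.0)))
```

This is just the inverse of the earlier inequality: halving the threshold doubles the worst-case pixel error.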