Recent advances in data acquisition produce volume data of very high resolution and large size, such as terabyte-sized microscopy volumes. These data often contain many fine and intricate structures, which pose huge challenges for volume rendering and make it particularly important to skip empty space efficiently. This paper addresses two major challenges: (1) The complexity of large volumes containing fine structures often leads to highly fragmented space subdivisions that make empty regions hard to skip efficiently. (2) The classification of space into empty and non-empty regions changes frequently, because the user or the evaluation of an interactive query activates a different set of objects, which makes it infeasible to pre-compute a well-adapted space subdivision. We describe the novel SparseLeap method for efficient empty space skipping in very large volumes, even around fine structures. The main performance characteristic of SparseLeap is that it moves the major cost of empty space skipping out of the ray-casting stage. We achieve this via a hybrid strategy that balances the computational load between determining empty ray segments in a rasterization (object-order) stage and sampling non-empty volume data in the ray-casting (image-order) stage. Before ray-casting, we exploit the fast hardware rasterization of GPUs to create a ray segment list for each pixel, which identifies non-empty regions along the ray. The ray-casting stage then leaps over empty space without hierarchy traversal. Ray segment lists are created by rasterizing a set of fine-grained, view-independent bounding boxes. Frame coherence is exploited by reusing the same bounding boxes unless the set of active objects changes. We show that SparseLeap scales better to large, sparse data than standard octree empty space skipping.
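To illustrate the image-order stage described above, the following is a minimal, hypothetical sketch (not the paper's GPU implementation) of a ray caster that consumes a precomputed per-pixel ray segment list and samples only inside non-empty segments, leaping over everything in between without any hierarchy traversal. The function names, the toy sample function, and the step size are illustrative assumptions.

```python
# Hypothetical CPU sketch of ray-casting with a precomputed ray segment
# list, as in SparseLeap's image-order stage. In the actual method the
# list is produced per pixel by rasterizing fine-grained bounding boxes;
# here we just hand it in as a sorted list of (t_start, t_end) intervals.

def composite_ray(segments, sample, step=0.01):
    """Front-to-back compositing restricted to non-empty ray segments.

    segments: sorted front-to-back list of (t_start, t_end) intervals
              along the ray; empty space between intervals is skipped.
    sample:   function t -> (color, alpha) evaluating the volume at t.
    """
    color, alpha = 0.0, 0.0
    for t_start, t_end in segments:
        t = t_start                         # leap directly to the segment
        while t < t_end and alpha < 0.99:   # early ray termination
            c, a = sample(t)
            color += (1.0 - alpha) * a * c  # front-to-back compositing
            alpha += (1.0 - alpha) * a
            t += step
    return color, alpha

# Toy "volume": non-empty only in the interval [0.4, 0.5] along the ray.
def toy_sample(t):
    return (1.0, 0.1) if 0.4 <= t <= 0.5 else (0.0, 0.0)

# With the segment list, no samples are spent on the empty regions.
print(composite_ray([(0.4, 0.5)], toy_sample))
```

The key design point this sketch mirrors is that the inner loop never tests a spatial hierarchy: all empty-space decisions were made when the segment list was built in the object-order stage.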
IEEE Transactions on Visualization and Computer Graphics
Published: Aug 28, 2017
Bibliographical note: KAUST Repository Item, exported on 2020-10-01.
Acknowledgements: We thank the anonymous reviewers for their insightful comments and for pointing us to related work. We thank John Keyser for the ‘KESM Mouse Brain’ data. ‘Dreh Sensor’ courtesy of Siemens Healthcare, Components and Vacuum Technology, Imaging Solutions; reconstructed by the Siemens OEM reconstruction API CERA TXR (Theoretically Exact Reconstruction). This work was supported by funding from King Abdullah University of Science and Technology (KAUST) and KAUST award OSR-2015-CCF-2533-01.