In this research field, we addressed the following challenges:
Volume Rendering Techniques for General Purpose Graphics Hardware
Interactive direct volume rendering has previously been restricted to high-end graphics workstations and special-purpose hardware, due to the large number of trilinear interpolations required to obtain high image quality. Implementations that use the 2D-texture capabilities of standard PC hardware usually render object-aligned slices in order to substitute trilinear with bilinear interpolation. However, the resulting images often contain visual artifacts caused by the lack of spatial interpolation. We propose new rendering techniques that significantly improve both the performance and the image quality of the 2D-texture based approach. We show how the multi-texturing capabilities of the GeForce family of graphics cards can be exploited to enable interactive, high-quality volume visualization on low-cost hardware. Furthermore, we demonstrate how this hardware can be used to efficiently render shaded isosurfaces and to compute diffuse illumination for semi-transparent volume rendering at interactive frame rates.
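The key idea behind blending object-aligned slices can be sketched outside the GPU: trilinear interpolation decomposes into two bilinear lookups on adjacent slices combined by a linear blend, which is exactly what a multi-texturing unit can compute per fragment. A minimal NumPy sketch (function names are ours, for illustration only):

```python
import numpy as np

def bilinear(slice2d, x, y):
    """Bilinearly interpolate a 2D slice at continuous coordinates (x, y)."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    fx, fy = x - x0, y - y0
    return ((1 - fx) * (1 - fy) * slice2d[y0,     x0] +
            fx       * (1 - fy) * slice2d[y0,     x0 + 1] +
            (1 - fx) * fy       * slice2d[y0 + 1, x0] +
            fx       * fy       * slice2d[y0 + 1, x0 + 1])

def trilinear_from_slices(volume, x, y, z):
    """Trilinear interpolation expressed as a blend of two bilinear
    lookups on adjacent object-aligned slices -- the combination a
    multi-texturing unit evaluates per fragment."""
    z0 = int(np.floor(z))
    fz = z - z0
    return ((1 - fz) * bilinear(volume[z0], x, y) +
            fz       * bilinear(volume[z0 + 1], x, y))
```

Rendering an intermediate slice between two stored slices thus costs two 2D texture fetches and one blend, instead of a full 3D texture lookup.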
Automatic Adaptation of Transfer Functions
In most volume rendering scenarios, implicit classification is performed manually by specifying a transfer function that maps abstract data values to visual attributes. An appropriate classification requires both specialized knowledge of the structures of interest within the data set and the technical know-how of a computer scientist. Recent automatic, data-driven techniques are well capable of separating different regions in the data set; their applicability in practice is limited, however, since they carry no information about which structures are actually of interest. We propose an efficient and reproducible way to automatically assign transfer function templates, which encode individual knowledge as well as personal taste. The presented approach is based on dynamic programming and was successfully applied in a medical environment.
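To make the idea concrete, here is a small illustrative sketch of template adaptation via dynamic programming: a DTW-style alignment matches the bins of a reference histogram (for which a template transfer function was designed) to the histogram of a new data set, and the template is then remapped along the warping path. The function names and this particular alignment scheme are our assumptions for illustration, not necessarily the published method:

```python
import numpy as np

def dtw_alignment(ref_hist, new_hist):
    """Dynamic-programming (DTW-style) alignment of two normalized
    histograms; returns a warping path of (ref_bin, new_bin) pairs."""
    n, m = len(ref_hist), len(new_hist)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(ref_hist[i - 1] - new_hist[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    path, i, j = [], n, m          # backtrack along the cheapest predecessors
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

def adapt_template(template_tf, path, m):
    """Remap a template transfer function (one RGBA row per reference
    bin) onto the m bins of the new data set via the warping path."""
    adapted = np.zeros((m, 4))
    counts = np.zeros(m)
    for ref_bin, new_bin in path:
        adapted[new_bin] += template_tf[ref_bin]
        counts[new_bin] += 1
    counts[counts == 0] = 1        # leave unmatched bins at zero
    return adapted / counts[:, None]
```

Since the alignment is deterministic, the same template applied to the same data set always yields the same classification, which is what makes the assignment reproducible.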
Vertex Programs for Texture-Based Volume Rendering
Object-order texture-based volume rendering decomposes the volume data set into stacks of textured polygons. The performance of such hardware-based volume rendering techniques is clearly dominated by the fill rate and the memory bandwidth, while only very little workload is assigned to the vertex processor. We discuss a vertex program that efficiently computes the slicing for texture-based volume rendering. This novel technique enables us to balance the workload between the vertex processor, the fragment processor, and the memory bus. As a result, we demonstrate that the performance of texture-based volume rendering can be efficiently enhanced by trading an increased vertex load for a reduced fragment count. As an application, we suggest a novel approach to empty-space skipping for object-order volume rendering.
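The core geometric task such a slicing program solves is intersecting each view-aligned slicing plane with the bounding box of the volume. A CPU-side NumPy sketch of that per-slice computation (our own illustrative code, not the actual vertex program):

```python
import numpy as np

# The 8 corners of the unit cube (index = 4x + 2y + z) and its 12 edges.
CORNERS = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)],
                   dtype=float)
EDGES = [(0, 1), (0, 2), (0, 4), (1, 3), (1, 5), (2, 3),
         (2, 6), (3, 7), (4, 5), (4, 6), (5, 7), (6, 7)]

def slice_polygon(view_dir, d):
    """Intersect the plane dot(view_dir, p) = d with the unit cube and
    return the (unordered) intersection vertices -- the computation a
    slicing vertex program performs for each slice polygon."""
    pts = []
    for a, b in EDGES:
        pa, pb = CORNERS[a], CORNERS[b]
        da = np.dot(view_dir, pa) - d
        db = np.dot(view_dir, pb) - d
        if da * db < 0:                # edge crosses the plane
            t = da / (da - db)         # parametric intersection point
            pts.append(pa + t * (pb - pa))
    return pts
```

Moving this intersection test onto the vertex processor lets the slice distance shrink (or the proxy geometry tighten around non-empty regions) without touching the CPU, which is what enables trading vertex load against fragment count.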
Real-Time Volumetric Deformation
As a consequence of the development of efficient volume rendering techniques, a growing demand for volumetric deformation models has arisen in the last couple of years. Apart from the obvious applications of free-form modeling in visual arts and entertainment, the ability to accurately model local deformation is extremely important in medical applications such as minimally invasive surgery and computer-assisted intervention. In a typical clinical application scenario, tomography data is acquired before the intervention for detailed surgery planning. During the intervention, however, the pre-operatively acquired image data no longer matches the actual situation due to anatomical shifts and tissue resection. Consequently, the spatial misalignment must be compensated by adapting the volume data to the non-linear distortion.
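A minimal way to express such a non-linear adaptation is a free-form deformation: sample positions are displaced by trilinearly interpolated offset vectors attached to a coarse control grid. The following sketch illustrates the principle only and is not the deformation model developed in this work:

```python
import numpy as np

def deform_sample(p, grid_offsets):
    """Displace a sample position p in [0,1]^3 by trilinearly
    interpolating per-control-point offset vectors (a minimal
    free-form deformation on a regular control grid)."""
    res = np.array(grid_offsets.shape[:3]) - 1   # cells per axis
    q = p * res
    i = np.minimum(q.astype(int), np.maximum(res - 1, 0))
    f = q - i                                    # fractional cell position
    offset = np.zeros(3)
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((f[0] if dx else 1 - f[0]) *
                     (f[1] if dy else 1 - f[1]) *
                     (f[2] if dz else 1 - f[2]))
                offset += w * grid_offsets[i[0] + dx, i[1] + dy, i[2] + dz]
    return p + offset
```

In a texture-based renderer the same displacement can be applied to the texture coordinates of the slice polygons, so the deformation is evaluated by the graphics hardware during sampling rather than by resampling the volume.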
Monte-Carlo Volume Rendering
This work presents a practical, high-quality, hardware-accelerated volume rendering approach that includes scattering, environment mapping, and ambient occlusion. We examine the application of stochastic ray-tracing techniques to volume rendering and provide a fast GPU-based prototype implementation. In addition, we propose a simple phenomenological scattering model closely related to the Phong illumination model that many artists are familiar with. We demonstrate that our technique is capable of producing convincing images, yet remains flexible enough for digital productions in practice.
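As a point of reference, the Phong model that the phenomenological scattering term is modeled after combines ambient, diffuse, and specular contributions from a (gradient-based) surface normal. A sketch for a single light and channel, with illustrative coefficient names and values of our choosing:

```python
import numpy as np

def phong(normal, light_dir, view_dir, ka=0.1, kd=0.6, ks=0.3, shininess=32):
    """Classic Phong illumination: ambient + diffuse + specular.
    All vectors are normalized internally; coefficients are examples."""
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    v = view_dir / np.linalg.norm(view_dir)
    r = 2.0 * np.dot(n, l) * n - l            # reflected light direction
    diffuse = max(np.dot(n, l), 0.0)
    specular = max(np.dot(r, v), 0.0) ** shininess
    return ka + kd * diffuse + ks * specular
```

Exposing such familiar coefficients is precisely what makes a phenomenological model attractive for production work: artists can tune the look of the scattering with parameters they already understand from surface shading.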