Volumetric Rendering for Medical Imaging

Posted by Siddharth

This summer, my friend and I built an application to visualize human anatomy. We used a data set consisting of 120 consecutive X-ray images taken as slices through a human skull. The objective was to turn these stacked 2D images into a 3D model that could be treated as if it had been rendered natively from a 3D mesh.

[Image: head-004, one of the slice images from the data set.]

The technique we used to generate the 3D model is called Volumetric Rendering. There are multiple ways to perform this process, with both CPU- and GPU-optimized implementations.

Since the GPU is very well suited to this kind of parallel work, we chose to write a volume raycasting algorithm to achieve the effect we desired. The entire volumetric rendering is performed on the GPU with shaders, which results in a very high performance rendering algorithm.

The images are loaded into an OpenGL 3D texture from a C++ application. Once the data is loaded, the texture is iteratively sampled in the fragment shader using rays that are “cast” into the volume from the camera (the “eye”). One ray is cast for each pixel on the screen, and each ray is marched through the texture, accumulating color values along the way.
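As a rough sketch of how the loading step can look (the file names, sizes, and helper names below are illustrative, not taken verbatim from the project), the C++ side packs the grayscale slices into a single GL_TEXTURE_3D:

```cpp
// Sketch: stack the slice images into one OpenGL 3D texture.
// Assumes files like "slices/head-000.png" ... "slices/head-119.png".
#include "ofMain.h"

GLuint loadVolumeTexture(int numSlices) {
    int w = 0, h = 0;
    std::vector<unsigned char> voxels;

    for (int i = 0; i < numSlices; i++) {
        ofPixels slice;
        ofLoadImage(slice, "slices/head-" + ofToString(i, 3, '0') + ".png");
        slice.setImageType(OF_IMAGE_GRAYSCALE);   // one byte per voxel
        w = slice.getWidth();
        h = slice.getHeight();
        voxels.insert(voxels.end(), slice.getData(),
                      slice.getData() + w * h);
    }

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_3D, tex);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);        // rows are tightly packed
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
    glTexImage3D(GL_TEXTURE_3D, 0, GL_R8, w, h, numSlices, 0,
                 GL_RED, GL_UNSIGNED_BYTE, voxels.data());
    return tex;
}
```

On the GPU side, the fragment shader then marches each ray through that texture and accumulates the samples. A minimal version of such a loop (again with illustrative names; one common way to add up the samples is front-to-back alpha compositing):

```glsl
#version 150
// Sketch: per-pixel ray marching through the volume texture.
uniform sampler3D uVolume;    // the stacked slices
uniform vec3  uCameraPos;     // camera position in texture space (0..1 cube)
uniform int   uNumSteps;      // samples per ray
uniform float uDensityScale;  // global opacity control

in  vec3 vEntryPoint;         // where this ray enters the volume
out vec4 fragColor;

void main() {
    vec3  dir      = normalize(vEntryPoint - uCameraPos);
    float stepSize = 1.732 / float(uNumSteps);   // cube diagonal / steps
    vec3  pos      = vEntryPoint;
    vec4  accum    = vec4(0.0);

    for (int i = 0; i < uNumSteps; i++) {
        float density = texture(uVolume, pos).r;           // scalar intensity
        vec4  src     = vec4(vec3(density), density * uDensityScale);

        // Front-to-back compositing: add this sample's contribution,
        // weighted by how transparent the ray still is.
        accum.rgb += (1.0 - accum.a) * src.a * src.rgb;
        accum.a   += (1.0 - accum.a) * src.a;

        if (accum.a >= 0.95) break;                        // ray is opaque
        pos += dir * stepSize;
        if (any(lessThan(pos, vec3(0.0))) ||
            any(greaterThan(pos, vec3(1.0)))) break;       // left the volume
    }
    fragColor = accum;
}
```

Marching front to back lets a ray stop as soon as it is effectively opaque, which is part of what makes the shader-based approach so fast.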

This implementation is flexible enough to handle many different kinds of image data sets. Results are below.

[Screenshots: three views of the rendered skull.]

There are some artifacts in the rendering, and the overall structure of the skull is stretched along the y-axis. Since every image data set is different, we worked on making the volume raycasting algorithm as generic as possible, and that generality is what causes the artifacts: a generic renderer has no knowledge of a particular data set's real-world slice spacing, so the proportions of the volume can come out distorted.

Built with ♥ using OpenFrameworks.

The project can be found on GitHub here.

The Seventh Line - Oct 6, 2016 | Code
