This summer, my friend and I built an application to visualize human anatomy. We used a data source consisting of 120 consecutive X-ray images of a human skull, taken as cross-sectional slices. The objective was to turn these stacked 2D images into a 3D model that could be treated as if it were rendered natively from a 3D mesh.
The technique we used to generate the 3D model is called volumetric rendering. There are multiple ways to perform this process, with both CPU- and GPU-optimized implementations.
Since the GPU is well suited to this kind of massively parallel work, we chose to write a volume raycasting algorithm to achieve the effect we wanted. The entire volumetric rendering is performed on the GPU with shaders, which results in a very high-performance rendering algorithm.
The images are loaded into an OpenGL 3D texture from a C++ application. Once the data is loaded, the texture is iteratively sampled in the fragment shader using rays that are “cast” into the volume from the camera (the “eye”). One ray is cast for each pixel on the screen, and each ray is marched through the texture, accumulating color values along the way.
This implementation is flexible enough to handle many different kinds of image data sets. Results are below.
There are some artifacts in the rendering, and the overall structure of the skull is stretched along the y-axis. Since every image data set is different, we worked on making the volume raycasting algorithm as generic as possible, and that generality is the source of the artifacts.
Built with ♥ using openFrameworks.
The project can be found on GitHub here.