Over the last few months, I have been continuing work on the volumetric rendering for medical imaging project that I first wrote about here.
In September, we got access to the Korean Visible Human data set. In addition to MRI and CT scans, this data set contains full-color photographs of the cadaver cross sections. Our new goal was to render the volume formed by the color images and selectively display models of specific organs, bones, and other body parts.
We ported the project to Unity and successfully rewrote the shader algorithm with Unity’s ShaderLab HLSL API. The core algorithm was unchanged, but some additions were made to achieve the above goal. The cadaver data set is complemented by a legend data set, which consists of images that match one-for-one with the cadaver cross sections. There is a unique legend data set for each body part. For example, to render the cranium, there is a set of legend images corresponding to the cadaver images that contain the head sections.
Each legend and cadaver image is uniquely identified by an index. Each legend image is purely black and white: black pixels signify that the corresponding pixel in the cadaver image must be sampled when performing the volumetric rendering, while pixels in the cadaver image that correspond to white pixels in the legend image are discarded.
In this way, the shader can filter the volume to display only the desired body parts. This is the main modification made to the shader algorithm after porting; the rest of it is mostly unchanged.
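To make the filtering step concrete, here is a minimal sketch of how the legend mask might gate sampling inside a raymarching fragment shader. The texture names (`_CadaverVolume`, `_LegendVolume`), the `v2f` fields, and the loop constants are illustrative assumptions, not the exact code from our project:

```hlsl
sampler3D _CadaverVolume;  // full-color cross-section photos packed into a 3D texture
sampler3D _LegendVolume;   // black/white legend mask for the selected body part

// Assumed raymarching constants; real values depend on volume size and quality.
#define MAX_STEPS 256
#define STEP_SIZE 0.005

fixed4 SampleFiltered(float3 uvw)
{
    // Legend is black where the body part is present, white elsewhere.
    fixed legend = tex3D(_LegendVolume, uvw).r;
    if (legend > 0.5)
        return fixed4(0, 0, 0, 0);     // white legend pixel: discard this sample
    return tex3D(_CadaverVolume, uvw); // black legend pixel: sample the cadaver color
}

fixed4 frag(v2f i) : SV_Target
{
    fixed4 color = fixed4(0, 0, 0, 0);
    float3 pos = i.rayStart; // assumed: ray origin/direction computed in the vertex shader
    for (int s = 0; s < MAX_STEPS; s++)
    {
        fixed4 samp = SampleFiltered(pos);
        // Front-to-back alpha compositing.
        color.rgb += (1 - color.a) * samp.a * samp.rgb;
        color.a   += (1 - color.a) * samp.a;
        pos += i.rayDir * STEP_SIZE;
        if (color.a >= 0.99) break;    // early ray termination
    }
    return color;
}
```

The key point is that the mask test happens per sample along the ray, so the same cadaver volume can render any body part just by swapping which legend data set is bound.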
We successfully built a prototype of the application that demonstrates displaying various body parts in VR on the Oculus Rift. Work on enhancing the resolution of the displayed models is ongoing in 2017. Our primary short-term goals are to build a menu system and to resolve some visual artifacts that occur in certain situations. Below are some screenshots of the prototype.