Raycast Rendering

Posted on Jun 16, 2017

Focus and Depth of Field

For my CSE 168 final project, I implemented two techniques in my ray caster: depth of field with focus, and volumetric rendering. The former can be seen in the above image, where the focal point lies between the two Laughing Buddha models. Below, you can see the gradual effect of increasing the f/stop, i.e. reducing the aperture diameter.

 

f/Stop values of 20, 80, 160, 500, 1000 from top to bottom
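For context, the usual way to get this effect is thin-lens sampling: jitter each ray's origin across the lens aperture and aim it back at the focal point, with the aperture radius shrinking as the f/stop grows. Below is a minimal sketch of that idea (illustrative names and math helpers, not my actual ray caster code):

```cpp
#include <cmath>
#include <cstdlib>

struct Vec3 { float x, y, z; };
struct Ray  { Vec3 origin, dir; };

static Vec3 add(Vec3 a, Vec3 b)  { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 sub(Vec3 a, Vec3 b)  { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 mul(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static Vec3 normalize(Vec3 a) {
    float len = std::sqrt(a.x * a.x + a.y * a.y + a.z * a.z);
    return mul(a, 1.0f / len);
}
static float frand() { return std::rand() / (float)RAND_MAX; }

// Uniform point on a disk of the given radius (simple rejection sampling).
static void sampleDisk(float radius, float& dx, float& dy) {
    do {
        dx = (2.0f * frand() - 1.0f) * radius;
        dy = (2.0f * frand() - 1.0f) * radius;
    } while (dx * dx + dy * dy > radius * radius);
}

// pinholeRay    : the ray through this pixel for an ideal pinhole camera.
// focalDistance : distance from the camera at which geometry is in perfect focus.
// apertureRadius: focalLength / (2 * fStop); a higher f-stop means a smaller
//                 aperture and therefore less blur away from the focal distance.
Ray depthOfFieldRay(const Ray& pinholeRay, const Vec3& camRight, const Vec3& camUp,
                    float focalDistance, float apertureRadius)
{
    // The point at the focal distance along this pixel's pinhole ray.
    Vec3 focalPoint = add(pinholeRay.origin, mul(pinholeRay.dir, focalDistance));

    // Jitter the ray origin across the lens aperture.
    float dx, dy;
    sampleDisk(apertureRadius, dx, dy);
    Vec3 origin = add(pinholeRay.origin, add(mul(camRight, dx), mul(camUp, dy)));

    // Every lens sample is aimed back at the focal point, so geometry at the
    // focal distance stays sharp while everything nearer or farther blurs.
    return { origin, normalize(sub(focalPoint, origin)) };
}
```

Averaging many such jittered rays per pixel produces the blur visible in the images above.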

Volumetric Rendering

Secondly, I implemented volumetric rendering. My algorithm uses a ray marching approach, albeit one that differs from the standard forward evaluation: for each point on a particular path, after the ray traversal returns to that depth level, the algorithm samples the ray at various points in the reverse direction. At each sample point, I evaluate the 4 parameters of the volume and modify the radiance accordingly. The same applies to rays that miss all geometry; they are evaluated out at infinity.

The 4 parameters of the volume are:

  • Emission
  • Absorption
  • In-scattering
  • Out-scattering

In order to simplify my algorithm, I chose to make 2 core assumptions about the nature of the volume: that it is infinite in all dimensions and homogeneous in its density. These two assumptions simplify the logic considerably. If a volume is infinite, then all rays pass through it. If a volume is homogeneous, then absorption and out-scattering can be treated as constant and combined into a single extinction coefficient.
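As a rough illustration of the backward march under these assumptions (hypothetical names like inScatteredLight; not my exact code), one segment of a path might be processed like this:

```cpp
#include <cmath>

struct Color { float r, g, b; };

static Color scale(Color c, float s) { return {c.r * s, c.g * s, c.b * s}; }
static Color add(Color a, Color b)   { return {a.r + b.r, a.g + b.g, a.b + b.b}; }

// Sketch of the backward march: `radiance` is what the ray traversal returned
// for this path segment (the surface hit, or the background for rays that miss
// and are evaluated at infinity).  The volume is assumed infinite and
// homogeneous, so absorption and out-scattering collapse into one constant
// extinction coefficient.
Color marchVolumeBackward(Color radiance,
                          float segmentLength,     // distance covered by this segment
                          int   numSamples,        // granularity of the back sampling
                          float extinction,        // sigma_a + sigma_s, constant everywhere
                          Color emission,          // emitted light per unit distance
                          Color inScatteredLight)  // in-scattered light per unit distance
{
    float dt = segmentLength / numSamples;
    float transmittance = std::exp(-extinction * dt);   // Beer-Lambert over one step

    // Walk from the far end of the segment back toward the ray origin,
    // repeatedly attenuating what lies behind the current sample and adding
    // what the volume contributes at this step.
    for (int i = 0; i < numSamples; ++i) {
        radiance = scale(radiance, transmittance);
        radiance = add(radiance, scale(add(emission, inScatteredLight), dt));
    }
    return radiance;
}
```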

Volume Rendering Quality

The quality of the rendered output for a scene containing a volume is directly tied to the granularity of the backward sampling along the ray. Allowing more samples greatly reduces the noise in the volume and produces a much smoother image. However, this comes at significant computational cost, as the running time becomes a double exponential.

Simple test demonstrating point lights and DoF blur in fog

 

 

Performance

In order to increase performance, I parallelized my ray caster. The image is subdivided into tiles, where each tile consists of a block of pixels. A number of threads are created based on the hardware concurrency the machine supports, and each thread is assigned a tile. As threads finish processing their tiles, they request new ones from the master thread. In this way, all processors can be fully utilized.
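A simplified sketch of the tile scheduling is below; it hands tiles out through a shared atomic counter rather than an explicit master thread, but the effect is the same, and renderTile stands in for the actual per-tile ray casting:

```cpp
#include <algorithm>
#include <atomic>
#include <thread>
#include <vector>

// Whichever thread finishes its tile simply claims the next unclaimed one.
void renderImageParallel(int imageWidth, int imageHeight, int tileSize,
                         void (*renderTile)(int x0, int y0, int x1, int y1))
{
    int tilesX = (imageWidth  + tileSize - 1) / tileSize;
    int tilesY = (imageHeight + tileSize - 1) / tileSize;
    int tileCount = tilesX * tilesY;

    std::atomic<int> nextTile{0};
    unsigned numThreads = std::max(1u, std::thread::hardware_concurrency());

    std::vector<std::thread> workers;
    for (unsigned t = 0; t < numThreads; ++t) {
        workers.emplace_back([&] {
            for (;;) {
                int tile = nextTile.fetch_add(1);     // claim the next tile
                if (tile >= tileCount) break;         // no tiles left
                int tx = tile % tilesX, ty = tile / tilesX;
                int x0 = tx * tileSize, y0 = ty * tileSize;
                int x1 = std::min(x0 + tileSize, imageWidth);
                int y1 = std::min(y0 + tileSize, imageHeight);
                renderTile(x0, y0, x1, y1);           // trace every pixel in the tile
            }
        });
    }
    for (auto& w : workers) w.join();
}
```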

The example on the left was rendered with far greater granularity and path length than the above image. Even though the above image has 16 times as many pixels as the one on the left, the left image took longer to render.

Other Miscellaneous Features

I also added some features for programmer productivity. The image rendering function takes an optional parameter that enables previewing. The tiles are organized into blocks, and I typically divide each image into 5 blocks (20% of the image each). Whenever a block finishes processing, the partial image is displayed on screen. This lets me monitor the render and save time by quickly exiting the program if my parameters are incorrect or the image is not rendering as desired. This is particularly useful because high resolution images and volumes can take many hours to render, even with optimal parallelization.
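Conceptually the preview hook is just a callback fired as each block completes; a minimal sketch with made-up names:

```cpp
#include <functional>

// The render loop is split into blocks (typically 5, i.e. 20% of the tiles
// each) and a display callback fires after every block, so a bad render can
// be spotted and aborted early.
void renderWithPreview(int numBlocks,
                       const std::function<void(int block)>& renderBlock,
                       const std::function<void()>& showPartialImage /* optional */)
{
    for (int block = 0; block < numBlocks; ++block) {
        renderBlock(block);        // render this block's share of the tiles
        if (showPartialImage)      // preview enabled?
            showPartialImage();    // push the partial image to the screen
    }
}
```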

 

 

Medical Imaging with Unity

Posted on Jan 16, 2017

Over the last few months, I have been continuing the volumetric rendering for medical imaging project that I first wrote about here.

In September, we got access to the Korean Visible Human data set. In addition to MRI and CAT scans, this data set contains full color photos of the cadaver cross sections. Our new goal was to render the volumes formed by the color images and selectively display models of specific organs, bones, and other body parts.

We ported the project to Unity and successfully rewrote the shader algorithm using Unity’s ShaderLab HLSL API. The core algorithm was unchanged, but some additions were made in order to achieve the above goal. The cadaver data set is complemented by a legend data set, which consists of images that match one-to-one with the cadaver cross sections. There is a unique legend data set for each body part. For example, to render the cranium, there is a set of legend images corresponding to the cadaver images that contain the head sections.

A sample image of a legend cross section

A sample image of the actual cadaver cross section.

 

 

Each legend and cadaver image is uniquely identified by an index. The legend image is purely black and white. For each image, the black pixels signify that the corresponding pixel in the cadaver image must be sampled when performing the volumetric rendering. Pixels in the cadaver image that correspond to white pixels in the legend image are discarded.

In this way, the shader can filter the volume so that only the desired body parts are displayed. This is the main modification made to the shader algorithm after porting; the rest of it is mostly unchanged.
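The per-sample test boils down to the following, shown here as plain C++ rather than the actual HLSL, with hypothetical names; both texels are assumed to have been fetched from the legend and cadaver 3D textures at the same position along the ray:

```cpp
struct Color { float r, g, b, a; };

// Legend images are pure black and white: black marks pixels belonging to the
// selected body part, white marks everything else.
Color maskedSample(const Color& legendTexel, const Color& cadaverTexel)
{
    bool insideSelectedPart = (legendTexel.r < 0.5f);

    if (!insideSelectedPart)
        return {0.0f, 0.0f, 0.0f, 0.0f};  // discarded: contributes nothing to the volume

    return cadaverTexel;                   // keep the full-color cadaver sample
}
```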

We were able to successfully build a prototype of the application that demonstrates displaying various body parts in VR on the Oculus Rift. Work on enhancing the resolution of the displayed models is ongoing in 2017. Our primary short term goals are to build a menu system and to fix some visual artifacts that occur in certain situations. Below are some screenshots of the prototype.

 

Swarm GPU

Posted on Oct 6, 2016

Recently, I’ve been experimenting with shaders and programming the GPU. Some time ago, I built a particle system that operated on Perlin noise and Newtonian physics, and I’ve always wanted to push that particle system further. The first iteration of that project was based purely on visualizing Simplex noise (a derivative of Perlin noise). I called it Swarm.

However, I was always severely constrained by the resource consumption of the old particle system. Due to a mixture of reasons, including my inefficient code and the complexity of the Simplex noise algorithm, Swarm simply could not run at acceptable framerates with high particle counts. For example, on my laptop, the framerate would often slow to single digits once the particle count rose above a few thousand. This constraint affected the resultant String Theory project as well.

I soon realized that a particle system implemented entirely on the GPU with shaders could surmount this problem. With this project, I was able to replicate the aesthetics and functionality of the original Swarm project, this time with 800,000 particles. That’s 3 orders of magnitude more particles running at an average of 60 FPS!

Below is a video recording of the particle system in action. The framerate is slower during recording due to the added overhead of the recording software. Looking forward to implementing the physics on the GPU!

Made with ♥ using OpenFrameworks

Project can be found on Github here

Volumetric Rendering for Medical Imaging

Posted on Oct 6, 2016

This summer, my friend and I built an application to visualize human anatomy. We used a data source consisting of 120 consecutive X-ray images taken as slices of a human skull. The objective was to turn these stacked 2D images into a 3D model that could be treated as if it were rendered natively from a 3D mesh.


One of the slice images from the data set.

The technique we used to generate the 3D model is called Volumetric Rendering. There are multiple ways to perform this process, with both CPU and GPU optimized implementations.

Since the GPU is very well suited to such parallel execution techniques, we chose to write a volume raycasting algorithm to achieve the effect we desired. The entire volumetric rendering is performed on the GPU with shaders, which results in a very high performance rendering algorithm.

The images are loaded into an OpenGL 3D texture from a C++ application. Once the data is loaded, the texture is iteratively sampled in the fragment shader using rays that are “cast” into the volume from the camera (the “eye”). For each pixel on the screen, a ray is cast. Each ray is marched through the texture, with the color values being accumulated along the way.
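Roughly, the loading step on the C++ side packs the slices into the 3D texture like this (a simplified sketch; error handling and the exact pixel format are omitted, and the helper name is made up):

```cpp
#include <vector>
#include "ofMain.h"   // OpenFrameworks pulls in the OpenGL headers

// Pack the 2D slice images into one OpenGL 3D texture.  `slices` holds the
// decoded images, each width*height RGBA bytes, ordered from the first cross
// section to the last; the slice index becomes the z coordinate of the volume.
GLuint buildVolumeTexture(const std::vector<std::vector<unsigned char>>& slices,
                          int width, int height)
{
    int depth = static_cast<int>(slices.size());

    // Copy every slice into one contiguous block of voxels.
    std::vector<unsigned char> voxels;
    voxels.reserve(static_cast<size_t>(width) * height * depth * 4);
    for (const auto& slice : slices)
        voxels.insert(voxels.end(), slice.begin(), slice.end());

    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_3D, tex);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
    glTexImage3D(GL_TEXTURE_3D, 0, GL_RGBA8, width, height, depth, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, voxels.data());
    return tex;
}
```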

This implementation allows for a very flexible approach for different kinds of image data sets. Results are below.


There are some artifacts in the rendering, and the overall structure of the skull is stretched along the y-axis. Since every image data set is different, we worked on making the volume raycasting algorithm as generic as possible, which is what causes these artifacts.

Built with ♥ using OpenFrameworks.

Project can be found on Github here

String Theory

Posted on Jul 4, 2014

Moments in time are like individual ephemeral images layered on top of each other, ones that we usually process individually. The question arises, what happens when these scenes are amalgamated, blended, mixed, layered, and stacked in such a way as to become indistinguishable from each other? Do new details arise?

This post represents the next series of generative art designs I am creating. The gallery below contains the images produced by this program; the program I created for this project has the same spirit as the previous one, Artificial Artist, albeit with different principles. Whereas Artificial Artist was a foray into the field, String Theory represents a development of my artistic coding skills specifically for this new medium. Here, I have attempted to add behavioral elements and physical interaction to the system of Artificial Artist.

At the core of the program are three elements: Attractors, Emitters, and Agents. As their names suggest, the emitters constantly discharge point agents, which feel a variable attractive force from the attractors. What is key is that every single attractor exerts a pulling force on every single agent, in fidelity with the gravitational paradigm. The equation I used to structure the behavior of the system is the same one used to calculate the gravitational attraction of macroscopic bodies, which makes the entire system an accurate representation of 2D reality. The pictures below are themselves layered sequences of images showing the agents’ motion.
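Concretely, each agent accumulates a force F = G * m1 * m2 / r^2 from every attractor and then integrates its velocity and position. The original program is Java, but the core loop looks roughly like this (illustrative names, shown in C++):

```cpp
#include <cmath>
#include <vector>

struct Vec2      { float x, y; };
struct Attractor { Vec2 pos; float mass; };
struct Agent     { Vec2 pos, vel; float mass; };

// Every attractor pulls on every agent with the inverse-square law, directed
// along the line between them, just like gravitation between macroscopic bodies.
void applyAttraction(std::vector<Agent>& agents,
                     const std::vector<Attractor>& attractors,
                     float G, float dt)
{
    for (auto& agent : agents) {
        Vec2 force{0.0f, 0.0f};
        for (const auto& a : attractors) {
            float dx = a.pos.x - agent.pos.x;
            float dy = a.pos.y - agent.pos.y;
            float r2 = dx * dx + dy * dy + 1e-4f;     // avoid division by zero up close
            float r  = std::sqrt(r2);
            float f  = G * a.mass * agent.mass / r2;  // inverse-square magnitude
            force.x += f * dx / r;                    // direction toward the attractor
            force.y += f * dy / r;
        }
        // Newtonian update: a = F / m, then integrate velocity and position.
        agent.vel.x += (force.x / agent.mass) * dt;
        agent.vel.y += (force.y / agent.mass) * dt;
        agent.pos.x += agent.vel.x * dt;
        agent.pos.y += agent.vel.y * dt;
    }
}
```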

However, the resultant images would be quite lackluster if the attractor and emitter elements weren’t moving. In order to animate the system further, I coded behavioral motion into the attractor and emitter objects. Each respective set moves along a path defined by the Noise function, developed by Ken Perlin in the 1980s after his work on Disney’s Tron (you can read more about it here). The Noise function is a structured randomness function, created to model behaviors in nature that superficially appear random but in fact have too many variables to be purely random. The classic example is the movement of a honeybee in a field of wildflowers. The attractors and emitters move along their paths, but once they hit the edge of the window, they are relocated to a random location in the space.
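A sketch of that wandering motion, shown here with OpenFrameworks’ ofNoise()/ofRandom() as stand-ins for the noise and random routines the original uses (names are illustrative):

```cpp
#include "ofMain.h"   // for ofNoise() and ofRandom()

struct Mover {        // an attractor or an emitter
    float x, y;       // position in the window
    float tx, ty;     // this element's offsets into the noise field
                      // (start tx and ty at different random values so the
                      //  x and y movement are uncorrelated)
};

void stepMover(Mover& m, float speed, float width, float height)
{
    // ofNoise returns a smooth value in [0, 1]; nearby inputs give similar
    // outputs, so the element wanders along a path rather than jittering.
    m.x += (ofNoise(m.tx) - 0.5f) * 2.0f * speed;
    m.y += (ofNoise(m.ty) - 0.5f) * 2.0f * speed;
    m.tx += 0.01f;    // walk forward through the noise field
    m.ty += 0.01f;

    // Once an element leaves the window, drop it back at a random location.
    if (m.x < 0 || m.x > width || m.y < 0 || m.y > height) {
        m.x = ofRandom(width);
        m.y = ofRandom(height);
    }
}
```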

Unfortunately, this program was too resource-intensive to make a scaled-down JavaScript version practical, but I have included a video below that shows the “live action” version of the agents’ movement. The attractors and emitters are hidden, but you can sometimes glimpse them when they are silhouetted by the agents. The agents’ motion may appear completely random, but as the images in the gallery show, structure can be seen in the paths the agents take.

For each image, I altered an aspect of the program. Some changes are obvious, like the agents’ colors. Others are less apparent, like the number of elements in the system and the rate of particle emission.

The video is best seen in full screen. The gallery images can be expanded to full screen.

In addition, this project is on GitHub, for anyone to view and modify. It can be found here



Artificial Artist

Posted on Apr 6, 2014

Technology has influenced all spheres of life, from communication to health to art. What many people don’t realize is the extent to which art has not just been changed but supplemented by technology. Math and logic form the basis of all aspects of nature, especially art. Our minds are built in such a way that the innate logical and mathematical calculations we perform are so natural that, when writing a song, painting a picture, or performing any other artistic process, we do them unconsciously.

This post represents a new avenue of creative exploration that I will be pursuing and sharing. Computer code, though it may seem like a mundane and uninventive medium, in fact holds a vast wealth of possible creative pursuits. Just as a saw may not appear to be a creative tool to anyone but the artistically minded carpenter, code is a tool; all one needs to do is learn how to use it.

The process of art making can be broadly divided into two actions: the concept that forms in the mind, and the hand or other limb that implements the mind’s concept. What I have attempted to achieve with the Artificial Artist is the segregation of the two. When I started this project, I had a rough idea of what I wanted to achieve, but no idea how to implement it. By transferring the actual drawing process to the computer, the hand that draws is separated from the mind. Ultimately, I designed the program to be self sufficient; rather than being a puppet to my strings, it is self driven and doesn’t require my input or any other entity’s (both literally and figuratively). The idea may have originated in my mind, but it is visualized by the computer.

For those who would like to know, the program was originally written in Java, but I have included a slightly simplified JavaScript implementation at the bottom of this page that shows the drawing process. It is designed to refresh every thirty seconds. Since the program takes its starting values from variables that change with every refresh, the pattern drawn will be different each time. The genre of art that this falls into is relatively new, having only arisen in the last 60 years or so. It is called generative media.


Gallery: Artificial Artist images 1–7.