Posts Tagged ‘Deep Opacity Maps’

Performance

As a final step for this project I tested the performance of the three algorithms I implemented throughout the summer: Opacity Shadow Maps (OSM), Deep Opacity Maps (DOM) and Bounding Opacity Maps (BOM).

I measured the performance in frames per second (FPS), using the average FPS reported by the Fraps benchmarking tool over a period of 60 seconds. The GPU (an Nvidia GT 335M) was the dominant factor in the measurements, because all three algorithms are GPU-bound: they render to texture multiple times and perform no significant computation on the CPU.

The results are plotted in Figure 1.

Figure 1 Plot generated using gnuplot: FPS as a function of the number of layers for the three algorithms implemented for this project. Even though Bounding Opacity Maps (BOM) have the worst performance, they produce realistic renderings even with the minimum number of layers tested, because they follow the real-world distribution of light.


Splitting scheme

  • Linear splitting

The most frequently used splitting scheme for choosing the positions of the opacity maps is the linear one. It is the primary splitting scheme in both Opacity Shadow Maps (OSM) and Deep Opacity Maps (DOM).
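To make the scheme concrete, here is a minimal GLSL-style sketch (illustrative only, not the project's actual shader code; zStart, zEnd and numLayers are assumed parameters describing the object's depth range along the light direction and the number of maps):

    // Sketch only: map a fragment's light-space depth z to a layer index
    // using linear splitting, i.e. equally sized slices of the depth range.
    float linearLayer(float z, float zStart, float zEnd, float numLayers)
    {
        float t = clamp((z - zStart) / (zEnd - zStart), 0.0, 1.0);
        return floor(t * numLayers);
    }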

However, if we look at the light's distribution on real-world translucent objects such as clouds, trees or hair, we can observe that for dense objects the lighting caused by self-shadowing changes only at the very beginning of the object (Figure 1).

Figure 1 Real-world photographs of clouds (a) and bushes (b). It can be observed that for these objects the lighting changes only at the very beginning of the object.

In such cases a linear distribution produces layers that contain almost the same information from a certain layer onwards (Figure 2). A distribution with more layers near the beginning of the object and fewer towards the end should therefore give better results.

Figure 2 Layers obtained using linear splitting on a scene with a dense model. The last four layers contain almost the same information.

  • Logarithmic splitting

The logarithmic distribution has a decreasing growth rate and therefore produces a splitting with a higher density of layers at the beginning of the object (Figure 3).

Figure 3 Comparison between the linear and logarithmic distributions: linear increase (blue) versus logarithmic increase (green) (a); linear split (b); logarithmic split (c).
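As a sketch, assuming the same illustrative parameters as in the linear snippet above, one possible logarithmic mapping looks like this (the exact function used in the project may differ):

    // Sketch only: log2(1 + t) maps [0, 1] to [0, 1] but grows fastest
    // near t = 0, so more layers end up near the beginning of the object.
    float logLayer(float z, float zStart, float zEnd, float numLayers)
    {
        float t = clamp((z - zStart) / (zEnd - zStart), 0.0, 1.0);
        float tLog = log(1.0 + t) / log(2.0);
        return floor(tLog * numLayers);
    }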

Obtaining layers that have different shadow information prevents artifacts like the ones shown in Figure 4.

Figure 4 Difference in rendering when using linear splitting (a) and logarithmic splitting (b). Linear splitting (a) produces incorrect self-shadows because most of the layers contain the same information (Figure 2).

  • Hybrid split

Although logarithmic splitting produces good results on dense objects, it doesn't perform well on sparse objects, because there the lighting caused by self-shadowing changes throughout the entire length of the object (Figure 5).

Figure 5 Real-world photographs of clouds (a) and trees (b). It can be observed that for sparse objects the lighting changes throughout the entire length of the object.

The rendering artifacts that occur when logarithmic splitting is performed on sparse objects can be seen in Figure 6.

Figure 6 Difference in rendering when using logarithmic splitting (a) and linear splitting (b). Logarithmic splitting (a) produces artifacts: the willow is incorrectly lit near the top, because the layers don’t uniformly cover the whole length of the sparse object.

However, because the linear splitting scheme can be used where the logarithmic one fails and vice versa, a hybrid split between the two, chosen based on the given scene, should produce artifact-free renderings. More on this hybrid split in the next post.
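Purely as a guess at what such a hybrid could look like (the real scheme is described in the next post), the two mappings from the snippets above could be blended with a per-scene weight; density is a hypothetical parameter reflecting how dense the object is:

    // Sketch / guess only: blend the linear and logarithmic splits.
    // density = 1.0 favors the logarithmic split (dense objects),
    // density = 0.0 the linear one (sparse objects).
    float hybridLayer(float z, float zStart, float zEnd,
                      float numLayers, float density)
    {
        float t = clamp((z - zStart) / (zEnd - zStart), 0.0, 1.0);
        float tLog = log(1.0 + t) / log(2.0);
        return floor(mix(t, tLog, density) * numLayers);
    }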

Bounding Opacity Maps

Because Deep Opacity Maps (DOM) provide information only about the starting positions of the splitting points, two major issues appear:

  1. The positions of the splitting points can't be precisely determined for every type and shape of object, because the end splitting position is not specified.
  2. The layers follow the light's distribution only at the beginning of the object, where the start points are known via the depth map (Figure 1).

Figure 1 A translucent full sphere as seen in real-life (a), the distribution of layers when using DOM (b) and the way the light is distributed in real-life (c).

Furthermore, the example from Figure 1 is not a special case: the light distribution follows the shape of the object for other translucent real-world objects as well, such as blonde hair or trees, as can be seen in Figure 2 and Figure 3.

Figure 2 Real-life lighting of blonde hair (a) and the corresponding layers and light distribution (b). It can be observed that the layers and the light distribution follow the shape of the object.

Figure 3 Real-life lighting of a tree (a) and the corresponding layers and light distribution (b). It can be observed that the layers and the light distribution follow the shape of the object.

By computing an extra depth map that stores the depth of the furthest points instead of the closest ones, the limitation of DOM regarding the missing end splitting points is solved and, more importantly, the layers follow the real-life light distribution.

This is achieved by Bounding Opacity Maps (BOM), in which the layering follows the real-life light distribution by interpolating between the values of the two depth maps when choosing the splitting points.
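A minimal sketch of the idea, with illustrative names rather than the actual Crystal Space code: both splitting bounds are now fetched per texel from the two depth maps, so the layer boundaries are interpolated between the near and far surfaces of the object:

    uniform sampler2D startDepthMap; // closest points, as in DOM
    uniform sampler2D endDepthMap;   // furthest points, the BOM addition

    // Sketch only: the splitting range follows the object's shape because
    // both of its ends come from per-texel depth information.
    float bomLayer(vec2 lightUV, float z, float numLayers)
    {
        float zStart = texture2D(startDepthMap, lightUV).r;
        float zEnd   = texture2D(endDepthMap, lightUV).r;
        float t = clamp((z - zStart) / (zEnd - zStart), 0.0, 1.0);
        return floor(t * numLayers); // any splitting scheme fits here
    }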

The difference in splitting between DOM and BOM in Crystal Space can be seen in Figure 4 and the difference in rendering in Figure 5.

Figure 4 Difference in splitting between DOM (a) and BOM (b) when using 16 layers – the first layer corresponds to light green and the last to black. Because the end splitting points are not specified in DOM, the layers don't cover the whole length of the object (the final color is not black, as in (b)).

Figure 5 Difference in rendering between DOM (a) and BOM (b) when using 16 layers. Because DOM don't specify the end splitting points, some grass strands (inside the red circle), which correspond to the last layer, receive false shadow information.

Deep Opacity Maps in Crystal Space

Deep Opacity Maps (DOM) represent a nice way of removing artifacts caused by the linear splitting in Opacity Shadow Maps (OSM).

The novelty of DOM is that they align the opacity maps (layers) with the hair geometry by first computing a depth map and then using this information as an offset for the linear splitting performed in a later rendering pass. In the following picture, taken from [DOM], you can see the difference between splitting in OSM (a) and DOM (b).
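As a minimal sketch of that idea (illustrative names and a fixed layerSize, not the paper's exact code), the depth map supplies a per-texel starting depth and the linear split is applied from there:

    uniform sampler2D depthMap; // closest hair points, seen from the light

    // Sketch only: start the linear splitting at the depth read from the
    // depth map, so the layers hug the geometry. layerSize is the fixed
    // distance between two consecutive splits.
    float domLayer(vec2 lightUV, float z, float layerSize)
    {
        float zStart = texture2D(depthMap, lightUV).r;
        return floor((z - zStart) / layerSize);
    }

Note how layerSize still has to be chosen somehow; that is exactly the limitation discussed below.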

The advantage of DOM is that visual artifacts do not occur even when using just a few layers, because, by being aligned with the geometry, the splitting follows the light distribution. Next is a comparison between the rendering obtained with OSM using 60 layers (a) and with DOM using 16 layers (b) in Crystal Space:

However, one major disadvantage of DOM is that even though they explicitly specify a starting splitting position for each point (via the depth map), no information about the stopping splitting position is given whatsoever. This can create a lot of difficulties when trying to make the implementation work with objects of different sizes and shapes. Using either a constant, or the length of the object measured at a particular point, to obtain the distance between two consecutive splits is too restrictive and thus fails.

This is why for the remaining time of the project I plan to extend DOM to compute and use information from a depth map containing the depth of the farthest points (the stopping splitting positions) as well. I also want to experiment with different splitting schemes, apart from the linear one, and add support for opaque objects, possibly by using the information provided by the depth map.

Opacity Shadow Maps artifacts

Opacity shadow maps can suffer from severe artifacts if not “enough” maps are generated. The artifacts are caused by the fact that each opacity map introduces new points without shadow information. Below you can see the grass from CS rendered using only 15 opacity maps (the diagonal lines perpendicular to the light’s direction are the artifacts, in case they weren’t obvious enough 🙂 ):

The limit is 15 because, at the moment, the textures are passed as an array of sampler2D, and only 16 textures (15 opacity + 1 material) can be used in one shader on my video card, an NVIDIA GeForce 9500M GS. However, four times more maps can be generated if every channel of every texture is used, yielding 60 maps in the same 15 textures:
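As a sketch of how the packing works (illustrative code, not the actual shader): opacity map i lives in texture i / 4, channel i mod 4. On hardware of this generation sampler arrays effectively need constant indices, so a real shader ends up unrolling the lookup:

    uniform sampler2D opacityMaps[15]; // 60 maps, 4 packed per RGBA texture

    // Sketch only: fetch opacity map i from texture i / 4, channel i % 4.
    // Older GLSL has no % operator on ints, hence the subtraction.
    float sampleOpacityMap(vec2 uv, int i)
    {
        vec4 texel = texture2D(opacityMaps[i / 4], uv);
        int c = i - (i / 4) * 4;
        if (c == 0) return texel.r;
        if (c == 1) return texel.g;
        if (c == 2) return texel.b;
        return texel.a;
    }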

As can be seen from the picture above, the artifacts are now slightly less visible (there are more of them, but they are smaller), so increasing the number of maps is one way of trying to remove them. Another (smarter) way is to align the opacity maps with the geometry, using information from a depth map, as described in Deep Opacity Maps (DOM). This is what I plan to implement for the second part of GSoC. Here is how DOM should improve the rendering: