
Bounding Opacity Maps

Because Deep Opacity Maps (DOM) provide information only about the starting positions of the splitting points, two major issues arise:

  1. The positions of the splitting points cannot be determined precisely for an arbitrary type / shape of object, because the end splitting position is not specified.
  2. The layers follow the light’s distribution only at the beginning of the object, where the start points are known via the depth map (Figure 1).

Figure 1 A translucent full sphere as seen in real life (a), the distribution of layers when using DOM (b) and the way the light is distributed in real life (c).

Furthermore, the example from Figure 1 is not an isolated case: the light distribution follows the shape of the object for other translucent real-world objects as well, such as blonde hair or trees, as can be seen in Figure 2 and Figure 3.

Figure 2 Real-life lighting of blonde hair (a) and the corresponding layers and light distribution (b). It can be observed that the layers and the light distribution follow the shape of the object.

Figure 3 Real-life lighting of a tree (a) and the corresponding layers and light distribution (b). It can be observed that the layers and the light distribution follow the shape of the object.

By computing an extra depth map that stores the depth of the farthest points instead of the closest ones, the limitation of DOM regarding the missing end splitting points is solved and, more importantly, the layers can follow the light’s real-life distribution.

This is achieved by Bounding Opacity Maps (BOM), in which the layering follows the real-life light distribution by interpolating between the values of the two depth maps when choosing the splitting points.
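The interpolation between the two depth maps can be sketched as below. The function name, the layer count, and the [0,1] depth convention are assumptions made for illustration, not the actual Crystal Space implementation:

```c
#define BOM_NUM_LAYERS 16

/* Hypothetical sketch: given the per-texel depths from the two depth
 * maps (the closest and farthest points of the object along the light
 * direction, both in [0,1]), the splitting point of each BOM layer is
 * a linear interpolation between them, so the layers stretch over the
 * whole thickness of the object. */
static float bom_split_point(float d_near, float d_far, int layer)
{
    float t = (float)layer / (float)BOM_NUM_LAYERS; /* 0 at the front, 1 at the back */
    return d_near + t * (d_far - d_near);
}
```

With layer 0 the splitting point coincides with the near depth map and with layer BOM_NUM_LAYERS with the far one, which is exactly why the last layer reaches the end of the object in Figure 4 (b).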

The difference in splitting between DOM and BOM in Crystal Space can be seen in Figure 4 and the difference in rendering in Figure 5.

Figure 4 Difference in splitting between DOM (a) and BOM (b) when using 16 layers – the first layer corresponds to light green and the last layer to black. It can be seen that, because the end splitting points are not specified in DOM, the layers don’t cover the whole length of the object (the final color is not black, as it is in (b)).

Figure 5 Difference in rendering between DOM (a) and BOM (b) when using 16 layers. Because DOM doesn’t specify the end splitting points, some grass strands (inside the red circle), corresponding to the last layer, receive false shadow information.

  1. November 8, 2011 at 7:24 am

    How did you compute the nearest and farthest distance from the light source to the grass? Did you divide the grass into four layers on the basis of this information?

  2. voicualexandruteodor
    November 8, 2011 at 8:04 am

    I rendered two depth maps.
    The first is similar to a shadow map (actually exactly like a shadow map), and from it I got the nearest distance from the light source.
    The second depth map, which contains information about the farthest distance, can be obtained by changing the compare function for the z-buffer to GL_GREATER / GL_GEQUAL instead of GL_LESS / GL_LEQUAL.

    I divided the grass into four layers using a linear interpolation between these two maps.

    For more information check my report (Section 5.1 page 30): https://volumetricshadows.files.wordpress.com/2011/09/report.pdf

  3. November 8, 2011 at 2:13 pm

    Thank you for your explanation. Could you tell me, in your implementation of Opacity Shadow Maps, how did you define the position of the nearest opacity map from the light and the farthest? If I were asked to do the job, I would render two depth maps, then use the minimum and maximum value from the two maps respectively, but … how could I get to know the max or min value in a texture?

  4. voicualexandruteodor
    November 8, 2011 at 3:12 pm

    In the rendered textures you will have values in [0,1] or [-1,1], and in order to find the real coordinate (the position in the world relative to the light) you will have to use the world-view-projection matrix, as well as the light position itself.
    You can check this post here: http://takinginitiative.net/2011/05/15/directx10-tutorial-10-shadow-mapping/

    • November 8, 2011 at 3:58 pm

      I am just wondering if there is any convenient way to get the maxima or minima directly from the depth texture, so I can use them as the basis for layering.

      All I can think of is to copy the texture values from the GPU to the CPU, then compare the values to find the maxima or minima.
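The two techniques discussed in this thread – keeping the farthest fragment by inverting the depth compare function, and scanning a read-back depth texture for its extrema – can be sketched on the CPU as follows. The function names and the [0,1] depth convention are hypothetical; in real code the per-texel comparison is done by the z-buffer (glDepthFunc, with the depth buffer cleared to 0.0 for the GL_GREATER pass) and the read-back by something like glReadPixels:

```c
#include <stddef.h>

/* Per-texel depth test: the near map clears depth to 1.0 and keeps the
 * smaller value (GL_LESS); the far map clears depth to 0.0 and keeps
 * the larger value (GL_GREATER). */
static void depth_test(float *stored, float incoming, int keep_greater)
{
    if (keep_greater ? (incoming > *stored) : (incoming < *stored))
        *stored = incoming;
}

/* Global min/max over a read-back depth texture, as suggested in the
 * last comment: copy the texels to the CPU and scan them once. */
static void depth_min_max(const float *texels, size_t n,
                          float *min_out, float *max_out)
{
    *min_out = 1.0f;
    *max_out = 0.0f;
    for (size_t i = 0; i < n; ++i) {
        if (texels[i] < *min_out) *min_out = texels[i];
        if (texels[i] > *max_out) *max_out = texels[i];
    }
}
```

A single CPU pass is the simplest option; a common GPU alternative is a reduction over successively smaller render targets (mipmap-style), which avoids the read-back stall.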

