
Splitting scheme

  • Linear splitting

The most frequently used splitting scheme for choosing the opacity maps’ positions is the linear one. It has been used as the primary splitting scheme in both opacity shadow maps (OSM) and deep opacity maps (DOM).
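As a concrete illustration (this code is not from the original report), the linear scheme simply places the layer boundaries at equal depth intervals across the object’s extent along the light direction; `near` and `far` are assumed here to bound the object in light space:

```python
def linear_split(near, far, layers):
    """Evenly spaced layer boundaries between near and far (light space)."""
    step = (far - near) / layers
    return [near + i * step for i in range(layers + 1)]

# Example: 4 layers over a volume spanning depths 1..9 in light space
# gives boundaries at 1, 3, 5, 7 and 9.
print(linear_split(1.0, 9.0, 4))
```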

However, if we look at the light distribution on real-world translucent objects such as clouds, trees, or hair, we can observe that for dense objects the lighting caused by self-shadowing changes only at the very beginning of the object (Figure 1).

Figure 1 Real-world photographs of clouds (a) and bushes (b). It can be observed that for these objects the lighting only changes at the very beginning of the object.

In such cases a linear distribution produces layers that contain almost the same information from a certain layer onwards (Figure 2). A distribution with more layers near the beginning of the object and fewer at the end should therefore give better results.

Figure 2 Layers obtained using linear splitting on a scene with a dense model. The last four layers contain almost the same information.

  • Logarithmic splitting

The logarithmic distribution has a slower increase rate and therefore produces a splitting that has a higher density of layers at the beginning of the object (Figure 3).
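The post does not spell out the exact formula. One common way to get this behaviour (borrowed from the logarithmic split scheme used for cascaded shadow maps, so an assumption here rather than the author’s stated method) is to space the boundaries uniformly in log depth, which clusters them near the start of the volume:

```python
def logarithmic_split(near, far, layers):
    """Boundaries uniformly spaced in log depth: z_i = near * (far/near)^(i/K).

    Requires near > 0. Layers cluster toward the near end of the volume,
    matching the denser sampling wanted at the start of a dense object.
    """
    ratio = far / near
    return [near * ratio ** (i / layers) for i in range(layers + 1)]

# The first interval is much thinner than the last:
# depths 1..16 split into boundaries 1, 2, 4, 8, 16.
print(logarithmic_split(1.0, 16.0, 4))
```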

Figure 3 Comparison between linear and logarithmic distributions. Linear increase, blue, versus logarithmic increase, green (a), linear split (b) and logarithmic split (c).

Obtaining layers that have different shadow information prevents artifacts like the ones shown in Figure 4.

Figure 4 Difference in rendering when using linear splitting (a) and logarithmic splitting (b). Linear splitting (a) produces incorrect self-shadows because most of the layers contain the same information (Figure 2).

  • Hybrid splitting

Although logarithmic splitting produces good results on dense objects, it doesn’t perform well on sparse ones, because there the lighting caused by self-shadowing changes throughout the entire length of the object (Figure 5).

Figure 5 Real-world photographs of clouds (a) and trees (b). It can be observed that for sparse objects the lighting changes throughout the entire length of the object.

The rendering artifacts that occur when logarithmic splitting is performed on sparse objects can be seen in Figure 6.

Figure 6 Difference in rendering when using logarithmic splitting (a) and linear splitting (b). Logarithmic splitting (a) produces artifacts: the willow is incorrectly lit near the top, because the layers don’t uniformly cover the whole length of the sparse object.

However, because the linear splitting scheme works where the logarithmic one fails and vice versa, a hybrid split between the two, chosen based on the given scene, should produce artifact-free renderings. More on this hybrid split in the next post.
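The post leaves the blending rule for the next installment, so the following is only a plausible sketch (following the “practical split scheme” idea from parallel-split shadow maps, not necessarily the author’s formulation): interpolate each boundary between its linear and logarithmic positions with a weight chosen from the scene’s density.

```python
def hybrid_split(near, far, layers, weight):
    """Blend linear and logarithmic boundaries per layer.

    weight = 0 gives the pure linear split (sparse objects),
    weight = 1 gives the pure logarithmic split (dense objects).
    """
    ratio = far / near
    bounds = []
    for i in range(layers + 1):
        t = i / layers
        linear = near + t * (far - near)
        logarithmic = near * ratio ** t
        bounds.append(weight * logarithmic + (1.0 - weight) * linear)
    return bounds

# A half-and-half split for a moderately dense object.
print(hybrid_split(1.0, 16.0, 4, 0.5))
```

With `weight` exposed as a per-scene parameter, a dense hair model can push it toward 1 while a sparse willow keeps it near 0.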

  1. December 16, 2011 at 8:44 am

    I am reading section 5.2 Splitting scheme of your report. I have a problem about the distances between adjacent two layers when splitting the grass volume.

    I also implemented the deep opacity maps algorithm. In my implementation, the program first renders the depth texture and the deep opacity maps from the light’s point of view, so the distances I use when splitting the hair volume are measured in light space. However, in the final step the hair volume is rendered in eye space, and the distances used in the last step cannot be used when fetching the opacity values. How did you deal with this problem?

    There is a sentence in your report saying that “the range of the object is given in local coordinates from 0 to 1”. Do you mean you split the grass volume in model space? How did you make it? How did you transform to light space?

    Thanks in advance.

  2. voicualexandruteodor
    December 16, 2011 at 9:18 am

    In the last step you should have the world-space 3D coordinates of the currently rendered fragment (in the fragment shader), and you also have the light’s position in world space.
    With these two you can compute the distance between the light and any point of the hair volume, and you can normalize it between 0 and 1 using the near and far planes from the render done from the light’s perspective.
    Again, you might find this tutorial helpful (HLSL for Shadow Rendering – the ‘lpos’ vector): http://takinginitiative.net/2011/05/15/directx10-tutorial-10-shadow-mapping/.
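The normalization described in the reply above can be sketched as follows; `light_near` and `light_far` stand for the near/far planes of the light’s render pass (the names are illustrative, not taken from the original code):

```python
import math

def normalized_light_distance(frag_pos, light_pos, light_near, light_far):
    """Distance from the light to a world-space fragment, remapped to [0, 1]
    using the near/far planes of the light's render pass."""
    dx, dy, dz = (f - l for f, l in zip(frag_pos, light_pos))
    dist = math.sqrt(dx * dx + dy * dy + dz * dz)
    t = (dist - light_near) / (light_far - light_near)
    return min(max(t, 0.0), 1.0)  # clamp so out-of-range samples stay valid

# A fragment 6 units from the light, with near=1 and far=11, maps to 0.5.
print(normalized_light_distance((6.0, 0.0, 0.0), (0.0, 0.0, 0.0), 1.0, 11.0))
```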

