
Performance

As a final step for this project I tested the performance of the three algorithms I implemented throughout the summer: Opacity Shadow Maps (OSM), Deep Opacity Maps (DOM) and Bounding Opacity Maps (BOM).

I measured the performance in frames per second (FPS), using the average FPS reported by the Fraps benchmarking tool over a period of 60 seconds. The GPU (an Nvidia GT 335M) had the biggest influence on the measurements, because all three algorithms are GPU bound: they render to texture multiple times, and no significant computation is done on the CPU.

The performance results are shown in Figure 1.

Figure 1 Plot generated using gnuplot, showing FPS as a function of the number of layers for the three algorithms implemented for this project. Even though Bounding Opacity Maps (BOM) have the worst performance, they produce realistic renderings even with the minimum number of layers tested, because they follow the real-world distribution of light.



Splitting scheme

  • Linear splitting

The most frequently used splitting scheme for choosing the opacity maps' positions is the linear one. It has been used as the primary splitting scheme in both opacity shadow maps (OSM) and deep opacity maps (DOM).

However, if we look at the light's distribution on real-world translucent objects such as clouds, trees or hair, we can observe that for dense objects the lighting caused by self-shadowing changes only at the very beginning of the object (Figure 1).

Figure 1 Real-world photographs of clouds (a) and bushes (b). It can be observed that for these objects the lighting only changes at the very beginning of the object.

In such cases a linear distribution produces layers that contain almost the same information from a certain layer onwards (Figure 2). A distribution with more layers near the beginning of the object and fewer at the end would probably give better results.

Figure 2 Layers obtained using linear splitting on a scene with a dense model. The last four layers contain almost the same information.
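The linear scheme can be sketched in a few lines of Python (an illustration only; the function and parameter names are mine, not Crystal Space code):

```python
def linear_split(z_near, z_far, num_layers):
    """Place the opacity maps at equally spaced depths between the near
    and far bounds of the object, measured along the light direction."""
    step = (z_far - z_near) / num_layers
    return [z_near + (i + 1) * step for i in range(num_layers)]

# For a dense object, most of these equally spaced layers end up
# storing nearly identical shadow information.
print(linear_split(0.0, 8.0, 4))  # [2.0, 4.0, 6.0, 8.0]
```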

  • Logarithmic splitting

The logarithmic distribution grows more slowly and therefore produces a splitting with a higher density of layers at the beginning of the object (Figure 3).

Figure 3 Comparison between linear and logarithmic distributions. Linear increase, blue, versus logarithmic increase, green (a), linear split (b) and logarithmic split (c).
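One way to obtain such a distribution (a sketch, not necessarily the exact formula used in the project) is to take uniform steps in log space, similar to the logarithmic split used by parallel-split shadow maps:

```python
def logarithmic_split(z_near, z_far, num_layers):
    """Uniform steps in log(depth): successive layer spacings grow
    geometrically, so layers cluster near the start of the object.
    Requires z_near > 0."""
    ratio = z_far / z_near
    return [z_near * ratio ** ((i + 1) / num_layers) for i in range(num_layers)]

# Spacings of 1, 2, 4, 8: dense at the beginning, sparse at the end.
print([round(z, 6) for z in logarithmic_split(1.0, 16.0, 4)])  # [2.0, 4.0, 8.0, 16.0]
```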

Obtaining layers that have different shadow information prevents artifacts like the ones shown in Figure 4.

Figure 4 Difference in rendering when using linear splitting (a) and logarithmic splitting (b). Linear splitting (a) produces incorrect self-shadows because most of the layers contain the same information (Figure 2).

  •  Hybrid split

Although logarithmic splitting produces good results on dense objects, it doesn't perform well on sparse objects, because there the lighting caused by self-shadowing changes throughout the entire length of the object (Figure 5).

Figure 5 Real-world photographs of clouds (a) and trees (b). It can be observed that for sparse objects the lighting changes throughout the entire length of the object.

The rendering artifacts that occur when logarithmic splitting is performed on sparse objects can be seen in Figure 6.

Figure 6 Difference in rendering when using logarithmic splitting (a) and linear splitting (b). Logarithmic splitting (a) produces artifacts: the willow is incorrectly lit near the top, because the layers don’t uniformly cover the whole length of the sparse object.

However, because the linear splitting scheme can be used where the logarithmic one fails and vice versa, a hybrid split between the two, chosen based on the given scene, should produce artifact-free renderings. More on this hybrid split in the next post.
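Pending the details, one plausible form of a hybrid (purely my own illustration, similar in spirit to the "practical" split used by PSSM, and not necessarily the scheme the project adopted) blends the two distributions with a per-scene density weight:

```python
def hybrid_split(z_near, z_far, num_layers, density):
    """Hypothetical hybrid scheme: blend linear and logarithmic layer
    positions with a weight in [0, 1], where 0 = sparse object (pure
    linear) and 1 = dense object (pure logarithmic)."""
    splits = []
    for i in range(1, num_layers + 1):
        t = i / num_layers
        linear = z_near + (z_far - z_near) * t
        logarithmic = z_near * (z_far / z_near) ** t
        splits.append((1.0 - density) * linear + density * logarithmic)
    return splits
```

A willow-like sparse object would use a weight near 0, a dense cloud a weight near 1, covering both failure cases with one formula.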

Deep Opacity Maps in Crystal Space

Deep Opacity Maps (DOM) represent a nice way of removing artifacts caused by the linear splitting in Opacity Shadow Maps (OSM).

The novelty of DOM is that they align the opacity maps (layers) with the hair geometry: a depth map is computed first, and its values are used as an offset for the linear splitting performed in a later rendering pass. In the following picture, taken from [DOM], you can see the difference between the splitting in OSM (a) and DOM (b).

The advantage of using DOM is that visual artifacts do not occur even when using just a few layers, because, by being aligned with the geometry, the splitting follows the light distribution. Next is a comparison between the rendering obtained with OSM using 60 layers (a) and DOM using 16 layers (b) in Crystal Space:
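The depth-offset idea amounts to something like the following per-pixel computation (a sketch under my own names, not the actual shader code):

```python
def dom_layer_depths(start_depth, layer_spacing, num_layers):
    """Per-pixel layer positions in Deep Opacity Maps: the depth map
    gives the depth of the closest point (start_depth) for this pixel,
    and linear splits follow from there, so the layers hug the geometry."""
    return [start_depth + i * layer_spacing for i in range(1, num_layers + 1)]

# Two pixels with different start depths get layers at different
# absolute positions, each aligned with the surface it covers.
print(dom_layer_depths(3.0, 0.5, 4))  # [3.5, 4.0, 4.5, 5.0]
```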

However, one major disadvantage of DOM is that even though they explicitly specify a starting split position for each point (via the depth map), no information about the stopping split position is given whatsoever. This creates a lot of difficulties when trying to make the implementation work with objects of different sizes and shapes. Using either a constant, or the length of the object measured at a particular point, as the distance between two consecutive splits is too restrictive and thus fails.

This is why for the remaining time of the project I plan to extend DOM to compute and use information from a depth map containing the depth of the farthest points (the stopping splitting positions) as well. I also want to experiment with different splitting schemes, apart from the linear one, and add support for opaque objects, possibly by using the information provided by the depth map.
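A minimal sketch of that planned extension (hypothetical names, mirroring the description above): with both a closest-point and a farthest-point depth map, the split spacing can be derived per pixel instead of being a global constant.

```python
def extended_layer_depths(start_depth, end_depth, num_layers):
    """With a second depth map storing the depth of the farthest point,
    the layers span exactly [start_depth, end_depth] for every pixel,
    and no scene-dependent constant is needed for the spacing."""
    step = (end_depth - start_depth) / num_layers
    return [start_depth + i * step for i in range(1, num_layers + 1)]

print(extended_layer_depths(2.0, 10.0, 4))  # [4.0, 6.0, 8.0, 10.0]
```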

Opacity Shadow Maps artifacts

Opacity shadow maps can suffer from severe artifacts if not “enough” maps are generated. The artifacts are caused by the fact that each opacity map introduces new points that have no shadow information. Below you can see the grass from CS rendered using only 15 opacity maps (the diagonal lines perpendicular to the light’s direction are the artifacts, in case they weren’t obvious enough 🙂 ):

The limit is 15 because, at the moment, the textures are passed as an array of sampler2D and only 16 (15 opacity + 1 material) textures can be used in one shader on my video card, NVIDIA GeForce 9500M GS. However, 4 times more maps can be generated if every channel of every texture is used, yielding 60 maps (but only 14 textures):

As can be seen from the above picture, the artifacts are now slightly less visible (more of them, but smaller), so increasing the number of maps is one way of trying to remove them. Another (smarter) way is to align the opacity maps with the geometry, using information from a depth map, as described in deep opacity maps (DOM). This is what I plan to implement for the second part of GSoC. Here is how DOM should improve the rendering:
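The channel packing works out as follows (an illustrative helper, not the actual shader code): with one opacity map per RGBA channel, layer i lives in channel i mod 4 of texture i div 4.

```python
def layer_location(layer_index):
    """Map an opacity-layer index to a (texture, channel) pair when
    four maps are packed into the RGBA channels of each sampler2D."""
    return layer_index // 4, layer_index % 4

# Four times as many maps now fit in the same number of texture units.
print(layer_location(0))   # (0, 0)
print(layer_location(59))  # (14, 3)
```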

Opacity Shadow Maps in Crystal Space

After roughly a month of coding I managed to implement the render manager (RM) for opacity shadow maps (OSM).

The basic idea of OSM is to slice through the translucent object(s) with planes perpendicular to the light’s direction, rendering to texture multiple times. These maps store the amount of translucency (accumulated using additive alpha blending) at particular distances and are used to approximate the translucency of every point in the object (the opacity function in the following picture).

Picture taken from OSM.
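In shader terms, the per-point lookup amounts to something like this (a Python sketch of the idea; kappa is the absorption constant from the OSM paper, the rest of the names are mine):

```python
import math

def shadow_value(z, layer_depths, layer_opacities, kappa=1.0):
    """Approximate the opacity function at depth z by linearly
    interpolating the accumulated alpha stored in the two opacity maps
    enclosing z, then convert it to a shadow (transmittance) value via
    exp(-kappa * opacity), as in the OSM paper."""
    if z <= layer_depths[0]:
        return 1.0  # in front of the first map: fully lit
    for i in range(1, len(layer_depths)):
        if z <= layer_depths[i]:
            t = (z - layer_depths[i - 1]) / (layer_depths[i] - layer_depths[i - 1])
            opacity = (1 - t) * layer_opacities[i - 1] + t * layer_opacities[i]
            return math.exp(-kappa * opacity)
    return math.exp(-kappa * layer_opacities[-1])  # behind the last map
```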

At the moment the OSM RM only supports a linear splitting scheme (with interpolation), though it does support both multiple objects and multiple lights. Next I will work on some more advanced splitting schemes, which should get rid of the visual artifacts from linear splitting, and also do some optimizations regarding the render targets and the collision test for each split.

Below you can find a comparison between OSM with 4 and 8 slices and the parallel split shadow maps implementation (PSSM) from CS:

  • PSSM

  • OSM with 4 splits

  • OSM with 8 splits

Setting up the scene

Before developing the new render manager (RM) for opacity shadow maps (OSM) I built a scene that can show the difference between various RMs in CS. The scene contains some translucent objects, such as grass and a cloud made out of spheres. I built it using Blender 2.49 and exported it via blender2crystal.

Here is the scene rendered by 4 different RM:

  • unshadowed – this is the default RM in CS, and it renders the scene without any shadows

  • shadow_pssm – a parallel split shadow map implementation (PSSM), which is the only shadow RM present in CS

  • osm – a prototype of what I am going to implement, which is just a stripped-down version of the unshadowed RM at the moment

As a side note, PSSM is basically the same technique as Cascaded Shadow Maps (CSM), which I also wanted to implement for this project because it uses multiple shadow maps and rendering positions, making it similar in this respect to OSM. However, because it is already implemented in CS, I went directly to writing the OSM RM.

Another year, another blog

So this is my second year at both Google Summer of Code (GSoC) and Crystal Space (CS), after last year’s project, which you can find at: http://hairrendering.wordpress.com/.

This year I plan to implement “Real-Time Volumetric Shadows for Dynamic Objects”, by adding a new render manager in CS.

This idea came to me while trying to figure out why my hair rendering didn’t look as good as other demos, such as NVIDIA’s Nalu. The reason is that my hair plugin lacked self-shadowing, so I studied how this can be implemented by doing an Individual Study Option at my university on “Rendering real-time self-shadowed dynamic hair”. You can find my presentation here.

For this project I plan to implement Opacity Shadow Maps (OSM) for starters and then implement some more advanced techniques, such as Fourier Opacity Mapping (FOM) and/or improve the OSM by using a different sorting algorithm.

If you would like to view the implementation as it progresses, you can check out the CS selfshadow branch via SVN (no account needed). Also, if you experience problems compiling CS, you can read this post here (it’s a little bit old, but it should do the trick).