
Posts Tagged ‘artifacts’

Hybrid split

As I mentioned in the previous post, a hybrid split between the linear and the logarithmic split can be a good idea: when the linear splitting scheme falls short, the logarithmic one can be used, and the other way around.

Because artifacts may occur when multiple layers contain the same information, the criterion for choosing the ratio between linear and logarithmic splitting is to produce consecutive layers that are as different from each other as possible. Put another way, each new layer should bring new information. In terms of computer vision, this translates to keeping the mutual information between these images as small as possible.
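As a rough illustration of how such a ratio could mix the two schemes, the sketch below blends a linear and a logarithmic split with a single parameter, so that a ratio of 0 reproduces the linear split and 1 the logarithmic one (the scale used in Figure 1). The function and parameter names, as well as the exact logarithmic curve, are assumptions and not the project's actual code.

```cpp
#include <cmath>
#include <vector>

// Illustrative sketch: blend the linear and the logarithmic split with a
// single ratio. A ratio of 0 yields the linear split, 1 the logarithmic one.
// A log(1 + x)-shaped depth-to-layer mapping is assumed for the latter.
std::vector<float> HybridSplitFractions (int numLayers, float splitRatio)
{
  const float e = std::exp (1.0f);
  std::vector<float> fractions (numLayers);  // depth fraction of each split, in (0, 1]
  for (int i = 0; i < numLayers; ++i)
  {
    float t = (i + 1.0f) / numLayers;                    // layer fraction
    float linear = t;                                    // evenly spaced splits
    float logSplit = (std::exp (t) - 1.0f) / (e - 1.0f); // splits bunched near the start
    fractions[i] = (1.0f - splitRatio) * linear + splitRatio * logSplit;
  }
  return fractions;
}
```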

Two techniques for measuring the mutual information were tested: the sum of absolute differences and the cross-correlation coefficient.

The sum of absolute differences is straightforward to compute: it sums the absolute value of the difference between each pair of corresponding pixels from the two images.
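A minimal sketch of this measure, assuming both layers are stored as equally sized, flat arrays of grayscale pixel values:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Sum of absolute differences between two layers stored as flat arrays of
// grayscale pixel values (the layers are assumed to have the same size).
float SumOfAbsoluteDifferences (const std::vector<float>& a,
                                const std::vector<float>& b)
{
  float sum = 0.0f;
  for (size_t i = 0; i < a.size () && i < b.size (); ++i)
    sum += std::fabs (a[i] - b[i]);
  return sum;
}
```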

The cross-correlation coefficient is the ratio between the covariance of two images and the product of their standard deviations, and can be computed using the following formula:

r(I_1, I_2) = \frac{\sum_{x,y} (I_1(x,y) - \bar{I}_1)(I_2(x,y) - \bar{I}_2)}{\sqrt{\sum_{x,y} (I_1(x,y) - \bar{I}_1)^2} \sqrt{\sum_{x,y} (I_2(x,y) - \bar{I}_2)^2}}

where \bar{I}_1 and \bar{I}_2 are the means of the two images. Another useful property of the correlation is that its values lie in the range [-1, 1], giving a linear indication of the similarity between the images.
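A corresponding sketch of the coefficient, again assuming the layers are flat arrays of grayscale values; it simply evaluates the covariance divided by the product of the standard deviations:

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Cross-correlation coefficient of two layers: the covariance of the pixel
// values divided by the product of their standard deviations, in [-1, 1].
float CrossCorrelation (const std::vector<float>& a,
                        const std::vector<float>& b)
{
  size_t n = std::min (a.size (), b.size ());
  if (n == 0) return 0.0f;

  float meanA = 0.0f, meanB = 0.0f;
  for (size_t i = 0; i < n; ++i) { meanA += a[i]; meanB += b[i]; }
  meanA /= n; meanB /= n;

  float cov = 0.0f, varA = 0.0f, varB = 0.0f;
  for (size_t i = 0; i < n; ++i)
  {
    float da = a[i] - meanA;
    float db = b[i] - meanB;
    cov += da * db;
    varA += da * da;
    varB += db * db;
  }

  float denom = std::sqrt (varA) * std::sqrt (varB);
  return denom > 0.0f ? cov / denom : 0.0f;
}
```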

As expected from the findings in the previous post, the mutual information is smaller when choosing a more linear split for sparse objects and a more logarithmic one for denser objects (Figure 1).

Figure 1 Plot generated using gnuplot. Linear splitting corresponds to a split ratio of 0 while logarithmic splitting maps to the value of 1.

Because, as we can see from Figure 1, the cross-correlation coefficient (shown in green) covers a wider range of values and therefore gives a better estimate for each density value, it was chosen as the default method of computing the mutual information. The cross-correlation probably performs better due to the influence of the standard deviation, which the sum of absolute differences (shown in red) completely neglects.


Splitting scheme

  • Linear splitting

The most frequently used splitting scheme for choosing the opacity maps’ position is the linear one. It has been used as the primary splitting scheme in both opacity shadow maps (OSM) and deep opacity maps (DOM).

However, if we look at the light's distribution on real-world translucent objects such as clouds, trees or hair, we can observe that for dense objects the lighting caused by self-shadowing changes only at the very beginning of the object (Figure 1).

Figure 1 Real-world photographs of clouds (a) and bushes (b). It can be observed that for these objects the lighting only changes at the very beginning of the object.

In such cases a linear distribution would produce layers that contain almost the same information from a certain layer onwards (Figure 2). A distribution that would have more layers near the beginning of the object and fewer at the end would probably give better results.

Figure 2 Layers obtained using linear splitting on a scene with a dense model. The last four layers contain almost the same information.

  • Logarithmic splitting

The logarithmic distribution has a slower increase rate and therefore produces a splitting that has a higher density of layers at the beginning of the object (Figure 3).

Figure 3 Comparison between linear and logarithmic distributions. Linear increase, blue, versus logarithmic increase, green (a), linear split (b) and logarithmic split (c).
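To make the difference concrete, here is a small sketch of how the split positions could be computed for the two distributions. The exact logarithmic curve used in the project is not stated, so the log(1 + x)-shaped depth-to-layer mapping assumed below is only illustrative.

```cpp
#include <cmath>
#include <vector>

// Linear split: the layers are evenly spaced between the first (zStart) and
// the last (zEnd) point of the object along the light direction.
std::vector<float> LinearSplits (float zStart, float zEnd, int numLayers)
{
  std::vector<float> splits (numLayers);
  for (int i = 0; i < numLayers; ++i)
    splits[i] = zStart + (zEnd - zStart) * (i + 1.0f) / numLayers;
  return splits;
}

// Logarithmic split: inverting a log(1 + x)-shaped depth-to-layer mapping
// places the first splits close to zStart (thin slabs) and the last ones far
// apart (thick slabs), i.e. a higher density of layers at the beginning.
std::vector<float> LogarithmicSplits (float zStart, float zEnd, int numLayers)
{
  const float e = std::exp (1.0f);
  std::vector<float> splits (numLayers);
  for (int i = 0; i < numLayers; ++i)
  {
    float t = (i + 1.0f) / numLayers;                        // layer fraction
    float depthFraction = (std::exp (t) - 1.0f) / (e - 1.0f);
    splits[i] = zStart + (zEnd - zStart) * depthFraction;
  }
  return splits;
}
```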

Obtaining layers that have different shadow information prevents artifacts like the ones shown in Figure 4.

Figure 4 Difference in rendering when using linear splitting (a) and logarithmic splitting (b). Linear splitting (a) produces incorrect self-shadows because most of the layers contain the same information (Figure 2).

  • Hybrid split

Although logarithmic splitting produces good results on dense objects, it doesn't perform well on sparse objects, because the lighting caused by self-shadowing changes throughout the entire length of the object (Figure 5).

Figure 5 Real-world photographs of clouds (a) and trees (b). It can be observed that for sparse objects the lighting changes throughout the entire length of the object.

The rendering artifacts that occur when logarithmic splitting is performed on sparse objects can be seen in Figure 6.

Figure 6 Difference in rendering when using logarithmic splitting (a) and linear splitting (b). Logarithmic splitting (a) produces artifacts: the willow is incorrectly lit near the top, because the layers don’t uniformly cover the whole length of the sparse object.

However, because the linear splitting scheme can be used when the logarithmic one fails and vice versa, a hybrid split between the two, chosen based on the given scene, should produce artifact-free renderings. More on this hybrid split in the next post.

Deep Opacity Maps in Crystal Space

Deep Opacity Maps (DOM) represent a nice way of removing artifacts caused by the linear splitting in Opacity Shadow Maps (OSM).

The novelty of DOM is that they align the opacity maps (layers) with the hair geometry by first computing a depth map and then using this information as an offset for the linear splitting performed in a later rendering pass. In the following picture, taken from [DOM], you can see the difference between the splitting in OSM (a) and in DOM (b).
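A small sketch of the per-point layer selection this implies: the depth map value shifts the start of the linear split so the layers hug the geometry. The layerSize parameter and the function name are assumptions for illustration, not the actual DOM or Crystal Space code.

```cpp
#include <algorithm>

// Pick the opacity layer for a point at light-space depth z, offsetting the
// linear split by the depth map value (the distance from the light to the
// first surface at that pixel). layerSize is an assumed per-scene constant.
int DeepOpacityLayer (float z, float depthMapValue, float layerSize,
                      int numLayers)
{
  float offset = z - depthMapValue;        // distance behind the first surface
  int layer = (int) (offset / layerSize);  // linear split starting at the depth map
  return std::max (0, std::min (numLayers - 1, layer));
}
```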

The advantage of using DOM is that visual artifacts do not occur even when using just a few layers, because the splitting is aligned with the geometry and therefore follows the light distribution. Next is a comparison between the renderings obtained in Crystal Space with OSM using 60 layers (a) and DOM using 16 layers (b):

However, one major disadvantage of DOM is that even though they explicitly specify a starting split position for each point (via the depth map), no information about the stopping split position is given whatsoever. This can create a lot of difficulties when trying to make the implementation work with objects of different sizes and shapes. Using either a constant or the length of the object measured at a particular point to obtain the distance between two consecutive splits is too restrictive and thus fails.

This is why for the remaining time of the project I plan to extend DOM to compute and use information from a depth map containing the depth of the farthest points (the stopping splitting positions) as well. I also want to experiment with different splitting schemes, apart from the linear one, and add support for opaque objects, possibly by using the information provided by the depth map.

Opacity Shadow Maps artifacts

Opacity shadow maps can suffer from severe artifacts if not “enough” maps are generated. The artifacts are caused by the fact that new points, without shadow information, are introduced by each opacity map. Below you can see the grass from CS rendered using only 15 opacity maps (the diagonal lines perpendicular to the light’s direction are the artifacts, in case they weren’t obvious enough 🙂 ):

The limit is 15 because, at the moment, the textures are passed as an array of sampler2D and only 16 (15 opacity + 1 material) textures can be used in one shader on my video card, NVIDIA GeForce 9500M GS. However, 4 times more maps can be generated if every channel of every texture is used, yielding 60 maps (but only 14 textures):
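The packing itself boils down to splitting a map index into a texture index and a channel index, roughly as in the sketch below (illustrative names, not the actual Crystal Space code):

```cpp
// With four opacity maps packed into one RGBA texture, a map index selects
// both the texture in the sampler2D array and the channel inside it.
struct PackedMapLocation
{
  int texture;   // index into the sampler2D array
  int channel;   // 0 = R, 1 = G, 2 = B, 3 = A
};

PackedMapLocation LocateOpacityMap (int mapIndex)
{
  PackedMapLocation loc;
  loc.texture = mapIndex / 4;   // four maps share one texture
  loc.channel = mapIndex % 4;
  return loc;
}
```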

As can be seen from the picture above, the artifacts are now slightly less visible (there are more of them, but they are smaller), so increasing the number of maps is one way of trying to remove these artifacts. Another (smarter) way is to align the opacity maps with the geometry, using information from a depth map, as described in deep opacity maps (DOM). This is what I plan to implement for the second part of GSoC. Here is how DOM should improve the rendering: