
Photo-Realistic Real-time Face Rendering

Semester project

LGG Laboratory, EPFL

Daniel Chappuis

Supervisors: Dr. Thibaut Weise, Sofien Bouaziz


Professor : Dr. Mark Pauly
January 7, 2011
Contents

1 Introduction
  1.1 Previous work
2 Real-time Skin Rendering
  2.1 Theory of subsurface scattering
    2.1.1 Skin Surface Reflectance
    2.1.2 Skin Subsurface Reflectance
    2.1.3 Diffusion Profiles
    2.1.4 Approximating Diffusion Profiles
  2.2 Skin Rendering Algorithm
    2.2.1 Texture-Space Diffusion
    2.2.2 Overview of the algorithm
    2.2.3 Rendering Irradiance in Texture-Space
    2.2.4 Blurring the Irradiance Texture
  2.3 Specular lighting
  2.4 Shadows
  2.5 Modified Translucent Shadow Map
  2.6 Energy conservation
  2.7 Gamma correction
  2.8 The Final Skin Shader
3 Environment lighting
  3.1 Spherical harmonics
    3.1.1 Properties of Spherical Harmonics
  3.2 Spherical harmonics for environment lighting
  3.3 Spherical harmonics and occlusions
  3.4 Rotation of Spherical Harmonics Coefficients
4 Results
  4.1 Conclusion
  4.2 Future work
  4.3 Rendered images
A The Skin Rendering Application
  A.1 About the Application
  A.2 How to use the application
Chapter 1

Introduction

Skin rendering is an important topic in Computer Graphics. Many virtual simulations and video games contain virtual humans. To make their faces look realistic, we need to pay particular attention to skin rendering, because we are very sensitive to the appearance of skin. Nowadays, modern 3D scanning technology can produce very detailed meshes and textures for the face. The difficulty with skin rendering is that we need to model subsurface scattering effects. Subsurface scattering occurs when light enters the skin surface, scatters, gets partially absorbed, and finally exits the skin somewhere else. Handling this effect correctly is essential for rendering photo-realistic faces. Offline techniques that simulate skin subsurface scattering already exist and give very realistic-looking skin, but this comes at a cost: rendering can take seconds or minutes.

With their work [7], Eugene d'Eon, David Luebke, and Eric Enderton introduced a technique to approximate subsurface scattering of skin in real time on the GPU. They obtained very realistic results.

The goal of this project is to implement their algorithm in order to simulate subsurface scattering of skin. I also compare the results obtained with different parameters of the algorithm, and I have additionally implemented diffuse environment lighting with occlusions.

1.1 Previous work


The first really important work on rendering translucent materials was that of Jensen et al. [16] in 2001. Then, in [10], Gosselin et al. approximated subsurface scattering to render skin. However, their technique used an approximation of subsurface scattering that was not based on a single- or multi-layered translucent material model: they used a Gaussian smoothing, which is not physically based, and the result was therefore not very realistic. In 2005, with their work [3], Donner and Jensen showed that realistic rendering of skin requires correctly modeling multi-layered subsurface scattering. They used a three-layer skin model and obtained very realistic results with a Monte Carlo renderer, but the computation took 5 minutes. Their result is shown in figure 1.1.

Then, in 2007, with their work [7], Eugene d'Eon, David Luebke and Eric Enderton approximated the scattering diffusion profiles of Donner and Jensen with a weighted sum of six Gaussian functions. Thanks to the nice properties of Gaussian filters, they were able to render very realistic human skin in real time on the GPU. You can see the result in figure 1.2.

Figure 1.1: Skin rendering obtained by Donner and Jensen

Figure 1.2: Real-time skin rendering obtained by d'Eon, Luebke and Enderton

Environment maps are commonly used in Computer Graphics to render a reflective surface so that it mirrors its surrounding environment. But this only covers specular reflection; it is much more difficult to use an environment map for diffuse reflection, because at each point on the surface we need to gather diffuse light from every possible direction of the environment map. This means computing an integral for each point on the surface, which cannot be done efficiently in real time. In 2001, however, Ravi Ramamoorthi and Pat Hanrahan [17] introduced a way to approximate diffuse lighting from an environment map with spherical harmonics. After a precomputation over the environment map, diffuse lighting from the environment can be computed in real time.

Chapter 2

Real-time Skin Rendering

This chapter covers the main part of the project: the algorithm for rendering skin with subsurface scattering.

2.1 Theory of subsurface scattering


As we have seen, skin rendering is quite difficult because we need to take care of subsurface scattering. If we don't, the skin will look very hard and dry. To see this, compare figure 2.1, where subsurface scattering is not used, with figure 2.2, which shows the result I have obtained with the subsurface scattering algorithm.

Figure 2.1: Skin rendering without subsurface scattering

As you can see, taking subsurface scattering into account is very important in order to obtain a realistic skin appearance. Subsurface scattering is the process by which light goes beneath the surface of the skin, scatters and gets partially absorbed, and then exits the skin surface somewhere else. This gives the skin its translucent appearance.

Now, we will try to understand how light interacts with skin. I will first discuss the reflectance of light at the surface of the skin and then the subsurface reflectance.

2.1.1 Skin Surface Reflectance


Figure 2.2: Skin with subsurface scattering

Figure 2.3 shows the model of skin used by Donner and Jensen in [8]. The skin is mainly made of three parts. The first one is the surface, which is quite thin and also oily. Just under the surface, we find the epidermis layer and the dermis layer. The epidermis and the dermis are mainly responsible for subsurface scattering. For the moment, we will focus on the reflection at the surface of the skin.

Approximately 6 percent of the incident light reflects directly at the surface of the skin without being colored. This is the result of a Fresnel effect with the topmost layer of the skin, which is quite oily. This reflection is not a perfect mirror reflection, because the surface of the skin is also rough (as you can see on the right of figure 2.3); an incident ray is therefore not reflected in a single direction. The reflectance at the surface can be modeled with a specular bidirectional reflectance distribution function (BRDF), as is quite common in Computer Graphics. But we cannot use a simple model like Blinn-Phong, because it is not physically based and does not accurately approximate the specular reflection of skin. For this reason, we will use a physically-based specular BRDF. This function is explained in section 2.3.

Figure 2.3: Surface reflectance

2.1.2 Skin Subsurface Reflectance


The light that is not directly reflected at the skin surface enters the subsurface layers. There it is partially absorbed, which gives the skin its color, and scattered many times before returning and exiting the surface in a neighborhood of the initial entry point. This process is illustrated in figure 2.4. Note that light can even pass completely through thin regions like the ears.

It is important to understand that light is not absorbed and scattered in the same way in the different layers (epidermis and dermis) of the skin (see figure 2.4). The model we use here is composed of three layers (oily surface layer, epidermis and dermis). Real skin is actually much more complex; as explained in [14], the epidermis alone contains five different layers. But Donner and Jensen have shown in [3] that a single-layer model is not sufficient for skin rendering and that a three-layer model is a good choice. Note that the subsurface scattering algorithm we will use does not handle single scattering effects. Single scattering means that a ray scatters beneath the surface only once. Ignoring it is a reasonable approximation for skin, but it would not be a good choice for materials like marble or smoke, for which single scattering is very important.

Figure 2.4: Subsurface scattering in epidermis and dermis

2.1.3 Diffusion Profiles


To better understand how to simulate subsurface scattering, we need to introduce the notion of a diffusion profile. A diffusion profile is an approximation of how light scatters under the surface of a highly scattering translucent material. Consider a flat surface in a dark room, and imagine that we illuminate that surface with a very thin white laser beam. We will see a glow around the center point where the laser touches the surface, because some light goes beneath the surface and returns nearby. This is illustrated on the left of figure 2.5. A diffusion profile is exactly the mathematical representation of this experiment: it is a function R(r) (where r is the distance from the glow center) that tells us how much light emerges as a function of the angle and distance from the center of the incident light ray. For uniform materials, the shape of the diffusion profile is the same in every direction. Note that we can have a different diffusion profile for each color channel (the diffusion profile depends on the wavelength of the incident light). The image on the right of figure 2.5 shows the curves of three different diffusion profiles (one per color channel). In this example, we can see that red scatters much farther than blue or green.

Figure 2.5: Diffusion profile


If we knew the exact diffusion profiles describing how light scatters in the skin, we could simulate scattering in the following way: for each point on the skin surface, we collect all the incoming light and spread it around the surface point according to the shape of the diffusion profiles. By doing this for each surface point, we obtain a translucent appearance of the skin, which is exactly what we want. Notice that we don't really need to keep the direction information of the incident rays to simulate subsurface scattering, because in skin the light is diffused so quickly that only the amount of incident light matters.

In 2005, Donner and Jensen presented their three-layer skin model and created the diffusion profiles predicted by their model.

2.1.4 Approximating Diffusion Profiles


A diffusion profile can be a quite complicated function, but Eugene d'Eon, David Luebke, and Eric Enderton have shown in [7] that it can be approximated by a weighted sum of Gaussian functions. Consider G(v, r) to be a Gaussian function of variance v:

G(v, r) = \frac{1}{2\pi v} e^{-r^2 / (2v)}

The constant \frac{1}{2\pi v} is chosen such that G(v, r) doesn't darken or brighten the input image when used for a radial 2D blur (it has unit impulse response). Therefore, the diffusion profile R(r) can be approximated with:

R(r) \approx \sum_{i=1}^{k} w_i \, G(v_i, r)

For instance, figure 2.6 shows the diffusion profile of figure 2.5 approximated by a sum of two or four Gaussian functions. Why choose a sum of Gaussian functions? Because of their very nice properties: a Gaussian kernel is separable and radially symmetric, and convolving two Gaussian functions with each other produces a new Gaussian function. These properties are very useful, mainly for efficiency reasons.

Figure 2.6: Diffusion profile (dipole) approximated by a sum of Gaussian functions

In [7], they have found that six Gaussian functions are needed to correctly approximate the three-layer skin model given in [3]. Figure 2.7 shows the parameters of those six Gaussian functions with the corresponding weights, and figure 2.8 plots the corresponding approximation of the diffusion profiles.

Note that the Gaussian weights of each profile sum up to 1. Therefore, by normalizing the profiles to have a white diffuse color, we make sure that the result after scattering is white on average. We can then use an albedo color texture to define the color of the skin.
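As a concrete illustration, here is a minimal GLSL sketch that evaluates this weighted-sum approximation of R(r) for one radial distance. The variances and RGB weights used below are the values reported in [7] for the three-layer skin model (figure 2.7 of this report should contain the same numbers); treat them as an assumption if they differ from the figure. Array initializers require GLSL 1.20 or later.

// Six Gaussian variances (in mm^2) and RGB weights, as reported in [7]
// (assumed to match figure 2.7 of this report).
const float variances[6] = float[6](0.0064, 0.0484, 0.187, 0.567, 1.99, 7.41);
const vec3 weights[6] = vec3[6](
    vec3(0.233, 0.455, 0.649),
    vec3(0.100, 0.336, 0.344),
    vec3(0.118, 0.198, 0.000),
    vec3(0.113, 0.007, 0.007),
    vec3(0.358, 0.004, 0.000),
    vec3(0.078, 0.000, 0.000));

// Normalized 2D Gaussian of variance v evaluated at radius r
float gaussian(float v, float r) {
    return exp(-r * r / (2.0 * v)) / (2.0 * 3.14159265 * v);
}

// Approximated diffusion profile R(r), one value per color channel
vec3 diffusionProfile(float r) {
    vec3 R = vec3(0.0);
    for (int i = 0; i < 6; i++) {
        R += weights[i] * gaussian(variances[i], r);
    }
    return R;
}

Note that the per-channel weights each sum to 1, which is exactly the white-normalization property mentioned above.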

2.2 Skin Rendering Algorithm


Now that we have seen what subsurface scattering is and how it can be approximated, I will present the implementation of the skin rendering algorithm used in [7].

Figure 2.7: Six Gaussian functions to approximate a three-layer skin model

Figure 2.8: Diffusion profiles of the sum of six Gaussian functions

2.2.1 Texture-Space Diffusion


In 2003, Borshukov and Lewis introduced the texture-space diffusion technique in [9] for The Matrix movies. The goal of this technique is to simulate subsurface scattering efficiently. The idea is first to unwrap the 3D mesh of the head, using its texture coordinates as render coordinates in a 2D texture. Then, the unwrapped 2D texture of the mesh is blurred with a simple convolution, which can be done very efficiently. Any kind of diffusion profile can be used for the convolution. In 2004, Green showed in [12] that texture-space diffusion can be implemented in real time on the GPU.

In [7], they also used texture-space diffusion to implement the six Gaussian convolutions. They additionally incorporated transmission through thin regions like the ears and used stretch correction to obtain a more accurate texture-space diffusion.

2.2.2 Overview of the algorithm


Here is the basic skin rendering algorithm. The following pseudo-code is executed every frame:
1. Render a shadow map for each light source.
2. Render the stretch correction map.
3. Render the irradiance into an off-screen texture (in texture space).
4. For each of the six Gaussian kernels in the diffusion profile approximation: first perform a separable blur pass in the U direction of the off-screen irradiance texture and render the result into a temporary buffer, then perform a separable blur pass in the V direction of the texture in the temporary buffer and keep the resulting texture. After this step, we have six differently blurred versions of the off-screen irradiance texture.
5. Render the mesh in 3D. Here we compute the weighted sum of the six irradiance textures to approximate subsurface scattering at each pixel. We also add the specular reflectance at this point.

2.2.3 Rendering Irradiance in Texture-Space


The first step of the algorithm is to render a shadow map for each light source of the scene. This process is described in section 2.4. Then we need to compute the off-screen irradiance texture (in texture space): we render, into an off-screen texture, the incident irradiance at each pixel of the mesh. Notice that we render the irradiance in texture space by unwrapping the mesh as described in section 2.2.1, using the mesh texture coordinates as render coordinates. At each pixel, we sum the incident diffuse irradiance from all light sources and store it in the irradiance texture. For each light source, we also need to take care of its shadow by looking up in the corresponding shadow map whether the current pixel is in shadow or not (see section 2.4 for more details about shadow mapping).

The code for the irradiance texture rendering is given in the irradianceShader vertex and fragment shaders. This shader also implements energy conservation, which will be described in section 2.6. Figure 2.9 shows the off-screen irradiance texture computed for our mesh.
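The key trick of the unwrapping step is done in the vertex shader. The following is only a minimal sketch of the idea, not the actual irradianceShader vertex shader: it assumes the mesh texture coordinates arrive in gl_MultiTexCoord0 and that the mesh vertices are already expressed in world space (otherwise a model matrix would have to be applied).

// Vertex shader sketch: rasterize the mesh unwrapped in texture space.
varying vec3 worldPosition;
varying vec3 worldNormal;

void main() {
    // Keep world-space data for the irradiance computation in the fragment shader
    worldPosition = gl_Vertex.xyz;               // assumption: mesh given in world space
    worldNormal   = normalize(gl_Normal);

    // Use the UV coordinates as the output position: the [0,1]^2 texture domain
    // is mapped to the [-1,1]^2 clip-space square, so the whole atlas is rasterized.
    vec2 uv = gl_MultiTexCoord0.xy;
    gl_Position = vec4(uv * 2.0 - 1.0, 0.0, 1.0);

    gl_TexCoord[0] = gl_MultiTexCoord0;
}

The fragment shader then evaluates the diffuse lighting (with shadows and energy conservation) at worldPosition and writes it to the off-screen irradiance texture.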

Figure 2.9: Off-screen irradiance texture (computed in texture space)

2.2.4 Blurring the Irradiance Texture


Now that we have computed the irradiance texture, we know the amount of incoming diffuse light at each point of the surface of the mesh. The next step is to simulate subsurface scattering. As we have seen, subsurface scattering is represented by the diffusion profiles, and the diffusion profiles of skin can be approximated with a weighted sum of six Gaussian functions with the parameters and weights given in figure 2.7. Thus, what we have to do is convolve the irradiance with the weighted sum of Gaussian functions. Notice that we use a very nice property of convolution: convolving an image I with a kernel that is a weighted sum of functions G(v_i, r) is the same as taking a weighted sum of images, each of which is the original image convolved with one of the functions:

I * \left( \sum_{i=1}^{k} w_i \, G(v_i, r) \right) = \sum_{i=1}^{k} w_i \left( I * G(v_i, r) \right)

Therefore, what we have to do is convolve the irradiance texture I independently with each of the six Gaussian kernels G(v_i, r) and, at the end, compute the weighted sum of the resulting textures to simulate subsurface scattering according to the skin diffusion profiles. To correctly match the diffusion profiles of skin, we have to use the weights given in figure 2.7 when computing the weighted sum of convolved irradiance textures. Also note that the smallest Gaussian is so narrow that using it to convolve the irradiance texture would not make any visible difference. Therefore, we use the initial irradiance texture as the version convolved with the smallest Gaussian kernel, and we only need to compute five other convolved irradiance textures. Figure 2.10 shows the irradiance texture of figure 2.9 convolved with the largest Gaussian kernel.

Figure 2.10: Irradiance texture convolved with the largest Gaussian kernel

A second nice property of Gaussian kernels is that they are separable. Therefore, when computing the convolution of an image with a Gaussian kernel, we do not have to compute a full 2D convolution, which is quite expensive if the kernel and the image are large. We only need to compute a 1D convolution in the U direction, store the result in a temporary image, and then compute another 1D convolution in the V direction on that temporary image. This greatly reduces the number of operations needed to convolve a 2D image.

The third property of Gaussians that we use is that the convolution of two Gaussian functions is also a Gaussian. Therefore, we can produce each convolved irradiance texture by convolving the result of the previous one, allowing us to convolve the irradiance texture with wider and wider Gaussian kernels without increasing the number of taps at each step.

The result of convolving two Gaussian functions G(v_1, r) and G(v_2, r) with variances v_1 and v_2 is the following:

G(v_1, r) * G(v_2, r) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} G\!\left(v_1, \sqrt{x'^2 + y'^2}\right) \, G\!\left(v_2, \sqrt{(x - x')^2 + (y - y')^2}\right) dx' \, dy' = G(v_1 + v_2, r)

with r = \sqrt{x^2 + y^2}. Therefore, if the previous convolved irradiance texture contains the convolved version I_1 = I * G(v_1, r) (where I is the irradiance texture) and we want to compute I_2 = I * G(v_2, r), we only need to convolve I_1 with G(v_2 - v_1, r).
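In practice, this means the standard deviation handed to each blur pass (the gaussianStd parameter of the blurShader in listing 2.2) is the square root of the variance difference between the current and the previous Gaussian, since the previous pass already contains the narrower blur. The following sketch illustrates this; the variances are the values reported in [7] and are assumed to match figure 2.7.

// Variances (mm^2) of the six Gaussians, as reported in [7].
const float variances[6] = float[6](0.0064, 0.0484, 0.187, 0.567, 1.99, 7.41);

// Standard deviation for incremental blur pass i (i = 1..5): blurring a texture
// that already holds I * G(v[i-1], r) with a Gaussian of variance v[i] - v[i-1]
// yields I * G(v[i], r). Pass 0 is skipped (the unblurred irradiance texture
// stands in for the narrowest Gaussian).
float incrementalStd(int i) {
    return sqrt(variances[i] - variances[i - 1]);
}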

We use a separable seven-tap convolution in each of the U and V directions. The blurring is done in the blurShader vertex and fragment shaders. The same shader is used for blurring in both the U and V directions and takes the blurring direction as a parameter. We first convolve in the U direction, store the result in a temporary texture, and then convolve in the V direction. The seven Gaussian tap weights are the following:

{0.006, 0.061, 0.242, 0.383, 0.242, 0.061, 0.006}


Those weights represent a Gaussian kernel with a standard deviation of 1, considering that the
coordinates of the taps are :

{−3.0, −2.0, −1.0, 0.0, 1.0, 2.0, 3.0}


We use this seven-tap kernel for all the convolutions. The taps are linearly scaled about the
center tap to convolve by any desired Gaussian function. The spacing is scaled by the standard
deviation of the current Gaussian. The listing 2.2 shows the blurShader fragment shader that
performs the Gaussian convolution in one dimension.

Correction of texture-space distortion


Because we are convolving in texture space, we need to take care of UV distortion. What is UV distortion? On a curved surface, the distance between two locations in the texture does not correspond to the distance on the mesh. Therefore, if we do not take care of texture-space distortion, the subsurface scattering on curved surfaces will not be accurate. A simple solution is to compute, at every frame and for each pixel, a stretch value in both the U and V directions (in texture space) and store it in a stretch-map texture. When convolving, we then modulate the spacing of the convolution taps at each point on the surface according to the value in the stretch-map texture.

Listing 2.1 shows how the stretch-map texture is computed. First, we compute the texture-space derivatives of the world-space coordinates, derivu and derivv, in the U and V directions. Then we invert the lengths of those derivatives and multiply them by a stretchscale value so that the inverted values fall in the range [0, 1] and can be stored in an RGB texture.

// Compute the world coordinate changes
vec3 derivu = dFdx(worldCoord);
vec3 derivv = dFdy(worldCoord);

// Compute the stretch values (inverse of the world-space change per texel)
float stretchU = 1.0 / length(derivu);
float stretchV = 1.0 / length(derivv);

// Convert stretch to [0, 1] range
stretchU = stretchU * stretchscale;
stretchV = stretchV * stretchscale;

// Output the stretch in U and V directions
gl_FragColor = vec4(stretchU, stretchV, 0.0, 1.0);

Listing 2.1: Code to compute the stretch-map


Then, in the blurShader fragment shader, we need to look up the corresponding stretch value in the stretch-map texture to correctly scale the Gaussian taps. Listing 2.2 shows the code of the blurShader fragment shader that computes the 1D convolution, using the stretch-map texture to correct for UV distortion.

// Get the stretch value from the stretch map
vec2 stretch = texture2D(stretchMap, texCoord).xy;
vec2 scaleConv = 1.0 / imageDimension;

// Compute the step distance between two taps
vec2 step = scaleConv * blurDirection * stretch * gaussianStd / stretchScale;

// Gaussian weights of the 7 taps
float weights[7];
weights[0] = 0.006; weights[1] = 0.061; weights[2] = 0.242;
weights[3] = 0.383; weights[4] = 0.242; weights[5] = 0.061;
weights[6] = 0.006;

// Compute the coordinate of the first tap
vec2 coords = texCoord - step * 3.0;
vec4 sum = vec4(0.0);

// Tap 0
vec4 tap0 = texture2D(inputTexture, coords);
sum += tap0 * weights[0];
coords += step;

// Tap 1
vec4 tap1 = texture2D(inputTexture, coords);
sum += tap1 * weights[1];
coords += step;

// Tap 2
vec4 tap2 = texture2D(inputTexture, coords);
sum += tap2 * weights[2];
coords += step;

// Tap 3
vec4 tap3 = texture2D(inputTexture, coords);
sum += tap3 * weights[3];
coords += step;

// Tap 4
vec4 tap4 = texture2D(inputTexture, coords);
sum += tap4 * weights[4];
coords += step;

// Tap 5
vec4 tap5 = texture2D(inputTexture, coords);
sum += tap5 * weights[5];
coords += step;

// Tap 6
vec4 tap6 = texture2D(inputTexture, coords);
sum += tap6 * weights[6];

// Store the sum in the output texture
gl_FragColor = sum;

Listing 2.2: Code to convolve an irradiance texture in one direction

Pre-Scatter or Post-Scatter Diffuse Texturing


As we have said before, the color of the skin comes from a diffuse albedo texture. This texture has been created from several photographs of a human subject under diffuse illumination. The diffuse albedo color has to be multiplied by the irradiance from the lights to obtain the correct skin color. Now we have a choice to make. One possibility is to multiply by the diffuse albedo color before the subsurface scattering convolutions, when we compute the irradiance texture. This is called Pre-Scatter texturing. According to [7], a possible drawback of this technique is that it may lose too much of the high-frequency detail. Figure 2.11 shows an image of our human head with only pre-scatter texturing. As we can observe, the result looks a little too smooth.

Figure 2.11: Subsurface scattering with only pre-scatter texturing (without specular reflectance)

The other possibility is to multiply by the diffuse albedo color in the final pass, after the subsurface scattering convolutions, and not when computing the irradiance texture. This is called Post-Scatter texturing. With this technique, the subsurface scattering convolutions are done without any color. High-frequency details are kept, because they are not blurred by the subsurface scattering convolutions. Again, according to [7], the problem with this method is that there is no color bleeding of the skin tones. Figure 2.12 shows an image of our human head with only post-scatter texturing. We can observe that the high-frequency details are kept.

Figure 2.12: Subsurface scattering with only post-scatter texturing (without specular reflectance)

A good solution is to combine pre-scatter and post-scatter texturing. With this technique, a part of the diffuse albedo color is applied in the irradiance texture computation (pre-scatter texturing) and the remaining part is applied at the end, after the subsurface scattering computation (post-scatter texturing). To implement this, we multiply the diffuse irradiance diffuseIrradiance by a portion of the diffuse albedo diffuseAlbedo in the irradiance texture computation as follows:

diffuseIrradiance *= pow(diffuseAlbedo, mixRatio)

where mixRatio is a value in the range [0, 1] representing the amount of pre-scatter versus post-scatter texturing. This multiplication is implemented in the irradianceShader fragment shader. We then apply the remaining part of the diffuse albedo color at the end, in the final pass (after the subsurface scattering computation):

finalDiffuseColor *= pow(diffuseAlbedo, 1.0 - mixRatio)

This computation is implemented in the finalSkinShader fragment shader. For instance, if mixRatio = 1.0, the diffuse albedo color is applied only in the irradiance texture computation, which corresponds to pre-scatter texturing only. According to [7], a good value for mixRatio is 0.5. All the images in this document (except figures 2.11 and 2.12) have been rendered with mixRatio = 0.5.

2.3 Specular lighting


As we have already said, the Phong model is used a lot in Computer Graphics for simulating specular reflectance. But this model is not physically based and therefore does not give very realistic results. For instance, the Phong model does not capture the increased specularity at grazing angles, and it can output more energy than it receives. Therefore, we need to use a physically-based Bidirectional Reflectance Distribution Function (BRDF) for the specular reflectance. We will use the Kelemen/Szirmay-Kalos model and see how to implement it.

In general, a BRDF f_{BRDF}(\omega_i, \omega_o), where \omega_i and \omega_o are respectively the incoming and outgoing light directions, is defined as follows at a given surface point:

f_{BRDF}(\omega_i, \omega_o) = \frac{dL_r(\omega_o)}{dE_i(\omega_i)} = \frac{dL_r(\omega_o)}{L_i(\omega_i) \cos\theta_i \, d\omega_i}   (2.1)

where L is the outgoing radiance, E is the irradiance and \theta_i is the angle between the normal at the given surface point and \omega_i. Therefore, we have:

dL_r(\omega_o) = f_{BRDF}(\omega_i, \omega_o) \cdot L_i(\omega_i) \cos\theta_i \, d\omega_i   (2.2)

Usually, a specular BRDF is written as a specular intensity constant rho_s multiplied by a function of a surface normal vector N, a viewing vector V, a light direction vector L, an index of refraction eta (used for the Fresnel term) and a roughness parameter m. Then, for each light source, the specular reflectance is computed as follows:

specularReflect += lightColor * lightShadow * rho_s
                   * specularBRDF(N, V, L, eta, m) * dot(N, L)

Note that the term dot(N, L) comes from the definition of the BRDF (see the cosine term in equation 2.2).

The Fresnel effect is the fact that the specular reflectance increases at grazing angles. It is important to take this effect into account if we want to obtain a physically plausible result. As in [6], we use Schlick's Fresnel approximation (see [4]), which seems to work quite well for skin. Listing 2.3 shows the code to compute the Fresnel term; it can be found in the finalSkinShader fragment shader. Note that H is the half-vector, V is the viewing vector and F0 is the reflectance at normal incidence. As explained in [6], the value 0.028 should be used for skin.

// Compute the Schlick Fresnel reflectance for specular lighting
float fresnelReflectance(vec3 H, vec3 V, float F0) {
    float base = 1.0 - dot(V, H);
    float exponential = pow(base, 5.0);
    return exponential + F0 * (1.0 - exponential);
}

Listing 2.3: Code to compute the Fresnel term


A specularly reflecting surface does not show its specular highlight as a perfectly sharp reflected image of the light source; objects usually show blurred specular highlights. This can be explained by the existence of microfacets [1]. The idea is to assume that a surface that is not perfectly smooth is made of many very small facets, each of which is a perfect specular reflector. These microfacets have normals that are distributed about the normal of the approximating smooth surface, and the degree to which the microfacet normals differ from the smooth surface normal is determined by the roughness of the surface. There exist several ways to model the distribution of the microfacets. For instance, in the Phong model, the specular highlight k_{spec} is computed as:

k_{spec} = \cos^n(R, V)

where R is the reflection of the light vector L at the given surface point and V is the viewing direction. A more physically based model is the Beckmann distribution. With this model, the specular intensity k_{spec} is computed as:

k_{spec} = \frac{\exp\!\left(-\tan^2(\alpha) / m^2\right)}{\pi m^2 \cos^4(\alpha)}, \qquad \alpha = \arccos(N \cdot H)

where m is the roughness of the material and H is the half-vector between L and V.

Heidrich and Seidel described in 1999 a precomputation strategy to efficiently evaluate a BRDF model (see [18]). The idea is to factor the BRDF into several precomputed 2D textures. In [6], they used a similar method but precomputed a single texture corresponding to the Beckmann distribution function described above, combined with the Schlick Fresnel approximation introduced earlier. The Beckmann distribution is precomputed by rendering a texture that will be accessed by the dot product of N and H and by the roughness m. Listing 2.4 shows how the Beckmann texture is precomputed. This code can be found in the beckmannShader fragment shader.

// Compute the Beckmann PH value
float PHBeckmann(float ndoth, float m) {
    float alpha = acos(ndoth);
    float ta = tan(alpha);
    float val = 1.0 / (m * m * pow(ndoth, 4.0)) * exp(-(ta * ta) / (m * m));
    return val;
}

void main() {
    // Compute the PH value and map it into the range [0, 1] to be stored in the texture
    float PH = 0.5 * pow(PHBeckmann(texCoord.x, texCoord.y), 0.1);
    gl_FragColor = vec4(PH, PH, PH, 1.0);
}

Listing 2.4: Code to precompute the Beckmann texture


The listing 2.5 shows the code to compute the specular BRDF using the Beckmann texture.
This code can be found in the finalSkinShader fragment shader.

// Kelemen/Szirmay-Kalos specular BRDF
float KS_Skin_Specular(vec3 N, vec3 L, vec3 V, float m, float rho_s) {
    float specular = 0.0;
    float ndotl = dot(N, L);
    if (ndotl > 0.0) {
        vec3 h = L + V;
        vec3 H = normalize(h);
        float ndoth = dot(N, H);
        float PH = pow(2.0 * texture2D(beckmann_texture, vec2(ndoth, m)).r, 10.0);
        float F = fresnelReflectance(H, V, 0.028);
        float frSpec = max(PH * F / dot(h, h), 0.0);
        specular = ndotl * rho_s * frSpec;
    }
    return specular;
}

Listing 2.5: Code to compute the specular BRDF


Figures 2.13, 2.14 and 2.15 show the specular reflectance for different values of the roughness m. In [2], a roughness of m = 0.3 is reported as a good average value for the face. It is also possible to paint the roughness values into a map so that the value can vary over the face. All the other images in this document use a roughness of 0.3.

Figure 2.13: Rendering with roughness m = 0.1. Right image is only specular

2.4 Shadows
Obviously, simulating shadows is very important for realistic rendering. As already mentioned, we use shadow mapping to render shadows. Shadow mapping is a classical and widely used technique. A surface point of our object is not in shadow with respect to a given light source if there is no other object between that point and the light source. The idea of shadow mapping is to place the camera at the light source position and render the scene from that viewpoint, but instead of rendering the pixel colors of the objects in the scene, we render the distance d_shadow of each pixel to the light source. This rendering is stored in a texture called a shadow map. The shadow map therefore contains the distances to the light source of all objects visible from that light source. Then, when rendering a given pixel p of our object from the camera view, we compute the distance d_pixel from that pixel p to the light source and look up in the shadow map the distance d_shadow stored for the direction of pixel p as seen from the light source. We compare both distances, and if d_pixel > d_shadow, the pixel p is in shadow and cannot receive light from that light source. Figure 2.16 shows the difference between rendering without and with shadows.

Figure 2.14: Rendering with roughness m = 0.3. Right image is only specular

Figure 2.15: Rendering with roughness m = 0.5. Right image is only specular

We need to generate a shadow map for each light source of the scene. This is done at the beginning of each frame. Then, when computing the irradiance texture, we use the shadow maps to compute a shadow factor for each pixel and light source. The shadow factor for a given pixel and a given light source tells us whether the light source contributes to the irradiance of that pixel.
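Before looking at the filtered version used in the actual shaders (listing 2.6, which relies on a sampler2DShadow and hardware comparison), the following minimal single-tap sketch shows the comparison explicitly with a plain sampler2D. It assumes shadowCoord is the fragment position projected into the light's clip space and already remapped so that, after the perspective divide, xy are texture coordinates and z is a depth in [0, 1]; the constant bias is a hypothetical stand-in for the glPolygonOffset() solution discussed below.

// Minimal single-tap shadow factor: 1.0 = lit, 0.0 = in shadow.
uniform sampler2D shadowMap;   // stores d_shadow, the closest depth seen from the light

float shadowFactor(vec4 shadowCoord) {
    vec3 proj = shadowCoord.xyz / shadowCoord.w;       // perspective divide
    float dShadow = texture2D(shadowMap, proj.xy).r;   // closest surface to the light
    float dPixel  = proj.z;                            // depth of the current pixel
    float bias    = 0.005;                             // hypothetical bias against self-shadowing
    return (dPixel - bias > dShadow) ? 0.0 : 1.0;
}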

Shadow mapping is not so easy in practice because it can generate serious artifacts. Firstly, we need to take care of self-shadowing. Self-shadowing occurs because the comparison between d_pixel and d_shadow is a floating-point comparison, and because of the limited floating-point precision it does not always give the right answer when the two values are very close. This can cause Moiré artifacts, as shown in the left image of figure 2.17. A possible solution to this problem is to add a small bias, for instance using the glPolygonOffset() function of OpenGL.

Figure 2.16: Rendering without shadows (left) and with shadows (right)

Figure 2.17: Shadow mapping artifacts: Moiré patterns due to self-shadowing (left) and aliased shadow edges (right)

A second problem with shadow mapping is the generation of hard shadows and aliased shadow edges (as you can see in the right image of figure 2.17). To correct this problem, we implement a technique called Percentage Closer Filtering (PCF). The idea is to smooth the aliased edges by sampling several surrounding texels in the shadow map along with the center texel and averaging all the shadow factors. I use a 4x4 PCF kernel, which means I sample the 16 texels surrounding the shadow-map texel for which I want to compute the shadow factor. Listing 2.6 shows the code that implements the PCF technique. Notice that we need to compute a shadow factor in the irradianceShader fragment shader to take care of shadows for diffuse lighting, but also in the finalSkinShader fragment shader to obtain shadows for the specular reflectance term.

// Look up the shadow map at a given offset
float lookupShadowMap(vec4 shadowCoord, vec2 offSet, sampler2DShadow shadowMap) {
    return shadow2DProj(shadowMap,
                        shadowCoord + vec4(offSet.x * xPixelOffset * shadowCoord.w,
                                           offSet.y * yPixelOffset * shadowCoord.w,
                                           0.0, 0.0)).x;
}

// Compute a 4x4 PCF average from the shadow map to obtain smooth shadows
float shadowPCF(vec4 shadowCoord, sampler2DShadow shadowMap) {
    float shadow = 0.0;
    float x, y;
    if (shadowCoord.w > 1.0) {
        // Compute a 4x4 average from the shadow map
        for (y = -1.5; y <= 1.5; y += 1.0) {
            for (x = -1.5; x <= 1.5; x += 1.0) {
                shadow += lookupShadowMap(shadowCoord, vec2(x, y), shadowMap);
            }
        }
        shadow /= 16.0;
    }
    return shadow;
}

Listing 2.6: Code for the PCF shadow smoothing

2.5 Modified Translucent Shadow Map


Texture-space diffusion cannot capture light transmitted completely through thin regions such as the ears, because two surface locations can be very close in 3D space but quite distant in texture space. In [7], they modified the Translucent Shadow Map technique introduced by Dachsbacher and Stamminger in [5], which allows an efficient estimate of diffusion through thin regions. In the original Translucent Shadow Map method, the shadow map stores the depth z, the normal and the irradiance of each point on the surface nearest the light source (the light-facing surface). In the modified version used here, we render the depth z and the (u, v) texture coordinates of the light-facing surface. Then, at each point in shadow, we can compute the thickness through the mesh (using the depth z) and access the irradiance texture on the opposite side of the surface (using the texture coordinates (u, v)). This process is illustrated in figure 2.18.

Figure 2.18: Modified Translucent Shadow Map

Now we will see how to compute global scattering, given the modified Translucent Shadow Map corresponding to a light source. Looking at figure 2.19, the goal is to compute, at any shadowed surface point C, the global subsurface scattering using the part of the convolved irradiance texture around point A on the light-facing surface. For each point C, the Translucent Shadow Map contains the distance m and the UV texture coordinate of point A. The scattered light that exits point C is the convolution of the light-facing points by the diffusion profile R, where the distance from point C to each sample is computed individually. Instead, we compute this for point B, because it is easier, and for small angles q, B is close to C. If q is large, the Fresnel and N·L factors make the contribution quite small and hide the error.

Figure 2.19: Distance correction

In order to compute scattering at point B, any sample at distance r away from point A on the light-facing surface has to be convolved by the following:

R(\sqrt{r^2 + d^2}) = \sum_{i=1}^{k} w_i \, G\!\left(v_i, \sqrt{r^2 + d^2}\right) = \sum_{i=1}^{k} w_i \, e^{-d^2 / v_i} \, G(v_i, r)

This formula is very useful, because the points on the light-facing surface have already been convolved with G(v_i, r) (this is what is stored in the convolved irradiance textures). Therefore, the total diffuse light exiting at point C is a sum of k texture lookups (using the texture coordinates (u, v) of point A stored in the Translucent Shadow Map), each weighted by the weight w_i and the exponential term e^{-d^2 / v_i}. Note that the depth m read from the Translucent Shadow Map is corrected by the factor cos(q), because we use a diffusion approximation and the most direct thickness is more applicable. We also compare the surface normals at points A and C, and this correction is only applied if the normals point in opposite directions; linear interpolation is used to blend this correction. To avoid high-frequency artifacts in the Translucent Shadow Map, we store the depth through the surface in the alpha channel of the irradiance texture. When the irradiance texture is blurred during the convolutions, the alpha channel is blurred as well, yielding a blurred version of the depth. We then use the matching version of the depth (the corresponding convolved irradiance texture) when we compute global scattering: for instance, we use the alpha channel of convolved irradiance texture i - 1 for the depth when we compute Gaussian i. Note that we store e^{-const · d} in the alpha channel so that the thickness d can be stored in an 8-bit channel.

When the texture coordinates (u, v) of the point being rendered approach the texture coordinates of the point stored in the Translucent Shadow Map, a double contribution to subsurface scattering can occur. We use linear interpolation so that the Translucent Shadow Map contribution goes to zero for each Gaussian term as the two texture coordinates approach each other. Listing 2.7 shows the code of the irradianceShader fragment shader that computes the thickness through the skin and stores it in the alpha channel of the irradiance texture.

// Compute the thickness through the skin and store it in the texture
float distanceToLight0 = length(light0PositionWorld.xyz - worldPosition);
vec4 TSMTap = texture2D(translucentShadowMap, shadowCoord0.xy / shadowCoord0.w);
vec3 normalBackFace = 2.0 * texture2D(normal_texture, TSMTap.yz).xyz - vec3(1.0);
float backFacingEst = clamp(-dot(normalBackFace, normal), 0.0, 1.0);
float thicknessToLight = distanceToLight0 - TSMTap.x;

// Set a large distance for surface points facing the light
if (NdotL[0] > 0.0) {
    thicknessToLight = 50.0;
}
// To remove artifacts of the shadow map
if (thicknessToLight < 2.0) {
    thicknessToLight = 50.0;
}

float correctedThickness = clamp(-NdotL[0], 0.0, 1.0) * thicknessToLight;
float finalThickness = mix(thicknessToLight, correctedThickness, backFacingEst);
float alpha = exp(finalThickness * -0.1);   // Exponentiate the thickness for storage

// Store the irradiance and the thickness in the texture
gl_FragColor = vec4(diffuse, alpha);

Listing 2.7: Code for computing the thickness through skin and storing it into the texture
The code that computes this part of the global scattering using the thickness through the skin is in the finalSkinShader fragment shader and can be seen in listing 2.11. Figure 2.20 shows the result of rendering with global scattering using the Modified Translucent Shadow Map. Note that in our application there are three positional light sources, but I have applied the Translucent Shadow Map only to the first one. The Translucent Shadow Map can contain some artifacts due to the borders of the 3D mesh (faces at a 90-degree angle with the light direction), where the thickness through the skin is very close to zero. Such artifacts can sometimes be seen, depending on the position of the light source and the orientation of the mesh.

Figure 2.20: Rendering with the Modified Translucent Shadow Map

2.6 Energy conservation
In the skin rendering algorithm, specular and diffuse lighting are treated completely independently. This could be a problem, because the total light energy leaving the surface at a given point could then be larger than the incoming light energy, which would not be physically realistic. Therefore, we need to take care of energy conservation in the algorithm.

The energy available for subsurface scattering is exactly the energy not reflected by the specular BRDF. Therefore, before computing the diffuse lighting and storing it in the irradiance texture, we need to compute the total specular reflectance at the surface point and multiply the diffuse light by the fraction of energy that remains. Note that we need to integrate the specular reflectance over the whole hemisphere to take all viewing directions V into account. Therefore, if f_r(x, \omega_o, L) is the specular BRDF, L is the light vector at surface point x and \omega_o is a viewing direction in the hemisphere about the normal N, then the diffuse light is attenuated by the following factor for each light:

\rho_{dt}(x, L) = 1 - \int_{\Omega} f_r(x, \omega_o, L)\,(\omega_o \cdot N)\, d\omega_o

If we use spherical coordinates, we have:

\rho_{dt}(x, L) = 1 - \int_{0}^{2\pi} \int_{0}^{\pi/2} f_r(x, \omega_o, L)\,(\omega_o \cdot N)\, \sin\theta \, d\theta \, d\phi

Note that this value changes depending on the roughness m at surface point x and on the dot product N · L. The integral above is precomputed for all combinations of roughness values and angles and is stored in a 2D texture, which is accessed in the irradianceShader fragment shader based on m and N · L. Note that this precomputation is an approximation of the integral. It is performed in the energyConservation fragment shader and the code is shown in listing 2.8. Note that the ρ_s factor of the specular BRDF is not taken into account in the precomputation because it is applied later on.

// Integrate the specular BRDF component over the hemisphere
float costheta = texCoord.x;
float pi = 3.14159265358979324;
float m = texCoord.y;
float sum = 0.0;

int numterms = 80;
vec3 N = vec3(0.0, 0.0, 1.0);
vec3 V = vec3(0.0, sqrt(1.0 - costheta * costheta), costheta);

for (int i = 0; i < numterms; i++) {

    float phip = float(i) / float(numterms - 1) * 2.0 * pi;
    float localsum = 0.0;
    float cosp = cos(phip);
    float sinp = sin(phip);

    for (int j = 0; j < numterms; j++) {

        float thetap = float(j) / float(numterms - 1) * pi / 2.0;
        float sint = sin(thetap);
        float cost = cos(thetap);
        vec3 L = vec3(sinp * sint, cosp * sint, cost);
        localsum += KS_Skin_Specular(N, L, V, m, 0.0277778) * sint;
    }

    sum += localsum * (pi / 2.0) / float(numterms);
}

float result = sum * (2.0 * pi) / float(numterms);
gl_FragColor = vec4(result, result, result, 1.0);

Listing 2.8: Code to precompute the energy conservation texture

Now that we have a texture that gives us the term ρ_dt, we can use it in the irradianceShader fragment shader to attenuate the irradiance used for subsurface scattering, keeping only the energy that remains. We have to do this for each light source. The corresponding code is in the irradianceShader fragment shader and is shown in listing 2.9. Note that specIntensity corresponds to the ρ_s factor of the specular BRDF.

float attenuation = specIntensity * texture2D(energyAttenuationTexture,
                                              vec2(NdotL, roughness)).r;
energyFactor = 1.0 - attenuation;
diffuse += shadow * energyFactor * max(NdotL, 0.0) * gl_LightSource[0].diffuse.rgb;

Listing 2.9: Attenuation of irradiance energy


Note that after light has scattered beneath the surface, it must pass through the same rough interface that is modeled with the specular BRDF. Depending on the direction from which we look at the surface (based on N · V), a different quantity of diffuse light will exit. Here we use a diffusion approximation and therefore consider that the exiting light flows equally in all directions as it reaches the surface. We would thus need to compute another integral, this time over a hemisphere of incoming directions from below the surface and a single outgoing direction V. But we do not really have to compute this new integral, because the BRDF is reciprocal (light and camera can be swapped and the reflectance is the same). Thus, we can reuse the same integral to evaluate ρ_dt, but this time we index it with the direction V instead of L. This is applied in the finalSkinShader fragment shader and the corresponding code is shown in listing 2.10.

float attenuation = specIntensity * texture2D(energyAttenuationTexture,
                                              vec2(dot(n, v), roughness)).r;
float finalScale = 1.0 - attenuation;
diffuse *= finalScale;

Listing 2.10: Attenuation of energy of outgoing light


I did not really observe a change after adding energy conservation to the rendering algorithm. According to [7], the effect is very subtle. Moreover, this model is not perfect; for instance, it does not take interreflections into account.

2.7 Gamma correction


When we want to render realistic images, we need to take care of the non-linear behavior of a CRT screen. This non-linearity comes from the fact that the conversion from voltages into light intensities is not linear. The behavior of a CRT screen is represented by the bottom curve in figure 2.21: the monitor's response follows a power law. The exponent is usually called gamma (γ), and we typically have γ = 2.2 (this value can differ from device to device). If x denotes an R, G or B value, the output intensity y_crt(x) of the screen is such that:

y_{crt}(x) = x^{\gamma}

This behavior is usually called gamma behavior. Note that LCD screens do not inherently have this property, but they are built to mimic the behavior of a CRT screen.

To avoid this gamma behavior changing the contrast of our rendered images, we need to correct for it. The basic idea is to apply the inverse of the gamma behavior to our final images just before they are sent to the screen. Therefore, we have to apply the following function to the intensities x of our image before sending it to the screen:

y_{correct}(x) = x^{1/\gamma}

Figure 2.21: Non-linear behavior of a CRT screen and Gamma correction

This is called gamma correction. If we apply this correction to our final image before sending it to the screen, the resulting output is linear and the contrasts in our image are preserved. I have applied this gamma correction at the end of the finalSkinShader (see listing 2.11).

This is not all: many image files (like JPEG, for instance) are pre-corrected (with a gamma of 2.2) so that a user does not have to take care of gamma correction himself when viewing the image on a screen. The intensity values in such an image file are therefore not linear, and we cannot use them directly as an input texture in our subsurface scattering algorithm, because we would then perform subsurface scattering in a non-linear space. This causes problems, as described in [13]; a common one in skin rendering is the appearance of a blue-green glow around the shadow edges. To avoid this effect, we have to apply the inverse of the gamma correction to our texture images before using them. For instance, you can see in the irradianceShader and finalSkinShader fragment shaders that we apply the following transform y(x) before using the albedo texture:

y(x) = x^{2.2}
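A minimal sketch of the two ends of this pipeline, as they could appear in the shaders (variable names other than albedo_texture, texCoord, diffuse and specular are illustrative; the actual irradianceShader and finalSkinShader may organize this differently):

// When sampling the gamma-precorrected albedo texture: convert back to linear space
vec3 albedo = pow(texture2D(albedo_texture, texCoord).rgb, vec3(2.2));

// ... lighting and subsurface scattering are computed in linear space ...

// At the very end of finalSkinShader: gamma-correct the final color for display
vec3 finalColor = diffuse + specular;   // assumed linear-space result
gl_FragColor = vec4(pow(finalColor, vec3(1.0 / 2.2)), 1.0);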

2.8 The Final Skin Shader


Listing 2.11 shows the code of the finalSkinShader fragment shader. The shader computes the weighted sum of Gaussian functions using the 6 convolved irradiance textures. The code also contains a part that computes global scattering using the thickness through the skin, available in the alpha channel of the irradiance textures. The specular reflectance is computed with the specular BRDF we have seen previously. Then, after gamma correction, the final color is written for the final rendering.

// Compute the diffuse lighting (sum of the weighted 6 irradiance textures)
vec3 diffuse = vec3(0.0);
if (isBlurringActive) {
    vec4 irrad1tap = texture2D(diffuse_texture0, texCoord);
    diffuse += irrad1tap.xyz * gauss1w;
    vec4 irrad2tap = texture2D(diffuse_texture1, texCoord);
    diffuse += irrad2tap.xyz * gauss2w;
    vec4 irrad3tap = texture2D(diffuse_texture2, texCoord);
    diffuse += irrad3tap.xyz * gauss3w;
    vec4 irrad4tap = texture2D(diffuse_texture3, texCoord);
    diffuse += irrad4tap.xyz * gauss4w;
    vec4 irrad5tap = texture2D(diffuse_texture4, texCoord);
    diffuse += irrad5tap.xyz * gauss5w;
    vec4 irrad6tap = texture2D(diffuse_texture5, texCoord);
    diffuse += irrad6tap.xyz * gauss6w;

    // Renormalize the diffusion profile to white
    vec3 normConst = gauss1w + gauss2w + gauss3w + gauss4w + gauss5w + gauss6w;
    diffuse /= normConst;

    // If the translucent shadow mapping is active
    if (isTranslucentShadowMapActive) {
        // Compute global scattering from the modified TSM
        // TSMtap = (distance to light, u, v)
        vec4 TSMtap = texture2D(translucentShadowMap, shadowCoord0.xy / shadowCoord0.w);

        // Four average thicknesses through the object (in mm)
        vec4 thickness_mm = 1.0 * -(1.0 / 0.2) * log(vec4(irrad1tap.w, irrad2tap.w,
                                                          irrad3tap.w, irrad4tap.w));

        vec4 stretchTap = texture2D(stretchMap, texCoord);
        float stretchval = 0.5 * (stretchTap.x + stretchTap.y);
        vec4 a_values = vec4(0.433, 0.753, 1.412, 2.722);
        vec4 inv_a = -1.0 / (2.0 * a_values * a_values);
        vec4 fades = exp(thickness_mm * thickness_mm * inv_a);
        float textureScale = 1024.0 * 0.1 / stretchval;
        float blendFactor4 = clamp(textureScale * length(texCoord.xy - TSMtap.yz) / (a_values.y * 6.0), 0.0, 1.0);
        float blendFactor5 = clamp(textureScale * length(texCoord.xy - TSMtap.yz) / (a_values.z * 6.0), 0.0, 1.0);
        float blendFactor6 = clamp(textureScale * length(texCoord.xy - TSMtap.yz) / (a_values.w * 6.0), 0.0, 1.0);
        diffuse += gauss4w / normConst * fades.y * blendFactor4 * texture2D(diffuse_texture3, TSMtap.yz).xyz;
        diffuse += gauss5w / normConst * fades.z * blendFactor5 * texture2D(diffuse_texture4, TSMtap.yz).xyz;
        diffuse += gauss6w / normConst * fades.w * blendFactor6 * texture2D(diffuse_texture5, TSMtap.yz).xyz;
    }
}
else {
    diffuse += texture2D(diffuse_texture0, texCoord).xyz;
}

// Compute the diffuse albedo color from the albedo texture
vec3 diffuseAlbedo = pow(texture2D(albedo_texture, texCoord).rgb, vec3(2.2));
diffuseAlbedo = pow(diffuseAlbedo, vec3(1.0 - mixRatio));

// Post-scattering according to the mix ratio value
if (isAlbedoActive) {
    diffuse *= diffuseAlbedo;
}

// Energy conservation
if (isEnergyConservationActive) {
    float attenuation = specIntensity * texture2D(energyAttenuationTexture,
                                                  vec2(dot(n, v), roughness)).r;
    float finalScale = 1.0 - attenuation;
    diffuse *= finalScale;
}

// Compute the shadow factor for each light source
float shadow[3];
shadow[0] = 1.0;
shadow[1] = 1.0;
shadow[2] = 1.0;
if (isShadowActive) {
    shadow[0] = shadowPCF(shadowCoord0, shadowMap0);
    shadow[1] = shadowPCF(shadowCoord1, shadowMap1);
    shadow[2] = shadowPCF(shadowCoord2, shadowMap2);
}

// Compute the specular reflectance
vec3 specular = vec3(0.0);
if (isSpecularActive) {
    if (isLight0Active) specular += shadow[0] * gl_LightSource[0].specular.rgb
        * KS_Skin_Specular(n, L[0], v, roughness, specIntensity);
    if (isLight1Active) specular += shadow[1] * gl_LightSource[1].specular.rgb
        * KS_Skin_Specular(n, L[1], v, roughness, specIntensity);
    if (isLight2Active) specular += shadow[2] * gl_LightSource[2].specular.rgb
        * KS_Skin_Specular(n, L[2], v, roughness, specIntensity);
}

// Apply the gamma correction
vec3 finalColor = pow(diffuse + specular, vec3(1.0 / gamma));

// Render the final pixel color
gl_FragColor = vec4(finalColor, 1.0);

Listing 2.11: Code for the final skin fragment shader

Chapter 3

Environment lighting

Instead of using a certain number of light sources to illuminate a scene, it can be much more realistic to use environment lighting. The idea is to use the light coming from the whole environment of the scene. Usually we can do this using environment mapping, where we have a texture (a cube or a sphere) that contains the light coming from each direction around the object. Environment maps are very often used for specular lighting, for rendering objects that reflect the environment. The problem is that it is quite expensive to compute the diffuse irradiance at a given point of the surface of an object if the environment map is large.

Consider that we have an environment map with k texels. Each texel can be thought of as being a single light source. Therefore, for each surface point of the object, the diffuse component can be computed as follows:

diffuse = surfaceAlbedo · Σ_{i=1..k} lightColor_i · max(0, L_i · N)

where L_i is the light direction of texel i and N is the surface normal. It is obvious that if the number of texels k is large, computing this sum for each point of the surface of the object is really expensive.
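
To make the cost of this brute-force sum explicit, here is a small C++ sketch of the computation for a single surface point; the EnvTexel structure and the function name are hypothetical and only serve as an illustration:

#include <algorithm>
#include <cstddef>
#include <vector>

// One texel of the environment map, treated as a single directional light source.
struct EnvTexel {
    float dir[3];    // light direction L_i (unit length)
    float color[3];  // light color of the texel
};

// Brute-force diffuse lighting for a single surface point: one sum over all k texels.
// Doing this for every visible surface point is what makes the naive approach too slow.
void diffuseFromEnvMap(const std::vector<EnvTexel>& envMap,
                       const float albedo[3], const float normal[3],
                       float outDiffuse[3]) {
    outDiffuse[0] = outDiffuse[1] = outDiffuse[2] = 0.0f;
    for (const EnvTexel& t : envMap) {
        float cosTerm = std::max(0.0f, t.dir[0] * normal[0] +
                                       t.dir[1] * normal[1] +
                                       t.dir[2] * normal[2]);
        for (std::size_t c = 0; c < 3; ++c)
            outDiffuse[c] += t.color[c] * cosTerm;
    }
    for (std::size_t c = 0; c < 3; ++c)
        outDiffuse[c] *= albedo[c];
}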

More generally, this is the same as computing the irradiance E with the integral over the upper hemisphere Ω(N) of light directions ω as follows:

E(N) = ∫_Ω(N) L(ω) (N · ω) dω        (3.1)

where N is the surface normal and L(ω) is the amount of light coming from the direction ω. It is not possible to compute such an integral for each surface point of our object in real-time.

We will use a technique based on Spherical Harmonics to approximate the diffuse lighting coming from an environment map and to compute the diffuse irradiance at each surface point very efficiently. This technique was introduced in 2001 by Ravi Ramamoorthi and Pat Hanrahan in their work [17]. First, we will introduce the concept of spherical harmonics.

3.1 Spherical harmonics


First, consider that we have a 1D function f(x) and we want to compute an approximation f̃ of that function with a weighted sum of basis functions b_i(x). Therefore, we want to have:

f̃(x) = Σ_i c_i b_i(x)        (3.2)

where c_i is the weight of the basis function b_i. To find those weights, we need to project the original function f(x) onto each basis function b_i(x). We can do this by computing the integrals:

c_i = ∫ f(x) b_i(x) dx

Then, by using equation 3.2, we can compute an approximation of the original function f(x).
But now, take a look at equation 3.1, which computes the irradiance over a hemisphere. We can consider the function L(ω) as a function over the surface of the sphere. How can we use the technique we have just seen with a 1D function to approximate a lighting function f(s) over the 2D surface of a sphere S? To do this, we can use Spherical Harmonics. The spherical harmonics are basis functions defined over the surface of the sphere. Robin Green has written a very nice document [11] that explains the theory of spherical harmonics.

Consider the standard parameterization of points on the surface of a unit sphere into spherical coordinates:

x = sin θ cos φ,   y = sin θ sin φ,   z = cos θ

The spherical harmonics basis functions y_l^m(θ, φ) are defined by:

y_l^m(θ, φ) =  √2 K_l^m cos(mφ) P_l^m(cos θ)      if m > 0
               √2 K_l^m sin(−mφ) P_l^{−m}(cos θ)   if m < 0
               K_l^0 P_l^0(cos θ)                   if m = 0

where the P_l^m are the associated Legendre polynomials and K_l^m is a scaling factor given by:

K_l^m = sqrt( (2l + 1)(l − |m|)! / (4π (l + |m|)!) )
In the following, we will use the flattened spherical harmonics basis functions y_i(θ, φ) defined by:

y_i(θ, φ) = y_l^m(θ, φ)   where i = l(l + 1) + m, with l a non-negative integer and −l ≤ m ≤ l
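
As a concrete illustration, the following C++ sketch evaluates these nine basis functions (l ≤ 2) for a unit direction (x, y, z), using the commonly tabulated polynomial forms that follow from the definition above (the numerical constants are the values of the K_l^m factors); the function name is only illustrative:

// Evaluate the first nine (third order, l <= 2) spherical harmonics basis functions
// y_i for a unit direction (x, y, z), ordered by the flattened index i = l(l+1) + m.
void shBasis9(float x, float y, float z, float out[9]) {
    // l = 0
    out[0] = 0.282095f;                         // y_0^0
    // l = 1
    out[1] = 0.488603f * y;                     // y_1^-1
    out[2] = 0.488603f * z;                     // y_1^0
    out[3] = 0.488603f * x;                     // y_1^1
    // l = 2
    out[4] = 1.092548f * x * y;                 // y_2^-2
    out[5] = 1.092548f * y * z;                 // y_2^-1
    out[6] = 0.315392f * (3.0f * z * z - 1.0f); // y_2^0
    out[7] = 1.092548f * x * z;                 // y_2^1
    out[8] = 0.546274f * (x * x - y * y);       // y_2^2
}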
Now, the idea is to approximate the diffuse lighting function f(s) over the surface of the sphere S with a weighted sum of the spherical harmonics basis functions y_l^m(θ, φ). Therefore, we need to find the corresponding weights (or spherical harmonics coefficients) c_l^m. As we have seen before for a 1D function, it is quite simple to find the coefficients c_l^m: we just need to integrate the product of the function f and the spherical harmonics basis function y_l^m. Therefore, we have:

c_l^m = ∫_S f(s) y_l^m(s) ds        (3.3)

But we need to compute this integral numerically, and therefore we can use Monte Carlo integration. Monte Carlo integration allows us to compute the integral of a function f(x) as follows:

∫ f(x) dx ≈ (1/N) Σ_{j=1..N} f(x_j) w(x_j)

where N is the number of samples f(x_j) of the function f that we have and w(x_j) is given by:

w(x_j) = 1 / p(x_j)

where p(x) is the probability distribution function of the samples.

Therefore, if we consider again equation 3.3, we have, using spherical coordinates:

c_i = ∫_0^{2π} ∫_0^{π} f(θ, φ) y_i(θ, φ) sin θ dθ dφ

If we choose the samples for Monte Carlo integration such that they are unbiased with respect to the surface of the sphere, each sample has equal probability of appearing anywhere on the surface of the sphere, which gives us the probability function:

p(x_j) = 1 / (4π)

Therefore, using Monte Carlo integration, if we have the samples f(x_j), we can compute the spherical harmonics coefficients as follows:

c_i = (4π / N) Σ_{j=1..N} f(x_j) y_i(x_j)

Then we can compute an n-th order approximation f̃ of the function f with:

f̃(s) = Σ_{l=0..n−1} Σ_{m=−l..l} c_l^m y_l^m(s) = Σ_{i=0..n²−1} c_i y_i(s)        (3.4)

Note that for an n-th order approximation we need n² spherical harmonics coefficients.
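
As a minimal CPU-side sketch of this projection and reconstruction (for a third order approximation), the following code uses uniformly distributed sphere samples and reuses the shBasis9 helper from the previous sketch; the function f to be projected is passed as a callback, since the actual sampling of an environment map is not shown here:

#include <algorithm>
#include <cmath>
#include <cstdlib>
#include <functional>

// shBasis9() is the helper from the previous sketch.
void shBasis9(float x, float y, float z, float out[9]);

// Project a scalar function f(direction) defined on the unit sphere onto the first
// nine SH basis functions with Monte Carlo integration (equation 3.3).
void projectOntoSH9(const std::function<float(float, float, float)>& f,
                    int numSamples, float coeffs[9]) {
    for (int i = 0; i < 9; ++i) coeffs[i] = 0.0f;

    for (int j = 0; j < numSamples; ++j) {
        // Uniform sample on the sphere: z uniform in [-1, 1], phi uniform in [0, 2*pi)
        float z = 2.0f * (std::rand() / (float)RAND_MAX) - 1.0f;
        float phi = 2.0f * 3.1415927f * (std::rand() / (float)RAND_MAX);
        float r = std::sqrt(std::max(0.0f, 1.0f - z * z));
        float x = r * std::cos(phi);
        float y = r * std::sin(phi);

        float basis[9];
        shBasis9(x, y, z, basis);
        float sample = f(x, y, z);
        for (int i = 0; i < 9; ++i) coeffs[i] += sample * basis[i];
    }

    // c_i = (4*pi / N) * sum_j f(x_j) y_i(x_j)
    float scale = 4.0f * 3.1415927f / (float)numSamples;
    for (int i = 0; i < 9; ++i) coeffs[i] *= scale;
}

// Reconstruction at a direction (x, y, z): f~(s) = sum_i c_i y_i(s) (equation 3.4)
float reconstructSH9(const float coeffs[9], float x, float y, float z) {
    float basis[9];
    shBasis9(x, y, z, basis);
    float value = 0.0f;
    for (int i = 0; i < 9; ++i) value += coeffs[i] * basis[i];
    return value;
}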

3.1.1 Properties of Spherical Harmonics


Spherical harmonics have some very nice properties. For instance, the spherical harmonics functions are rotationally invariant, meaning that if a function g is a rotated copy of a function f (with rotation R), then after the spherical harmonics projection we have:

g̃(s) = f̃(R(s))

This is useful because it means that by using spherical harmonics functions we can guarantee that when we animate scenes, move lights or rotate the objects, the intensity of the lighting will not fluctuate or show any artifacts.

The other very nice property is that integrating the product of two spherical harmonics functions is the same as evaluating the dot product of their coefficients. Consider two spherical harmonics functions f̃(s) and g̃(s); we have:

∫_S f̃(s) g̃(s) ds = Σ_{i=0..n²−1} f_i g_i

where f_i and g_i are the spherical harmonics coefficients of the functions f and g.

3.2 Spherical harmonics for environment lighting


Now that we have seen the theory of spherical harmonics, we need to see how to use them for environment lighting. As we have said, the environment light irradiance (represented by an environment map) can be seen as a 2D function L(s) over the surface of the sphere S. The previous section tells us how to project such a function onto the spherical harmonics basis functions y_i to get the corresponding spherical harmonics coefficients c_i. For a given environment map, we can precompute this projection offline to get the coefficients c_i, and then at rendering time we can evaluate equation 3.4 to approximate the irradiance at a given point of the surface of our object. The more spherical harmonics coefficients we use, the better the approximation. But that doesn't help a lot so far. What is very interesting is that in [17], the authors have shown that a third order approximation is enough to approximate the diffuse irradiance, because the diffuse irradiance is a quite low frequency function. Remember that for a third order approximation, we only need 9 spherical harmonics coefficients.

This is really useful, because we can precompute only 9 spherical harmonics coefficients offline for a given environment map, and at rendering time we just have to evaluate equation 3.4 with n = 3, which is much more efficient than evaluating an integral such as the one in equation 3.1 for each surface point of the object.

In [17], the authors have computed the spherical harmonics coefficients for a number of light probe environment maps that can be found at http://ict.debevec.org/~debevec/Probes/. Notice that each spherical harmonics coefficient is separated into the three color channels, resulting in a total of 27 coefficients for a given environment map.

3.3 Spherical harmonics and occlusions


The main issue with the environment lighting technique we have just seen is that we did not take occlusions into account. Rendering with occlusions is really important to obtain a realistic result. The good news is that we can extend what we have seen before about spherical harmonics to take occlusions into account. Consider that at each vertex we have a visibility function V(ω) that is 1 if the point on the surface of the sphere S in the direction ω is visible from that vertex and 0 if it is occluded. Therefore, at each point on the surface of our object we want to compute the diffuse irradiance E, which is given by:

E = ∫_S L(ω) V(ω) (N · ω) dω

We can rewrite this as:

E = ∫_S L(ω) g(ω) dω

where g(ω) = V(ω)(N · ω). Now, we can see that L is a function that depends only on the environment map and that g is a function that depends only on the geometry of the mesh. Thus, at each point of the surface of our object, we need to compute this integral of the product of the functions L and g. But we have seen in section 3.1 that integrating the product of two spherical harmonics functions is equivalent to evaluating the dot product of their spherical harmonics coefficients. Therefore, we can compute (by projection) the spherical harmonics coefficients L_i and g_i, and then at rendering time, for each point of the surface of the object, we only have to compute:

E = ∫_S L(ω) g(ω) dω = Σ_{i=0..n²−1} L_i g_i        (3.5)

The coefficients L_i come from the projection of the environment lighting function onto the spherical harmonics basis functions. They are the same 9 coefficients of the previous section that can be precomputed given an environment map.

We can compute the other spherical harmonics coefficients g_i by projecting the function g onto the spherical harmonics basis functions. Notice that this function is different at each point on the surface of the object, because it depends on the normal vector N and the visibility function V(ω). Projecting the function g will also give us 9 spherical harmonics coefficients g_i. Therefore, we would need to compute those 9 coefficients g_i at each point of the surface of our object. What we can do instead is to approximate this by computing those coefficients only at each vertex of the mesh (which is a good approximation if our head model contains a large number of faces). This projection is precomputed offline for our head object.

How can we precompute the 9 coefficients g_i for a given vertex of the mesh? Remember that g(ω) is a function defined on the surface of a sphere for each vertex. Because we want to project this function onto the spherical harmonics basis using Monte Carlo integration, we have to generate some samples on the surface of the sphere. Then, for each sample direction ω_j, we need to evaluate g(ω_j), which means we need to compute the dot product N · ω_j, where N is the normal of the vertex, and we also have to evaluate the visibility term V(ω_j) at the current vertex. To do this, I have used the idea found in [15], which consists of placing the camera at the vertex position and rendering the mesh into a cubemap (the cubemap is cleared in white and the mesh is rendered in black). Thus, at each vertex, we have a way to evaluate the visibility term V(ω_j): we only have to look up the cubemap in the direction ω_j; if the cubemap texel in this direction is white, the vertex is not occluded in this direction, and if it is black, the vertex is occluded by another part of the mesh. For each vertex and for each sample direction ω_j, we send the values N, ω_j and the cubemap to a fragment shader that computes the evaluation of the sample g(ω_j). Listing 3.1 shows the corresponding code, which can be found in the shOcclusionShader fragment shader.

// Get the sample light direction from the texture
vec3 sampleDir = texture2D(texSampleDir, gl_TexCoord[0].st).rgb;

// Get the texture coordinate for the occlusion information
vec2 indirection = textureCube(texCBOIndirection, sampleDir).ra;

// Get the information whether the vertex is occluded or not
float isOccluded = texture2D(texCBO, indirection).r;

// Store the occlusion term and the dot product of light direction and normal
gl_FragColor = vec4(isOccluded * dot(sampleDir, normalDir));

Listing 3.1: Code to evaluate the function g(ω_j) at a given sample direction ω_j
Now we can evaluate the function g at each vertex for a certain number of sample directions ω_j, which allows us to compute, by Monte Carlo integration, the projection of the function g onto the spherical harmonics basis functions and to obtain the 9 coefficients g_i at each vertex of the mesh.
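
The following C++ sketch illustrates this per-vertex projection, under the assumption that the value g(ω_j) = V(ω_j) max(0, N · ω_j) for each sample direction has already been read back from the shOcclusionShader output and that the sample directions are uniformly distributed on the sphere; the structure and function names are only illustrative:

#include <vector>

// shBasis9() is the helper from the earlier sketch.
void shBasis9(float x, float y, float z, float out[9]);

// One sample for a given vertex: the direction w_j and the value g(w_j) written by
// the shOcclusionShader pass.
struct GSample {
    float dir[3];   // unit sample direction w_j
    float gValue;   // V(w_j) * max(0, N . w_j), read back from the shader output
};

// Monte Carlo projection of g onto the SH basis for one vertex, assuming uniformly
// distributed sample directions: g_i = (4*pi / N) * sum_j g(w_j) y_i(w_j)
void computeVertexGCoeffs(const std::vector<GSample>& samples, float gCoeffs[9]) {
    for (int i = 0; i < 9; ++i) gCoeffs[i] = 0.0f;
    for (const GSample& s : samples) {
        float basis[9];
        shBasis9(s.dir[0], s.dir[1], s.dir[2], basis);
        for (int i = 0; i < 9; ++i) gCoeffs[i] += s.gValue * basis[i];
    }
    float scale = 4.0f * 3.1415927f / (float)samples.size();
    for (int i = 0; i < 9; ++i) gCoeffs[i] *= scale;
}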

Because the coefficients L_i and g_i are precomputed offline, at rendering time we only have to compute, at each point of the surface of the object, the dot product of equation 3.5 to obtain the diffuse irradiance coming from the environment lighting. This is done when computing the irradiance in the irradianceShader vertex shader. Listing 3.2 shows the corresponding code. Notice that the lighting coefficients L_i contain color information, and therefore we have one spherical harmonics coefficient L_i per color channel, which gives us 27 coefficients in total. The spherical harmonics coefficients g_i don't contain color information, and therefore we have nine coefficients per vertex.

// Compute the environment lighting color
envColor = vec3(0.0);
envColor += vertexSHCoeff1_2_3.r * envmapSHCoeffL00;
envColor += vertexSHCoeff1_2_3.g * envmapSHCoeffL1m1;
envColor += vertexSHCoeff1_2_3.b * envmapSHCoeffL10;
envColor += vertexSHCoeff4_5_6.r * envmapSHCoeffL11;
envColor += vertexSHCoeff4_5_6.g * envmapSHCoeffL2m2;
envColor += vertexSHCoeff4_5_6.b * envmapSHCoeffL2m1;
envColor += vertexSHCoeff7_8_9.r * envmapSHCoeffL20;
envColor += vertexSHCoeff7_8_9.g * envmapSHCoeffL21;
envColor += vertexSHCoeff7_8_9.b * envmapSHCoeffL22;

Listing 3.2: Code to compute the environment lighting


Figure 3.1 shows the environment lighting in our application. The environment lighting is done using the spherical harmonics coefficients obtained in [17] by precomputing the environment light probe of the Grace Cathedral from http://ict.debevec.org/~debevec/Probes/. The left image shows the environment lighting without occlusions, and the right image shows it with occlusions. As you can see, the occlusions allow us to obtain a much better result near the ears, the eyes and around the nose. We can also observe the shadow behind the ear. The approximation of computing the coefficients g_i only at the vertices is quite good for our mesh.

Figure 3.1: Environment lighting without occlusions (left) and with occlusions (right)

3.4 Rotation of Spherical Harmonics Coefficients


As we have said before, the spherical harmonics functions are rotationally invariant. Notice that if we rotate the mesh, we also need to rotate the spherical harmonics coefficients of the light L_i so that the environment lighting remains correct. How can we rotate the spherical harmonics coefficients?

We can use the technique from [11]: the idea is to represent our 9 spherical harmonics coefficients by a 9 × 1 vector C and to multiply it by a 9 × 9 rotation matrix R_SH(α, β, γ) that depends on the rotation Euler angles α, β and γ. We can decompose the R_SH matrix using the ZYZ formulation. It means that first we rotate about the Z axis, then around the rotated Y axis and finally around the rotated Z axis. With this formulation, we can generate rotations around any possible orientation. The rotation of an angle β around the Y axis can itself be decomposed as a rotation X_{+90} of 90 degrees around the X axis, a rotation Z_β of an angle β around the Z axis, and a rotation X_{−90} of −90 degrees around the X axis. Therefore, we have:

R_SH(α, β, γ) = Z_γ Y_β Z_α = Z_γ X_{−90} Z_β X_{+90} Z_α

Therefore, we only need the three rotation matrices Z_θ, X_{+90} and X_{−90}. The Z_θ matrix can be computed as follows, given an angle θ:

Z_θ =
[ 1     0       0     0       0        0       0     0       0       ]
[ 0     cos θ   0     sin θ   0        0       0     0       0       ]
[ 0     0       1     0       0        0       0     0       0       ]
[ 0    −sin θ   0     cos θ   0        0       0     0       0       ]
[ 0     0       0     0       cos 2θ   0       0     0       sin 2θ  ]
[ 0     0       0     0       0        cos θ   0     sin θ   0       ]
[ 0     0       0     0       0        0       1     0       0       ]
[ 0     0       0     0       0       −sin θ   0     cos θ   0       ]
[ 0     0       0     0      −sin 2θ   0       0     0       cos 2θ  ]

and the matrix X_{+90} is defined by:
X_{+90} =
[ 1   0   0   0   0    0    0      0    0     ]
[ 0   0  −1   0   0    0    0      0    0     ]
[ 0   1   0   0   0    0    0      0    0     ]
[ 0   0   0   1   0    0    0      0    0     ]
[ 0   0   0   0   0    0    0     −1    0     ]
[ 0   0   0   0   0   −1    0      0    0     ]
[ 0   0   0   0   0    0   −1/2    0   −√3/2  ]
[ 0   0   0   0   1    0    0      0    0     ]
[ 0   0   0   0   0    0   −√3/2   0    1/2   ]

The last matrix, X_{−90}, is the transpose of X_{+90}. Therefore, given the three Euler angles of rotation of our mesh α, β and γ, we can compute the rotation matrix R_SH and multiply it by the spherical harmonics coefficients vector C to obtain a new 9 × 1 vector that contains the rotated spherical harmonics coefficients of the environment lighting. Notice that we should use the fact that the matrices Z_θ, X_{−90} and X_{+90} are quite sparse to make the rotation computation more efficient.
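
To make the decomposition concrete, the following C++ sketch builds the matrices given above and applies the composed rotation to the coefficient vector; it is a direct (dense) implementation that does not exploit the sparsity mentioned above, and the function names are only illustrative:

#include <cmath>
#include <cstring>

// 9x9 matrices stored row-major, for a third order SH coefficient vector.
using Mat9 = float[9][9];

// Build the Z_theta matrix given above.
void makeZRotation(float theta, Mat9 m) {
    std::memset(m, 0, sizeof(Mat9));
    float c = std::cos(theta), s = std::sin(theta);
    float c2 = std::cos(2.0f * theta), s2 = std::sin(2.0f * theta);
    m[0][0] = 1.0f;
    m[1][1] = c;   m[1][3] = s;
    m[2][2] = 1.0f;
    m[3][1] = -s;  m[3][3] = c;
    m[4][4] = c2;  m[4][8] = s2;
    m[5][5] = c;   m[5][7] = s;
    m[6][6] = 1.0f;
    m[7][5] = -s;  m[7][7] = c;
    m[8][4] = -s2; m[8][8] = c2;
}

// Build the constant X_{+90} matrix given above.
void makeXPlus90(Mat9 m) {
    std::memset(m, 0, sizeof(Mat9));
    const float h = 0.5f, r = std::sqrt(3.0f) * 0.5f;
    m[0][0] = 1.0f;
    m[1][2] = -1.0f;
    m[2][1] = 1.0f;
    m[3][3] = 1.0f;
    m[4][7] = -1.0f;
    m[5][5] = -1.0f;
    m[6][6] = -h;  m[6][8] = -r;
    m[7][4] = 1.0f;
    m[8][6] = -r;  m[8][8] = h;
}

void transpose(const Mat9 a, Mat9 out) {
    for (int i = 0; i < 9; ++i)
        for (int j = 0; j < 9; ++j) out[j][i] = a[i][j];
}

void multiply(const Mat9 a, const Mat9 b, Mat9 out) {
    for (int i = 0; i < 9; ++i)
        for (int j = 0; j < 9; ++j) {
            float sum = 0.0f;
            for (int k = 0; k < 9; ++k) sum += a[i][k] * b[k][j];
            out[i][j] = sum;
        }
}

// Rotate the 9 SH light coefficients by the Euler angles (alpha, beta, gamma) using
// R_SH = Z_gamma * X_{-90} * Z_beta * X_{+90} * Z_alpha.
void rotateSHCoeffs(float alpha, float beta, float gamma,
                    const float in[9], float out[9]) {
    Mat9 za, zb, zg, xp, xm, t1, t2, t3, r;
    makeZRotation(alpha, za);
    makeZRotation(beta, zb);
    makeZRotation(gamma, zg);
    makeXPlus90(xp);
    transpose(xp, xm);       // X_{-90} is the transpose of X_{+90}
    multiply(xp, za, t1);    // X_{+90} * Z_alpha
    multiply(zb, t1, t2);    // Z_beta * X_{+90} * Z_alpha
    multiply(xm, t2, t3);    // X_{-90} * Z_beta * X_{+90} * Z_alpha
    multiply(zg, t3, r);     // Z_gamma * ...
    for (int i = 0; i < 9; ++i) {
        out[i] = 0.0f;
        for (int j = 0; j < 9; ++j) out[i] += r[i][j] * in[j];
    }
}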

Chapter 4

Results

4.1 Conclusion
I have implemented the skin rendering algorithm introduced in [7] on the GPU. It seems to work quite well. There are still some artifacts with the global scattering using the Modified Translucent Shadow Map. The shadow mapping also wasn't easy to implement, because of the several artifacts that arise when using shadow maps. Finally, I think that the rendered images I have obtained are quite realistic.

During this project, I learned a lot about physically-based rendering, especially about subsurface scattering, shadow mapping, translucent shadow maps and environment lighting using spherical harmonics. I've also improved my skills in GPU programming.

4.2 Future work


Now, we will see what can be done in the future to improve or extend the application.

As I have already said earlier, for the current application we use only a constant roughness
parameter m over the entire face. Instead, we can use a texture map in which the roughness
parameter is stored and is allowed to vary over the face. This would give a more realistic result
because it would be more physically plausible.

Texture seams can generate problems for texture-space diffusion, because connected regions on the 3D mesh are disconnected in texture space and cannot easily diffuse light between them. Indeed, the empty regions of the texture will blur onto the mesh along each seam edge, which causes artifacts in the final rendering. A solution to this problem is described in [6]. The idea is to use a map or alpha value to detect when the irradiance textures are being accessed in a region near a seam (or an empty space). When such a place is detected, the subsurface scattering computation is turned off and the diffuse reflectance is replaced with an equivalent local computation. The amount of artifacts depends on the texture parameterization of the mesh. It can be important to take care of the seam artifacts, because if they are strongly visible, the rendering will not look very realistic.

Another way to improve the rendering a lot would be to use High Dynamic Range (HDR) rendering. With HDR rendering, the realism of the scene can be improved, especially when the scene contains very glossy specular highlights. The current application uses the GLUT library for managing the OpenGL window. Using another windowing library could allow us to use more sample buffers for multisampling. The more sample buffers we use, the better we can deal with antialiasing of the scene. Therefore, by using more sample buffers, we could increase the visual quality of the application.

4.3 Rendered images
Here are some renderings that I have obtained with the application.

Figure 4.1: Rendering with three light sources and the environment lighting from the Uffizi Gallery light probe

Figure 4.2: Rendering with only one positional light source

Figure 4.3: Rendering with two light sources and the environment lighting from the Grace Cathedral
light probe

Figure 4.4: Rendering with two light sources and the environment lighting from the Uffizi Gallery light probe

Figure 4.5: Rendering with environment lighting from the Uffizi Gallery light probe with a girl mesh

Figure 4.6: Rendering with two light sources and environment lighting from the Uffizi Gallery light probe

Appendix A

The Skin Rendering Application

A.1 About the Application


With the application, it is possible to load the mesh that I used during this project and to render it using the subsurface scattering algorithm. The application contains three positional light sources. It is also possible to activate the environment lighting with the spherical harmonics coefficients computed in [17] for the Uffizi Gallery light probe from http://ict.debevec.org/~debevec/Probes/. The spherical harmonics coefficients for the occlusions at each vertex of the mesh have already been precomputed and are available in the file meshRoyaltyShCoeff.txt. But it is also possible to precompute them and write them to a file using the application.

The parameters of the skin rendering algorithm, like the roughness parameter m or the mixRatio for pre-scatter texturing, can be changed easily and can be found in the PhotoRealistic.hhp file.

The code for precomputing the spherical harmonics coefficients for the occlusions at each vertex can be found in the Mesh3D.cpp file.

A.2 How to use the application


To run the application, use the following command :
./photorealistic dataDirectory/ [computeVertexSHCoeffs]

where dataDirectory is the directory where the data needed by the application (the mesh, the textures, . . . ) are stored. The second parameter is optional and can be either y (yes) or n (no). If this parameter is y, the spherical harmonics coefficients for the occlusions at each vertex are precomputed when the application starts and are then used to render the scene while the application is running. By default, those coefficients are not computed at the beginning of the application but are loaded from a file.

For instance, we can run the application with the command :


./photorealistic data/

While the application is running, it is possible to use the following commands :


key 0 : enable/disable the light source 0.
key 1 : enable/disable the light source 1.
key 2 : enable/disable the light source 2.
key a : move the light source 0 to the left around the vertical axis.

key d : move the light source 0 to the right around the vertical axis.
key w : move the light source 0 up around the horizontal axis.
key s : move the light source 0 down around the horizontal axis.
key e : enable/disable the environment lighting.
key z : enable/disable the specular lighting.
key o : enable/disable the shadows.
key b : enable/disable subsurface scattering.
key y : enable/disable the Translucent Shadow Map for global scattering.
key h : rotate the mesh around the vertical axis.
key j : rotate the mesh around the horizontal axis.
mouse : use the mouse to rotate the camera around the mesh.

Bibliography

[1] Beckmann distribution. http://en.wikipedia.org/wiki/Specular_highlight#Beckmann_distribution.

[2] NVIDIA demo team secrets: Advanced skin rendering - GDC 2007 demo. http://developer.download.nvidia.com/presentations/2007/gdc/Advanced_Skin.pdf.

[3] Craig Donner and Henrik Wann Jensen. Light diffusion in multi-layered translucent materials. ACM Trans. Graph., 2005.

[4] Christophe Schlick. A customizable reflectance model for everyday rendering. 1993.

[5] Carsten Dachsbacher and Marc Stamminger. Translucent shadow maps. 2004.

[6] Eugene d'Eon and David Luebke. Advanced Techniques for Realistic Real-Time Skin Rendering, GPU Gems 3. 2008.

[7] Eugene d'Eon, David Luebke, and Eric Enderton. Efficient rendering of human skin. 2007.

[8] Craig Donner and Henrik Wann Jensen. A spectral BSSRDF for shading human skin. 2006.

[9] George Borshukov and J. P. Lewis. Realistic human face rendering for The Matrix Reloaded. 2003.

[10] D. Gosselin, P. V. Sander, and J. L. Mitchell. Real-time texture-space skin rendering. 2004.

[11] Robin Green. Spherical harmonic lighting: The gritty details. 2003.

[12] Simon Green. Real-time approximations to subsurface scattering. GPU Gems, 2004.

[13] Larry Gritz and Eugene d'Eon. The Importance of Being Linear, GPU Gems 3. 2007.

[14] Guillaume Poirier. Human skin modeling and rendering. 2004.

[15] Sebastien Hillaire. Spherical harmonics lighting. http://sebastien.hillaire.free.fr/demos/sh/sh.htm.

[16] Henrik Wann Jensen, Stephen R. Marschner, Marc Levoy, and Pat Hanrahan. A practical model for subsurface light transport. 2001.

[17] Ravi Ramamoorthi and Pat Hanrahan. An efficient representation for irradiance environment maps. 2001.

[18] Wolfgang Heidrich and Hans-Peter Seidel. Realistic, hardware-accelerated shading and lighting. 1999.
