https://developer.nvidia.com/gpugems/GPUGems/gpugems_ch17.html
The real-time computer graphics community has recently started to appreciate the increase in realism that comes from illuminating objects with complex light distributions from environment maps, rather than using a small number of simple light sources. In the real world, light arrives at surfaces from all directions, not just from the handful of directions to a few point or directional light sources, and this noticeably affects their appearance. A variety of techniques have recently been developed to capture real-world illumination (such as on movie sets) and to use it to render objects as if they were illuminated by the light from the original environment, making it possible to merge computer graphics more seamlessly with real scenes. For completely synthetic scenes, these techniques can be applied to improve the realism of rendered images by rendering an environment map of the scene and using it to light characters and other objects inside the scene. Rather than using the map just for perfect specular reflection, these techniques use it to compute lighting for glossy and diffuse surfaces as well.
This chapter describes a simple technique for real-time environment lighting. It is limited to diffuse surfaces, but it is efficient enough for real-time use. Furthermore, this method accurately accounts for shadows due to geometry occluding the environment from the point being shaded. Although the shading values that this technique computes have a number of sources of possible errors compared to some of the more complex techniques recently described in research literature, the technique is relatively easy to implement. (To make it work, you don’t have to understand and implement a spherical harmonics library!) The approach described here gives excellent results in many situations, and it runs interactively on modern hardware.
This method is based on a view-independent preprocess that computes occlusion information with a ray tracer and then uses this information at runtime to compute a fast approximation to diffuse shading in the environment. This technique was originally developed by Hayden Landis (2002) and colleagues at Industrial Light & Magic; it has been used on a number of ILM’s productions (with a non-real-time renderer!).
17.1 Overview
The environment lighting technique we describe has been named ambient occlusion lighting. One way of thinking of the approach is as a “smart” ambient term that varies over the surface of the model according to how much of the external environment can be seen at each point. Alternatively, one can think of it as a diffuse term that efficiently supports a complex distribution of incident light. We will stick with the second interpretation in this chapter.
The basic idea behind this technique is that if we preprocess a model, computing how much of the external environment each point on it can see versus how much of the environment has been occluded by other parts of the model, then we can use that information at rendering time to compute the value of a diffuse shading term. The result is that the crevices of the model are realistically darkened, and the exposed parts of the model realistically receive more light and are thus brighter. The result looks substantially more realistic than if a standard shading model had been used.
This approach can be extended to use environment lighting as the source of illumination, where an environment map that represents incoming light from all directions is used to determine the color of light arriving at each point on the object. For this feature, in addition to recording how much of the environment is visible from points on the model, we also record from which direction most of the visible light is arriving. These two quantities, which effectively define a cone of unoccluded directions out into the scene, can be used together to do an extremely blurred lookup from the environment map to simulate the overall incoming illumination from a cone of directions of interest at a point being shaded.
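As a concrete illustration of how the two precomputed quantities combine at shading time, here is a minimal sketch. The function `blurred_env_lookup` is a hypothetical stand-in for a prefiltered (blurred) environment-map fetch over a cone of directions; the mapping from accessibility to cone angle is an assumed convention, not the chapter's exact formula.

```python
import math

def diffuse_environment_term(accessibility, bent_normal, blurred_env_lookup):
    """Approximate the diffuse environment lighting at a shaded point.

    accessibility      -- precomputed unoccluded fraction of the hemisphere (0..1)
    bent_normal        -- precomputed average unoccluded direction (unit vector)
    blurred_env_lookup -- hypothetical callable (direction, cone_half_angle) -> RGB
    """
    # Assumed convention: a fully open hemisphere (accessibility == 1)
    # corresponds to a 90-degree cone around the bent normal.
    cone_half_angle = accessibility * (math.pi / 2.0)
    env_color = blurred_env_lookup(bent_normal, cone_half_angle)
    # Scale the blurred lookup by the unoccluded fraction of the hemisphere.
    return tuple(accessibility * c for c in env_color)
```

With a constant environment, the term simply darkens in proportion to occlusion, which matches the intuition that crevices receive less environment light.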
17.2 The Preprocessing Step
Given an arbitrary model to be shaded, this technique needs to know two things at each point on the model:
(1) the “accessibility” at the point: what fraction of the hemisphere above that point is unoccluded by other parts of the model; and
(2) the average direction of unoccluded incident light. Figure 17-1 illustrates both of these ideas in 2D.
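The two quantities above can be estimated with a straightforward Monte Carlo ray-casting preprocess. The sketch below assumes a hypothetical `occluded(point, direction)` predicate supplied by the ray tracer; it is an illustrative version of the idea, not ILM's exact implementation.

```python
import math
import random

def sample_hemisphere(normal):
    """Uniformly sample a unit direction in the hemisphere above `normal`
    (rejection-sample the unit ball, normalize, flip into the hemisphere)."""
    while True:
        d = [random.uniform(-1.0, 1.0) for _ in range(3)]
        len2 = sum(c * c for c in d)
        if 0.0 < len2 <= 1.0:
            inv = 1.0 / math.sqrt(len2)
            d = [c * inv for c in d]
            if sum(a * b for a, b in zip(d, normal)) < 0.0:
                d = [-c for c in d]  # flip below-surface samples upward
            return d

def accessibility_and_bent_normal(point, normal, occluded, num_rays=256):
    """Estimate (1) the unoccluded fraction of the hemisphere and
    (2) the average unoccluded direction ("bent normal") at `point`.
    `occluded` is a hypothetical ray-cast predicate: (point, dir) -> bool."""
    unoccluded_count = 0
    avg = [0.0, 0.0, 0.0]
    for _ in range(num_rays):
        d = sample_hemisphere(normal)
        if not occluded(point, d):
            unoccluded_count += 1
            avg = [a + c for a, c in zip(avg, d)]
    accessibility = unoccluded_count / num_rays
    length = math.sqrt(sum(c * c for c in avg))
    bent = [c / length for c in avg] if length > 0.0 else list(normal)
    return accessibility, bent
```

In practice this estimate is computed per vertex (or per texel of an occlusion map) in a view-independent pass, so its cost is paid once per model rather than per frame.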
Figure 17-1 Computing Accessibility and an Average Direction
Given a point P on the surface with normal N, here roughly two-thirds of the hemisphere above P is occluded by other geometry in the scene, while one-third is unoccluded. The average direction of incoming light is denoted by B, and it is somewhat to the right of the normal direction N. Loosely speaking, the average color of incident light at P could be found by averaging the incident light from the cone of unoccluded directions around the B vector.
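The geometry of Figure 17-1 can be checked numerically with a small 2-D version of the same computation. The occluder layout below (only the third of the hemicircle nearest the right-hand tangent is open, with angles measured from the surface tangent) is an assumed stand-in for the figure, chosen so that two-thirds of the directions above P are blocked.

```python
import math

def accessibility_2d(occluded, num_rays=3000):
    """2-D analogue of the preprocess: fraction of the hemicircle above P
    (angles 0..pi from the tangent) that is unoccluded, plus the average
    unoccluded direction as a unit 2-D vector (x right, y along the normal)."""
    open_angles = []
    for i in range(num_rays):
        theta = math.pi * (i + 0.5) / num_rays
        if not occluded(theta):
            open_angles.append(theta)
    if not open_angles:
        return 0.0, None
    mean = sum(open_angles) / len(open_angles)
    return len(open_angles) / num_rays, (math.cos(mean), math.sin(mean))

# Block everything except angles in (0, pi/3): two-thirds of directions occluded.
acc, b = accessibility_2d(lambda theta: theta > math.pi / 3.0)
```

Here `acc` comes out to one-third, and `b` points well to the right of the normal (0, 1), mirroring how B leans toward the unoccluded region in the figure.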