
Preface
For just about as long as there has been graphics hardware, there has been programmable
graphics hardware. Over the years, building flexibility into graphics hardware designs has been
a necessary way of life for hardware developers. Graphics APIs continue to evolve, and because
a hardware design can take two years or more from start to finish, the only way to guarantee a
hardware product that can support the then-current graphics APIs at its release is to build in
some degree of programmability from the very beginning.
Until recently, the realm of programming graphics hardware belonged to just a few people,
mainly researchers and graphics hardware driver developers. Research into programmable
graphics hardware has been taking place for many years, but the point of this research has not
been to produce viable hardware and software for application developers and end users. The
graphics hardware driver developers have focused on the immediate task of providing support
for the important graphics APIs of the time: PHIGS, PEX, Iris GL, OpenGL, Direct3D, and so on.
Until recently, none of these APIs exposed the programmability of the underlying hardware, so
application developers have been forced into using the fixed functionality provided by traditional graphics APIs.
Hardware companies have not exposed the programmable underpinnings of their products
because of the high cost of educating and supporting customers to use low-level, device-specific
interfaces and because these interfaces typically change quite radically with each new
generation of graphics hardware. Application developers who use such a device-specific
interface to a piece of graphics hardware face the daunting task of updating their software for
each new generation of hardware that comes along. And forget about supporting the application on hardware from multiple vendors!
As we moved into the 21st century, some of these fundamental tenets about
graphics hardware
were challenged. Application developers pushed the envelope as never before and demanded a
variety of new features in hardware in order to create more and more sophisticated onscreen
effects. As a result, new graphics hardware designs became more programmable than ever
before. Standard graphics APIs were challenged to keep up with the pace of hardware
innovation. For OpenGL, the result was a spate of extensions to the core API as hardware
vendors struggled to support a range of interesting new features that their customers were
demanding.
The creation of a standard, cross-platform, high-level shading language for commercially
available graphics hardware was a watershed event for the graphics industry. A paradigm shift
occurred, one that took us from the world of rigid, fixed functionality graphics hardware and
graphics APIs to a brave new world where the visual processing unit, or VPU (i.e., graphics
hardware), is as important as the central processing unit, or CPU. The VPU is optimized for
processing dynamic media such as 3D graphics and video. Highly parallel processing of floating-
point data is the primary task for VPUs, and the flexibility of the VPU means that it can also be
used to process data other than a stream of traditional graphics commands. Applications can
take advantage of the capabilities of both the CPU and the VPU, using the strengths of each to
optimally perform the task at hand.
This book describes how graphics hardware programmability is exposed through a high-level
language in the leading cross-platform 3D graphics API: OpenGL. This language, the OpenGL
Shading Language, lets applications take total control over the most important stages of the
graphics processing pipeline. No longer restricted to the graphics rendering algorithms and
formulas chosen by hardware designers and frozen in silicon, software developers are beginning
to use this programmability to create stunning effects in real time.
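To give a flavor of that programmability, here is a minimal sketch of a fragment shader written in the OpenGL Shading Language. The uniform name SurfaceColor is an illustrative placeholder rather than an identifier defined later in this book, and a complete application would also provide a vertex shader and set the uniform through the OpenGL API.

    // A minimal OpenGL Shading Language fragment shader (illustrative sketch).
    // SurfaceColor is a placeholder uniform supplied by the application.
    uniform vec3 SurfaceColor;

    void main()
    {
        // The fragment color is no longer fixed in silicon; it is whatever
        // this shader computes for each fragment.
        gl_FragColor = vec4(SurfaceColor, 1.0);
    }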