Bump Mapping

What Is Bump Mapping?

Bump Mapping is a special sort of per-pixel lighting (remember: standard OpenGL lighting is per-vertex; a color is computed for each vertex and then interpolated across the triangle, quad, …). The common lighting models (for example the OpenGL lighting model) use the normal to calculate a lighting color. Normally (with per-vertex lighting) this normal is provided just like the vertex’ coordinate or texture coordinate. The idea behind Bump Mapping is to use a different normal for each pixel (rather than for each vertex).

But Why Do That?

Imagine you have a flat surface (like a triangle or a quad). The normal of this surface is the same at every point on it. Using different normals across the surface makes it look “bumpy” instead of perfectly flat. But remember, we are only able to draw flat primitives like triangles or quads, so Bump Mapping is just a “fake” technique for rendering bumpy surfaces.

But Why Use Bump Mapping?

Bump Mapping is supported in hardware on the GeForce 256 (and up) and the Radeon 7200 (and up). It can be performed in texture environment stages (see the implementation below) and is a very inexpensive way to render a more beautiful world.

Bump Mapping - How Does It Work?

There are several different bump mapping techniques:

DP3 Bump Mapping   The most widely used bump mapping technique.
Gloss Bump Mapping   A technique which doesn’t need DP3 support in hardware, but gives quite bad results.
Cross Plane Bump Mapping   A technique developed by Yuriy V. Miroshnik (see reference [6]).

In this article I’ll cover DP3 Bump Mapping.

First let’s take a look at the OpenGL lighting equation:

<img src="extras/article20/figure1.jpg">

Quite confusing, eh? :-)

But we are only interested in the middle part, the diffuse part (if you don’t know how OpenGL lighting works, take a look at http://www.cs.tcd.ie/courses/baict/bass/4ict10/Hillary2003/pdf/Lecture2_9Jan.pdf).

<img src="extras/article20/figure2.jpg">

n is the vertex’ normal
V is the vertex’ coordinate
P is the coordinate of light i
VP is the vector from the vertex’ coordinate to the light i
dcm is the diffuse color of the material
ddi is the diffuse color of light i

Remember that the vector from the vertex’ coordinate to the light and the vertex normal both have to be normalized.

We have heard that DP3 Bump Mapping is performed in hardware using texture environments. But how does this work with our lighting equation above? Modern 3D cards support a new texture environment extension: ARB_texture_env_dot3. This extension requires ARB_texture_env_combine and adds a new texture combiner operation: DOT3_RGB_ARB (and DOT3_RGBA_ARB, but this is less important).

First Of All: What Are Texture Environments?

A texture environment describes how a sampled texel of a certain texture unit is combined with the other values. There is one texture environment for each available texture unit. When a primitive is rendered (and texturing is enabled), the texture environment of the first texture unit is computed and the result is passed on to the next texture environment. Normally all texture environments are set to GL_MODULATE, which means that the result of the previous texture unit (at the first texture unit, the primary color set by glColor or the color calculated by lighting is used) is multiplied by the texel of the current texture unit. The result of the last active texture unit is the color of our pixel.

Here is an example:
The primary color is set to (0.5, 1.0, 1.0)
The texel of the first texture unit is (1.0, 1.0, 0.0)
The texel of the second texture unit is (0.0, 1.0, 1.0)
All texture environments are set to GL_MODULATE

So the output of the first texture environment is (0.5, 1.0, 1.0) * (1.0, 1.0, 0.0) = (0.5, 1.0, 0.0)
The output of the second texture environment is (0.5, 1.0, 0.0) * (0.0, 1.0, 1.0) = (0.0, 1.0, 0.0)
So our pixel has a color of (0.0, 1.0, 0.0) = green
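This chain is easy to verify on the CPU. Here is a minimal sketch of what GL_MODULATE does per stage (the helper name is my own, not an OpenGL function):

```c
/* Component-Wise Multiplication: Exactly What GL_MODULATE Does Per Channel. */
void modulate(const float a[3], const float b[3], float out[3])
{
    for (int i = 0; i < 3; i++)
        out[i] = a[i] * b[i];
}
```

Feeding the primary color through both stages with the texels above reproduces the green result from the example.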

You can also perform much more complex environment operations using ARB_texture_env_combine (see http://oss.sgi.com/projects/ogl-sample/registry/ARB/texture_env_combine.txt).

Before we return to Bump Mapping there’s another thing which is important:

Normalization Cube Maps

A cube map is a special form of texture. To be exact, a cube map consists of six 2D textures, which represent the six faces of a cube (that’s why they are called cube maps… :-). Given a 3D vector as a texture coordinate, a cube map returns the texture value at the point where this vector intersects the unit cube. Normally cube maps are used for view-independent reflections. But a big advantage of cube maps is that this vector doesn’t have to be normalized! So by creating a special cube map which contains normalized vectors, you can easily normalize a vector by passing it as a texture coordinate. See reference [4] for how to calculate a normalization cube map.

So, now back to our Bump Mapping problem:

How Can We Use This Knowledge With OpenGL To Perform DP3 Bump Mapping?

First a simple list what we need:

  • A simple texture for the material texture (a smiley or whatever you want! :-)
  • A bump map texture. Here we want the bump map to represent normalized normal vectors, where r is the x value, g is the y value and b is the z value of this vector. But rgb values are of course in a range from 0 to 1, not -1 to +1, so the x, y and z values are increased by 1 and then divided by 2 to get the rgb values.
  • A normalization cube map (as explained before).
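The range compression for the bump map normals and its inverse can be sketched like this (the helper names are mine; the Dot3 combiner undoes the compression in hardware by computing 4 * (Arg0 - 0.5) . (Arg1 - 0.5)):

```c
/* Range Compression: Pack A Normal Component From -1..+1 Into 0..1
   So It Can Be Stored As An rgb Value In The Bump Map Texture. */
float pack_component(float n)   { return (n + 1.0f) / 2.0f; }

/* ...And Unpack It Again, Back Into -1..+1. */
float unpack_component(float c) { return c * 2.0f - 1.0f; }
```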


In this simple example we want to implement a simple version of the diffuse lighting equation (where . is the dot product and * is a multiplication):

result_color = (normalized_vector_from_surface_to_light . normal_of_the_surface) * material_texture

Assuming we have at least 3 free texture units (GeForce 3 and up, Radeon 8500 and up) we can do the following:

// Set The First Texture Unit To Normalize Our Vector From The Surface To The Light.
// Set The Texture Environment Of The First Texture Unit To Replace It With The
// Sampled Value Of The Normalization Cube Map.
glActiveTexture(GL_TEXTURE0);
glEnable(GL_TEXTURE_CUBE_MAP);
glBindTexture(GL_TEXTURE_CUBE_MAP, our_normalization_cube_map);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);

// Set The Second Unit To The Bump Map.
// Set The Texture Environment Of The Second Texture Unit To Perform A Dot3
// Operation With The Value Of The Previous Texture Unit (The Normalized
// Vector From The Surface To The Light) And The Sampled Texture Value (The
// Normalized Normal Vector Of Our Bump Map).
glActiveTexture(GL_TEXTURE1);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, our_bump_map);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB_ARB, GL_DOT3_RGB_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB_ARB, GL_PREVIOUS_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB_ARB, GL_TEXTURE);

// Set The Third Texture Unit To Our Texture.
// Set The Texture Environment Of The Third Texture Unit To Modulate
// (Multiply) The Result Of Our Dot3 Operation With The Texture Value.
glActiveTexture(GL_TEXTURE2);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, our_texture);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);

// Now We Draw Our Object (Remember That We First Have To Calculate The
// (UnNormalized) Vector From Each Vertex To Our Light).

float vertex_to_light_x, vertex_to_light_y, vertex_to_light_z;

glBegin(GL_QUADS);
for (unsigned int i = 0; i < 4; i++)
{
	vertex_to_light_x = light_x - current_vertex_x;
	vertex_to_light_y = light_y - current_vertex_y;
	vertex_to_light_z = light_z - current_vertex_z;

	// Passing The vertex_to_light Values To Texture Unit 0.
	// Remember The First Texture Unit Is Our Normalization Cube Map
	// So This Vector Will Be Normalized For Dot3 Bump Mapping.
	glMultiTexCoord3f(GL_TEXTURE0, vertex_to_light_x, vertex_to_light_y, vertex_to_light_z);
	// Passing The Simple Texture Coordinates To Texture Units 1 And 2.
	glMultiTexCoord2f(GL_TEXTURE1, current_texcoord_s, current_texcoord_t);
	glMultiTexCoord2f(GL_TEXTURE2, current_texcoord_s, current_texcoord_t);

	glVertex3f(current_vertex_x, current_vertex_y, current_vertex_z);
}
glEnd();

So, that’s all for creating a simple bump mapped surface.

But that’s not the end! We might run into big problems with this example. Here we assume that the quad we draw is parallel to the x/y plane. Remember that the normals we stored in a texture are in a static coordinate space. In the example above (if the quad we draw is parallel to the x/y plane) the coordinate space of our object is equal to the coordinate space of the normals. Imagine that we rotate the quad around the x axis: now the normals would also have to be rotated. We didn’t take this into account. But there’s a very simple solution for this problem:

Tangent Space Bump Mapping

In Tangent Space Bump Mapping we define a new coordinate system, the tangent space. This tangent space is different from vertex to vertex. We use 3 vectors to represent this tangent space: the normal (the z axis of our tangent space), the tangent (the x axis of our tangent space) and the binormal (the y axis of our tangent space). There’s a nice picture at reference [4]. You can easily calculate these vectors from the geometry data (vertex and texture coordinates); see reference [5] for details. With these 3 vectors we can build a matrix which transforms a vector from object space into our tangent space. In the example above the tangent is (1,0,0), the binormal is (0,1,0) and the normal is (0,0,1), so our matrix will be:

( 1,0,0 )
( 0,1,0 )
( 0,0,1 )

And since this is an identity matrix, there is no need to use Tangent Space Bump Mapping in this particular case.
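Applying the tangent space matrix to the per-vertex light vector is a plain matrix multiplication; a sketch (the function name is mine):

```c
/* The Rows Of The Matrix Are Tangent, Binormal And Normal; Multiplying
   Transforms A Vector From Object Space Into Tangent Space. */
void to_tangent_space(const float tangent[3], const float binormal[3],
                      const float normal[3], const float v[3], float out[3])
{
    out[0] = tangent[0]  * v[0] + tangent[1]  * v[1] + tangent[2]  * v[2];
    out[1] = binormal[0] * v[0] + binormal[1] * v[1] + binormal[2] * v[2];
    out[2] = normal[0]   * v[0] + normal[1]   * v[1] + normal[2]   * v[2];
}
```

With the identity basis from the example above, the vector comes out unchanged, which is exactly why the simple version worked without tangent space.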

And that’s it!

Of course you can use vertex programs to perform the transformation into tangent space and/or to calculate the vector from each vertex to the light.

Thanks for reading!
Questions and feedback are welcome! :-)
Florian Rudolf
ICQ# 59184081

"If you want to contact my please make sure that it doesn't look like spam! I get a lot of spam mail/ICQ requests... Everything suspicious will be deleted"


Bump Mapping Demo For This Article


[1] - Real-Time Rendering by Eric Haines and Tomas Akenine-Möller (ISBN 1-56881-182-9)
[2] - OpenGL Specification ftp://ftp.sgi.com/opengl/doc/opengl1.2/opengl1.2.1.pdf
[3] - OpenGL Extension Registry http://oss.sgi.com/projects/ogl-sample/registry
[4] - http://www.paulsprojects.net/tutorials/simplebump/simplebump.html