Bump-Mapping, Multi-Texturing & Extensions

This lesson was written by Jens Schneider. It is loosely based on Lesson 06, though lots of changes were made. In this lesson you will learn:

  • How to control your graphic-accelerator's multitexture-features.
  • How to do a "fake" Emboss Bump Mapping.
  • How to do professional looking logos that "float" above your rendered scene using blending.
  • Basics about multi-pass rendering techniques.
  • How to do matrix-transformations efficiently.

Since at least three of the above five points can be considered "advanced rendering techniques", you should already have a general understanding of OpenGL's rendering pipeline. You should know most of the commands used in these tutorials, and you should be familiar with vector maths. Every now and then you'll encounter a block that starts with begin theory(...) as a header and closes with end theory(...). These sections teach you the theory behind the issue(s) mentioned in parentheses, so that, if you already know about an issue, you can easily skip them. If you encounter problems while trying to understand the code, consider going back to the theory sections.

Last but not least: this lesson consists of more than 1,200 lines of code, large parts of which are not only boring but also already familiar to those who have read the earlier tutorials. Thus I will not comment on every line, only on the crux. If you encounter something like this >…<, it means that lines of code have been omitted.

Here we go:

#include <windows.h>								// Header File For Windows
#include <stdio.h>								// Header File For Standard Input/Output
#include <gl\gl.h>								// Header File For The OpenGL32 Library
#include <gl\glu.h>								// Header File For The GLu32 Library
#include <gl\glaux.h>								// Header File For The GLaux Library
#include "glext.h"								// Header File For Multitexturing
#include <string.h>								// Header File For The String Library
#include <math.h>								// Header File For The Math Library

The GLfloat MAX_EMBOSS specifies the "strength" of the Bump Mapping-Effect. Larger values strongly enhance the effect, but reduce visual quality to the same extent by leaving so-called "artefacts" at the edges of the surfaces.

#define MAX_EMBOSS (GLfloat)0.01f						// Maximum Emboss-Translate. Increase To Get Higher Immersion

Ok, now let’s prepare the use of the GL_ARB_multitexture extension. It’s quite simple:

Most accelerators nowadays have more than just one texture-unit. To benefit from this feature, you have to check for GL_ARB_multitexture support, which enables you to map two or more different textures to one OpenGL primitive in just one pass. That might not sound too powerful, but it is! Nearly every time you render something, putting another texture on the object results in higher visual quality. Without multitexturing you usually need multiple "passes" consisting of interleaved texture-selection and geometry drawing, and this can quickly become expensive. But don't worry, this will become clearer later on.

Now back to the code: __ARB_ENABLE can be used to disable multitexturing entirely for a particular build. If you want to see your OpenGL extensions at start-up, just un-comment the #define EXT_INFO. Next, we want to check for our extensions during run-time to keep our code portable, so we need space for some strings; that is what the two MAX_EXTENSION defines are for. We also want to distinguish between being able to do multitexturing and actually using it, so we need another two flags. Last, we need to know how many texture-units are present (we're only going to use two of them, though). At least one texture-unit is present on any OpenGL-capable accelerator, so we initialize maxTexelUnits with 1.

#define __ARB_ENABLE true							// Used To Disable ARB Extensions Entirely
// #define EXT_INFO								// Uncomment To See Your Extensions At Start-Up?
#define MAX_EXTENSION_SPACE 10240						// Characters For Extension-Strings
#define MAX_EXTENSION_LENGTH 256						// Maximum Characters In One Extension-String
bool multitextureSupported=false;						// Flag Indicating Whether Multitexturing Is Supported
bool useMultitexture=true;							// Use It If It Is Supported?
GLint maxTexelUnits=1;								// Number Of Texel-Pipelines. This Is At Least 1.

The following lines are needed to "link" the extensions to C++ function calls. Just treat the PFN-whoever-reads-this types as pre-defined datatypes able to describe function calls. Since we are not sure that we will actually get functions matching these prototypes, we set the pointers to NULL. The commands glMultiTexCoordifARB map to the well-known glTexCoordif, specifying i-dimensional texture-coordinates. Note that these can completely substitute the glTexCoordif commands. Since we only use the GLfloat version, we only need prototypes for the commands ending in "f"; others are also available ("fv", "i", etc.). The last two prototypes are for setting the active texture-unit that currently receives texture-bindings ( glActiveTextureARB() ) and for determining which texture-unit is associated with the ArrayPointer commands (a.k.a. the client subset, thus glClientActiveTextureARB). By the way: ARB is an abbreviation for "Architecture Review Board". Extensions with ARB in their name are not required by an OpenGL-conformant implementation, but they are expected to be widely supported. Currently, only the multitexture extension has made it to ARB status. This may be taken as a sign of the tremendous impact multitexturing has on the speed of several advanced rendering techniques.

The lines omitted here are GDI-context handles etc.

PFNGLMULTITEXCOORD1FARBPROC	glMultiTexCoord1fARB	= NULL;
PFNGLMULTITEXCOORD2FARBPROC	glMultiTexCoord2fARB	= NULL;
PFNGLMULTITEXCOORD3FARBPROC	glMultiTexCoord3fARB	= NULL;
PFNGLMULTITEXCOORD4FARBPROC	glMultiTexCoord4fARB	= NULL;
PFNGLACTIVETEXTUREARBPROC	glActiveTextureARB	= NULL;
PFNGLCLIENTACTIVETEXTUREARBPROC	glClientActiveTextureARB= NULL;

We need global variables:

  • filter specifies what filter to use. Refer to Lesson 06. We’ll usually just take GL_LINEAR, so we initialise with 1.
  • texture holds our base-texture, three times, one per filter.
  • bump holds our bump maps
  • invbump holds our inverted bump maps. This is explained later on in a theory-section.
  • The Logo-things hold textures for several billboards that will be added to rendering output as a final pass.
  • The Light...-stuff contains data on our OpenGL light-source.
GLuint  filter=1;								// Which Filter To Use
GLuint  texture[3];								// Storage For 3 Textures
GLuint  bump[3];								// Our Bumpmappings
GLuint  invbump[3];								// Inverted Bumpmaps
GLuint  glLogo;									// Handle For OpenGL-Logo
GLuint  multiLogo;								// Handle For Multitexture-Enabled-Logo
GLfloat LightAmbient[]	= { 0.2f, 0.2f, 0.2f, 1.0f};				// Ambient Light Is 20% White
GLfloat LightDiffuse[]	= { 1.0f, 1.0f, 1.0f, 1.0f};				// Diffuse Light Is White
GLfloat LightPosition[]	= { 0.0f, 0.0f, 2.0f, 1.0f};				// Position Is Somewhat In Front Of Screen (w=1.0f Makes It A Positional Light)
GLfloat Gray[]		= { 0.5f, 0.5f, 0.5f, 1.0f};

The next block of code contains the numerical representation of a textured cube built out of GL_QUADS. Every five numbers represent one set of 2D texture-coordinates followed by one set of 3D vertex-coordinates. This lets us build the cube using for-loops, since we need that cube several times. The data block is followed by the well-known WndProc() prototype from former lessons.

// Data Contains The Faces Of The Cube In Format 2xTexCoord, 3xVertex.
// Note That The Tesselation Of The Cube Is Only Absolute Minimum.

GLfloat data[]= {
	// FRONT FACE
	0.0f, 0.0f,		-1.0f, -1.0f, +1.0f,
	1.0f, 0.0f,		+1.0f, -1.0f, +1.0f,
	1.0f, 1.0f,		+1.0f, +1.0f, +1.0f,
	0.0f, 1.0f,		-1.0f, +1.0f, +1.0f,
	// BACK FACE
	1.0f, 0.0f,		-1.0f, -1.0f, -1.0f,
	1.0f, 1.0f,		-1.0f, +1.0f, -1.0f,
	0.0f, 1.0f,		+1.0f, +1.0f, -1.0f,
	0.0f, 0.0f,		+1.0f, -1.0f, -1.0f,
	// Top Face
	0.0f, 1.0f,		-1.0f, +1.0f, -1.0f,
	0.0f, 0.0f,		-1.0f, +1.0f, +1.0f,
	1.0f, 0.0f,		+1.0f, +1.0f, +1.0f,
	1.0f, 1.0f,		+1.0f, +1.0f, -1.0f,
	// Bottom Face
	1.0f, 1.0f,		-1.0f, -1.0f, -1.0f,
	0.0f, 1.0f,		+1.0f, -1.0f, -1.0f,
	0.0f, 0.0f,		+1.0f, -1.0f, +1.0f,
	1.0f, 0.0f,		-1.0f, -1.0f, +1.0f,
	// Right Face
	1.0f, 0.0f,		+1.0f, -1.0f, -1.0f,
	1.0f, 1.0f,		+1.0f, +1.0f, -1.0f,
	0.0f, 1.0f,		+1.0f, +1.0f, +1.0f,
	0.0f, 0.0f,		+1.0f, -1.0f, +1.0f,
	// Left Face
	0.0f, 0.0f,		-1.0f, -1.0f, -1.0f,
	1.0f, 0.0f,		-1.0f, -1.0f, +1.0f,
	1.0f, 1.0f,		-1.0f, +1.0f, +1.0f,
	0.0f, 1.0f,		-1.0f, +1.0f, -1.0f
};

LRESULT CALLBACK WndProc(HWND, UINT, WPARAM, LPARAM);				// Declaration For WndProc

The next block of code is to determine extension-support during run-time.

First, we can assume that we have a long string containing all supported extensions as '\n'-separated sub-strings. So all we need to do is search for a '\n' and start comparing string with search until we either encounter another '\n' or string no longer matches search. In the first case we return true for "found"; in the other case we move on to the next sub-string, until we reach the end of string. We have to be a little careful at the beginning of string, since it does not begin with a newline character.

By the way: A common rule is to ALWAYS check during runtime for availability of a given extension!

bool isInString(char *string, const char *search) {
	int pos=0;
	int maxpos=strlen(search)-1;
	int len=strlen(string);
	char *other;
	for (int i=0; i<len; i++) {
		if ((i==0) || ((i>1) && string[i-1]=='\n')) {			// New Extension Begins Here!
			other=&string[i];
			pos=0;							// Begin New Search
			while ((i<len) && (string[i]!='\n')) {			// Search Whole Extension-String, But Don't Run Past Its End
				if (string[i]==search[pos]) pos++;		// Next Position
				if ((pos>maxpos) && ((string[i+1]=='\n') || (string[i+1]=='\0'))) return true;	// We Have A Winner!
				i++;
			}
		}
	}
	return false;								// Sorry, Not Found!
}

Now we have to fetch the extension string and convert it to be '\n'-separated so that we can search it for our desired extension. If we find the sub-string "GL_ARB_multitexture" in it, this feature is supported. But we can only use it if __ARB_ENABLE is also true. Last but not least, we need GL_EXT_texture_env_combine to be supported. This extension introduces new ways in which the texture-units interact; we need it since GL_ARB_multitexture only feeds the output of one texture-unit into the unit with the next higher number. So we check for this extension rather than using another complex blending equation (which would not produce exactly the same effect!). If all extensions are supported and we are not overridden, we first determine how many texture-units are available, saving the number in maxTexelUnits. Then we have to link the functions to our names. This is done by the wglGetProcAddress() calls, with a string naming the function as parameter and a cast to the matching prototype to ensure we get the correct function type.

bool initMultitexture(void) {
	char *extensions;
	extensions=strdup((char *) glGetString(GL_EXTENSIONS));			// Fetch Extension String
	int len=strlen(extensions);
	for (int i=0; i<len; i++)						// Separate It By Newline Instead Of Blank
		if (extensions[i]==' ') extensions[i]='\n';

#ifdef EXT_INFO
	MessageBox(hWnd,extensions,"supported GL extensions",MB_OK | MB_ICONINFORMATION);
#endif

	if (isInString(extensions,"GL_ARB_multitexture")			// Is Multitexturing Supported?
		&& __ARB_ENABLE							// Override Flag
		&& isInString(extensions,"GL_EXT_texture_env_combine"))		// texture-environment-combining supported?
	{       
		glGetIntegerv(GL_MAX_TEXTURE_UNITS_ARB,&maxTexelUnits);
		glMultiTexCoord1fARB = (PFNGLMULTITEXCOORD1FARBPROC) wglGetProcAddress("glMultiTexCoord1fARB");
		glMultiTexCoord2fARB = (PFNGLMULTITEXCOORD2FARBPROC) wglGetProcAddress("glMultiTexCoord2fARB");
		glMultiTexCoord3fARB = (PFNGLMULTITEXCOORD3FARBPROC) wglGetProcAddress("glMultiTexCoord3fARB");
		glMultiTexCoord4fARB = (PFNGLMULTITEXCOORD4FARBPROC) wglGetProcAddress("glMultiTexCoord4fARB");
		glActiveTextureARB   = (PFNGLACTIVETEXTUREARBPROC) wglGetProcAddress("glActiveTextureARB");
		glClientActiveTextureARB= (PFNGLCLIENTACTIVETEXTUREARBPROC) wglGetProcAddress("glClientActiveTextureARB");
               
#ifdef EXT_INFO
		MessageBox(hWnd,"The GL_ARB_multitexture extension will be used.","feature supported!",MB_OK | MB_ICONINFORMATION);
#endif

		return true;
	}
	useMultitexture=false;							// We Can't Use It If It Isn't Supported!
	return false;
}

initLights() just initialises OpenGL lighting and is called by InitGL() later on.

void initLights(void) {
        glLightfv(GL_LIGHT1, GL_AMBIENT, LightAmbient);				// Load Light-Parameters into GL_LIGHT1
        glLightfv(GL_LIGHT1, GL_DIFFUSE, LightDiffuse);
        glLightfv(GL_LIGHT1, GL_POSITION, LightPosition);
        glEnable(GL_LIGHT1);
}

Here we load LOTS of textures. Since auxDIBImageLoad() has an error-handler of its own, and since LoadBMP() wasn't very predictable without a try-catch block, I simply dropped it. Now to our loading routine: first, we load the base bitmap and build three filtered textures from it (GL_NEAREST, GL_LINEAR and GL_LINEAR_MIPMAP_NEAREST). Note that I only use one data-structure to hold bitmaps, since we only need one open at a time. In addition, I introduce a new data-structure called alpha here. It holds the alpha-layer of textures, so that I can store RGBA images as two bitmaps: one 24bpp RGB and one 8bpp greyscale alpha. For the status indicator to work properly, we have to delete the Image block after every load and reset it to NULL.

Note also that I use GL_RGB8 instead of just "3" when specifying the internal texture format. This is more conformant with upcoming OpenGL-ICD releases and should always be preferred over just a number.

int LoadGLTextures() {								// Load Bitmaps And Convert To Textures
	bool status=true;							// Status Indicator
	AUX_RGBImageRec *Image=NULL;						// Create Storage Space For The Texture
	char *alpha=NULL;

	// Load The Tile-Bitmap for Base-Texture
	if (Image=auxDIBImageLoad("Data/Base.bmp")) {
		glGenTextures(3, texture);					// Create Three Textures

		// Create Nearest Filtered Texture
		glBindTexture(GL_TEXTURE_2D, texture[0]);
		glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_NEAREST);
		glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_NEAREST);
		glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, Image->sizeX, Image->sizeY, 0, GL_RGB, GL_UNSIGNED_BYTE, Image->data);

		// Create Linear Filtered Texture
		glBindTexture(GL_TEXTURE_2D, texture[1]);
		glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
		glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
		glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, Image->sizeX, Image->sizeY, 0, GL_RGB, GL_UNSIGNED_BYTE, Image->data);

		// Create MipMapped Texture
		glBindTexture(GL_TEXTURE_2D, texture[2]);
		glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
		glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR_MIPMAP_NEAREST);
		gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGB8, Image->sizeX, Image->sizeY, GL_RGB, GL_UNSIGNED_BYTE, Image->data);
	}
	else status=false;

	if (Image) {								// If Texture Exists
		if (Image->data) delete Image->data;				// If Texture Image Exists
		delete Image;
		Image=NULL;
	}

Now we'll load the bump map. For reasons discussed later, it must have at most 50% luminance, so we have to scale it one way or another. I chose to scale it using the glPixelTransferf() commands, which specify how data from bitmaps is converted to textures on a per-pixel basis; here I use them to scale the RGB components of the bitmap to 50%. You should really have a look at the glPixelTransfer() command family if you're not already using it in your programs. It is quite useful.

Another issue is that we don't want our bitmap to be repeated over and over across the texture. We just want it once, mapped to texture-coordinates (s,t)=(0.0f, 0.0f) through (s,t)=(1.0f, 1.0f). All other texture-coordinates should be mapped to plain black. This is accomplished by the two glTexParameteri() calls, which are fairly self-explanatory and "clamp" the bitmap in the s and t directions.

	// Load The Bumpmaps
	if (Image=auxDIBImageLoad("Data/Bump.bmp")) {
		glPixelTransferf(GL_RED_SCALE,0.5f);				// Scale RGB By 50%, So That We Have Only
		glPixelTransferf(GL_GREEN_SCALE,0.5f);				// Half Intensity
		glPixelTransferf(GL_BLUE_SCALE,0.5f);
		glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_S,GL_CLAMP);	// No Wrapping, Please!
		glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_T,GL_CLAMP);
		glGenTextures(3, bump);						// Create Three Textures

		// Create Nearest Filtered Texture
		>…<

		// Create Linear Filtered Texture
		>…<

		// Create MipMapped Texture
		>…<

You already know this sentence by now: for reasons discussed later, we have to build an inverted bump map, again with at most 50% luminance. So we subtract the bump map from pure white, which is {255, 255, 255} in integer representation. Since we do NOT set the RGB scaling back to 100% (it took me about three hours to figure out that this was a major error in my first version!), the inverted bump map will once again be scaled to 50% luminance.

		for (int i=0; i<3*Image->sizeX*Image->sizeY; i++)		// Invert The Bumpmap
			Image->data[i]=255-Image->data[i];

		glGenTextures(3, invbump);					// Create Three Textures

		// Create Nearest Filtered Texture
		>…<

		// Create Linear Filtered Texture
		>…<

		// Create MipMapped Texture
		>…<
	}
	else status=false;
	if (Image) {								// If Texture Exists
		if (Image->data) delete Image->data;				// If Texture Image Exists
		delete Image;
		Image=NULL;
	}

Loading the logo bitmaps is pretty much straightforward, except for the RGB-A recombination, which should be self-explanatory enough. Note that the texture is built from the alpha memory-block, not from the Image memory-block! Only one filter is used here.

	// Load The Logo-Bitmaps
	if (Image=auxDIBImageLoad("Data/OpenGL_ALPHA.bmp")) {
		alpha=new char[4*Image->sizeX*Image->sizeY];
		// Create Memory For RGBA8-Texture
		for (int a=0; a<Image->sizeX*Image->sizeY; a++)
			alpha[4*a+3]=Image->data[a*3];				// Pick Only Red Value As Alpha!
		if (!(Image=auxDIBImageLoad("Data/OpenGL.bmp"))) status=false;
		for (a=0; a<Image->sizeX*Image->sizeY; a++) {
			alpha[4*a]=Image->data[a*3];				// R
			alpha[4*a+1]=Image->data[a*3+1];			// G
			alpha[4*a+2]=Image->data[a*3+2];			// B
		}

		glGenTextures(1, &glLogo);					// Create One Texture

		// Create Linear Filtered RGBA8-Texture
		glBindTexture(GL_TEXTURE_2D, glLogo);
		glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
		glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
		glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, Image->sizeX, Image->sizeY, 0, GL_RGBA, GL_UNSIGNED_BYTE, alpha);
		delete [] alpha;						// alpha Was Allocated With new[], So Use Array Delete
	}
	else status=false;

	if (Image) {								// If Texture Exists
		if (Image->data) delete Image->data;				// If Texture Image Exists
		delete Image;
		Image=NULL;
	}

	// Load The "Extension Enabled"-Logo
	if (Image=auxDIBImageLoad("Data/multi_on_alpha.bmp")) {
		alpha=new char[4*Image->sizeX*Image->sizeY];			// Create Memory For RGBA8-Texture
		>…<
		glGenTextures(1, &multiLogo);					// Create One Texture
		// Create Linear Filtered RGBA8-Texture
		>…<
		delete [] alpha;						// alpha Was Allocated With new[], So Use Array Delete
	}
	else status=false;

	if (Image) {								// If Texture Exists
		if (Image->data) delete Image->data;				// If Texture Image Exists
		delete Image;
		Image=NULL;
	}
	return status;								// Return The Status
}

Next comes nearly the only unmodified function, ReSizeGLScene(); I've omitted it here. It is followed by a function doCube() that draws a cube, complete with normalized normals. Note that this version only feeds texture-unit #0, since glTexCoord2f(s,t) is the same thing as glMultiTexCoord2f(GL_TEXTURE0_ARB,s,t). Note also that the cube could be done using interleaved arrays, but that is definitely another issue. Note further that this cube CAN NOT be done using a display list, since display lists seem to use an internal floating-point accuracy different from GLfloat. Since this leads to several nasty effects, generally referred to as "decaling" problems, I dropped display lists. I assume a general rule for multipass algorithms is to do the entire geometry either with or without display lists. So never dare to mix the two, even if it seems to run on your hardware, since it won't run on every hardware!

GLvoid ReSizeGLScene(GLsizei width, GLsizei height)
// Resize And Initialize The GL Window
>…<

void doCube (void) {
	int i;
	glBegin(GL_QUADS);
		// Front Face
		glNormal3f( 0.0f, 0.0f, +1.0f);
		for (i=0; i<4; i++) {
			glTexCoord2f(data[5*i],data[5*i+1]);
			glVertex3f(data[5*i+2],data[5*i+3],data[5*i+4]);
		}
		// Back Face
		glNormal3f( 0.0f, 0.0f,-1.0f);
		for (i=4; i<8; i++) {
			glTexCoord2f(data[5*i],data[5*i+1]);
			glVertex3f(data[5*i+2],data[5*i+3],data[5*i+4]);
		}
		// Top Face
		glNormal3f( 0.0f, 1.0f, 0.0f);
		for (i=8; i<12; i++) {
			glTexCoord2f(data[5*i],data[5*i+1]);
			glVertex3f(data[5*i+2],data[5*i+3],data[5*i+4]);
		}
		// Bottom Face
		glNormal3f( 0.0f,-1.0f, 0.0f);
		for (i=12; i<16; i++) {
			glTexCoord2f(data[5*i],data[5*i+1]);
			glVertex3f(data[5*i+2],data[5*i+3],data[5*i+4]);
		}
		// Right Face
		glNormal3f( 1.0f, 0.0f, 0.0f);
		for (i=16; i<20; i++) {
			glTexCoord2f(data[5*i],data[5*i+1]);
			glVertex3f(data[5*i+2],data[5*i+3],data[5*i+4]);
		}
		// Left Face
		glNormal3f(-1.0f, 0.0f, 0.0f);
		for (i=20; i<24; i++) {
			glTexCoord2f(data[5*i],data[5*i+1]);
			glVertex3f(data[5*i+2],data[5*i+3],data[5*i+4]);
		}
	glEnd();
}

Time to initialize OpenGL. Everything is as in Lesson 06, except that I call initLights() instead of setting up the lights here. Oh, and of course I'm calling the multitexture setup here!

int InitGL(GLvoid)								// All Setup For OpenGL Goes Here
{
	multitextureSupported=initMultitexture();
	if (!LoadGLTextures()) return false;					// Jump To Texture Loading Routine
	glEnable(GL_TEXTURE_2D);						// Enable Texture Mapping
	glShadeModel(GL_SMOOTH);						// Enable Smooth Shading
	glClearColor(0.0f, 0.0f, 0.0f, 0.5f);					// Black Background
	glClearDepth(1.0f);							// Depth Buffer Setup
	glEnable(GL_DEPTH_TEST);						// Enables Depth Testing
	glDepthFunc(GL_LEQUAL);							// The Type Of Depth Testing To Do
	glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);			// Really Nice Perspective Calculations

	initLights();								// Initialize OpenGL Light
	return true;								// Initialization Went OK
}

Here comes about 95% of the work. All references like "for reasons discussed later" will be resolved in the following block of theory.

Begin Theory ( Emboss Bump Mapping )

If you have a Powerpoint-viewer installed, it is highly recommended that you download the following presentation:

Emboss Bump Mapping by Michael I. Gold, nVidia Corp. [.ppt, 309K]

For those without Powerpoint-viewer, I’ve tried to convert the information contained in the document to .html-format. Here it comes:

Emboss Bump Mapping

Michael I. Gold

NVidia Corporation

Bump Mapping

Real Bump Mapping Uses Per-Pixel Lighting.

  • Lighting calculation at each pixel based on perturbed normal vectors.
  • Computationally expensive.
  • For more information see: Blinn, J. : Simulation of Wrinkled Surfaces, Computer Graphics. 12,3 (August 1978) 286-292.
  • For information on the web go to: http://www.r3.nu/~cass/thesis/thesis.html to see Cass Everitt’s Orthogonal Illumination Thesis. (rem.: Jens)

Emboss Bump Mapping

Emboss Bump Mapping Is A Hack

  • Diffuse lighting only, no specular component
  • Under-sampling artefacts (may result in blurry motion, rem.: Jens)
  • Possible on today’s consumer hardware (as shown, rem.: Jens)
  • If it looks good, do it!

Diffuse Lighting Calculation

C=(L*N) x Dl x Dm

  • L is light vector
  • N is normal vector
  • Dl is light diffuse color
  • Dm is material diffuse color
  • Bump Mapping changes N per pixel
  • Emboss Bump Mapping approximates (L*N)

Approximate Diffuse Factor L*N

Texture Map Represents Heightfield

  • [0,1] represents range of bump function
  • First derivative represents slope m (Note that m is only 1D. Imagine m as the inf.-norm of the gradient at a given set of coordinates (s,t)!, rem.: Jens)
  • m increases / decreases base diffuse factor Fd
  • (Fd+m) approximates (L*N) per pixel

Approximate Derivative

Embossing Approximates Derivative

  • Lookup height H0 at point (s,t)
  • Lookup height H1 at point slightly perturbed toward light source (s+ds,t+dt)
  • Subtract original height H0 from perturbed height H1
  • Difference represents instantaneous slope m=H1-H0
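
To make the two lookups concrete, here is a small sketch of the idea in code. This is NOT how the lesson implements it (there, the texture hardware performs the lookups via offset texture-coordinates); the heightfield array, its size and the helper names are assumptions for illustration only.

// Illustration Only: Approximate The Slope m=H1-H0 By Two Heightfield Lookups
// height Is Assumed To Be A size x size Greyscale Heightfield With Values In [0,1]
GLfloat sampleHeight(GLfloat *height, int size, GLfloat s, GLfloat t) {
	int x=(int)(s*(size-1));						// Map s In [0,1] To A Texel Column
	int y=(int)(t*(size-1));						// Map t In [0,1] To A Texel Row
	if (x<0) x=0;								// Clamp To The Heightfield
	if (x>size-1) x=size-1;
	if (y<0) y=0;
	if (y>size-1) y=size-1;
	return height[y*size+x];						// Height At (s,t)
}

GLfloat embossSlope(GLfloat *height, int size, GLfloat s, GLfloat t, GLfloat ds, GLfloat dt) {
	GLfloat H0=sampleHeight(height,size,s,t);				// Original Height
	GLfloat H1=sampleHeight(height,size,s+ds,t+dt);				// Height Perturbed Toward The Light
	return H1-H0;								// Instantaneous Slope m
}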

Compute The Bump

1) Original bump (H0).

2) Original bump (H0) overlaid with second bump (H1) slightly perturbed toward light source.

3) Subtract the original bump from the second one (H1-H0). This leads to brightened (B) and darkened (D) areas.

Compute The Lighting

Evaluate Fragment Color Cf

  • Cf = (L*N) x Dl x Dm
  • (L*N) ~ (Fd + (H1-H0))
  • Dm x Dl is encoded in surface texture Ct. Could control Dl separately, if you’re clever. (we control it using OpenGL-Lighting!, rem.: Jens)
  • Cf = (Fd + (H1-H0)) x Ct
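
As a quick numerical check (the numbers are chosen freely for illustration): with a base diffuse factor Fd = 0.5, heights H0 = 0.55 and H1 = 0.60, and a surface texel Ct = (0.8, 0.6, 0.4), the approximation gives Cf = (Fd + (H1-H0)) x Ct = (0.5 + 0.05) x (0.8, 0.6, 0.4) = (0.44, 0.33, 0.22). A texel whose bump rises toward the light thus comes out slightly brighter than the flat case of 0.5 x Ct.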

Is That All? It’s So Easy!

We’re Not Quite Done Yet. We Still Must:

  • Build a texture (using a painting program, rem.: Jens)
  • Calculate texture coordinate offsets (ds,dt)
  • Calculate diffuse Factor Fd (is controlled using OpenGL-Lighting!, rem.: Jens)
  • Both are derived from normal N and light vector L (in our case, only (ds,dt) are calculated explicitly!, rem.: Jens)
  • Now we have to do some math

Building A Texture

Conserve Textures!

  • Current multitexture-hardware only supports two textures! (By now, not true anymore, but nevertheless you should read this!, rem.: Jens)
  • Bump Map in ALPHA channel (not the way we do it, could implement it yourself as an exercise if you have TNT-chipset rem.: Jens)
  • Maximum bump = 1.0
  • Level ground = 0.5
  • Maximum depression = 0.0
  • Surface color in RGB channels
  • Set internal format to GL_RGBA8 !!

Calculate Texture Offsets

Rotate Light Vector Into Normal Space

  • Need Normal coordinate system
  • Derive coordinate system from normal and “up” vector (we pass the texCoord directions to our offset generator explicitly, rem.: Jens)
  • Normal is z-axis
  • Cross-product is x-axis
  • Throw away "up" vector, derive y-axis as cross-product of x- and z-axis
  • Build 3x3 matrix Mn from axes
  • Transform the light vector into normal space. (Mn is also called an orthonormal basis. Think of Mn*v as "expressing" v in terms of a basis describing tangent space rather than in terms of the standard basis. Note also that multiplying by an orthonormal basis preserves vector lengths, so normalized vectors stay normalized! rem.: Jens)

Calculate Texture Offsets (Cont’d)

Use Normal-Space Light Vector For Offsets

  • L’ = Mn x L
  • Use L’x, L’y for (ds,dt)
  • Use L’z for the diffuse factor! (Rather not! If you’re not a TNT-owner, use OpenGL-Lighting instead, since you have to do one additional pass anyhow!, rem.: Jens)
  • If light vector is near normal, L’x, L’y are small.
  • If light vector is near tangent plane, L’x, L’y are large.
  • What if L’z is less than zero?
  • Light is on opposite side from normal
  • Fade contribution toward zero.
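
To illustrate the recipe from the last two slides, here is a small sketch of how the basis Mn could be built and applied in code. It is an assumption for illustration only and not the path this lesson takes: SetUpBumps() further below skips the explicit matrix and directly projects the light vector onto the s- and t-directions, which amounts to using the first two rows of Mn.

// Illustration Only: Build The Normal-Space Basis And Transform The Light Vector
// n Must Be Normalized, up Must Not Be Parallel To n
void lightToNormalSpace(GLfloat *n, GLfloat *up, GLfloat *L, GLfloat *Lprime) {
	GLfloat x[3], y[3], len;
	// x-Axis: Cross-Product Of up And n, Then Normalized
	x[0]=up[1]*n[2]-up[2]*n[1];
	x[1]=up[2]*n[0]-up[0]*n[2];
	x[2]=up[0]*n[1]-up[1]*n[0];
	len=(GLfloat) sqrt(x[0]*x[0]+x[1]*x[1]+x[2]*x[2]);
	x[0]/=len; x[1]/=len; x[2]/=len;
	// y-Axis: Cross-Product Of n (The z-Axis) And x, Already Unit Length
	y[0]=n[1]*x[2]-n[2]*x[1];
	y[1]=n[2]*x[0]-n[0]*x[2];
	y[2]=n[0]*x[1]-n[1]*x[0];
	// L' = Mn x L: Express L In The (x,y,n) Basis Via Three Dot-Products
	Lprime[0]=x[0]*L[0]+x[1]*L[1]+x[2]*L[2];				// L'x -> ds
	Lprime[1]=y[0]*L[0]+y[1]*L[1]+y[2]*L[2];				// L'y -> dt
	Lprime[2]=n[0]*L[0]+n[1]*L[1]+n[2]*L[2];				// L'z -> Diffuse Factor (Fade If Negative)
}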

Implementation On TNT

Calculate Vectors, Texcoords On The Host

  • Pass diffuse factor as vertex alpha
  • Could use vertex color for light diffuse color
  • H0 and surface color from texture unit 0
  • H1 from texture unit 1 (same texture, different coordinates)
  • ARB_multitexture extension
  • Combiners extension (more precisely: the NVIDIA multitexture combiners extension, featured by all TNT-family cards, rem.: Jens)

Implementation on TNT (Cont'd)

Combiner 0 Alpha-Setup:

  • (1-T0a) + T1a - 0.5 (T0a stands for "texture-unit 0, alpha channel", rem.: Jens)
  • (T1a-T0a) maps to (-1,1), but hardware clamps to (0,1)
  • 0.5 bias balances the loss from clamping (consider using 0.5 scale, since you can use a wider variety of bump maps, rem.: Jens)
  • Could modulate light diffuse color with T0c
  • Combiner 0 rgb-setup:
  • (T0c * C0a + T0c * Fda - 0.5 )*2
  • 0.5 bias balances the loss from clamping
  • scale by 2 brightens the image

End Theory ( Emboss Bump Mapping )

Though we're doing it a little differently than the TNT implementation, to enable our program to run on ALL accelerators, we can learn two or three things here. One thing is that bump mapping is a multi-pass algorithm on most cards (not on the TNT family, where it can be implemented in one 2-texture pass). You should now be able to imagine how nice multitexturing really is. We'll now implement a 3-pass non-multitexture algorithm that can be (and will be) developed into a 2-pass multitexture algorithm.

By now you should be aware that we'll have to do some matrix-matrix multiplication (and matrix-vector multiplication, too). But that's nothing to worry about: OpenGL will do the matrix-matrix multiplication for us (if tweaked right), and the matrix-vector multiplication is really easy-going: VMatMult(M,v) multiplies matrix M with vector v and stores the result back in v: v:=M*v. All matrices and vectors passed have to be in homogeneous coordinates, resulting in 4x4 matrices and 4-dimensional vectors. This is to ensure conformity with OpenGL, so that we can multiply our own vectors with OpenGL matrices right away.

// Calculates v=vM, M Is 4x4 In Column-Major, v Is 4dim. Row (i.e. "Transposed")
void VMatMult(GLfloat *M, GLfloat *v) {
	GLfloat res[3];
	res[0]=M[ 0]*v[0]+M[ 1]*v[1]+M[ 2]*v[2]+M[ 3]*v[3];
	res[1]=M[ 4]*v[0]+M[ 5]*v[1]+M[ 6]*v[2]+M[ 7]*v[3];
	res[2]=M[ 8]*v[0]+M[ 9]*v[1]+M[10]*v[2]+M[11]*v[3];
	v[0]=res[0];
	v[1]=res[1];
	v[2]=res[2];
	v[3]=M[15];								// Homogenous Coordinate
}

Begin Theory ( Emboss Bump Mapping Algorithms )

Here we'll discuss two different algorithms. I found the first one several days ago at:
http://www.nvidia.com/marketing/Developer/DevRel.nsf/TechnicalDemosFrame?OpenPage

The program is called GL_BUMP and was written by Diego Tártara in 1999. It implements really nice-looking bump mapping, though it has some drawbacks. But first, let's have a look at Tártara's algorithm:

  1. All vectors have to be EITHER in object OR world space
  2. Calculate vector v from current vertex to light position
  3. Normalize v
  4. Project v into tangent space. (This is the plane touching the surface in the current vertex. Typically, if working with flat surfaces, this is the surface itself).
  5. Offset (s,t)-coordinates by the projected v’s x and y component

This doesn't look bad at all! It is basically the algorithm introduced by Michael I. Gold above. But it has a major drawback: Tártara only does the projection for an xy-plane! This is not sufficient for our purposes, since it simplifies the projection step to just taking the xy-components of v and discarding the z-component.

But his implementation does the diffuse lighting the same way we will: by using OpenGL's built-in lighting. Since we can't use the combiner method Gold suggests (we want our programs to run anywhere, not just on TNT cards!), we can't store the diffuse factor in the alpha channel. And since we already have a 3-pass non-multitexture / 2-pass multitexture problem, why not apply OpenGL lighting to the last pass to do all the ambient light and color stuff for us? This is possible (and looks quite good) only because we have no complex geometry, so keep that in mind. If you were rendering several thousand bump-mapped triangles, you would have to invent something new!

Furthermore, he uses multitexturing, which, as we shall see, is not as easy as you might have thought in this special case.

But now to our implementation. It looks quite similar to the above algorithm, except for the projection step, where we use our own approach:

  • We use OBJECT COORDINATES, meaning we don't apply the modelview matrix to our calculations. This has a nasty side-effect: since we want to rotate the cube, the object-coordinates of the cube don't change, but its world-coordinates (also referred to as eye-coordinates) do. Our light position, however, should not rotate with the cube; it should stay static, meaning its world-coordinates don't change. To compensate, we apply a trick commonly used in computer graphics: instead of transforming each vertex to world space before computing the bumps, we transform the light into object space by applying the inverse of the modelview matrix. This is very cheap in our case, since we know exactly how the modelview matrix was built step by step, so the inversion can also be done step by step. We'll come back to that issue later.
  • We calculate the current vertex c on our surface (simply by looking it up in data).
  • Then we calculate a normal n of length 1 (we usually know n for each face of a cube!). This is important, since requiring normalized vectors saves computing time later. We then calculate the light vector v from c to the light position l.
  • If there's work to do, build a matrix Mn representing the orthonormal projection. This is done as follows.
  • Calculate our texture-coordinate offset by multiplying the supplied texture-coordinate directions s and t each with v and MAX_EMBOSS: ds = s*v*MAX_EMBOSS, dt = t*v*MAX_EMBOSS. Note that s, t and v are vectors while MAX_EMBOSS isn't.
  • Add the offset to the texture-coordinates in pass 2.

Why this is good:

  • Fast (it only needs one square root and a couple of MULs per vertex)!
  • Looks very nice!
  • This works with all surfaces, not just planes.
  • This runs on all accelerators.
  • Is glBegin/glEnd friendly: Does not need any "forbidden" GL-commands.

Drawback:

  • Not fully physically correct.
  • Leaves minor artefacts.

This figure shows where our vectors are located. You can get t and s simply by subtracting adjacent vertices, but be sure to have them point in the right direction and to normalize them. The blue spot marks the vertex to which glTexCoord2f(0.0f, 0.0f) is mapped.
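
As an illustration of that remark, here is a small sketch that derives the two directions for one quad of data[] by subtracting adjacent vertices and normalizing the results. It is not used by the lesson (which hard-codes s and t per face), the helper name is my own, and it assumes the face is laid out like the front face, i.e. with texture-coordinates (0,0), (1,0), (1,1), (0,1) in that order.

// Illustration Only: Derive The s- And t-Directions For One Quad Of data[]
// face Is The Index Of The Quad (0..5); The Results Are Written To s[3] And t[3]
void deriveTexDirections(int face, GLfloat *s, GLfloat *t) {
	GLfloat *v0=&data[5*(4*face+0)+2];					// Vertex Mapped To (0,0)
	GLfloat *v1=&data[5*(4*face+1)+2];					// Vertex Mapped To (1,0)
	GLfloat *v3=&data[5*(4*face+3)+2];					// Vertex Mapped To (0,1)
	GLfloat lenS, lenT;
	int i;
	for (i=0; i<3; i++) {
		s[i]=v1[i]-v0[i];						// s Points Along Increasing s
		t[i]=v3[i]-v0[i];						// t Points Along Increasing t
	}
	lenS=(GLfloat) sqrt(s[0]*s[0]+s[1]*s[1]+s[2]*s[2]);
	lenT=(GLfloat) sqrt(t[0]*t[0]+t[1]*t[1]+t[2]*t[2]);
	for (i=0; i<3; i++) {
		s[i]/=lenS;							// Normalize s
		t[i]/=lenT;							// Normalize t
	}
}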

End Theory ( Emboss Bump Mapping Algorithms )

Let's have a look at the texture-coordinate offset generation first. The function is called SetUpBumps(), since that is what it actually does:

// Sets Up The Texture-Offsets
// n : Normal On Surface. Must Be Of Length 1
// c : Current Vertex On Surface
// l : Lightposition
// s : Direction Of s-Texture-Coordinate In Object Space (Must Be Normalized!)
// t : Direction Of t-Texture-Coordinate In Object Space (Must Be Normalized!)
void SetUpBumps(GLfloat *n, GLfloat *c, GLfloat *l, GLfloat *s, GLfloat *t) {
	GLfloat v[3];								// Vector From Current Position To Light
	GLfloat lenQ;								// Used To Normalize
	// Calculate v From Current Vertex c To Lightposition And Normalize v
	v[0]=l[0]-c[0];
	v[1]=l[1]-c[1];
	v[2]=l[2]-c[2];
	lenQ=(GLfloat) sqrt(v[0]*v[0]+v[1]*v[1]+v[2]*v[2]);
	v[0]/=lenQ;
	v[1]/=lenQ;
	v[2]/=lenQ;
	// Project v Such That We Get Two Values Along Each Texture-Coordinate Axis
	c[0]=(s[0]*v[0]+s[1]*v[1]+s[2]*v[2])*MAX_EMBOSS;
	c[1]=(t[0]*v[0]+t[1]*v[1]+t[2]*v[2])*MAX_EMBOSS;
}

Doesn't look that complicated anymore, eh? But the theory is necessary to understand and control this effect. (I learned THAT myself while writing this tutorial.)

I always like logos to be displayed while presentation programs are running, so we'll have two of them right now. Since a call to doLogo() resets the GL_MODELVIEW matrix, it has to be called as the final rendering pass.

This function displays two logos: an OpenGL logo, and a multitexture logo if that feature is enabled. The logos are alpha-blended and sort of semi-transparent. Since they have an alpha channel, I blend them using GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, as suggested by all OpenGL documentation. Since they are co-planar, we do not have to z-sort them beforehand. The numbers used for the vertices are "empirical" (a.k.a. trial-and-error) to place them neatly in the screen corners. We have to enable blending and disable lighting to avoid nasty effects, and to ensure the logos are in front of everything, we simply reset the GL_MODELVIEW matrix and set the depth function to GL_ALWAYS.

void doLogo(void) {
	// MUST CALL THIS LAST!!!, Billboards The Two Logos
	glDepthFunc(GL_ALWAYS);
	glBlendFunc(GL_SRC_ALPHA,GL_ONE_MINUS_SRC_ALPHA);
	glEnable(GL_BLEND);
	glDisable(GL_LIGHTING);
	glLoadIdentity();
	glBindTexture(GL_TEXTURE_2D,glLogo);
	glBegin(GL_QUADS);
		glTexCoord2f(0.0f,0.0f);	glVertex3f(0.23f, -0.4f,-1.0f);
		glTexCoord2f(1.0f,0.0f);	glVertex3f(0.53f, -0.4f,-1.0f);
		glTexCoord2f(1.0f,1.0f);	glVertex3f(0.53f, -0.25f,-1.0f);
		glTexCoord2f(0.0f,1.0f);	glVertex3f(0.23f, -0.25f,-1.0f);
	glEnd();
	if (useMultitexture) {
		glBindTexture(GL_TEXTURE_2D,multiLogo);
		glBegin(GL_QUADS);
			glTexCoord2f(0.0f,0.0f);	glVertex3f(-0.53f, -0.25f,-1.0f);
			glTexCoord2f(1.0f,0.0f);	glVertex3f(-0.33f, -0.25f,-1.0f);
			glTexCoord2f(1.0f,1.0f);	glVertex3f(-0.33f, -0.15f,-1.0f);
			glTexCoord2f(0.0f,1.0f);	glVertex3f(-0.53f, -0.15f,-1.0f);
		glEnd();
	}
}

Here comes the function for doing the bump mapping without multitexturing. It's a three-pass implementation. As a first step, the GL_MODELVIEW matrix is inverted by applying to the identity matrix, in reverse order and inverted, all the steps later applied to the GL_MODELVIEW matrix. The result is a matrix that "undoes" the GL_MODELVIEW transformation if applied to an object. We fetch it from OpenGL simply by using glGetFloatv(). Remember that the destination has to be an array of 16 GLfloats and that the matrix is stored "transposed" (column-major)!

By the way: if you don't know exactly how the modelview matrix was built, consider working in world space instead, since general matrix inversion is complicated and costly. But if you're processing large numbers of vertices, inverting the modelview with a more generalized approach could still be faster.
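
For the "more generalized approach" mentioned above, one option is to exploit the fact that a modelview built only from rotations and translations is a rigid transform: its inverse is the transposed rotation part combined with the negated, back-rotated translation. The following helper is my own sketch of that idea (it is not part of the lesson and does not handle scaling); it works directly on the 16 GLfloats in OpenGL's column-major layout without touching the matrix stack.

// Illustration Only: Invert A Rigid (Rotation + Translation) Modelview Matrix
// M And Minv Are Arrays Of 16 GLfloats In OpenGL Column-Major Order; Scaling Is NOT Handled
void invertRigidMatrix(GLfloat *M, GLfloat *Minv) {
	int r, c;
	for (r=0; r<3; r++)							// Transpose The Upper-Left 3x3 Rotation Part
		for (c=0; c<3; c++)
			Minv[4*c+r]=M[4*r+c];
	for (r=0; r<3; r++)							// New Translation Is -R^T * t
		Minv[12+r]=-(Minv[r]*M[12]+Minv[4+r]*M[13]+Minv[8+r]*M[14]);
	Minv[3]=0.0f;								// Bottom Row Stays (0,0,0,1)
	Minv[7]=0.0f;
	Minv[11]=0.0f;
	Minv[15]=1.0f;
}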

bool doMesh1TexelUnits(void) {
	GLfloat c[4]={0.0f,0.0f,0.0f,1.0f};					// Holds Current Vertex
	GLfloat n[4]={0.0f,0.0f,0.0f,1.0f};					// Normalized Normal Of Current Surface
	GLfloat s[4]={0.0f,0.0f,0.0f,1.0f};					// s-Texture Coordinate Direction, Normalized
	GLfloat t[4]={0.0f,0.0f,0.0f,1.0f};					// t-Texture Coordinate Direction, Normalized
	GLfloat l[4];								// Holds Our Lightposition To Be Transformed Into Object Space
	GLfloat Minv[16];							// Holds The Inverted Modelview Matrix To Do So
	int i;

	glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);			// Clear The Screen And The Depth Buffer

	// Build Inverse Modelview Matrix First. This Substitutes One Push/Pop With One glLoadIdentity();
	// Simply Build It By Doing All Transformations Negated And In Reverse Order
	glLoadIdentity();
	glRotatef(-yrot,0.0f,1.0f,0.0f);
	glRotatef(-xrot,1.0f,0.0f,0.0f);
	glTranslatef(0.0f,0.0f,-z);
	glGetFloatv(GL_MODELVIEW_MATRIX,Minv);
	glLoadIdentity();
	glTranslatef(0.0f,0.0f,z);
	glRotatef(xrot,1.0f,0.0f,0.0f);
	glRotatef(yrot,0.0f,1.0f,0.0f);

	// Transform The Lightposition Into Object Coordinates:
	l[0]=LightPosition[0];
	l[1]=LightPosition[1];
	l[2]=LightPosition[2];
	l[3]=1.0f;								// Homogenous Coordinate
	VMatMult(Minv,l);

First Pass:

  • Use bump-texture
  • Disable Blending
  • Disable Lighting
  • Use non-offset texture-coordinates
  • Do the geometry

This will render a cube consisting only of the bump map.

	glBindTexture(GL_TEXTURE_2D, bump[filter]);
	glDisable(GL_BLEND);
	glDisable(GL_LIGHTING);
	doCube();

Second Pass:

  • Use inverted bump-texture
  • Enable Blending GL_ONE, GL_ONE
  • Keep Lighting disabled
  • Use offset texture-coordinates (this means you call SetUpBumps() before each face of the cube)
  • Do the geometry

This will render a cube with the correct emboss bump mapping, but without colors.

You could save computing time by just rotating the light vector in the opposite direction. However, this didn't work out correctly, so we do it the plain way: rotate each normal and center-point the same way we rotate our geometry!

	glBindTexture(GL_TEXTURE_2D,invbump[filter]);
	glBlendFunc(GL_ONE,GL_ONE);
	glDepthFunc(GL_LEQUAL);
	glEnable(GL_BLEND);

	glBegin(GL_QUADS);
		// Front Face
		n[0]=0.0f;
		n[1]=0.0f;
		n[2]=1.0f;
		s[0]=1.0f;
		s[1]=0.0f;
		s[2]=0.0f;
		t[0]=0.0f;
		t[1]=1.0f;
		t[2]=0.0f;
		for (i=0; i<4; i++) {
			c[0]=data[5*i+2];
			c[1]=data[5*i+3];
			c[2]=data[5*i+4];
			SetUpBumps(n,c,l,s,t);
			glTexCoord2f(data[5*i]+c[0], data[5*i+1]+c[1]);
			glVertex3f(data[5*i+2], data[5*i+3], data[5*i+4]);
		}
		// Back Face
		n[0]=0.0f;
		n[1]=0.0f;
		n[2]=-1.0f;
		s[0]=-1.0f;
		s[1]=0.0f;
		s[2]=0.0f;
		t[0]=0.0f;
		t[1]=1.0f;
		t[2]=0.0f;
		for (i=4; i<8; i++) {
			c[0]=data[5*i+2];
			c[1]=data[5*i+3];
			c[2]=data[5*i+4];
			SetUpBumps(n,c,l,s,t);
			glTexCoord2f(data[5*i]+c[0], data[5*i+1]+c[1]);
			glVertex3f(data[5*i+2], data[5*i+3], data[5*i+4]);
		}
		// Top Face
		n[0]=0.0f;
		n[1]=1.0f;
		n[2]=0.0f;
		s[0]=1.0f;
		s[1]=0.0f;
		s[2]=0.0f;
		t[0]=0.0f;
		t[1]=0.0f;
		t[2]=-1.0f;
		for (i=8; i<12; i++) {
			c[0]=data[5*i+2];
			c[1]=data[5*i+3];
			c[2]=data[5*i+4];
			SetUpBumps(n,c,l,s,t);
			glTexCoord2f(data[5*i]+c[0], data[5*i+1]+c[1]);
			glVertex3f(data[5*i+2], data[5*i+3], data[5*i+4]);
		}
		// Bottom Face
		n[0]=0.0f;
		n[1]=-1.0f;
		n[2]=0.0f;
		s[0]=-1.0f;
		s[1]=0.0f;
		s[2]=0.0f;
		t[0]=0.0f;
		t[1]=0.0f;
		t[2]=-1.0f;
		for (i=12; i<16; i++) {
			c[0]=data[5*i+2];
			c[1]=data[5*i+3];
			c[2]=data[5*i+4];
			SetUpBumps(n,c,l,s,t);
			glTexCoord2f(data[5*i]+c[0], data[5*i+1]+c[1]);
			glVertex3f(data[5*i+2], data[5*i+3], data[5*i+4]);
		}
		// Right Face
		n[0]=1.0f;
		n[1]=0.0f;
		n[2]=0.0f;
		s[0]=0.0f;
		s[1]=0.0f;
		s[2]=-1.0f;
		t[0]=0.0f;
		t[1]=1.0f;
		t[2]=0.0f;
		for (i=16; i<20; i++) {
			c[0]=data[5*i+2];
			c[1]=data[5*i+3];
			c[2]=data[5*i+4];
			SetUpBumps(n,c,l,s,t);
			glTexCoord2f(data[5*i]+c[0], data[5*i+1]+c[1]);
			glVertex3f(data[5*i+2], data[5*i+3], data[5*i+4]);
		}
		// Left Face
		n[0]=-1.0f;
		n[1]=0.0f;
		n[2]=0.0f;
		s[0]=0.0f;
		s[1]=0.0f;
		s[2]=1.0f;
		t[0]=0.0f;
		t[1]=1.0f;
		t[2]=0.0f;
		for (i=20; i<24; i++) {
			c[0]=data[5*i+2];
			c[1]=data[5*i+3];
			c[2]=data[5*i+4];
			SetUpBumps(n,c,l,s,t);
			glTexCoord2f(data[5*i]+c[0], data[5*i+1]+c[1]);
			glVertex3f(data[5*i+2], data[5*i+3], data[5*i+4]);
		}
	glEnd();

Third Pass:

  • Use (colored) base-texture
  • Enable Blending GL_DST_COLOR, GL_SRC_COLOR
  • This blending equation multiplies by 2: (Cdst*Csrc)+(Csrc*Cdst)=2(Csrc*Cdst)!
  • Enable Lighting to do the ambient and diffuse stuff
  • Reset GL_TEXTURE-matrix to go back to "normal" texture coordinates
  • Do the geometry

This will finish the cube-rendering, complete with lighting. Since we can switch back and forth between multitexturing and non-multitexturing, we have to reset the texture-environment to the "normal" GL_MODULATE first. We only do the third pass if the user doesn't want to see just the emboss.
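
To see why the doubling matters (the numbers are chosen freely): a perfectly flat area comes out of the first two passes as 0.5, since both bump maps were scaled to 50% intensity and cancel each other there. With a base texel of, say, 0.8 the blend then gives (0.5*0.8)+(0.8*0.5) = 0.8, so flat parts of the surface keep their original brightness, while bumps and scratches brighten or darken it.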

	if (!emboss) {
		glTexEnvf (GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
		glBindTexture(GL_TEXTURE_2D,texture[filter]);
		glBlendFunc(GL_DST_COLOR,GL_SRC_COLOR);
		glEnable(GL_LIGHTING);
		doCube();
	}

Last Pass:

  • update geometry (esp. rotations)
  • do the Logos
	xrot+=xspeed;
	yrot+=yspeed;
	if (xrot>360.0f) xrot-=360.0f;
	if (xrot<0.0f) xrot+=360.0f;
	if (yrot>360.0f) yrot-=360.0f;
	if (yrot<0.0f) yrot+=360.0f;

	/* LAST PASS: Do The Logos! */
	doLogo();
	return true;								// Keep Going
}

This function does the whole mess in just 2 passes, using multitexturing with two texel-units. Supporting more units would be extremely complicated due to the blending equations, so we rather tailor the code to the TNT. Note that almost the only difference from doMesh1TexelUnits() is that we send two sets of texture-coordinates for each vertex!

bool doMesh2TexelUnits(void) {
	GLfloat c[4]={0.0f,0.0f,0.0f,1.0f};					// Holds Current Vertex
	GLfloat n[4]={0.0f,0.0f,0.0f,1.0f};					// Normalized Normal Of Current Surface
	GLfloat s[4]={0.0f,0.0f,0.0f,1.0f};					// s-Texture Coordinate Direction, Normalized
	GLfloat t[4]={0.0f,0.0f,0.0f,1.0f};					// t-Texture Coordinate Direction, Normalized
	GLfloat l[4];								// Holds Our Lightposition To Be Transformed Into Object Space
	GLfloat Minv[16];							// Holds The Inverted Modelview Matrix To Do So
	int i;

	glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);			// Clear The Screen And The Depth Buffer

	// Build Inverse Modelview Matrix First. This Substitutes One Push/Pop With One glLoadIdentity();
	// Simply Build It By Doing All Transformations Negated And In Reverse Order
	glLoadIdentity();
	glRotatef(-yrot,0.0f,1.0f,0.0f);
	glRotatef(-xrot,1.0f,0.0f,0.0f);
	glTranslatef(0.0f,0.0f,-z);
	glGetFloatv(GL_MODELVIEW_MATRIX,Minv);
	glLoadIdentity();
	glTranslatef(0.0f,0.0f,z);

	glRotatef(xrot,1.0f,0.0f,0.0f);
	glRotatef(yrot,0.0f,1.0f,0.0f);

	// Transform The Lightposition Into Object Coordinates:
	l[0]=LightPosition[0];
	l[1]=LightPosition[1];
	l[2]=LightPosition[2];
	l[3]=1.0f;								// Homogenous Coordinate
	VMatMult(Minv,l);

First Pass:

  • No Blending
  • No Lighting

Set up the texture-combiner 0 to

  • Use bump-texture
  • Use not-offset texture-coordinates
  • Texture-Operation GL_REPLACE, resulting in texture just being drawn

Set up the texture-combiner 1 to

  • Offset texture-coordinates
  • Texture-Operation GL_ADD, which is the multitexture equivalent of ONE,ONE blending.

This will render a cube shaded with the combined grey-scale emboss map.

	// TEXTURE-UNIT #0
	glActiveTextureARB(GL_TEXTURE0_ARB);
	glEnable(GL_TEXTURE_2D);
	glBindTexture(GL_TEXTURE_2D, bump[filter]);
	glTexEnvf (GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE_EXT);
	glTexEnvf (GL_TEXTURE_ENV, GL_COMBINE_RGB_EXT, GL_REPLACE);

	// TEXTURE-UNIT #1
	glActiveTextureARB(GL_TEXTURE1_ARB);
	glEnable(GL_TEXTURE_2D);
	glBindTexture(GL_TEXTURE_2D, invbump[filter]);
	glTexEnvf (GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE_EXT);
	glTexEnvf (GL_TEXTURE_ENV, GL_COMBINE_RGB_EXT, GL_ADD);

	// General Switches
	glDisable(GL_BLEND);
	glDisable(GL_LIGHTING);

Now we just render the faces one by one, as already seen in doMesh1TexelUnits(). The only new thing: we use glMultiTexCoord2fARB() instead of just glTexCoord2f(). Note that you must specify which texture-unit you mean via the first parameter, which must be GL_TEXTUREi_ARB with i in [0..31]. (What hardware has 32 texture-units? And what for?)

	glBegin(GL_QUADS);
		// Front Face
		n[0]=0.0f;
		n[1]=0.0f;
		n[2]=1.0f;
		s[0]=1.0f;
		s[1]=0.0f;
		s[2]=0.0f;
		t[0]=0.0f;
		t[1]=1.0f;
		t[2]=0.0f;
		for (i=0; i<4; i++) {
			c[0]=data[5*i+2];
			c[1]=data[5*i+3];
			c[2]=data[5*i+4];
			SetUpBumps(n,c,l,s,t);
			glMultiTexCoord2fARB(GL_TEXTURE0_ARB,data[5*i], data[5*i+1]);
			glMultiTexCoord2fARB(GL_TEXTURE1_ARB,data[5*i]+c[0], data[5*i+1]+c[1]);
			glVertex3f(data[5*i+2], data[5*i+3], data[5*i+4]);
		}
		// Back Face
		n[0]=0.0f;
		n[1]=0.0f;
		n[2]=-1.0f;
		s[0]=-1.0f;
		s[1]=0.0f;
		s[2]=0.0f;
		t[0]=0.0f;
		t[1]=1.0f;
		t[2]=0.0f;
		for (i=4; i<8; i++) {
			c[0]=data[5*i+2];
			c[1]=data[5*i+3];
			c[2]=data[5*i+4];
			SetUpBumps(n,c,l,s,t);
			glMultiTexCoord2fARB(GL_TEXTURE0_ARB,data[5*i], data[5*i+1]);
			glMultiTexCoord2fARB(GL_TEXTURE1_ARB,data[5*i]+c[0], data[5*i+1]+c[1]);
			glVertex3f(data[5*i+2], data[5*i+3], data[5*i+4]);
		}
		// Top Face
		n[0]=0.0f;
		n[1]=1.0f;
		n[2]=0.0f;
		s[0]=1.0f;
		s[1]=0.0f;
		s[2]=0.0f;
		t[0]=0.0f;
		t[1]=0.0f;
		t[2]=-1.0f;
		for (i=8; i<12; i++) {
			c[0]=data[5*i+2];
			c[1]=data[5*i+3];
			c[2]=data[5*i+4];
			SetUpBumps(n,c,l,s,t);
			glMultiTexCoord2fARB(GL_TEXTURE0_ARB,data[5*i], data[5*i+1]);
			glMultiTexCoord2fARB(GL_TEXTURE1_ARB,data[5*i]+c[0], data[5*i+1]+c[1]);
			glVertex3f(data[5*i+2], data[5*i+3], data[5*i+4]);
		}
		// Bottom Face
		n[0]=0.0f;
		n[1]=-1.0f;
		n[2]=0.0f;
		s[0]=-1.0f;
		s[1]=0.0f;
		s[2]=0.0f;
		t[0]=0.0f;
		t[1]=0.0f;
		t[2]=-1.0f;
		for (i=12; i<16; i++) {
			c[0]=data[5*i+2];
			c[1]=data[5*i+3];
			c[2]=data[5*i+4];
			SetUpBumps(n,c,l,s,t);
			glMultiTexCoord2fARB(GL_TEXTURE0_ARB,data[5*i], data[5*i+1]);
			glMultiTexCoord2fARB(GL_TEXTURE1_ARB,data[5*i]+c[0], data[5*i+1]+c[1]);
			glVertex3f(data[5*i+2], data[5*i+3], data[5*i+4]);
		}
		// Right Face
		n[0]=1.0f;
		n[1]=0.0f;
		n[2]=0.0f;
		s[0]=0.0f;
		s[1]=0.0f;
		s[2]=-1.0f;
		t[0]=0.0f;
		t[1]=1.0f;
		t[2]=0.0f;
		for (i=16; i<20; i++) {
			c[0]=data[5*i+2];
			c[1]=data[5*i+3];
			c[2]=data[5*i+4];
			SetUpBumps(n,c,l,s,t);
			glMultiTexCoord2fARB(GL_TEXTURE0_ARB,data[5*i], data[5*i+1]);
			glMultiTexCoord2fARB(GL_TEXTURE1_ARB,data[5*i]+c[0], data[5*i+1]+c[1]);
			glVertex3f(data[5*i+2], data[5*i+3], data[5*i+4]);
		}
		// Left Face
		n[0]=-1.0f;
		n[1]=0.0f;
		n[2]=0.0f;
		s[0]=0.0f;
		s[1]=0.0f;
		s[2]=1.0f;
		t[0]=0.0f;
		t[1]=1.0f;
		t[2]=0.0f;
		for (i=20; i<24; i++) {
			c[0]=data[5*i+2];
			c[1]=data[5*i+3];
			c[2]=data[5*i+4];
			SetUpBumps(n,c,l,s,t);
			glMultiTexCoord2fARB(GL_TEXTURE0_ARB,data[5*i], data[5*i+1]);
			glMultiTexCoord2fARB(GL_TEXTURE1_ARB,data[5*i]+c[0], data[5*i+1]+c[1]);
			glVertex3f(data[5*i+2], data[5*i+3], data[5*i+4]);
		}
	glEnd();

Second Pass

  • Use the base-texture
  • Enable Lighting
  • No offset texture-coordinates => reset the GL_TEXTURE matrix
  • Reset the texture environment to GL_MODULATE in order to use OpenGL-Lighting (it doesn't work otherwise!)

This will render our complete bump-mapped cube.

	glActiveTextureARB(GL_TEXTURE1_ARB);
	glDisable(GL_TEXTURE_2D);
	glActiveTextureARB(GL_TEXTURE0_ARB);
	if (!emboss) {
		glTexEnvf (GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
		glBindTexture(GL_TEXTURE_2D,texture[filter]);
		glBlendFunc(GL_DST_COLOR,GL_SRC_COLOR);
		glEnable(GL_BLEND);
		glEnable(GL_LIGHTING);
		doCube();
	}

Last Pass

  • Update Geometry (esp. rotations)
  • Do The Logos
	xrot+=xspeed;
	yrot+=yspeed;
	if (xrot>360.0f) xrot-=360.0f;
	if (xrot<0.0f) xrot+=360.0f;
	if (yrot>360.0f) yrot-=360.0f;
	if (yrot<0.0f) yrot+=360.0f;

	/* LAST PASS: Do The Logos! */
	doLogo();
	return true;								// Keep Going
}

Finally, a function to render the cube without bump mapping, so that you can see what difference this makes!

bool doMeshNoBumps(void) {
	glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);			// Clear The Screen And The Depth Buffer
	glLoadIdentity();							// Reset The View
	glTranslatef(0.0f,0.0f,z);

	glRotatef(xrot,1.0f,0.0f,0.0f);
	glRotatef(yrot,0.0f,1.0f,0.0f);

	if (useMultitexture) {
		glActiveTextureARB(GL_TEXTURE1_ARB);
		glDisable(GL_TEXTURE_2D);
		glActiveTextureARB(GL_TEXTURE0_ARB);
	}

	glDisable(GL_BLEND);
	glBindTexture(GL_TEXTURE_2D,texture[filter]);
	glBlendFunc(GL_DST_COLOR,GL_SRC_COLOR);
	glEnable(GL_LIGHTING);
	doCube();

	xrot+=xspeed;
	yrot+=yspeed;
	if (xrot>360.0f) xrot-=360.0f;
	if (xrot<0.0f) xrot+=360.0f;
	if (yrot>360.0f) yrot-=360.0f;
	if (yrot<0.0f) yrot+=360.0f;

	/* LAST PASS: Do The Logos! */
	doLogo();
	return true;								// Keep Going
}

All the drawGLScene() function has to do is to determine which doMesh-function to call:

bool DrawGLScene(GLvoid)							// Here's Where We Do All The Drawing
{
	if (bumps) {
		if (useMultitexture && maxTexelUnits>1)
			return doMesh2TexelUnits();
		else return doMesh1TexelUnits();
	}
	else return doMeshNoBumps();
}

Kills the GLWindow, not modified (thus omitted):

GLvoid KillGLWindow(GLvoid)							// Properly Kill The Window
>…<

Creates the GLWindow, not modified (thus omitted):

BOOL CreateGLWindow(char* title, int width, int height, int bits, bool fullscreenflag)
>…<

Windows main-loop, not modified (thus omitted):

LRESULT CALLBACK WndProc(	HWND hWnd,					// Handle For This Window
				UINT uMsg,					// Message For This Window
				WPARAM wParam,					// Additional Message Information
				LPARAM lParam)					// Additional Message Information
>…<

Windows main-function, added some keys:

  • E: Toggle Emboss / Bumpmapped Mode
  • M: Toggle Multitexturing
  • B: Toggle Bumpmapping. This Is Mutually Exclusive With Emboss Mode
  • F: Toggle Filters. You’ll See Directly That GL_NEAREST Isn’t For Bumpmapping
  • CURSOR-KEYS: Rotate The Cube
int WINAPI WinMain(	HINSTANCE hInstance,					// Instance
			HINSTANCE hPrevInstance,				// Previous Instance
			LPSTR lpCmdLine,					// Command Line Parameters
			int nCmdShow)						// Window Show State
{

	>…<

				if (keys['E'])
				{
					keys['E']=false;
					emboss=!emboss;
				}

				if (keys['M'])
				{
					keys['M']=false;
					useMultitexture=((!useMultitexture) && multitextureSupported);
				}

				if (keys['B'])
				{
					keys['B']=false;
					bumps=!bumps;
				}

				if (keys['F'])
				{
					keys['F']=false;
					filter++;
					filter%=3;
				}

				if (keys[VK_PRIOR])
				{
					z-=0.02f;
				}

				if (keys[VK_NEXT])
				{
					z+=0.02f;
				}

				if (keys[VK_UP])
				{
					xspeed-=0.01f;
				}

				if (keys[VK_DOWN])
				{
					xspeed+=0.01f;
				}

				if (keys[VK_RIGHT])
				{
					yspeed+=0.01f;
				}

				if (keys[VK_LEFT])
				{
					yspeed-=0.01f;
				}
			}
		}
	}
	// Shutdown
	KillGLWindow();								// Kill The Window
	return (msg.wParam);							// Exit The Program
}

Now that you have worked through this tutorial, here are some words about generating textures and bump-mapped objects, before you start to program mighty games and wonder why bump mapping isn't that fast or doesn't look that good:

  • You shouldn’t use textures of 256x256 as done in this lesson. This slows things down a lot. Only do so if demonstrating visual capabilities (like in tutorials).
  • A bump-mapped cube is not a typical case, and a rotated one even less so. The reason is the viewing angle: the steeper it gets, the more visual distortion you get due to filtering. Nearly all multipass algorithms are very sensitive to this. To avoid the need for high-resolution textures, restrict the minimum viewing angle to a sensible value, or restrict the range of viewing angles and pre-filter your texture to fit that range perfectly.
  • You should create the colored texture first. The bump map can then often be derived from it by converting it to grey-scale in an average paint program.
  • The bumpmap should be "sharper" and higher in contrast than the color-texture. This is usually done by applying a "sharpening filter" to the texture and might look strange at first, but believe me: you can sharpen it A LOT in order to get first class visual appearance.
  • The bump map should be centered around 50% grey (RGB=127,127,127), since this means "no bump at all"; brighter values represent bumps and darker values scratches. This can be achieved using the "histogram" functions of some paint programs (a small code sketch of the idea follows after this list).
  • The bumpmap can be one fourth in size of the color-texture without "killing" visual appearance, though you’ll definitely see the difference.
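
To make the last few points a bit more concrete, here is a small offline-style sketch of how a bump map could be derived from a colour texture: convert it to grey-scale with a standard luminance weighting, then re-center it around the "no bump" value of 127. The helper is my own and purely illustrative; in practice you would do this (plus the sharpening) in a paint program, as described above.

// Illustration Only: Derive A Grey-Scale Bumpmap From An RGB Texture And Center It Around 127
// rgb Holds width*height*3 Bytes, bump Receives width*height Bytes ("Level Ground" = 127)
void makeBumpmap(unsigned char *rgb, unsigned char *bump, int width, int height) {
	int i;
	GLfloat avg=0.0f;
	for (i=0; i<width*height; i++) {					// First Pass: Luminance And Its Average
		GLfloat lum=0.30f*rgb[3*i]+0.59f*rgb[3*i+1]+0.11f*rgb[3*i+2];
		bump[i]=(unsigned char) lum;
		avg+=lum;
	}
	avg/=(GLfloat)(width*height);
	for (i=0; i<width*height; i++) {					// Second Pass: Shift So The Average Becomes 127
		GLfloat v=(GLfloat)bump[i]+(127.0f-avg);
		if (v<0.0f) v=0.0f;						// Clamp To The Valid Byte Range
		if (v>255.0f) v=255.0f;
		bump[i]=(unsigned char) v;
	}
}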

Now you should at least have a basic understanding of the issues covered in this tutorial. I hope you have enjoyed reading it.

If you have questions and / or suggestions regarding this lesson, you can mail me or stop by my website at http://www.glhint.de.

Thanks must go to:

  • Michael I. Gold for his Bump Mapping Documentation
  • Diego Tártara for his example code
  • NVidia for putting great examples on the WWW
  • And last but not least to NeHe who helped me learn a lot about OpenGL.

Jens Schneider

Jeff Molofee (NeHe)

* DOWNLOAD Visual C++ Code For This Lesson.

* DOWNLOAD Borland C++ Builder 6 Code For This Lesson. ( Conversion by Christian Kindahl )
* DOWNLOAD Code Warrior 5.3 Code For This Lesson. ( Conversion by Scott Lupton )
* DOWNLOAD Delphi Code For This Lesson. ( Conversion by Michal Tucek )
* DOWNLOAD Dev C++ Code For This Lesson. ( Conversion by Dan )
* DOWNLOAD GLut Code For This Lesson. ( Conversion by Bruce Barrera )
* DOWNLOAD Java Code For This Lesson. ( Conversion by Jeff Kirby )
* DOWNLOAD JoGL Code For This Lesson. ( Conversion by Abdul Bezrati )
* DOWNLOAD Linux Code For This Lesson. ( Conversion by Luca Rizzuti )
* DOWNLOAD Linux/SDL Code For This Lesson. ( Conversion by Ti Leggett )
* DOWNLOAD LWJGL Code For This Lesson. ( Conversion by Mark Bernard )
* DOWNLOAD Mac OS Code For This Lesson. ( Conversion by Morgan Aldridge )
* DOWNLOAD Mac OS X/Cocoa Code For This Lesson. ( Conversion by Bryan Blackburn )
* DOWNLOAD Visual C++ / OpenIL Code For This Lesson. ( Conversion by Denton Woods )
* DOWNLOAD Visual Studio .NET Code For This Lesson. ( Conversion by Grant James )

 
