Articles
EXPLANATION OF THE TRANSLATION METHOD
Once your OpenGL window is made, and you compile and run it, you're staring at a black screen. OpenGL is a 3D rendering engine, which I'm sure you know if you're trying to learn it. So where exactly are you when you're staring into that blackness (or whiteness, depending on what color you passed to glClearColor)? Well, imagine a grid. The grid is 20 units long and 20 units wide. In reality OpenGL puts no limit on the space you have, but for this idea, imagine it being 20x20. Your camera is sitting in the very center of the grid, 10 units up and 10 units in. This is the origin, just like in math, and it divides the grid into four quadrants.
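Moving around this grid is done by translation: glTranslatef(tx, ty, tz) effectively adds an offset to everything you draw afterwards. A minimal CPU-side sketch of the same idea (Vec3 and translate are illustrative helpers, not part of OpenGL):

```cpp
#include <cassert>

struct Vec3 { float x, y, z; };

// Applies the same offset to a single vertex that
// glTranslatef(tx, ty, tz) would apply to everything drawn after it.
Vec3 translate(Vec3 v, float tx, float ty, float tz)
{
    return Vec3{ v.x + tx, v.y + ty, v.z + tz };
}
```

So drawing at the origin after a translate of (0, 0, -5) places the point 5 units into the screen, into the grid described above.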
IMPROVED TIMING
I have often downloaded demos that ran either too fast or too slow on my computer. A nice demo that runs so fast it looks more like a flickering set of images, or where the mouse or other controls make the screen spin out of control, is just as annoying as viewing something at 3 fps. I have therefore written this small article about timing.
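The core idea can be sketched in a few lines: instead of moving a fixed amount per frame, scale each movement by the time elapsed since the last frame, so the demo covers the same distance per second at any frame rate (the names here are illustrative, not from any particular demo):

```cpp
#include <cassert>

// Frame-rate independent movement: "speed" is in units per second,
// "dt_seconds" is the time elapsed since the last frame.
float advance(float position, float speed, float dt_seconds)
{
    return position + speed * dt_seconds;
}
```

A demo running at 60 fps and one running at 10 fps now both move an object 90 units over one simulated second, instead of the fast machine moving it six times as far.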
GAME INPUT CLASS
1. About The Class

2. How To Use This Class

An example:

    switch (uMsg)                        // type of uMsg is UINT
    {
        case WM_KEYDOWN:
        {
            InputClass.GetInput(wParam); // type of InputClass is CGLInput
                                         // type of wParam is WPARAM
        }
    }

An input ends when the user presses the ENTER key (the VK_RETURN constant). The class essentially works in the background (because it is based on Windows events), so you can draw or do other things while the user types.

The class works with "processes": there is a process for each input type - integer, string or char. In order to get input, you begin a process using the BeginProcess function, which takes the kind of process you want to begin. From then on, any keys that are pressed are handled by the input class until the user presses ENTER. You can use the IsOver function to determine whether the process is over or not. If it is, use the GetInt, GetStr or GetChar function to fetch the input; once you call one of these functions, the process is automatically cancelled. In your rendering loop, call the DrawInput function so the user can see the input.

Many people use the ENTER key for various things in their programs. If you do as well, make sure you call the FixEnterCollide function every frame in the main loop of your program; it fixes the "collisions" between the input class and the use of ENTER in the program itself.

3. The Function Table For This Class
4. Process Constants Table
There is no need to know the values of these constants.

5. Example

    #include "CGLInput.h"

    CGLInput IC1;                   // Define A Global Input Class Variable

    LRESULT CALLBACK WndProc( HWND hWnd, UINT uMsg, WPARAM wParam, LPARAM lParam )
    {
        switch (uMsg)
        {
            case WM_KEYDOWN:
            {
                IC1.GetInput(wParam);
            }
        }
    }

    IC1.BeginProcess(PR_GETINT);    // Begin A Get Int Process

    void DrawEverything()           // Constantly Draws The Scene
    {
        int input;
        IC1.DrawInput();            // Draw The Input. If Process Is
                                    // PR_NO_PROCESS, Nothing Is Drawn.
        if (IC1.IsOver())           // Is The Process Over?
        {
            IC1.GetInt(input);      // If So, Put The Input In A Variable
        }
    }
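To make the "process" idea concrete, here is a hypothetical sketch of how such a class can work internally. This is not the actual CGLInput source; InputProcess and its members are made up for illustration. Keystrokes are accumulated in a buffer until ENTER (VK_RETURN, 0x0D) ends the process:

```cpp
#include <cassert>
#include <cstdlib>
#include <string>

// Illustrative sketch of a process-based input accumulator.
class InputProcess
{
    std::string buffer;
    bool over = false;
public:
    void GetInput(unsigned int key)       // called from WM_KEYDOWN with wParam
    {
        if (over) return;
        if (key == 0x0D) over = true;     // VK_RETURN ends the process
        else buffer += static_cast<char>(key);
    }
    bool IsOver() const { return over; }
    int GetInt() const { return std::atoi(buffer.c_str()); }
};
```

Because the accumulation happens inside the window procedure, the rendering loop is free to keep drawing; it only polls IsOver() each frame.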
1. About The Class

2. How To Use The Class

3. The Functions Of This Class

SetFont:

    void SetFont( HDC hDC,
                  char* fName = NULL,
                  GLfloat fDepth = 0.5f,
                  int fWeight = FW_BOLD,
                  DWORD fItalic = FALSE,
                  DWORD fUnderline = FALSE,
                  DWORD fStrikeOut = FALSE,
                  DWORD fCharSet = ANSI_CHARSET )

Parameters:

Print:

    bool Print(const char *fmt, ...)

Parameters:

Return Values:

4. Class Constants

5. Font Weights
6. Example

    #include "CGLOutput.h"

    char myname[] = "World";            // Example string (any name will do)
    CGLOutput OC;                       // Output Class Variable

    OC.SetFont(hDC, "Comic Sans MS");   // Set The Font
    OC.Print("Hello %s", myname);       // Print "Hello" Plus A String
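A guess at how a printf-style Print can work internally: the variable arguments are formatted into a text buffer with vsnprintf before the text is handed to the GL font lists. The sketch below returns the formatted string instead of drawing it, so the formatting step can be shown without a GL context (FormatText is an illustrative name, not part of CGLOutput):

```cpp
#include <cassert>
#include <cstdarg>
#include <cstdio>
#include <string>

// Formats printf-style arguments into a string, as a Print-style
// function typically does before drawing the characters.
std::string FormatText(const char* fmt, ...)
{
    char text[256];
    va_list ap;
    va_start(ap, fmt);
    vsnprintf(text, sizeof(text), fmt, ap);
    va_end(ap);
    return text;
}
```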
BILLBOARDING HOW TO
1. Introduction
2. What is Billboarding
3. Terms
4. Point Sprites
4.1 Collective Billboarding
4.2 Individual Billboarding
5. Axis Aligned Billboards
6. Arbitrary Axis Billboards
7. Using Those Billboard Vectors
8. Rendering a Billboard

I looked around a lot for a good billboarding tutorial, but the most I could ever find was a short document that missed a lot, or some code with bad commenting. My intention with this document and code is to resolve those issues. I hope this document explains things well enough that you can get billboarding to work in your own code. There is also accompanying source to demonstrate it; the source requires that you have FreeType and DirectX 5 to build, although I did include a Windows® executable for those who can't build it. You may notice that a lot more than just billboarding code comes with it. This is a collection of source I have written over time, and most of the projects I make are based on it. I put the billboarding code in a separate file, so hopefully you can just paste the source into your project if you so desire. Questions and comments can be emailed to: opengltut@hotmail.com.
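As a taste of what the sections above work toward, the core trick behind a viewer-facing billboard can be sketched in a few lines: the camera's right and up vectors are sitting in the modelview matrix, and a quad spanned by those two vectors always faces the viewer. The sketch below assumes the 16 floats are laid out column-major, as glGetFloatv(GL_MODELVIEW_MATRIX, m) returns them:

```cpp
#include <cassert>

// Extracts the camera-aligned right and up vectors from a column-major
// 4x4 modelview matrix: they are the first two rows of its rotation part.
void billboardAxes(const float m[16], float right[3], float up[3])
{
    right[0] = m[0]; right[1] = m[4]; right[2] = m[8];
    up[0]    = m[1]; up[1]    = m[5]; up[2]    = m[9];
}
```

With an identity modelview (camera looking down -z), this simply yields right = (1,0,0) and up = (0,1,0), i.e. a screen-aligned quad.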
BUMP MAPPING
What Is Bump Mapping?

Bump Mapping is a special sort of per-pixel lighting (remember: OpenGL lighting is per vertex; a color is computed for each vertex and then interpolated across the triangle, quad, etc.). The common lighting models (for example the OpenGL lighting model) use the normal to calculate a lighting color. Normally (with per-vertex lighting) this normal is provided just like the vertex coordinate or texture coordinate. The idea behind Bump Mapping is to use a different normal for each pixel (rather than for each vertex).

But Why Do That?

Imagine you have a flat surface (like a triangle or a quad). The normal for this surface is equal at every point on the surface. Using different normals across the surface makes it look bumpy rather than flat. But remember that we are only able to draw flat primitives like triangles or quads. So Bump Mapping is just a fake technique for rendering bumpy surfaces.

But Why Use Bump Mapping?

Bump Mapping is supported in hardware on the GeForce 256 (and up) and the Radeon 7200 (and up). It can be performed in texture environment stages (see the implementation below) and is a very inexpensive feature for rendering a more beautiful world.

Bump Mapping - How Does It Work?

There are several different bump mapping techniques:
First let's take a look at the OpenGL lighting equation: <img src="extras/article20/figure1.jpg"> Quite confusing, eh? :-) But we are only interested in the middle part, the diffuse part (if you don't know how OpenGL lighting works, take a look at http://www.cs.tcd.ie/courses/baict/bass/4ict10/Hillary2003/pdf/Lecture2_9Jan.pdf). <img src="extras/article20/figure2.jpg"> n is the vertex normal. Remember that the vector from the vertex coordinate to the light, as well as the vertex normal, has to be normalized. We have heard that DP3 Bump Mapping is performed in hardware using texture environments. But how does this work with our lighting equation above? Modern 3D cards support a new texture environment extension: ARB_texture_env_dot3. This extension requires ARB_texture_env_combine and adds a new texture combiner operation: DOT3_RGB_ARB (and DOT3_RGBA_ARB, but this is less important). First Of All: What Are Texture Environments? A texture environment describes how a sampled texel of a certain texture unit is combined with the other values. There is a texture environment for each texture unit available. When a primitive is rendered (and texturing is enabled), the texture environment of the first texture unit is computed and the result is sent to the next texture environment. Normally all texture environments are set to GL_MODULATE, which means that the value of the previous texture unit (at the first texture unit, the primary color set by glColor or the color calculated by lighting is used) is multiplied by the texel of the current texture unit. The result of the last active texture unit is the color of our pixel.
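What the combiner chain computes per pixel can be reproduced on the CPU in a few lines: normalize the surface-to-light vector (the job of the normalization cube map described below), dot it with the normal (the DOT3_RGB_ARB operation), and modulate the result with the material texel (GL_MODULATE). The helper names here are illustrative, not OpenGL calls:

```cpp
#include <cassert>
#include <cmath>
#include <algorithm>

struct RGB { float r, g, b; };

// Diffuse intensity: normalize the vector to the light, dot it with the
// unit-length normal, clamp negative values (light behind the surface).
float dp3Light(float lx, float ly, float lz,
               float nx, float ny, float nz)
{
    float len = std::sqrt(lx*lx + ly*ly + lz*lz);   // normalization step
    float d = (lx*nx + ly*ny + lz*nz) / len;        // DOT3 step
    return std::max(0.0f, d);
}

// Full per-pixel result: diffuse intensity times the material texel.
RGB bumpPixel(float lx, float ly, float lz,
              float nx, float ny, float nz, RGB texel)
{
    float d = dp3Light(lx, ly, lz, nx, ny, nz);
    return RGB{ d * texel.r, d * texel.g, d * texel.b }; // GL_MODULATE step
}
```

A light straight above a surface whose normal points up gives full intensity, so the texel passes through unchanged; a light behind the surface gives black.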
Here is an example: suppose the color arriving at the first texture environment is (0.5, 1.0, 1.0) and the sampled texel is (1.0, 1.0, 0.0). With GL_MODULATE, the output of the first texture environment is (0.5, 1.0, 1.0) * (1.0, 1.0, 0.0) = (0.5, 1.0, 0.0). You can also perform much more complex environment operations using ARB_texture_env_combine (see http://oss.sgi.com/projects/ogl-sample/registry/ARB/texture_env_combine.txt). Before we return to Bump Mapping, there's another thing which is important: Normalization Cube Maps. A cube map is a special form of texture. More precisely, there are six 2D textures in one cube map; they represent the six faces of a cube (that's why they are called cube maps :-). Given a 3D vector, a cube map returns the texture value at the point where this vector cuts the unit cube. Normally cube maps are used for view-independent reflections. But a big advantage of cube maps is that this vector doesn't have to be normalized! So by creating a special cube map which contains normalized vectors, you can easily normalize a vector by passing it as a texture coordinate. See reference [4] for how to calculate a normalization cube map. So, now back to our Bump Mapping problem: How Can We Use This Knowledge With OpenGL To Perform DP3 Bump Mapping? First, a simple list of what we need:
In this simple example we want to implement a simple version of the diffuse lighting equation (where . is the dot product and * is a multiplication):

result_color = (normalized_vector_from_surface_to_light . normal_of_the_surface) * material_texture

Assuming we have 4 texture units (GeForce 3 and up, Radeon 8500 and up) we can do the following:

    // Set The First Texture Unit To Normalize Our Vector From The Surface To The Light.
    // Set The Texture Environment Of The First Texture Unit To Replace It With The
    // Sampled Value Of The Normalization Cube Map.
    glActiveTextureARB(GL_TEXTURE0);
    glEnable(GL_TEXTURE_CUBE_MAP);
    glBindTexture(GL_TEXTURE_CUBE_MAP, our_normalization_cube_map);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
    glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_REPLACE);
    glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB, GL_TEXTURE);

    // Set The Second Texture Unit To The Bump Map.
    // Set The Texture Environment Of The Second Texture Unit To Perform A Dot3
    // Operation With The Value Of The Previous Texture Unit (The Normalized
    // Vector From The Surface To The Light) And The Sampled Texture Value (The
    // Normalized Normal Vector Of Our Bump Map).
    glActiveTextureARB(GL_TEXTURE1);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, our_bump_map);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
    glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_DOT3_RGB);
    glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB, GL_PREVIOUS);
    glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB, GL_TEXTURE);

    // Set The Third Texture Unit To Our Texture.
    // Set The Texture Environment Of The Third Texture Unit To Modulate
    // (Multiply) The Result Of Our Dot3 Operation With The Texture Value.
    glActiveTextureARB(GL_TEXTURE2);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, our_texture);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);

    // Now We Draw Our Object (Remember That We First Have To Calculate The
    // (UnNormalized) Vector From Each Vertex To Our Light).
    float vertex_to_light_x, vertex_to_light_y, vertex_to_light_z;

    glBegin(GL_QUADS);
    for (unsigned int i = 0; i < 4; i++)
    {
        vertex_to_light_x = light_x - current_vertex_x;
        vertex_to_light_y = light_y - current_vertex_y;
        vertex_to_light_z = light_z - current_vertex_z;

        // Passing The vector_to_light Values To Texture Unit 0.
        // Remember The First Texture Unit Is Our Normalization Cube Map
        // So This Vector Will Be Normalized For Dot3 Bump Mapping.
        glMultiTexCoord3f(GL_TEXTURE0, vertex_to_light_x, vertex_to_light_y, vertex_to_light_z);

        // Passing The Simple Texture Coordinates To Texture Units 1 And 2.
        glMultiTexCoord2f(GL_TEXTURE1, current_texcoord_s, current_texcoord_t);
        glMultiTexCoord2f(GL_TEXTURE2, current_texcoord_s, current_texcoord_t);

        glVertex3f(current_vertex_x, current_vertex_y, current_vertex_z);
    }
    glEnd();

So, that's all for creating a simple bump mapped surface. But that's not the end! We can run into big problems with this example. Here we assumed that the quad we draw is parallel to the x/y plane. Remember that the normals we stored in a texture are in a static coordinate space. In the example above (if the quad we draw is parallel to the x/y plane) the coordinate space of our object is equal to the coordinate space of the normals. Now imagine that we rotate the quad around the x axis: the z axis of the normals also has to be rotated, and we did not pay attention to this. But there's a very simple solution to this problem:

Tangent Space Bump Mapping

In Tangent Space Bump Mapping we define a new coordinate system, the tangent space. This tangent space is different from vertex to vertex. We use 3 vectors to represent this tangent space: the normal (the z axis of our tangent space), the tangent (the x axis of our tangent space) and the binormal (the y axis of our tangent space). There's a nice picture at reference [4]. You can easily calculate these vectors from the geometry data (vertex and texture coordinates); see reference [5] for details.
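The transform these three vectors define can be sketched on the CPU as three dot products: with the tangent, binormal and normal as the rows of a 3x3 matrix, multiplying a vector by that matrix moves it from object space into tangent space (toTangentSpace is an illustrative helper, not from the article's source):

```cpp
#include <cassert>

// Transforms an object-space vector v into tangent space, where the
// matrix rows are the tangent t, binormal b and normal n of the vertex.
void toTangentSpace(const float t[3], const float b[3], const float n[3],
                    const float v[3], float out[3])
{
    out[0] = t[0]*v[0] + t[1]*v[1] + t[2]*v[2];
    out[1] = b[0]*v[0] + b[1]*v[1] + b[2]*v[2];
    out[2] = n[0]*v[0] + n[1]*v[1] + n[2]*v[2];
}
```

The vertex-to-light vector would be pushed through this before being passed to texture unit 0, so the normals in the bump map stay valid however the quad is rotated.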
With these 3 vectors we can build a matrix which transforms a vector from object space into our tangent space. In the example above the tangent is (1,0,0), the binormal is (0,1,0) and the normal is (0,0,1), so our matrix is:

    ( 1, 0, 0 )
    ( 0, 1, 0 )
    ( 0, 0, 1 )

This is the identity matrix, so in that case there is no need to use tangent space bump mapping. And that's it! Of course you can use vertex programs to perform the transformation into tangent space and/or to calculate the vector from each vertex to the light.

Thanks for reading!

"If you want to contact me please make sure that it doesn't look like spam! I get a lot of spam mail/ICQ requests... Everything suspicious will be deleted"

Demo: Bump Mapping Demo For This Article

References:
[1] - Real-Time Rendering by Eric Haines and Tomas Akenine-Möller (ISBN 1-56881-182-9)