OpenGL in Demos

Otis

Introduction.

Let me introduce myself first. I'm Frans Bouma, better known in the scene as 'Otis'. I'm an old scenefart, forgive me. I started in 1989 on the Amiga, making music and programming routines; I switched to the PC in 1995 and am still there. Nowadays I program OpenGL demos and Java routines and create graphics and music for Infuse Project. Don't ask how old I am, most computers can't handle numbers that large.

The article I cooked up for you all is about a serious topic: using 3D hardware in demos. That's not new of course, but using the 3D hardware through a widely accepted 3D API is. We have D3D on Windows, Glide on Windows and Macintosh, and for a year or two now OpenGL has also been a major 3D API in the desktop community.

Loosen your prejudices.

Before you grab your eggs and tomatoes to throw at me because you think I'm a Linux advocate, a Microsoftie, an anti-DOS geek or a Macintosh moron: I'm not. I don't have to be. OpenGL is portable. It's available on all major platforms, so compatibility is not an issue. If you choose OpenGL as your render API, you won't tie yourself to one particular platform, except for the platform-specific code that is needed to run a program, of course.

Also forget your prejudice against 3D hardware in demos. Within a few years, perhaps even within a year, most computers used by sceners will have 3D hardware on board. So why waste all that pretty silicon when you can use it for free? Ok, if you disagree on that, this article won't be a thrill or fun to read, nor will I probably end up in your list of 'ICQ pals', but some day you'll understand that progress is needed to achieve goals.

This article is not about which is better, D3D or OpenGL, nor does it even tell you the differences between them. The API of choice (perhaps Glide is still an option for you) is a choice YOU have to make. I hope this article will help you decide which API you want to use.

What IS OpenGL then?

First of all, it would take a large piece of text to explain every detail inside OpenGL. I'll try to give you a brief overview of what's inside the API and what's not.

OpenGL is a 3D rasterization API. This means that OpenGL is meant to render pure 3D primitives, like lines, polygons and points in 3D space, but no 2D stuff like you can do in DirectDraw. It's not comparable with DirectDraw, just as D3D is not comparable with DirectDraw.

OpenGL is independent of the display API that is used to display windows or canvases on screen, like GDI on Windows and Xlib on Unix/X Window. This means that the render code is system independent. And that means (more on this later on) that you can write a demo once and release it on Windows AND Linux, for example.

OpenGL works with RenderContexts, which can be seen as virtual framebuffers. The word virtual is key here. In OpenGL you can't touch the framebuffer with your own routines. Everything is done through the API; you can't get a pointer to any part of the 3D hardware, like texture memory, the framebuffer etc. When you want to draw something using OpenGL, you use your system's specific routines to get a window with a canvas (in Windows this is called a 'DeviceContext'), and that canvas is used to create a rendercontext. When you have made a rendercontext you make it the current one, i.e. you activate it, and OpenGL will from then on draw everything on that rendercontext, no questions asked.

The rendering is done by several parts of OpenGL; together these parts form the complete rendering pipeline from primitive to pixel. These parts are state machines, which means that what they do depends on several states that are set internally. Their behaviour changes when you change certain state variables.

OpenGL is an immediate mode API, which means that when you execute an OpenGL command it takes effect immediately, except in special cases when you use a display list, but more on that later on.
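As a taste of how display lists break the immediate mode rule, here is a minimal sketch (a triangle as example content; the variable name is mine): the commands are compiled once and replayed later with a single call.

   GLuint list = glGenLists(1);    // reserve one display list id
   glNewList(list, GL_COMPILE);    // record commands instead of executing
   glBegin(GL_TRIANGLES);
   glVertex3f( 0.0f,  1.0f, 0.0f);
   glVertex3f(-1.0f, -1.0f, 0.0f);
   glVertex3f( 1.0f, -1.0f, 0.0f);
   glEnd();
   glEndList();                    // stop recording

   // later on, in the renderloop:
   glCallList(list);               // replay the recorded commands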

Like any good 3D API, OpenGL has support for all well-known parts of a renderpipeline. It can completely transform, light, clip and project your scene and objects, and also do certain operations per pixel using buffers like a depth buffer (Z-buffer) or stencil buffer. Furthermore, it has support for texturemapping, but that was an obvious one.

You don't have to use all parts of OpenGL to get the result you want. If you want to do your transformation and lighting yourself, that's up to you. Looking at the specs of upcoming 3D cards, however, it would be a good idea to let OpenGL do these things for you. Performance-wise it's also usually a good thing to let the API do it, because most OpenGL drivers contain heavily optimized code for Transformation and Lighting (T&L). For example, the nVidia OpenGL ICD for the TNT and TNT2 contains code that uses optimization tricks on several matrix operations to speed them up, depending on the current state of the element and whether you want to rotate or translate. Coding all this yourself can be fun of course, but most of the time you will end up with slower code than the highly paid engineers at nVidia can cook up.

Basic building blocks.

Everything is vertex based. All operations work on vertices, be it texture clamping, lighting, transformation etc. Operations on vertices work with matrices, so get your algebra books out of the closet! A 3D scene is formed by meshes, and these meshes are made of vertices. A vertex on its own is just a point in space, but together with other vertices it forms primitives, like a line, a polygon etc.

OpenGL can only render convex polygons; concave polygons give an undefined result, so avoid these. Furthermore, OpenGL can render quads, triangles, lines, and points. Polygons (i.e. convex polygons, quads and triangles) can be filled and texturemapped using textures you have specified. These textures are defined inside texture objects which live inside OpenGL. When you want to draw a texturemapped square, for example, you first have to load a bitmap into memory, create a texture object inside OpenGL, upload the texture to OpenGL texture heaven and specify parameters for the texture object, like whether it should be repeated across a polygon larger than the texture or stretched over it. Here you also specify the mipmapping support and the filtering method OpenGL should use when using this texture. After that you enable texturemapping (by setting a state variable) and simply bind the texture, and all primitives drawn from then on are texturemapped with that texture. You can specify a lot more for a texture or primitive, like material aspects and so on, but for now this is enough.
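In code, that sequence could look like this minimal sketch (assuming 'pixels' already holds a loaded 128x128 24-bit bitmap; the variable names are my own):

   GLuint texId;
   glGenTextures(1, &texId);              // create a texture object
   glBindTexture(GL_TEXTURE_2D, texId);   // make it the current 2D texture
   // repeat the texture across polygons larger than the texture
   glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
   glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
   // filtering method; no mipmaps in this simple case
   glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
   glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
   // upload the bitmap to OpenGL texture heaven
   glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 128, 128, 0, GL_RGB,
                GL_UNSIGNED_BYTE, pixels);
   glEnable(GL_TEXTURE_2D);               // enable texturemapping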

OpenGL uses a global 3D axis system: X horizontal, positive to the right; Y vertical, positive upwards; and Z positive OUT OF the screen. In window coordinates, (0,0) is the bottom left corner of your window.

'Hey, where is the camera?!' Well, the camera is something virtual. It's not there, yet it seems like it is. When you start an OpenGL drawing and you haven't specified anything, the camera is at (0,0,0), facing towards the negative Z. All objects you draw will end up at (0,0,0) too, unless you have translated (0,0,0) first. Translation is nothing more than 3D panning. Because there is no reference point in global 3D space, you can think of it in two ways: do I move the camera away from the 3D world, or is the 3D world moving away from the camera? For the implementation it doesn't matter, but for understanding why you have to specify the transformations in a certain order to make it work, it does. I'm not going to copy the 'OpenGL Programming Guide' (the Red Book) on this subject because it's way too large, but basically it comes down to this: OpenGL uses matrices to calculate the 3D manipulations and effects. If you want to apply rotation R and translation T to object O, and O should FIRST be rotated and THEN be translated, you have to specify the translation first and then the rotation: the calculation is T(R(O)).
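A minimal sketch of that ordering (the values and the DrawObject() routine are my own example): to rotate an object around its own axis FIRST and THEN move it to (5,0,0), you specify the calls in the opposite order.

   glMatrixMode(GL_MODELVIEW);
   glLoadIdentity();
   glTranslatef(5.0f, 0.0f, 0.0f);      // T: applied last to the object
   glRotatef(45.0f, 0.0f, 1.0f, 0.0f);  // R: applied first to the object
   DrawObject();                        // O: ends up as T(R(O))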

So there really is no camera, but why do you see stuff then? Because the primitives that make up the 3D scene go through the rasterization process and end up as pixels stored in the colorbuffer (framebuffer). We have three different matrices in OpenGL: the projection matrix, the modelview matrix and the texture matrix. Forget the last one for now. The projection matrix is where the projection and clipping information is stored, and thus NOT the camera translation information. That information is stored in the modelview matrix. You have to store this information in the matrices yourself, so you have full control over what will happen on screen. Remember the sequence of matrix operations and when they take effect: you first apply the camera translation and rotation to the modelview matrix and THEN the model translation/rotation. It's a combined matrix now, and with every model operation the camera info is automatically used with it.

During this rasterization process, the vertices that make up the primitives go through the following stages:

- First the modelview matrix is applied to the vertex, which gives eye coordinates. Remember that the modelview matrix contains the 'camera' info.

- Next the projection matrix is applied to the result of the previous stage, which gives the clip coordinates.

- After that the perspective division takes place, which gives normalized device coordinates.

- Then the viewport transformation takes place, and this gives the window coordinates.

Besides this matrix stuff there are buffers to help you render the scene: the depthbuffer or Z-buffer for per-pixel depth sorting using a comparison function you specify, a stencilbuffer to do nice tricks by masking out rendered primitives etc., and an accumulation buffer. The last two buffers are often not implemented in hardware. Some hardware has a stencil buffer (most common new cards have one), but hardware accumulation buffers are rare. The new 3dfx Voodoo4 will have a T-buffer, which is essentially an accumulation buffer in hardware. If a hardware feature is not supported it's performed in software, like D3D does, but this is slow and not recommended.
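Using the depthbuffer, for example, comes down to enabling the depth test and picking a comparison function; a minimal sketch:

   glEnable(GL_DEPTH_TEST);   // turn on per-pixel depth testing
   glDepthFunc(GL_LESS);      // a pixel wins if its depth value is smaller
   glClearDepth(1.0);         // value the depthbuffer is cleared to
   // each frame, clear colorbuffer and depthbuffer together:
   glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);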

There is no 2D functionality inside OpenGL except the orthogonal projection. This projection gives a non-perspective view of the world and can be abused to do 2D graphics using 3D primitives; the Quake console is drawn using this projection. However, don't look for a blit function, it's not there. There are some pixel-write functions, but these are slow and not recommended.
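Setting up such a 2D view could look like this minimal sketch (the 640x480 size and pixel positions are my own example values); a texturemapped quad then acts as the 'blit':

   glMatrixMode(GL_PROJECTION);
   glLoadIdentity();
   glOrtho(0.0, 640.0, 0.0, 480.0, -1.0, 1.0);  // map units 1:1 to pixels
   glMatrixMode(GL_MODELVIEW);
   glLoadIdentity();
   glBegin(GL_QUADS);         // a 64x64 'sprite' at pixel position (100,100)
   glTexCoord2f(0.0f, 0.0f); glVertex2f(100.0f, 100.0f);
   glTexCoord2f(1.0f, 0.0f); glVertex2f(164.0f, 100.0f);
   glTexCoord2f(1.0f, 1.0f); glVertex2f(164.0f, 164.0f);
   glTexCoord2f(0.0f, 1.0f); glVertex2f(100.0f, 164.0f);
   glEnd();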

More on matrices.

I've touched on the matrix subject in the previous section because it's very important to understand what OpenGL does to your carefully set-up scene and how to use the power inside it.

OpenGL uses the three matrix modes that I mentioned earlier. The matrix mode is a state you can change, and OpenGL will stay in that matrix mode until you switch to another one. All matrix commands executed after a matrix mode change will work on the matrix belonging to that mode. The three modes are modelview, projection and texture, specified by:

- glMatrixMode(GL_MODELVIEW),
- glMatrixMode(GL_PROJECTION), and
- glMatrixMode(GL_TEXTURE).

Once you've set the matrix mode state variable within OpenGL, you can perform operations on that matrix. Each of the three matrices comes with its own matrix stack. You can push and pop a matrix on its stack to preserve a matrix state. You can also initialise the matrix to the identity matrix using glLoadIdentity(), or load it with your own homemade matrix. Most of the time it's better to let OpenGL do the matrix work, because the code inside the API is usually faster. When will you use all this? Well, if you want to draw more than one primitive or object in your scene, it's often nice to set up the camera in the modelview matrix just once and have that info applied to every object. So if you want to translate and rotate, say, four cubes in your scene, your program will look something like this (pseudocode; routines NOT starting with 'gl' are homemade):

   glMatrixMode(GL_MODELVIEW);
   glLoadIdentity();                     // initialize the matrix
   PositionCamera();                     // translate and rotate the camera
   glPushMatrix();                       // put this matrix on its stack
     PositionAndRenderCube1();           // the matrix will be modified
   glPopMatrix();                        // get the initial matrix back WITH
                                         // camera info!
   glPushMatrix();                       // put this matrix on the stack
     PositionAndRenderCube2();           // do cube 2
   glPopMatrix();

etc.

It's a lot of pushin' and poppin', and perhaps it can be done more efficiently, but it's purely to illustrate how to manage the matrix states during rendering and how to prevent hard-to-find errors, like cube 2 not appearing at its spot in the scene.

OS specific stuff.

Microsoft decided when NT was released that it should support OpenGL out of the box. Windows 95 did not, for example (OpenGL support came with OSR2). NT and later, and also Windows 95 OSR2 and 98, have the rendering API running in kernel mode. Microsoft decided it would be wise to have one OpenGL library, made by Microsoft, and all hardware vendors could connect their hardware-specific OpenGL library (ICD, Installable Client Driver) to this system OpenGL library. The disadvantage was, and still is, that you can have just one OpenGL hardware driver connected to the OpenGL driver used by the system (opengl32.dll). Of course this doesn't seem to be a problem; who has more than one videocard, right? Ah, that's right! Your pal 3Lee7e has a Voodoo2 and a normal videocard. Until recently 3dfx didn't have an OpenGL ICD, just Glide-wrapping mini-GL drivers for Quake-engine based games. Now they've released a full OpenGL driver for the Voodoo2, but this driver cannot connect to opengl32.dll because the Voodoo2 can't display windowed 3D graphics.

A solution for this is to load the opengl32.dll dynamically instead of linking to it statically like you are used to. You can then load a 3dfx-specific opengl32.dll that connects to the Voodoo2 OpenGL driver, so you can use the Voodoo2 in your demo/application. (Note: the Voodoo3 comes with a normal OpenGL ICD like any other normal 3D board.) There is one disadvantage: you have to declare all OpenGL functions manually and obtain pointers to these functions yourself. See 'Controlling the power of hardware' later on for how to do this. There is a workaround, and that is to tell 3dfx users to copy their 3dfxopengl.dll to the demo dir and rename it to opengl32.dll. As you can see, this saves some hard work.

On Unix platforms this problem does not exist. You just specify which library to use and it will work. Stop moaning, Unix has its disadvantages too.

What do you need?

When you plan to do a demo, you should decide if you want to release it on just one platform or on more than one. If you decide on the latter, consider that porting the pure OpenGL code is a no-brainer, but the code that makes it all happen inside a window or canvas is probably pretty complicated. Some nice guy over at SGI, the inventor of OpenGL, named Mark Kilgard (now working for nVidia btw), wrote an open source library to take care of that; it's called GLUT. GLUT is a library that makes your OpenGL application totally system independent. Of course GLUT has to be available for the platforms you've chosen. What does it do? Well, everything! You register your render function that should be called every frame, do the initialisation of OpenGL (which means setting the states of several OpenGL parts), and tell GLUT to open a window, fullscreen or not, and to dive into the renderloop. Because all these things are just calls to GLUT routines, you can write a C program that uses GLUT under Windows and recompile it on Linux without changing even 1 byte of code, assuming you didn't make any system-specific calls of course.

If you decide to do it without GLUT, perhaps to have more control, you have to code the program startup yourself: opening a window, fullscreen or not, creating a rendercontext, handling messages, and at the end, tearing all this down again. This seems a no-brain choice, but it's not. GLUT might be nice, but it's still a library that may not give you the freedom you want, especially if you write a large OpenGL program or use C++. GLUT is C, not C++, and if you have an object-oriented demosystem, GLUT won't fit in, trust me.

When you've decided whether you want to use GLUT or not, it's time to decide which OpenGL lib you want to use. As I said earlier, there is a way to load a custom opengl dll, and on Unix you need to specify which library to use anyway. Silicon Graphics, the developer and inventor of OpenGL, demanded a lot of money for a licensed version of the original code of the OpenGL library. Therefore there was no OpenGL support for Linux, for example, until some people came up with a library called Mesa. Mesa is a library with the same interface as the OpenGL library and is fully compatible with OpenGL. So when you compile your program on a certain platform, it's up to you whether you use the Mesa library or the OpenGL library for that platform.

URLs for GLUT and Mesa resources and other very helpful information can be found at the bottom of this article.

Main parts for an OpenGL application.

The main parts of an OpenGL application, like a demo, are the following. Include these in this order in your program so you won't fall into a pitfall. Remember that some of these steps are covered by GLUT; see the documentation of GLUT for details. A minimal Win32-flavoured sketch of these steps follows the list.

- Create a window, borderless (for full screen) or with a border (windowed).

- Optionally switch to full screen, by changing the resolution of the desktop to that of your borderless window.

- Select a pixelformat. This pixelformat contains info about usage of double buffering, how many colorbits per element, and which buffers will be enabled.

- Create a RenderContext (RC). On Windows you first need to get a DeviceContext (DC) by using GetDC().

- Make that RC the current one.

- Initialize OpenGL parts by setting several states and enabling/disabling features.

- Set OpenGL in GL_PROJECTION matrixmode.

- Set up your frustum so OpenGL knows what to clip.

- Set up your viewport so OpenGL knows how to project the rendered scene onto the rendercontext. This means that when your frustum is a perfect square but your viewport is a rectangle that is wider in X than in Y, the result will be a stretched view. Of course this sounds great, but the same result can be achieved elsewhere, so keep this normal and do stretchy effects elsewhere (like scaling camera matrices).

- Set OpenGL in GL_MODELVIEW matrixmode.

- Enter your renderloop:

- Load the identity matrix.

- Position and aim the camera for this frame by applying transformation and rotation commands to the current matrix (which is still GL_MODELVIEW).

- Place your sceneparts and render these, by calling several OpenGL commands.

- The demo has ended, so delete the RC. If you are on Windows, you have to release the DC you got from GDI.

- Release any memory or resources.

- Close the window and, if we changed the resolution, restore the original one.

- We're done.
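As promised, here is a minimal Win32-flavoured sketch of those steps (error handling, the message loop and the fullscreen switch are omitted; assume hWnd is a window you already created; needs <windows.h>, <GL/gl.h> and opengl32.lib):

   HDC hDC = GetDC(hWnd);                 // get the canvas from GDI

   PIXELFORMATDESCRIPTOR pfd = {0};
   pfd.nSize      = sizeof(pfd);
   pfd.nVersion   = 1;
   pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL |
                    PFD_DOUBLEBUFFER;
   pfd.iPixelType = PFD_TYPE_RGBA;
   pfd.cColorBits = 24;
   pfd.cDepthBits = 16;                   // we want a depthbuffer
   SetPixelFormat(hDC, ChoosePixelFormat(hDC, &pfd), &pfd);

   HGLRC hRC = wglCreateContext(hDC);     // create the RenderContext
   wglMakeCurrent(hDC, hRC);              // make it the current one

   // ... initialize states, set up projection and viewport, renderloop;
   // at the end of every frame: SwapBuffers(hDC);

   // shutdown:
   wglMakeCurrent(NULL, NULL);
   wglDeleteContext(hRC);
   ReleaseDC(hWnd, hDC);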

Some people at this point will already know what has to be optimized to get it as fast as possible. I bet you are wrong. The speedgain is NOT, I repeat: NOT, in doing the matrix math yourself; it's in how you drive the different parts that do the rendering: the more states you change inside parts of OpenGL, the more time it takes to complete your rendercycle. But more on this in the section 'Pitfalls'.

Pitfalls.

Before you rush to your editor and start coding, you should know about some serious pitfalls people fall into when they start using OpenGL. These pitfalls are generic across all platforms.

- Know which matrix mode you are in, and what to do in this mode and what not. Do I hear some people cry 'Duh'? Ok, but do you understand the difference between the projection matrix and the modelview matrix? Probably, but the biggest pitfall for people using OpenGL is misinterpreting 'projection matrix'. The projection matrix is just for the projection and clipping of eye coordinates, and definitely NOT for camera information. Camera information is placed on the modelview matrix (hence the word 'view'). Because, as I told you before, matrix manipulations are done in reversed order, you place the translate and rotate commands for the camera before the rendering of the scene. That way every translation and rotation you do of objects in 3D space already has the camera knowledge inside it, and the camera transformations and rotations take effect after the object translations and rotations you specify.

- Use as few glBegin/glEnd pairs as possible. This sounds obvious but is not. Instead of drawing every triangle on its own, it's better to use triangle strips or fans to render meshes. Because glBegin and glEnd cause state changes inside OpenGL, you should avoid calling these functions often. OpenGL also supports vertex arrays (or you can read from your own array), which contain all vertices so they can be drawn at once with a single call or between one glBegin/glEnd pair; see the sketch after this list. Obviously these vertices have to be connected to each other in a strip or a fan.

- Know what you want to render. Sometimes it's best to sort all the primitives in the scene by texture and then render these primitives using strips and fans; sometimes it's better to sort by another OpenGL state, like blending mode, and then draw the primitives per texture. Also, order the vertices in a way that makes it easy to render the mesh, like all in clockwise order or all counter-clockwise. Because OpenGL has built-in hidden surface removal, it's important to specify the right vertices in the right order. Because this is pretty dazzling for starters, I haven't included information on API-based culling and hidden surface removal. Please read the information found via the links at the bottom of this article. The Red Book, which is still online at the moment, is a must-read for every OpenGL programmer, and it's downloadable for offline reading.
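To illustrate the glBegin/glEnd pitfall, here is a minimal vertex array sketch (standard OpenGL 1.1; the strip data is my own example): one glDrawArrays call replaces a whole series of glVertex calls.

   // four vertices forming a strip of two triangles
   GLfloat strip[4][3] =
           {
                   {0.0f, 0.0f, 0.0f}, {1.0f, 0.0f, 0.0f},
                   {0.0f, 1.0f, 0.0f}, {1.0f, 1.0f, 0.0f}
           };

   glEnableClientState(GL_VERTEX_ARRAY);    // turn on vertex arrays
   glVertexPointer(3, GL_FLOAT, 0, strip);  // point OpenGL at our data
   glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);   // draw the whole strip at once
   glDisableClientState(GL_VERTEX_ARRAY);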

Drawing a primitive.

Ok, you must be deaf by now from all this babbling about stuff you already thought you knew and which seemed totally irrelevant. Well, if you plan to do all T&L yourself, it probably is. But as I said earlier: the next generation of videocards will have T&L built into hardware, and if you let OpenGL do the T&L talking, you won't have to change 1 line of code to get a tremendous speedgain on those cards. Also, the built-in T&L code of OpenGL is pretty fast, although it's vertex based. If you want lightmaps, you have to provide them yourself.

I'm going to explain here very briefly how to draw a square. It's in pseudocode, and it's meant to illustrate what's needed to draw something. If you want to draw 1000 squares you obviously don't need to loop through this whole bunch of code.

   EnableStates();         // glEnable several opengl states we need
   glMatrixMode(GL_MODELVIEW);
   glLoadIdentity();
   TranslateAndRotate();   // this own made routine will create a
                           // matrix that will position the square
                           // at the spot we want it to be, i.e.
                           // will position (0,0,0) at the
                           // startlocation of the square. You can
                           // also leave it out and render absolute
                           // coords, but most of the time that's not
                           // preferable
   glColor3f(r,g,b);       // specify colorstate (floats)
   glBegin(GL_QUADS);      // this tells OpenGL to start drawing a
                           // quad
   glVertex3f(x1,y1,z1);   // point 1 (float coordinates)
   glVertex3f(x2,y2,z2);   // point 2
   glVertex3f(x3,y3,z3);   // point 3
   glVertex3f(x4,y4,z4);   // point 4
   glEnd();                // this tells OpenGL the quad
                           // specification is done. It can draw it
                           // now.
                           // done

As you can see it's a lot of code for drawing a square, but all you have to repeat for another square are the lines from glBegin up to glEnd, with another set of points of course.

One thing worth mentioning is the usage of floats: color values are floats clamped between 0.0 and 1.0, and vertex coordinates can be any float. OpenGL internally works with floats, so if you want speed, use them in your code too. If you don't want to use floats you can also specify integers with lookalike functions: glVertex3f() has an integer equivalent glVertex3i(). Check the help on these functions in the MSDN library online at Microsoft, in the Mesa help, or in several other API help documents.

Example program.

Below is a simple program called rotcube.c. It illustrates how to get a simple rotating cube on screen with gouraud shading and lots of colors. It uses the GLUT library, so if you don't have the GLUT libs available, get them at the official GLUT site; the URL is below in the resources and links section. It's not an example of how to code the fastest program on earth, it just gets you started. Fiddle around with the code and add your own stuff to see how it works, like a texturemap.

Compile it with a C compiler and link it with glut32.lib if you use Windows. If you are on another platform, see the GLUT distribution for what you should link with your application to get it up and running with GLUT.
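On Linux with Mesa and GLUT installed, the compile line is typically something like the following (library names and paths may differ per distribution):

   gcc rotcube.c -o rotcube -lglut -lGLU -lGL -lm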

// Draw a spinning cube, gouraudshaded with zillions of colors.
// Used a glut basic program framework by Mark Kilgard.
//
// This routine is by no means efficient. It's just to illustrate how to do
// a simple 3D effect in OpenGL.
//
// For HUGI 17 OpenGL article.
//
//////////////////////////////////////////////////////////////

// Include Glut.h, all opengl libs will be included in there.
#include <GL/glut.h>

// define faces array. it contains vertex indices to the actual vertexdata
GLint arrFaces[6][4] =
        {
                {0, 1, 2, 3}, {3, 2, 6, 7}, {7, 6, 5, 4},
                {4, 5, 1, 0}, {5, 6, 2, 1}, {7, 4, 0, 3}
        };

// angle variables, initialized to some arbitrary start values.
static  int     iAlpha=0;
static  int     iBeta=40;
static  int     iGamma=60;

GLfloat arrVertices[8][3];

// this routine actually draws the cube on screen. It draws the cube from
// the same data every time. Because we rotate the axes and translate
// (0,0,0) to a new position, the cube appears rotated.
void
DrawCube(void)
{
        int     i;

        for(i=0;i < 6;i++)
        {
                // start drawing a quad.
                glBegin(GL_QUADS);
                        // specify the working color. All vertices are now
                        // using this color unless the color is set to
                        // another color.
                        glColor3f(1.0f,0.0f,0.0f);
                        glVertex3fv(&arrVertices[arrFaces[i][0]][0]);
                        // use i to get a different color per vertex
                        glColor3f(1.0f,(1.0f * (i%2)),0.0f);
                        glVertex3fv(&arrVertices[arrFaces[i][1]][0]);
                        glColor3f(1.0f,1.0f,(1.0f * (i%2)));
                        glVertex3fv(&arrVertices[arrFaces[i][2]][0]);
                        glColor3f(1.0f,0.0f,1.0f);
                        glVertex3fv(&arrVertices[arrFaces[i][3]][0]);
                glEnd();
        }
}


// Updates the angles
void
AdjustAngles()
{
        iAlpha++;
        iAlpha%=360;
        iBeta++;
        iBeta%=360;
        iGamma++;
        iGamma%=360;
}


// this routine renders the scene. It's called every frame because we
// register it as the idle function.
void
RenderFrame(void)
{
        // clear buffers: colorbuffer and depthbuffer.
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        // update the angles
        AdjustAngles();
        // set matrixmode in Modelview.
        glMatrixMode(GL_MODELVIEW);
        // load the identity matrix into the modelview matrix.
        glLoadIdentity();
        // position the camera.
        glTranslatef(0.0,0.0,-4.0f);

        // rotate the axis
        glRotatef((float)iAlpha,1.0f,0.0f,0.0f);
        glRotatef((float)iBeta,0.0f,1.0f,0.0f);
        glRotatef((float)iGamma,0.0f,0.0f,1.0f);
        // draw the cube
        DrawCube();
        // tell glut to swap the framebuffers. Glut will signal OpenGL to
        // do this.
        glutSwapBuffers();
}


// initialize variables
void
Init(void)
{
        // Setup cube vertex data.
        arrVertices[0][0]=arrVertices[1][0]=arrVertices[2][0]=
          arrVertices[3][0]=-1;
        arrVertices[4][0]=arrVertices[5][0]=arrVertices[6][0]=
          arrVertices[7][0]=1;
        arrVertices[0][1]=arrVertices[1][1]=arrVertices[4][1]=
          arrVertices[5][1]=-1;
        arrVertices[2][1]=arrVertices[3][1]=arrVertices[6][1]=
          arrVertices[7][1]=1;
        arrVertices[0][2]=arrVertices[3][2]=arrVertices[4][2]=
          arrVertices[7][2]=1;
        arrVertices[1][2]=arrVertices[2][2]=arrVertices[5][2]=
          arrVertices[6][2]=-1;

        // Use depth buffering for hidden surface elimination.
        glEnable(GL_DEPTH_TEST);

        // use gouraudshading on vertexbasis
        glShadeModel(GL_SMOOTH);

        // Setup the view of the cube.
        glMatrixMode(GL_PROJECTION);
        // use a glut function to set up the frustum. See the docs on this
        // function for more information
        gluPerspective(40.0, 1.0, 1.0, 10.0);
}


// This function is called when the window is resized.
void
Reshape(int w, int h)
{
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        gluPerspective(40.0,w/(float)h,1.0,10.0);
        glViewport(0,0,w,h);
}


// your main function
int
main(int argc, char **argv)
{
        glutInit(&argc, argv);
        glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
        glutCreateWindow("HUGI 17 OpenGL Example");
        glutDisplayFunc(RenderFrame);
        glutReshapeFunc(Reshape);
        // the idle function is called whenever there is nothing else to
        // do, so the scene keeps getting redrawn. If you don't register
        // this function, no animation will take place. A cleaner way is
        // to register a small function that just asks glut for a redraw
        // (glutPostRedisplay()).
        glutIdleFunc(RenderFrame);
        Init();
        // dive into the glut mainloop.
        glutMainLoop();
        return 0;
}

Controlling the power of hardware.

Hardware develops fast, faster than API specs most of the time. A big disadvantage of this is that new features like multitexturing and environment bumpmapping are missing from the APIs. Unlike D3D, OpenGL doesn't get a new version of the API every six months; it has an extension mechanism instead. How does this work? The current API version of OpenGL installed with Microsoft's opengl32.dll is 1.1. That means all hardware using this DLL has to provide extensions for additional hardware features that are NOT provided in the 1.1 API, like multitexturing. Mesa is fully 1.2 compliant. The OpenGL spec itself is now, like Mesa, frozen at 1.2, and an update of opengl32.dll is expected. In the meantime, only 3Dlabs, with the Permedia3, has promised to supply a fully 1.2 compliant ICD.

What does this all mean? Well, for example, if you want to use multitexturing NOW, thus on the 1.1 API, you have to query the card driver whether it supports an extension that will do the multitexturing on the hardware, because it's not available in the native OpenGL API that you use. If you call a function in the native API, like the usage of a stencilbuffer, and it's not provided by hardware, it's automatically done in software. If you call a function that is NOT part of the API and the hardware doesn't support it, your application will probably crash.

This has one major drawback: if you want your demo to run on videocards that do not support an extension, like the ARB_multitexture extension which is used to do multitexturing on TNTs, Rage128s and G400s, you have to write the code that uses the extension twice: one path for the videocards that don't support the extension and one for the cards that do.

So it's up to you to include support for all cards or to limit support to just a couple of cards. If you do limit support for certain features, or require features, please state this in an info file with your demo; it will save a lot of people download time if their videocard doesn't support a required feature.

To include support for a given feature, you have to do the same as what you'd do when you want to dynamically load the opengl dll. I'll illustrate the process by grabbing the extension for controlling the VSYNC. This is not included in the standard OpenGL 1.1 specs, and most cards support it nowadays. For a complete list of extensions and what they do, see the links in the resource section.

To test for an extension, you have to 'parse' the string that OpenGL returns when you call the following function:

           glGetString(GL_EXTENSIONS);

The string is a space-delimited list of substrings. An example could be: "GL_ARB_multitexture WGL_EXT_swap_control"

We are interested in the last extension. We scan the returned string for the substring "WGL_EXT_swap_control", and we find it, which means that the driver supports the extension. The opengl32.dll (or the Xlib variant) you use provides a way to get a pointer to the function(s) that make the extension actually work. In our example, we need a pointer to wglSwapIntervalEXT(), to set the vsync parameters. You probably noticed it has 'wgl' in front of it, and indeed this is a Windows function. It's a Windows extension, and I'm not sure if it's supported on X11, but for the example that's irrelevant.
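Scanning that string can be done with a small helper routine like the sketch below (my own routine, not from any SDK; note that a plain strstr() is not enough, because 'GL_EXT_texture' would also match inside 'GL_EXT_texture3D'):

   #include <string.h>

   // returns 1 if 'name' appears as a complete word in the extension string
   int HasExtension(const char *extensions, const char *name)
   {
           const char *pos = extensions;
           size_t len = strlen(name);

           while((pos = strstr(pos, name)) != NULL)
           {
                   // the match must be delimited by spaces or string ends
                   if((pos == extensions || pos[-1] == ' ') &&
                      (pos[len] == ' ' || pos[len] == '\0'))
                           return 1;
                   pos += len;
           }
           return 0;
   }

   // usage:
   // if(HasExtension((const char*)glGetString(GL_EXTENSIONS),
   //                 "WGL_EXT_swap_control")) ...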

To get a working pointer to an extension function it is wise to use the common header file glext.h. This file is downloadable from a lot of websites; just browse some links that deal with extensions and you'll find plenty of full-featured glext.h files.

glext.h contains the function pointer type definitions for the extension functions and their parameters. We want to use the WGL_EXT_swap_control extension, so we check if the following line is present:

   typedef void (APIENTRY * PFNGLWGLSWAPINTERVALEXTPROC) (GLint interval);

If not, add it yourself. Next we have to define a variable that will contain the actual pointer, so we add to our program:

   PFNGLWGLSWAPINTERVALEXTPROC        wglSwapIntervalEXT = NULL;

Now we actually fetch the pointer to the extension function, so we call:

   wglSwapIntervalEXT =
     (PFNGLWGLSWAPINTERVALEXTPROC)wglGetProcAddress("wglSwapIntervalEXT");

We now have the function pointer, so we can call the function to set the swap parameters. By default the buffers are swapped in sync with the vertical retrace, but we want to swap as soon as the frame is finished. This sometimes causes tearing (like Quake does when you disable vsync), but it can also raise the framerate to some extent, and that's what we all want.

If we pass a '0', OpenGL won't wait till the vertical retrace to swap the buffers but will swap immediately. So we do:

           (*wglSwapIntervalEXT)(0);

This may look a little odd to some, which is why I mention it here. You can of course fiddle with the typedef usage to get rid of the '*'; in C, calling the pointer directly as wglSwapIntervalEXT(0) works just as well.

The same method is used to get pointers to ALL the OpenGL functions inside the DLL when you dynamically load the opengl library. There are some sources available that show elegant ways to do this. One is the official OpenGL driver for Unreal, written by Tim Sweeney, IMHO the best and neatest way to do it, although nothing beats just letting the OS load the lib!
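A minimal sketch of that dynamic loading on Windows (loading just one function here; a real loader repeats the GetProcAddress line for every gl function it needs, and the DLL name is whatever you decide to load):

   #include <windows.h>
   #include <GL/gl.h>

   typedef void (APIENTRY *PFNGLBEGINPROC)(GLenum mode);

   PFNGLBEGINPROC pglBegin = NULL;
   HMODULE        hGLLib   = NULL;

   // dllName is e.g. "opengl32.dll" or a renamed "3dfxopengl.dll"
   int LoadGLLibrary(const char *dllName)
   {
           hGLLib = LoadLibrary(dllName);
           if(hGLLib == NULL)
                   return 0;
           // core 1.1 functions come straight out of the DLL;
           // extension functions still go through wglGetProcAddress
           pglBegin = (PFNGLBEGINPROC)GetProcAddress(hGLLib, "glBegin");
           return (pglBegin != NULL);
   }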

Resources on the net.

Below are some resource links that can be very useful for OpenGL programmers. Because I couldn't cover a lot in this article (otherwise it would have been a thick book), I mention these links here. For starters, and for people who don't have the book on paper, download the Red Book from the URL below. It's version 1.0 but still covers everything a beginner needs to know.

The center of the OpenGL community: great startpoint for info. http://www.opengl.org

Nate Robins' (co-programmer of GLUT for Windows) pages, with source code, info, and tutorials. http://www.xmission.com/~nate/opengl.html

Angus Dorbie's pages. They are about Performer, a gfx lib on top of OpenGL, but contain some neat ideas for demos and how to use OpenGL for them. http://www.dorbie.com

nVidia's extensions used in the TNT and TNT2, how to use them and what they do.
http://www.berkelium.com/OpenGL/NVIDIA/extensions.html

The Red Book, the OpenGL Programming Guide, v1.0. A MUST read. It can be downloaded for offline reading.
http://fly.srk.fer.hr/~unreal/theredbook/

The main center of links to all OpenGL resources you can think of, maintained by Mark Kilgard, the author of GLUT. Start here if you need info on any OpenGL-related topic.
http://reality.sgi.com/mjk_engr/

GLUT (get the source distribution, because it contains numerous examples with sourcecode showing how to use GLUT; some are pretty advanced and would make neat demoeffects)
http://reality.sgi.com/opengl/glut3/glut3.html

Mesa:
http://www.mesa3d.org

Nate Miller's page, full of handy sourcecode.
http://members.home.com/vandals1/programming.html

Several good examples of rarely explained stuff, like loading the OpenGL dll dynamically:
http://home.bc.rogers.wave.ca/borealis/opengl.html

Nice examples of advanced topics like volumetric fog:
http://www.gamedev.net/opengl

Great info on extensions:
http://www.opengl.org/News/Special/OGLextensions/OGLextensions.html

Unreal's OpenGL driver sourcecode. About using extensions the right way and loading a dll and getting functionpointers. Advanced C++.
http://unreal.epicgames.com/UnrealPubSrc224v.zip

Newsgroups:
comp.graphics.api.opengl
comp.graphics.opengl
3dfx.opengl
linux.dev.opengl

Otis/Infuse project