3D Coding TAT - Our 3D World Is Flat
Most of the 3d problems concerning rendering can be quickly approximated and thought of in terms of 2d; after all, this is what the view-screen bitmap is. The subjects of lighting, shading and polygon filling may appear to be purely 3d ones but can be thought of in terms of filling horizontal or vertical lines on the 2d screen bitmap. If we take the X and Y coords as being the view-screen coordinates then the lighting/shading value together with the Z coordinate can be thought of as one parameter which changes over the length of a single line span. This parameter at its most simple would be constant (no shading) and at its most complex it would change at every pixel along the line span, possibly once each for the red, green and blue of each pixel as in the case of coloured RGB lighting. Basic shading would take the form of linear stepping along the line span, and advanced shading would perform some form of radial/circular shading across the polygon face; this is because lighting a flat polygon can be thought of as a plane intersecting a solid sphere (the radiating light). The distance from the light source affects the shading, so the further away the plane (flat polygon) is, the less light should fall on it. Perhaps the maths world of conic sections may prove useful in this area as it deals with planes intersecting cones to produce ellipses, circles and all other types of curves.
1: Normal Shading
I believe most people programming 3d stuff use vector maths to perform hidden-surface removal (back-face culling) and various shading techniques. Using a vector for each light source and the normal-vector for each polygon it is possible to calculate the amount of shading required. I haven't looked into this area too much because I have only recently managed to get my sticky-mits on a few scraps of info. I think you need to store normal-vectors (or is it unit-vectors?) for either each polygon face or each vertex (which is impossible, isn't it? How can a point have a normal vector? I thought you needed a plane? Possibly by averaging the normals of the surrounding polygon faces which share the vertex).
2: Pointless Shading ?
One way I have thought about doing shading is to find the distance from each light-source (X,Y,Z) to each vertex (X,Y,Z); this would mean an N-sided polygon would have N shading parameters (one for each vertex). Then as the polygon is drawn the shading parameter is interpolated between them.
                 A
                 .

       V1 ------------------ V2

                          . B
In the above 2d diagram we have two vertices V1 and V2 which define a line and we have two light-sources A and B. We could take the distance from A to each vertex V1 and V2 and then do the same for light-source B. At the end we should have two lighting totals (one for each vertex). Now we can interpolate between them along the line to give a rough approximation to the light level at each pixel.
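The per-vertex totals can then be stepped along the span just like any other interpolated parameter. A quick sketch of one span fill (names are mine; 16.16 fixed-point keeps the inner loop free of divides):

```c
/* Fill one horizontal span of a 256-shade bitmap, linearly
   interpolating the light level between the two end vertices. */
void shade_span(unsigned char *row, int x1, int x2,
                int light1, int light2)
{
    int width = x2 - x1;
    long light = (long)light1 << 16;                         /* 16.16 fixed-point */
    long step  = width > 0 ? (((long)(light2 - light1)) << 16) / width : 0;
    int x;
    for (x = x1; x <= x2; x++) {
        row[x] = (unsigned char)(light >> 16);
        light += step;
    }
}
```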
Of course this is not absolutely correct; you should really perform the lighting/distance calculation PER PIXEL as it is possible to get overlapping regions in the middle of a line span where the lighting amount should peak before decreasing as it moves towards each end vertex. One way to handle this could be to detect if one vertex exceeds a light-source's illumination range (so at some point along the line it stops being lit by this light-source). This would mean you should really create a quasi-vertex on the line/plane and interpolate in two steps. If there is another light-source which partly illuminates the line/plane (again with one vertex within its illumination range) then we would need to create yet another quasi-vertex and interpolate between those. As you can see this creates more and more work depending on how many partially lit sections there are.
Like most other things in this document I haven't tried this method yet so I can't comment on how good (or bad) it looks. The proper vector method would no doubt give much better and more realistic results but I think this method (cheat) would be faster. Calculating the true distance between two 3d points requires 3 multiplies and 1 square-root, which are too slow. So I suggest using the longest axis length; this would remove the 3 multiplies and square-root and replace them with the following code:
(Lx,Ly,Lz) = light source coords.
(Vx,Vy,Vz) = Vertex coords
X = ABS (Vx - Lx)
Y = ABS (Vy - Ly)
Z = ABS (Vz - Lz)
D = X
IF Y > D THEN D = Y
IF Z > D THEN D = Z
D = longest absolute axis length.
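In C the above pseudocode might look like this (a sketch; the "longest axis" metric is what maths people call the Chebyshev distance, standing in for the true Euclidean one):

```c
#include <stdlib.h>  /* abs */

/* Approximate the distance from light (lx,ly,lz) to vertex (vx,vy,vz)
   using the longest absolute axis length - no multiplies, no sqrt. */
int longest_axis_dist(int lx, int ly, int lz, int vx, int vy, int vz)
{
    int x = abs(vx - lx);
    int y = abs(vy - ly);
    int z = abs(vz - lz);
    int d = x;
    if (y > d) d = y;
    if (z > d) d = z;
    return d;
}
```

The error against the true distance is at worst a factor of sqrt(3) (when all three axes are equal), which is probably acceptable for a lighting fudge.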
So if we have V number of vertices and L number of light-sources then we need to do V*L calculations.
GOOD NEWS: The advantage of this method is that you don't need to store the normal-vectors or do any of the multiplication and it also returns the distance. This distance can be used to simulate fog effects like horizon cut-off.
BAD NEWS: The lighting is NOT directional, it is kind of spherical. It does not take into account the direction of the light or the direction of the polygon (plane). It will even light vertices which should be hidden by other things because we are really lighting points (the vertices) rather than planes. It takes time to process L (light sources) times V (vertices). It does not handle multiple light-sources correctly where regions along a line/plane are lit by two or more overlapping illumination zones.
3: Regional Light-Sources
Checking every light source against every vertex would be far too slow to be usable, so we need some way to speed it up. For a very large level with rooms, walls and doors only a few light-sources will be in range (supplying enough light to reach the polygon's vertices). Also if you close a door then the light source behind it should be hidden or partially blocked so you can only see the light through the gap.
What we need is some form of regional restriction on the light sources we check. Only the local and TRACEABLE lights should be used. When I say "traceable" I mean light-sources from which a direct line can be traced out to a vertex/face. This is not the same as visibility because a light behind your view-point cannot be visible but will still light the scene in front of you. Also lights around corners will still illuminate the walls around them. There is also the problem of shadowing when a light is on one side of a wall and so cannot illuminate the other side (unless it is a transparent wall, in which case it is a window).
On the face of it (lame pun) the problem seems very difficult and complex. We need to block out some lights and only consider those within a certain distance and at certain angles. Going back to the INLET rendering technique with its convex volumes and transparent face links, it does offer some solutions to these problems. For a start the regional part is mostly taken care of by the rendering method itself because we only follow inlets (doors, windows) that fall in our view-point's line-of-sight. So only the local regions in the environment are used and rendered. The light sources can be defined within each of these inlet volumes (the convex sub-rooms or sub-areas). Now if an inlet volume is rejected then so too can its lighting and shading contributions to the accepted vertices. We could use the inlet links (the window or door faces) to halt all light sources which are external to the current volume (i.e. if we close the door then the light-source beyond the door is rejected and ignored). This effectively allows curtain-like effects to be done. A door or wall could open and illuminate a darkened room or vice versa.
But there is a problem with this rendering-rejection. Just because a neighbouring inlet room volume cannot be seen from the current view-point does not mean its light-sources should be ignored.
            / \                  |
           /   \X     room 2     |
          /     \________________|
         / room 3 \
     L  /  room 1  \
        /     |     \
In the above diagram we have 3 rooms; we are standing in room 1 and looking into room 2 through an inlet-link. We would reject room 3 because inlet-link 'X' is considered invisible and so is ignored. But suppose that there is a light-source 'L'; it would illuminate part of room 2 through the inlet-link 'X'. As we have rejected room 3 the 'L' light-source would also be rejected, which is wrong. Now if we turn our view-point so that inlet-link 'X' becomes visible then suddenly the light-source would be used. Even though the light-source has not moved or changed in brightness it has suddenly switched on.
So we need to follow rooms that are INVISIBLE but have light-sources which are TRACEABLE. The traceability of a light-source in this inlet method would need to take into account any closed inlets such as closed doors, blackened windows, walls etc.
A neat bonus of using the inlet method is that we know where local light shines through (the inlets themselves) and could dim any light passing through them to create smoked glass or an LCD-like fade effect.
4: Last Orders ?
One of the most obvious ways to perform shading is to process an entire polygon at a time for each polygon in the scene. But we only need to calculate the shading parameters for polygons which are visible AND only those which are not entirely clipped. So a typical renderer would do the shading/lighting stage as one of the final stages in the rendering pipe-line. We take the output of the back-face culling and polygon clipper and use these items to perform the lighting effects on. This should help minimize the amount of work which the shading algorithms have to do.
If we work out the shading on a vertex basis then this post-visibility and post-clip method should work okay. We could calculate the lighting amount for the clipped vertex points OR for the unclipped vertices and then clip these lighting amounts as we clip the polygon. So the decision is whether to light the pre-clipped vertices (this would allow up to three polygons to share the same value) or to use the post-clipped vertices. I think the first option would work out faster as most of a scene tends to contain far more unclipped vertices than clipped ones, especially if fairly small polygons are used to build up the scene.
5: Here's One I Lit Earlier
But all this real-time lighting calculation takes time and no matter how fast the algorithm that performs it is, it will still take a fair amount of time. So we do what all good video-games do, we CHEAT! Rather than start off with a list of fully illuminated polygons and a list of light-sources and process them all, we can pre-calculate as much of the static lighting as possible at the level-editor stage. This would give us far less to do when rendering. We examine the shading values stored with each vertex and interpolate between them as we fill each polygon. This does mean a little more storage (perhaps one number per vertex) but it also means a lot less processing.
So we are left with the task of shading each polygon pixel as we scan-convert it onto the screen bitmap. This often requires a pixel look-up to shade a pixel and a few ADDs. But even here there is another cheat, and perhaps one of the most lazy. After playing the MDK demo a couple of times I get the feeling of NO-SHADING on the polygons which make up the levels and creatures. I could be wrong but they don't seem to change their lighting level even when firing your 2d-bitmap gun sprite point-blank at a wall (go on, try it). This would explain why the game runs fairly smoothly at a high resolution (even though it has a large blank border to further reduce the amount of CPU workload).
Games like QUAKE, QUAKE 2 and (I think) UNREAL all handle lighting in various ways. Firstly there is the static shading which is probably done in the level-editor. Secondly there is the impact lighting where a flickering torch burns on a wall, lighting up the surrounding area. And thirdly we have the dynamic, travelling lights (blaster trails which illuminate the surrounding corridor as they move). Actually the second and third cases are the same, except one moves. They both illuminate a very local area around them. This is a good strategy because most of the lighting and fancy shading can be done by the level-editor, removing most of the CPU workload from the game engine. I hazard a guess that they use the vertex parameter method I previously mentioned and interpolate the light level across each polygon as it is drawn. When a 'dynamic' light source is needed we can just modify these vertex-levels for a short amount of time until the rocket explodes or the burning torch dies out etc... and then their normal level returns.
6: Multiple Light-Sources
This is a 'simple' case of summing up all the light which falls on a vertex or polygon. We take each of the light-sources and add its lighting contribution to its target subject. Possibly even doing three of these totals for the R, G and B lighting (this could be how UNREAL does it but having not seen the game running I can't guess too much).
Having more than one light-source which can fall on a vertex or polygon does mean extra work but it doesn't have to be done every render-cycle. Take the example of a room with two light-bulbs which both illuminate the same wall. All the correct lighting amounts could be pre-calculated by the level-editor and each vertex assigned a lighting total. This total is interpolated across the polygon to give an okay approximation. But this means the lighting is static. Suppose we wanted to turn off one of the light bulbs; then we would need to recalculate everything again, wouldn't we? And what about blinking lights or pulsating laser beams or some other flashy effect?
7: Static Kinship Lighting Tables
As we don't mind a little more data to store and have plenty of RAM to spare (especially when you consider most PCs have 16 meg or more), then instead of calculating the distance of vertices from each light source (and all the other problems connected with it) we can build relationship tables for each vertex and their shared light-sources. We know that each vertex (or polygon) can only be illuminated by a certain (and hopefully small) number of light-sources and we know that the distances between light and vertex will remain constant (remember this is STATIC lighting). For each vertex we need to sum up the total amount of light which it receives; also these light-sources can vary the amount of light that they emit (we can turn them on/off or fade them in a groovy way). These variable brightness and static lighting effects can be performed with little more than a few lookups and a series of adds.
For each light-source we can define a simple table where one entry describes its brightness level.
For R,G,B lighting (with a separate level for the red, green and blue components) it would require three such entries. We can assign a unique token value to each of these light-source definitions or "lamps"; this way we can specify which lamp (or lamps) affect a particular vertex or polygon. The lamp-token is nothing more than a straightforward index or address-pointer. It is used to allow many target vertices to all share the same lamp (light-source definition). Suppose we have six vertices and one lamp:
          B ______ C
           /      \
        A /   L    \ D
          \   .    /
           \______/
          F        E
In the above diagram we have a six sided shape with six vertices (A...F) and a single light-source 'L'. Examining just one vertex we need to indicate what illumination it receives. This can be done with a "lamp-token" list. It just describes which light-sources affect a particular vertex. There would be a separate lamp-token list for each vertex. In the above diagram this means six lists.
PLEASE NOTE: I said "which light-sources affect a particular vertex". This means that for each vertex we only list the lamp-tokens that can affect it. We only include a lamp-token in the list if any light from the source can possibly strike the vertex; otherwise we do not, because either the light is hidden by some obstruction or is too far away.
The 'KIN' table is basically a table of lamp-token lists of varying lengths (depending on how many light-sources are traceable to each vertex). Each list element is made up of just 2 parameters. One is the lamp token; this represents which light-source should be used and can be implemented as either an address pointer or look-up index. The other is the strength of the light-source, which can be used to calculate the amount of illumination which manages to arrive at the vertex (due to atmospheric dust, fog and so on...). It is also a good idea to have a count or a special list terminator code so we can tell when we have reached the end of each lamp-token list.
To put it simply, we have a list of lamp-tokens and strengths for each vertex. The strength modifies the lamp's brightness level for distance (fog etc.). It is really just a scaling amount used to divide the brightness. If lamp L had a range of 100 then a "KIN" table could look like this:
e.g. KIN table
Vertex Count Lamp Strength
-------- ------ ------------------
A 1 L 57%
B 1 L 50%
C 1 L 57%
D 1 L 73%
E 1 L 78%
F 1 L 68%
Where each 'Strength' value is calculated from the lamp's reach and the vertex's distance from the lamp. I didn't use a distance field because doing it this way removes some calculation, which is important when you consider that this needs to be done for every vertex. As vertices are defined further and further away from the light, their strength decreases until it is zero. Once it is zero the lamp is deemed to be out of range, does NOT contribute anything to the vertex and so is NOT entered in that vertex's lamp-token list in the KIN table. A level-editor would need to store each light's range and work out this strength factor when a level is saved, but a game engine would probably only need the strength values.
The 'lamp' table could look something like this:

     Lamp     Brightness
     --------------------
     L        100%

and for R,G,B lighting like this:
Lamp RED GREEN BLUE
L 100% 100% 0%
The above lamp definition would describe a yellow light.
Defining the light-sources using this tokened method means that we can control an entire room's (or even entire level's) lighting just by modifying its brightness in the lamp table definition. By changing a value up and down we could produce some pulsating or flashing lighting effects.
The reason why a "Strength" parameter was included in the KIN table is because it allows us to scale a lamp's brightness for distance more quickly, rather than having to calculate the distance from the light-source every time. As the light-sources do not move and neither do the vertices, the distance value does not change so it can be pre-calculated by either the level-editor or the loader. I think the best option is the level-editor as this allows weird effects to be designed into levels to make 'em more interesting.
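Putting the KIN idea into code, the per-vertex total really is just a short loop of look-ups and adds. A sketch (the struct layout and the 0..100 percentage scale are my own guesses, not a fixed format):

```c
/* One entry in a vertex's lamp-token list. */
typedef struct {
    int lamp;       /* index into the lamp brightness table    */
    int strength;   /* 0..100, pre-calculated by the editor    */
} KinEntry;

/* Sum the light arriving at one vertex: each entry scales the
   lamp's CURRENT brightness by its STATIC strength factor. */
int vertex_light(const KinEntry *list, int count,
                 const int *lamp_brightness)
{
    int total = 0, i;
    for (i = 0; i < count; i++)
        total += lamp_brightness[list[i].lamp] * list[i].strength / 100;
    return total;
}
```

Dimming or flashing a lamp only means poking a new value into `lamp_brightness[]`; the KIN lists never change.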
Now let's suppose we added two more lights to our six-sided shape (labelled J and K) and both these emit very little light (candles for example). So vertices close to them would be illuminated but more distant ones would not because their illumination range has been exceeded.
          B __ J __ C
           /   .   \
        A /  L   K  \ D
          \  .   .  /
           \_______/
          F         E
Now the KIN table could look something like this:
e.g. KIN table
Vertex Count Lamp Strength Lamp Strength
-------- ------ --------------- ---------------
A 2 L 57% J 50%
B 1 L 50%
C 1 L 57%
D 2 L 73% K 30%
E 2 L 78% K 60%
F 1 L 68%
Because our two very dim light-sources (J and K) only emit a very small amount of light they can only illuminate close vertices (in the above example A, D and E), so only these are within their very short range and will receive any light.
In the last example three vertices received light from two lamps and the other three from only one lamp. After totalling up all the different illumination amounts from the lamps it is possible that the maximum lighting limit is exceeded (when many lamps are at full brightness and very close or have a vast range). So perhaps an upper-limit check is needed on the illumination total before it is used, or the "Strength" field could be fudged to prevent this.
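The upper-limit check itself is just a saturating clamp on the total before it is used as a shading index (the maximum level here is an assumption; it would really be whatever your shading table's top entry is):

```c
/* Clamp an illumination total to the shading table's maximum level
   so that several overlapping bright lamps cannot overflow it. */
int clamp_light(int total, int max_level)
{
    return total > max_level ? max_level : total;
}
```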
Another problem is that even with look-up tables this totalling up for each vertex can take time. Remember that we might have a list of lamps to look up for each vertex, not just one. The more lights which can strike a vertex the more work will be involved. My advice is to restrict the range and number of light-sources to a reasonable amount and don't allow a light a mile away to illuminate hundreds of vertices.
9: Here comes the sun..
On the subject of vastly distant light sources, it is possible to create a moving sun effect with a little pre-planning. Having a stationary sun is boring; one which drifts around a scene is far more interesting (and something which I haven't witnessed in any game engine YET!). The problem is that we want the sun to move, or at least give the impression of movement across the sky, and this means we can't use our previous static lighting techniques. (Doesn't it?) If the sun moved then ALL the vertex distances would need to be recalculated and so too would the amount of illumination striking them.
Now what is movement in terms of computer animation? Just a series of frames used one after the other to give the illusion of movement. If you think of these frames as being static images then the following cheat should soon spring to mind.
We can define a series of infinitely distant light sources which are placed in a semi-elliptical manner around the scene, just like how the sun would arc across the sky. You can think of this string of lamps as being snap-shots of the sun at various intervals throughout the day. Now to give the sun the appearance of movement we just switch one light off and the next one on to give a chasing-lights effect and yipee! We have a moving sun using static lamps. All we have to do is build the correct lamp-token list for each vertex which the sun can possibly strike, in the same manner as all the other light-sources. Then to move the sun we just modify the brightness of these orbiting lamps to give the impression of movement. We don't have to use a great number of lamps to define the arc; we can fade one light out and the next one in to give the illusion of a smooth transition, i.e. for a sun position halfway between two lamps you could make each one 50% bright, easy eh?
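As a sketch of the fading trick: given the time of day as a fraction 0..1 and N arc lamps, only the two lamps either side of the sun's position need to be lit, split in proportion (all names and the 0..100 brightness scale are my assumptions):

```c
/* Fade between a string of 'n' static sun lamps to fake movement.
   t = time of day, 0.0 .. 1.0.  brightness[] receives 0..100 each;
   only the lamp behind the sun and the one ahead of it are non-zero. */
void sun_arc_brightness(double t, int n, int *brightness)
{
    double pos = t * (n - 1);       /* position along the lamp string */
    int    a   = (int)pos;          /* lamp behind the sun            */
    double f   = pos - a;           /* how far towards the next lamp  */
    int i;
    for (i = 0; i < n; i++) brightness[i] = 0;
    brightness[a] = (int)((1.0 - f) * 100.0 + 0.5);
    if (a + 1 < n) brightness[a + 1] = (int)(f * 100.0 + 0.5);
}
```

Feeding these two brightness values through the normal lamp table gives the smooth "chasing lights" sun for free.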
10: Delta Lumination
Unless you are planning to create a Disco level creator with every single light-source flashing in a funky way, then a vast number of vertices will receive a near constant amount of light because the light-sources remain steady. And even with blinking lights or ones which can be switched on or off, these will not change state very often. So a delta method can be employed. Keep the lighting total for each vertex and only ever recalculate the total if any of the lamps' brightnesses change. One way would be to include a flag in the lamp table to indicate when its brightness has changed, but this would still mean testing every lamp in each vertex list so it would probably be faster to just perform the totalling in the first place.
Another way could be to modify each vertex total whenever a brightness changed (if it turns off then subtract the amount and if it turns on then add the amount). This would require additional storage to record the list of vertices which each lamp affects. And it means that every time a light's brightness changes we need to modify all the vertex totals.
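The second scheme might be sketched like this: each lamp keeps a back-reference list of the vertices it can strike, and a brightness change is applied as a signed delta to their totals (the layout is assumed, matching the earlier KIN strength idea):

```c
/* One lamp's back-reference list: the vertices it can strike,
   with the pre-calculated strength factor for each (0..100). */
typedef struct {
    int vertex;     /* index into vertex_total[]           */
    int strength;   /* static scaling, as in the KIN table */
} Reach;

/* A lamp's brightness changed from old_b to new_b: patch every
   affected vertex total instead of re-summing whole lamp lists. */
void lamp_delta(int old_b, int new_b,
                const Reach *reach, int count, int *vertex_total)
{
    int i;
    for (i = 0; i < count; i++)
        vertex_total[reach[i].vertex] +=
            (new_b - old_b) * reach[i].strength / 100;
}
```

Turning a lamp off is `lamp_delta(100, 0, ...)` and turning it back on is the mirror call, so the totals always stay consistent.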
11: Texture Light mapping
The normal way to shade a polygon (textured or solid colour) is to interpolate between 2, 3 or 4 values across its surface to quickly calculate the shading/lighting amount for each pixel. Depending on the amount of work the interpolation takes up, this is usually the quickest (and most CPU friendly) way to fill shaded polygons.
Another way, which I originally looked at when writing my very first texture mapping routines, is a pixel based shading technique which I believe is called "light mapping" (I could be wrong). The basic idea behind it is to use another bitmap (2d array) for the lighting level at each pixel position. In many respects it is identical to texture mapping except each pixel/element is used as a shading/brightness level rather than a screen pixel.
A normal texture-mapper would plot a pixel like this:
pixel = bitmap[u,v]
plot (x,y), pixel
But this would just plot a pixel from our source bitmap on the screen. The simplest form of shading would take each pixel and use a look-up table to find the correct shaded pixel.
pixel = bitmap[u,v]
plot (x,y), shadingtable[shade, pixel]
If the light-map array has the same dimensions as the source bitmap then we use each element in the light-map as our shading index value.
pixel = bitmap[u,v]
shade = lightmap[u,v]
plot (x,y), shadingtable[shade, pixel]
The above code can be performed in about 5 instructions. Of course you still need to step the (u,v) and (x,y) coords for the correct texture mapping.
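In C the per-texel work might look like this, with the 2d arrays flattened and the shading table laid out as one 256-pixel row per shade level (the texture width and table layout are my assumptions):

```c
#define TEX_W 64    /* assumed texture / light-map width */

/* Plot one light-mapped texel: the light-map shares (u,v) with
   the texture and selects which row of the shading table to use. */
unsigned char lightmapped_texel(const unsigned char *bitmap,
                                const unsigned char *lightmap,
                                const unsigned char *shadingtable,
                                int u, int v)
{
    unsigned char pixel = bitmap[v * TEX_W + u];
    unsigned char shade = lightmap[v * TEX_W + u];
    return shadingtable[shade * 256 + pixel];
}
```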
12: 2d screen filters
I'm not sure how useful or worthwhile this method could be but I'll describe it anyway coz it might inspire something....
There is another way which 'might' be useful when drawing shaded polygons and other lighting effects (lens flaring and other dazzling stuff). Instead of shading each pixel as we draw it on a 2d bitmap, why not build two 2d bitmaps, one for the pixels and one for the shades? So we are drawing the shades as if they were real pixels, except in a separate buffer. Now we can shade the entire screen bitmap against the entire shade bitmap in a single loop or two.
for y = 0 to screenheight-1
for x = 0 to screenwidth-1
pixel = screenmap[x,y]
shade = shademap[x,y]
plot (x,y), shadingtable[shade, pixel]
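The same loop in C collapses into a single pass over the two flattened buffers (buffer sizes and the one-row-per-shade table layout are assumed, as before):

```c
/* Post-filter an entire frame: combine the pixel buffer with the
   separately drawn shade buffer through a shading look-up table
   laid out as one 256-pixel row per shade level. */
void apply_shade_map(unsigned char *screen,
                     const unsigned char *shademap,
                     const unsigned char *shadingtable,
                     int npixels)
{
    int i;
    for (i = 0; i < npixels; i++)
        screen[i] = shadingtable[shademap[i] * 256 + screen[i]];
}
```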
What is the point of this method? Surely it would be less cache-friendly than a normal shade-as-you-plot technique. Well, yes. It also requires more storage, an extra memory write and an extra memory read, so it doesn't look good. But this method does allow some post-filling shade tricks to be performed on the shade-map array, such as lens flaring, fog, rain or edge fade-outs near the screen edges.
Personally I feel that this method does not really justify the amount of extra CPU work for the few extra effects which it gives. Most of these effects can be achieved using some 2d-sprite techniques afterwards. In the case of lens flaring a "shaded-bob" sprite can be blasted onto the screen far more quickly than writing-then-reading the entire shade-map array in memory.