3D Coding TAT - Distant Horizons

TAD

Like most programming tasks, drawing an exterior landscape or an interior environment is a problem of finding a balance between detail and speed. This is much more so when dealing with 3d objects and realistic looking terrain. It's a simple matter: the more items you render, the more processing time you need. Because almost every 3d scene is drawn on a flat 2d bitmap which only has a very small, finite number of pixels, items beyond a certain distance will be reduced to a single pixel or less. This problem is normally handled by applying a form of clipping which discards things beyond a certain range. But this can cause the odd 'pop-up' or 'fade-in' object to appear out of thin air. This is where parts of the scenery, items or creatures appear from nowhere. Take a look at Terminator: Future Shock or Skynet to see what I mean. Sometimes laser bullets suddenly appear without warning because the gun turret or HK is beyond the Z-clipping threshold. It seems that ALL objects and explosions beyond a preset range are held in limbo until the player comes into range.

To help hide the pop-up problem many games use fade-in techniques to slowly introduce items to the player rather than surprise them with the sudden appearance of something. The old classic DOOM used a distance (Z) based fade-in method so that the horizon cut-off point is disguised. It also helps the player judge distances in a far more effective way. The technique of fading items to black as they move into the distance is called "Black Fog" and behaves in a similar way to mist or fog. In Hexen some levels used "White Fog" to give the impression of mist, but the idea is the same. And flight simulators use blue or grey as their horizon colours, just like the way the real atmosphere appears.
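
For what it's worth, here is a minimal sketch of the fade idea, assuming 24-bit RGB colours and two made-up thresholds (fade_start, fade_end); none of these names come from the games above. The fog colour would be black for "Black Fog", white for mist, blue or grey for a flight-sim horizon.

        typedef struct { unsigned char r, g, b; } RGB;

        /* Blend a texel towards the fog colour according to its distance z.
           Closer than fade_start it is untouched, beyond fade_end it is pure fog. */
        RGB fog_blend(RGB texel, RGB fog, float z, float fade_start, float fade_end)
        {
            float t;

            if (z <= fade_start) return texel;
            if (z >= fade_end)   return fog;

            t = (z - fade_start) / (fade_end - fade_start);   /* 0..1 fog amount */

            texel.r = (unsigned char)(texel.r + t * (fog.r - texel.r));
            texel.g = (unsigned char)(texel.g + t * (fog.g - texel.g));
            texel.b = (unsigned char)(texel.b + t * (fog.b - texel.b));
            return texel;
        }

In a 256-colour palette engine the same effect is usually achieved with pre-built colour remap tables indexed by distance band rather than per-channel arithmetic.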

The main dilemma for programmers is where to place the horizon cut-off point. Place it too far away and the game engine crawls to a sluggish pace; place it too near and the game might race along, but the object pop-up makes it too difficult to play.

1: International Interpolation

By far one of THE most useful techniques for programmers is interpolation. It's a fairly recent phrase and most coders had been using it for years before they knew what it was called. The idea is simple: take two pieces of data (two known values) and then interpolate between them to calculate the value at ANY point in the interval. One of the easiest to understand examples of this is linear interpolation, used for drawing lines or calculating the intersection of a line against a screen edge.
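
As a quick illustration, here is a minimal linear-interpolation sketch in C (all names are my own): given two known values it returns the value at any fraction of the interval, or steps across the interval with one addition per step, the way a line drawer or edge scan-converter does.

        /* Return the value a fraction t (0..1) of the way from a to b. */
        float lerp(float a, float b, float t)
        {
            return a + t * (b - a);
        }

        /* Step from a to b in 'steps' equal increments, one add per value,
           writing steps+1 results into out[]. */
        void interpolate_span(float a, float b, int steps, float *out)
        {
            float value = a;
            float step  = (b - a) / (float)steps;
            int   i;

            for (i = 0; i <= steps; i++) {
                out[i] = value;
                value += step;
            }
        }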

The 'voxel landscape' technique uses interpolation to smooth between the 'voxels' (the corner vertices of the map grid). Polygon drawing routines also use this method to scan-convert each side using linear interpolation between 2 corner vertices. You don't have to use a straight-line 'linear' method, you can use curves, splines and the like, but of course the more complex the interpolation the slower it will be.

One of the many advantages of interpolation is the fact that with a minute number of data values it is possible to create an infinite number of values simply by interpolating more between the data points. This is great from a CPU point of view as only a very, very few memory reads are required and then the CPU can generate the pseudo-data using an algorithm. It is also good from a storage point of view because only the limits and possibly a parameter or two are required to describe a curve or line.

2: MIP-Terrain-Maps

Again this technique can be thought of in terms of our old friend the 'hierarchical data structure'. The C.A.N. method (Coarse Area Nets) could be described as a collection of root cells, each covering a group of map cells beneath it. This 'might' be a good way to render landscapes or other such environments which need a very distant horizon cut-off. The terrain map could use very coarse cells for the distant mountain range that is miles from your view-point and smaller, fine-detail cells for closer items. This way all the items reduced to a single pixel or less could be filtered out in favour of larger, more important structures. The renderer would need to adjust its working map scale as the landscape is drawn. You can think of this as a kind of terrain MIP-Mapping, possibly using multiple maps or data trees.
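
A rough sketch of the idea, assuming the map has been pre-filtered into a handful of levels where each level halves the cell resolution (level 0 being the finest); the names, the distance rule and the thresholds are illustrative guesses of mine, not part of the C.A.N. description.

        #define LEVELS 4

        typedef struct {
            int            width, height;     /* cells at this level          */
            unsigned char *height_map;        /* one height byte per cell     */
        } TerrainLevel;

        TerrainLevel terrain[LEVELS];         /* level N built by averaging
                                                 2x2 blocks of level N-1      */

        /* Drop one level of detail roughly every time the distance doubles
           past 'near'. */
        int pick_level(float z, float near)
        {
            int level = 0;
            while (level < LEVELS - 1 && z > near) {
                z *= 0.5f;
                level++;
            }
            return level;
        }

        /* Sample the terrain height at fine-map cell (cx,cy) for a given
           distance from the view point. */
        unsigned char terrain_height(int cx, int cy, float z, float near)
        {
            int level = pick_level(z, near);
            int x     = cx >> level;          /* each level halves the count  */
            int y     = cy >> level;
            return terrain[level].height_map[y * terrain[level].width + x];
        }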

But there are problems with this method: storage space and the extra processing needed to select the appropriate map for the current scale. If the rendering of the landscape occurs in a predictable way (e.g. view-point --> horizon) then the selection process for the correct scale might be reduced to using a counter, or even to pre-processed/hard-coded instructions or data tables.

3: Realtime Creation

If the horizon is extended to a more realistic distance from the view point then the scenery rendering must be even more efficient than it would normally need to be. The great increase in workload is mainly due to the larger number of items (hills, rocks, creatures, trees etc.) which need to be handled. Every exterior landscape I have seen has tended to look very barren, with a few trees here and there to break up the wide open spaces. This is because exterior items do not look as geometric as the insides of buildings do. Trees have transparent gaps between leaves and branches, and their silhouettes have a very random, irregular shape to them. This means drawing trees, bushes and hedges often requires a pixel-based sprite routine to overlay them onto the background.

To create a very dense forest would require a vast number of trees to cover just a small area of ground. On top of the task of drawing these mis-shapen objects is the task of rotating and projecting (not to mention sorting) them. If the extra storage is not a problem, then the extra processing is.

One idea could be to generate parts of the scenery as it is being rendered, based on distance and terrain type. For example a distant hill might require a few 'forest' bitmap sprites to give the impression of a far away, dense wood, but a close-up forest might need to have a few hundred trees and randomly placed rocks on the forest floor. So given a starting cell, terrain type and level of detail it should be feasible to create a highly complex looking environment, using interpolation for the floor with a pseudo-random number generator to drop plants/trees every Nth pixel or so...
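
A small sketch of that last idea, assuming each map cell seeds its own pseudo-random generator from its coordinates, so the same trees reappear in the same places every time the player walks back without anything being stored. The hash and LCG constants are just well-known ones picked for illustration.

        #include <stdio.h>

        /* Cheap integer hash: turns a cell coordinate pair into a repeatable seed. */
        static unsigned int cell_seed(int cx, int cy)
        {
            unsigned int h = (unsigned int)cx * 73856093u ^ (unsigned int)cy * 19349663u;
            h ^= h >> 13;
            h *= 0x45d9f3bu;
            h ^= h >> 16;
            return h;
        }

        /* Scatter 'density' trees inside one cell without storing any of them. */
        void populate_cell(int cx, int cy, int density)
        {
            unsigned int seed = cell_seed(cx, cy);
            int i;

            for (i = 0; i < density; i++) {
                seed = seed * 1664525u + 1013904223u;          /* LCG step */
                printf("tree at cell(%d,%d) offset(%u,%u)\n",
                       cx, cy, (seed >> 16) & 255u, (seed >> 8) & 255u);
            }
        }

        int main(void)
        {
            populate_cell(10, 42, 3);     /* always drops the same 3 trees */
            return 0;
        }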

4: Volume Blinds

I have already highlighted a problem with rendering exterior scenery when I mentioned the forest example. The main bottleneck is the overlaying of all those trees, bushes and hedges over the background. With the background (grass, water, rock etc.) it is possible to use an S-BUFFER (span buffer) technique to avoid overdrawing pixels or polygons, but there is no quick way to speed up overlaying multiple layers of sprites. Each layer must be drawn and then possibly overdrawn by closer layers. This is also the case when drawing those five-mile-away items which are reduced to a few pixels. At this point most of the very small items would be immediately rejected. This would help reduce the number of items that need to be processed, but the forest would slowly become less and less dense as entire trees were rejected in order to reduce their number.

An idea I have had is to cheat when drawing items like trees, hedges, bushes and other such sprite-like objects. If the landscape items themselves are reduced to a pixel or less in size then so are the transparent gaps between leaves. Also, once objects are beyond a certain distance the parts of the background which could be seen through the gaps become more difficult to see, so they could be anything.

As a solid sprite/texture is much faster to draw than a partly transparent one with holes in it, why not ignore the gaps beyond a certain distance? This way a tree would become a solid looking polygon and its silhouette used to mask out any background pixels. It's similar to making a glass window opaque beyond a certain range, stopping the interior scene behind it from being seen. In the example of a dense forest an entire block of many trees can be quickly replaced with a solid cube covered in a forest-like texture. To help disguise the straight edges of the opaque cube volume you could draw narrow leaf sprites along the edges.
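
Here is a minimal sketch of the cheat for a single sprite, assuming an 8-bit screen where colour 0 marks the transparent gaps (both assumptions of mine): beyond a chosen distance the gaps are simply filled with a foliage colour so the tree blits as a solid silhouette.

        #define TRANSPARENT 0

        void draw_tree(unsigned char *screen, int screen_w,
                       const unsigned char *sprite, int w, int h,
                       int sx, int sy, float z, float solid_distance,
                       unsigned char gap_colour)
        {
            int x, y;

            for (y = 0; y < h; y++) {
                for (x = 0; x < w; x++) {
                    unsigned char texel = sprite[y * w + x];

                    if (z > solid_distance) {
                        /* far away: ignore the gaps, the tree becomes solid */
                        if (texel == TRANSPARENT) texel = gap_colour;
                        screen[(sy + y) * screen_w + sx + x] = texel;
                    } else {
                        /* close up: normal masked blit, gaps show the background */
                        if (texel != TRANSPARENT)
                            screen[(sy + y) * screen_w + sx + x] = texel;
                    }
                }
            }
        }

In a real renderer you would pick one of two dedicated blit routines before the loops rather than test z per pixel; the point here is only to show where the gaps stop mattering.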

This feather-edging could be used to simulate long grass, water or any other type of terrain which doesn't have a solid edge to it. We have used a mostly solid polygon with a fluffy surround to fool the player into believing that a complex, multi-layer scene has been rendered.

In effect we have pulled down the blinds on a complex, distant scene with multiple layers of objects to make it into a far simpler, opaque 3d model like a cube. But by ignoring the transparent parts of an object we hide everything behind it, which could be bad if it is an enemy soldier or tank. But then again, if something nasty is IN the forest then it would appear darker and camouflaged by the trees and branches.

5: Flat Aerial Maps

Rather than actually draw individual trees, rocks and proper 3d buildings there is another technique which is favoured by flight-sim programmers. In these games/simulations the need for a very distant horizon is extremely important, far more so than true detail. The cheat is very simple and FAST: use some aerial photograph-like textures to draw the scenery. This way we have a semi-nice looking landscape for screen shots, with thousands of highly detailed objects on the ground.

"Wow! You can even seen the road markings on the streets!"

So basically all the complex, processor hungry tasks of rendering thousands of buildings, trees, grass, rivers and static items can be replaced with a pre-drawn 2d bitmap. Take a look at Flight Unlimited and other flight-sims to see what I mean.

This does mean those FLAT items on the ground are not true 3d and have no parallax or changeable view point. Also these aerial bitmaps are usually large in size compared to normal 32x32 or 64x64 pixel bitmaps. This means they chew up a lot more memory and will probably cache very badly in the CPU, but as a huge area is being filled (perhaps only a dozen or fewer polygons for the entire screen) processor time is saved elsewhere, i.e. from not having to render hundreds of small structures.

The huge saving in processing time makes this attractive to us speed-freaks. Combined with MIP-Mapping and other such distance based techniques this is a good way to boost performance without all the hard work. I suggest using these aerial maps for the most distant, near-horizon parts of the scenery where most 3d models would just be pixels or blobs. Or possibly for the ground behind trees to create the impression of a dense forest.

6: Sprites & Crosses

I can't believe I got this far without describing this method. Anyone who has played Wolfenstein, Doom, Heretic, System Shock etc. etc. will know this method. Basically instead of a true 3d model a flat 2d bitmap sprite is drawn. The item is usually pre-drawn in 8 or more directions and the one closest to the desired direction is chosen and scaled onto the screen. Usually the 2d sprite is scaled using the Z-distance from the view point and remains parallel to the view-plane. This does mean that items sometimes skate along and always face towards you even though you may be looking down on them.
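
For what it's worth, here is a small sketch of how the 'one of 8 pre-drawn views' might be chosen, using the angle between the object's facing direction and the line back to the viewer; the frame numbering is an assumption, not how any particular engine lays out its sprites.

        #include <math.h>

        #define DIRECTIONS 8

        int sprite_direction(float object_angle,          /* which way it faces */
                             float obj_x, float obj_y,    /* object position    */
                             float view_x, float view_y)  /* player position    */
        {
            /* angle from the object towards the viewer */
            float to_viewer = (float)atan2(view_y - obj_y, view_x - obj_x);

            /* relative angle, folded into 0..2*pi */
            float rel    = object_angle - to_viewer;
            float two_pi = 6.2831853f;
            while (rel < 0.0f)    rel += two_pi;
            while (rel >= two_pi) rel -= two_pi;

            /* round to the nearest of the 8 pre-drawn views */
            return (int)(rel * DIRECTIONS / two_pi + 0.5f) % DIRECTIONS;
        }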

This is a quick method for rendering complex models without the need for back-face culling, polygon scan-converting, texture mapping or shading as all these can be done when the item is pre-drawn by a graphics artist. But the amount of memory needed for all the animation frames for each direction can quickly add up into many megs. My own opinion on using 2d sprites is that they should only be used for non-descript items or garnish effects like explosions, rocks, sea splashes and so on.

Even some of the military flight simulators use this flat 2d sprite technique for trees and explosions, but instead of using a single bitmap always parallel to the view-plane they use two or more intersecting sprites to form a cross or star shaped structure. This does give a much greater sense of realism because they can be rotated and scaled just like other textured polygons. Unfortunately you can still see how these items are constructed.

7: Mips & Bits

It is nice to have a distant horizon so more of the background can be seen; this gives the player more time to react to oncoming hazards such as enemy tanks, cliffs or incoming missiles. But simply pushing the horizon cut-off point further away causes more problems than just a large increase in visible terrain and the number of items to be processed. There is the ugly problem of flashing pixels caused by the linear scaling of bitmaps. This is where small details are skipped, then shown again as the item's scale changes. Techniques such as MIP-mapping can be useful to reduce this flashing pixel problem. As models and parts of the terrain move into the distance they become much smaller and so more and more pixels are skipped over to approximate the new perspective size. With MIP-mapping a smaller 'averaged' bitmap is chosen instead of scaling a large one down (see the sketch after the lists below). This is good in some ways:

1. The CPU is reading a smaller bitmap, so it will cache much better as a smaller number of pixels (bytes) need to be used. More importantly, far fewer pixels are skipped; the CPU cache is VERY, VERY bad at reading randomly accessed fragments of memory, it prefers sequential blocks.

2. The horrid flashing artifacts are reduced because pixels are pre-combined instead of just taking every Nth one.

3. The averaging actually increases realism because you can think of it as a form of blurring or de-focusing just like real photography.

But the bad points are:

1. More memory is needed to store the extra MIP bitmaps.
2. More processing is required to select the correct MIP bitmap.
3. Some detail is lost due to the pixel averaging.
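
As promised above, here is a rough sketch of how the 'correct MIP bitmap' might be selected, assuming each level is half the width/height of the one before (64x64, 32x32, 16x16...). The idea is to pick the level whose texel-to-pixel ratio is closest to one, so as few texels as possible are skipped; the 'more than twice as big' rule and all the names are my own illustration.

        int pick_mip_level(int texture_size,      /* e.g. 64 for the finest level  */
                           int projected_size,    /* on-screen size of the polygon */
                           int max_level)
        {
            int level = 0;

            /* step down a level each time the texture is more than twice as
               big as the area it will cover on screen */
            while (level < max_level && texture_size > projected_size * 2) {
                texture_size >>= 1;
                level++;
            }
            return level;
        }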

All this MIP-mapping is fine for solid bitmap textures, but what about overlaid textures like fences, grills and bars? How can you average a transparent pixel with an opaque one? Should it be forced to transparent or opaque?

The other problem with fence-like textures is the old one of flashing artifacts when they are scaled. This is where parts of the fence either turn solid (because the transparent pixels are skipped) or turn transparent so big holes appear. So one moment a fence is normal, then as it moves away it might become solid and then totally transparent at a certain distance.

Here are a few ideas to help disguise this problem:

1. Don't use very fine fence or grill textures, instead use a much larger giant bar-code method, i.e. use a wood fence instead of a fine wire mesh fence.

2. Use custom CPU routines to draw the fences. This would be faster than allowing a general bitmap to be overlaid as the transparent pixels are not encoded into instructions, but the flexibility is lost.

3. Instead of drawing the background and then the fence, try simply shading the background to 50% or 25% normal brightness. This is easy to do using the INLET rendering method, just set the master brightness level to 50% or scale the shading values before drawing the polygon.

4. Don't allow transparent textures. I know it's very lame, but it would help speed things up quite a bit.

5. Ignore the transparent gaps in the textures when they go beyond a certain distance. This could mean trees and bushes becoming solid polygons thereby hiding part of the background.

8: Squashed Horizons

Most (i.e. all) 3d engines use a Cartesian based coordinate system to define objects and Earth-like terrain. This means that the origin surface is defined as a flat, infinite plane, usually Y=0 (or Z=0 depending on your X,Y,Z system). But the Earth is almost a sphere, so in fact spherical coordinates should really be used. This would make wrap-around navigation of the planet/world very simple, but translations are much more difficult to implement than with a flat, plane based Cartesian system.

Thinking of the Earth as a solid sphere means it has the following characteristics:

1. The finite surface wraps around.

2. There is no edge to the surface.

3. It is impossible to see the entire surface in one go.

4. Objects located on this spherical surface will appear to rotate over the horizon and disappear.

Using a flat plane based map means movement, gravity, distance and line-of-sight calculations are much simpler because a straight line can be used instead of working out the path across a sphere. But it does cause a few problems:

1. The surface does not wrap, it should be infinite but there is usually a memory or coordinate limit.

2. There is an edge, due to the range of the coordinates used. Again this is usually determined by memory space.

3. From certain locations all of the objects CAN be seen (i.e. standing at one edge and looking across the map).

4. All objects appear to travel in a straight line and do not rotate like on a sphere, they only change in size.

The wrap-around problem can easily be solved using either a limit check or a power-of-2 world size with a logical AND operation to produce the same wrap effect quickly. A lot of games programmers and designers use a restrictive form of map where impassable obstacles are placed around the play-area to stop the player wandering off into the sunset or, worse, wandering through memory and possibly crashing. Mountains are the favourite form of this 'feature', used in games like Jedi Knight and Terminator: Future Shock. Others use giant walls or the sides of buildings to stop people escaping. The problem with this method is that it sort of defeats one of the main advantages of 3d games: freedom of movement. This is a reason why so many inside-building environments have been used instead of exterior, go-anywhere ones. Also, because buildings and even huge complexes are much smaller than an entire world they are usually quicker to render, with plenty of dead-end walls to prevent the whole level from being seen in one go.
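
As a tiny sketch of the power-of-2 trick mentioned above (the map size and names are just examples): with a map 1024 cells across, masking with 1023 makes coordinates wrap with one AND instead of a compare-and-branch limit check.

        #define MAP_SIZE 1024                    /* must be a power of 2 */
        #define MAP_MASK (MAP_SIZE - 1)

        unsigned char map[MAP_SIZE][MAP_SIZE];

        unsigned char read_cell(int x, int y)
        {
            /* -1 wraps to MAP_SIZE-1, MAP_SIZE wraps to 0, and so on */
            return map[y & MAP_MASK][x & MAP_MASK];
        }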

From the flight-simulators I have seen they appear to use 4 or more tricks to help disguise the plane based world which they use for terrain and buildings.

1. Use 'magic horizon' objects. This is where objects and terrain suddenly appear from thin air.

2. Use black-fog or white-mist. Objects and terrain are faded into the background/horizon colour which is usually black, white or sky blue. DOOM used black-fog and HEXEN used white on certain levels to give the impression of mist.

3. Use a 'cut-off' or black curtain. This is like the black-fog method except NO fading is done. Items are simply clipped to a horizon-plane parallel to the view-plane. Terminator Future Shock does this badly! It means parts of items are cut off parallel to the screen.

4. Use 'horizon squashing'. Usually the Y axis coordinates are scaled down and the object is pushed down below the horizon. This gives a fake impression of a true horizon where objects look like they have fallen off the edge of the world. This squashing of a single axis makes it look like objects or the landscape are rotating away from the view-point due to the curvature of the Earth. It would be more correct to rotate them down below the ground level.

The 'squashing' trick can be done faster than a proper rotation and usually the matrix or the perspective calculation can be fudged to perform this single axis scaling. I haven't tried this trick yet, but I reckon that because the object/terrain will be far away (so very small) most players won't be able to tell the difference, unless structures are very, very tall.
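
Since I haven't tried it, treat the following as a speculative sketch of one possible fudge: beyond a chosen distance the Y coordinate is scaled down and the point pushed below eye level, with the squash growing as the horizon is approached. The curve, the constants and the names are pure guesswork on my part.

        typedef struct { float x, y, z; } Vec3;

        /* Squash a point towards the horizon. squash_start is where the effect
           begins, horizon_z is where the ground should have 'curved' completely
           out of sight. */
        void squash_towards_horizon(Vec3 *v, float squash_start, float horizon_z)
        {
            float t, drop;

            if (v->z <= squash_start) return;            /* near points untouched */

            t = (horizon_z - v->z) / (horizon_z - squash_start);
            if (t < 0.0f) t = 0.0f;                      /* clamp beyond horizon  */

            drop = (1.0f - t) * 50.0f;                   /* world units to sink   */

            v->y = v->y * t - drop;                      /* flatten and push down */
        }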

9: Moving the Earth

As far as I know NO 3d engines have ever used spherical coordinates for their world maps. I believe some use polar or spherical based coordinates for objects, but not for the terrain itself. As the Earth is such a large sphere (about 12800 km diameter) it appears to be flat at very close range. Even at 20,000 feet the ground still looks very flat, but in truth it is not. A late friend of mine pointed out one problem with using a flat plane for the ground; that of corners. He created a demo some time in 1990/1991 where a large rectangle was used as the ground, then moved over it and rotated his view angle. Sensible enough, eh? But of course at certain angles the corners of the ground rectangle could be seen. At the time I hadn't thought about this much, but now the solution is really quite simple. The ground DOES NOT move but the items on it DO. You don't really need to define a rectangle, you can use a straight line across the screen to represent the horizon. This horizon line only needs two Y coordinates to be calculated (one for each end); these are found from the altitude, angle to the ground and side-to-side tilt angle.
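
Here is a rough sketch of how those two end-point Y coordinates might be found for a simple pin-hole projection. For a flat-plane world the horizon sits at eye level, so the look-up/down angle and the side-to-side tilt do most of the work, and altitude only matters if you add the Earth's curvature; the names and the sign conventions are assumptions of mine.

        #include <math.h>

        /* Screen Y of the horizon line at the left and right screen edges.
           'focal' is the projection distance in pixels; the signs depend on
           whether your Y axis runs up or down the screen. */
        void horizon_endpoints(float pitch, float roll,          /* radians */
                               int screen_w, int screen_h, float focal,
                               float *y_left, float *y_right)
        {
            float centre = screen_h * 0.5f - focal * (float)tan(pitch);
            float tilt   = (screen_w * 0.5f) * (float)tan(roll);

            *y_left  = centre - tilt;
            *y_right = centre + tilt;
        }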

10: Silhouettes

This is a very simple and realistic effect which can be used to gain some extra speed when drawing the most distant items. When combined with the fade-into-the-background technique this can save plenty of valuable processor time. The method works like this: when a polygon exceeds a certain distance and the fading has begun, the normal texture-mapping with shading is replaced with a flat, solid polygon filling routine. This means only the outline of an object is seen and not any of its surface detail.
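
A minimal sketch of the switch, with empty placeholder polygon routines standing in for whatever the engine really uses; the pre-computed average colour per texture is my assumption about how the outline colour would be chosen.

        typedef struct {
            unsigned char *texels;
            int            width, height;
            unsigned char  average_colour;      /* pre-computed once at load time */
        } Texture;

        /* placeholders standing in for the engine's real polygon fillers */
        static void texture_map_poly(const Texture *t)  { (void)t; }
        static void flat_fill_poly(unsigned char colour){ (void)colour; }

        void draw_poly(const Texture *t, float z, float silhouette_distance)
        {
            if (z > silhouette_distance)
                flat_fill_poly(t->average_colour);   /* distant: outline only    */
            else
                texture_map_poly(t);                 /* near: full surface detail */
        }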

11: Field of Vision Slices

With a distant horizon comes a huge amount of landscape data which must be processed, and the further the horizon the greater the amount of data. If the landscape is divided into a grid with object links from each cell then this can help speed up some of the work. If an entire cell can be rejected (behind the view point or beyond our field-of-view (FOV)) then so too can all of the objects in it. As you move away from the view point more and more of the landscape can be seen because the scale of each cell is getting smaller (it's further away). Even the most basic, simple test can take time when applied to each and every map cell. It would be nice to avoid having to do most of them if possible.

I'm sure that most of you have seen roto-zoomers in demos before where the entire screen is filled with a bitmap which rotates around one point just like a steering wheel. The effect is very easy to code and could look like this:

                        Xstep = COS(angle)
                        Ystep = SIN(angle)

                FOR y = 0 to height-1
                        h = -y * Ystep
                        v = y * Xstep

                        FOR x = 0 to width-1
                                plot (x,y),bitmap(h,v)
                                h = h + Xstep
                                v = v + Ystep
                        NEXT x

                NEXT y

Basically it steps across the bitmap by incrementing the (h,v) coordinates using the Xstep and Ystep values. These are just the X and Y increments for the desired direction.

Well, this technique could be applied to a grid based landscape map with the view point being the centre of rotation (the origin) and each pixel being a single map cell. Starting at the view point, a slice across the map parallel to the view plane (i.e. at a constant Z distance) could be scanned and each cell's objects could be drawn or placed in a depth buffer (for a back-to-front rendering). So we scan the map at right angles to our viewing direction. You should remember that each map slice will get longer, with more cells needed at both ends, as we move into the distance. The map is scanned in an isosceles-triangle-like way with the view point at the peak of the triangle and its two equal sides marking out our field-of-vision.

This method is good because we are only dealing with the visible parts of the map (possibly only a 90 degree segment) and so those map cells behind the view point are never processed, which saves a lot of time. There is another way in which the map slices can help rendering. Because we are only scanning map cells within our field-of-vision we already know they are potentially visible, so some of the clipping stages can be skipped. Possibly only the outer boundary cells will need to be checked and clipped. Those map cells between the ends of a slice can be drawn without too much work, and that can't be bad news.
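
Below is a rough sketch of the slice scan, reusing the rotozoomer idea: the forward vector steps one slice deeper per pass and the sideways vector walks along the slice, which grows by one cell at each end per step (a 90-degree field of view, so the half-width equals the depth). visit_cell() is a placeholder for whatever the renderer does with a cell, and all the names are mine.

        #include <math.h>

        #define MAP_SIZE 1024
        #define MAP_MASK (MAP_SIZE - 1)

        static void visit_cell(int cx, int cy) { (void)cx; (void)cy; }

        void scan_fov(float view_x, float view_y, float view_angle, int max_depth)
        {
            float fx = (float)cos(view_angle);     /* forward step per slice    */
            float fy = (float)sin(view_angle);
            float rx = -fy;                        /* sideways step along slice */
            float ry =  fx;
            int   depth, s;

            for (depth = 0; depth < max_depth; depth++) {
                int half = depth;                  /* 90-degree FOV: tan(45)=1  */

                /* centre cell of this slice, straight ahead of the viewer */
                float cx = view_x + fx * (float)depth;
                float cy = view_y + fy * (float)depth;

                for (s = -half; s <= half; s++) {
                    int mx = (int)(cx + rx * (float)s) & MAP_MASK;
                    int my = (int)(cy + ry * (float)s) & MAP_MASK;
                    visit_cell(mx, my);
                }
            }
        }

For a back-to-front painter's rendering the outer loop would simply run from max_depth-1 down to 0 instead.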

TAD #:o)