delta9^salva mea^team23^oCCult

This paper is an introduction to raytracing; it should give you an idea of how things work. I also assume that you are familiar with coding, which means: you know how to set pixels, how to handle colors and all the other basics of graphics programming.

Also, this isn't meant as a tutorial on realtime raytracing, but, as I mentioned, an introduction to raytracing in general.

'nuff said, let's start:

What it is

Perhaps you have asked yourself why raytracing is slowly becoming popular in the scene, or what the difference is to (hardware-accelerated) polygon rendering.

The main difference is how things are evaluated. In raytracing, objects are not just calculated and projected to the screen; it works the other way round: rays are "shot" into the scene. For every ray shot into the scene, the intersections with all objects are calculated, and the raytracer determines which one is nearest and therefore visible.

In most cases, it is not even polygons that are evaluated, but "real" objects like spheres, planes, boxes and cylinders: objects that can be represented by formulae.

To continue with the description of the algorithm: if an object is reflective, another ray is spawned from the hit point with the reflected direction. Again, this ray is traced like the first one. This continues until no further object is hit or the maximum raytracing depth is reached.

To be precise, the algorithm is therefore called "Recursive Raytracing".

The Basic Algorithm

As I mentioned already, raytracing is a recursive procedure. This means the whole thing invokes itself to calculate reflected or refracted rays (or whatever). This may be a bit confusing for most people in the beginning. Nevertheless, the main part is rather short. Let's see some pseudo code:

void traceRay(ray, depth, &pixel) {
        for(all objects) {
                if(object is hit and it is the nearest hit so far) {
                        shade(pixel according to hit object properties);
                        if(object is reflective and depth < maximum) {
                                calculate_reflection(hit object);
                                traceRay(reflected ray, depth+1, pixel);
                        }
                        if(object is transmissive and depth < maximum) {
                                calculate_refraction(hit object);
                                traceRay(transmitted ray, depth+1, pixel);
                        }
                }
        }
}
Basically, that's it, that's what you do for every pixel on the screen.

Calculating Intersections

The all-important thing within a raytracer: how to compute the intersection between a ray and an object. The first thing to know here: what is a ray?

Let me call it back to your mind: a ray is as simple as a straight line. The same thing as

   y = m*x + t

but in 3D, which will look like this:

   (1.) Origin + t * Direction = Hit

Or, nicer to read:

        O.x + t * D.x = H.x
   (2.) O.y + t * D.y = H.y
        O.z + t * D.z = H.z

Once you have a ray shot into your scene, you may test it against various objects, which means, you try to get some results from some formula.

Hint: To get rays for every point on the screen, try starting like this:

  Vector Eye(0.0f, 0.0f, -128.0f);
  Vector LookAt(x, y, 0.0f);
  Ray a(Eye, LookAt);

Going through every x and y on the screen, you get lots of rays defining your "Field of View". You will probably have to offset x and y depending on the resolution, so that the view is centred on the screen.
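To make the hint above concrete, here is a small sketch of how the primary ray direction for each pixel might be built. The Vec3 struct, the helper functions and the function name are my own assumptions; only the eye distance of 128 comes from the hint:

```cpp
#include <cmath>

// Minimal 3D vector (assumed; your own Vector class will differ)
struct Vec3 {
    float x, y, z;
};

Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }

Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return {v.x / len, v.y / len, v.z / len};
}

// Build the primary ray direction for screen pixel (px, py).
// The eye sits at (0, 0, -128) and looks at the plane z = 0;
// x and y are offset by half the resolution to centre the view.
Vec3 primaryRayDir(int px, int py, int width, int height) {
    Vec3 eye    = {0.0f, 0.0f, -128.0f};
    Vec3 lookAt = {(float)(px - width / 2), (float)(py - height / 2), 0.0f};
    return normalize(sub(lookAt, eye));  // direction = LookAt - Eye
}
```

For the centre pixel of a 320x200 screen this gives the direction (0, 0, 1), straight into the scene.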

Let's check it out with a sphere. Again, one has to know how a sphere is defined. Spheres are rather simple to describe, which is why I've chosen them for this example. Just as a note: I'm talking about the algebraic solution, but there's also a geometric solution. If you are interested, you will find the other solution on the Internet very easily.

Intersecting Spheres

Ok, that's what a sphere looks like in math:

  (3.) x^2 + y^2 + z^2 = r^2

This means (x,y,z) is any point at distance r from the origin (0, 0, 0) of the coordinate system. But we might want to move the sphere around a bit later, so we modify this formula to take a centerpoint into consideration. This looks as follows:

  (4.) (x-cx)^2 + (y-cy)^2 + (z-cz)^2 = r^2

Now that we have a description of the object we want to hit, we can actually start and test for the real intersection. To achieve this, we substitute x, y and z with the ray equation (2.). This gives a quadratic equation of the form:

  (5.) at^2 + bt + c = 0


       a = Dx^2 + Dy^2 + Dz^2
  (6.) b = 2*Dx*(Ox-cx) + 2*Dy*(Oy-cy) + 2*Dz*(Oz-cz)
     c = cx^2 + cy^2 + cz^2 + Ox^2 + Oy^2 + Oz^2 - 2*(cx*Ox + cy*Oy + cz*Oz) - r^2

If the discriminant (b^2 - 4ac) of this equation is less than zero, the sphere is not intersected; otherwise there are one or two intersections, described by the real roots of the quadratic in (5.)

Re-substituting the values of t in (1.) gives you the real hitpoints in 3D.

For spheres, that's it.
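As a concrete sketch of (5.) and (6.), the intersection test might look like this. The function name and the plain float-triple representation are assumptions for this example, not code from any particular raytracer:

```cpp
#include <cmath>

// Solve (6.) for the nearest positive t; returns -1.0f on a miss.
// O = ray origin, D = ray direction, C = sphere centre, r = radius.
float intersectSphere(const float O[3], const float D[3],
                      const float C[3], float r) {
    float L[3] = {O[0] - C[0], O[1] - C[1], O[2] - C[2]};  // O - C
    float a = D[0]*D[0] + D[1]*D[1] + D[2]*D[2];
    float b = 2.0f * (D[0]*L[0] + D[1]*L[1] + D[2]*L[2]);
    float c = L[0]*L[0] + L[1]*L[1] + L[2]*L[2] - r * r;
    float disc = b * b - 4.0f * a * c;        // the discriminant
    if (disc < 0.0f) return -1.0f;            // ray misses the sphere
    float sq = std::sqrt(disc);
    float t0 = (-b - sq) / (2.0f * a);        // nearer root first
    if (t0 > 0.0f) return t0;
    float t1 = (-b + sq) / (2.0f * a);        // we may be inside the sphere
    return (t1 > 0.0f) ? t1 : -1.0f;
}
```

A negative return value means a miss; otherwise, re-substituting the returned t into (1.) gives the hitpoint in 3D.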

But this only tells you which point is nearest and visible. Not very nice to watch yet. The next thing is to shade the point. To do this, you need to know the normal at the hit-point.

Normals For Spheres

In this case normals are really simple to calculate. Think! Each point on the sphere has the same distance r from the centerpoint, which was given for the equation in (4.)

Now, all you have to do is find a vector from the center to the hitpoint. Easy thing:

  (7.) (Hitpoint - Centerpoint) / radius

You should divide by the radius to have the vector normalized.

Once you have your surface normal, you are able to determine the color in which the traced point appears under the light in your scene.

Shading And Lighting Surface Points

In the usual case, your scene is lit by one or more "Point-Lights": point light, because it has no extent in any dimension. This is very easy to represent; another simple vector will do to describe the light. All you have to do now is determine the angle between the normal and the light ray hitting your point. Since you have all the values you need, this should be a cake-walk too. And you probably learned in school that the way to go is the dot product:

light_vector = normalize(light - hit_point)
lighting_value = (light_vector % hit_normal)

What you should take care of is that (lighting_value) never gets negative, since a negative value means the point faces away from the light. You may also want to give (lighting_value) a small minimum standard value (an ambient term); otherwise, you won't see any objects that are not lit. Not quite correct, but nice. IMHO.
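Putting the normal from (7.) and the dot product together, the diffuse shading described above might look like this. The function name and the 0.1 ambient floor are assumptions for this sketch:

```cpp
#include <cmath>
#include <algorithm>

// Diffuse lighting value at a hit point on a sphere.
// hit = hitpoint, centre/radius = the sphere, light = point light.
// A small ambient floor keeps unlit surfaces visible.
float diffuse(const float hit[3], const float centre[3], float radius,
              const float light[3], float ambient = 0.1f) {
    // Surface normal: (Hitpoint - Centerpoint) / radius
    float n[3] = {(hit[0] - centre[0]) / radius,
                  (hit[1] - centre[1]) / radius,
                  (hit[2] - centre[2]) / radius};
    // Normalized vector from the hit point towards the light
    float l[3] = {light[0] - hit[0], light[1] - hit[1], light[2] - hit[2]};
    float len = std::sqrt(l[0]*l[0] + l[1]*l[1] + l[2]*l[2]);
    for (float &v : l) v /= len;
    // The dot product gives the cosine of the angle between them
    float d = n[0]*l[0] + n[1]*l[1] + n[2]*l[2];
    return std::max(d, ambient);  // clamp: never darker than ambient
}
```

A point facing the light directly gets the full value 1.0; a point on the far side of the sphere falls back to the ambient floor.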

Texture Mapping

Once you know how intense the lighting is at the hit point, you have two options to shade the object. The first one: your object has a solid color. But much nicer: your sphere is textured.

For the second case you have to find out what basic color your hit point has. As you know the centerpoint and the hitpoint, you can calculate your (u, v) coordinates using some trigonometric formulas. You should know what this is.

To cut it short, it works like this:

                u = |((PI/2) - arctan(x / z)) * (256/PI)|
                v = |arctan(y / z) * (256/PI)|

For some explanation: arctan(x / z) is the angle between the x and z values of the hitpoint on a centred sphere, and the factor maps that angle onto a 256-texel texture. If your sphere moves, you have to adjust the values by subtracting the centerpoint values first.

The pixel you get from these coordinates on your texture map provides the basic color for the shading I described above.
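The mapping above can be sketched like this; the function name and the fixed 256x256 texture size are assumptions (for other sizes, swap the scaling factor):

```cpp
#include <cmath>

const float PI  = 3.14159265f;
const float TEX = 256.0f;  // assumed texture size of 256x256 texels

// Map a point (x, y, z) on a centred sphere to texel coordinates,
// following the arctan formulas above. For a moving sphere,
// subtract the centerpoint from the hitpoint first.
void sphereUV(float x, float y, float z, float &u, float &v) {
    u = std::fabs((PI / 2.0f - std::atan(x / z)) * (TEX / PI));
    v = std::fabs(std::atan(y / z) * (TEX / PI));
}
```

For the point (0, 0, 1) on the unit sphere this lands in the middle of the texture's u range: u = 128, v = 0.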

That's probably the main thing of the non-recursive part. Let's move on.


Reflection

Remember the pseudo code from above: if the object that was hit is reflective, you have to calculate the direction of the reflected ray and start the whole thing over. Since you are following your eye ray (tracing it), the first part of this problem is already given. The other thing you need to know is the normal of the reflective surface, which you just calculated before.

To be honest, I took this formula from some book, as I couldn't figure it out myself. So I won't bother explaining, but basically, it's what you know from school: angle of incidence = angle of reflection.

reflection = normal * (normal % rayDirection) * -2 + rayDirection

Some Comments

I know this is rather basic stuff; it is only about tracing spheres, and lots of other stuff is missing. But it is still meant as an introduction to raytracing, and I really hope you get the idea. If I find some time before the next issue of Hugi, there may be a second article covering some further topics of raytracing. Hope you enjoyed this one.

Thanks to castla^team23 and d.fox^salva-mea for proof reading.

(c)2001 delta9^salva-mea^team23^oCCult (delta9@salva-mea.org)
WWW: http://www.team23.org
(main) http://rfhs8012.fh-regensburg.de/~nea33374/