Blasting the Bounds of Standard Math-Ops
Any programming language comes with a couple of standard math operators, which is good, as it would be pretty inefficient to re-code all of them oneself and to clumsily call them as functions. The math operators of C cover almost all of the most common mathematical operations, with the exception of power and square root. For most applications this is sufficient. However, they do have one disadvantage:
Since these standard operations work only with the standard data types, their bounds are limited. The largest data type for numbers C knows is unsigned long, which - depending on the compiler - is either 32 bit or 64 bit, thus making the maximum value either 2^32 - 1 = 4294967295 or 2^64 - 1 = 18446744073709551615. So what if you want to operate with larger numbers? You'll have to write your own operators.
C++ supports a nifty way that allows you to simplify the usage of self-defined operators: You can define your own new data types as classes, and then you can "overload" the existing operators with your own new, self-defined functions that process variables of the new data type. In this way, you can define e.g. a "big_integer_128_bit" and then use '+', '-', '*' and '/' just as you would do with regular ints, after implementing your own functions for these operations.
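To give you an idea of what this looks like, here is a minimal sketch (BigInt128 and its layout are my own illustration, not a standard type): a 128-bit unsigned integer built from two 64-bit halves, with an overloaded '+'.

```cpp
#include <cstdint>

// A minimal sketch of operator overloading (my own illustration, not a
// standard type): a 128-bit unsigned integer made of two 64-bit halves.
struct BigInt128
{
    uint64_t lo, hi;   // lo = low 64 bits, hi = high 64 bits

    BigInt128 (uint64_t l = 0, uint64_t h = 0) : lo (l), hi (h) {}
};

// Overloading '+': add the low halves first, then propagate the carry.
BigInt128 operator+ (const BigInt128 &a, const BigInt128 &b)
{
    BigInt128 r;
    r.lo = a.lo + b.lo;
    r.hi = a.hi + b.hi + (r.lo < a.lo);  // carry if the low half wrapped
    return r;
}
```

After that, a statement like BigInt128 x = a + b; works just as with regular ints; '-', '*' and '/' would be overloaded the same way.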
No matter whether you're planning to use this feature of C++ or whether you prefer to keep the clumsy C way where you have to call the functions directly, two questions remain:
1. how to define the custom data type, and
2. how to code the operators.
This little article is going to give you the answers to these two questions.
Let's first focus on positive integer numbers, i.e. numbers that are >= 0 and don't need a decimal comma because there would only be zeroes after it.
There are two reasonable ways of creating a numerical integer data type with bounds different than char, int and long. Both involve storing the number as an array of byte (char) which you access byte by byte. The difference between the two methods is the way you represent the numbers:
1. as integer values or
2. as characters (i.e. you define a string).
In the one variant, the value will be stored in an array of char where each element is in the range 0 .. 255.
In the other, the value will be stored in an array of char where each element is in the range '0' .. '9'.
Let's discuss both variants in more detail, beginning with the latter.
Storing Big Integers as Strings
Imagine a C program containing the following code line:
scanf ("%d", &n);
This permits the user to input a number; this number is consequently stored in the variable n, which probably is some sort of integer type. But what does scanf do before overwriting the value of n? It has to convert the user's input, which is a string of decimal digits, to a number.
The algorithm for that is actually pretty simple. Let's take a single-digit number at first, e.g. 6. If the user enters '6' using the keyboard, scanf is returned the ASCII code of '6', which is 54. In order to convert the character '6' to the number 6, scanf in theory would have to look up on an ASCII code table.
Fortunately, this isn't necessary because the characters '0' to '9' are stored in the ASCII code table as a sequence. Therefore we only have to subtract the ASCII code of '0' (that is 48) from the ASCII code of '6', and thus we get the correct number: 6.
In C, we write:
n_int = n_char - '0';
Now what if the entered string contains more than one character? Let's take "12" as an example. This notation is equal to:
10 * 1 + 2
"12" = 10 * ('1' - '0') + ('2' - '0')
The general formula for a four-character string "wxyz", where w, x, y and z are not to be taken literally but as variables for any numbers expressed as characters, would be:
"wxyz" = 10^3 * ('w' - '0') + 10^2 * ('x' - '0') + 10^1 * ('y' - '0') + 10^0 * ('z' - '0')
We could also express this as follows:
"wxyz" = ( ( ( 'w' - '0' ) * 10 + 'x' - '0' ) * 10 + 'y' - '0' ) * 10 + 'z' - '0'
Substituting 'w' with a [0], 'x' with a [1], 'y' with a [2] and 'z' with a [3], we get:
"wxyz" = ( ( ( a [0] - '0' ) * 10 + a [1] - '0' ) * 10 + a [2] - '0' ) * 10 + a [3] - '0'
Or, to express it in a more elegant way and for a string not necessarily consisting of 4 digits, but of any conceivable n digits:
r = 0;
for (i = 0; i < n; i ++)
{
    r *= 10;
    r += a [i] - '0';
}
This is the proof that it's possible to represent any positive integer number using a string.
If r is an unsigned long, the code above will, however, only return the correct value if it fits into the bounds of unsigned long. If it doesn't, r will be the true value modulo (1 + the maximum value an unsigned long can store).
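This wrap-around is easy to demonstrate (a sketch; overflowed_sum is my own helper name): unsigned arithmetic in C and C++ is defined to work modulo 2^N, where N is the bit width of the type.

```cpp
#include <climits>

// Unsigned arithmetic works modulo 2^N, where N is the bit width of the
// type: if a + b exceeds ULONG_MAX, the result wraps around to the rest.
unsigned long overflowed_sum (unsigned long a, unsigned long b)
{
    return a + b;
}
```

So ULONG_MAX + 1 silently becomes 0 - no error, no warning at run time.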
In other words: Using strings, we can store much larger integers than the standard C data types can, but we have to develop our own operator set in order to add values to strings representing big ints and perform subtraction, multiplication, division, modulo and the like.
Storing Big Integers as Integers
As a matter of fact, the way the standard integer data types are internally stored very much resembles the way strings are stored: any integer can also be expressed as an array of bytes (chars).
We can easily test this by creating a union of an integer type variable and an array of char. In this way we are able to access the individual bytes of the integer type variable. Check out the following example program:
#include <stdio.h>

union
{
    unsigned long l;
    unsigned char c [4];
} a;

int main ()
{
    unsigned long test [7];
    char i = 0;

    test [0] = 1;
    test [1] = 256;
    test [2] = 65536;
    test [3] = 65536 + 1;
    test [4] = 65536 + 256;
    test [5] = 65536 + 256 + 1;
    test [6] = 0;

    do
    {
        a.l = test [i];
        printf ("int: %lu\n"
                "char [0]: %u\n"
                "char [1]: %u\n"
                "char [2]: %u\n"
                "char [3]: %u\n\n",
                a.l,
                a.c [0], a.c [1],
                a.c [2], a.c [3]);
        i ++;
    } while (test [i]);

    return 0;
}
If you compile this program for an Intel PC, it will always produce the same results, no matter whether your compiler reserves 32 bit or 64 bit for unsigned long. The reason for this is a peculiarity of the PC which also constitutes the main difference between the ways you have to handle strings representing integers and integers that you access using an array of byte:
In a string s of n + 1 characters, the lowest decimal place is s [n], the second lowest is s [n - 1], and the highest is s [0]. By contrast, if it were possible to index an integer i in such a way that we could access its decimal places in the same way, the order would be reversed: the lowest decimal place would be i [0], the second lowest would be i [1], and the highest would be i [n].
In reality, integers can't be accessed decimal place by decimal place but byte by byte, i.e. the lowest byte is i [0] (which equals i % 256), the second lowest byte is i [1] (which equals (i / 256) % 256), and the highest byte is i [n] (which equals (i / 256^n) % 256).
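Byte k can also be extracted with plain arithmetic, without a union (a sketch; byte_of is my own helper name). Note that this version is independent of how the bytes are laid out in memory, because the shift operates on the numerical value:

```cpp
// Extract byte k of an unsigned long by value: (i / 256^k) % 256.
// Unlike the union trick, this works the same on every CPU, because the
// shift operates on the numerical value, not on the memory layout.
unsigned char byte_of (unsigned long i, int k)
{
    return (unsigned char) ((i >> (8 * k)) & 0xFF);
}
```

For example, byte_of (65793, 0), byte_of (65793, 1) and byte_of (65793, 2) yield 1, 1 and 1, since 65793 = 65536 + 256 + 1.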
This peculiarity of having a reverse order is due to the way Intel CPUs organize the memory: they are little-endian, i.e. they store the least significant byte first. It differs from big-endian systems such as the Amiga, where the order is the same as for strings.
However, it bears a little advantage: As already implied, it doesn't matter whether your integer type variable is 32 bit, 64 bit or perhaps even more - the lower bytes will always keep the same indices and the same values!
Now imagine a.l = 255, i.e. a.c [0] = 255 and a.c [1 .. 3] = 0, and you add 512 = 2 * 2^8 to a.l. This will only change the value of a.c [1], which will become 2. If you then add 134414336 = 8 * 2^24 + 3 * 2^16, only a.c [2] and a.c [3] will be affected: a.c [2] will become 3, and a.c [3] will become 8. (a.l will effectively get the value of 255 + 512 + 134414336 = 255 + 2 * 2^8 + 3 * 2^16 + 8 * 2^24.)
If we're using a compiler that reserves 32 bit for unsigned long, adding 4294967296 = 1 * 2^32 to a.l will not change its value since 2^32 is out of bounds. Now let's assume a.c were a variable of our custom type, an array of char that represents a big int. Then this addition would simply set a.c [4] = 1, and we'd have the correct result - provided we've reserved at least five bytes for a.c.
String vs. Integer
The reverse order I've just mentioned definitely is an advantage of integers over strings, at least on Intel PCs. Imagine a similar situation as described above involving a string-encoded number: Adding 100 to "23" (which is s [0] = '2', s [1] = '3' and s [2] = '\0') does not permit us to simply set one additional element; instead, we have to change all characters: s [0] = '1', s [1] = '2', s [2] = '3' and s [3] = '\0'. Such procedures are not only a bit slower but also a little more tedious to implement.
Another advantage of the integer method is that it occupies less space: a variable will only require 1 + log256(n) bytes instead of 1 + log10(n) bytes.
On the other hand, the benefit of using strings is that this way it's easy to work with values entered by the user.
Perhaps the optimum would be to have a function that converts string-encoded big ints to integer-encoded big ints.
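Such a converter can be sketched as follows (the function name and calling conventions are my own; it assumes the integer-encoded result stores the lowest byte first, as on the PC). It repeatedly performs a schoolbook division of the decimal string by 256; each remainder becomes the next byte of the result.

```cpp
#include <cstring>

// Sketch of a converter from a string-encoded big int to an
// integer-encoded (base-256) big int, lowest byte first.
// The decimal string is divided by 256 in place, once per output byte;
// each division's remainder is the next byte. out_size bytes are written.
void string_to_bigint (char *decimal, unsigned char *out, int out_size)
{
    int len = (int) std::strlen (decimal);

    for (int k = 0; k < out_size; k ++)
    {
        unsigned int temp = 0;

        // one pass of schoolbook division by 256, highest digit first
        for (int i = 0; i < len; i ++)
        {
            temp = temp * 10 + (decimal [i] - '0');
            decimal [i] = (char) (temp / 256 + '0');
            temp %= 256;
        }

        out [k] = (unsigned char) temp;  // the remainder is byte k
    }
}
```

For example, "65537" becomes the bytes 1, 0, 1, 0, since 65537 = 1 + 1 * 2^16. Note that the function destroys the decimal string in the process; copy it first if you still need it.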
The definition of a big int type variable as
unsigned char bigint [n]; // replace n with the size of your choice
// - the bigger, the better
is trivial; and it isn't that difficult to implement functions that replace the standard operators either. All you need is to recall the way you learned to do addition, subtraction, multiplication and division with numbers of more than one decimal place at elementary school. This is exactly how to perform these operations on string-encoded decimals.
For example, take the following integer division function:
unsigned long divide (unsigned char *input, unsigned int divisor)
{
    unsigned long output = 0;
    unsigned int temp = 0;

    while (*input >= '0' && *input <= '9')
    {
        output *= 10;
        temp += (unsigned int) *input - '0';
        output += temp / divisor;
        temp = (temp % divisor) * 10;
        input ++;
    }

    return output;
}
It divides a string-encoded decimal called input by an unsigned int called divisor. Like at elementary school, it first performs an integer division of the highest decimal place of input by the divisor and stores the remainder in the variable temp. Then temp = temp * 10 + the second highest decimal place of input, and the same procedure is repeated until all decimal places of input have been processed this way. The final remainder will be discarded.
Replace the division of temp by divisor with a multiplication (and drop the remainder handling), and you will get a multiplication function.
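Such a multiplication function could be sketched like this (my own variant, following the conventions of the division function above; the result must of course still fit into an unsigned long):

```cpp
// Sketch of a multiplication of a string-encoded decimal by an unsigned
// int, processing the digits from left to right: each step shifts the
// partial result one decimal place and adds the next digit's product.
unsigned long multiply (const unsigned char *input, unsigned int factor)
{
    unsigned long output = 0;

    while (*input >= '0' && *input <= '9')
    {
        output = output * 10 + (unsigned long) (*input - '0') * factor;
        input ++;
    }

    return output;
}
```

For example, multiplying "123" by 4 yields 492.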
The algorithms for addition and subtraction are a piece of cake, so I'll leave them for you as an exercise. (In fact the subtraction function is also included in the example program in the bonus pack to this issue, so you can check whether your algo is correct even before running your own program.)
Another (more difficult) exercise is: Write functions that do not only accept one string-encoded parameter (while the other remains a standard unsigned int or unsigned long) but where both parameters are string-encoded decimals. The values these functions return should also be string-encoded decimals.
The functions for integer-encoded decimals are basically the same, but you'll have to replace 10 with 256 and remove every appearance of " - '0'".
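As a sketch, here is the base-256 counterpart of the division function (my own variant, following the byte order discussed above): since input [0] is the lowest byte, the schoolbook division starts at the highest index, and a length parameter is needed because a zero byte can no longer serve as a terminator.

```cpp
// Sketch of an integer division of an integer-encoded (base-256,
// lowest byte first) big int by an unsigned int. Same schoolbook
// algorithm as the string version, with 10 replaced by 256 and no
// " - '0'"; temp is unsigned long so (temp % divisor) * 256 can't overflow.
unsigned long divide256 (const unsigned char *input, int len,
                         unsigned int divisor)
{
    unsigned long output = 0;
    unsigned long temp = 0;

    for (int i = len - 1; i >= 0; i --)   // highest byte first
    {
        output *= 256;
        temp += input [i];
        output += temp / divisor;
        temp = (temp % divisor) * 256;
    }

    return output;                        // the final remainder is discarded
}
```

For example, the bytes 1, 0, 1 encode 65537, and dividing them by 2 yields 32768.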
Actually, the source of inspiration for this article was a program I had created that calculated a modulo n, with a being a string-encoded decimal and n being an unsigned int. I had written this program because I had found out (I know, I'm not the first in the world to make this discovery) that a modulo 9 is the same as b modulo 9, where b is the sum of the digits of a. This was a way to calculate a modulo n for values of a greater than the maximum value supported by the largest standard integer data type of C.
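The digit-sum fact is easy to try out with a few lines (a sketch with my own helper name):

```cpp
// Sum the decimal digits of a string-encoded number. The sum has the
// same remainder modulo 9 as the number itself, so huge values can be
// reduced modulo 9 without ever converting them to a standard integer.
unsigned int digit_sum (const unsigned char *a)
{
    unsigned int sum = 0;

    while (*a >= '0' && *a <= '9')
        sum += *a ++ - '0';

    return sum;
}
```

For example, the digit sum of 12345 is 15, and both leave the remainder 6 when divided by 9.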
The source code of this program is included in the bonus pack to this Hugi issue (h27bonus.zip, modulo.cpp).
Actually it was originally also based on another discovery of mine for 1 <= n <= 9, but that's another story.