What Is a Code?
As kids we had great fun passing handwritten notes to each other across the classroom. The challenge was to avoid the teacher’s detection. The more “sophisticated” among us would convert them into secret messages by writing them in code. Unfortunately, the teacher was no dummy and easily figured out the code—each letter was a number, in sequence—and embarrassed us in front of the class by reading aloud our secret message.
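For the curious, here is a minimal sketch in Python of that letter-for-number scheme, assuming the "in sequence" mapping A=1, B=2, and so on (the message itself is invented for illustration):

```python
# A toy version of our classroom cipher: each letter becomes its
# position in the alphabet (A=1, B=2, ...). Spaces are simply dropped.

def encode(message: str) -> str:
    return " ".join(str(ord(ch) - ord("A") + 1)
                    for ch in message.upper() if ch.isalpha())

def decode(numbers: str) -> str:
    return "".join(chr(int(n) + ord("A") - 1) for n in numbers.split())

secret = encode("MEET AT RECESS")
print(secret)          # 13 5 5 20 1 20 18 5 3 5 19 19
print(decode(secret))  # MEETATRECESS
```

No wonder the teacher cracked it: with only one fixed substitution, a few common words give the whole alphabet away.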
Not until later, as a science student, did I learn how broadly codes are applied in a variety of fields. Encryption codes, analogous to our crude grammar school attempts, have been used to send secret messages in times of war. This technology has found its way into everyday life in the sending of private email, where the email program on our computer handles the encryption/decryption (translation) process. Another type, the Universal Product Code, consists of parallel lines of varying width and separation printed on merchandise that encode information about the product.
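Even a barcode carries more design than meets the eye. The twelfth digit of a UPC-A code is a check digit computed from the other eleven so that a scanner can catch misreads. Here is a sketch of that standard rule in Python (the eleven digits below are an arbitrary example):

```python
# UPC-A check digit: digits in odd positions (1st, 3rd, ...) count
# triple; the check digit brings the weighted sum up to a multiple of 10.

def upc_check_digit(first_eleven: str) -> int:
    digits = [int(d) for d in first_eleven]
    total = 3 * sum(digits[0::2]) + sum(digits[1::2])
    return (10 - total % 10) % 10

print(upc_check_digit("03600029145"))  # -> 2, so the full code is 036000291452
```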
In computers, programmers write codes that act like recipes. The code instructs the computer to perform certain operations, just as a recipe instructs a cook how to bake a cake. Computer programming can occur at two levels. There are high-level codes, written in programming languages like BASIC, that allow the programmer to “talk” to the computer in natural, English-like phrases. These are translated by compilers into low-level codes written in languages specific to the particular computer processor being used.
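Python happens to make this translation visible through its standard `dis` module, which prints the low-level instructions that a high-level line is compiled into. Python’s instructions target a virtual machine rather than a hardware processor, but the principle is the same; the exact mnemonics vary from one Python version to another, so the commented output below is only indicative:

```python
import dis

# One natural, English-like line of high-level code...
def add_tax(price):
    return price * 1.07

# ...disassembles into a handful of cryptic low-level instructions.
dis.dis(add_tax)
# Typical output (names vary by Python version):
#   LOAD_FAST     price
#   LOAD_CONST    1.07
#   BINARY_OP     *      (BINARY_MULTIPLY on older versions)
#   RETURN_VALUE
```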
The low-level computer codes are very cryptic in that they use short mnemonics to represent each particular operation. Three-letter mnemonics like ADD, SUB, MUL, or DIV are obvious, but others like CMP (compare), CLR (clear a register), and XOR (the binary “exclusive OR” operation) are not. Each of these operations is then translated by an assembler into a sequence of zeros and ones (computers use the binary 0 and 1 system instead of the decimal 0 through 9 system to represent numbers), called the machine code, which the computer can directly execute. The particular set of zeros and ones used to define the machine code is carefully designed to ensure that every operation is properly encoded and that the associated information that goes with the operation, like which two numbers to multiply, is included in the most efficient way.
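To make the idea concrete, here is a toy assembler in Python; the 4-bit opcodes and 8-bit instruction format are invented for illustration and do not match any real processor:

```python
# A toy assembler: each mnemonic gets an invented 4-bit opcode; the two
# register operands get 2 bits each, packing one 8-bit machine word.

OPCODES = {"ADD": 0b0000, "SUB": 0b0001, "MUL": 0b0010, "DIV": 0b0011,
           "CMP": 0b0100, "CLR": 0b0101, "XOR": 0b0110}

def assemble(mnemonic: str, reg_a: int, reg_b: int) -> str:
    word = (OPCODES[mnemonic] << 4) | (reg_a << 2) | reg_b
    return format(word, "08b")  # the zeros and ones the processor reads

print(assemble("MUL", 1, 2))  # -> 00100110
```

Notice how the design squeezes the operation and both operands into a single word with no bits wasted, just as the paragraph above describes.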
As elegant and sophisticated as these codes are, they pale in comparison to nature’s genetic code, which determines how a protein is to be constructed. Unlike the binary 0 and 1 system used by computers, the genetic code uses four chemical nucleotides represented by the letters A, T, G, and C. Triplets of these “letters,” called codons, are laid down in a DNA sequence that comprises the program or recipe for the chain of amino acids that make up proteins—the building blocks of life. Like the machine code, the genetic code has been carefully crafted in such a way as to contain redundancy and resist corruption. More remarkable is how this genetic code is optimized in several ways (see “The Genetic Code: Simply the Best” on page 4). As with computers, spies, or even schoolchildren, none of this remarkable information seems possible apart from the work of a very clever designer.
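As a coda, the redundancy described above can be seen directly in the standard codon table. This Python sketch translates a short DNA sequence using a small excerpt of that table; note that GGT and GGG both spell glycine, so a change in a codon’s third letter often does no harm:

```python
# An excerpt of the standard genetic code (DNA alphabet).
CODONS = {
    "ATG": "Met (start)",
    "GGT": "Gly", "GGC": "Gly", "GGA": "Gly", "GGG": "Gly",  # redundancy
    "TGG": "Trp",
    "TAA": "STOP", "TAG": "STOP", "TGA": "STOP",
}

def translate(dna: str) -> list:
    """Read the sequence one three-letter codon at a time."""
    return [CODONS.get(dna[i:i + 3], "?") for i in range(0, len(dna), 3)]

print(translate("ATGGGTGGGTAA"))
# -> ['Met (start)', 'Gly', 'Gly', 'STOP']
```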