The genetic code is universal. While some variant codes exist, they are peripheral tweaks that arose after the universal code was established. The universality of the code means that all of evolution has unfolded under the constraint and influence of the genetic code.
The genetic code was originally thought to be a frozen accident. Murray Gell-Mann explains the concept as follows:
Now, most single accidents make very little difference to the future, but others may have widespread ramifications, many diverse consequences all traceable to one chance event that could have turned out differently. Those we call frozen accidents. I give as an example the right-handed character of some of the molecules that play important roles in all life on Earth though the corresponding left-handed ones do not. People tried for a long time to explain this phenomenon by invoking the left-handedness of the weak interaction for matter as opposed to antimatter, but they concluded that such an explanation wouldn’t work. Let’s suppose that this conclusion is correct and that the right-handedness of the biological molecules is purely an accident. Then the ancestral organism from which all life on this planet is descended happened to have right-handed molecules, and life could perfectly well have come out the other way, with left-handed molecules playing the important roles.
Yet this original explanation has been effectively falsified as scientists analyzed the code in more depth.
We can begin the story with an excerpt from a Science article from 1998:
For example, in 1991, evolutionary biologists Laurence Hurst of the University of Bath in England and David Haig of Harvard University showed that of all the possible codes made from the four bases and the 20 amino acids, the natural code is among the best at minimizing the effect of mutations. They found that single-base changes in a codon are likely to substitute a chemically similar amino acid and therefore make only minimal changes to the final protein.
Now Hurst’s graduate student Stephen Freeland at Cambridge University in England has taken the analysis a step farther by taking into account the kinds of mistakes that are most likely to occur. First, the bases fall into two size classes, and mutations that swap bases of similar size are more common than mutations that switch base sizes. Second, during protein synthesis the first and third members of a codon are much more likely to be misread than the second one. When those mistake frequencies are factored in, the natural code looks even better: Only one of a million randomly generated codes was more error-proof. 
So the universal code is “one in a million” (see Figure 1 below).
Figure 1. From 
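The random-code comparison described in the excerpt can be sketched in a few dozen lines. What follows is an illustrative toy, not the authors’ actual analysis: it scores each code by the mean squared change in Kyte-Doolittle hydropathy across all single-base substitutions (a stand-in for the chemical-similarity measures used in the original papers), and it omits the mistake-frequency weightings Freeland added. Random codes are generated, as in Hurst and Haig’s approach, by shuffling which amino acid each synonymous codon block encodes while keeping the block structure and stop codons fixed.

```python
# Monte Carlo comparison of the standard genetic code against randomly
# reassigned codes, in the spirit of Haig & Hurst (1991).  Illustrative
# sketch only: squared Kyte-Doolittle hydropathy change stands in for
# the similarity measures the authors actually used.
import random
from itertools import product

BASES = "TCAG"
# Standard code, codons ordered TTT, TTC, TTA, TTG, TCT, ... ('*' = stop).
AA_TABLE = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODONS = ["".join(c) for c in product(BASES, repeat=3)]
STANDARD = dict(zip(CODONS, AA_TABLE))

# Kyte-Doolittle hydropathy index for the 20 amino acids.
HYDRO = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5,
         "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9,
         "M": 1.9, "F": 2.8, "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9,
         "Y": -1.3, "V": 4.2}

def error_value(code):
    """Mean squared hydropathy change over all single-base substitutions
    that turn one sense codon into another sense codon."""
    total, n = 0.0, 0
    for codon, aa in code.items():
        if aa == "*":
            continue
        for pos in range(3):
            for b in BASES:
                mut = code[codon[:pos] + b + codon[pos + 1:]]
                if b == codon[pos] or mut == "*":
                    continue
                total += (HYDRO[aa] - HYDRO[mut]) ** 2
                n += 1
    return total / n

def random_code(rng):
    """Shuffle which amino acid each synonymous block encodes, keeping
    the block structure and the stop codons fixed."""
    aas = sorted(set(AA_TABLE) - {"*"})
    relabel = dict(zip(aas, rng.sample(aas, len(aas))))
    return {c: relabel.get(aa, "*") for c, aa in STANDARD.items()}

rng = random.Random(0)
std = error_value(STANDARD)
samples = [error_value(random_code(rng)) for _ in range(2000)]
better = sum(s < std for s in samples)
print(f"standard code error: {std:.2f}")
print(f"random codes beating it: {better} of {len(samples)}")
```

Even with this crude similarity measure, the standard code lands well into the favorable tail of the random-code distribution; the published analyses, with better similarity measures and error weightings, sharpen that to the one-in-a-million figure.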
Note, however, that the code is not the best of all possible codes according to this parameter. Researchers then factored various biosynthetic restrictions into their calculations, employed a more accurate measure of amino acid similarity, and determined that nature’s code appears to be “the best possible code” at buffering against deleterious mutations. They write:
When the error value of the standard code is compared with the lowest error value of any code found in an extensive search of parameter space, results are somewhat more variable. Estimates based on PAM data for the restricted set of codes indicate that the canonical code achieves between 96% and 100% optimization relative to the best possible code configuration. If our definition of biosynthetic restrictions are a good approximation of the possible variation from which the canonical code emerged, then it appears at or very close to a global optimum for error minimization: the best of all possible codes.
Whether or not nature’s code is truly the “best of possible codes” depends on an important assumption:
Although detailed perceived patterns (Wong 1975) are untrustworthy because of the biosynthetic interrelatedness of most amino acids within present-day metabolism (Amirnovin 1997), it does appear that amino acids from the same biosynthetic pathway are generally assigned to codons sharing the same first base (Taylor and Coates 1989). If this reflects a history of biosynthetic expansion from some primordial code, then the implied restrictions on code evolution would reduce the number of possible codes so greatly as to render previous adaptive results meaningless (Freeland and Hurst 1998b). We investigate this possibility by constructing a set of possible codes that allows interchange of amino acids only within each biochemical pathway. 
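The restricted search the authors describe can be sketched as follows. This is a minimal illustration under two stated assumptions: the families below are the textbook biosynthetic precursor groupings (glutamate, aspartate, pyruvate, serine, aromatic, histidine), which may not match the exact restriction set Freeland et al. used, and squared Kyte-Doolittle hydropathy change stands in for their PAM-based similarity measure. The key move is that amino acids are now permuted only within their own family, so each codon block keeps its association with a biosynthetic pathway.

```python
# Restricted random-code search: amino acids are shuffled only within
# biosynthetically related families, so codon blocks keep their pathway
# association.  Illustrative sketch only: the families are assumed
# textbook groupings, and hydropathy stands in for the paper's
# PAM-based similarity measure.
import random
from itertools import product

BASES = "TCAG"
# Standard code, codons ordered TTT, TTC, TTA, TTG, TCT, ... ('*' = stop).
AA_TABLE = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
STANDARD = dict(zip(("".join(c) for c in product(BASES, repeat=3)), AA_TABLE))

# Kyte-Doolittle hydropathy index for the 20 amino acids.
HYDRO = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5,
         "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9,
         "M": 1.9, "F": 2.8, "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9,
         "Y": -1.3, "V": 4.2}

# Assumed textbook biosynthetic families (not necessarily the paper's set):
# glutamate, aspartate, pyruvate, serine, aromatic (shikimate), histidine.
FAMILIES = ["EQPR", "DNKMTI", "AVL", "SGC", "FYW", "H"]

def error_value(code):
    """Mean squared hydropathy change over all sense-to-sense
    single-base substitutions."""
    total, n = 0.0, 0
    for codon, aa in code.items():
        if aa == "*":
            continue
        for pos in range(3):
            for b in BASES:
                mut = code[codon[:pos] + b + codon[pos + 1:]]
                if b == codon[pos] or mut == "*":
                    continue
                total += (HYDRO[aa] - HYDRO[mut]) ** 2
                n += 1
    return total / n

def restricted_code(rng):
    """Permute amino-acid assignments only within each family."""
    relabel = {}
    for fam in FAMILIES:
        relabel.update(zip(fam, rng.sample(fam, len(fam))))
    return {c: relabel.get(aa, "*") for c, aa in STANDARD.items()}

rng = random.Random(0)
std = error_value(STANDARD)
samples = [error_value(restricted_code(rng)) for _ in range(2000)]
better = sum(s < std for s in samples)
print(f"standard: {std:.2f}; restricted codes beating it: {better}/{len(samples)}")
```

The restriction collapses the search space from roughly 20! unrestricted assignments to a few million, which is why the authors could meaningfully ask whether the canonical code sits at or near the global optimum within that set.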
If the code was designed to be the “best of possible codes,” the designer had a logical reason to correlate amino acid biosynthetic pathways and amino acid assignments within the code (Figure 2). The researchers mentioned above posited a historical reason for this association (i.e., expanding from a primordial code) and if true, might mean that life, not just evolution, was front-loaded through the laws of Nature (we’ve seen this echo before). If, on the other hand, life appeared on this planet as a consequence of seeding, we would have to posit a functional/engineering reason for this association. Why, in functional or engineering terms, are the biosynthetic pathways correlated with codon assignment?
Figure 2. From 
Nevertheless, the take home message from these studies, and several others, is that nature’s code is very good at buffering against deleterious mutations. This theme nicely fits with many other findings that continue to underscore how cells have layers and layers of safeguards and proof-reading mechanisms to ensure minimal error rates. The “universal code” is thus easily explained from a design perspective – if you have designed a code that is very good at buffering against deleterious mutations, why not reuse it again and again?
There is an additional reason to design a “universal code” that follows from any attempt to design life with the ability to evolve. Put simply, if each organism had its own unique code, this would serve as a serious obstacle to the horizontal flow of genetic information. It is now clear that bacteria have made extensive use of horizontal transfer. Successful gene products can be “shared” with very different bacteria such that the recipients receive all the benefits of genes pruned by selection without having to evolve them. This allows bacteria, as a global community of cells, to more successfully and rapidly adapt to various environmental stresses and thus effectively become a superorganism. We need only consider how powerful this mechanism of sharing is when we consider how quickly bacteria are adapting to extreme selection pressures caused by the massive, global use of antibiotics.
Horizontal transfer is not restricted to bacteria. Eukaryotes and bacteria have shared genes with each other in the past. What’s more, eukarya are thought to have emerged through the endosymbiotic uptake of bacteria that eventually transformed into mitochondria and chloroplasts. If eukarya and bacteria had different genetic codes, such merging would likely be unsuccessful.
Thus, there are two very good (and obvious) reasons for a designer to have employed the same code in bacteria and eukaryotes: 1) the code is extremely good at preventing deleterious amino acid substitutions; and 2) the shared code allows for the lateral transfer of genetic material and facilitates symbiotic unions.
1. Vogel, G. 1998. Tracking the History of the Genetic Code. Science 281: 329.
2. Freeland, SJ, Knight, RD, and Landweber, LF. 2000. Measuring adaptation with the genetic code. TIBS 25: 44-45.
3. Freeland, SJ, Knight, RD, Landweber, LF, and Hurst, LD. 2000. Early fixation of an optimal genetic code. Mol Biol Evol 17(4): 511-518.