a. In 2010, another level of complexity was discovered in the genetic code. On a strand of DNA, a sequence of three adjacent nucleotides forms a unit in the genetic code called a codon. Prior to 2010, synonymous codons (different codons that specify the same amino acid) were thought to be functionally interchangeable. That turns out not to be the case.
... synonymous codon changes can so profoundly change the role of a protein [that it] adds a new level of complexity to how we interpret the genetic code. Ivana Weygand-Durasevic and Michael Ibba, “New Roles for Codon Usage,” Science, Vol. 329, 17 September 2010, p. 1474. Also see Fangliang Zhang et al., “Differential Arginylation of Actin Isoforms Is Regulated by Coding Sequence-Dependent Degradation,” Science, Vol. 329, 17 September 2010, pp. 1534–1537.
b. “Genomes [all the DNA of a species] are remarkable in that they encode most of the functions necessary for their interpretation and propagation.” Anne-Claude Gavin et al., “Proteome Survey Reveals Modularity of the Yeast Cell Machinery,” Nature, Vol. 440, 30 March 2006, p. 631.
c. The genetic code is remarkably insensitive to translation errors. If the code were produced by random processes, as evolutionists believe, life would have needed about a million different starts before stumbling on a code as resilient as the one used by all life today. [See Stephen J. Freeland and Laurence D. Hurst, “Evolution Encoded,” Scientific American, Vol. 290, April 2004, pp. 84–91.]
• “This analysis gives us a reason to believe that the A–T and G–C choice forms the best pairs that are the most different from each other, so that their ubiquitous use in living things represents an efficient and successful choice rather than an accident of evolution.” [emphasis added] Larry Liebovitch, as quoted by David Bradley, “The Genome Chose Its Alphabet with Care,” Science, Vol. 297, 13 September 2002, p. 1790.
• “It was already clear that the genetic code is not merely an abstraction, but also the embodiment of life’s mechanisms; the consecutive triplets of nucleotides in DNA (called codons) are inherited but they also guide the construction of proteins. So it is disappointing, but not surprising, that the origin of the genetic code is still as obscure as the origin of life itself.” John Maddox, “The Genetic Code by Numbers,” Nature, Vol. 367, 13 January 1994, p. 111.
d. “No matter how many ‘bits’ of possible combinations it has, there is no reason to call it ‘information’ if it doesn’t at least have the potential of producing something useful. What kind of information produces function? In computer science, we call it a ‘program.’ Another name for computer software is an ‘algorithm.’ No man-made program comes close to the technical brilliance of even Mycoplasmal genetic algorithms. Mycoplasmas are the simplest known organisms with the smallest known genome, to date. How was its genome and other living organisms’ genomes programmed?” Abel and Trevors, p. 8.
• “No known hypothetical mechanism has even been suggested for the generation of nucleic acid algorithms.” Jack T. Trevors and David L. Abel, “Chance and Necessity Do Not Explain the Origin of Life,” Cell Biology International, Vol. 28, 2004, p. 730.
e. How can we measure information? A computer file might contain information for printing a story, reproducing a picture at a given resolution, or producing a widget to specified tolerances. Information can usually be compressed to some degree, just as the English language could be compressed by eliminating every “u” that directly follows a “q”. If compression could be accomplished to the maximum extent possible (eliminating all redundancies and unnecessary information), the number of bits (0s or 1s) would be a measure of the information needed to produce the story, picture, or widget.
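To illustrate the compression idea, the following short Python sketch (purely illustrative; the function name and the sample strings are invented for this example) treats the compressed size of a text, in bits, as a rough upper-bound estimate of its information content. A general-purpose compressor such as zlib only approximates compression “to the maximum extent possible,” so the numbers are estimates rather than exact measures.

import zlib

def estimated_bits(message):
    # Compress the text and report the compressed size in bits; this is an
    # upper-bound estimate of the information needed to reproduce the message.
    compressed = zlib.compress(message.encode("utf-8"), 9)
    return len(compressed) * 8  # bytes to bits

redundant = "the queen quietly quit the quarrel " * 20  # highly repetitive text
jumbled = "qk7xvw92mzpl3b08rtjyd5ncfh14aguso6"          # little redundancy

print(estimated_bits(redundant))  # far fewer bits than the raw text occupies
print(estimated_bits(jumbled))    # compresses poorly (zlib overhead can even
                                  # enlarge it); nearly all of it is information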
Each living system can be described by its age and the information stored in its DNA. Each basic unit of DNA, called a nucleotide, can be one of four types. Therefore, each nucleotide represents two (log₂ 4 = 2) bits of information. Conceptual systems, such as ideas, a filing system, or a system for betting on race horses, can be explained in books. Several bits of information can define each symbol or letter in these books. The number of bits of information, after compression, needed to duplicate and achieve the purpose of a system will be defined as its information content. That number is also a measure of the system’s complexity.
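As a small worked example of the two-bits-per-nucleotide figure, the Python lines below compute the raw (uncompressed) upper bound on the information a DNA sequence can carry. The genome length used is illustrative only, roughly that of a human genome, and is not a figure taken from the text.

import math

BASES = 4                               # a nucleotide is one of A, C, G, or T
bits_per_nucleotide = math.log2(BASES)  # log2(4) = 2.0 bits

nucleotides = 3_000_000_000             # illustrative genome length
raw_bits = nucleotides * bits_per_nucleotide

print(bits_per_nucleotide)  # 2.0
print(raw_bits)             # 6000000000.0 raw bits; the information content, as
                            # defined above, is the count left after compression
                            # removes redundancies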
Objects and organisms are not information. Each is a complex combination of matter and energy that the proper equipment—and information—could theoretically produce. Matter and energy alone cannot produce complex objects, living organisms, or information.
While we may not know the precise amount of information in different organisms, we do know those numbers are enormous and quite different. Simply changing (mutating) a few bits to begin the gigantic leap toward evolving a new organ or organism would likely kill the host.
• “Information is information, not matter or energy. No materialism which does not admit this can survive at the present day.” Norbert Wiener, Cybernetics; or, Control and Communication in the Animal and the Machine, 2nd edition (Cambridge, Massachusetts: MIT Press, 1948), p. 132.
• Werner Gitt (Professor of Information Systems) describes man as the most complex information processing system on earth. Gitt estimated that about 3 × 10²⁴ bits of information are processed daily in an average human body. That is thousands of times more than all the information in all the world’s libraries. [See Werner Gitt, In the Beginning Was Information, 2nd edition (Bielefeld, Germany: CLV, 2000), p. 88.]
f. “There is no known law of nature, no known process and no known sequence of events which can cause information to originate by itself in matter.” Ibid., p. 107.
• “If there are more than several dozen nucleotides in a functional sequence, we know that realistically they will never just ‘fall into place.’ This has been mathematically demonstrated repeatedly. But as we will soon see, neither can such a sequence arise randomly one nucleotide at a time. A pre-existing ‘concept’ is required as a framework upon which a sentence or a functional sequence must be built. Such a concept can only pre-exist within the mind of the author.” Sanford, pp. 124–125.
g. Because macroevolution requires increasing complexity through natural processes, the organism’s information content must spontaneously increase many times. However, natural processes cannot significantly increase the information content of an isolated system, such as a reproductive cell. Therefore, macroevolution cannot occur.
• “The basic flaw of all evolutionary views is the origin of the information in living beings. It has never been shown that a coding system and semantic information could originate by itself in a material medium, and the information theorems predict that this will never be possible. A purely material origin of life is thus precluded.” Gitt, p. 124.
h. Information theory tells us that the only known way to decrease the entropy of an isolated system is by having intelligence in that system. [See, for example, Charles H. Bennett, “Demons, Engines and the Second Law,” Scientific American, Vol. 257, November 1987, pp. 108–116.] Because the universe is far from its maximum entropy level, a vast intelligence is the only known means by which the universe could have been brought into being. [See also “Second Law of Thermodynamics” on page 31.]
i. If the “big bang” occurred, all the matter in the universe was once a hot gas. A gas is one of the most random systems known to science. Random, chaotic movements of gas molecules contain no useful information. Because an isolated system, such as the universe, cannot generate non-trivial information, the “big bang” could not produce the complex, living universe we have today, which contains astronomical amounts of useful information.