Exploring the wonders of geology in response to young-Earth claims...

Never been here? Please read my guidelines and background posts before proceeding!

Friday, December 31, 2010

С Новым Годом 2011! (Happy New Year 2011!)

Well, the end of the year came swiftly for me. I have essentially spent the last three days travelling halfway across the world. My mind is still having trouble with the fact that I experienced four sunrises and sunsets in only 72 hours, but I hope to recover shortly.

In retrospect, I have learned much in the course of my blogging adventure--and not only with regard to geology. The task was rather time consuming at times, but the outside interest and self-benefits were sufficient that I am happy to continue. My only resolution is to post a similar message one year from now.

Thanks for reading, and Happy New Year from Russia! This is a wonderful place, a most magical land, and I have a very loving new family with whom to spend the holiday.

С праздником! (Happy holidays!)

Tuesday, December 21, 2010

Science is more (and less) than you think!

When you hear the word “science,” what do you envision? Goggles and white lab coats? Mathematical formulas on a blackboard? A casually dressed couple in straw hats brushing away at the bones of a velociraptor? Very likely, your perception of what science is has been influenced (like mine) by childhood movies, comic strips, and/or the latest episode of Bones.

For this reason, my favorite part of teaching introductory geology laboratory sections was discussing the scientific method in the first lecture. Most universities require students to take a science course with a lab, regardless of their major. Thus introductory sections are filled with students from business, the liberal arts, and the like, all wondering why it is necessary to learn something about science. With that in mind, I try to remind students that the purpose of a university curriculum is not merely to teach them ‘stuff’—the hardness of calcite, the definition of an unconformity, the names of geologic periods—but how to think for themselves. Here I am inspired by the Van Tillian couplet, “Every man can count, not every man can account for counting.”

Unfortunately, most people do not think scientifically or even logically, and the rhetoric of politicians and salesmen commonly banks on this fact. To take a Black Swan example, consider how easily the difference between “Most terrorists are Muslims” and “Most Muslims are terrorists” is overlooked by the public in discussions of foreign and domestic social policy. Or consider how often you have heard the words “science has proven”, “scholars say”, and “studies show that”, without any consideration of how it was proven or which scholars say so. Public discussion of scientific topics, especially when surrounding controversy, is commonly littered with empty appeals to authority and ad hominem argumentation: “Well biologist A, who works at prestigious university X has concluded after years of research that Y; therefore, I trust his/her word over yours,” or “You mean to tell me that human inputs to the atmosphere are partially responsible for climate change? Don’t tell me how it works, just tell me whether you’re receiving grant money from the government!” These tactics may work well to convince a jury, but they do not constitute critical thinking.

Perhaps I should return to the original question and phrase it this way: is ‘science’ a noun or a verb? Is it something that is, or something that is done? I am not concerned here about dictionary definitions, semantic ranges, or etymology; rather, I want to elucidate the meaning of science in practice and its limitations. In other words, my goal is not to offer a comprehensive discussion of the scientific method throughout history, nor to lecture anyone about what I think science is. Instead, I simply want to show how science is the active pursuit of knowledge about the natural world, guided by an epistemological framework outside the realm of science itself.

My inspiration for this post comes from a recent article by Roger Patterson at Answers in Genesis, entitled “What is science?” There, Mr. Patterson discusses the history of scientific thought, the difference between various types of scientific approaches, and how this relates to the study of Earth history. Since his aim is to defend the validity of Creation science and the Young-Earth interpretation of geological data, you may not find it surprising that I disagree with some of his comments and conclusions. However, I applaud his willingness to define the scope and methods of science from a young-Earth perspective, and would not dismiss the discussion wholesale. So I don’t expect to provide a rebuttal here so much as a discourse guided by the points he has already made.

The scientific method and categories of science

The scientific method is a process built around falsifying hypotheses, which are formulated from observations of the natural world (note: natural as opposed to supernatural or metaphysical; not as opposed to artificial). Let’s say you wanted to investigate the reason behind different yields from the same crop grown in two different regions. Your observations may include the actual crop yield, temperature and precipitation records, soil samples, etc., from which you can formulate a hypothesis such as: “Crop yield is a direct function of rainfall.” Sounds good, right?

While the explanation sounds plausible, especially if the region with higher yield receives significantly more precipitation and given that plants need water for growth, it is by no means proven. Science is much more than building plausible-sounding arguments! One must first demonstrate a correlation between rainfall and crop yield by falsifying the null hypothesis: “Crop yield is not a function of rainfall.” The falsification may require more observations or a controlled experiment to obtain statistical significance, which means that some uncertainty is involved. Furthermore, a statistical correlation may be consistent with the original hypothesis, but does not itself prove it. One must also falsify alternative explanations for the same phenomenon; in this case, the dependence of yield on nutrient availability, soil type, solar irradiance, etc. For the hypothesis to remain scientific, it must also remain predictive of new facts (such as crop yield in regions C, D, E, etc., for a given amount of rainfall in the respective regions).
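To make the null-hypothesis step concrete, here is a minimal sketch in Python. The rainfall and yield numbers are entirely made up for illustration, and the permutation test is just one simple way to estimate how often a correlation this strong would arise by chance if yield were truly unrelated to rainfall:

```python
import random
import statistics

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient between two sequences."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def permutation_p_value(xs, ys, trials=10_000, seed=0):
    """Estimate P(|r| >= |r_observed|) under the null hypothesis
    ("yield is not a function of rainfall") by shuffling the yields."""
    rng = random.Random(seed)
    r_obs = abs(pearson_r(xs, ys))
    ys = list(ys)  # work on a copy so the caller's data are untouched
    hits = 0
    for _ in range(trials):
        rng.shuffle(ys)
        if abs(pearson_r(xs, ys)) >= r_obs:
            hits += 1
    return hits / trials

# Hypothetical observations: annual rainfall (cm) and crop yield (t/ha)
# for eight regions -- invented numbers, not real agronomic data.
rainfall = [40, 55, 62, 70, 85, 90, 105, 120]
yields_t = [2.1, 2.6, 2.4, 3.0, 3.3, 3.1, 3.9, 4.2]

r = pearson_r(rainfall, yields_t)
p = permutation_p_value(rainfall, yields_t)
print(f"correlation r = {r:.2f}, permutation p-value = {p:.4f}")
```

A small p-value lets us reject the null hypothesis, but note what that buys us: a correlation, nothing more. The script says nothing about nutrients, soil type, or irradiance, which is exactly why the alternative explanations above must be falsified separately.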

When a scientific hypothesis can predict new data, rather than being falsified thereby, it is treated as true (i.e. proven), but only provisionally so. The reason is that scientific hypotheses can only address existing data and are potentially falsifiable. Furthermore, scientific hypotheses are built (dependent) on a range of other scientific theories, which themselves remain only provisionally true. In this sense, a scientific premise may be proven and accepted as true, without any claim of infallibility. Considering the contingent nature of scientific conclusions, one may be inclined toward skepticism. However, one should also remember that the scientific method is self-correcting, since hypotheses not corresponding to reality are quickly falsified when tested by multiple independent researchers.

Philosophers of science typically make some distinction between experimental and historical methods of science (if you recall, I discussed historical science at length in a previous article). Since science must address a wide range of phenomena (from molecular interactions to planetary motions; from modern world economies to human history; etc.), researchers may further refine the method according to their respective disciplines. Mr. Patterson describes the essential difference as follows:

“Operational science deals with testing and verifying ideas in the present and leads to the production of useful products like computers, cars, and satellites. Historical (origins) science involves interpreting evidence from the past and includes the models of evolution and special creation.”

Mr. Patterson's dichotomy is not entirely inaccurate, but overly simplistic. Also, his use of the word "useful" is peculiar; is this to say that historical science does not produce useful products? The reconstruction of ancient texts, including the Bible, is one example of a useful product of historical science that Mr. Patterson would appreciate. Furthermore, while portions of evolutionary theory remain under the domain of historical science (e.g. the morphological history of phylogenies), a majority of research is experimental (or "operational") in nature. Finally, we must ask whether the notion of "special creation", as defined by Mr. Patterson, even falls into the category of historical science. But first, consider his comments on the limitations of historical science:

"Recognizing that everyone has presuppositions that shape the way they interpret the evidence is an important step in realizing that historical science is not equal to operational science. Because no one was there to witness the past (except God), we must interpret it based on a set of starting assumptions."

Presuppositions play a major role in disciplines that are hermeneutic (interpretive), and are not always obvious. However, the presence of underlying presuppositions is not unique to historical science, and therefore is not a valid means by which to distinguish it from experimental science. Mr. Patterson is mistaken if he believes that facts currently observed are any less "interpreted" than historical facts. To cite another Van Tillian couplet, "Brute facts are mute facts." Experimental science relies on a number of epistemological and metaphysical assumptions (the uniformity of nature, reliability of senses, nature of causality) and is dependent on potentially falsifiable scientific theories. The fundamental differences between historical and experimental science are 1) the method by which observations are made and 2) the availability of data to test hypotheses. In the historical sciences, nature has already set up the experiment, and data are thereby limited. Moreover, visual observation in person is not the only way to "witness the past." Mr. Patterson continues:

"Creationists and evolutionists have the same evidence; they just interpret it within a different framework. Evolution denies the role of God in the universe, and creation accepts His eyewitness account—the Bible—as the foundation for arriving at a correct understanding of the universe."

This is a point on which I sincerely disagree, and I think it is an unfortunate caricature that only promulgates the misguided and unnecessary dichotomy between "Old-Earth Naturalism" and "Young-Earth Christianity", and between science and religion in general. First, unraveling the message of the Bible (especially as it pertains to history) is a matter of exegesis, which is in itself a hermeneutical science. Mr. Patterson or anyone else can argue for the validity of their reading of scripture above all others, but it is unfair and inaccurate to state that an acceptance of the Bible as God's witness precludes evolutionary theory (or any scientific theory, for that matter) from the interpretive framework. That assertion is a working hypothesis in competition with others, and is contingent upon the facts of linguistic theory, textual criticism, etc. As such, it is also potentially falsifiable. Secondly, the theory of evolution does not deny the role of God in the universe. Science operates under methodological naturalism, which means that it can only investigate natural explanations for natural phenomena. By definition, the act of special creation (if defined as the sudden appearance or organization of matter by supernatural forces) is excluded from direct scientific investigation. Science does not, by definition, deny its truth, but is rather, by definition, silent on the matter. On the other hand, one can produce testable hypotheses in biology, geology, astronomy, etc. given a starting belief in special creation and a young Earth. In this sense, science could investigate the issue indirectly. Thus it is inaccurate to say "the denial of supernatural events limits the depth of understanding that science can have and the types of questions science can ask," as Mr. Patterson asserts later.
Starting with a belief in God that providentially oversees the natural world does not change the scope or nature of scientific questions we can ask, since science is still limited by methodological naturalism. Theoretically, science could determine that all modern species appeared abruptly within the last 10,000 years, but science would still be silent on whether one God or millions of gods were responsible, and the personal character thereof. Taking an example from Mr. Patterson, let's consider this in practice:

"Even if the amazingly intricate structure of flagella in bacteria appears so complex that it must have a designer, naturalistic science cannot accept that idea because it falls outside the realm of naturalism/materialism."

Mr. Patterson reflects a common sentiment, which provides a powerful talking point in the discussion. At first, it appears the categorical limits of science prevent us from an unbiased assessment of nature. However, intrinsic to his argument is the premise that at some degree of observed complexity in organisms, we must conclude that the organism could not have arisen through "natural" processes. But how do we define that level of complexity? Some authors have made a case for biological features that are irreducibly complex, but keep in mind two things: 1) the identification of features as irreducibly complex is contingent on the existing data, and is potentially falsifiable in light of new observations; 2) even if the label can withstand new observations, it does not logically follow that the feature "must have a designer"; rather, we would only establish that to date, no known natural process can account for this feature. Remember that all science, regardless of one's philosophical commitments, is methodologically naturalistic (i.e. excludes supernatural explanations in practice). Thus "naturalistic science"—that is, science practiced by one who is a naturalist/materialist, according to Mr. Patterson—is not alone in excluding such a conclusion from the scientific investigation.

On what is natural

Before I sound as though I am contradicting myself, I want to clarify why I am comfortable, as a Christian, excluding design arguments from science. If one believes that a personal God is responsible for the existence of time, matter, and space, then it follows that everything is designed in the sense that every material instance has a purpose. In other words, the teleological principle is part of our a priori philosophical commitment to theism. As such, it cannot be the object of scientific investigation, which itself is built on principles of philosophy. Science cannot demonstrate design in nature any more than it can demonstrate the uniformity of natural laws; both are preconditions for knowledge about the natural world, while the former is unique to theistic worldviews.

My advice to Mr. Patterson, and anyone that supports the Intelligent Design (ID) movement, is to focus on exploring God's creation without attempting to redefine the scope of the scientific method. With the exception of Dembski's work, much of the ID movement's interaction with the public is somewhat misguided, and only results in equally misguided responses from critics of theism, such as Dawkins' examples of "bad design" (note: the word "intelligent" in ID is not meant to be contrasted with "stupid", but simply with "non-intelligent"; examples of "bad design" from Dawkins and others constitute interesting facts about nature, but are wasted efforts as arguments against ID).

Falsification and scientific theories

I mentioned earlier that the scientific method is built around the falsification of hypotheses. The work of Karl Popper (and his critics) on science as falsification has remained canonical across scientific disciplines. He argued that testability (the ability to be proven wrong) is the key criterion for calling a study scientific. However, defining the criteria by which a hypothesis can be falsified is not always a simple, straightforward process. Most scientific theories/hypotheses have been modified numerous times in response to contrary evidence from previous experiments. Granted, this typically results in the 'self-correcting' aspect of science and a refinement of good scientific theories, but it also shows how bad theories can live beyond their years if supported by a stubborn, false paradigm (e.g. consider Kuhn's discussion of scientific revolutions). So how does this relate to the creation/evolution controversy? Mr. Patterson writes:

"Scientific theories must be testable and capable of being proven false. Neither evolution nor biblical creation qualifies as a scientific theory in this sense, because each deals with historical events that cannot be repeated. Both evolution and creation are based on unobserved assumptions about past events."

The fact that past events, such as the appearance of new species, cannot be repeated does not disqualify a theory from being scientific. When anthropologists/archaeologists excavate an ancient city, the response is hardly "Well, time to leave science at the door. Put on your guesswork hats!" The reason is that hypotheses about past events can be tested (i.e. falsified) by remaining evidence. In the case of evolution, there are numerous ways to falsify the theory: demonstrate that species share no vestigial remnants from a common lineage; demonstrate that all species appeared abruptly and coincidentally; demonstrate the existence of a predicted descendant taxon long before the existence of its predicted ancestral taxon (e.g. the existence of birds long before the earliest theropod dinosaurs, from which birds are predicted to have descended). By the same line of reasoning, Mr. Patterson's interpretation of the creation story can be tested scientifically, in that one can seek to falsify the hypothesis of a global flood, abrupt and distinct appearance of species/genera, and more. In fact, that is one major goal of my blog: to consider whether predictions stemming from the young-Earth model have already been falsified. Theological details of the young-Earth model lie outside the scope of scientific investigation; historical events associated therewith, however, do not.

"Allowing only evolutionary teaching in public schools promotes an atheistic worldview, just as much as teaching only creation would promote a theistic worldview. Students are indoctrinated to believe they are meaningless products of evolution and that no God exists to whom they are accountable. Life on earth was either created or it developed in some progressive manner; there are no other alternatives. While there are many versions of both creation and evolution, both cannot be true." (emphasis added)

Years ago, I would have agreed with much of this statement. I now realize that the false dichotomy arises from an inability to properly define science. Mr. Patterson and others defending a young-Earth position have actually narrowed the scope of science—contrary to his complaint that materialistic science limits the depth of scientific inquiry—to the point that the validity of most science becomes dependent on a critique of supposed religious commitments (i.e. atheism) rather than its coherency and corroboration with past evidence. Unfortunately, this works in Mr. Patterson's favor with the general public, which is already suspicious of biological evolution. Notice with what ease he connects the evolutionary origin of man with meaninglessness and moral relativism. Is the connection a logical necessity? Two supposedly non-existent alternatives would be 1) a God who created all species without moral purpose and holds none of them morally accountable; or 2) a God who created all of history with purpose—lifeforms developed progressively over time, and the 'natural processes' reflect His handiwork/artistry—and holds a part of His creation morally accountable. Mr. Patterson's assessment is thus riddled with gratuitous assertions; primarily, that 'creation' must be an instantaneous event that occurs contrary to the laws of nature as we know them.

Uniformitarianism: a principle of geology; not 'that other church' around the corner

A longer discussion of uniformitarianism is warranted at some point, but I will conclude here with a few comments on Mr. Patterson's claims (for those interested in a detailed discussion of uniformitarianism, I strongly suggest reading Davis A. Young's chapter in The Bible, Rocks and Time). I am sure that all of you are familiar with the basics of historical investigation, namely that we can use present facts to interpret past events. This applies to geology as well as human history, the former of which is built on the principle of uniformitarianism. In addition to the assumption that physical constants and laws (e.g. the speed of light, gravity, etc.) were unchanged through history, uniformitarianism is basically just an extension of Occam's Razor, which states that complexity should not be posited without necessity. In other words, we interpret past geological events in light of known, modern events, unless there is evidence to the contrary. With that in mind, consider Mr. Patterson's assessment:

"Evolution also relies heavily on the assumption of uniformitarianism—a belief that the present is the key to the past. According to uniformitarians, the processes in the universe have been occurring at a relatively constant rate. One of these processes is the rate of rock formation and erosion. If rocks form or erode at a certain rate in the present, uniformitarians believe that they must have always formed or eroded at nearly the same rate."

By "evolution", Mr. Patterson is also referring to geologists that reject the notion of a young Earth. It is true that the principle of uniformitarianism has commonly been summarized as "the present is the key to the past", but the description is not exhaustive and Mr. Patterson takes advantage of this fact. He is mistaken in saying that "uniformitarians" believe modern processes have ensued at a "relatively constant rate." Such is a caricature, since no modern geologist would state this a priori. Rates of erosion and rock formation, for example, are determined by geological evidence. Evidence is collected by testing hypotheses generated from observations and/or scientific models of the process. Rates of sedimentation are never simply assumed to be slow or fast. One only needs to search publications on sedimentology and stratigraphy to see that interpreted rates of deposition range over several orders of magnitude (consider the difference between sediments accumulating 1) on the deep ocean floor, 2) on the Mississippi River floodplain, and 3) near the continental slope by means of tectonically driven landslides). However, Mr. Patterson seems to think that catastrophes like Noah's flood are excluded from scientific investigations because of philosophical commitments. On the contrary, many such catastrophes have been interpreted from the geological record and are widely accepted (large-scale floods, meteor impacts, massive lava flows and landslides, etc.). The problem is that most sedimentary layers do in fact show good evidence for slow deposition. He states, "Noah’s Flood, for example, would have devastated the face of the earth and created a landscape of billions of dead things buried in layers of rock, which is exactly what we see." Any geologist would agree that Noah's flood might be expected to leave layers of fossiliferous rocks. Detailed examination of those fossiliferous rock layers reveals, however, that they were the consequence not of multiple stages of a single, short-lived event, but of millions of events spread over millions of years.
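To put "several orders of magnitude" in perspective, here is a toy comparison. The rates below are round, textbook-style figures of my own choosing, not measured values from any particular study; the point is only the spread between depositional settings, which the paragraph above describes:

```python
import math

# Hypothetical order-of-magnitude deposition rates, normalized to mm/yr.
# These are illustrative round numbers, not measured data.
rates_mm_per_yr = {
    "deep-ocean pelagic clay": 1e-3,      # roughly a millimetre per thousand years
    "Mississippi floodplain": 5.0,        # episodic overbank deposition
    "slope turbidite (event rate)": 1e6,  # ~1 m emplaced in well under a year
}

# Print from slowest to fastest setting
for setting, rate in sorted(rates_mm_per_yr.items(), key=lambda kv: kv[1]):
    print(f"{setting:30s} ~{rate:g} mm/yr")

# How many orders of magnitude separate the extremes?
spread = math.log10(max(rates_mm_per_yr.values()) / min(rates_mm_per_yr.values()))
print(f"spread: ~{spread:.0f} orders of magnitude")
```

No single "uniformitarian rate" could cover that range, which is precisely why geologists infer rates from the rocks themselves rather than assume them.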

Concluding remarks

The philosophy of science is a difficult subject, since the criteria by which a theory may be deemed scientific are open for discussion. One of the most fundamental and stable of these criteria is testability, or falsification. Mr. Patterson agrees with this criterion, but attempts to distinguish historical science from "operational science" to the extent that he may subject evolutionary theory and historical geology to unwarranted skepticism among his audience. In doing so, he undermines the validity of other historical inquiries, such as the textual transmission of the Bible and historical reality of the New Testament referents, which are undoubtedly important to Mr. Patterson's (and my own) worldview. A faithful application of the scientific method does not render the works of God silent, but results in an efficient, self-correcting means of exploring the details of His masterpiece. I would compare this to the relationship between an artist's mind and the painting, the latter of which was created through a variety of physical processes. One may examine the character of brush strokes, chemistry of the paint, geometry of objects, etc. to determine how the picture was made without consideration of why. An art critic may still ask the "why" questions, but through a very different method.

Science is the active pursuit of knowledge about the natural world. As such, it is methodologically naturalistic, and cannot speak to facts outside the realm of empirical observation. However, the scientific method is one epistemological method among others in the grand scheme of philosophy, and therein rests on principles not subject to scientific inquiry. This categorical distinction should humble the scientific researcher, who, if ignorant of such, is but "a man with his feet firmly planted in midair," to cite the words of Schaeffer.

Wednesday, December 8, 2010

Theological implications of an old Earth: Doesn't Scripture have a voice?

When I began this blog, I planned to focus on topics in geology. More specifically, I aimed to clarify whether Answers in Genesis (AiG) offered a valid position on the geological history of the Earth. I have not hidden my position on the answer to this question: no, I do not believe that Flood geology offers a viable interpretation of the rock record. Of course, I will continue to elucidate my reasoning as I consider scientific propositions from AiG's article database, but I feel that I should restate my reasoning behind the focus on geology and at least take some opportunity to explain my theological position on what it means, as a Christian, to believe in an old Earth.

Why such a narrow approach to a broad controversy?

Simply put, I am a geologist by training and by practice. I am happy to discuss other topics (say, biology?) but I think it more appropriate to address questions to which I can speak with some experience. I don't perceive myself to be ignorant of the other sciences (biology, astronomy, history, archaeology) or of theology and Biblical interpretation, but as I've mentioned, I believe others have articulated my position far more eloquently than I could here. That being said, I feel it is still important to add a personal touch to this blog—namely, how does one approach Scripture while maintaining belief in an old Earth? So I will commit at least one post per month to answering this question. Below, I have articulated what I think is a key introductory question for both Christians and non-Christians.

Does it really matter what a Christian believes about the age of the Earth or the rock record?

No, but yes (I'll come back to my answer below). If you pose the same question to a researcher at AiG, the answer would be emphatically yes, and that the gospel is intimately connected to belief in a young Earth. Their reasoning is rooted in a defense of Scripture as God's word, which should not be compromised in the face of an external authority. Though I respect their starting point and admire their zeal, I sincerely believe their conclusions not only to be erroneous but potentially dangerous to evangelicalism. First, despite the majority position over the history of Christian thought, I do not believe a faithful interpretation of Scripture demands a young Earth. While I expect to expound on my claim in time, it is worth noting here that many Christians have maintained orthodoxy (i.e. presentation of the gospel without compromise; Biblical inerrancy; historicity of Genesis) while believing in an old Earth. Whether you agree with their hermeneutic, it would be unfair to claim that AiG offers the only distinctly Christian understanding of Genesis. Secondly, nobody is free from extrabiblical influence when interpreting Scripture (Genesis in particular). AiG regularly employs studies of grammar, history, archaeology, and even science, to refine their understanding of each verse (e.g., the meaning of the word 'firmament', or even 'day').

Third, and most importantly, I feel that AiG has produced a false dichotomy between 'young-Earth Christianity' and 'old-Earth naturalism'. So thorough is their association that most people can no longer separate the modifier from the respective belief. Moreover, any belief that falls on the 'middle-ground' is deemed rather hypocritical. But how is this dangerous? I would propose two scenarios (granted I am not the first to do so):

1) A young Christian is taught that faithful adherence to God's word informs us that the world is quite young (less than 10,000 years or so). The world was overcome by catastrophe and repopulated even more recently (4,000–6,000 years ago). Furthermore, the Christian is taught that science reveals vast evidence for this historical account, thereby offering positive reason to believe God's word. He/she is eager to explore the science behind these evidences and promptly chooses a related degree path. However, as the student progresses, he/she discovers that the evidence was never there—science does not support a young Earth. The student's faith is challenged, and he/she may even feel deceived by those in whom he/she confided spiritually. But the dichotomy has never left his/her mind and so two choices appear: "If science supports an old Earth, then Christianity must be false; but if I maintain my faith, then science must be mistaken." In rejecting one, he/she rejects the other; an awkward silence characterizes his/her life.

2) An unbeliever is met with the challenge of the gospel—perhaps an acquaintance has shared the message, or he/she has embarked on a self-motivated search for meaning—and comes across the ministry of AiG. Immediately, he/she perceives that an acceptance of Christ would require believing what seems obviously false: that the world is less than 10,000 years old and a great Flood once rearranged the planet. The skeptic is unwilling to pursue the religious/philosophical issue further and feels certain that he/she has justifiably rejected Christianity. Though I would never advocate compromising the gospel to make it appear more attractive to unbelievers, one should consider the effect of AiG's dichotomy on non-Christians. Is this a necessary stumbling block?

Back to my answer.

I mean 'no' in the sense that I believe the age of the Earth, biological evolution, etc. are tertiary issues in Christianity. Though important, they are not fundamental to the principles of the gospel and should not be bound to the conscience of the believer (or potential believer).

I mean 'yes' in the sense that according to Christianity, humans are to be stewards of the Earth. Moreover, we are called to know God, both through His word and His creation. I believe that an honest application of our God-given tools of knowledge to His creation results in the discovery that Earth is far older than we had previously thought. No Christian should be scared of the truth, even when it challenges our traditional notions of God and His creation. Rather, we should rejoice, and be glad in it.

Theological implications of an old Earth

Perhaps you feel awkward, offended, or put off in some way by the possibility of belief in an old Earth. Maybe it is downright scary and makes you feel skeptical about God's word altogether? If so, I hope that you would bear with me as I continue posting, and especially that you would not hesitate to contact me about specific concerns you have. In the meantime, I will briefly address some common questions below.

How can I believe that the universe began billions of years before humans were ever on the scene? Don't such long ages diminish God's purpose in creation and redemption?

If you've asked this question before, then you are not alone. From personal experience, and discussion with friends, I know the thought experiment can be, well, daunting. The concept of "deep time" is difficult for most geologists to grasp, let alone for Christians trying to reconcile their faith with the word of academia. So to answer, I would direct you to the third question in the OPC's First Catechism for children, which reads: "Why did God make you and all things?"

How would you answer? Was he lonely, bored, or even cynical? I find the OPC's succinct answer to be most Biblical: "For His own glory." God's creation is not about us; it's about Him. While His relationship to mankind—covenants, providence over the nations, etc.—is integral to redemptive history, it is futile and foolish to question His methods of bringing about history (and prehistory). What is the point of cosmic history without mankind? The glory of God—learn it, love it, and praise Him for it.

I like to think of it this way. For much of the pre-modern era, humans believed the universe to be quite small, not extending far beyond our atmosphere and certainly not beyond our solar system (even the stars were perceived as no more distant than our sun). Technological advances in the late Middle Ages introduced a revolutionary notion: the universe is far bigger than we could imagine. Even in the 21st century, our universe continues to expand (in our perception, that is). But in discovering how tiny we and "our" planet really are, does our view of God likewise shrink? On the contrary, we are all the more amazed by Him Who framed it. So in discovering that time is equally large, and that "our history" is only a pixel of the big picture, how should we respond in our view of God?

Doesn't the creation account suggest there was no death before Adam's fall? Yet long ages contradict this notion.

Many authors have considered this question before (e.g. here, or here for a young-Earth perspective), and I would exhort you to consider the resources available. The notion that no death (including animal death) occurred before Adam's sin is rooted in three premises: 1) death contradicts the notion of a "good" creation; 2) death is the explicit consequence named for Adam's initial rebellion; 3) Genesis 1:29-30 commissions all the animals to have plants for their food.

With regard to the first point, I simply don't think God's description of creation as 'good' precludes animal death. The wisdom literature (especially the Psalms) conveys a deep sense of purpose behind the predator/prey relationship, and there is no sense that the natural death of animals is the result of corruption. When God's judgment against nations is described in poetic rhetoric, it is typically characterized by an 'undoing' of the universe's natural cycles (e.g., stars fall from the sky, the sun is darkened). Chaos in the natural order of animals (consider the plagues of Egypt) is thereby associated with uncreation. Returning to Genesis 1:29-30, God commissioned the beasts to eat plants, but it would be an argument from silence to interpret this as "plants alone for every animal." Rather, the author of Genesis has assigned a simple, generalized purpose for each tier of creation. Obviously, the description is not exhaustive (plants are not only meant to be eaten) so it is premature to exclude carnivorous activity from the original creation.

The young-Earth interpretation is further complicated by our own classification of the animal kingdom versus that of the ancient Hebrew. What constitutes an animal/beast? Are insects and krill included, and if so, what did small reptiles, bats, and whales eat? Furthermore, we understand today that plants are living organisms with reproductive cycles, digestive systems, etc. (we even share some DNA). Therefore, by our modern understanding of death, something died before the Fall, so how do we define the cutoff and why?

All previous points aside, the real question is theological. God promised Adam that he would die in the day that he ate of the fruit. Adam ate of the fruit, yet didn't die. Do we thus misunderstand the word "day" or the word "death"? Various commentators have argued for ambiguity in either term: "day" refers to the post-Fall period; or "death" simply refers to a spiritual death, rather than physical. I would opt for neither, and suggest a simpler reading of the text. First, the uniqueness of Adam's death is not that it would represent the first case of a living organism ceasing to function (consider the microbes in Adam's digestive tract as he ate the fruit), but rather that it represented God's wrath for sin (Rom. 6:23). Adam entered into covenant with God, who demanded perfect obedience in the communicative state. When Adam forsook the covenant, man's relationship with God indeed changed as he became at enmity with God (call this "spiritual death"; Rom. 8:5-8), but we should not forget that a death did occur that day. "And the LORD God made for Adam and for his wife garments of skins and clothed them." (Gen. 3:21) This is the most basic principle of the gospel, echoed also in Romans 6:23. Most importantly, it is not compromised in any way by an old-Earth understanding. Not only do we obtain a more consistent criterion of what constitutes "good" in creation—an intricate, functioning natural order—but we also have a more precise understanding of God's covenant, wrath, and mercy.

Alright, maybe science suggests an old Earth, but there is no 'gap' in Genesis 1, and a day is a day!

I am happy to agree on these points. In short, I reject the 'Gap hypothesis' and 'Day-Age Theory' on basic exegetical grounds. The creation account is continuous and there is no reason to interpret the days in a purely metaphorical sense. At the same time, I reject the young-Earth interpretation on both scientific and exegetical grounds. Briefly stated, the seven-day structure of Genesis 1 breaks down the creative activity of God, who Himself made time. Thus the author is describing the work week of God. It makes no sense to argue over lengths of time, or physical frames of reference, when it comes to the individual days of Genesis 1. In doing so, we completely miss the point of the text and bind ourselves to unnecessary (and false) premises in our scientific investigation of the universe.

God uses the creation account as a model for our own work week (this one is obvious), but also for the Sabbath years and Jubilee. The analogy holds in all three cases if we take Genesis 1 to represent God's perspective. Otherwise, we are left to ponder silly (and unnecessary) questions like: "How could there be evening and morning (or plants) without the sun?", or "What was God doing on the eighth day?" Despite AiG's persistence in binding each day (and God Himself) to an Earthly timescale, a more parsimonious understanding of the text suggests that the specific chronology and age of the Earth are not addressed in Genesis 1.

So what is the point of Genesis 1 if not to chronicle the Earth's origin?

Quite simply, the author retells the story of creation to make a point about God and the universe; in other words, he uses a historical referent (the fact that the universe had a beginning and owes its existence to God) to make a theological point (there is one God responsible, and His creative work is complete) about the universe (all natural phenomena have a function and purpose; man is in covenant with God and accountable to Him). There are numerous references available that further explore the theological details of Genesis 1 without concern for nuclear processes, relativity, vapor canopies, and other anachronisms. Unfortunately, I was not aware of such works when I first encountered the young-Earth position in my youth. While I plan to expound on this topic further, I will say now that I have found the exegesis of Answers in Genesis to be remarkably shallow, and often misguided. I hope that you would trust my recommendation to discover this for yourself.

Concluding remarks

I sincerely hope that if you are reading this as a young-Earth Christian, you would consider my reasoning and exhortation with comparable sincerity. Conversely, if you are a non-Christian who has associated Christianity with belief in a young Earth, I hope that you would reconsider the connection. I look forward to expounding on more theological topics that arise from the question of Earth's age and history, and would appreciate any feedback or suggestions.

Thursday, December 2, 2010

Methods to Dr. John K. Reed's Madness: Deconstruction and the Geologic Timescale, Part 2

Last week, I briefly discussed historical approaches in science and how they apply to geologic dating methods – that is, how do geologists assign ages to a given rock? My goal was to provide a basic understanding of scientific models in general, noting that the scientific method is used to falsify hypotheses and assumptions intrinsic to those models. Thus the scientific method can be used to discard models that don’t represent reality, while refining (and providing evidence for) models that do represent reality. At this point, I want to more specifically address the points made by Dr. John K. Reed in his Creation Research Science Quarterly article found here (downloads PDF file). I’ll divide my comments into three sections, dealing first with his comments on the geologic column, secondly with his comments on specific dating methods, and thirdly with my own thoughts on the strength of the geologic time scale.

Dr. Reed’s presentation of the geologic column

By way of preface, the bulk of Dr. Reed’s reference material is taken from the book A Geologic Time Scale by Gradstein, Ogg, and Smith (2004). If you are looking for a detailed explanation of how the geologic timescale is constructed, this is the authoritative work (and anyone with access to a university library can find it). However, I suspect that Dr. Reed has not spent much time with primary research in the fields of stratigraphy or geochronology. The reason is that Dr. Reed constructs a series of strawman arguments against the methods employed to construct the timescale (perhaps unintentionally?) and relies heavily on irrelevant citations in the text to give the impression that Gradstein et al. might even agree with his critique. My intention, however, is not simply to accuse Dr. Reed of dishonesty or incompetence. Rather, my goal is to exhort any serious reader to take advantage of the widely available reference, and decide whether he has accurately represented it.

Promoting Naturalism?
In his introductory sentence, Dr. Reed asserts that scientists use the geologic timescale as a means to promote their philosophical disposition to naturalism. I would point out, however, that this accusation is no more meaningful than accusing Dr. Reed (or any YEC) of using the rock record to promote his/her predisposition to a so-called Biblical model of history. Scientists from a range of philosophical (and religious) backgrounds have constructed the geologic timescale through a variety of scientific methods. Whether you agree with the validity of these methods is not relevant; the point is that scientists have long worked together to reconstruct Earth history and subsequently interpret the philosophical implications of that history within their respective worldviews. Many of the earliest attempts to construct a geologic timescale were made by Christians, some of whom speculated ages of rock formations much older than had been previously assumed (e.g. Nicolas Steno; or Thomas Chalmers, who fully expected that young-Earth models would disappear within decades). Uniformitarian principles of geology were in place long before Darwin’s biological theories were articulated, let alone widely accepted. Most early biostratigraphers (scientists that correlate rocks based on fossils) rejected Darwin’s theories on the origin of species, despite their own predispositions to naturalism. Notwithstanding accusations by Dr. Reed and other YECs, most Christian geologists have been comfortable interpreting the rock record as a reliable proxy for Earth history (including the evolutionary development of life), recognizing that in itself, the reality of the geologic timescale cannot speak to philosophical commitments that underlie our investigation of nature. Granted, if the evidence pointed to a very young Earth, pure naturalists would face a greater challenge in accounting for life’s origin and development, but a young Earth in itself does not preclude naturalism. This accusation is a category error on the part of Dr. Reed.

Modern stratigraphy and absolute chronometers
Only a couple sentences later, Dr. Reed introduces a red herring to the discussion by citing Gradstein et al. (2004, p. 3), who note that “the chronostratigraphic scale is an agreed convention, whereas its calibration to linear time is a matter for discovery or estimation.” Apparently, Dr. Reed understands this to mean that scientists no longer empirically investigate the chronostratigraphic scale (or never did?), but rather ‘fit’ the facts by means of ‘convention’ into their preconceived template of Earth history, and he uses the citation to cast doubt on the methodology of stratigraphers. If you’re confused by the terminology, let’s take a quick detour. The chronostratigraphic scale refers to the relative ages assigned to rocks using the methods I discussed last week. For example, we assume that a rock layer is younger than underlying rock layers. Furthermore, the consistent order of fossils is used to group rocks into Periods, such as the Cambrian, Ordovician, Silurian, etc. One does not need a degree in geology to understand how Dr. Reed has abused the citation (but it helps to read the full paragraph preceding it). In saying that the chronostratigraphic scale is “an agreed convention”, Gradstein et al. (2004) have only described how scientists have assigned labels to each interval in the rock record, not how they determined the order. Through empirical investigation, scientists have documented the succession of brachiopod fossils throughout the rock record, for example, but assigning a categorical cutoff (such as Cambrian brachiopods versus Ordovician brachiopods) is an agreed convention. In other words, scientists cannot, by definition, ‘discover’ that the Ordovician period actually preceded the Cambrian period any more than one could ‘discover’ that the Egyptian Middle Kingdom actually preceded the Old Kingdom! One could, however, propose new calendar dates for the range of each Kingdom through empirical investigation, just as geologists can propose new ‘calendar dates’ for the Cambrian-Ordovician boundary if the evidence demands it.

Within the introduction, Dr. Reed also asserts that the geological timescale lacks an absolute chronometer, which would mean that scientists have no way of assigning absolute ages to rocks. As I mentioned last week, no scientist believes that we can determine absolute ages of rocks, but rather that we have a working scientific model of estimating those ages. The difference is subtle, but important – can you pick it out? As with any scientific model, assumptions are made, but the progress of geochronology has only refined the respective assumptions and improved our confidence in the ages now assigned. Dr. Reed is correct in noting that dating methods “exhibit uncertainty, and...assume rather than prove deep time,” but it is unclear why this is relevant to the discussion. If a coroner examines a corpse and estimates the age of the person to be 85 years at death, his/her method not only exhibits uncertainty but also assumes the reality of the last 85 years. Likewise, when scientists estimate that Codex Sinaiticus (the oldest complete copy of the Christian scriptures) was compiled in ~325 A.D., their methods exhibit uncertainty and assume the reality of the past 1700 years. Neither case precludes the dating method from adding meaningful information to the discourse, however. Epistemological principles underlying historical scientific methods are by no means unimportant, but simply citing such principles as reason to dismiss the results constitutes yet another category error on the part of Dr. Reed. When a geologist obtains a radiometric date of 500 million years, it is understood that the method makes assumptions about the physical history of the rock and the reality of the past 500 million years. But why does Dr. Reed think the geologist has made an error in assuming deep time? Because of a particular understanding of Scripture – an understanding that is rooted, no less, in the principles of Hebrew grammar and syntax, textual criticism, and hermeneutics.

A multiplicity of methods
If you recall the analogy I made to reconstructing history from a set of tattered diaries, I attempted to show how a multiplicity of dating methods can provide internal checks, verify or falsify key assumptions, and improve the overall resolution of the model. In geology, the case is no different, yet Dr. Reed claims that “the need to bounce back and forth from one method to another reveals the fundamental lack of a consistent ‘clock’ against which the rocks can be calibrated.” It is unclear what Dr. Reed means by “the need to bounce back and forth” between methods — perhaps he is referring to the fact that not every method can be applied to every rock? — but it does not logically follow that no reliable ‘clock’ exists. While radiometric dating methods have been refined (or replaced) over the years, this hardly constitutes “repeated failures” that undermine the geologic timescale. Moreover, it is a caricature for Dr. Reed to imply scientists deemed these methods “infallible” or proclaimed them as “scientific gospel.” On the contrary, inconsistencies in radiometric dates have only improved our understanding of the respective methods. For example, when historic lava flows yielded anomalously old Potassium-Argon (K-Ar) dates (e.g. Dalrymple, 1969), the assumption that all argon should be excluded during crystallization was falsified. Advances in scanning electron microscopy (SEM) and electron microprobe analysis revealed compositional zonation within individual minerals that were previously assumed to be homogeneous. Such technological advances were seminal to the development of the more accurate and consistent 40Ar/39Ar method, upon which the modern geologic timescale now heavily relies. However, one should never forget that radiometric ages will always represent model ages, and thus are subject to change in the case that our improved understanding of geology falsifies the underlying assumptions.
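To make the “model age” idea concrete, here is a toy sketch of my own (not anything from Dr. Reed or Gradstein et al.) of the generic parent–daughter age equation, t = (1/λ)·ln(1 + D/P), and of how unaccounted “excess” daughter product—like the inherited argon in those historic lava flows—biases an apparent age old. The K-Ar specifics (e.g., the branching decay of 40K into both 40Ar and 40Ca) are deliberately ignored here for simplicity.

```python
import math

# Generic radiometric age equation: t = (1/lambda) * ln(1 + D/P), where D is
# the measured radiogenic daughter and P the remaining parent. This is a toy
# illustration, not a real data-reduction workflow (K-Ar branching is ignored).
LAMBDA_K40 = 5.543e-10  # total decay constant of 40K, per year (commonly cited value)

def apparent_age(daughter_parent_ratio, decay_constant=LAMBDA_K40):
    """Apparent age in years implied by a measured daughter/parent ratio."""
    return math.log(1 + daughter_parent_ratio) / decay_constant

# A sample with no initial daughter and a true age of 100 million years:
true_ratio = math.exp(LAMBDA_K40 * 100e6) - 1
print(round(apparent_age(true_ratio) / 1e6, 1))         # -> 100.0 (Myr)

# If inherited ("excess") argon inflates the measured daughter by 10%,
# the apparent age is biased old -- the effect seen in historic lava flows:
print(round(apparent_age(true_ratio * 1.10) / 1e6, 1))  # -> 109.7 (biased old)
```

Note that the bias is detectable in practice: dating several co-genetic samples, or the same rock by an independent method, exposes the violated assumption rather than silently absorbing it.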

Incompleteness of the rock record
Dr. Reed continues his assessment by characterizing the stratigraphic record as patchy and incomplete. What this means is that in any given location, only a fraction of Earth history has been recorded in the rock record. This comes as no surprise to any geologist, nor should it to anyone in the general public. Currently, sediments are accumulating (and thereby recording Earth history) in the San Joaquin Valley of California, but are not accumulating in the adjacent mountain ranges (which are actually the source of those sediments). In order for sediments to accumulate where the Sierra Nevada range is currently located, the mountains must be weathered down and subside to form a sedimentary basin. How long do you suppose this would take? As you try to ‘guesstimate’ the answer, you can appreciate why considerable time gaps are expected to exist within the rock record. Although Dr. Reed presents this fact as an embarrassing challenge to the “pure empiricist” (which, by the way, no scientist is), the absence of rock record for a given time interval commonly provides valuable information to the geologist. First of all, it reveals that the area did not constitute a sedimentary basin, but rather a source of sediments to adjacent regions. For example, Cretaceous rocks can be found throughout much of eastern Utah. From east to west, the rocks transition from silty/limey sediments with marine fossils to sandy/silty sediments with terrestrial fossils, suggesting that a shoreline ran through the middle of the state, with the sea to the east and highlands to the west. Thus we can predict that a mountain range was present in western Utah and eastern Nevada during the Late Cretaceous and provided sediments to riverine deposits found to the east (for reference, Bryce Canyon National Park contains a record of these deposits). If our interpretation is correct, there should be no rock record for this interval to the west (an unconformity), but we should find evidence of those mountains in rocks from central to eastern Utah (i.e. fragments of previously formed sedimentary and igneous rocks). Since I took the time to describe the example, you may have guessed that this is exactly what we find. Not only do eastern Nevada and western Utah contain a discontinuity in the stratigraphic record for this time interval, but sandstone layers from the Upper Cretaceous rocks in central-eastern Utah contain abundant fragments of older (Paleozoic) sedimentary rocks now exposed in Nevada and western Utah. Moreover, detrital zircons (pieces of zircon mineral grains from igneous rocks now found in sedimentary rocks) can be dated to track the source of sediments, and have been examined in Upper Cretaceous rocks from central Utah. A recent study documented clusters of detrital zircon ages in the range of 81-76 Ma (Jinnah et al., 2009), which is consistent with ages of volcanism in western Nevada and southern Arizona.

Before moving on, we should consider the implications of the previous example. The data imply that sediments comprising Upper Cretaceous sandstones of central Utah came from recycled sedimentary rocks to the west, with some source from volcanic rocks farther to the west. Thus Paleozoic rocks of Nevada/Utah needed time to accumulate, harden into rock, and undergo weathering and erosion before being carried more than 100 km to accumulate in a newly formed basin. This is in addition to large volcanic eruptions, which needed time to cool and crystallize (but not too much time, since rapid crystallization forms glass and not euhedral crystals). Is it any surprise that geologists have not quickly abandoned their assumption of deep time in spite of difficulties encountered while refining the geologic timescale?

In case you did not follow my example, consider another analogy to history. The record of human history is notably patchy and incomplete, just like the rock record. History has not been preserved for a majority of ancient peoples, due to an absence of written records or a subsequent destruction of evidence. While this provides a significant challenge to historians and archaeologists, they have been able to apply a multiplicity of scientific methods to piece together isolated records and reconstruct a meaningful history of mankind. Similar assumptions go into detailing human history as in geology, yet there is no outcry against historians for presenting an equally uncertain history with confidence.

Little green men?
Finally, Dr. Reed’s claim that Creationists can use biblical history as a template for understanding geologic history is simply misguided. Even assuming Dr. Reed’s interpretation of biblical history (i.e. a young Earth and a global flood), biblical history is by no means exhaustive. It is equally valid to propose that “little green men...influenced the course of evolution” after the Flood as it is to propose the same happened during a depositional hiatus in the Cretaceous. To respond that the preposterous story is contradictory to biblical history would be an argument from silence. I would encourage any readers to strongly consider the implications of Dr. Reed’s silly thought experiment. Are we to fear gaps in our understanding of nature, past and present, because they introduce uncertainty to our conclusions? I hope to address Christians in particular: do not advances in science improve our understanding of the world that was made and the one who made it? Yet scientific advances are not possible without treading boldly across those gaps in the hope that we can diminish uncertainty. Biblical theology lays the epistemological framework for empirical investigations of the natural world, but a reliance on all of scripture (including Genesis) as authoritative in matters of faith does not preclude our need of scientific investigation to understand nature and history. Rather, our use of science is warranted thereby. Once again, it is worth mentioning that even Dr. Reed’s interpretation of biblical history is dependent on historical and social sciences.

Dr. Reed’s understanding of geologic dating methods

I think that perhaps I should have begun with this section, in which I want to address the claims by Dr. Reed concerning specific dating methods. As much fun as it is to discuss the philosophy of historical sciences, don’t we just want to know whether such dating methods even work? Absolutely, and if you’ve read to this point, I appreciate your patience. So let’s take a closer look at each method mentioned in Dr. Reed’s criticism.

Radiometric dating
Radiometric dating methods have been constantly refined as our knowledge of the geology and physics behind the methods improves. The most commonly used methods for constructing the geologic timescale are the 40Ar/39Ar and U-Pb techniques, but other older methods have by no means been “thrown out”, as Dr. Reed asserts. I think the confusion lies in the fact that he primarily references Gradstein et al. (2004), who mainly considered dates for Period and Stage boundaries in the geologic timescale (i.e. they were dating only stratigraphic units of rock, such as volcanic ash layers). Dr. Reed does not offer a firsthand critique of the supposed shortcomings of each method, but is confident that Young-Earth critiques have sufficiently proven each to be “fatally flawed,” and that “the rock-solid chronology of radioisotopes has turned into quicksand.” I can only respond that the assessment is extremely optimistic, to say the least. Any review of published scientific literature employing radiometric dating techniques will show that by and large the results are consistent, and many underlying assumptions can be verified. Radiometric dating methods use a scientific model that does not always correspond to reality for a given sample, and hence discordant results do exist. I am aware that Dr. Andrew Snelling and others have compiled such outlier cases to sow uncertainty and doubt about these methods among the public, but I would warn against taking their claims too seriously. As a researcher in geology, my exhortation to you is to look more closely at the big picture, and realize that scientists have ignored neither discordant data nor uncertainties in each method. On the contrary, such discordant data can give very useful information about a rock’s history. If you are still interested in the particular ‘problems’ raised by Dr. Snelling and others, I would be happy to add more detail to the discussion in future posts. For the time being, let’s consider the rest of Dr. Reed’s comments.

Although Dr. Reed does not believe geologists have any absolute chronometer, he is aware that radiometric dating “provides the only theoretical way to directly obtain absolute dates for virtually all of the rock record.” So if he is wrong about the unreliability of radiometric dates, then his entire argument fails, because geologists do have an absolute chronometer, or ‘reliable clock’, against which they can calibrate the chronostratigraphic scale. But Dr. Reed insists on adding confusion to the discussion with a rather nonsensical line of reasoning: “Fundamentally,” he says, “isotopic dates cannot confirm the stages of the timescale because uncertainty in these methods precludes a certain chronology.” By “isotopic dates”, I assume he means radiometric dates, and by “stages of the timescale” I assume he is referring to the intervals labeled over the years by geologists (such as the Cambrian, Ordovician, etc.). Is he trying to say that geologists cannot confirm the time span of the Cambrian or Cretaceous, for example, because there are uncertainties in the dating methods? Does he believe that geologists have a preconceived age of each stage? (They don’t.) And what does it mean that uncertainty in the methods “precludes a certain chronology?” Which chronology? Or does he mean to say that the ± sign is too much uncertainty for scientists to handle? Dr. Reed continues:

“If radiometric dating is uncertain, then geologists continue to argue in a circle. This is because the primary argument about radiometric dating is not whether it is generally correct or generally incorrect but whether or not it is the reliable chronometer—the magic hammer that can set the golden spikes of time. A method that is not absolute cannot provide absolute dates. If it can be wrong some of the time, then it can be wrong at any given time, and therefore any given date cannot possess the certainty generally assumed by stratigraphers. For example, note how the argument that current methods are accurate reveals inaccuracies in other methods that once enjoyed equal confidence.”

Now things are getting ridiculous. I am not sure where Dr. Reed picked up his notions about how science is supposed to work, but it certainly wasn’t by contributing research to the field. It appears that he expects geologic dating methods to be proven infallible or considered useless, but where does this expectation come from? Isaac Newton used rather simple geometric methods and gravitational theory to estimate the distance to the moon. As technology improved so did estimates for this distance, and Newton’s original calculation was shown to be reasonably accurate (despite errors in some of his assumptions and variables). Technology will continue to improve, and estimated distances to all planetary objects will be updated correspondingly. But according to Dr. Reed’s line of reasoning, this means that nobody should tout with confidence that the moon, Sun, and stars are long distances away because there are uncertainties in our calculations! In fact, it’s probably just an illusion and no planetary object is further than the uppermost stratosphere. Yes, I know manned spacecraft have been there, but I could always propose that they fail to take into account changes in the physical laws of the universe as one ventures farther from the Earth’s surface. I would encourage Dr. Reed to spend more time arguing science as it is used by scientists, and less time redefining terms to play games with semantics. Yes, radiometric dates are wrong some of the time, and when they are wrong (discordant, at least) then geologists devote much more time exploring why they were wrong in that case. Then they formulate a hypothesis, test the hypothesis, and repeat the experiment in line with the scientific method. Dismissing scientific models because of uncertainties is not science; it is unwarranted skepticism (the same brand of skepticism employed by those who doubt the early authorship or textual transmission of the New Testament, for a distant but relevant analog).

Following the quote above, Dr. Reed cites Gradstein et al. (2004) to convince his audience that with each new radiometric dating method, older methods lose their once-held confidence. However, the citation was only discussing why certain methods (such as the Rb-Sr and Sm-Nd methods) are not used as precision chronometers. This is a mis-citation on the part of Dr. Reed, who apparently does not understand the geologic reasons behind the preference. Methods such as Rb-Sr, Sm-Nd, and K-Ar still have application in geology and yield meaningful results, but there is more room for error in stratigraphic units where hydrothermal fluids have interacted with the rock, because sedimentary rocks have much higher porosity and permeability (i.e., water flows more freely through them) and their crystal systems are less isolated. Although precise tuning of the geologic timescale is possible with these methods, it requires much more work in terms of quality control. So why waste the time and money?

Before moving on, I wanted to point out that Dr. Reed seems to think the point of radiometric dating methods is to substantiate a common belief in evolutionary theory by demonstrating the existence of deep time. However, any geologist (or geochronologist) would scoff at the association, recognizing that the age of rocks and the validity of evolutionary theory are two separate issues. Unfortunately, Dr. Reed’s association is very effective when it comes to the general public, which is more skeptical about (and spiteful of) evolution than the age of the Earth. Lastly, Dr. Reed claims that “while radiometric dating remains the mainstay of the timescale, it does so because the alternative is to admit...that the age of the earth has not been demonstrated to be measured in billions of years and that the historical record of the Bible is back on the table.” Once again, I think any geologist (Christians included) would scoff at the claim. Radiometric dating methods are not on the brink of extinction, and geochronologists are by no means scrambling to counter the claims of AiG’s RATE team. But assuming they were, would a 6,000-year history be the only alternative? Deep time is not demonstrated by radiometric dating alone (or even primarily), but through a broad understanding of the geologic processes responsible for rocks seen today: the accumulation of sediments; the emplacement of large magma bodies; the crystallization and exhumation of igneous plutons; regional metamorphism of massive sedimentary rock bodies; spreading of the ocean floor; and biogenic structures, including the sheer number of fossils and biomass contained within sedimentary rocks. Yet YECs like Dr. Reed create the illusion of a discipline in crisis by addressing these evidences in isolated cases rather than in the big picture.

I am going to take this one point by point and save a lengthier discussion for another time. Also, I encourage you to read Dr. Reed’s section on biostratigraphy in full before considering my comments.

“Biostratigraphy is the use of index fossils to assign ages to the rocks that contain
them. As has been noted by many creationists, the argument is circular because the deep time of evolution is a presupposition of the method.”

This is false. Biostratigraphy is a method used to correlate rocks based on the fossils they contain, since it is assumed that fossils represent the flora and fauna living at the time of deposition (an assumption verifiable by other geologic methods). Fossil assemblages were categorized early on, based on the location of the rocks containing them (e.g. Cambrian, from Cambria, the Latin name for Wales). Further categorization allows biostratigraphic correlations to be more precise as new species are found and more sections of sedimentary rock are analyzed. Fossil species are considered index fossils if 1) the first and last appearance of that species in the rock record can be dated radiometrically, with repeatable results; 2) the fossil can be found in multiple localities around the world, and radiometric dates for those rocks are consistent with others; 3) the fossil is abundant in many rock types (i.e. dinosaurs need not apply). If these criteria are met, index fossils can be used to assign an age range (not an absolute age) to sedimentary rocks containing that fossil. The reasoning and process are quite simple, but how do we know it works? Well, I would first point you to the success of the oil industry, which relies heavily on biostratigraphy to pinpoint the location of oil reservoirs. Furthermore, I have worked in sedimentary sections myself, and have collected thousands of fossils. The order of fossil organisms is amazingly consistent, down to the subspecies level, and provides excellent evidence for the evolutionary history of life, as well as the long ages estimated radiometrically. But that is a discussion for another day.
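The overlap logic behind index fossils can be sketched in a few lines of code. This is my own toy illustration (not a real biostratigraphic workflow), and every fossil range below is invented for the sake of the example:

```python
# Toy sketch of the overlap logic: a rock bed's age must fall inside the
# known (radiometrically calibrated) range of every index fossil it holds.
# All ranges are invented for illustration; ages are in Ma (millions of years).

def assemblage_age_range(fossil_ranges):
    """Return (youngest, oldest) bounds consistent with every fossil present."""
    oldest = min(old for young, old in fossil_ranges)      # most recent first appearance
    youngest = max(young for young, old in fossil_ranges)  # most ancient last appearance
    if youngest > oldest:
        raise ValueError("No overlap: reworked fossils or bad range data?")
    return youngest, oldest

# Three hypothetical index fossils found together in one shale bed:
ranges = [(85.0, 95.0), (88.0, 110.0), (80.0, 91.0)]
print(assemblage_age_range(ranges))  # -> (88.0, 91.0)
```

Notice that adding more species to the assemblage can only narrow the interval, never widen it, which is why correlations built on dozens of species are so robust.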

“As an aside, note that the use of ‘key’ radiometric dates tacitly admits that some are better than others.”

That’s true. If you want to constrain the duration of geologic stages on the timescale, then a radiometric date taken from a stage boundary is much better than one from the middle of the stage. But I suspect Dr. Reed has confused the use of the word ‘key’ here. It does not mean ‘dates that agree with our presuppositions.’

“Time periods or stages are ‘scaled geologically’ or assembled in their ‘proper order’ using index fossils. This can happen only if the truth of evolution is known in advance and if its progression is adequately preserved in the fossil record.”

This is completely false. Geologic time periods were assigned long before evolutionary theory entered man’s consciousness. The order is determined by the relative ages of rocks, which is determined by basic principles of stratigraphy. Here, Dr. Reed is citing another author, who was only making the point that I’ve been making all along. Biostratigraphy is used in tandem with sedimentary stratigraphy to assign relative ages and define stages for fossil-bearing rocks. This process does not require a knowledge of, or reference to, evolutionary history. Dr. Reed’s thinking is completely backwards on this topic.

“If the timescale has to be stretched in linear time with radiometric dates, does not that imply that the rock record itself does not give the appearance of age determined by radiometric methods—even with the assumption of evolution?”

Not at all. Again, Dr. Reed is playing semantic games with a citation from A Geologic Time Scale. On a side note, it seems most of his citations come from the first page of chapters in the book, leaving me to wonder whether he is familiar with the actual content. The original author referred to geological scaling techniques used in biostratigraphy. For example, the range of a certain fossil must be measured in multiple sections of sedimentary rock, but the thickness of each section will vary from one to the next (sediments do not accumulate at the same rate in different water depths, climates, etc.). Scaling techniques allow geologists to estimate the age-to-thickness ratio for each section, so that an age can be assigned to each fossil or event once radiometric dates are available. Interestingly enough, radiometric ages invariably become younger upward through each rock section, as predicted by the interpreted relative ages of those rocks, and fit the geological scaling very well. Thus the “stretching” referred to by Dr. Reed has nothing to do with apparent lengths of time, but with calibration of an unknown timeline to a timeline with known points of reference.
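At its simplest, the scaling just described is linear interpolation between dated horizons. Here is a minimal sketch, with purely hypothetical heights and ages (real scaling models also account for changing sedimentation rates):

```python
# Toy sketch: the "age-to-thickness" scaling is essentially linear
# interpolation between radiometrically dated horizons in a section.
# Heights (m above the section base) and ages (Ma) are invented.

def interpolate_age(height_m, tie_points):
    """tie_points: (height_m, age_Ma) pairs; ages decrease upward."""
    pts = sorted(tie_points)
    for (h0, a0), (h1, a1) in zip(pts, pts[1:]):
        if h0 <= height_m <= h1:
            frac = (height_m - h0) / (h1 - h0)
            return a0 + frac * (a1 - a0)
    raise ValueError("height lies outside the dated interval")

ties = [(0.0, 94.0), (40.0, 92.0), (100.0, 89.0)]  # e.g. dated ash beds
print(interpolate_age(70.0, ties))  # -> 90.5 (Ma)
```

Any fossil's first or last appearance at a given height can then be assigned an age the same way.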

“Fossilization assumes in situ, low-energy paleoenvironments. Any high-energy catastrophic transport of fossils out of their “home” environment invalidates the scheme.”

That is absolutely true. However, the catastrophic transport of anything, fossils included, leaves behind distinct sedimentary structures and characteristics. Thus the assumption can be verified by a simple field analysis of the rocks, as well as chemical analyses in the laboratory (that would be my field of study). The vast majority of index fossils are taken from fine-grained marine shales and carbonates, which show no evidence of transport (catastrophic or not).

“Since fossils do not show evolutionary transitions, the dates are purely conceptual. This is demonstrated by comparing the evolutionary 'dates' from the nineteenth century with those of the twentieth century.”

Dates assigned to index fossils, once again, have nothing to do with evolutionary theory. I am stunned that Dr. Reed thinks it appropriate (except perhaps as entertainment) to compare “evolutionary dates from the nineteenth century” with radiometric dates now assigned to index fossils. On what were nineteenth-century dates based? And for the record, many fossils do show evolutionary transitions.

“Ignorance of the complete fossil record demands empirical uncertainty...Living fossils and changing ranges of index fossils highlight that uncertainty.”

Our knowledge of the fossil record is certainly incomplete, and nobody denies this. However, biostratigraphy relies on rock sections where the first and last appearances of a given fossil are documented in multiple sections around the world. Furthermore, correlations are never based on a single fossil type, but on dozens of fossil species that comprise a complex assemblage. Thus even if several fossil species disappeared from the rock record without actually going extinct, it would not affect biostratigraphic correlations to any meaningful degree. If new evidence suggests a better constraint on radiometric dates assigned to biostratigraphic intervals, then the range will change, but this has nothing to do with “ignorance of the complete fossil record”. Finally, although living fossils exist, these organisms are never used in biostratigraphy. Index fossils are typically microorganisms such as foraminifera, pollen, and radiolarians, or small shelled organisms such as brachiopods and trilobites. Has anyone demonstrated the existence of Cretaceous foraminifera in modern oceans?

“The predominance of marine invertebrates as index fossils arbitrarily biases sampling.”

This is by no means arbitrary. The reasons for using marine invertebrates are 1) their skeletal structure changes more frequently throughout the rock record, so that species can be distinguished more easily; 2) they occur in rocks formed in marine environments, where deposition is more constant and erosion is rarer. But I can’t figure out what Dr. Reed means by “sampling” here. Sampling of what?

“For nearly 200 years, naturalists have asserted that evolutionary history is preserved in the rocks and have thrown that rock record into the teeth of Christianity.”

This claim is both inaccurate and unfair to all. First of all, Darwin’s theories were not used in biostratigraphy until decades after he introduced them (and long after the advent of biostratigraphy). Secondly, evolutionary history is preserved in the rocks, but this has nothing to do with Christianity, the tenets of which do not define our expectations for the rock record.

Dr. Reed devotes the rest of the section to commenting on a citation from Gradstein et al. (2004), who admit that some problems exist with “treating strata divisions largely as biostratigraphic units.” Of course, this admission seems very exciting to Dr. Reed, who perceives that “the biostratigraphic interpretation of the rock record is perhaps not so clear after all”, but I am certain he doesn’t understand the implications thereof. For one, Gradstein et al. are explicitly referring to cases where stage boundaries are defined only by biostratigraphic markers (fossils). Currently, this applies to about half of all stage boundaries, but that number is decreasing rapidly. Secondly, the uncertainty introduced by the problems that Gradstein et al. summarize does not affect the absolute ages of the timescale, but only where to place the age marker in a given sedimentary rock section. Imagine that an argument existed over where to define the beginning of the day: should it be at midnight, or should it vary based on sunset/sunrise? This is similar to the argument over which fossils should be used as boundary markers, but notice that neither option results in shorter or longer days. Now consider times in history before the invention of mechanical clocks. How do you define midnight then? More importantly, do uncertainties in rudimentary time-keepers give us reason to doubt the reliability of human history before the advent of Swiss watchmakers? Obviously not, and likewise there is no reason to dismiss the strength of biostratigraphy to correlate rocks. Uncertainties exist, but they don’t change the big picture by any stretch of the imagination.

Astronomical cycle stratigraphy
If you’re not familiar with the concept of Milankovitch Cycles, don’t worry. The theory is rather straightforward: 1) Earth does not follow the same path every time it orbits the Sun; 2) variations in the axis of Earth’s rotation and the shape of its orbit occur over long periods of time; 3) variations in Earth’s orbit and axis affect the amount of energy received from the Sun, which affects the strength of seasonality and overall climate; 4) these variations are cyclic, like a sine wave, so the path of Earth’s orbit can be extrapolated over time. Combined, these four premises (and yes, I’m simplifying) are used to formulate a predictive theory about Earth history. We can predict, for example, that climate-dependent characteristics of sedimentary rocks should record astronomical cycles to some extent. If you’re confused, just think of it this way. Day and night are the result of an astronomical cycle — namely, the rotation of the Earth (sometimes you face the Sun, sometimes you don’t). Seasons are the result of Earth’s orbit around the Sun, combined with the fact that Earth rotates on an axis not perpendicular to that orbit. Milankovitch cycles are no different, qualitatively. Just imagine them as long-term seasons, which recur on the scale of 26,000, 41,000, and 100,000 years.
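If you like, the composite of those long-term "seasons" can be simulated and recovered numerically. The sketch below is purely illustrative: the three periods come from the paragraph above, while the record length, sampling, and noise level are all made up, and real cyclostratigraphy uses far more careful spectral methods than a raw periodogram.

```python
import numpy as np

# Toy sketch: build a synthetic "climate" signal from the three Milankovitch
# periods named above, add noise, and recover the periods with a periodogram.
t = np.arange(0, 2_000_000, 1_000.0)               # 2 Myr sampled every 1 kyr
periods = [26_000, 41_000, 100_000]                # years
signal = sum(np.sin(2 * np.pi * t / p) for p in periods)
signal += np.random.default_rng(0).normal(0, 0.5, t.size)  # measurement noise

power = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(t.size, d=1_000.0)         # cycles per year

# The three strongest non-zero frequencies sit at (nearly) the input periods:
top = freqs[np.argsort(power[1:])[-3:] + 1]
print(sorted(1 / f for f in top))                  # peaks near 26, 41, 100 kyr
```

The point of the exercise: superimposed cycles of different period remain separable in the frequency domain, even through noise, which is why statistical tests (rather than eyeballing) decide whether a rock section records orbital forcing.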

So what does Dr. Reed have to say about the use of astronomical cycles in stratigraphy? He states, “All such oscillations boil down to variations in solar radiation...reaching Earth...” As I also mentioned, changes in solar radiation are an important factor, but certainly not the only one. One must also consider the degree of seasonality (i.e. the temperature and precipitation difference between winter and summer) and changes in sea level (and not just resulting from climate change, but directly from astronomical forcing). All of the above factors directly affect the water depth, temperature, and rate of primary production, which in turn affect several characteristics of the sediments. At this point, Dr. Reed points out three major assumptions that he sees behind the use of astronomical cycles in stratigraphy:

“(1) cause and effect between oscillations and sedimentation to the extent that
this “signal” overrides terrestrial influences, (2) cyclicity and continuity in sedimentation driven predominantly by climate, and (3) uniformity of rates and preservation that enable the “signature” to be manifested.”

With regard to the first, I don’t see why Dr. Reed deems it necessary for the astronomical signal to “override” factors on Earth that affect sedimentation (say, tectonics?). In other words, geologists do not assume that astronomical cycles dominate the record (such as chemical or lithological changes in sediments), but recognize that the astronomical signal will be superimposed on any terrestrial signal. As for the second ‘assumption’, geologists recognize that discontinuities in sedimentation occur, and such discontinuities would pose a challenge to interpreting any astronomical signal. However, they can be identified through a variety of geological methods (petrographic analysis and/or isotopic trends, for example). Furthermore, determining whether an astronomical signal is present requires thorough statistical criteria (as opposed to pure visual discernment: “Yeah, I think I see some cycles there?”). The use of non-parametric statistical analyses removes assumptions about perfect preservation. This applies to the third supposed ‘assumption’ as well.

Dr. Reed follows with a wonderful citation from A Geologic Time Scale, which describes briefly how the method is applied to the last 23 million years (where the model remains predictive). However, Dr. Reed jumps past the brilliant success of the model in predicting sedimentary rock ages, which are later confirmed by radiometric dates, and proposes the existence of more supposed problems with the theory. For one, he notes that the method cannot be applied to rocks older than ~20 million years. This is true, but not in the sense that Dr. Reed assumes. Cycle stratigraphy can be applied to rocks of any age, just not when it comes to predicting the absolute age of those rocks from interpreted orbital cycles. In such cases, the method works more like using a ruler on a football field: we can use it to measure out fine-scale distances from a known marker (say, the 50-yard line). Thus if we have a single rock layer of known age (from a radiometric date), then we can use astronomical signals to estimate the age of the surrounding layers as we move away from the layer of known age. This has been used to estimate the exact duration of biostratigraphic zones, where the uncertainty in radiometric dates is larger than the duration itself (e.g. Locklair and Sageman, 2008).
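The ruler analogy boils down to simple arithmetic: count cycles away from the dated bed and multiply by the cycle duration. A toy version follows; the anchor age, and the assumption that each bedding couplet records one 41-kyr obliquity cycle, are both hypothetical choices of mine, not values from any particular study.

```python
# Toy sketch of the "ruler": count cycles away from one dated bed and
# convert the count to time. The anchor age and the 41-kyr cycle
# assignment are hypothetical.

CYCLE_KYR = 41.0  # assumed duration of one bedding couplet, in kyr

def age_from_cycles(anchor_age_ma, cycles_above_anchor):
    """Positive counts lie stratigraphically above (younger than) the anchor."""
    return anchor_age_ma - cycles_above_anchor * CYCLE_KYR / 1000.0

anchor = 89.37  # Ma, say from a radiometric date on a volcanic ash bed
print(age_from_cycles(anchor, 24))  # 24 couplets higher -> ~88.39 Ma
```

Note the resolution: each counted cycle is worth 41,000 years, far finer than the typical uncertainty on a single radiometric date, which is exactly why the method can resolve durations that radiometric dating alone cannot.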

Notice that at this point, we must ask the question: if the calibration of sediments to astronomical cycles can be verified for the past 20 million years (especially for the past 420,000 years), then why does Dr. Reed continue to write anything? The model has already been tested and tried for timescales much longer than the ~6,000 years he is defending, so what good is it to nit-pick about sources of uncertainty that are negligible to the big picture? Once again, it is unwarranted skepticism:

“Like varves or ice layers, geologists simplistically assume that the target sediments were deposited slowly, uniformly, and in response to regular climatic variables. Remove those assumptions and the whole theory crumbles.” (emphasis added)

No, the assumption is not simplistic by any means. It is only accepted after being demonstrated by multiple independent methods. One could argue that a modern lake with 20,000 varves is not necessarily 20,000 years old, but when multiple radiometric dating methods obtain single-layer ages consistent with the predicted sedimentation rate, then the argument becomes a gratuitous assertion. Dr. Reed fails to realize that in some cases, the criteria he names are not present (slow and uniform sedimentation in response to climatic cycles), and geologists are familiar with such cases. The “whole theory” has not “crumbled”, however, because it still explains the big picture and there are physical reasons for such exceptions. Deeming such cases exceptions requires application of the scientific method rather than wholesale, unwarranted dismissal of the facts. But Dr. Reed continues with a rapid fire of more gratuitous assertions.

“...any rapid or catastrophic style of sedimentation would render this style of dating meaningless...” or “...diagenesis could easily alter carbonate sequences enough to mask the signal...”

And these scenarios can be ruled out easily by sedimentological, stratigraphic, and geochemical criteria. I’ve done this myself. It takes work; it takes time; but it’s not hard.

“...large submarine slump would generate turbidites that hypothetically could show a regular cycle of interbedded lithologies. Yet deposition would happen instantaneously. What would a plot of the various chemical ratios up through such a deposit show?”

Turbidites are quite easy to pick out in the rock record. For one, they produce coarse-grained lithologies in deep-water settings — a good indication that you picked a bad spot to interpret “astronomical forcing”. And to answer Dr. Reed’s rhetorical question, the chemical ratios would be stochastic, and would fail statistical criteria. Of course, this can be tested quite easily, and I’d be happy to run the samples for Dr. Reed if he were to provide them.

On a final note, Dr. Reed offers that the Flood model would undermine all assumptions made by cyclostratigraphers. Of course that is true, but Flood geologists have yet to propose a working model that could predict sedimentary and geochemical trends observed in sediments, ice records, speleothems, and more. Until then, Dr. Reed’s comments only resound of skepticism based on personal preference. In the meantime, geologists have produced thousands of studies that use orbital cycling to correlate sedimentary rocks. Their success is witness to the viability of the method.

I don’t think I’ll spend any time here discussing the details of magnetostratigraphy; I would prefer to challenge you all to read Dr. Reed’s comments on the discipline and see whether his argument is consistent. In any case, I felt it was worth commenting on at least one misconception:

“Note that [magnetostratigraphy] assumes plate tectonic theory and measurable spreading rates. But if the rocks can be dated well enough to supply those rates, then why is there a need for magnetostratigraphy?”

Here, Dr. Reed is referring to the dating of magnetic reversals using the ocean floor, but he obviously does not see the application to other rocks. Magnetic signatures can be taken from sedimentary rocks of all kinds, and are more typically used to reconstruct the movement of continents over time (magnetic signatures also provide the latitude during deposition). Changes in the polarity of those signatures are used to correlate the sedimentary record (the result of sediments burying fossils) to the basalt record of the ocean floor (the result of volcanism at mid-ocean ridges), and with minor exceptions, they match up very well. So we must then ask Dr. Reed, how do you explain the correlation in a young-Earth model? I understand that Dr. Reed is confident that rapid magnetic reversals can be explained by the geophysical models of Dr. Humphreys and others, and that we should expect to see reversals in both rock records, but why should they correlate at all? For example, why should sedimentary rock sections containing the Barremian-Aptian boundary (determined by the fossils present) also yield similar radiometric dates (~125 Ma) and show similar magnetic reversal patterns (e.g. He et al., 2008)? In the ‘uniformitarian model’, the answer is obvious. But it remains unclear how, in a Flood model, all of these processes are related or why they should produce consistent data.

Final thoughts

Dr. Reed devotes the remaining sections of the article to demonstrating his lack of familiarity with the construction of the geologic timescale, and particularly his inability to understand the application of geologic dating methods. Furthermore, he does not fully understand the assumptions that go into each method, and contradicts himself in trying to articulate them. For example, he repeatedly refers to an assumption of constant sedimentation rates, or constant spreading rates at mid-ocean ridges, while ignoring the fact that he has already cited authors who would never consider those assumptions valid or necessary.

On an unrelated note, Dr. Reed’s writing style can be misleading in itself. For one, the use of “quotes” around words to encourage doubt is simply inappropriate for scholarly discussion, because it creates the illusion that a meaningful argument has been made by subtly adjusting the connotation for the reader. I highly doubt any of Dr. Reed’s audience would take me seriously if I constantly referred to the “magical instance” called “the Flood” that Dr. Reed has “verified” by “science.” Out of respect for the discussion and for the truth, I’d prefer to take the issue more seriously.

So while there is obviously more detail to be discussed, I wish to stop here and simply ask you, which model has thus far explained the big picture? Do you believe that the geologic timescale is in crisis? If so, to what degree and what is the alternative? I hope that I have been able to accurately summarize methods used by geologists to interpret Earth history. Further, I hope that you would not be afraid to ask a geologist if you have questions about how things work, and especially if you find Dr. Reed’s arguments to be convincing on any point. As you can see by the length of my discussion here, many geologists are more than happy for the opportunity to just...talk about rocks.

References Cited:

Gradstein, F.M., Ogg, J.G., Smith, A.G., 2004, A Geologic Time Scale: Cambridge University Press, 589 p.

Jinnah, Z.A., Roberts, E.M., Deino, A.L., Larsen, J.S., Link, P.K., Fanning, C.M., 2009, New 40Ar-39Ar and detrital zircon U-Pb ages for the Upper Cretaceous Wahweap and Kaiparowits formations on the Kaiparowits Plateau, Utah: implications for regional correlation, provenance, and biostratigraphy: Cretaceous Research, v. 30, p. 287-299.

Locklair, R.E., and Sageman, B.B., 2008, Cyclostratigraphy of the Upper Cretaceous Niobrara Formation, Western Interior, U.S.A.: A Coniacian–Santonian orbital timescale: Earth and Planetary Science Letters, v. 269, p. 540-553.

Friday, November 19, 2010

Methods to Dr. John K. Reed's Madness: Deconstruction and the Geologic Timescale, Part 1

While many researchers in geology are committed to describing the fundamental processes of our dynamic Earth, others attempt to elucidate the details of Earth history by investigating the rock record. For example, a volcanologist might study gases and lava emitted from a modern volcano to assess the volcano’s effect on the atmosphere (how much carbon, sulfur, etc. it emits) or whether it poses danger to the surrounding life. To accomplish this goal, the volcanologist might analyze the chemistry of the rocks to answer questions like: How deep/hot is the magma chamber? How explosive (viscous) is the lava? How often has the volcano erupted in the past? Are there any tectonic forces promoting volcanism? A thorough scientific investigation thus requires the volcanologist not only to consider the physics behind volcanic eruptions, but to examine the rock record for clues about the region’s volcanic history.

Yet geologists commonly take for granted the philosophical distinction between experimental and historical approaches in their research, and consequently receive criticism from a range of skeptical observers. “Nobody was there to observe it. You are simply making assumptions about the past and extrapolating the data over long time periods. This is not science because it is not falsifiable!” If you are a geologist (or a historian, for that matter), you are probably familiar with such claims, but I am willing to speculate that few of you have found necessary occasion to defend against them. So what do you do when a majority of the public discredits historical science, even mocking it as an oxymoron? Your best bet may be to continue in your research, realizing that when applied properly (confined by a common scientific method) a combination of historical and experimental approaches is capable of producing accurate and, most importantly, falsifiable results. But in the hope that I have piqued your interest, I want to consider a recent criticism of dating methods commonly used in geology.

And if you are not a geologist, then I hope you are still curious as to how the geologic timescale is constructed, and how we know whether those methods are reliable. So click here to download a PDF of the timescale, and let’s get into it!

The challenge

In a 2008 Creation Research Society Quarterly article, Dr. John K. Reed examined what he termed “the starting rotation” of dating methods – that is, four geological methods used to assign ages to rocks. The rotation includes radiometric dating, biostratigraphy, astronomical tuning, and isotope chronostratigraphy (or chemostratigraphy). [If you have no idea what any of these words mean, then you are in for a treat, because all of them are fascinating and I’m here to explain!] After a brief discussion of each method, Dr. Reed concluded:

‘The current stable of “scientific” methods is riddled by uncertainty, and a very large
element of faith is needed to believe that they constitute a valid and verifiable chronometer of Earth’s supposed 4.5 billion-year past. In reality, there is no “silver bullet,” no single absolute clock that has measured uniformitarian history.’

So we are left with the impression that: 1) we have yet to find an absolute time-piece of Earth history; 2) there is much reason to doubt the validity of published dates; 3) the scientific nature of each method is sufficiently questionable to earn “quotation marks”; and 4) there is such a thing as uniformitarian history.

I will spread my consideration of Dr. Reed’s claims over two articles. Below, I will consider the scientific background of geologic dating methods. In the next article, I will look more specifically at the individual methods and Dr. Reed’s assessment thereof.

Using a multiplicity of geological dating methods is like taking pages from a diary

For this analogy, I only require that you have an imagination. Imagine, for example, that you discovered a box of personal diaries in the burnt ruins of an old countryside village. Your hope is to piece together the historical details of that village — maybe to better understand its reaction to political turmoil in the major cities? — and the diaries are your only hope. But there is one problem. The diaries are old and worn down, which renders them all incomplete. Furthermore, exposure to fire and smoke, and perhaps some water damage, has erased the entry date from a majority of the pages. Is it still possible to apply a scientific method to reconstructing the history?

Let’s take a look at a single diary. It appears that in the 200 pages of entries, the entry date is still clear for 15 of those pages. This provides an absolute chronometer, meaning that it allows us to assign a real age to when those 15 pages were recorded. As for the rest of the entries, we can apply some relative methods of dating. For example, we can calculate the average number of pages between pages of a known age to get an idea of how often the person made a diary entry. We may also want to investigate the continuity of each record. In other words, phrases like “it’s been a long time since my last journal entry” can tell us where time gaps may exist in the record. Lastly, we can look at specific events (festivals, dates of birth/death of villagers, mention of a meteor shower or forest fire, etc.), but for this we require the other journals. If one journal contains a specific date for the marriage of villagers A and B, then we can assign that same date to journal entries from other diaries that mention the same marriage.
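To make the diary method concrete, here is a toy interpolation between the surviving dated pages. Every page number and date below is invented, and the method simply assumes a constant writing pace between dated pages (exactly the kind of assumption the next paragraph scrutinizes):

```python
from datetime import date

# Toy sketch of the diary method: interpolate entry dates for undated
# pages between the pages whose dates survived. All values are invented.
dated_pages = {12: "1820-03-01", 47: "1821-01-15", 160: "1824-06-30"}

def estimate_date(page):
    pts = sorted((p, date.fromisoformat(d)) for p, d in dated_pages.items())
    for (p0, d0), (p1, d1) in zip(pts, pts[1:]):
        if p0 <= page <= p1:
            frac = (page - p0) / (p1 - p0)
            return d0 + (d1 - d0) * frac  # a timedelta scaled by a float
    raise ValueError("page lies outside the dated interval")

print(estimate_date(100))  # an estimate in mid-1822
```

A phrase like “it’s been a long time since my last entry” would tell us the constant-pace assumption fails across that page, so we would split the interval there rather than interpolate through it.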

Of course, our historical reconstruction does not come without significant assumptions. Foremost, we assume the diaries were constructed by methods observed today: a living person drafted each page by their own hand, entries were made on sequential pages, and each entry reflected the author’s thoughts at the time. We are assuming that the calendar of the journal corresponds to our own calendar, and that the author was not mistaken when he/she recorded the date. Our relative dating methods also rely on assumptions about the consistency of journal entries. Using specific events as markers assumes that each journal is referring to the same event, and that the date of the entry in which it was mentioned corresponds to the date on which it actually took place (maybe the person was recalling an event from the year before?). And that is where the scientific method comes into play. In our reconstruction, we must apply specific criteria to how we obtain dates, and to deciding whether assumptions in our method have been falsified. If our initial method tells us that according to Journal A, villager C was born in 1824, but according to Journal B, the same villager was born in 1794, then we have falsified the method and need to refine it. On the other hand, if our refined methods consistently predict the correct age of journal entries for multiple journals, then we have good evidence that the model is reliable. In other words, imagine now that a new laboratory method allows you to obtain the journal entry date from damaged pages. If the laboratory results are consistent with the age you predicted for the entry, then your model is predictive and has great scientific value. If the combined methods produce the same history from each journal, then your method is also internally consistent. When it comes to historical science, the goal is to construct a model that is both predictive and internally consistent.

How does this apply to geology? Early on, geologists dealt primarily with relative dating methods in constructing the geologic timescale. One such method was biostratigraphy, which correlates rocks based on the types of fossils they contain (like using people mentioned in diary entries). [On a side note, it was not until the 17th century that scientists widely accepted that fossils came from once-living organisms. Sound crazy? Put yourself in the shoes of a Medieval/Classical scholar, and try to describe a process by which living matter can be turned into stone without sounding like an alchemist!] The work of Nicolas Steno was seminal to modern paleontology and stratigraphy, as he provided good evidence for the biological origin of fossils and suggested that the relative ages of rock layers could be estimated from stratigraphic relationships, namely: 1) sedimentary rock layers are younger than the rocks below them; 2) sedimentary rock layers were originally deposited horizontally and were laterally continuous; 3) rocks that cut through another body of rock are younger than the rock through which they cut. Within 200 years, geologists had applied the methods of Steno to rock layers around the world and constructed a rough geologic timescale. There was still one problem, however. Although the timescale predicted which rock layers and organisms were older or younger than others (the order of events), it could not attach a real date to any of them. Geologists had no way to obtain specific dates for any of the pages, so to speak, and thus lacked an absolute chronometer.

Calendar under construction

Early geologists attempted to estimate the age of rocks using known rates of sedimentation and extrapolating backward, but the method was limited and rested on too many assumptions about the continuity of the rock record (much like assuming a constant frequency and length of journal entries). By the mid-twentieth century, however, the discovery of radioactivity and isotopes had allowed scientists to formulate a method (radiometric dating) that could potentially assign the absolute ages for which they had so hoped.
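To see why the extrapolation was so fragile, here is a back-of-the-envelope version of the early approach: divide a measured thickness of strata by an assumed constant deposition rate. The numbers are illustrative, not measured values.

```python
def age_from_sediment(thickness_m, rate_mm_per_yr):
    """Extrapolated age in years, assuming a constant deposition rate
    and an unbroken record -- both assumptions routinely violated."""
    return thickness_m * 1000.0 / rate_mm_per_yr

# 500 m of strata at an assumed 1 mm/yr implies 500,000 years of
# deposition. A single unrecorded hiatus (like an unmentioned gap
# between journal entries) silently breaks the estimate.
print(age_from_sediment(500, 1.0))  # 500000.0
```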

And so they went to work. Thousands of radiometric dates were acquired using element pairs like potassium and argon, rubidium and strontium, and uranium and lead, in which a radioactive parent isotope decays at a known rate. Intrinsic to the method were several assumptions: a constant decay rate, known initial concentrations, a closed system, etc. In other words, geologists created a scientific model and applied it to the geologic timescale that relative methods had already constructed. The real test was whether the combined model was both predictive and internally consistent. Thus rocks from strata identified as Cambrian should yield radiometric dates older than rocks from Devonian strata, which should yield radiometric dates older than Triassic strata, and so forth. Furthermore, historic volcanic rocks (from eruptions that occurred within human history) should yield ages near zero.
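At its core, the method rests on a single equation: if a parent isotope P decays to daughter D with decay constant λ, the model age is t = (1/λ)·ln(1 + D/P). A short sketch, with the assumptions from the text spelled out in comments (the half-life below is illustrative, roughly that of uranium-235):

```python
import math

def model_age(daughter_parent_ratio, half_life_yr):
    """Radiometric model age: t = (1/lambda) * ln(1 + D/P).

    Assumes a constant decay rate, zero initial daughter, and a closed
    system -- the assumptions named in the text. Violate any of them
    and the number is still a model age, not an absolute one.
    """
    decay_const = math.log(2) / half_life_yr
    return math.log(1 + daughter_parent_ratio) / decay_const

# With D/P = 1, exactly one half-life has elapsed, whatever the pair.
# With D/P = 3, two half-lives have elapsed, and so on.
print(model_age(1.0, 704e6))  # approximately 7.04e8 years
```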

It is perhaps no surprise to you that results from the first decades of geochronology were very promising. In general, rocks predicted to be old yielded very old dates (e.g. Fairbairn et al., 1967; Welin et al., 1980), while rocks predicted or known to be young yielded rather young dates (e.g. Dalrymple, 1969). Furthermore, radiometric ages of meteorites clustered around 4.55 billion years (Patterson, 1956) – the age assigned to the Earth itself. By this point, a history of geologic events (major extinctions and appearances of certain organisms, ancient lava flows, etc.) had been constructed using relative dating methods, so geologists worked hard to assign accurate ages to events that could serve as time-markers in the geologic record. If, for example, scientists could measure the age of lava flows coincident with the Permo-Triassic extinction (the largest known extinction in Earth history) in one part of the world, they could assign the same age to rocks that recorded the fossil transition elsewhere. In the decades that followed, the bulk of radiometric dating results showed the modeled geologic timescale to be both predictive and internally consistent to a reasonable extent, but the model was by no means perfect. Some rocks yielded very different dates depending on the method used. Others yielded dates that were obviously too old (or too young) to be accurate (e.g. Brewer, 1969; Dalrymple, 1969). Early on, Pasteels (1968) summarized the radiometric dating methods then in use, and concluded with a rather prophetic exhortation:

“All methodological approaches to geological problems are interconnected. Geochronology as such does not exist; the interpretation of the results must take into account field, petrographic, geochemical, and geophysical evidences...It is hoped that the progress of interpretative geochronology will not be retarded, but that a clearer picture of many points presently debated will shortly emerge. However, when all difficulties of interpretation have been resolved, many fundamental questions...will also be resolved. The progress of geochronology depends on the progress of geology in general, but it may also contribute towards this general progress.” (emphasis added)

Making an “ASS” out of “U” and “ME”

Every scientific pursuit involves assumptions – this should come as no surprise. But the conclusions reached are only valid as long as the assumptions hold. When Lord Kelvin estimated the age of the Earth to be no more than ~24 million years, he assumed the Earth began as a sphere at a given temperature and cooled by losing heat through its surface, with no heat being added along the way. The discovery of radioactivity, however, showed that significant heat was being added to the Earth's interior, thereby invalidating his conclusion. Making assumptions in science is not a bad thing; rather, it is a necessity, and assumptions must be tested and verified just like the interpretations that follow from them.

A scientific model is only valid to the extent that it corresponds to reality. Gravitational theory predicts a constant downward acceleration for all objects near the Earth’s surface (~9.81 m/s^2). But what if I tried to prove the model wrong by measuring the acceleration of a dropped feather? Obviously the measured acceleration will be much lower than gravitational theory predicts, but I have done nothing to invalidate the model (that is, the model of how objects are predicted to respond to the force of gravity). The reason is that the model assumes no other force acting on the object (in this case, drag from air resistance) and therefore does not correspond to the physical conditions of my experiment. Similarly, when a geologist analyzes a rock to obtain a radiometric age, he/she does not consider the number to be an absolute age. Rather, it is a model age for when the rock/mineral was last at a given temperature. Thus inconsistent (discordant) ages do not necessarily invalidate the model (radiometric dating), which makes assumptions about the physical history of the rock/mineral being analyzed. When a geologist obtains an age that contradicts the broader model of geologic history, he/she must also verify the assumptions intrinsic to the model. Note that by this line of reasoning, radiometric dating methods do not prove the age of rocks, or of the Earth for that matter, any more than dropping rocks in a vacuum proves gravitational theory. Both attempt to construct an internally consistent model that explains the relevant data while making a set of assumptions about the universe.
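The feather example can even be checked numerically. Below is a toy simulation (the masses and drag coefficient are invented, and linear drag is itself a simplification) comparing the idealized model with one that includes a drag term:

```python
def fall(mass_kg, drag_coeff, t_end=2.0, dt=1e-4, g=9.81):
    """Euler-integrate dv/dt = g - (k/m) * v; return velocity at t_end.

    With drag_coeff = 0 this reduces to the idealized free-fall model;
    a nonzero drag_coeff adds the force the simple model omits.
    """
    v = 0.0
    for _ in range(int(t_end / dt)):
        v += (g - drag_coeff / mass_kg * v) * dt
    return v

v_rock = fall(mass_kg=1.0, drag_coeff=0.0)        # no drag: v = g * t
v_feather = fall(mass_kg=0.005, drag_coeff=0.02)  # drag-limited

# The feather's average acceleration (v / t) falls far below 9.81 m/s^2,
# yet gravitational theory was never violated -- only the assumption of
# "no other forces" failed to match the experiment.
print(v_rock / 2.0, v_feather / 2.0)
```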

Before you all run off as skeptics, I’ll let you in on a little secret: science doesn’t prove anything. The goal of scientific methods is to falsify hypotheses. Science is self-correcting in that hypotheses/models not corresponding to reality are frequently disproven, while models that explain reality very well are widely accepted. Yes, widely accepted models can be overturned and paradigm shifts commonly occur. Nonetheless, this happens through mounting scientific evidence against the prevailing model and in favor of a new one that better explains the data.

“All models are wrong, but some are useful”

By definition, scientific models are simplified representations of reality used to understand how things work. As such, they are not meant to be infallible in their predictions. Geological dating methods are scientific models used to interpret Earth history. Radiometric dating is the only method capable of yielding an “absolute age” (i.e. our calendar date) for the vast majority of Earth history, but geologists recognize it as a model that is ever being refined. The reason I have spent so much time discussing models and falsifiable hypotheses is that Dr. Reed seems to misunderstand this basic concept in his article, particularly when he claims that the assumption of deep time precludes dating methods from proving deep time (i.e. that certain rocks are many millions of years old). Furthermore, he criticizes the methods apart from their intrinsic assumptions, replaces them with his own assumptions about Earth history, and then pronounces the case closed. Finally, he misunderstands the use of multiple, overlapping dating methods in geology, believing that the need for multiple methods compounds the uncertainty and unreliability of individual methods rather than strengthening the model as a whole.

Take a step back to the ‘diary reconstruction’ analogy. Each approach to interpreting history from a single diary was riddled with uncertainty and relied on falsifiable assumptions. Yet when the approaches were combined, and shown to be internally consistent and predictive, the uncertainties in our reconstructed history were reduced and we could make a solid case for its accuracy. In the next article, I want to discuss uncertainties in individual dating methods and show that in a majority of cases, individual methods are consistent with and predictive of one another.

Pasteels (1968) was correct in his assessment that the development of geochronology would depend on advances in geology as a whole. New technologies, which allow geologists to analyze minerals on the micron scale, have greatly improved our understanding of the physics behind radioactive decay and the retention of daughter elements, thereby explaining many of the discrepancies early researchers had encountered. Better documentation and correlation of fossil species have increased the resolution at which we can investigate periods of Earth history. Advances in magnetostratigraphy (a technique that analyzes the alignment of magnetic minerals in rocks) and continued research from the Deep Sea Drilling Project have provided an additional link between the sedimentary and igneous rock records. Finally, studies in the field of chemostratigraphy (my own field) continue to provide some of the most important tests of all: 1) they verify key assumptions about the nature of the sedimentary and fossil records by providing evidence that these layers/fossils represent isochronous intervals of Earth history; 2) they test whether other methods can accurately predict the proper age of rocks around the world; 3) they allow us to identify and interpret paleoclimatic and paleoceanographic events in Earth history, such as changes in geochemical cycles and the composition of the ocean/atmosphere.

If Dr. Reed and other YECs want to dismiss these models or overturn them, they will have to provide a new, internally consistent model that better explains the range of data. So far, no such model exists.

References cited:

Brewer, M.S., 1969, Excess radiogenic argon in metamorphic micas from the eastern Alps, Austria: Earth and Planetary Science Letters, v. 6, p. 321-331.

Briden, J.C., Henthorn, D.I., Rex, D.C., 1971, Paleomagnetic and radiometric evidence for the age of the Freetown Igneous Complex, Sierra Leone: Earth and Planetary Science Letters, v. 12, p. 385-391.

Dalrymple, G.B., 1969, 40Ar/36Ar analyses of historic lava flows: Earth and Planetary Science Letters, v. 6, p. 47-55.

Fairbairn, H.W., Moorbath, S., Ramo, A.O., Pinson, W.H., Hurley, P.M., 1967, Rb-Sr age of granitic rocks of southeastern Massachusetts and the age of the lower Cambrian at Hoppin Hill: Earth and Planetary Science Letters, v. 2, p. 321-328.

Pasteels, P., 1968, A comparison of methods in geochronology: Earth Science Reviews, v. 4, p. 5-38.

Patterson, C., 1956, Age of meteorites and the Earth: Geochimica et Cosmochimica Acta, v. 10, p. 230-237.

Welin, E., Lundegårdh, P.H., Kähr, A.M., 1980, The radiometric age of a Proterozoic hyperite diabase in Värmland, western Sweden: Journal of the Geological Society of Sweden, v. 102, p. 49-52.