Exploring the wonders of geology in response to young-Earth claims...

Never been here? Please read my guidelines and background posts before proceeding!

Friday, December 31, 2010

С Новым Годом 2011! (Happy New Year 2011!)

Well, the end of the year came swiftly for me. I have essentially spent the last three days travelling halfway across the world. My mind is still having trouble with the fact that I experienced four sunrises and sunsets in only 72 hours' time, but I hope to recover shortly.

In retrospect, I have learned much in the course of my blogging adventure--and not only with regard to geology. The task was rather time consuming at times, but the outside interest and personal benefits were sufficient that I am happy to continue. My only resolution is to post a similar message one year from now.

Thanks for reading, and Happy New Year from Russia! This is a wonderful place, a most magical land, and I have a very loving new family with whom to spend the holiday.

С праздником! (Happy holidays!)

Tuesday, December 21, 2010

Science is more (and less) than you think!

When you hear the word “science,” what do you envision? Goggles and white lab coats? Mathematical formulas on a blackboard? A casually dressed couple in straw hats brushing away at the bones of a velociraptor? Very likely, your perception of what science is has been influenced (like mine) by childhood movies, comic strips, and/or the latest episode of Bones.

For this reason, my favorite part of teaching introductory geology laboratory sections was discussing the scientific method in the first lecture. Nearly every university requires students to take a science course with a lab, regardless of their major. Thus introductory sections are filled with students from business, the liberal arts, and the like, all wondering why it is necessary to learn something about science. With that in mind, I try to remind students that the purpose of a university curriculum is not merely to teach them ‘stuff’—the hardness of calcite, the definition of unconformity, the names of geologic periods—but how to think for themselves. So I am inspired by the Van Tillian couplet, “Every man can count, not every man can account for counting.”

Unfortunately, most people do not think scientifically or even logically, and the rhetoric of politicians and salesmen commonly banks on this fact. To take a Black Swan example, consider how easily the difference between “Most terrorists are Muslims” and “Most Muslims are terrorists” is overlooked by the public in discussions of foreign and domestic social policy. Or consider how often you have heard the words “science has proven”, “scholars say”, and “studies show that”, without any consideration of how it was proven or which scholars said it. Public discussion of scientific topics, especially when controversy is involved, is commonly littered with empty appeals to authority and ad hominem argumentation: “Well, biologist A, who works at prestigious university X, has concluded after years of research that Y; therefore, I trust his/her word over yours,” or “You mean to tell me that human inputs to the atmosphere are partially responsible for climate change? Don’t tell me how it works, just tell me whether you’re receiving grant money from the government!” These tactics may work well to convince a jury, but they do not constitute critical thinking.

Perhaps I should return to the original question and phrase it this way: is ‘science’ a noun or a verb? Is it something that is, or something that is done? I am not concerned here about dictionary definitions, semantic ranges, or etymology; rather, I want to elucidate the meaning of science in practice and its limitations. In other words, my goal is not to offer a comprehensive discussion of the scientific method throughout history, nor to lecture anyone about what I think science is. Instead, I simply want to show that science is the active pursuit of knowledge about the natural world, guided by an epistemological framework outside the realm of science itself.

My inspiration for this post comes from a recent article, by Roger Patterson at Answers in Genesis, entitled “What is science?” There, Mr. Patterson discusses the history of scientific thought, the difference between various types of scientific approaches, and how this relates to the study of Earth history. Since his aim is to defend the validity of Creation science and the Young-Earth interpretation of geological data, you may not find it surprising that I disagree with some of his comments and conclusions. However, I applaud his willingness to define the scope and methods of science from a young-Earth perspective, and would not dismiss the discussion wholesale. So I don’t expect to provide a rebuttal here so much as a discourse guided by the points he has already made.

The scientific method and categories of science

The scientific method is a process built around falsifying hypotheses, which are formulated from observations of the natural world (note: natural as opposed to supernatural or metaphysical; not as opposed to artificial). Let’s say you wanted to investigate the reason behind different yields from the same crop grown in two different regions. Your observations may include the actual crop yield, temperature and precipitation records, soil samples, etc., from which you can formulate a hypothesis such as: “Crop yield is a direct function of rainfall.” Sounds good, right?

While the explanation sounds plausible, especially if the region with higher yield receives significantly more precipitation (plants do need water for growth), it is by no means proven. Science is much more than building plausible-sounding arguments! One must first demonstrate a correlation between rainfall and crop yield by falsifying the null hypothesis: “Crop yield is not a function of rainfall.” The falsification may require more observations or a controlled experiment to obtain statistical significance, which means that some uncertainty is involved. Furthermore, a statistical correlation may be consistent with the original hypothesis, but it does not itself prove it. One must also falsify alternative explanations for the same phenomenon; in this case, the dependence of yield on nutrient availability, soil type, solar irradiance, etc. For the hypothesis to remain scientific, it must also remain predictive of new facts (such as crop yield in regions C, D, E, etc., for a given amount of rainfall in the respective regions).
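To make the crop example concrete, here is a minimal sketch of how one might test that null hypothesis. The Python snippet below uses made-up rainfall and yield figures purely for illustration; only the logic matters (test the null, and remember that a correlation does not, by itself, prove the causal story).

```python
# Illustrative sketch only: hypothetical rainfall (mm) and crop yield (t/ha) for several regions.
from scipy import stats

rainfall = [310, 420, 515, 600, 710, 820, 905]   # made-up growing-season rainfall
yields_t = [2.1, 2.6, 2.9, 3.4, 3.5, 4.0, 4.3]   # made-up yields, tonnes per hectare

# Null hypothesis: crop yield is NOT a function of rainfall (true slope = 0).
result = stats.linregress(rainfall, yields_t)
print(f"slope = {result.slope:.4f} t/ha per mm, p = {result.pvalue:.4f}")

if result.pvalue < 0.05:
    print("Null hypothesis rejected: yield correlates with rainfall.")
    print("This alone does not prove the causal hypothesis; nutrients, soil type,")
    print("irradiance, etc. must still be ruled out, and new regions must be predicted.")
else:
    print("Insufficient evidence to reject the null hypothesis.")
```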

When a scientific hypothesis can predict new data, rather than being falsified thereby, it is treated as true (i.e. proven), but only provisionally so. The reason is that scientific hypotheses can only address existing data and are potentially falsifiable. Furthermore, scientific hypotheses are built (dependent) on a range of other scientific theories, which themselves remain only provisionally true. In this sense, a scientific premise may be proven and accepted as true, without any claim of infallibility. Considering the contingent nature of scientific conclusions, one may be inclined toward skepticism. However, one should also remember that the scientific method is self-correcting, since hypotheses not corresponding to reality are quickly falsified when tested by multiple independent researchers.

Philosophers of science typically make some distinction between experimental and historical methods of science (if you recall, I discussed historical science at length in a previous article). Since science must address a wide range of phenomena (from molecular interactions to planetary motions; from modern world economies to human history; etc.), researchers may further refine the method according to their respective disciplines. Mr. Patterson describes the essential difference as follows:

“Operational science deals with testing and verifying ideas in the present and leads to the production of useful products like computers, cars, and satellites. Historical (origins) science involves interpreting evidence from the past and includes the models of evolution and special creation.”

Mr. Patterson's dichotomy is not entirely inaccurate, but overly simplistic. Also, his use of the word "useful" is peculiar; is this to say that historical science does not produce useful products? The reconstruction of ancient texts, including the Bible, is one example of a useful product of historical science that Mr. Patterson would appreciate. Furthermore, while portions of evolutionary theory remain under the domain of historical science (e.g. the morphological history of phylogenies), a majority of research is experimental (or "operational") in nature. Finally, we must ask whether the notion of "special creation", as defined by Mr. Patterson, even falls into the category of historical science. But first, consider his comments on the limitations of historical science:

"Recognizing that everyone has presuppositions that shape the way they interpret the evidence is an important step in realizing that historical science is not equal to operational science. Because no one was there to witness the past (except God), we must interpret it based on a set of starting assumptions."

Presuppositions play a major role in disciplines that are hermeneutic (interpretive), and are not always obvious. However, the presence of underlying presuppositions is not unique to historical science, and therefore is not a valid means by which to distinguish it from experimental science. Mr. Patterson is mistaken if he believes that facts currently observed are any less "interpreted" than historical facts. To cite another Van Tillian couplet, "Brute facts are mute facts." Experimental science relies on a number of epistemological and metaphysical assumptions (the uniformity of nature, the reliability of the senses, the nature of causality) and is dependent on potentially falsifiable scientific theories. The fundamental differences between historical and experimental science are 1) the method by which observations are made and 2) the availability of data to test hypotheses. In the historical sciences, nature has already set up the experiment, and data are thereby limited. Moreover, visual observation in person is not the only way to "witness the past." Mr. Patterson continues:

"Creationists and evolutionists have the same evidence; they just interpret it within a different framework. Evolution denies the role of God in the universe, and creation accepts His eyewitness account—the Bible—as the foundation for arriving at a correct understanding of the universe."

This is a point on which I sincerely disagree, and I think it is an unfortunate caricature that only promulgates the misguided and unnecessary dichotomy between "Old-Earth Naturalism" and "Young-Earth Christianity", and between science and religion in general. First, unraveling the message of the Bible (especially as it pertains to history) is a matter of exegesis, which is in itself a hermeneutical science. Mr. Patterson or anyone else can argue for the validity of their reading of scripture above all others, but it is unfair and inaccurate to state that an acceptance of the Bible as God's witness precludes evolutionary theory (or any scientific theory, for that matter) from the interpretive framework. That assertion is a working hypothesis in competition with others, and is contingent upon the facts of linguistic theory, textual criticism, etc. As such, it is also potentially falsifiable. Second, the theory of evolution does not deny the role of God in the universe. Science operates under methodological naturalism, which means that it can only investigate natural explanations for natural phenomena. By definition, the act of special creation (if defined as the sudden appearance or organization of matter by supernatural forces) is excluded from direct scientific investigation. Science does not, by definition, deny its truth, but is rather, by definition, silent on the matter. On the other hand, one can produce testable hypotheses in biology, geology, astronomy, etc. given a starting belief in special creation and a young Earth. In this sense, science could investigate the issue indirectly. Thus it is inaccurate to say "the denial of supernatural events limits the depth of understanding that science can have and the types of questions science can ask," as Mr. Patterson asserts later. Starting with a belief in a God who providentially oversees the natural world does not change the scope or nature of scientific questions we can ask, since science is still limited by methodological naturalism. Theoretically, science could determine that all modern species appeared abruptly within the last 10,000 years, but science would still be silent on whether one God or millions of gods were responsible, and the personal character thereof. Taking an example from Mr. Patterson, let's consider this in practice:

"Even if the amazingly intricate structure of flagella in bacteria appears so complex that it must have a designer, naturalistic science cannot accept that idea because it falls outside the realm of naturalism/materialism."

Mr. Patterson reflects a common sentiment, which provides a powerful talking point in the discussion. At first, it appears the categorical limits of science prevent us from an unbiased assessment of nature. However, intrinsic to his argument is the premise that at some degree of observed complexity in organisms, we must conclude that the organism could not have arisen through "natural" processes. But how do we define that level of complexity? Some authors have made a case for biological features that are irreducibly complex, but keep in mind two things: 1) the identification of features as irreducibly complex is contingent on the existing data, and is potentially falsifiable in light of new observations; 2) even if the label can withstand new observations, it does not logically follow that the feature "must have a designer"; rather, we would only establish that to date, no known natural process can account for this feature. Remember that all science, regardless of one's philosophical commitments, is methodologically naturalistic (i.e. excludes supernatural explanations in practice). Thus "naturalistic science"—that is, science practiced by one who is a naturalist/materialist, according to Mr. Patterson—is not alone in excluding such a conclusion from the scientific investigation.

On what is natural

Before I sound as though I am contradicting myself, I want to clarify why I am comfortable, as a Christian, excluding design arguments from science. If one believes that a personal God is responsible for the existence of time, matter, and space, then it follows that everything is designed in the sense that every material instance has a purpose. In other words, the teleological principle is part of our a priori philosophical commitment to theism. As such, it cannot be the object of scientific investigation, which itself is built on principles of philosophy. Science cannot demonstrate design in nature any more than it can demonstrate the uniformity of natural laws; both are preconditions for knowledge about the natural world, while the former is unique to theistic worldviews.

My advice to Mr. Patterson, and anyone who supports the Intelligent Design (ID) movement, is to focus on exploring God's creation without attempting to redefine the scope of the scientific method. With the exception of Dembski's work, much of the ID movement's interaction with the public is somewhat misguided, and only results in equally misguided responses from critics of theism, such as Dawkins' examples of "bad design" (note: the word "intelligent" in ID is not meant to be contrasted with "stupid", but simply with "non-intelligent"; examples of "bad design" from Dawkins and others constitute interesting facts about nature, but are wasted efforts as arguments against ID).

Falsification and scientific theories

I mentioned earlier that the scientific method is built around falsification of hypotheses. The work of Karl Popper (and his critics) with regard to science as falsification has remained canonical across scientific disciplines. He argued that testability (the ability to be proven wrong) is the key criterion for calling a study scientific. However, defining the criteria by which a hypothesis can be falsified is not always a simple, straightforward process. Most scientific theories/hypotheses have been modified numerous times in response to contrary evidence from previous experiments. Granted, this typically results in the 'self-correcting' aspect of science and a refinement of good scientific theories, but it also shows how bad theories can live beyond their years if supported by a stubborn, false paradigm (e.g. consider Kuhn's discussion of scientific revolutions). So how does this relate to the creation/evolution controversy? Mr. Patterson writes:

"Scientific theories must be testable and capable of being proven false. Neither evolution nor biblical creation qualifies as a scientific theory in this sense, because each deals with historical events that cannot be repeated. Both evolution and creation are based on unobserved assumptions about past events."

The fact that past events, such as the appearance of new species, cannot be repeated does not disqualify a theory from being scientific. When anthropologists/archaeologists excavate an ancient city, the response is hardly "Well, time to leave science at the door. Put on your guesswork hats!" The reason is that hypotheses about past events can be tested (i.e. falsified) by remaining evidence. In the case of evolution, there are numerous ways to falsify the theory: demonstrate that species share no vestigial remnants from a common lineage; demonstrate that all species appeared abruptly and coincidentally; demonstrate the existence of a predicted descendant taxon long before its predicted ancestral taxon (e.g. finding birds and other theropods in the rock record long before the earliest archosaurs, the group from which they are predicted to descend). By the same line of reasoning, Mr. Patterson's interpretation of the creation story can be tested scientifically, in that one can seek to falsify the hypothesis of a global flood, abrupt and distinct appearance of species/genera, and more. In fact, that is one major goal of my blog: to consider whether predictions stemming from the young-Earth model have not already been falsified. Theological details of the young-Earth model lie outside the scope of scientific investigation; historical events associated therewith, however, do not.

"Allowing only evolutionary teaching in public schools promotes an atheistic worldview, just as much as teaching only creation would promote a theistic worldview. Students are indoctrinated to believe they are meaningless products of evolution and that no God exists to whom they are accountable. Life on earth was either created or it developed in some progressive manner; there are no other alternatives. While there are many versions of both creation and evolution, both cannot be true." (emphasis added)

Years ago, I would have agreed with much of this statement. I now realize that the false dichotomy arises from an inability to properly define science. Mr. Patterson and others defending a young-Earth position have actually narrowed the scope of science—contrary to his complaint that materialistic science limits the depth of scientific inquiry—to the point that the validity of most science becomes dependent on a critique of supposed religious commitments (i.e. atheism) rather than its coherency and corroboration with past evidence. Unfortunately, this works in favor of Mr. Patterson among the general public, much of which is already suspicious of biological evolution. Notice how easily he connects the evolutionary origin of man with meaninglessness and moral relativism. Is the connection a logical necessity? Two supposedly non-existent alternatives would be 1) a God who created all species without moral purpose and holds none of them morally accountable; or 2) a God who created all of history with purpose—lifeforms developed progressively over time, and the 'natural processes' reflect His handiwork/artistry—and holds a part of His creation morally accountable. Mr. Patterson's assessment is thus riddled with gratuitous assertions; primarily, that 'creation' must be an instantaneous event that occurs contrary to the laws of nature as we know them.

Uniformitarianism: a principle of geology; not 'that other church' around the corner

A longer discussion of uniformitarianism is warranted at some time, but I will conclude here with a few comments on Mr. Patterson's claims (for those interested in a detailed discussion of uniformitarianism, I strongly suggest reading Davis A. Young's chapter in The Bible, Rocks, and Time). I am sure that all of you are familiar with the basics of historical investigation, namely that we can use present facts to interpret past events. This applies to geology as well as human history, the former of which is built on the principle of uniformitarianism. In addition to the assumption that physical constants and laws (e.g. the speed of light, gravity, etc.) were unchanged through history, uniformitarianism is basically just an extension of Occam's Razor, which states that complexity should not be posited without necessity. In other words, we interpret past geological events in light of known, modern events, unless there is evidence to the contrary. With that in mind, consider Mr. Patterson's assessment:

"Evolution also relies heavily on the assumption of uniformitarianism—a belief that the present is the key to the past. According to uniformitarians, the processes in the universe have been occurring at a relatively constant rate. One of these processes is the rate of rock formation and erosion. If rocks form or erode at a certain rate in the present, uniformitarians believe that they must have always formed or eroded at nearly the same rate."

By "evolution", Mr. Patterson is also referring to geologists that reject the notion of a young Earth. It is true that the principle of uniformitarianism has commonly been summarized as "the present is the key to the past", but the description is not exhaustive and Mr. Patterson takes advantage of this fact. He is mistaken in saying that "uniformitarians" believe modern processes have ensued at a "relatively constant rate." Such is a caricature, since no modern geologist would state this a priori. Rates of erosion and rock formation, for example, are determined by geological evidences. Evidence is collected by testing hypotheses generated from observations and/or scientific models of the process. Rates of sedimentation are never simply assumed to be slow or fast. One only needs to search publications on sedimentology and stratigraphy to see that interpreted rates of deposition range over several orders of magnitude (consider the difference between sediments accumulating 1) onto the deep ocean floor, 2) the Mississippi River floodplain, and 3) near the continental slope by means of tectonically driven landslides). However, Mr. Patterson seems to think that catastrophes like Noah's flood are excluded from scientific investigations because of philosophical commitments. On the contrary, many such catastrophes have been interpreted from the geological record and are widely accepted (large-scale floods, meteor impacts, massive lava flows and landslides, etc.). The problem is that most sedimentary layers do in fact show good evidence for slow deposition. He states, "Noah’s Flood, for example, would have devastated the face of the earth and created a landscape of billions of dead things buried in layers of rock, which is exactly what we see." Any geologist would agree that Noah's flood might be expected to leave layers of fossiliferous rocks. Detailed examination of those fossiliferous rock layers reveals they were not the consequence of multiple stages of a single, short-lived event, however, but millions of events over millions of years.

Concluding remarks

The philosophy of science is a difficult subject, since the criteria by which a theory may be deemed scientific are open for discussion. One of the most fundamental and stable of these criteria is testability, or falsification. Mr. Patterson agrees with this criterion, but attempts to distinguish historical science from "operational science" to the extent that he may subject evolutionary theory and historical geology to unwarranted skepticism among his audience. In doing so, he undermines the validity of other historical inquiries, such as the textual transmission of the Bible and historical reality of the New Testament referents, which are undoubtedly important to Mr. Patterson's (and my own) worldview. A faithful application of the scientific method does not render the works of God silent, but results in an efficient, self-correcting means of exploring the details of His masterpiece. I would compare this to the relationship between an artist's mind and the painting, the latter of which was created through a variety of physical processes. One may examine the character of brush strokes, chemistry of the paint, geometry of objects, etc. to determine how the picture was made without consideration of why. An art critic may still ask the "why" questions, but through a very different method.

Science is the active pursuit of knowledge about the natural world. As such, it is methodologically naturalistic, and cannot speak to facts outside the realm of empirical observation. However, the scientific method is one epistemological method among others in the grand scheme of philosophy, and therein rests on principles not subject to scientific inquiry. This categorical distinction should humble the scientific researcher, who, if ignorant of such, is but "a man with his feet firmly planted in midair," to cite the words of Schaeffer.

Wednesday, December 8, 2010

Theological implications of an old Earth: Doesn't Scripture have a voice?

When I began this blog, I planned to focus on topics in geology. More specifically, I aimed to clarify whether Answers in Genesis (AiG) offered a valid position on the geological history of the Earth. I have not hidden my position on the answer to this question: no, I do not believe that Flood geology offers a viable interpretation of the rock record. Of course, I will continue to elucidate my reasoning as I consider scientific propositions from AiG's article database, but I feel that I should restate my reasoning behind the focus on geology and at least take some opportunity to explain my theological position on what it means, as a Christian, to believe in an old Earth.

Why such a narrow approach to a broad controversy?

Simply put, I am a geologist by training and by practice. I am happy to discuss other topics (say, biology?) but I think it more appropriate to address questions to which I can speak with some experience. I don't perceive myself to be ignorant of the other sciences (biology, astronomy, history, archaeology) or of theology and Biblical interpretation, but as I've mentioned, I believe others have articulated my position far more eloquently than I could here. That being said, I feel it is still important to add a personal touch to this blog—namely, how does one approach Scripture while maintaining belief in an old Earth? So I will commit at least one post per month to answering this question. Below, I have articulated what I think is a key introductory question for both Christians and non-Christians.

Does it really matter what a Christian believes about the age of the Earth or the rock record?

No, but yes. (I'll come back to my answer.) If you pose the same question to a researcher at AiG, the answer would be emphatically yes, and that the gospel is intimately connected to belief in a young Earth. Their reasoning is rooted in a defense of Scripture as God's word, which should not be compromised in the face of an external authority. Though I respect their starting point and admire their zeal, I sincerely believe their conclusions not only to be erroneous but potentially dangerous to evangelicalism. First, despite the majority position over the history of Christian thought, I do not believe a faithful interpretation of Scripture demands a young Earth. While I expect to expound on my claim in time, it is worth noting here that many Christians have maintained orthodoxy (i.e. presentation of the gospel without compromise; Biblical inerrancy; historicity of Genesis) while believing in an old Earth. Whether or not you agree with their hermeneutic, it would be unfair to claim that AiG offers the only distinctly Christian understanding of Genesis. Secondly, nobody is free from extrabiblical influence when interpreting Scripture (Genesis in particular). AiG regularly employs studies of grammar, history, archaeology, and even science, to refine their understanding of each verse (e.g., the meaning of the word 'firmament', or even 'day').

Third, and most importantly, I feel that AiG has produced a false dichotomy between 'young-Earth Christianity' and 'old-Earth naturalism'. So thorough is their association that most people can no longer separate the modifier from the respective belief. Moreover, any belief that falls on the 'middle-ground' is deemed rather hypocritical. But how is this dangerous? I would propose two scenarios (granted I am not the first to do so):

1) A young Christian is taught that faithful adherence to God's word informs us that the world is quite young (less than 10,000 years or so). The world was overcome by catastrophe and repopulated even more recently (4,000–6,000 years ago). Furthermore, the Christian is taught that science reveals vast evidence for this historical account, thereby offering positive reason to believe God's word. He/she is eager to explore the science behind this evidence and promptly chooses a related degree path. However, as the student progresses, he/she discovers that the evidence was never there—science does not support a young Earth. The student's faith is challenged, and he/she may even feel deceived by those in whom he/she confided spiritually. But the dichotomy has never left his/her mind, and so two choices appear: "If science supports an old Earth, then Christianity must be false; but if I maintain my faith, then science must be mistaken." In rejecting one, he/she rejects the other; an awkward silence characterizes his/her life.

2) An unbeliever is met with the challenge of the gospel—perhaps an acquaintance has shared the message with him/her, or he/she has embarked on a self-motivated search for meaning—and comes across the ministry of AiG. Immediately, he/she perceives that an acceptance of Christ would require him/her to believe something that seems obviously false: that the world is less than 10,000 years old and a great Flood once rearranged the planet. The skeptic is unwilling to pursue the religious/philosophical issue further and feels certain that he/she has justifiably rejected Christianity. Though I would never advocate compromising the gospel to make it appear more attractive to unbelievers, one should consider the effect of AiG's dichotomy on non-Christians. Is this a necessary stumbling block?

Back to my answer.

I mean 'no' in the sense that I believe the age of the Earth, biological evolution, etc. are tertiary issues in Christianity. Though important, they are not fundamental to the principles of the gospel and should not be bound to the conscience of the believer (or potential believer).

I mean 'yes' in the sense that according to Christianity, humans are to be stewards of the Earth. Moreover, we are called to know God, both through His word and His creation. I believe that an honest application of our God-given tools of knowledge to His creation results in the discovery that Earth is far older than we had previously thought. No Christian should be scared of the truth, even when it challenges our traditional notions of God and His creation. Rather, we should rejoice, and be glad in it.


Theological implications of an old Earth

Perhaps you feel awkward, offended, or put off in some way by the possibility of belief in an old Earth. Maybe it is downright scary and makes you feel skeptical about God's word altogether? If so, I hope that you would bear with me as I continue posting, and especially that you would not hesitate to contact me about specific concerns you have. In the meantime, I will briefly address some common questions below.

How can I believe that the universe began billions of years before humans were ever on the scene? Don't such long ages diminish God's purpose in creation and redemption?

If you've asked this question before, then you are not alone. From personal experience, and discussion with friends, I know the thought experiment can be, well, daunting. The concept of "deep time" is difficult for most geologists to grasp, let alone for Christians trying to reconcile their faith with the word of academia. So to answer, I would direct you to the third question in the OPC's First Catechism for children, which reads: "Why did God make you and all things?"

How would you answer? Was he lonely, bored, or even cynical? I find the OPC's succinct answer to be most Biblical: "For His own glory." God's creation is not about us; it's about Him. While His relationship to mankind—covenants, providence over the nations, etc.—is integral to redemptive history, it is futile and foolish to question His methods of bringing about history (and prehistory). What is the point of cosmic history without mankind? The glory of God—learn it, love it, and praise Him for it.

I like to think of it this way. For much of the pre-modern era, humans believed the universe to be quite small, not extending far beyond our atmosphere and certainly not beyond our solar system (even the stars were perceived as no more distant than our sun). Technological advances of the Renaissance and early modern era introduced a revolutionary notion: the universe is far bigger than we could imagine. Even in the 21st century, our universe continues to expand (in our perception, that is). But in discovering how tiny we and "our" planet really are, does our view of God likewise shrink? On the contrary, we are all the more amazed by Him Who framed it. So in discovering that time is equally large, and that "our history" is only a pixel of the big picture, how should we respond in our view of God?

Doesn't the creation account suggest there was no death before Adam's fall? Yet long ages contradict this notion.

Many authors have considered this question before (e.g. here, or here for a young-Earth perspective), and I would exhort you to consider the resources available. The notion that no death (including animal death) occurred before Adam's sin is rooted in three premises: 1) death would contradict the notion of a "good" creation; 2) death is the explicit consequence named for Adam's initial rebellion; 3) Genesis 1:29-30 commissions all the animals to have plants for their food.

With regard to the first point, I simply don't think God's description of creation as 'good' precludes animal death. The wisdom literature (especially the Psalms) conveys a deep sense of purpose behind the predator/prey relationship, and there is no sense that the natural death of animals is the result of corruption. When God's judgment against nations is described in poetic rhetoric, it is typically characterized by an 'undoing' of the universe's natural cycles (e.g., stars fall from the sky, the sun is darkened). Chaos in the natural order of animals (consider the plagues of Egypt) is thereby associated with uncreation. Returning to Genesis 1:29-30, God commissioned the beasts to eat plants, but it would be an argument from silence to interpret this as "plants alone for every animal." Rather, the author of Genesis has assigned a simple, generalized purpose for each tier of creation. Obviously, the description is not exhaustive (plants are not only meant to be eaten) so it is premature to exclude carnivorous activity from the original creation.

The young-Earth interpretation is further complicated by our own classification of the animal kingdom versus that of the ancient Hebrew. What constitutes an animal/beast? Are insects and krill shrimp included, and if so, what did small reptiles, bats, and whales eat? Furthermore, we understand today that plants are living organisms with reproductive cycles, digestive systems, etc. (we even share some DNA). Therefore, in our modern understanding of death, something died before the Fall, so how do we define the cutoff and why?

All previous points aside, the real question is theological. God promised Adam that he would die in the day that he ate of the fruit. Adam ate of the fruit, yet didn't die. Do we thus misunderstand the word "day" or the word "death"? Various commentators have argued for ambiguity in either term: "day" refers to the post-Fall period; or "death" simply refers to a spiritual death, rather than physical. I would opt for neither, and suggest a simpler reading of the text. First, the uniqueness of Adam's death is not that it would represent the first case of a living organism ceasing to function (consider the microbes in Adam's digestive tract as he ate the fruit), but rather that it represented God's wrath for sin (Rom. 6:23). Adam entered into covenant with God, who demanded perfect obedience in the communicative state. When Adam forsook the covenant, man's relationship with God indeed changed as he became at enmity with God (call this "spiritual death"; Rom. 8:5-8), but we should not forget that a death did occur that day. "And the LORD God made for Adam and for his wife garments of skins and clothed them." (Gen. 3:21) This is the most basic principle of the gospel, echoed also in Romans 6:23. Most importantly, it is not compromised in any way by an old-Earth understanding. Not only do we obtain a more consistent criterion of what constitutes "good" in creation—an intricate, functioning natural order—but we also have a more precise understanding of God's covenant, wrath, and mercy.

Alright, maybe science suggests an old Earth, but there is no 'gap' in Genesis 1, and a day is a day!

I am happy to agree on these points. In short, I reject the 'Gap hypothesis' and 'Day-Age Theory' on basic exegetical grounds. The creation account is continuous and there is no reason to interpret the days in a purely metaphorical sense. At the same time, I reject the young-Earth interpretation on both scientific and exegetical grounds. Briefly stated, the seven-day structure of Genesis 1 breaks down the creative activity of God, who Himself made time. Thus the author is describing the work week of God. It makes no sense to argue over lengths of time, or physical frames of reference, when it comes to the individual days of Genesis 1. In doing so, we completely miss the point of the text and bind ourselves to unnecessary (and false) premises in our scientific investigation of the universe.

God uses the creation account as a model for our own work week (this one is obvious), but also for the Sabbath years and Jubilee. The analogy holds in all three cases if we take Genesis 1 to represent God's perspective. Otherwise, we are left to ponder silly (and unnecessary) questions like: "How could there be evening and morning (or plants) without the sun?", or "What was God doing on the eighth day?" Despite AiG's insistence on binding each day (and God Himself) to an Earthly timescale, a more parsimonious understanding of the text suggests that the specific chronology and age of the Earth are not addressed in Genesis 1.


So what is the point of Genesis 1 if not to chronicle the Earth's origin?

Quite simply, the author retells the story of creation to make a point about God and the universe; in other words, he uses a historical referent (the fact that the universe had a beginning and owes its existence to God) to make a theological point (there is one God responsible, and His creative work is complete) about the universe (all natural phenomena have a function and purpose; man is in covenant with God and accountable to Him). There are numerous references available that further explore the theological details of Genesis 1 without concern for nuclear processes, relativity, vapor canopies, and other anachronisms. Unfortunately, I did not have knowledge of such works when I first encountered the young-Earth position in my youth. While I plan to expound on this topic further, I will say for now that I have found the exegesis of Answers in Genesis to be remarkably shallow, and often misguided. I hope that you would trust my recommendation to discover this for yourself.

Concluding remarks

I sincerely hope that if you are reading this as a young-Earth Christian, you would consider my reasoning and exhortation with comparable sincerity. Conversely, if you are a non-Christian who has associated Christianity with belief in a young Earth, I hope that you would reconsider the connection. I look forward to expounding on more theological topics that arise from the question of Earth's age and history, and would appreciate any feedback or suggestions.

Thursday, December 2, 2010

Methods to Dr. John K. Reed's Madness: Deconstruction and the Geologic Timescale, Part 2

Last week, I briefly discussed historical approaches in science and how this applies to geologic dating methods – that is, how do geologists assign ages to a given rock? My goal was to provide a basic understanding of scientific models in general, noting that the scientific method is used to falsify hypotheses and assumptions intrinsic to those models. Thus the scientific method can be used to discard models that don’t represent reality, while refining (and providing evidence for) models that do represent reality. At this point, I want to more specifically address the points made by Dr. John K. Reed in his Creation Research Society Quarterly article found here (downloads PDF file). I’ll divide my comments into three sections, dealing first with his comments on the geologic column, second with his comments on specific dating methods, and third with my own thoughts on the strength of the geologic time scale.

Dr. Reed’s presentation of the geologic column


By way of preface, the bulk of Dr. Reed’s reference material is taken from the book A Geologic Time Scale by Gradstein, Ogg, and Smith (2004). If you are looking for a detailed explanation of how the geologic timescale is constructed, this is the authoritative work (and anyone with access to a university library can find it). However, I suspect that Dr. Reed has not spent much time with primary research in the fields of stratigraphy or geochronology. The reason is that Dr. Reed constructs a series of strawman arguments against the methods employed to construct the timescale (perhaps unintentionally?) and relies heavily on irrelevant citations in the text to give the impression that Gradstein et al. might even agree with his critique. My intention, however, is not simply to accuse Dr. Reed of dishonesty or incompetence. Rather, my goal is to exhort any serious reader to take advantage of the widely available reference, and decide whether he has accurately represented it.

Promoting Naturalism?
In his introductory sentence, Dr. Reed asserts that scientists use the geologic timescale as a means to promote their philosophical disposition to naturalism. I would point out, however, that this accusation is no more meaningful than accusing Dr. Reed (or any YEC) of using the rock record to promote his/her predisposition to a so-called Biblical model of history. Scientists from a range of philosophical (and religious) backgrounds have constructed the geologic timescale through a variety of scientific methods. Whether you agree with the validity of these methods is not relevant; the point is that scientists have long worked together to reconstruct Earth history and subsequently interpret the philosophical implications of that history within their respective worldviews. Many of the earliest attempts to construct a geologic timescale were made by Christians, some of whom proposed ages for rock formations much older than had previously been assumed (e.g. Nicolas Steno; or Thomas Chalmers, who fully expected that young-Earth models would disappear within decades). Uniformitarian principles of geology were in place long before Darwin’s biological theories were articulated, let alone widely accepted. Most early biostratigraphers (scientists who correlate rocks based on fossils) rejected Darwin’s theories on the origin of species, despite their own predispositions to naturalism. Notwithstanding accusations by Dr. Reed and other YECs, most Christian geologists have been comfortable interpreting the rock record as a reliable proxy for Earth history (including the evolutionary development of life), recognizing that in itself, the reality of the geologic timescale cannot speak to philosophical commitments that underlie our investigation of nature. Granted, if the evidence pointed to a very young Earth, pure naturalists would face a greater challenge in accounting for life’s origin and development, but a young Earth in itself does not preclude naturalism. This accusation is a category error on the part of Dr. Reed.

Modern stratigraphy and absolute chronometers
Only a couple of sentences later, Dr. Reed introduces a red herring to the discussion by citing Gradstein et al. (2004, p. 3), who note that “the chronostratigraphic scale is an agreed convention, whereas its calibration to linear time is a matter for discovery or estimation.” Apparently, Dr. Reed understands this to mean that scientists no longer empirically investigate the chronostratigraphic scale (or never did?), but rather ‘fit’ the facts by means of ‘convention’ into their preconceived template of Earth history, and he uses the citation to cast doubt on the methodology of stratigraphers. If you’re confused by the terminology, let’s take a quick detour. The chronostratigraphic scale refers to the relative ages assigned to rocks using the methods I discussed last week. For example, we assume that a rock layer is younger than underlying rock layers. Furthermore, the consistent order of fossils is used to group rocks into Periods, such as the Cambrian, Ordovician, Silurian, etc. One does not need a degree in geology to understand how Dr. Reed has abused the citation (but it helps to read the full paragraph preceding the citation). In saying that the chronostratigraphic scale is “an agreed convention”, Gradstein et al. (2004) have only described how scientists have assigned labels to each interval in the rock record, not how they determined the order. Through empirical investigation, scientists have documented the succession of brachiopod fossils throughout the rock record, for example, but assigning a categorical cutoff (such as Cambrian brachiopods versus Ordovician brachiopods) is an agreed convention. In other words, scientists cannot, by definition, ‘discover’ that the Ordovician period actually preceded the Cambrian period any more than one could ‘discover’ that the Egyptian Middle Kingdom actually preceded the Old Kingdom! One could, however, propose new calendar dates for the range of each Kingdom through empirical investigation, just as geologists can propose new ‘calendar dates’ for the Cambrian-Ordovician boundary if the evidence demands it.
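If it helps, here is a toy illustration (in Python, with rounded boundary ages given only for the sake of the example) of the distinction Gradstein et al. are making: the order of the named periods is fixed by the observed succession of strata and fossils, while the numeric boundary ages are calibrations open to revision.

```python
# Toy illustration only. The order of the periods reflects the observed succession;
# the boundary ages (rounded here) are calibrations that new data can refine.
periods_in_order = ["Cambrian", "Ordovician", "Silurian", "Devonian"]
base_age_ma = {"Cambrian": 541, "Ordovician": 485, "Silurian": 444, "Devonian": 419}

# A new radiometric study might shift a calibration...
base_age_ma["Cambrian"] = 539

# ...but it cannot 'discover' that the Ordovician preceded the Cambrian, any more
# than a new carbon date could make the Middle Kingdom precede the Old Kingdom.
for name in periods_in_order:
    print(f"{name}: base ~{base_age_ma[name]} Ma")
```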

Within the introduction, Dr. Reed also asserts that the geological timescale lacks an absolute chronometer, which would mean that scientists have no way of assigning absolute ages to rocks. As I mentioned last week, no scientist believes that we can determine absolute ages of rocks, but rather that we have a working scientific model for estimating those ages. The difference is subtle, but important – can you pick it out? As with any scientific model, assumptions are made, but the progress of geochronology has only refined the respective assumptions and improved our confidence in the ages now assigned. Dr. Reed is correct in noting that dating methods “exhibit uncertainty, and...assume rather than prove deep time,” but it is unclear why this is relevant to the discussion. If a coroner examines a corpse and estimates the age of the person to be 85 years at death, his/her method not only exhibits uncertainty but also assumes the reality of the last 85 years. Likewise, when scientists estimate that Codex Sinaiticus (the oldest complete copy of the New Testament) was compiled in ~325 A.D., their methods exhibit uncertainty and assume the reality of the past 1700 years. Neither case precludes the dating method from adding meaningful information to the discourse, however. Epistemological principles underlying historical scientific methods are by no means unimportant, but simply citing such principles as reason to dismiss the results constitutes yet another category error on the part of Dr. Reed. When a geologist obtains a radiometric date of 500 million years, it is understood that the method makes assumptions about the physical history of the rock and the reality of the past 500 million years. But why does Dr. Reed think the geologist has made an error in assuming deep time? Because of a particular understanding of Scripture – an understanding that is rooted, no less, in the principles of Hebrew grammar and syntax, textual criticism, and hermeneutics.
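For readers wondering what a "model age" actually looks like, here is a minimal sketch of the standard decay equation behind such dates. The isotope amounts are invented for illustration; the comments flag the assumptions that must hold for the model age to correspond to the real age.

```python
import math

def model_age_years(parent_atoms, daughter_atoms, half_life_years):
    """Standard radiometric model age: t = (1/lambda) * ln(1 + D/P).

    The model assumes 1) no radiogenic daughter was present at crystallization and
    2) the system stayed closed to gain or loss of parent and daughter since then.
    These are the assumptions geologists test, refine, or work around.
    """
    decay_constant = math.log(2) / half_life_years
    return math.log(1.0 + daughter_atoms / parent_atoms) / decay_constant

# Invented numbers: a mineral with ~7.5 radiogenic Pb-206 atoms per 92.5 U-238 atoms.
age = model_age_years(parent_atoms=92.5, daughter_atoms=7.5,
                      half_life_years=4.468e9)    # U-238 half-life, ~4.468 billion years
print(f"model age ~ {age / 1e6:.0f} million years")   # roughly 500 million years
```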


A multiplicity of methods
If you recall the analogy I made to reconstructing history from a set of tattered diaries, I attempted to show how a multiplicity of dating methods can provide internal checks, verify or falsify key assumptions, and improve the overall resolution of the model. In geology, the case is no different, yet Dr. Reed claims that “the need to bounce back and forth from one method to another reveals the fundamental lack of a consistent ‘clock’ against which the rocks can be calibrated.” It is unclear what Dr. Reed means by “the need to bounce back and forth” between methods — perhaps he is referring to the fact that not every method can be applied to every rock? — but it does not logically follow that no reliable ‘clock’ exists. While radiometric dating methods have been refined (or replaced) over the years, this hardly constitutes “repeated failures” that undermine the geologic timescale. Moreover, it is a caricature for Dr. Reed to imply scientists deemed these methods “infallible” or proclaimed them as “scientific gospel.” On the contrary, inconsistencies in radiometric dates have only improved our understanding of the respective methods. For example, when historic lava flows yielded anomalously old Potassium-Argon (K-Ar) dates (e.g. Dalrymple, 1969), the assumption that all argon should be excluded during crystallization was falsified. Advances in scanning electron microscopy (SEM) and electron microprobe analysis revealed compositional zonation within individual minerals that were previously assumed to be homogeneous. Such technological advances were seminal to the development of the more accurate and consistent 40Ar/39Ar method, upon which the modern geologic timescale now heavily relies. However, one should never forget that radiometric ages will always represent model ages, and thus are subject to change in the case that our improved understanding of geology falsifies the underlying assumptions.
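As a concrete illustration of the excess-argon problem just mentioned, here is a small sketch of the conventional K-Ar age equation and of how a trace of argon inherited from the magma makes an essentially zero-age lava yield a finite apparent age. The decay constants are conventional published values; the sample amounts are made up for illustration.

```python
import math

LAMBDA_TOTAL = 5.543e-10   # total decay constant of K-40, per year (conventional value)
LAMBDA_EC    = 0.581e-10   # electron-capture branch, the fraction that becomes Ar-40

def k_ar_age_years(ar40_measured, k40):
    """Conventional K-Ar age: assumes ALL measured Ar-40 is radiogenic, i.e. that
    the lava degassed completely at eruption and stayed closed afterwards."""
    return math.log(1.0 + (LAMBDA_TOTAL / LAMBDA_EC) * ar40_measured / k40) / LAMBDA_TOTAL

k40 = 1.0e-6            # potassium-40 in the sample (arbitrary mole units, made up)
inherited_ar = 6.0e-11  # a trace of Ar-40 trapped from the magma (excess argon)

print(f"truly degassed modern flow:  {k_ar_age_years(0.0, k40):.0f} years")
print(f"flow with trapped argon:    ~{k_ar_age_years(inherited_ar, k40) / 1e6:.2f} million years")
```

Cases like these did not overthrow radiometric dating; they identified samples that violate the complete-degassing assumption, which is part of why the 40Ar/39Ar method mentioned above is now preferred.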

Incompleteness of the rock record
Dr. Reed continues his assessment by characterizing the stratigraphic record as patchy and incomplete. What this means is that in any given location, only a fraction of Earth history has been recorded in the rock record. This comes as no surprise to any geologist, nor should it be to anyone in the general public. Currently, sediments are accumulating (and thereby recording Earth history) in the San Joaquin Valley of California, but are not accumulating in the adjacent mountain ranges (which are actually the source of those sediments). In order for sediments to accumulate where the Sierra Nevada range is currently located, the mountains must be weathered down and subside to form a sedimentary basin. How long do you suppose this would take? As you try to ‘guesstimate’ the answer, you can appreciate why considerable time gaps are expected to exist within the rock record. Although Dr. Reed presents this fact as an embarrassing challenge to the “pure empiricist” (which, by the way, no scientist is), the absence of a rock record for a given time interval commonly provides valuable information to the geologist. First of all, it reveals that the area did not constitute a sedimentary basin, but rather a source of sediments to adjacent regions. For example, Cretaceous rocks can be found throughout much of eastern Utah. From east to west, the rocks transition from silty/limey sediments with marine fossils to sandy/silty sediments with terrestrial fossils, suggesting that a shoreline ran through the middle of the state, with the sea to the east and highlands to the west. Thus we can predict that a mountain range was present in western Utah and eastern Nevada during the Late Cretaceous and provided sediments to riverine deposits found to the east (for reference, Bryce Canyon National Park contains a record of these deposits). If our interpretation is correct, there should be no rock record for this interval to the west (an unconformity), but we should find evidence of those mountains in rocks from central to eastern Utah (i.e. fragments of previously formed sedimentary and igneous rocks). Since I took the time to describe the example, you may have guessed that this is exactly what we find. Not only do eastern Nevada and western Utah contain a discontinuity in the stratigraphic record for this time interval, but sandstone layers from the Upper Cretaceous rocks in central-eastern Utah contain abundant fragments of older (Paleozoic) sedimentary rocks now exposed in Nevada and western Utah. Moreover, detrital zircons (pieces of zircon mineral grains from igneous rocks now found in sedimentary rocks) can be dated to track the source of sediments, and have been examined in Upper Cretaceous rocks from central Utah. A recent study documented clusters of detrital zircon ages in the range of 81-76 Ma (Jinnah et al., 2009), which is consistent with ages of volcanism in western Nevada and southern Arizona.
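For the curious, the sketch below shows the sort of analysis that underlies a detrital-zircon provenance study: many individual grains are dated (typically by U-Pb), and clusters of ages in the resulting distribution point to particular sources. The ages here are randomly generated stand-ins, not the data of Jinnah et al. (2009).

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Stand-in data: simulated single-grain U-Pb ages in Ma (NOT real measurements).
ages_ma = np.concatenate([
    rng.normal(78, 2, 40),      # a hypothetical young, volcanic-source cluster
    rng.normal(1700, 60, 25),   # hypothetical older grains recycled from basement rocks
])

# Smooth the age distribution and report where it peaks; each peak suggests a source.
kde = gaussian_kde(ages_ma, bw_method=0.05)   # narrow smoothing, an illustrative choice
grid = np.linspace(0, 2000, 2001)
density = kde(grid)
inner = np.arange(1, len(grid) - 1)
is_peak = (density[inner] > density[inner - 1]) & (density[inner] > density[inner + 1])
print("age peaks (Ma):", np.round(grid[inner][is_peak]))
```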

Before moving on, we should consider the implications of the previous example. The data imply that sediments comprising Upper Cretaceous sandstones of central Utah came from recycled sedimentary rocks to the west, with some source from volcanic rocks farther to the west. Thus Paleozoic rocks of Nevada/Utah needed time to accumulate, harden into rock, and undergo weathering and erosion before being carried more than 100 km to accumulate in a newly formed basin. This is in addition to large volcanic eruptions, which needed time to cool and crystallize (and not too little time, since overly rapid cooling produces glass rather than euhedral crystals). Is it any surprise that geologists have not quickly abandoned their assumption of deep time in spite of difficulties encountered while refining the geologic timescale?

In case you did not follow my example, consider another analogy to history. The record of human history is notably patchy and incomplete, just like the rock record. History has not been preserved for a majority of ancient peoples, due to an absence of written records or a subsequent destruction of evidence. While this provides a significant challenge to historians and archaeologists, they have been able to apply a multiplicity of scientific methods to piece together isolated records and reconstruct a meaningful history of mankind. Similar assumptions go into detailing human history as in geology, yet there is no outcry against historians for presenting an equally uncertain history with confidence.

Little green men?
Finally, Dr. Reed’s claim that Creationists can use biblical history as a template for understanding geologic history is simply misguided. Even assuming Dr. Reed’s interpretation of biblical history (i.e. a young Earth and a global flood), biblical history is by no means exhaustive. It is equally valid to propose that “little green men...influenced the course of evolution” after the Flood as it is to propose the same happened during a depositional hiatus in the Cretaceous. To respond that the preposterous story is contradictory to biblical history would be an argument from silence. I would encourage any readers to strongly consider the implications of Dr. Reed’s silly thought experiment. Are we to fear gaps in our understanding of nature, past and present, because they introduce uncertainty to our conclusions? I hope to address Christians in particular: do not advances in science improve our understanding of the world that was made and the one who made it? Yet scientific advances are not possible without treading boldly across those gaps in the hope that we can diminish uncertainty. Biblical theology lays the epistemological framework for empirical investigations of the natural world, but a reliance on all of scripture (including Genesis) as authoritative in matters of faith does not preclude our need of scientific investigation to understand nature and history. Rather, our use of science is warranted thereby. Once again, it is worth mentioning that even Dr. Reed’s interpretation of biblical history is dependent on historical and social sciences.

Dr. Reed’s understanding of geologic dating methods


I think that perhaps I should have begun with this section, in which I want to address the claims by Dr. Reed concerning specific dating methods. As much fun as it is to discuss the philosophy of historical sciences, don’t we just want to know whether such dating methods even work? Absolutely, and if you’ve read to this point, I appreciate your patience. So let’s take a closer look at each method mentioned in Dr. Reed’s criticism.

Radiometric dating
Radiometric dating methods have been constantly refined as our knowledge of the geology and physics behind them improves. The most commonly used methods for constructing the geologic timescale are the 40Ar/39Ar and U-Pb techniques, but other, older methods have by no means been “thrown out”, as Dr. Reed asserts. I think the confusion lies in the fact that he primarily references Gradstein et al. (2004), who mainly considered dates for Period and Stage boundaries in the geologic timescale (i.e. they were dating only stratigraphic units of rock, such as volcanic ash layers). Dr. Reed does not offer a firsthand critique of the supposed shortcomings of each method, but is confident that young-Earth critiques have sufficiently proven each to be “fatally flawed,” and that “the rock-solid chronology of radioisotopes has turned into quicksand.” I can only respond that this assessment is extremely optimistic, to say the least. Any review of the published scientific literature employing radiometric dating techniques will show that by and large the results are consistent, and that many of the underlying assumptions can be verified. Radiometric dating methods use a scientific model that does not always correspond to reality for a given sample, and hence discordant results do exist. I am aware that Dr. Andrew Snelling and others have compiled such outlier cases to sow doubt about these methods among the public, but I would warn against taking their claims too seriously. As a researcher in geology, my exhortation to you is to look more closely at the big picture, and realize that scientists have ignored neither discordant data nor the uncertainties in each method. On the contrary, discordant data can give very useful information about a rock’s history. If you are still interested in the particular ‘problems’ raised by Dr. Snelling and others, I would be happy to add more detail to the discussion in future posts. For the time being, let’s consider the rest of Dr. Reed’s comments.
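To make that “scientific model” concrete, here is a minimal sketch of the basic age equation behind methods like U-Pb: t = (1/λ) × ln(1 + D/P), where λ is the decay constant and D/P is the measured ratio of daughter to parent atoms. The isotope ratio below is hypothetical, chosen only to illustrate the arithmetic; real analyses involve corrections for common Pb, cross-checks between the 238U and 235U systems, and replicate measurements.

```python
import math

# Generic radiometric age equation, t = (1/lambda) * ln(1 + D/P), applied to 238U -> 206Pb.
# The measured ratio below is hypothetical, chosen only to illustrate the arithmetic.
LAMBDA_238U = 1.55125e-10   # decay constant of 238U, in 1/year

def u_pb_model_age(pb206_u238):
    """Model age in years from a measured radiogenic 206Pb/238U atomic ratio."""
    return math.log(1.0 + pb206_u238) / LAMBDA_238U

print(u_pb_model_age(0.0124) / 1e6)   # -> roughly 79 (million years)
```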

Although Dr. Reed does not believe geologists have any absolute chronometer, he is aware that radiometric dating “provides the only theoretical way to directly obtain absolute dates for virtually all of the rock record.” So if he is wrong about radiometric dates being unreliable, then his entire argument fails, because geologists do have an absolute chronometer, or ‘reliable clock’, against which they can calibrate the chronostratigraphic scale. But Dr. Reed insists on adding confusion to the discussion with a rather nonsensical line of reasoning: “Fundamentally,” he says, “isotopic dates cannot confirm the stages of the timescale because uncertainty in these methods precludes a certain chronology.” By “isotopic dates”, I assume he means radiometric dates, and by “stages of the timescale” I assume he is referring to the intervals labeled over the years by geologists (such as the Cambrian, Ordovician, etc.). Is he trying to say that geologists cannot confirm the time span of the Cambrian or Cretaceous, for example, because there are uncertainties in the dating methods? Does he believe that geologists have a preconceived age for each stage? (They don’t.) And what does it mean that uncertainty in the methods “precludes a certain chronology”? Which chronology? Or does he mean to say that the ± sign is too much uncertainty for scientists to handle? Dr. Reed continues:

“If radiometric dating is uncertain, then geologists continue to argue in a circle. This is because the primary argument about radiometric dating is not whether it is generally correct or generally incorrect but whether or not it is the reliable chronometer—the magic hammer that can set the golden spikes of time. A method that is not absolute cannot provide absolute dates. If it can be wrong some of the time, then it can be wrong at any given time, and therefore any given date cannot possess the certainty generally assumed by stratigraphers. For example, note how the argument that current methods are accurate reveals inaccuracies in other methods that once enjoyed equal confidence.”

Now things are getting ridiculous. I am not sure where Dr. Reed picked up his notions about how science is supposed to work, but it certainly was not by contributing research to the field. He appears to expect geologic dating methods to be proven infallible or else considered useless, but where does this expectation come from? Isaac Newton used rather simple geometric methods and gravitational theory to estimate the distance to the moon. As technology improved, so did estimates of this distance, and Newton’s original calculation was shown to be reasonably accurate (despite errors in some of his assumptions and variables). Technology will continue to improve, and estimated distances to all planetary objects will be updated correspondingly. But according to Dr. Reed’s line of reasoning, nobody should claim with confidence that the moon, Sun, and stars are vast distances away, because there are uncertainties in our calculations! In fact, it’s probably just an illusion, and no celestial object is farther away than the uppermost stratosphere. Yes, I know manned spacecraft have been to the moon, but I could always propose that such missions fail to take into account changes in the physical laws of the universe as one ventures farther from the Earth’s surface. I would encourage Dr. Reed to spend more time arguing science as it is used by scientists, and less time redefining terms to play games with semantics. Yes, radiometric dates are wrong some of the time, and when they are wrong (discordant, at least), geologists devote much more time to exploring why they were wrong in that case. Then they formulate a hypothesis, test the hypothesis, and repeat the experiment in line with the scientific method. Dismissing scientific models because of uncertainties is not science; it is unwarranted skepticism (the same brand of skepticism employed by those who doubt the early authorship or textual transmission of the New Testament, for a distant but relevant analog).

Following the quote above, Dr. Reed cites Gradstein et al. (2004) to convince his audience that with each new radiometric dating method, older methods lose their once-held confidence. However, the citation was only discussing why certain methods (such as the Rb-Sr and Sm-Nd methods) are not used as precision chronometers. This is a mis-citation on the part of Dr. Reed, who apparently does not understand the geologic reasons behind the preference. Methods such as Rb-Sr, Sm-Nd, and K-Ar still have application in geology and yield meaningful results, but they carry more room for error in stratigraphic units where interaction with hydrothermal fluids is more prominent, due to the much higher porosity and permeability of the rocks (i.e. water flows more freely through sedimentary rocks) and to less isolated crystal systems. Although precise tuning of the geologic timescale is possible with these methods, it requires much more work in terms of quality control. So why waste the time and money?

Before moving on, I wanted to point out that Dr. Reed seems to think the point of radiometric dating methods is to substantiate a common belief in evolutionary theory by demonstrating the existence of deep time. However, any geologist (or geochronologist) would scoff at the association, recognizing that the age of rocks and the validity of evolutionary theory are two separate issues. Unfortunately, Dr. Reed’s association is very effective when it comes to the general public, which is more skeptical about (and spiteful of) evolution than the age of the Earth. Lastly, Dr. Reed claims that “while radiometric dating remains the mainstay of the timescale, it does so because the alternative is to admit...that the age of the earth has not been demonstrated to be measured in billions of years and that the historical record of the Bible is back on the table.” Once again, I think any geologist (Christians included) would scoff at the claim. Radiometric dating methods are not on the brink of extinction, and geochronologists are by no means scrambling to counter the claims of AiG’s RATE team. But assuming they were, would a 6,000-year history be the only alternative? Deep time is not demonstrated by radiometric dating alone (or even primarily), but through a broad understanding of the geologic processes responsible for rocks seen today: the accumulation of sediments; the emplacement of large magma bodies; the crystallization and exhumation of igneous plutons; regional metamorphism of massive sedimentary rock bodies; the spreading of the ocean floor; and biogenic structures, including the sheer number of fossils and the biomass contained within sedimentary rocks. Yet YECs like Dr. Reed create the illusion of a discipline in crisis by addressing these lines of evidence in isolated cases rather than in the big picture.

Biostratigraphy
I am going to take this one point by point and save a lengthier discussion for another time. Also, I encourage you to read Dr. Reed’s section on biostratigraphy in full before considering my comments.

“Biostratigraphy is the use of index fossils to assign ages to the rocks that contain them. As has been noted by many creationists, the argument is circular because the deep time of evolution is a presupposition of the method.”

This is false. Biostratigraphy is a method used to correlate rocks based on the fossils they contain, since it is assumed that fossils represent the flora and fauna living at the time of deposition (an assumption verifiable by other geologic methods). Fossil assemblages were categorized early on, based on the location of the rocks containing them (e.g. the Cambrian, named for Cambria, the Latin name for Wales). Further categorization allows biostratigraphic correlations to become more precise as new species are found and more sections of sedimentary rock are analyzed. Fossil species are considered index fossils if 1) the first and last appearance of that species in the rock record can be dated radiometrically, with repeatable results; 2) the fossil can be found in multiple localities around the world, and radiometric dates for those rocks are consistent with one another; 3) the fossil is abundant in many rock types (i.e. dinosaurs need not apply). If these criteria are met, index fossils can be used to assign an age range (not an absolute age) to sedimentary rocks containing that fossil. The reasoning and the process are quite simple, but how do we know it works? Well, I would first point you to the success of the oil industry, which relies heavily on biostratigraphy to pinpoint the location of oil reservoirs. Furthermore, I have worked in sedimentary sections myself and have collected thousands of fossils. The order of fossil organisms is amazingly consistent, down to the subspecies level, and provides excellent evidence for the evolutionary history of life, as well as for the long ages estimated radiometrically. But that is a discussion for another day.
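To illustrate how an assemblage constrains an age range, here is a minimal sketch with invented species names and invented calibrated ranges (oldest, youngest, in Ma): the deposit’s age must fall within the overlap of the ranges of everything found together in it. Notice that the result is a range, not a single number, exactly as described above.

```python
# Hypothetical index-fossil ranges, (first appearance, last appearance) in Ma.
ranges_ma = {
    "foram_A":      (93.0, 85.0),
    "ammonite_B":   (89.5, 83.5),
    "inoceramid_C": (90.0, 86.5),
}

def assemblage_age_range(species_found, ranges):
    """Intersect the calibrated ranges of all species found together in one bed."""
    oldest = min(ranges[s][0] for s in species_found)     # youngest of the first appearances
    youngest = max(ranges[s][1] for s in species_found)   # oldest of the last appearances
    if oldest < youngest:
        raise ValueError("Ranges do not overlap; re-examine the identifications.")
    return oldest, youngest

print(assemblage_age_range(["foram_A", "ammonite_B", "inoceramid_C"], ranges_ma))
# -> (89.5, 86.5): the bed was deposited somewhere within this window
```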

“As an aside, note that the use of ‘key’ radiometric dates tacitly admits that some are better than others.”

That’s true. If you want to constrain the duration of geologic stages on the timescale, then a radiometric date taken from a stage boundary is much better than one from the middle of the stage. But I think Dr. Reed has misunderstood the use of the word ‘key’ here. It does not mean ‘dates that agree with our presuppositions.’

“Time periods or stages are ‘scaled geologically’ or assembled in their ‘proper order’ using index fossils. This can happen only if the truth of evolution is known in advance and if its progression is adequately preserved in the fossil record.”

This is completely false. Geologic time periods were assigned long before evolutionary theory entered man’s consciousness. The order is determined by the relative ages of the rocks, which are established by basic principles of stratigraphy. Here, Dr. Reed is citing another author, who was only making the point that I’ve been making all along. Biostratigraphy is used in tandem with sedimentary stratigraphy to assign relative ages and define stages for fossil-bearing rocks. This process does not require a knowledge of, or reference to, evolutionary history. Dr. Reed’s thinking is completely backwards on this topic.

“If the timescale has to be stretched in linear time with radiometric dates, does not that imply that the rock record itself does not give the appearance of age determined by radiometric methods—even with the assumption of evolution?”

Not at all. Again, Dr. Reed is playing semantic games with a citation from A Geologic Time Scale. On a side note, it seems most of his citations come from the first page of chapters in the book, leaving me to wonder whether he is familiar with the actual content. The original author was referring to geological scaling techniques used in biostratigraphy. For example, the range of a certain fossil must be measured in multiple sections of sedimentary rock, but the thickness of each section will vary from one locality to the next (sediments do not accumulate at the same rate in different water depths, climates, etc.). Scaling techniques allow geologists to estimate the age-to-thickness ratio for each section, so that an age can be assigned to each fossil or event once radiometric dates are available. Interestingly enough, radiometric ages invariably become younger upward through each rock section, as predicted by the interpreted relative ages of those rocks, and they fit the geological scaling very well. Thus the “stretching” referred to by Dr. Reed has nothing to do with apparent lengths of time, but with the calibration of an unknown timeline to a timeline with known points of reference.
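For those who like to see the arithmetic, here is a minimal sketch of that scaling step, assuming (purely for illustration) two dated ash beds in a measured section and a steady accumulation rate between them; real studies use more anchor points and test whether the rate really was steady.

```python
import numpy as np

# Two hypothetical ash beds anchor the section; ages of levels between them are
# estimated by interpolating on stratigraphic height (the 'age-to-thickness' scaling).
anchor_height_m = np.array([0.0, 120.0])   # heights of the dated ash beds in the section
anchor_age_ma   = np.array([85.0, 81.0])   # their radiometric ages, in Ma

def age_at(height_m):
    """Interpolated age (Ma) of a horizon at the given height in the section."""
    return float(np.interp(height_m, anchor_height_m, anchor_age_ma))

print(age_at(45.0))   # -> 83.5 Ma for a fossil horizon 45 m above the lower ash
```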

“Fossilization assumes in situ, low-energy paleoenvironments. Any high-energy catastrophic transport of fossils out of their “home” environment invalidates the scheme.”

That is absolutely true. However, the catastrophic transport of anything, fossils included, leaves behind distinct sedimentary structures and characteristics. Thus the assumption can be verified by a simple field analysis of the rocks, as well as by chemical analyses in the laboratory (that would be my field of study). The vast majority of index fossils are taken from fine-grained marine shales and carbonates, which show no evidence of transport (catastrophic or not).

“Since fossils do not show evolutionary transitions, the dates are purely conceptual. This is demonstrated by comparing the evolutionary 'dates' from the nineteenth century with those of the twentieth century.”

Dates assigned to index fossils, once again, have nothing to do with evolutionary theory. I am stunned that Dr. Reed thinks it would be appropriate, other than for entertainment, to compare “evolutionary dates from the nineteenth century” with the radiometric dates now assigned to index fossils. On what were nineteenth-century dates based? And for the record, many fossils do show evolutionary transitions.

“Ignorance of the complete fossil record demands empirical uncertainty...Living fossils and changing ranges of index fossils highlight that uncertainty.”

Our knowledge of the fossil record is certainly incomplete, and nobody denies this. However, biostratigraphy relies on rock sections where the first and last appearances of a given fossil are documented in multiple sections around the world. Furthermore, correlations are never based on a single fossil type, but on dozens of fossil species that comprise a complex assemblage. Thus even if several fossil species disappeared from the rock record without actually going extinct, it would not affect biostratigraphic correlations to any meaningful degree. If new evidence suggests a better constraint on the radiometric dates assigned to biostratigraphic intervals, then the range will change, but this has nothing to do with “ignorance of the complete fossil record”. Finally, although living fossils exist, these organisms are never used in biostratigraphy. Index fossils are typically microfossils such as foraminifera, pollen, and radiolarians, or small shelled organisms such as brachiopods and trilobites. Has anyone demonstrated the existence of Cretaceous foraminifera in modern oceans?

“The predominance of marine invertebrates as index fossils arbitrarily biases sampling.”

This is by no means arbitrary. The reasons for using marine invertebrates are 1) their skeletal structure changes more frequently throughout the rock record, so that species can be distinguished more easily; and 2) they occur in rocks formed in marine environments, where deposition is more continuous and erosion is rarer. But I can’t figure out what Dr. Reed means by “sampling” here. Sampling of what?

“For nearly 200 years, naturalists have asserted that evolutionary history is preserved in the rocks and have thrown that rock record into the teeth of Christianity.”

This claim is both inaccurate and unfair to all. First of all, Darwin’s theories were not used in biostratigraphy until decades after he introduced them (and long after the advent of biostratigraphy itself). Secondly, evolutionary history is preserved in the rocks, but this has nothing to do with Christianity, the tenets of which do not define our expectations for the rock record.

Dr. Reed devotes the rest of the section to commenting on a citation from Gradstein et al. (2004), who admit that some problems exist with “treating strata divisions largely as biostratigraphic units.” Of course, this admission seems very exciting to Dr. Reed, who perceives that “the biostratigraphic interpretation of the rock record is perhaps not so clear after all”, but I am certain he doesn’t understand the implications thereof. For one, Gradstein et al. are explicitly referring to cases where stage boundaries are defined only by biostratigraphic markers (fossils). Currently, this applies to about half of all stage boundaries, but that number is decreasing rapidly. Secondly, the uncertainty introduced by the problems that Gradstein et al. summarize does not affect the absolute ages of the timescale, but only where to place the age marker in a given sedimentary rock section. Imagine that an argument existed over where to define the beginning of the day: should it be at midnight, or should it vary based on sunset/sunrise? This is similar to the argument over which fossils should be used as boundary markers, but notice that neither option results in shorter or longer days. Now consider times in history before the invention of mechanical clocks. How would you define midnight then? More importantly, do uncertainties in rudimentary timekeepers give us reason to doubt the reliability of human history before the advent of Swiss watchmakers? Obviously not, and likewise there is no reason to dismiss the power of biostratigraphy to correlate rocks. Uncertainties exist, but they don’t change the big picture by any stretch of the imagination.

Astronomical cycle stratigraphy
If you’re not familiar with the concept of Milankovitch cycles, don’t worry. The theory is rather straightforward: 1) Earth does not follow exactly the same path every time it orbits the Sun; 2) variations in the tilt of Earth’s rotational axis and in the shape of its orbit occur over long periods of time; 3) these variations affect the amount of solar energy received by Earth, which affects the strength of seasonality and overall climate; 4) the variations are cyclic, like a sine wave, so the configuration of Earth’s orbit and axis can be extrapolated back through time. Combined, these four premises (and yes, I’m simplifying) are used to formulate a predictive theory about Earth history. We can predict, for example, that climate-dependent characteristics of sedimentary rocks should record astronomical cycles to some extent. If you’re confused, just think of it this way. Day and night are the result of an astronomical cycle: the rotation of the Earth (sometimes you face the Sun, sometimes you don’t). Seasons are the result of Earth’s orbit around the Sun, combined with the fact that Earth rotates on an axis that is not perpendicular to that orbit. Milankovitch cycles are no different, qualitatively. Just imagine them as long-term seasons, which recur on scales of roughly 26,000, 41,000, and 100,000 years.
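As a rough illustration of why such cycles should leave a visible imprint in sedimentary rocks, consider the expected spacing of climate-driven beds at an assumed, purely hypothetical accumulation rate; all numbers below are for illustration only.

```python
# Expected stratigraphic spacing of each orbital cycle at an assumed accumulation rate:
# period (kyr) x rate (cm/kyr) = bed spacing. The rate is hypothetical.
sed_rate_cm_per_kyr = 2.0
periods_kyr = {"precession": 26, "obliquity": 41, "eccentricity": 100}

for name, period in periods_kyr.items():
    spacing_m = period * sed_rate_cm_per_kyr / 100.0
    print(f"{name:12s} ~{period:3d} kyr -> beds about {spacing_m:.2f} m apart")
# precession ~0.52 m, obliquity ~0.82 m, eccentricity ~2.00 m at 2 cm/kyr
```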

So what does Dr. Reed have to say about the use of astronomical cycles in stratigraphy? He states, “All such oscillations boil down to variations in solar radiation...reaching Earth...” As I also mentioned, changes in solar radiation are an important factor, but certainly not the only one. One must also consider the degree of seasonality (i.e. the temperature and precipitation differences between winter and summer) and changes in sea level (not just those resulting from climate change, but also those driven directly by astronomical forcing). All of these factors directly affect water depth, temperature, and the rate of primary production, which in turn affect several characteristics of the sediments. At this point, Dr. Reed points out three major assumptions that he sees behind the use of astronomical cycles in stratigraphy:

“(1) cause and effect between oscillations and sedimentation to the extent that this “signal” overrides terrestrial influences, (2) cyclicity and continuity in sedimentation driven predominantly by climate, and (3) uniformity of rates and preservation that enable the “signature” to be manifested.”

With regard to the first, I don’t see why Dr. Reed deems it necessary for the astronomical signal to “override” factors on Earth that affect sedimentation (say, tectonics?). In other words, geologists do not assume that astronomical cycles dominate the record of chemical or lithological changes in the sediments, but recognize that the astronomical signal will be superimposed on any terrestrial signal. As for the second ‘assumption’, geologists recognize that discontinuities in sedimentation occur, and such discontinuities would pose a challenge to interpreting any astronomical signal. However, they can be identified through a variety of geological methods (petrographic analysis and/or isotopic trends, for example). Furthermore, determining whether an astronomical signal is present requires rigorous statistical criteria, as opposed to purely visual discernment (“Yeah, I think I see some cycles there?”). The use of non-parametric statistical analyses removes assumptions about perfect preservation. This applies to the third supposed ‘assumption’ as well.
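For readers curious what those statistical criteria look like in practice, here is a minimal sketch: a synthetic proxy record containing 26-, 41-, and 100-kyr cycles plus noise, whose dominant periods are recovered from a periodogram rather than by eye. All numbers are invented; real studies work in depth, build a depth-to-time model, and test peaks against a red-noise background for significance.

```python
import numpy as np

# Synthetic proxy series: three orbital periods plus noise, sampled every 1 kyr for 2 Myr.
rng = np.random.default_rng(0)
dt_kyr = 1.0
t = np.arange(0, 2000, dt_kyr)
signal = (np.sin(2 * np.pi * t / 26)
          + np.sin(2 * np.pi * t / 41)
          + np.sin(2 * np.pi * t / 100)
          + rng.normal(0, 0.5, t.size))          # noise standing in for local effects

# Periodogram: power at each frequency (cycles per kyr), mean removed first.
freqs = np.fft.rfftfreq(t.size, d=dt_kyr)
power = np.abs(np.fft.rfft(signal - signal.mean())) ** 2

strongest = np.argsort(power)[-3:]               # indices of the three largest peaks
print(sorted(1.0 / freqs[strongest]))            # -> periods near 26, 41, and 100 kyr
```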

Dr. Reed follows with a wonderful citation from A Geologic Time Scale, which briefly describes how the method is applied to the last 23 million years (where the model remains predictive). However, Dr. Reed jumps past the brilliant success of the model in predicting sedimentary rock ages, which are later confirmed by radiometric dates, and proposes the existence of more supposed problems with the theory. For one, he notes that the method cannot be applied to rocks older than ~20 million years. This is true, but not in the sense that Dr. Reed assumes. Cycle stratigraphy can be applied to rocks of any age, just not to predicting the absolute age of those rocks from interpreted orbital cycles alone. In such cases, the method works more like using a ruler on a football field: we can use it to measure out fine-scale distances from a known marker (say, the 50-yard line). Thus if we have a single rock layer of known age (from a radiometric date), then we can use astronomical signals to estimate the ages of the surrounding layers as we move away from the layer of known age. This approach has been used to estimate the duration of biostratigraphic zones where the uncertainty in the radiometric dates is larger than the duration itself (e.g. Locklair and Sageman, 2008).
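Here is a minimal, hypothetical version of the ‘ruler’ idea: count the cycles between two horizons, multiply by the cycle period, and you have a duration that can be finer than the radiometric error bars on either horizon. Every number below is invented for illustration.

```python
# Counting obliquity cycles between two hypothetical biozone boundaries.
obliquity_kyr = 41
cycles_counted = 12            # invented count between the two boundaries
lower_anchor_age_ma = 84.5     # hypothetical radiometric age of the lower horizon

duration_myr = cycles_counted * obliquity_kyr / 1000.0
print(f"zone duration ~{duration_myr:.2f} Myr; "
      f"upper boundary ~{lower_anchor_age_ma - duration_myr:.2f} Ma")
# -> zone duration ~0.49 Myr; upper boundary ~84.01 Ma
```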

Notice that at this point we must ask: if the calibration of sediments to astronomical cycles can be verified for the past 20 million years (and especially for the past 420,000 years), then why does Dr. Reed bother to continue? The model has already been tested and tried over timescales far longer than the ~6,000 years he is defending, so what good is it to nit-pick about sources of uncertainty that are negligible to the big picture? Once again, it is unwarranted skepticism:

“Like varves or ice layers, geologists simplistically assume that the target sediments were deposited slowly, uniformly, and in response to regular climatic variables. Remove those assumptions and the whole theory crumbles.” (emphasis added)

No, the assumption is not simplistic by any means. It is only accepted after being demonstrated by multiple independent methods. One could argue that a modern lake with 20,000 varves is not necessarily 20,000 years old, but when multiple radiometric dating methods yield single-layer ages consistent with the predicted sedimentation rate, that argument becomes a gratuitous assertion. Dr. Reed fails to realize that in some cases the criteria he names (slow and uniform sedimentation in response to climatic cycles) are not present, and geologists are familiar with such cases. The “whole theory” has not “crumbled”, however, because it still explains the big picture, and there are physical reasons for such exceptions. Deeming such cases exceptions requires application of the scientific method rather than a wholesale, unwarranted dismissal of the facts. But Dr. Reed continues with a rapid fire of more gratuitous assertions:

“...any rapid or catastrophic style of sedimentation would render this style of dating meaningless...” or “...diagenesis could easily alter carbonate sequences enough to mask the signal...”

Both of which can be ruled out easily by sedimentological, stratigraphic, and geochemical criteria. I’ve done this myself. It takes work and it takes time, but it’s not hard.

“...large submarine slump would generate turbidites that hypothetically could show a regular cycle of interbedded lithologies. Yet deposition would happen instantaneously. What would a plot of the various chemical ratios up through such a deposit show?”

Turbidites are quite easy to pick out in the rock record. For one, they produce coarse-grained lithologies in deep-water settings, a good indication that you picked a bad spot to interpret “astronomical forcing”. And to answer Dr. Reed’s rhetorical question, the chemical ratios would be stochastic and would fail the statistical criteria. Of course, this can be tested quite easily, and I’d be happy to run the samples for Dr. Reed if he were to provide them.

On a final note, Dr. Reed offers that the Flood model would undermine all assumptions made by cyclostratigraphers. Of course that is true, but Flood geologists have yet to propose a working model that could predict the sedimentary and geochemical trends observed in sediments, ice records, speleothems, and more. Until then, Dr. Reed’s comments resound only of skepticism based on personal preference. In the meantime, geologists have produced thousands of studies that use orbital cycles to correlate sedimentary rocks. Their success is a witness to the viability of the method.

Magnetostratigraphy
I don’t think I’ll spend any time here discussing the details of magnetostratigraphy; I would prefer to challenge you all to read Dr. Reed’s comments on the discipline and see whether his argument is consistent. In any case, I felt it was worth commenting on at least one misconception:

“Note that [magnetostratigraphy] assumes plate tectonic theory and measurable spreading rates. But if the rocks can be dated well enough to supply those rates, then why is there a need for magnetostratigraphy?”

Here, Dr. Reed is referring to the dating of magnetic reversals using the ocean floor, but he obviously does not see the application to other rocks. Magnetic signatures can be taken from sedimentary rocks of all kinds, and are more typically used to reconstruct the movement of continents over time (magnetic signatures also record the latitude at the time of deposition). Changes in the polarity of those signatures are used to correlate the sedimentary record (the result of sediments burying fossils) with the basalt record of the ocean floor (the result of volcanism at mid-ocean ridges), and with minor exceptions, the two match up very well. So we must then ask Dr. Reed: how do you explain this correlation in a young-Earth model? I understand that Dr. Reed is confident that rapid magnetic reversals can be explained by the geophysical models of Dr. Humphreys and others, and that we should therefore expect to see reversals in both rock records, but why should they correlate at all? For example, why should sedimentary rock sections containing the Barremian-Aptian boundary (determined by the fossils present) also yield similar radiometric dates (~125 Ma) and show similar magnetic reversal patterns (e.g. He et al., 2008)? In the ‘uniformitarian’ model, the answer is obvious. But it is as yet unclear how, in a Flood model, all of these processes are related or why they should produce consistent data.
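To show what “matching up” means in practice, here is a minimal sketch with invented numbers: express each record as the relative lengths of its successive normal/reversed intervals and compare the two patterns. Real correlations are anchored by fossils and radiometric dates as well, not by pattern similarity alone.

```python
import numpy as np

# Hypothetical reversal records: widths of successive polarity intervals on the ocean
# floor (km of crust) and thicknesses of the matching polarity zones in a sedimentary
# section (m). Normalizing each to fractions of its total removes the different
# 'recording rates' (spreading rate vs. sedimentation rate).
seafloor_km = np.array([12.0, 4.0, 9.0, 3.0, 7.0])
section_m   = np.array([6.1, 1.9, 4.6, 1.4, 3.5])

pattern_a = seafloor_km / seafloor_km.sum()
pattern_b = section_m / section_m.sum()
print(np.corrcoef(pattern_a, pattern_b)[0, 1])   # -> near 1.0: the same reversal pattern
```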

Final thoughts

Dr. Reed devotes the remaining sections of the article to demonstrating his lack of familiarity with the construction of the geologic timescale, and particularly his inability to understand how geologic dating methods are applied. Furthermore, he does not fully understand the assumptions that go into each method, and contradicts himself in trying to articulate them. For example, he repeatedly refers to an assumption of constant sedimentation rates, or constant spreading rates of mid-ocean ridges, while ignoring the fact that he has already cited authors who would never consider those assumptions valid or necessary.

On an unrelated note, Dr. Reed’s writing style can be misleading in itself. For one, the use of “quotes” around words to encourage doubt is simply inappropriate for scholarly discussion, because it creates the illusion that a meaningful argument has been made by subtly adjusting a word’s connotation for the reader. I highly doubt any of Dr. Reed’s audience would take me seriously if I constantly referred to the “magical instance” called “the Flood” that Dr. Reed has “verified” by “science.” Out of respect for the discussion and for the truth, I prefer to take the issue more seriously.

So while there is obviously more detail to be discussed, I wish to stop here and simply ask you: which model has thus far explained the big picture? Do you believe that the geologic timescale is in crisis? If so, to what degree, and what is the alternative? I hope that I have been able to summarize accurately the methods used by geologists to interpret Earth history. Further, I hope that you will not be afraid to ask a geologist if you have questions about how things work, especially if you find Dr. Reed’s arguments convincing on any point. As you can see from the length of my discussion here, many geologists are more than happy for the opportunity to just...talk about rocks.




References Cited:

Gradstein, F.M., Ogg, J.G., Smith, A.G., 2004, A Geologic Time Scale: Cambridge University Press, 589 p.

Jinnah, Z.A., Roberts, E.M., Deino, A.L., Larsen, J.S., Link, P.K., Fanning, C.M., 2009, New 40Ar-39Ar and detrital zircon U-Pb ages for the Upper Cretaceous Wahweap and Kaiparowits formations on the Kaiparowits Plateau, Utah: implications for regional correlation, provenance, and biostratigraphy: Cretaceous Research, v. 30, p. 287-299.

Locklair, R.E., and Sageman, B.B., 2008, Cyclostratigraphy of the Upper Cretaceous Niobrara Formation, Western Interior, U.S.A.: A Coniacian–Santonian orbital timescale: Earth and Planetary Science Letters, v. 269, p. 540-553.