New Manuscripts Available at CSNTM

Another fantastic new press release from CSNTM:

New manuscripts digitized by the Center for the Study of New Testament Manuscripts (CSNTM) have just been added to our searchable collection. These include 10 new manuscripts from the National Library of Greece in Athens, the site of our ongoing digitization project for 2015–16.

  • GA 777: From the 12th century, this manuscript (MS) contains the complete Tetraevangelion. The manuscript features 22 beautiful icons, many of which are from the life of Jesus.
  • GA 792: From the 13th century, this is a rare MS in that its New Testament contents include only the Gospels and Revelation. Also included are selected passages from the Old Greek.
  • GA 798: From the 11th century, this MS of the Gospels contains Matthew and Mark. CSNTM had previously digitized the other portion (containing Luke and John) housed at the Institute for New Testament Textual Research (INTF), so digital images are now available for the entire MS.
  • GA 800: From the 12th or 13th century, this MS of the Gospels has extensive commentary wrapping around the text on three sides, and some unique textual features.
  • GA 1411: From the 10th or 11th century, this MS of the Gospels contains extensive commentary on John and Luke by Chrysostom and Titus of Bostra.
  • GA 1412: From the 10th or 11th century, this MS of the Gospels interweaves the biblical text with commentary by Chrysostom and Titus of Bostra, using a variety of different methods to distinguish the text from the commentary.
  • GA 1973: From the 13th century, this MS of Paul’s letters contains commentary from Theophylact of Bulgaria.
  • GA Lect 440: Paper lectionary dated to 1504, which was damaged and, at some later point in its history, repaired with paper bearing text from other documents.
  • GA Lect 1524: Paper lectionary dated to 1522, a well-used manuscript.
  • GA Lect 2007: Paper lectionary from the 15th century.

We have also added images for 12 manuscripts that are now in our digital library. Many of these are older images from microfilm.

  • GA 08
  • GA 010
  • GA 014
  • GA 015
  • GA 017
  • GA 018
  • GA 019
  • GA 020
  • GA 034
  • GA 035
  • GA 038
  • GA 044

These images have now been added to our growing searchable collection, which gives everyone free access to the best available digital images of Greek New Testament manuscripts.

All images are available in the new searchable collection.

Lexical Fallacies by Linguists

Ever since James Barr’s Semantics of Biblical Language, originally published in 1961, introduced students of the Bible to the fascinating field of linguistics, the world of biblical studies has not been the same. Barr took his cues from linguists such as Ferdinand de Saussure, whose 1916 work Cours de linguistique générale (translated as Course in General Linguistics) marked a milestone in lexical studies.

Some of the lexical fallacies pointed out by these scholars, and numerous others after them, include the following:

  • Root fallacy: assigning the (supposed) original meaning of a word to its usages throughout history;
  • Diachronic priority: like the etymological or root fallacy, this looks at usage throughout the history of a word as though all such uses are still in vogue at any given slice of history (synchronic view);
  • Illegitimate totality transfer: assumes that all the uses that occur at a given time apply in any given instance;
  • Lexical-conceptual equation: the belief that a concept is captured in a single word or word group or the subconscious transference of a word to the concept and vice versa (like ἁμαρτάνω and sin).

All of these fallacies are well documented in the literature prior to 1961 (and even after!), and they are indeed linguistic fallacies that must be avoided. I have essentially applied this linguistic approach to syntax in my Greek Grammar Beyond the Basics: An Exegetical Syntax of the New Testament (Zondervan, 1996).

There are other ‘fallacies,’ however, that are themselves fallacious. Three of these are enumerated below:

  • a word has no meaning apart from context;
  • diachronics are not helpful; instead one must focus entirely on synchronics;
  • etymology is always worthless.

I will briefly examine these three fallacies of linguists in this blog post.

A Word Has No Meaning Apart from Context

Often linguists say that the word being examined should have the meaning of ‘X’, with ‘X’ being only what one can determine from the context. But this is an unreasonable demand on any word. If every word in a given utterance had the meaning ‘X’, then we simply could not figure out what any utterance ever meant. Consider the following sentence:

Mary had a little lamb whose fleece was white as snow.

If the only word we did not understand was ‘lamb’, then with a little help from the broader context we might be able to determine that it meant a four-legged domesticated ruminant mammal whose woolly coat is used for clothing. But what if we did not know the meaning of any of the words in this utterance? Unfortunately, when scholars armed with modern linguistics do lexical studies, they often assume the meaning of every word except the target term. But where did the meanings of the other words come from? If we were to carry the linguistic notion that a word has no meaning apart from its context to its logical conclusion, then the above sentence would initially be rendered:

X X X X X X X X X X X.

Like Egyptian hieroglyphics before the discovery of the Rosetta Stone, the sentence would be undecipherable; we would never be able to figure out its meaning. It is not only the immediate context that tells us what a word means, and this leads us to the discussion of the second fallacy.

Diachronics Are Not Helpful

Frequently, linguists assume that diachronics are not helpful in determining a word’s meaning. The analogy that Saussure used was a chess game: Someone who observes a chess match, coming in sometime after the beginning of the match, can simply by observation determine who is winning the game. He or she does not need to know any of what has occurred prior to this point. This is synchronic (current time) priority to the exclusion of diachronics (over time).

There are inherent problems with this analogy, however. In chess, each one of the pieces always has its own defined functions and abilities; this never changes, yet it presupposes diachronics. Further, the chess game is not really the best analogy. A better one would be an American football game (or some other contact sport that involves teams). Suppose you came to the stadium at the beginning of the third quarter of the 1974 USC-Notre Dame football game. The score at the time was 24–7, with Notre Dame in the lead. You might say that Notre Dame was well on its way to winning the game, and you might even put money on it. I saw the game, but didn’t bet on it—though I should have, since I have always been a USC fan! The second-half USC team seemed to be different guys wearing the same numbers: USC went on to win 55–24, with Notre Dame completely shut out in the second half. One would have to know about momentum (USC scored their first touchdown just before the half), and even what the coaches said to the players at halftime. At the very least, just knowing the score would not be a helpful predictor of the outcome.

Expanding on this analogy, suppose you saw a game in which the teams were tied with 5 minutes to go. Knowing who had the momentum (which could only be known by diachronics); what injuries may have sidelined some key players—and when they happened; which team had the ball—and, just as important, how they acquired it; which plays had worked; and which men were the playmakers are all important factors in determining the outcome. Just as professional gamblers do not simply look at the W–L column but also examine injuries, home-field advantage, weather, one-on-one matchups, and numerous other factors, so diachronics are a key element in determining the outcome. Although the current situation (synchronics) is the most important factor, the past also helps one to get a clearer picture.

Linguists have often said that since the speaker or writer whose words they are trying to understand may be blissfully unaware of the diachronic usage of his words, linguists need to focus on that author’s/speaker’s usage rather than on the past. I agree that we must employ the principle of synchronic priority, but we should not embrace the notion of synchronic exclusivity. Why? Because said author/speaker is presumably comfortable with his own language, having been exposed hundreds or thousands of times to most of the various words he will use in any given utterance. Diachronics are needed by the modern investigator, not the ancient speaker. Precisely because the modern researcher does not have the same linguistic background as the person whose usage is being examined, he or she must ‘get up to speed’ on what a word can mean by employing diachronics. Consider, for example, the word-group κοινός/κοινωνία/κοινόω/κοινωνέω, etc. In the New Testament, when this word-group is used of human beings’ relationship to God, it is often put in a positive light because of the cross. We have fellowship (κοινωνία) with God because Jesus has made this possible. But in the Septuagint, this word-group frequently, if not usually, has a decidedly negative tinge. Has the word changed its meaning? No, it still has the idea of (sharing something in) common. What has changed is mankind’s relationship to God through the blood of his Son. But someone looking only at the synchronic meaning of the word-group in the New Testament may miss this background and thus an important clue to the richness of its usage there.

Etymology Is Always Worthless

Certainly for words that have a long history, etymology is hardly needed to determine meaning. The fact is, words change in their meaning over time; the root fallacy ignores this fact. But what about words that are of recent vintage, perhaps even coined by the author one is studying? Consider, for example, θεόπνευστος, a word appearing only in 2 Timothy 3.16 in the Greek Bible. Although Paul did not invent the term, it was recently coined (apparently occurring for the first time in the Hellenistic period). As such, its history is short by the time we get to Paul. Breaking it down into its constituent elements (one form of etymologizing), we see that the word may mean “God-breathed” or “inspired by God.” Did it have this force in 2 Timothy 3.16? Almost surely it did. In instances where a word is of recent coinage, and especially when it is used for the first time by the author in question, etymology is a must. No author would coin a word whose meaning had no resemblance to its parts. Words that have been in circulation for a long time, however, especially common words, require primarily a synchronic analysis, supplemented by diachronics.

Although modern linguistics has made significant and abiding contributions to biblical studies, not all linguistic principles are of equal value. And some may even be fallacies themselves.

Can We Still Believe the Bible?


Craig Blomberg, Distinguished Professor of New Testament at Denver Seminary, has written another outstanding volume. Blomberg is a committed evangelical, but not one with a closed mind. As he says in his preface about the environment of Denver Seminary (quoting Vernon Grounds, former president of the school), “Here is no unanchored liberalism—freedom to think without commitment. Here is no encrusted dogmatism—commitment without freedom to think. Here is a vibrant evangelicalism—commitment with freedom to think within the limits laid down by Scripture.” Blomberg’s writings have always emulated this philosophy. His research in the secondary literature is consistently of superb quality, and his discussions of problem passages and issues, especially in the Gospels, are always well informed. Rather than clutter the narrative with documentation, Blomberg has wisely used endnotes instead of footnotes (though I personally prefer footnotes, I understand that most readers see them as a distraction). This book has nearly 50 pages of endnotes, almost one fifth of the whole volume. Blomberg knows his stuff.

I received a prepublication draft of the book, Can We Still Believe the Bible?, and was asked to blog about it. More specifically, I was asked to blog about the first chapter, “Aren’t the Copies of the Bible Hopelessly Corrupt?”

This first chapter addresses the number one apologetic issue of our time—Did the scribes get it right when they copied the scriptures? No longer is the main attack on the Christian faith framed in the question, Is the Bible true? It is now the preliminary question, How do you even know that the Bible you have in your hands accurately represents the original documents? History, as many ancients conceived of it, is circular rather than linear. In this case, that’s true: “Hath God said?” is the original attack on God’s word, way back in the Garden. We’ve come full circle once again.

In this chapter, Blomberg rightfully shows how Bart Ehrman misrepresents the situation in his book Misquoting Jesus. For example, many who read Misquoting Jesus get the impression that the approximately 400,000 textual variants among New Testament manuscripts are, by themselves, enough to destroy the Christian faith. But the reality is that less than one percent of all variants are both meaningful and viable. And even Ehrman himself has admitted that no cardinal doctrine is jeopardized by these variants.

Blomberg lays out a compelling argument, with much nuance, about the reliability of the NT and OT manuscripts. His chapter on the text of the Bible is organized as follows:

  • Misleading the Masses
  • The Truth about Variants (New Testament, Old Testament)
  • Did Originals Originally Exist?
  • Comparative Data
  • Avoiding the Opposite Extreme
  • Conclusion

In the opening section, the author takes on Bart Ehrman’s wildly popular book, Misquoting Jesus. In characteristic fashion, Blomberg critiques both what Ehrman does and doesn’t say, and he does so with wisdom and wit. He points out, for example, that virtually nothing in Misquoting Jesus is new to biblical scholars—liberal, evangelical, and all stripes in between. Non-scholars, especially atheists and Muslim apologists, latched onto the book and made preposterous claims that lay Christians were unprepared for. Ignorance, in this case, is not bliss. Earlier in the chapter, when Blomberg mentions that there are as many as 400,000 textual variants among the manuscripts, he bemoans: “It is depressing to see how many people, believers and unbelievers alike, discover a statistic like this number of variants and ask no further questions. The skeptics sit back with smug satisfaction, while believers are aghast and wonder if they should give up their faith. Is the level of education and analytic thinking in our world today genuinely this low?” (13).

He then discusses the two major textual problems that Ehrman zeroes in on: Mark 16.9–20 and John 7.53–8.11. He makes the insightful comment that the probable inauthenticity of these passages is news to laypeople because they tend not to read the marginal notes in their Bibles and because “more and more people are reading the Bible in electronic form, and many electronic versions of the Bible don’t even include such notes” (15).

In passing, I’d like to make three comments about the ending of Mark’s Gospel:

  1. Blomberg says that there is no passage elsewhere in Mark that has nearly as many variants as 16.9–20 (19). This may be true, but he doesn’t document the point. A similar claim has often been made about the pericope adulterae, but I’m not sure it holds for the ending of Mark.
  2. Blomberg cites Travis Williams, “Bringing Method to the Madness: Examining the Style of the Longer Ending of Mark,” Bulletin for Biblical Research 20 (2010), to the effect that “the style of writing in the Greek significantly differs from the rest of Mark’s Gospel” (19). This article was first read at the southwest regional Evangelical Theological Society meeting shortly after Travis was an intern of mine. He did an outstanding job on the paper; hence its publication in BBR. Since this publication, another student of mine, Greg Sapaugh, wrote his doctoral dissertation at Dallas Seminary on “An Appraisal of the Intrinsic Probability of the Longer Endings of the Gospel of Mark” (2012). Both scholars came to the same conclusion: the language of Mark 16.9–20 is anomalous and almost surely was not written by the person who wrote Mark 1.1–16.8.
  3. When discussing whether the real ending of Mark’s Gospel was lost, Blomberg says, “The open end of a scroll was the most vulnerable part of a manuscript for damage; perhaps Mark literally got ‘ripped off’!” (20). He goes on to argue against this view, holding instead that Mark intended to conclude his Gospel at v. 8. Although Blomberg is right to note that Mark was almost certainly written on a roll rather than a codex, he doesn’t mention the great difficulty this poses for those who think that the real ending was lost. Ancient rolls were almost always rerolled for the next reader, so that the beginning of the text sat on the outside and the ending was wound up on the inside. Assuming that to be the case for Mark, the ending of the Gospel would be the most protected part.

Blomberg also highlights many of the other major passages that Ehrman wrestles with, such as Mark 1.41, Heb 2.9, and Luke 22.43–44. In the process, he notes that of the two standard Greek New Testaments in use today—the Nestle-Aland text and the United Bible Societies’ text—the latter includes only the most important textual problems (1438 of them), and a perusal of these reveals that “the only disputed passages involving more than two verses in length” are Mark 16.9–20 and John 7.53–8.11 (18).

The author takes pains to introduce the discipline of textual criticism to lay readers. He discusses some of the major textual problems (or, rather, those with much emotional baggage because of their long history in the printed Bible) in the NT (including Matt 5.22; 6.13; Acts 8.37; and 1 John 5.7–8), patiently going through the evidence, showing that the wording in the KJV is spurious because it is poorly attested in the manuscript evidence and/or has strong internal evidence against it.

The question is then raised, Why are these passages (including the two 12-verse texts mentioned earlier, along with Luke 22.43–44) sometimes printed in our modern translations? Blomberg gives a nuanced answer, but the bottom line (in my view) is this: Translations follow a tradition of timidity. My own examination of over 75 translations in a dozen different languages reveals the same monotonous story: Translators keep these passages in the text of their Bibles because to do otherwise might upset some uninformed Christians. But Ehrman has let the cat out of the bag. Just as Edward Gibbon’s Decline and Fall of the Roman Empire pointedly athetized the Trinitarian formula in 1 John 5.7–8 over two centuries ago, so Ehrman has done the same for Mark 16 and John 8. When Gibbon wrote this note in his six-volume work, it scandalized the British public. A hundred years later, the Comma Johanneum did not even show up as a marginal note in the Revised Version of 1881. It is time for us to relegate these likely inauthentic texts to the footnotes. Otherwise, we will continue to placate uninformed believers, setting them up for a Chicken Little experience when they read books like Misquoting Jesus. Sadly, tens of thousands of college students raised in Christian homes have abandoned the faith out of embarrassment over these issues, especially because of Misquoting Jesus. In recent years, it has been estimated that over 60% of kids coming from Christian homes abandon the faith by the time they finish college. It is time for pastors and other Christian leaders to educate the masses about the reality of the transmission of the Bible. If we don’t, the fallout will only get worse.

Blomberg also discusses more routine textual variants (what he calls “ordinary and uninteresting,” the latter description of which I would disagree with :-)), giving outsiders a glimpse into the discipline of NT exegesis. (At least he does correct this a bit later: “The vast majority of textual variants are wholly uninteresting except to specialists [italics mine].”) Almost anyone who has spent time with the textual apparatus is amazed at how little the vast majority of variants affect the meaning of the text.

In his treatment of the gap that exists between the originals and the early copies, he argues that “One may fantasize about all kinds of wild changes being introduced between the first, complete written form of a given book and the oldest copy we actually have, but it will be just that—fantasy…” (35). I’d like to offer some supplemental reasoning for why this is almost certainly true. Against the supposition that the older the manuscripts discovered, the more likely it is that we will find new, authentic readings, we can simply look at the last 130+ years. That is when all but one of the NT papyri (our oldest manuscripts) were discovered. How many earth-shaking new readings have commended themselves to scholars as autographic among these 128 NT papyri? None, zero, zilch. Not a single new reading since the discovery of the NT papyri has been viewed by textual scholars as authentic. Does this mean that the papyri are worthless? Not at all. Rather, they usually confirm readings that scholars already thought were authentic; now, with even earlier evidence found in the papyri, the arguments are stronger. This shows that the methods of textual scholars since the work of Westcott and Hort (1881–1882) are, in broad strokes and in many particulars, on target. But, with regard to Blomberg’s point, it also shows that if history is any indication, it would be foolish to think that any not-yet-discovered readings will someday grace the text of our critical Greek New Testaments instead of finding a place in the apparatus of also-rans.

In comparing the copies of the NT with other ancient Greco-Roman literature, Blomberg argues well that Christians need not feel embarrassed about the relatively small gaps between the originals and the earliest copies (most NT books have copies within a century of the completion of the NT), since the gaps for other literature are far greater (hundreds of years). Further, the differences among the copies of, say, the apocryphal literature are remarkably greater than those among the NT copies. He mentions as an illustration the Coptic text of the Gospel of Thomas and the three Greek fragments (though he incorrectly dates them to the second century [36]), citing Tim Ricchuiti’s excellent study (in Revisiting the Corruption of the New Testament).

In his last section before the conclusion, “Avoiding the Opposite Extreme” (37–40), Blomberg offers some excellent insights about the ludicrousness of a perpetual miracle of exact copying of the text (akin to the argument that Muslims use about the Qur’an, and that some KJV advocates come close to making about the TR): “But think of just what kind of miracle this would need to be for it really to have occurred. Not only would God have superintended the process of a select group of biblical authors penning their documents so that their words reflected precisely what God wanted to have written; God would also have needed to intervene in the lives of all the tens of thousands of copyists over the centuries to ensure that not one of them ever introduced a single change to the texts they were reproducing” (39). He goes on to expound on this topic with remarkable clarity and logic. Definitely a good read.


There are a few errors of fact and misleading statements in Blomberg’s new release.

  1. Page 15: The author says that Ehrman’s Orthodox Corruption of Scripture, on which Misquoting Jesus was based, was Ehrman’s doctoral dissertation. Actually, Ehrman wrote his dissertation on the text of Didymus the Blind. Orthodox Corruption is Ehrman’s most influential scholarly work, but it was not his dissertation.
  2. Page 16: Blomberg claims it would be “extraordinarily unlikely that we shall ever again find variants that are not already known.” Actually, it is very likely that we will find variants in almost every new MS discovered; they are almost always so trivial, however, that they would not warrant mention in an apparatus. What is unlikely in the extreme is that any of these MSS will have new readings that convince scholars of their authenticity.
  3. Page 24: The textual problem in Rom 5.1 is discussed; Blomberg notes that the difference between ‘we have peace’ and ‘let us have peace’ is one letter in Greek: it is either an omicron or an omega. He says that the forms would have been similar, but he gives the modern capital letters (Ο, Ω) rather than the majuscule forms (ο, ω) in which the oldest MSS are written.
  4. Page 27: The author suggests that every single second- and third-century papyrus of the NT was “written with the very careful handwriting of an experienced scribe…” This, however, is not true. The penman of P75, for example, was probably not a professional scribe (according to E. C. Colwell), although he produced a very careful text, painstakingly writing out one or two letters at a time. Further, even later scribes were definitely not professional: P10, P93, and P99, for example, were either done for private use or were perhaps schoolboy exercises. I pointed out in one of my debates with Ehrman (SMU, 2011; DVD available here) that a comparison of P66 and P75 reveals that the more professional scribe (P66) produced the less careful text. Zachary Cole, who is currently working on his doctorate in NT textual criticism at Edinburgh University, wrote his master’s thesis at Dallas Seminary (2012) on “Scribal Hands of Early New Testament Manuscripts.” The thesis was written in response to Ehrman’s claim that the earliest scribes were not professional and therefore their text was not carefully produced. Several of the second- and third-century papyri were judged to be less than professionally done, especially P9, P18, P24, P78, and P98, but also as many as 27 other papyri. And Cole concluded that all this is irrelevant, since the training of the scribe is no necessary indicator of the quality of his text.
  5. Page 27: “no orthodox doctrine or ethical practice of Christianity depends solely on any disputed wording.” I would word this a bit differently. We can definitely say that no cardinal doctrine depends on any disputed wording, but I think there are some places in which less central teachings—both in terms of orthodoxy and orthopraxy—are based on texts that are disputed. For example, whether exorcists casting out particularly pesky demons need to pray and fast depends on a variant in Mark 9.29, and the particulars of the role of women in the church may depend, in part, on 1 Cor 14.34–35 (a passage that, although found in all MSS, is disputed by some scholars).
  6. Page 34: “the original copy [sic] of a biblical book would most likely have been used to make countless new copies over a period of several centuries…” Blomberg cites the important study by George Houston on the longevity of papyrus documents, which Craig Evans exploits to the effect that the original documents would have perhaps lasted several centuries. I think that Evans may be arguing his case a bit too strongly, especially in light of patristic evidence to the contrary. We do have two or three ancient patristic statements to the effect that the autographs still existed into the second or third centuries, but they have generally been regarded as ahistorical comments without substance behind them. Nevertheless, an important point to consider is that these ancient writers demonstrate, from a very early period, a desire on the part of the ancient church to seek out the oldest MSS to establish the wording of the original. And Blomberg is quite right that the ancient scribes surely would have copied the autographs multiple times, thus disseminating direct copies spanning a period of more than one or two generations.
  7. Page 37: Gutenberg’s printing press is dated c. 1440; it should be dated c. 1454.
  8. Page 38: Erasmus is called a fifteenth-century Catholic reformer; sixteenth century is meant.
  9. Pages 16–17 contain what looks to be the most egregious error: “Although Ehrman doesn’t total all the numbers, Wallace does, and the result is that those 400,000 variants, if there are that many, are spread across more than 25,000 manuscripts in Greek or other ancient languages.” In the next paragraph he asserts: “This is an average of only 16 variants per manuscript… Nor are the variants spread evenly across a given text; instead, they tend to cluster in places where some kind of ambiguity has stimulated them. Paul Wegner estimates that only 6 percent of the New Testament and 10 percent of the Old Testament contain the vast majority of these clusters.” I think Blomberg means that there is an average of 16 unique variants per MS (see the quick arithmetic below). That would be essentially true, though we really should restrict the count to Greek MSS, since the translations have too many problems for us to discern at this stage whether a given wording is a true variant from the Greek or simply a looser translation. On his use of Wegner: I’m out of the country right now and can’t look at my copy of Wegner. But it is simply not true that only 6% of the NT contains “the vast majority of these clusters.” I’m not sure what Blomberg is trying to say here. Perhaps he meant that the major textual problems of the NT are found in only 6% of the text. That may well be the case, but even then the number seems too high.
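For what it’s worth, the 16-variant average mentioned in point 9 is nothing more than the simple division of the two totals Blomberg cites:

400,000 variants ÷ 25,000 manuscripts = 16 variants per manuscript (on average)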

These are, for the most part, rather niggling criticisms. Overall, this chapter is an excellent corrective to the extreme skepticism of Bart Ehrman and those who have followed in his train. It is well researched, clearly written, and deserves a wide reception among believers today, as does the book of which it is a part. One can hope that pastors and church leaders will wake up to the fact that we are losing the intellectual battle for the millennials, and that we have only ourselves to blame. We need to bring both spiritual grace and academic rigor to the table, and Blomberg is one of the evangelical gatekeepers leading the way.

Snoopy Seminar coming on Feb 22–23, 2013

On February 22 and 23, I will be conducting a “Snoopy Seminar” at the Hope Center in Plano, Texas (2001 W. Plano Parkway). This seminar is a fun, interactive, and challenging exercise about textual criticism. Enrollment is limited to 60 people. Intended audience: motivated laypeople, though we are not limiting it to them (seminary students may also come, for example).

Here’s the basic idea: On Friday night I will teach some of the basics of New Testament textual criticism. Then, I ask for 22 people to volunteer to be scribes. They go into a separate room and copy out a short text (in English), each with specific instructions designed to increase errors in the copying process and corrupt the text. The text goes through six generations of copying. Meanwhile, the rest of the people (the “textual critics”) are trying to reconstruct the genealogy of the transmission of the text (namely, which scribe copied from whom) and think through what kinds of skills and biases the scribes would have brought to their tasks.
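For readers who like to tinker, here is a minimal Python sketch of the kind of copying cascade the scribes produce. Everything in it (the sample text, the error types, and the error rates) is my own illustrative guess, not the seminar’s actual text or instructions:

```python
# A rough, hypothetical simulation of the copying exercise (not the seminar's
# actual materials). Each "scribe" copies an exemplar and occasionally
# introduces an error; the copying runs for six generations.
import random

random.seed(2013)  # fixed seed just to make the run repeatable

ORIGINAL = "it was a dark and stormy night and snoopy sat faithfully on his doghouse".split()

def copy_text(exemplar, error_rate=0.1):
    """Copy a list of words, introducing occasional scribal errors."""
    copy = []
    for word in exemplar:
        roll = random.random()
        if roll < error_rate / 3:
            continue                      # omission: the word is dropped
        elif roll < 2 * error_rate / 3:
            copy.append(word + word[-1])  # misspelling: final letter doubled
        elif roll < error_rate:
            copy.extend([word, word])     # dittography: the word written twice
        else:
            copy.append(word)             # faithful copy
    return copy

# Six generations of copying, two copies made from each exemplar.
generations = [[ORIGINAL]]
for _ in range(6):
    generations.append([copy_text(ms) for ms in generations[-1] for _ in range(2)])

# Pretend only the last generation survives, as in the seminar.
survivors = generations[-1]
print(f"{len(survivors)} surviving copies of a {len(ORIGINAL)}-word original")
for ms in survivors[:3]:
    print(" ".join(ms))
```

In the actual exercise, of course, human scribes follow written instructions rather than a random number generator, but the shape of the problem is the same: the early generations disappear, only late copies survive, and the genealogy has to be reconstructed from the variants.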

On Saturday morning, we will all get together and the textual critics get busy working on the remaining manuscripts that the scribes produced. Unfortunately, most of the earliest manuscripts have strangely disappeared overnight (including all first-generation copies). The textual critics do the best they can with the manuscripts they’ve got to work with.

They record all the variants, and there are always more variants than words in the original text. But unlike the situation in New Testament textual criticism, these variants are usually meaningful (the vast bulk of New Testament textual variants are not). The textual critics work in small groups for about three hours. They debate, wrestle with a variety of possibilities about corruption (and which manuscripts are more corrupt than others; all of them are corrupt to some degree), and try to determine the wording of the original “Gospel According to Snoopy.”

Then, all the groups get together and I function as secretary. I write down the major variants on a white board and list what the whole group thinks is the original wording in each place. When I get done posting the variants, the white board is a mess! No one is confident that they have reconstructed the text of Snoopy exactly. Then, a miracle happens: The original text of Snoopy is discovered and they can compare how they did. How close do they get? Well, I’ll leave that for the seminar. I’ve done this 70 times since 1979—in churches, seminaries, colleges, etc. It takes concentrated brain power, a desire to engage verbally with others, and a Sherlock Holmes mindset.

Once we’re finished with the exercise, I show the relevance to New Testament textual criticism. The Snoopy manuscripts and groups of manuscripts actually correspond to known New Testament manuscripts and groups. And this year, we are adding a packet of materials that has notes on some of the most important textual problems in the New Testament.

If you’re interested in joining us, please visit the website for more information or contact Dana Cooper at —and soon! It’s a great confidence-builder about scripture, suitable for high school students on up. We hope to do this a couple of times a year at the Hope Center, so if you miss out on this one there’s always another one coming down the pike.

Further Reading on Bible Translations

For further reading:

Bruce Metzger, The Bible in Translation: The Ancient and English Versions
Gordon Fee and Mark Strauss, How to Choose a Translation for All Its Worth
Ron Rhodes, The Complete Guide to Bible Translations
F. F. Bruce, History of the Bible in English
Donald Brake, A Visual History of the English Bible: The Tumultuous Tale of the World’s Bestselling Book
Donald Brake and Shelly Beach, A Visual History of the King James Bible: The Dramatic Story of the World’s Best-Known Translation

And for some decent translations to consider:
NIV 2011
The Essential Evangelical Parallel Bible