CHESTER BEATTY PAPYRI AT CSNTM!

Chester Beatty Library

Below you can find the press release from CSNTM regarding our most recent expedition.

17 September 2013

The Chester Beatty papyri, published in the 1930s and 1950s, are some of the oldest and most important biblical manuscripts known to exist. Housed at the Chester Beatty Library (CBL) in Dublin, they have attracted countless visitors every year. It is safe to say that the only Greek biblical manuscripts that might receive more visitors are Codex Sinaiticus and Codex Alexandrinus, both on display at the British Library.

The Center for the Study of New Testament Manuscripts (CSNTM) is pleased to announce that a six-person team, in a four-week expedition during July–August 2013, digitized all the Greek biblical papyri at the Chester Beatty Library. The CBL has granted CSNTM permission to post the images on CSNTM’s website, which will happen before the end of the year.

The New Testament papyri at the CBL include the oldest manuscript of Paul’s letters (dated c. AD 200), the oldest manuscript of Mark’s Gospel and portions of the other Gospels and Acts (third century), and the oldest manuscript of Revelation (third century). One or two of the Old Testament papyri are as old as the second century AD.

Using state-of-the-art digital equipment, CSNTM photographed each manuscript against white and black backgrounds. The result was stunning. Each image is over 120 megabytes. The photographs reveal some text that has not been seen before.

Besides the papyri, CSNTM also digitized all of the Greek New Testament manuscripts at the CBL as well as several others, including some early apocryphal texts. The total number of images came to more than 5100.

CSNTM is grateful to the CBL for the privilege of digitizing these priceless treasures. Their staff were extremely competent and a joy to work with. Kudos to Dr. Fionnuala Croke, Director of CBL, for such a superb staff! This kind of collaboration is needed both for the preservation of biblical manuscripts and their accessibility by scholars.


Wax Drippings and Favorite Passages

When the only access that students of the New Testament had to most images of manuscripts was through poor-quality microfilms, interpretation of the data was rather limited. The staff at the Institut für neutestamentliche Textforschung in Münster, which boasts about 90% of all NT MSS on microfilm, instructed student collators not to try to decipher the marginalia, because these were virtually impossible to read. Just the text, please. And even with the text, the students had to guess at many of the letters and words because of blurred images. Below is an illustration of the kind of images they had to work with (this is codex 2813, photographed by INTF in 1989).

Codex 2813 Microfilm Image

With digital photography, much more data can be seen and interpreted. This includes erasures, different colored ink (including red and gold, which are next to impossible to see in microfilm), smaller script, ornamentation, prickings (which help determine a MS’s scriptorium and age), etc. In addition, wax drippings are visible.

As innocuous as the wax drippings might seem, Henry Sanders, the editor of the editio princeps of Codex Washingtonianus, used them to show what pages were frequently on display. The reason, he argued, is that visitors would often read the pages with a candle, and less-than-careful lectors would inadvertently allow wax to drip from it onto the page.

Sanders’s comment, based on an examination of the actual MS, may have implications that go beyond codex W. With some caveats, it seems that wax drippings can be used to show which passages were favorites. If one were to examine the wax drippings seen in digital images of, say, a twelfth-century minuscule, he or she might be able to determine which passages were favorites from the twelfth century on. Of course, this kind of work would need to be done for a good number of manuscripts, because an individual MS might be rather idiosyncratic, much the way codex W seemed to be (in that the wax drippings there, according to Sanders, only showed which passages were put on display, not which passages were otherwise favorites of readers). Lectionaries would probably be the least significant for interpreting wax drippings, since they were regularly used in church services and the lector would of necessity be reading through the entire lectionary cycle, year after year. When wax candles, as opposed to oil lamps, were used to read these MSS would also need to be factored in. But MSS that were meant for study or personal use, or were otherwise not used much in public worship, could contain many secrets of bygone generations of Christians.

I will offer two illustrations, one hypothetical and the other actual. John 3.16 is a favorite verse of American evangelicals today. It has even shown up on placards held by a crazed football fan wearing a multi-color afro, who would stand up in the end zone after a team scored, making sure that the TV cameras would capture the image.

John 3.16 & multi-color ’do

The Gospel on the Simpsons

But was this text that well known and that well loved in ancient and medieval times? A look at digital images of MSS might reveal the answer. Of course, in order to make one’s case, a look at the entire MS’s images would be needed to see which pages had the most wax drippings. Another caveat: if the text on a given page was reworked, scraped, or had extensive marginalia, that might be the reason for extra wax drippings, produced in this case by the scribe him/herself.

An actual illustration can be seen in codex 61, also known as Codex Montfortianus. This is the MS that was produced by a scribe in Oxford named (F)roy in 1520, which included the comma Johanneum (the Trinitarian formula at 1 John 5.7) that made its way into Erasmus’s third edition of the NT (1522). Now housed at Trinity College, Dublin, it is reported to have almost naturally fallen open to 1 John 5 because of the frequent consulting of this passage by researchers over the years. The MS is no longer available for direct consultation, but the library has produced some adequate digital images of it. And the page which contains the comma has far more wax drippings than any other. It is also significantly dirtier than any other page, due to constant handling.

Codex 61: Two Pages before 1 John 5

Codex 61 at 1 John 5

The Comma Johanneum in Codex 61

At the very least, examining the wax drippings in digital images of continuous-text MSS, century by century and production location by production location (when known), could produce some interesting results. As these begin to be examined, certain guidelines should emerge on how to interpret the data. The necessary caveats may temper otherwise robust claims, but they should not keep students from examining the data.

The Number of Textual Variants: An Evangelical Miscalculation

In the Baker Encyclopedia of Christian Apologetics, by Norm Geisler (Grand Rapids: Baker, 1998; p. 532), there is a comment about the number of textual variants among New Testament manuscripts:

“Some have estimated there are about 200,000 of them. First of all, these are not ‘errors’ but variant readings, the vast majority of which are strictly grammatical. Second, these readings are spread throughout more than 5300 manuscripts, so that a variant spelling of one letter of one word in one verse in 2000 manuscripts is counted as 2000 ‘errors.'”

There are several problems with this paragraph, one of which is this: to say that variant readings are not errors is an odd way of putting things. If the primary goal of NT textual criticism is to recover the wording of the autographa (i.e., the texts as they left the apostles’ hands), then any deviation from that wording is, indeed, an error. It may well be a rather minor error (as the vast majority of them are), in fact one so trivial that it cannot even be translated, but it is an error nevertheless. The author, however, is most likely equating error with a reading that would render the Bible errant and fallible. It is quite true that (virtually) no viable variants are major threats to inerrancy; the major problems that the doctrine of inerrancy faces are essentially never found in textually disputed passages in which one reading creates the problem and another erases it.

The larger issue, however, is how the number of variants was arrived at. Geisler got his information (directly or indirectly) from Neil R. Lightfoot’s How We Got the Bible (Grand Rapids: Baker, 1963), a book now fifty years old. Lightfoot says (53–54):

“From one point of view it may be said that there are 200,000 scribal errors in the manuscripts, but it is wholly misleading and untrue to say that there are 200,000 errors in the text of the New Testament. This large number is gained by counting all the variations in all of the manuscripts (about 4,500). This means that if, for example, one word is misspelled in 4,000 different manuscripts, it amounts to 4,000 ‘errors.’ Actually in a case of this kind only one slight error has been made and it has been copied 4,000 times. But this is the procedure which is followed in arriving at the large number of 200,000 ‘errors.'”

In other words, Lightfoot was claiming that textual variants are counted by the number of manuscripts that support such variants, rather than by the wording of the variants. His method, in effect, was to multiply each wording error by the number of manuscripts containing it. This book has been widely influential in evangelical circles; I believe over a million copies of it have been sold. And this particular definition of textual variants has found its way into countless apologetic works.

The problem is, the definition is wrong. Terribly wrong. A textual variant is simply any difference from a standard text (e.g., a printed text, a particular manuscript, etc.) that involves spelling, word order, omission, addition, substitution, or a total rewrite of the text. No textual critic defines a textual variant the way that Lightfoot and those who have followed him have done. Yet, the number of textual variants comes from textual critics. Shouldn’t they be the ones to define what this means since they’re the ones doing the counting?

Let me demonstrate how Lightfoot’s definition is way off. Today we know of more than 5600 Greek NT manuscripts. Among these, we know of about 2000–3000 Gospels manuscripts, 800 Pauline manuscripts, 700 manuscripts of Acts and the general letters, and about 325 manuscripts of Revelation. These numbers do not include the lectionaries, over 2000 of them, that are mostly of the Gospels. At the same time, not all the manuscripts are complete copies. The earlier manuscripts are fragmentary, sometimes covering only a few verses. The later manuscripts, however, generally include at least all four Gospels or Acts and the general letters or Paul’s letters or Revelation. But an average estimate is that for any given textual problem (more in the Gospels, less elsewhere), there are a thousand Greek manuscripts (this assumes that less than 20% of all the Greek manuscripts “read” in any given passage, probably a conservative estimate).

Putting all this together, we can assume an average of 1000 Greek manuscripts being involved in any textual problem. Now, assume that we start with the modern critical text of the Greek New Testament (the Nestle-Aland28). Most today would say that that text is based largely on a minority of manuscripts that constitute no more than 20% (a generous estimate) of all manuscripts. So, on average, if there are 1000 manuscripts that have a particular verse, the Nestle-Aland text is supported by 200 of them. This would mean that for every textual problem, the variant(s) is/are found in an average of 800 manuscripts. But, in reality, the wording of the Nestle-Aland text is often found in the majority of manuscripts. So, we need a more precise way to define things. That has been provided for us in The Greek New Testament according to the Majority Text by Hodges and Farstad. They listed in the footnotes all the places where the majority of manuscripts disagreed with the Nestle-Aland text. The total came to 6577.

OK, so now we have enough data to make some general estimates. Even if we assumed that these 6577 places were the only textual problems in the New Testament (a demonstrably false assumption, by the way), the definition of Lightfoot could be shown to be palpably false. 6577 x 800 = 5,261,600. That’s more than five million, just in case you didn’t notice all the commas. Based on Lightfoot’s definition of textual variants, this is how many we would actually have, conservatively estimated. Obviously, that’s a far cry from 200,000!

Or, to put this another way: this errant definition requires that there be no more than about 250 textual problems in the whole New Testament (250 textual variants x 800 manuscripts that disagree with the printed text = 200,000). (It should be noted that, for simplicity’s sake, I am counting a textual problem as having only one variant from the base text, even though this is frequently not the case). If that is the case, how can the United Bible Societies’ Greek New Testament list over 1400 textual problems? And how can the Nestle-Aland text list over 10,000?
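The back-of-the-envelope arithmetic above can be checked in a few lines of Python. This is a toy sketch: the figures are the rough averages used in this post (800 manuscripts per textual problem, 6577 majority-text disagreements), not measured data.

```python
# Rough averages used in the discussion above (this post's own estimates).
avg_mss_per_problem = 800      # MSS disagreeing with the printed base text
majority_text_diffs = 6577     # Hodges-Farstad disagreements with Nestle-Aland

# Counting Lightfoot's way: each wording difference multiplied by the
# number of manuscripts that attest it.
lightfoot_total = majority_text_diffs * avg_mss_per_problem
print(lightfoot_total)         # 5261600, i.e. more than five million

# Working backward: how many textual problems would the 200,000 figure
# allow under that same definition?
implied_problems = 200_000 // avg_mss_per_problem
print(implied_problems)        # 250
```

Either direction of the calculation exposes the mismatch: Lightfoot’s definition applied to known data yields millions of “variants,” while the traditional 200,000 figure would imply only about 250 textual problems in the whole New Testament.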

And again, this five million is not even close to the actual number. I took a very conservative approach by looking only at the differences from the majority of manuscripts. But if one took as his base text Codex Bezae for the Gospels and Acts and Codex Claromontanus for the letters, the number of variants (counted Lightfoot’s way) from these two would be astronomical. My guess is that it would be well over 20 million. Or if one started with Codex Sinaiticus, the only complete New Testament written with capital (or uncial) letters, the numbers would probably exceed 30 million, largely because Sinaiticus spells words in some strange ways that are not shared by very many other manuscripts. You can see that the definition of a textual variant as a combination of wording differences times manuscripts is rather faulty. Counting this way results in tens of millions of textual variants, when the actual number is minuscule by comparison. And that is because we count only differences in wording, regardless of how many manuscripts attest to them.

All this is to say: a variant is simply the difference in wording found in a single manuscript or a group of manuscripts (either way, it’s still only one variant) that disagrees with a base text. Further, there aren’t only 200,000. That may have been the best estimate in 1963, when we knew of fewer manuscripts. But with the work done on Luke’s Gospel by the International Greek New Testament Project, Tommy Wasserman’s work on Jude, and Münster’s work on James and 1–2 Peter, the estimates today are closer to 400,000. Some even claim half a million. In short, as Bart Ehrman has so eloquently yet simply put it, there are more variants among the manuscripts than there are words in the NT.

Although this may leave some feeling uneasy, it is imperative that Christians and non-Christians be honest with the data. I would urge those who have used Lightfoot’s errant definition to abandon it. It’s demonstrably wrong, and citing it reveals a fundamental ignorance about textual criticism. And I would hope that the publishers of numerous apologetics books would get the data right. The last thing that Christians should be doing is to latch on to some spurious ‘fact’ in defense of the faith.

Postscript

I have recently been in correspondence with some apologists (including Geisler), and I am happy to report that they are revising their definition of what constitutes a textual variant. Two or three of them have appealed to their publishers to correct the data in later printings.