
When a Mathematician Rethought the Synoptic Problem


Jeffrey Tripp received a doctorate in New Testament and Early Christianity from Loyola University Chicago and now teaches math at Rock Valley College. He often incorporates statistical methods into his biblical research, which focuses on the New Testament narratives and their reception. Here he tells the story of how he came to rethink his position on the Synoptic Problem:


By Jeffrey Tripp

I was raised, so to speak, on the Two-Document Hypothesis (2DH). My undergraduate, master’s, and doctoral courses (at three different universities) all presented the composition of the Synoptic gospels through the lens of the 2DH. In a lot of ways, my brain is still a 2DH brain at its base. Theoretically I’m open to any solution if it helps make sense of the data, but still, when I read studies questioning Q or Matthew’s and Luke’s independence, I have a knee-jerk reaction to think, “Well, but…” 

Since many of these studies are anecdotal, there’s ample room for “Well, but…”

As it turned out, my first article was a study questioning Q by undermining one of the pillars in the case for Q as a unified document. The article was a statistical analysis of the “argument from order” for Q, which claims that the sequence of the sayings in the double tradition has too much overlap (roughly 40%) to be coincidental. Matthew and Luke must have each used a document with the sayings in the same order, or so the argument goes. A fellow student once asked an important but unfortunately overlooked question: is 40%, like, a lot? I didn’t have an immediate answer, so I set out to find one.

Now, I began the study fully convinced that I would find that the 40% overlap was significant and that Q would be vindicated. It wasn’t. The overlap is insignificant, especially when the placement of material arguably influenced by Mark’s order is removed (e.g., the John the Baptist and temptation material). One has a fairly good chance of getting the same agreement in order (or better) by shuffling cards.
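To make the card-shuffling idea concrete, here is a toy Monte Carlo sketch. It is not the published analysis: the number of sayings, the 40% threshold, and the use of the longest common subsequence as the measure of agreement in order are all illustrative placeholders, and the outcome depends heavily on the metric and the data.

```python
import random

def lcs_length(a, b):
    """Longest common subsequence via dynamic programming: the largest
    number of items appearing in the same relative order in both lists."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if x == y
                                else max(dp[i][j + 1], dp[i + 1][j]))
    return dp[-1][-1]

def shuffle_test(n_sayings=60, observed=0.40, trials=2_000, seed=0):
    """Empirical p-value: how often does a random ordering of the
    sayings match or beat the observed agreement in order?"""
    rng = random.Random(seed)
    matthew = list(range(n_sayings))   # one gospel's sequence, held fixed
    hits = 0
    for _ in range(trials):
        luke = matthew[:]
        rng.shuffle(luke)              # "shuffling cards"
        if lcs_length(matthew, luke) / n_sayings >= observed:
            hits += 1
    return hits / trials               # modest trial count keeps the demo quick

print(shuffle_test())  # placeholder inputs, illustrative only
```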

Still, my brain was a 2DH brain. The data didn’t disprove Q; it only showed that this particular argument for a unified document is weak. Maybe Matthew or Luke did not care about maintaining the sequence of the sayings in Q, or “Q” could just denote several shorter documents—but otherwise the model still holds. And it still held in the unconscious way I approached studies on the Synoptic problem.

Over the last few years, I’ve become interested in arguments using the minor agreements (MAs) between Matthew and Luke against Mark. The MAs are used to question the independence of Matthew and Luke on the assumption that if Matthew and Luke edited Mark independently, then we should not see both of them change Mark in the same way—or at least, we should not see this very often. For those who find the MAs significant, there are simply too many for Matthew and Luke to be independent. Defenses of the 2DH against MA arguments often claim the similar changes are coincidental. Statistical hypothesis testing is designed for just these situations: to differentiate between the coincidental and the significant. 

So I set out to test the hypothesis that Matthew and Luke adapted Mark independently. I built on previous statistical analyses of the MAs, especially the work of Andris Abakuks, but in my case counting the alterations according to categories any first-century student would have known: transpositions, subtractions, additions, and substitutions. I then analyzed Matthew’s and Luke’s changes to Mark by category. As with previous statistical studies of the MAs, the overlap of choices was significant. So my data agreed with theirs, which, again, wasn’t my expectation. Matthew and Luke make the same editorial choices far more often than we would expect if they were truly independent.
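Schematically, the logic of such a test can be sketched as follows; the per-unit coding, the category labels, and the one-sided binomial test are simplifying assumptions for illustration, not the published analysis. Under independence, the chance that Matthew and Luke make the same category of change to a given Markan unit is fixed by their separate editing habits, so agreeing far more often than that baseline counts against independence.

```python
from collections import Counter
from scipy.stats import binomtest  # SciPy >= 1.7

# Hypothetical per-unit coding: for each Markan unit, the category of
# change each evangelist made (two aligned lists over the same units).
CATEGORIES = ("transposition", "subtraction", "addition",
              "substitution", "none")

def independence_test(matt, luke):
    """One-sided binomial test: do Matthew and Luke pick the same
    edit category more often than their marginal rates predict?"""
    n = len(matt)
    matt_counts = Counter(matt)
    luke_counts = Counter(luke)
    # Chance of agreement at any one unit if the two edited independently:
    # the sum over categories of the product of the marginal rates.
    p_same = sum((matt_counts[c] / n) * (luke_counts[c] / n)
                 for c in CATEGORIES)
    observed = sum(m == l for m, l in zip(matt, luke))
    return binomtest(observed, n, p_same, alternative="greater")
```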

But “not independent” doesn’t mean “directly dependent.” Similar editorial choices could be the result of mutually influential variables like similar ideologies (the culprit usually blamed), or more simply of similar educations. After all, Matthew and Luke both seem more comfortable with Greek than Mark is. Without getting too bogged down in the details, I tested again using conditional probabilities to see whether Matthew or Luke was an influential variable on the other text. The results surprised me.
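The first step of a conditional-probability check can be sketched like this (again a simplification, not the published method): compare one gospel’s editing behavior conditioned on the other’s choice against its unconditional behavior. Systematic gaps between the conditional and marginal distributions signal dependence; actually teasing out a direction of influence requires further modeling that this sketch does not attempt.

```python
from collections import Counter, defaultdict

def conditional_vs_marginal(target, condition):
    """P(target's edit category | condition's edit category) alongside
    target's marginal distribution, for aligned per-unit codings."""
    n = len(target)
    marginal = {c: k / n for c, k in Counter(target).items()}
    cond_counts = Counter(condition)
    joint = Counter(zip(condition, target))
    table = defaultdict(dict)
    for (c_cat, t_cat), k in joint.items():
        table[c_cat][t_cat] = k / cond_counts[c_cat]
    return marginal, dict(table)

# e.g. conditional_vs_marginal(matt, luke) asks how knowing Luke's
# choice shifts our expectation of Matthew's; the mirror call
# conditional_vs_marginal(luke, matt) asks the reverse.
```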

In short, when the results are clear (and they are not always so), Matthew is not an influential variable on Luke, but Luke is an influential variable on Matthew. That is an unlikely result on the 2DH, which assumes they adapted Mark independently. It is also unlikely on the Farrer model, which has Matthew as an influential variable on Luke, not vice versa. A holistic statistical analysis of the MAs coheres easily with Matthean Posteriority, but not with any of the other major solutions to the Synoptic problem.

This sort of holistic statistical result has a far better chance of pushing me away from the 2DH and toward Matthean Posteriority than any anecdotal or case-by-case study does.

Each of the major models used by Synoptic scholars, and even some models they overlook, has the potential to make sense of some passages. That is why the same passages appear over and over in textbooks or arguments for the 2DH or Griesbach or Farrer: in those individual cases, the 2DH or Griesbach or Farrer makes the best sense of the data. In other passages, an explanatory case (but really an interpretive case?) could be made using one of the other models. When the sample is that small, the data are simply unclear, indeterminate, literally insignificant in the sense that they provide no clear sign of the compositional history of the passage. In my mind, we have to look at all the data as a whole, or at least at as much of it as we can.

Of course, I’m still open to challenges. I’m open to any solution really: Augustine, Griesbach, Lukan Priority, the idea that all three are different drafts of the same gospel from the same author playing to different audiences, you name it. I try not to pre-judge. And part of me still hopes the 2DH will win the day and show that my findings are not as significant as I think they are, if only to satisfy my base 2DH brain. But for now, the evidence has me exploring Matthean Posteriority as the model that makes the best sense of the lexical data.


The Matthean Posteriority Hypothesis is due to be debated from 9:00 to 11:30 a.m. on Sunday, November 21, at SBL San Antonio. Participants include supporters of the Matthean Posteriority Hypothesis (Alan Garrow and Robert MacEwen), the Farrer Hypothesis (Mark Goodacre), and the Two Document Hypothesis (Robyn Walsh).

Written by
Guest Author

This post was written by a Logos guest author. Logos Bible Software helps pastors, scholars, and other Christians get more out of their Bible study.
