Hi! I’m a graduate student in philosophy. Thank you for visiting my website, which hosts my personal blog. If you would like professional information or to contact me directly, please visit my about page.
I’ve been playing FEZ for the first time recently, and came across this room where some NPCs are constructing a QR code on the wall:
This seemed very likely to be a puzzle. It’s almost a complete QR code (most of what’s missing is the top-left position symbol), and anything under the scaffolding can be reconstructed, so it stands some chance of scanning after a few fixups.
I imported the screenshot in GIMP. I combined it with another screenshot of the same scene to remove the characters, leaving only the scaffolding. I used the Color Exchange tool (Colors > Map > Color Exchange) to replace the lighter purple color in the background with white. I then used the Threshold tool (Colors > Threshold) to make anything else in the image black.
Then, in another layer, I created a grid precisely aligned with the blocks in the image to help me see and fill in the missing portions. GIMP has a tool for that (Filters > Render > Pattern > Grid). I replaced the partial blocks hidden by scaffolding with completed blocks. This result looked like this (the grid is at 25% opacity):
Unfortunately the result still wouldn’t scan, even when I added the third position symbol back to the top left. Too much seemed to be missing, at least for the apps I tried. But since this seemed to be a puzzle that the developers intended you to solve, I assumed there was something more here, so I broke out the QR code spec.
The first thing to understand about QR codes is that they’re relatively simple ways of encoding binary data (ones and zeros) using black (1) and white (0) blocks. Certain parts of the image are only used to align scanners: they don’t contain any data. Other parts contain format information: this tells a scanner what kind of QR code it is and how to decode it. I was hoping this QR code would contain valid format information, as otherwise I’d have to test all the possibilities. Here’s an illustration of the different parts of a QR code. Image credit: Wikipedia.
From the size (25x25), we know this is a V2 QR code, which doesn’t have version information. The first question to answer was whether the QR code still contained valid Format Information. This information is error corrected and duplicated within the image. It appears in the QR code in the sections highlighted in blue:
One of the Format Information bit strings seems complete: it combines the top-right portion with the bottom-left portion. I’m working with the assumption (that I might need to change later) that anything not in the top left quadrant is “finished” by the NPCs, meaning it’s trustworthy. Hopefully this means I won’t need to apply error correction to the Format Information.
Information in QR codes is masked (XORed with a known pattern) to break up large chunks of white or black that give scanners difficulty. The Format Information uses the static mask of “101010000010010”1. Wikipedia incorrectly says this mask is “101011001010101”.2 The data in the QR code will use a different mask, chosen dynamically by the encoder so as to minimize large white or black chunks.
Here is where I began to be surprised by how far I could go using only GIMP. I expected to be importing the data into Python relatively quickly, but being able to toggle masks on the QR code directly and observe the result visually turned out to be very handy.
Using the new “Exclusion” layer mode, you can selectively invert parts of the underlying image using a black and white layer. Take care to observe the correct bit order when creating the mask (MSB: for the Format Information, that means bottom to top, left to right), as seen here:
Applying the mask, we see the following:
Note that because of the way the GIMP Exclusion layer works, we have to use white as the “on” bit for the mask. Black would be more appropriate since that’s what the QR standard uses for an “on” bit.
According to the spec, the first two bits give the error correction level, and the third through fifth bits give the data mask information. That’s once again in MSB order, so bits 14 and 13, and bits 12 through 10, respectively. In this case, we have “11” for the error correction level, and “001” for the data mask. That tells us that we have error correction level “Q” (which we’re ignoring for now), and “i mod 2 = 0” for the mask (this is one of several mask options available to an encoder). This means that the mask is active (flipping the underlying data bits) on every even-numbered row, starting at 0 for the top row.
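The same computation can be sketched in Python. The 15-bit literal below is the masked format string as I transcribed it from the image (for level Q with mask 001 the spec fixes this value, but treat the literal as my assumption if your transcription differs):

```python
# Unmask the Format Information and extract the EC level (bits 14-13)
# and the data mask pattern (bits 12-10), both read MSB first.
FORMAT_MASK = 0b101010000010010           # static format mask (spec 8.9)
EC_LEVELS = {0b01: "L", 0b00: "M", 0b11: "Q", 0b10: "H"}

def parse_format(masked_bits):
    unmasked = masked_bits ^ FORMAT_MASK  # XOR undoes the masking
    ec = (unmasked >> 13) & 0b11
    mask = (unmasked >> 10) & 0b111
    return EC_LEVELS[ec], mask

print(parse_format(0b011000001101000))    # -> ('Q', 1)
```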
We can quickly create such a mask in GIMP by making a small portion of it (just two blocks, in this case), then expanding that with Filters > Map > Tile. We then trim out any portion that is part of a position symbol, timing symbol, alignment symbol, quiet zone, or Format Information (so that only the data will be masked). The spec (and other references) are pretty clear about what these areas are, so I’ll skip to the result here:
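The mask pattern itself takes only a couple of lines of Python to generate, which makes a nice sanity check against the GIMP layer (a sketch only; trimming out the function areas is left out):

```python
# Data mask pattern 001 flips every module (i, j) with i % 2 == 0.
# "X" marks a flipped module; the position, timing, and alignment symbols
# and the Format Information areas must still be cleared before applying it.
SIZE = 25  # a V2 QR code is 25x25 modules
mask_rows = ["X" * SIZE if i % 2 == 0 else "." * SIZE for i in range(SIZE)]
print("\n".join(mask_rows[:3]))
```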
We’re now ready to begin trying to read the raw data! The order of bits is detailed in section 8.7 of the spec. We start with the bottom right block, then go left, then go back and up, then left again, reading upwards in a step-wise pattern. When we reach the position symbol at the top we move to the 2 block wide column to the left, and this time step down. (Discovering that QR code data starts with the bottom right and that it ends with error correction data gave me hope that the QR code was decipherable.) The reading order for the intricate sections is given precisely in the spec. I’ve indicated the interesting parts below.
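The zig-zag order is easier to state in code than in prose. This sketch lists module coordinates (row, column) in read order for the simple case; a real decoder additionally skips any module that falls inside a function pattern, which I’ve left out:

```python
# Read order per section 8.7: two-module-wide columns taken right to left,
# alternating upward and downward, skipping the vertical timing column.
def read_order(size=25):
    coords = []
    col, upward = size - 1, True
    while col > 0:
        if col == 6:               # column 6 is the vertical timing pattern
            col -= 1
        rows = range(size - 1, -1, -1) if upward else range(size)
        for r in rows:
            coords.append((r, col))      # right module of the pair first
            coords.append((r, col - 1))
        upward = not upward
        col -= 2
    return coords

print(read_order()[:4])  # -> [(24, 24), (24, 23), (23, 24), (23, 23)]
```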
Section 8.4 of the spec indicates that the data stream is prefixed by a header, where the first four bits are a “mode indicator”, followed by a length indicator. If you’re just encoding raw binary data in a QR code, you may as well just use 8 bit bytes, which is mode “0100”. But if you’re representing numeric or alphanumeric data, you can encode it more efficiently. The FEZ QR code begins with “0010”, which indicates an alphanumeric QR code.
This means that the next 9 bits (for a V2 QR code) are used to give the number of characters in the data. The next 9 bits of our QR code are “000010111”, which is the number 23 represented in binary.
For maximum efficiency, alphanumeric mode packs characters from a 45-character set into blocks of 2 characters per 11 bits (45^2 / 2^11 ≈ 98.9%, which is pretty good efficiency). To recover the characters, we divide the 11-bit integer by 45: the quotient is the first character, and the remainder is the second.3
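A minimal sketch of the pair decoding, using the spec’s 45-character table:

```python
# The QR alphanumeric character set: a character's value is its index here.
ALNUM = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ $%*+-./:"

def decode_pair(value):
    first, second = divmod(value, 45)  # quotient, then remainder
    return ALNUM[first] + ALNUM[second]

# "R" is value 27 and "T" is value 29, so "RT" encodes as 27*45 + 29 = 1244:
print(decode_pair(0b10011011100))      # -> RT
```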
Each number represents a character according to its position in the following string:
Note that we have an odd number of characters, so to save every bit possible the last character will be encoded as a single 6 bit value, instead of wasting another 5 bits.
With all this information in hand, we can transcribe the data from the QR code. Anything after the last piece of data will be part of the error correction bits, which we’re hoping we don’t need. Now that we know how much data there is (22 × 11 / 2 + 6 bits), we can also identify each portion of the QR code by its function:
Blue represents Format Information (as before). Green represents data. Red represents error correction. Anything purely black or white is a static part of the QR code.
In other words, the data says “RT RT LT RT LT LT LT RT”. This is a sequence of inputs you can make with the controller in FEZ. You usually receive a collectable for doing so.
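As a check on the whole chain, we can re-encode the recovered string in alphanumeric mode and confirm the bit stream starts with the header read off the wall: mode “0010” followed by the 9-bit length “000010111”. This is a sketch of the data segment only, with no error correction:

```python
ALNUM = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ $%*+-./:"

def encode_alnum(text):
    bits = "0010"                         # mode indicator: alphanumeric
    bits += format(len(text), "09b")      # 9-bit character count (V1-V9)
    for i in range(0, len(text) - 1, 2):  # each pair becomes 11 bits
        pair = ALNUM.index(text[i]) * 45 + ALNUM.index(text[i + 1])
        bits += format(pair, "011b")
    if len(text) % 2:                     # odd trailing character: 6 bits
        bits += format(ALNUM.index(text[-1]), "06b")
    return bits

stream = encode_alnum("RT RT LT RT LT LT LT RT")
print(stream[:4], stream[4:13])           # -> 0010 000010111
print(len(stream) - 13)                   # -> 127 data bits, as computed above
```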
I also wrote some Python code to do the decoding automatically, which you can find here: fez_qr.py.
With all the information we now have, we can actually generate the final, corrected version of the QR code. To do that, we’ll need a library that allows us to set the parameters manually (because an automatic QR code creation tool might choose a different mask or error correction level, giving a completely different result). I used the one here. Exporting the resulting SVG and layering the result on the original, we can see that it’s nearly a perfect match!
Having finished all this, I reopened FEZ, and entered the input sequence. And NOTHING HAPPENED. I tried multiple times to make sure I wasn’t screwing anything up.
I looked up the puzzle online, and apparently a completed version of the QR code is in a later stage of the game, and this unfinished one is just for “lore”. I find this explanation frustrating, because it clearly seems to have been designed so that an enterprising person could reconstruct the code themselves. Everything you need is there. Yet I have to admit… it doesn’t work. You can’t enter the code in this room, you have to enter it elsewhere.
I think this is a mistake on the game designers’ part. If a puzzle gives you everything you need to solve it, the solution should work. It’s rather frustrating to put that much effort into solving a puzzle only to find out that your effort wasn’t anticipated, and the puzzle is actually solved for you later in the game.
According to section 8.9 of the QR code specification. ISO/IEC 18004:2000. ↩
It’s pretty easy to understand how this works. In a base 10 system, you can “encode” 2 characters out of a set of 10 possible characters in a 2 digit number: “12” for example. Dividing by 10 and ignoring the remainder gets you back the first character (“1”), and dividing by 10 and taking the remainder gets you the second character (“2”). This system works exactly the same, it’s just that it’s a base 45 system. If we used base 45, you’d be able to just read off the two characters after printing the number. ↩
I’ve been watching a lot of early-2000s sci-fi TV recently, and I’ve noticed
that just about every show is plagued by a very specific problem.
Take Stargate SG-1 as a pretty typical example. According to an interview
with one of its producers1,
the first two seasons of the show were shot on 16mm film, and then until season
eight the show was shot on 35mm film. It was finished along with the effects
compositing in standard definition, of course. For most of its runtime it was a
full screen show as well. All this, of course, means that the first seven
seasons of the show aren’t that great looking by today’s standards.
By 2004, however, audiences and networks wanted to see shows in HD, and this
meant a different set of expectations and trade-offs. Stargate SG-1 moved to
HD digital acquisition for its eighth season. The results are, well…
The choice of this image is obviously selective. What’s happening here is that
the dynamic range of the scene (the difference in luminance between the darkest
part of the image and the brightest part) is too great to be captured by the
camera. The highlights are blown out. This scene is one of the worst looking
in the show, but as several other shots indicate, the same problems are obvious
in many other scenes too.
In addition to frequent blow-out issues, these HD shots suffer from quite a
bit of dynamic range compression and loss of detail in the highlights. This is
visible as a kind of “pastelization” of bright colors, making them “flatter”
looking than they ought to be, even though their saturation is at a normal or
even raised level.
While the increase in resolution is obviously appreciated, it also comes with
quite a bit of quality loss in terms of accurate, film-like colors. Hair and
faces tend to glow white or even bluish under any bright light, colors are harsh
and crushed, and many shots now have an ugly “plastic” look to them that further
exacerbates issues with the CGI and other effects shots. Even though many of
these shots wouldn’t look photorealistic without effects, the absence of
photorealistic images makes the effects shots feel worse on a subconscious
level, the plastic glow giving the show a queasy fever-dream veneer.
Primarily, these issues affect shots in uncontrolled outdoor lighting.
However, SG-1’s sister show, Stargate Atlantis, is frequently affected
by lighting issues even when shooting indoors.
I’m not sure what the issue here was. Perhaps it can be chalked up to different
crews and the producers having their primary focus on the last season of SG-1,
airing around the same time. However, I’d also point to the Atlantis crew’s
unwillingness to compromise on the shots. The SG-1 outtakes I’ve used as
examples are generally exceptions — what the show looks like when it looks
noticeably bad. The weird plastic-like appearance of HD remains on all the
shots, but they usually managed to avoid too many issues with the highlights.
I don’t have firm figures on how much the filming changed due to HD, but it’s
pretty indicative that I had to look through quite a few episodes of SG-1 before
I could find the kind of shot I was thinking about. Many episodes from the last
three seasons are shot entirely inside with carefully controlled lighting, on
some combination of the Stargate Command set and a few others. As I remember it,
this marked a clear shift from the kind of “new planet” episode openers many of
the earlier seasons depended on.
Even when shots in the later seasons of SG-1 are clearly outside, they tend to
be in places where the characters are in shadow, for example in the ubiquitous
forest scenes, inside a warehouse, or on the shadowed side of a building or
mountain. Anything to control the bright highlights!
In other scenes, heavy lighting was used to blend the scene together better and
give it a unified aesthetic, removing any harsh glows.
Quick trivia question: what do Attack of the Clones, Stargate SG-1, and
Battlestar Galactica all have in common? Answer: they were all shot on the
same camera, the Sony F900 HD series. This was an early professional HD camera
which was pretty readily available and affordable even for TV shows. It suffers
from a number of issues, including obviously the dynamic range issue, mentioned
above, but also the fact that anyone working with HD video this early was most
likely shooting in the same gamma (e.g. Rec. 709) as the final product, instead
of shooting with a log curve as
more modern productions would. Even if the image sensor in the F900 had been
fantastic, shooting this way would probably have crushed quite a bit of the
original detail out of the image, in a way that couldn’t then be recovered in post.
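To make the gamma-versus-log point concrete, here’s a toy numerical sketch
(not any real camera’s pipeline; the log curve here is invented purely for
illustration). Encoding linear scene light straight through the Rec. 709
transfer function clips everything above diffuse white to the same code
value, while a log curve keeps bright highlights distinct:

```python
import math

def rec709_oetf(x):
    # Rec. 709 transfer function; scene values above 1.0 simply clip
    x = min(x, 1.0)
    return 4.5 * x if x < 0.018 else 1.099 * x ** 0.45 - 0.099

def toy_log(x, scale=4.0, peak=8.0):
    # made-up log curve: compresses highlights instead of clipping them
    return math.log2(1 + scale * x) / math.log2(1 + scale * peak)

# A highlight at 4x diffuse white (e.g. a bright sky):
print(rec709_oetf(4.0) == rec709_oetf(1.0))  # True: the detail is gone
print(toy_log(4.0) < toy_log(8.0))           # True: the detail survives
```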
To be clear, the F900 wasn’t unique in having these problems; they were largely
a side-effect of being an early adopter of HD acquisition. That said, while many
episodes of SG-1 took care to avoid putting the worst aspects of HD video on
clear display, Attack of the Clones and Galactica didn’t work with these
limitations, and the results are quite frequently terrible.
The dated CGI of Attack is often credited with giving the film its plastic
look, and it’s certainly a contributing factor given the number of shots that
are almost entirely greenscreened, but as you can see from examining these
shots closely, the lighting issues at play with this early HD camera are
another major culprit.
But now of course we come to Battlestar Galactica, which more than any other
show I can think of illustrates the pitfalls of shooting with HD video cameras.
Frankly, Battlestar Galactica is so bad it seems deliberate. Almost every
single shot in the show looks like this; it’s borderline unwatchable.
Unfortunately, it does in fact seem to be the case that some of this was
intended, as part of the creators playing with what to them was a new format.
An article on the use of HD2
in Star Trek Enterprise and Battlestar Galactica describes the difficulties
that the creators of Galactica faced when they tried to achieve the right
aesthetic for the show, which was “grungy”, “gritty”, with pushed grain. To
try to get this result on a TV shooting schedule, DP Steve McNutt developed
a realtime color management process for adjusting color on set while using
little, if any, post-production color correction.
As part of an attempt to replicate a “documentary-style”, shot-on-film look,
McNutt “[pushed] for video noise to emulate grain”, sometimes as high as +18dB.
The visual effects supervisor, Gary Hutzel, says that the DP “pushed gamma,
crushing the blacks, clipping the highlights”. That’s extremely telling. It’s
not worthwhile to try to parcel out blame, in hindsight, for a show shot almost
20 years ago; what’s important, however, is how the specific limitations of
shooting in HD forced creators to experiment with novel techniques to get the
specific looks they wanted to achieve, with expectations often keyed to the
behavior and appearance of the film stock they were used to working with.
Even with the F900, Battlestar Galactica didn’t have to look as bad as it did,
but it’s easy to see how seeking the intensity and immediacy of a grainy film
stock led too many directors astray.
You might now be thinking of the Battlestar Galactica miniseries that aired
the year before, and wondering why it looked so good. Indeed, it does
look fantastic. It was shot on 35mm film.
Now you might recall that the Battlestar Galactica miniseries is available on
Blu-ray. And it’s not upscaled — it looks great. So if everyone was
already shooting on 35mm film before HD video took hold, why not just
continue doing so when the networks wanted HD?
It turns out that most producers would have liked to stay on film, but
were forced to change due to budgetary demands. For example, SG-1 producer
John Lenic said (speaking about follow-up show Stargate Universe):1
Film still is from a look perspective and in my opinion, the best looking
format to shoot on. … The only reason that we didn’t go film is financial.
Film still is roughly $18,000 more per episode than digital as you have all
the developing and transferring of the film to do after it is shot.
This transfer cost was something that producers were willing to bear when it
seemed to be necessary, but less so when HD promised a sharper image for less
money. In particular, continuing to use film when shooting for HD would have
meant having to do the transfers in HD as well, along with the same finishing
costs required for HD post-production, including the effects shots.
Additionally, the “cleaner” image of HD video, lacking in rough grain from film
stock, made effects compositing a bit easier to do in HD.
With Battlestar Galactica as well, director Michael Rymer initially opposed2
shooting with HD cameras. Here too, the reasons were financial, as it wouldn’t
have been possible to produce the show at all if it were shot on film. In the
same article, Rymer indicates that he noticed many of the issues that I have
pointed out. In
particular, he says “the video was picking up the fluorescent bars of various
consoles on our [spaceship interior] set”, and calls daylight exteriors “less
than satisfactory” and “the worst environment for HD”. That said, Rymer did
eventually come around to the look of the show on HD, saying that the sensor
noise “approximated film grain nicely” and that he could ultimately achieve the
desired aesthetic for the show. On this point I have to disagree, but again,
we’re looking back with 20 years of hindsight on how badly early HD footage
and CGI work has aged.
As it turns out, a ton of
productions were shot with the F900 camera, and cameras like it, and the effect
on TV production quality was fairly devastating. This isn’t a critique of
shooting digitally in general, of course; the technology has vastly improved
over the course of the last decade. Some shows, like The Big Bang Theory,
are so reliant on internal sets and carefully controlled lighting that the
limitations of shooting HD don’t really show. Others, like The Office, make
use of the raw harsh lighting to their advantage, creating an alienating and
sterile environment that pushes the presence of the camera in your face,
achieving a documentary tone that in my opinion Galactica never did.
Once you decide to go with digital acquisition and approximate the final color
grade in-camera, there’s basically no hope of an improved release (for home
video or otherwise) years down the road. On the other hand, it’s worth pointing
out that early seasons of SG-1, because they were shot on film, could be
released in HD one day. The difficulty of course is to find someone willing to
put up the money to have the original negatives scanned, and possibly
recomposite the effects in HD. For very popular shows, like Star Trek: TNG,
this has actually been done, but there’s little hope of the same for less
popular shows like Stargate.
Here too, then, there are tradeoffs. While we do have HD versions of the later
seasons of SG-1 (though only on streaming services, not Blu-ray), we have to
deal with the permanent damage to the image quality done by the acquisition
method. On the other hand, the earlier seasons have the potential to see
beautiful high definition scans, but this will likely never happen because it’s
simply too expensive.
I recently read the book Jesus, Interrupted by Bart Ehrman. It’s a very nice book. However, he has the following very strange thing to say about the method historians follow: “If historians can only establish what probably happened, and miracles by their definition are the least probable occurrences, then more or less by definition, historians cannot establish that miracles have ever probably happened.”
This is the sort of statement that causes me to scratch my head and re-read several times because I get the strong feeling that it has to be wrong. Ehrman claims that historians must accept methodological naturalism. In other words, a historian can never conclude on the basis of their evidence that a supernatural event occurred. They must always conclude (whatever their personal convictions) whatever is most likely, and what is most likely by definition cannot be a miracle.
While this requires some unpacking already, it seems to me to be based on seriously mistaken notions of both supernatural events and likelihood itself. Before getting into that I think it’s worth considering the implications of such a view.
If I read Ehrman correctly, should he miss the second coming, even extensive video evidence of Jesus Christ returning through the clouds to rule the nations would not convince him of its happening. A supernatural event of any kind is simply above the pay grade of historians to comment on. These events are purely the realm of faith. (One wonders if he would believe his own eyes.)
Not only is this odd, it’s already directly in conflict with Ehrman’s own claims about other supernatural events. For example, this excerpt I’m commenting on is from a book which argues “if the findings of historical criticism are right, then some kinds of theological claims are certainly to be judged as inadequate and wrong-headed. It would be impossible, I should think, to argue that the Bible is a unified whole, inerrant in all its parts, inspired by God in every way.”
Let’s consider what the evangelical claim that the Bible was verbally inspired by God entails, counting the instances of miraculous activity. Conservatives claim that each word of the Bible was inspired by God in the original autographs (1), the books were sufficiently preserved by scribal copying over time (2), and the correct books were canonized by the winning side in the theological battles of the early church (3). Ehrman is on the record as thinking these claims are untenable in the light of the historical evidence (e.g. in light of contradictions in the text), and in fact has dedicated this very book to proving that point. How is this possible if all claims of miracles are the proper domain of faith and not history?
In the book, Ehrman also pokes some well-deserved fun at the fundamentalist KJV-only movement. But what this sect claims, on any intelligible version of their views, is that God inspired the work of the translators. If this is a faith-claim, as it certainly must be, what are we to make of Ehrman and other historians hastening to point out that the translators of the KJV based their work on inadequate sources and made many mistakes? Why should that matter?
To try to clarify this, let me quote extensively what Ehrman has to say about miracles.
Miracles, by our very definition of the term, are virtually impossible events. Some people would say they are literally impossible, as violations of natural law: a person can’t walk on water any more than an iron bar can float on it. Other people would be a bit more accurate and say that there aren’t actually any laws in nature, written down somewhere, that can never be broken; but nature does work in highly predictable ways. That is what makes science possible. We would call a miracle an event that violates the way nature always, or almost always, works so as to make the event virtually, if not actually, impossible. The chances of a miracle occurring are infinitesimal. If that were not the case it would not be a miracle, just something weird that happened.
I think this quote points up the problem quite nicely. Ehrman is begging the question by slipping in the assumption of naturalism into what ought to be a defense of methodological naturalism. Ehrman’s definition of a miracle is quite acceptable, actually. I’ll quote it again: “[a miracle is] an event that violates the way nature always, or almost always, works…” His conclusion from this is that “The chances of a miracle occurring are infinitesimal.”
This simply does not follow. One cannot simply assume that “the way nature … works” is the same as what actually happens. Miracles are by definition supernatural occurrences! If you assume that the story of what happened at some point in the past can be filled out entirely by causal or quasi-causal chains in the natural order of the universe, you’ve simply assumed that miracles do not happen. That, of course, assumes far more than Ehrman is entitled to.
What Ehrman can safely conclude is that the chances of a miracle occurring naturally are infinitesimal. To get from there to the claim that historians are justified in rejecting these explanations in all cases requires much more work. I can think of several ways to do so.
For example, maybe Ehrman wants to position the historian as a sort of scientist, who can’t consider supernatural occurrences precisely because they’re supernatural — they’re in the wrong domain. On this view the job of the historian is to reconstruct the best possible naturalistic explanation for what happened in the past.
One problem with this interpretation is simply that Ehrman doesn’t seem to be saying this. He repeatedly asserts that the problem with miracles for the historian is their unlikelihood. He writes:
Historians can only establish what probably happened in the past. They cannot show that a miracle, the least likely occurrence, is the most likely occurrence.
The bigger problem with this view is that it’s just a silly and arbitrary restriction of the historian’s task. Surely the job of the historian is not like that of the scientist at all! The scientist must construct the most plausible account of the behavior of the natural world. The historian, on the other hand, is tasked with giving the most plausible account of what actually happened in the past.1 If the most plausible account involves some supernatural activity (recall the example I gave of video evidence), so be it.
I think the most likely explanation for Ehrman’s reticence to consider supernatural explanations is that they’re an enormous unknown. In order to get from the claim that miracles are extremely unlikely to happen naturally to the claim that they’re extremely unlikely, one has to have a prior for how likely non-natural events are. In other words, do miracles happen? It is not impossible to imagine a world in which miracles happen all the time; indeed, some evangelicals believe we live in such a world. Accounts of miraculous healings and near-death trips to heaven appear regularly in media targeted at conservative Christians. Ehrman (as an agnostic) does not take these accounts seriously. He (and I) don’t believe we live in a world where miracles are common. That said, there’s an enormous difference between the claim that miracles are (at least) uncommon, and the claim that a miracle is always “the least likely occurrence”.
This leaves a vast gray area. For Ehrman, we have no convincing evidence of miracles that would lead him to set a high enough prior for them to figure in many historical explanations. On the other hand, he is unwilling to take the materialistic stance that miracles are impossible and always ruled out as possible accounts of what really happened. Thus he tries to bracket off these concerns as completely as possible from the historian’s task.
If this is really the best reading of what Ehrman wants to do, it’s hard to have methodological objections. It would be an enormous issue for historians if at every turn they had to speculate endlessly about the appropriate prior likelihood of supernatural intervention in the normal course of the universe. Given that miracles are at least not every-day occurrences, historians seem to be justified in attempting to find straightforward scientific accounts of past events. I have no objection to this.
What I continue to object to is the specific defense offered by Ehrman. He writes about the resurrection of Jesus, as a purported historical miracle:
The resurrection is not least likely because of any anti-Christian bias. It is the least likely because people do not come back to life, never to die again, after they are well and truly dead.
In other words, he follows up his claim that his assignment of low probability to the resurrection is not the result of anti-Christian bias with what’s probably the only blatant expression of anti-Christian bias in the whole book! If it is simply a fact that people do not come back to life, then Jesus did not come back to life.2 No wonder the resurrection has such a low probability in his estimation!
What Ehrman presumably means is that under some unclear historian-centric notion of probability, the probability of the resurrection of Jesus is extraordinarily low. This is a notion of probability that makes naturalistic assumptions, not because the probability of supernatural events is low (which would beg the question), but because it is appropriate for historians to bracket off these types of considerations. In other words, it is because the likelihood of a supernatural explanation’s being true is indeterminable that historians must steadfastly refuse to speculate about them.
This passage remains remarkably unclear. If something like this is what Ehrman means, he would do well to say so directly. Meanwhile, those interested in more general answers to these questions must consider the matter more holistically. What one has to think about is quite simply the plausibility of two more or less complete descriptions of the entire world. Which makes more sense: the view that miracles sometimes occur and are the best explanations for some past events? Or that there have never been miracles and all past events are explained in a naturalistic fashion? This is a difficult question to answer — but it is not a question from which reason must be banished as a matter of faith.
After writing this, I subsequently read one of Ehrman’s other books, How Jesus Became God. In this book, written about five years after Jesus, Interrupted, Ehrman has a more nuanced take on the historian’s role. For example, Ehrman writes,
It is not appropriate for a historian to presuppose a perspective or worldview that is not generally held. “Historians” who try to explain the founding of the United States or the outcome of the First World War by invoking the visitation of Martians as a major factor of causality will not get a wide hearing from other historians—and will not, in fact, be considered to be engaging in serious historiography. Such a view presupposes notions that are not generally held—that there are advanced life-forms outside our experience, that some of them live on another planet within our solar system, that these other beings have sometimes visited the earth, and that their visitation is what determined the outcome of significant historical events.
This is a useful comment. Something about the historian’s task prevents them from invoking beings or phenomena that are not accepted already by a majority of other historians and scientists. In perhaps the best version of his view in the book, Ehrman continues:
The supernatural explanation, on the other hand, cannot be appealed to as a historical response because (1) historians have no access to the supernatural realm, and (2) it requires a set of theological beliefs that are not generally held by all historians doing this kind of investigation.
Here Ehrman drives the cleanest wedge between the question “what actually happened?” and the historian’s question, which here is seemingly “what is the most plausible historical-naturalistic reconstruction of what happened?” This lends some support to my earlier suggestion that Ehrman wants to bracket off supernatural concerns from history.
Unfortunately, Ehrman is not always so clear in this book. Later, he returns to making almost exactly the same claim that he made in Jesus, Interrupted:
But simply looking at the matter from a historical point of view, any of these views is more plausible than the claim that God raised Jesus physically from the dead. A resurrection would be a miracle and as such would defy all “probability.” Otherwise, it wouldn’t be a miracle. To say that an event that defies probability is more probable than something that is simply improbable is to fly in the face of anything that involves probability. Of course, it’s not likely that someone innocently moved the body, but there’s nothing inherently improbable about it.
Here Ehrman returns to the concern with probability and the strange claim that miracles are inherently the most improbable explanation. This suggests, contra the statements made earlier in the same chapter, that the historian is concerned with the question “what actually happened?” and is simply inferring to the most plausible explanation. As I argued above, anyone truly committed to answering this question cannot simply forswear any non-naturalistic explanations, because there’s simply no way to show a priori that the supernatural will never figure in the most reasonable account of historical events.
As philosophers of science have pointed out, there is some overlap between the fields in Biology. ↩
Paul noticed this with remarkable clarity. “Now if Christ is proclaimed as raised from the dead, how can some of you say there is no resurrection of the dead? If there is no resurrection of the dead, then Christ has not been raised; and if Christ has not been raised, then our proclamation has been in vain and your faith has been in vain.” (1 Cor 15:12-14 NRSV) The argument here is straightforward, and if Paul is right, then the assumption that people do not come back to life is certainly an anti-Christian one. Likewise, Paul would no doubt be surprised to hear that “Believers believe that all these things are true. But they do not believe them because of historical evidence.” Paul (and the Gospel authors) regularly offer such evidence, as Ehrman himself points out in the book. ↩
I’ve heard the claim repeated dozens of times that the reason Bernie Sanders
failed to win the 2016 Democratic Primary1 was because he wasn’t able to get
enough support from black voters. This has become such a truism among some
pundits that attempting to refute it smacks of a conspiracy theory, but I
hope to show convincingly in this article that it is actually false. It turns
out that the claim hangs on math that is actually fairly unintuitive, so much
so that even after doing the calculations for many states, I still found myself
unable to guess what level of support Bernie Sanders got among black voters
versus white ones in any particular state when looking at exit polling data.
This may sound absurd. After all, the exit polls can look straightforward at
first glance. For example, in South Carolina, the exit poll data contains
something like the following table:
Nothing could be simpler, right? 35% of the voters were white, 61% were black,
and of the white voters, 54% went for Clinton, 46% went for Sanders. Of the
black voters, 86% went for Clinton, 14% went for Sanders. Sanders has a gap
among white voters of 8%, and a gap of 72% among black voters. Repeat this
process on all 50 states, some of which are much closer than South Carolina,
and you can trivially generate your hot take for MSNBC from there.
What’s wrong with this analysis? Well, what I’m interested in when I ask the
question “what’s Sanders’ relative support among black and white voters?” is
whether, if you asked every single voter leaving a primary election in 2016,
a greater proportion of black voters would support Sanders than white voters,
or vice versa. Or to put it in simpler terms, if you know 14 random white
people, and 14 random black people, which group is going to have the greater
number of Sanders supporters? I hope that seems like the obvious thing to be
interested in to you too.
It will probably surprise you then to learn that the answer in South Carolina
is that two in fourteen black voters support Sanders, and only about one in
fourteen white voters support Sanders.
How can this be? It’s because of a very simple fact that the exit poll is
unintentionally obfuscating: if you know fourteen white people in South Carolina,
about one of them will support Sanders, one will support Clinton, and twelve
of them are Republicans! This is the kind of demographic fact that exit polls
don’t capture, because they’re not designed to. Polling results are divided and
reported separately for Democrats and Republicans, even though the elections
and exit polls are (usually) held simultaneously.
Fortunately, official election results and exit polls do provide enough data to
pretty reliably piece together what the actual political distribution looks
like. The actual distribution of South Carolina voters looks like this:
As this table suggests, South Carolina is extraordinarily bifurcated along
racial lines. White people in this state are extremely far-right, to such an
incredible extent that in a primary election where 75% of voters were white,
61% of Democratic voters were black. While rarely to such an extreme extent,
this is true of just about every state, and has a similar distorting effect on
the results of exit polls, and therefore a similar distortion on political
commentary that is based on those polls.
I went through the exit poll data, and put together a complete summary based on
every state I could get data for. Let me briefly explain how the math here is
done, using South Carolina as an example. Feel free to skip over this paragraph
entirely if you’re not interested
in this. The number of votes for each candidate is a matter of public record. I
used The Green Papers as my primary
source here. This site records that 740,881 votes were cast in the Republican
primary, 370,904 votes in the Democratic primary. Now we look at the exit poll
data. In South Carolina3,
in the Republican primary 96% of voters were white, 1% were black. So we
estimate that there were 7409 black Republican voters, 711,246 white Republican
voters. The same procedure for the Democrats reveals that 226,251 of their
voters were black, 129,816 were white. The exit poll data shows that 46% of
white Democrats voted for Sanders, while 14% of black Democrats did. So this
means there were about 31,675 black voters for Sanders, and 59,716 white voters.
The total number of white voters in the election was 841,0624 and the total
number of black voters was 233,660. So 13.6% of black voters went for Sanders,
and only 7.1% of white voters did.
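The arithmetic in this paragraph can be sketched in a few lines of Python (vote totals from The Green Papers, percentages from the exit polls; the variable names are my own):

```python
# South Carolina 2016 primaries: official vote totals.
rep_total, dem_total = 740_881, 370_904

# Exit-poll racial composition of each primary electorate.
rep_white, rep_black = 0.96 * rep_total, 0.01 * rep_total  # ~711,246 / ~7,409
dem_white, dem_black = 0.35 * dem_total, 0.61 * dem_total  # ~129,816 / ~226,251

# Exit-poll candidate splits among Democratic primary voters.
sanders_white = 0.46 * dem_white  # about 59,716
sanders_black = 0.14 * dem_black  # about 31,675

# Combine both primaries to get each racial group's full electorate.
white_total = rep_white + dem_white  # about 841,062
black_total = rep_black + dem_black  # about 233,660

print(f"Sanders among white voters: {sanders_white / white_total:.1%}")  # 7.1%
print(f"Sanders among black voters: {sanders_black / black_total:.1%}")  # 13.6%
```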
Obviously there will be some degree of error in the exit polls, and therefore
in these results. But it’s not that severe: for example, if the proportion of
Republican voters who were black was changed to 0% or 2% (from 1%), this would
make a difference of about half a percent in Sanders’ support among black
voters. Taking all states together should have the effect of evening out the
errors, although some systematic errors may remain. I’m not too bothered by
this, since the point of this article is to counter a false view that the
pundits take themselves to have learned from these very exit polls. If the
polls themselves are untrustworthy, then their conclusion is unsound too.
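To make the half-percent claim concrete, here is a quick sensitivity check using the same South Carolina numbers as above (a sketch; the variable names are mine):

```python
rep_total, dem_total = 740_881, 370_904
dem_black = 0.61 * dem_total      # black Democratic primary voters
sanders_black = 0.14 * dem_black  # of whom 14% voted for Sanders

# Vary the exit poll's share of Republican voters who were black.
shares = {}
for black_rep_share in (0.00, 0.01, 0.02):
    black_total = black_rep_share * rep_total + dem_black
    shares[black_rep_share] = sanders_black / black_total
    print(f"{black_rep_share:.0%} -> {shares[black_rep_share]:.1%}")
# Sanders' support among black voters moves from about 14.0% (at 0%)
# to about 13.1% (at 2%), i.e. roughly half a point either side of 13.6%.
```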
Anyway, on to the results. They’re based on the total vote of 21 states in
which all of the following were true: they held primaries in 2016, an exit
poll was taken in them by the major media organizations, and the exit poll
had a sufficient number of black respondents to draw conclusions about who
they supported. (The primary qualifier is important because in caucus states
like Iowa, the total popular vote count was not the official result reported by
the election.) These states are South Carolina, Alabama, Arkansas, Georgia,
Oklahoma, Tennessee, Texas, Virginia, Michigan, Mississippi, Florida, Illinois,
North Carolina, Ohio, Wisconsin, New York, Connecticut, Maryland, Pennsylvania,
and Indiana. I would have liked to include California, but they voted so late
in 2016 that the media didn’t take an exit poll. Here
is a spreadsheet with the math.
Here are the results:
In other words, my claim holds for all states in which there is data. On the
whole, black voters are at least as likely to support Sanders as white voters.
(The difference between the two is +0.8% for black voters, but I suspect that’s
within the margin of error of this kind of research.)
Now, a certain kind of pundit might be inclined to respond as follows: “If you
look just at the relative support for Clinton vs. Sanders among white voters,
you’ll see that Sanders edges out, and so it remains true that Sanders lost the
race because of poor support among minorities.”
I find this sort of analysis rather unhelpful. To put it simply, what we are
imagining is disenfranchising all minorities … in which case,
yes, Sanders would have won the 2016 Democratic primary, and then would have
gotten utterly crushed in the general election because the Democrats depend
on minority support for their basic viability as a party. It’s wrong in
another way too: the pundit (at least rhetorically) takes the point of view of
Sanders, and decides that “blame” needs to be parceled out to various
Democratic primary demographic groups according to the degree to which they
failed to support him. (Alternatively, a pundit might take a rhetorical
position opposing Sanders and blame him for failing to reach out to these
groups.) This isn’t really what’s happening in a primary. The reason that
moderate and conservative black voters play such an enormous role in the
Democratic primary is that almost two thirds of white voters are so far right
that they don’t vote in the Democratic party primary at all!
Now, you might imagine a less racialized (and simpler) country in which the
major political parties were basically in alignment with the range of political
views along a left-right spectrum. There would be a lot more black Republican
voters. The question of why we don’t live in something closer to that world is
an interesting one; FiveThirtyEight took this question on directly in a recent
article. Their conclusion was that “social pressure is what cements that relationship
between the black electorate and the Democratic party”. The word “cements” is
doing a lot of work here. Social pressure certainly can’t explain the majority of
the effect; the same article says that 85% of black respondents identified as
Democrats in an online poll where social pressure was not a factor.
It seems plausible to me that another significant factor is a response to the
racialized politics of the Republican party, as the extreme proportion of white
supporters in its ranks attests. If this is true, though, why wouldn’t the
party take the pragmatic approach by toning down its rhetoric to pull in the
many conservative minorities who are aligned with them on policy questions?
Certainly, part of the answer is that they haven’t needed to so far, and that
the rhetoric may serve to energize part of their white base, but what this
research may suggest is that it can actually be helpful to a political party
to have a large number of people consistently voting to nominate moderates in
the opposing party’s primary process.
The promise of Sanders all along, of course, was that the supposed left-right
spectrum is a lie. If people (and their candidates) do fall on a simple spectrum
like that, then you can trivially show that the Condorcet
winner will be a centrist. Even in a complicated two-party system like that of
the United States, a centrist is expected to be the strongest candidate the
majority of the time. (Obviously, the Electoral College throws a wrench into
this.) But Sanders, and Trump to some extent, represent a claim that the true
views of most voters are not well represented by the current two-party
system, and that in fact someone very far to the left (or right) on the current
spectrum might be more acceptable to the median voter than a centrist.
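The claim that a one-dimensional spectrum makes the centrist the Condorcet winner can be illustrated with a toy simulation (the voter distribution and candidate positions here are made up for the example):

```python
import random

# Voters sit on a left-right axis; each backs whichever candidate is nearer.
random.seed(0)
voters = [random.gauss(0.0, 1.0) for _ in range(10_001)]
voters.sort()
median_voter = voters[len(voters) // 2]

def pairwise_winner(a: float, b: float) -> float:
    """Return the candidate a majority of voters is closer to."""
    votes_for_a = sum(abs(v - a) < abs(v - b) for v in voters)
    return a if votes_for_a * 2 > len(voters) else b

# A candidate near the median beats a candidate far from it, head to head.
centrist, far_left = 0.1, -1.5
print(pairwise_winner(centrist, far_left))  # prints 0.1 (the centrist)
```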
How else to understand Sanders’ candidacy at all? So far, he has not shown
signs of being able to win a Democratic primary, suggesting (but not proving)
that he’s too far left for many Democrats. If this is true, then he’d be sure
to lose a general election that introduces an almost equal number of Republican
voters. However, surprisingly, he has performed at or near the top of recent
head-to-head polls against Donald Trump, compared with other Democrats. What
does that mean?
I suggest that any sufficient explanation must include multiple factors. One
very important reason why Sanders would stand a chance in a general election is
polarization. Most regular voters in this country are loyal to one party or the
other, and loath to switch parties based merely on the ideology of their
candidates. (Moreover, there are slightly more Democratic voters than
Republicans.) So if Sanders wins a Democratic primary, most of his support will
come from loyal Democrats who don’t necessarily approve of all his policies.
That said, it’s notable that Sanders has consistently performed at or near the
top of these polls. I suggest this means that there must be some truth to his
claim to represent those who do not find themselves cleanly on the left-right
American political spectrum.
It’s important to notice that these two explanatory factors pull in opposite
directions. On a strict party-loyalty hypothesis, it wouldn’t matter at all
who gets nominated. This seems to be mostly true (for the small number of
candidates who actually stand some chance of being nominated), but it’s not the
whole story. Sanders represents the possibility of pulling support beyond mere
party loyalty, and he’s succeeded to some extent at that, but perhaps not
enough to win a primary election.
In the final analysis, this shows exactly why the exit poll based criticism of
Sanders is misguided. Among Democrats, black voters are much less likely to
support Sanders than white voters. But this is largely because of partisan
demographics that Sanders can’t help: the Democratic party pulls in a number
of surprisingly conservative black voters, while the Republican party
presumably has a corresponding effect on many white voters who might be open to
Sanders’ policy aims, but are more at home in their party’s racial antagonism.
On the whole, Sanders’ problem is not with black voters; they support him at
equal or greater rates than do white voters. His problem is that his promise
of pulling voters from both parties and those currently unaligned has not yet
come to fruition. He has not been able to shift the majority of white voters
away from their Republican or independent allegiances. The hope for left wing
Sanders supporters must be that time and voter education will cause a
realignment, and that people like Sanders will begin to see support across the
political spectrum and from current non-voters. His high level of support from
young voters does suggest some promise. But if the American political spectrum
did accurately reflect the distribution of its voters, there would be little
hope for candidates like Sanders in the near future. America simply has too
many white people on the far right for that.
And now the 2020 primary election as well. This article focuses on the
2016 election specifically, because in this election Sanders faced only one
challenger, giving a clearer picture of where voters stood than in 2020, when
he faced a very broad field. ↩
This is neglecting third party voters. There are very few of them, and they
appear to be disproportionately white, so if they were included, they would
further reduce Sanders’ support among white voters. ↩
This is a short story that I wrote some time ago. It’s designed to illustrate some interesting properties of CSPRNGs (Cryptographically Secure Pseudo-Random Number Generators), which form the bedrock of modern encryption techniques. In particular, the story highlights the fact that a single rather short key is sufficient to generate all the random numbers anyone will ever need. You don’t need a continuous source of pure entropy.
Part 1: The Universal RNG
During the universe’s design stage, it was realized that making some events probabilistic from the point of view of human beings was a greatly desirable property. (Several small proto-universes failed shortly after the intelligent species that populated them was able to work out the deterministic laws behind every event.) For the universe that ultimately went into production, it was decided that making a great many events (including quantum fluctuations) chancy was the safest approach.
True Believers hold that all these chancy events are Really random. That is, they believe that whenever the universe needs a new random number, God uses their infinite power to create it in their mind ex nihilo, and there’s simply nothing more to be said in the way of explanation. Skeptics hold that not even God is capable of acts of creation of this kind, and that there must be some ultimately deterministic story about where these numbers come from.
As it turns out, both are wrong. God is perfectly capable of creating Really random numbers, but, though omnipotent, is far too lazy to continue doing this all the time. Perhaps God has other universes to tend to, or maybe Heaven needs random numbers for some secret purpose of its own. In any case, the fact is that God only ever bothered to generate 2^8 random bits, in the form of a single 256 bit key embedded into the universe’s core systems. Whenever any “chancy” event needs to happen, the random information is generated by the universe using a CSPRNG that (entirely by coincidence) is exactly equivalent to ChaCha201. So it turns out that the universe is fundamentally deterministic, just not in the way anyone expected.
A stubborn cohort of angels on the review board insisted that this was an inelegant solution to the problem of randomness. They tried to convince God to create an Oracle that would generate Really random numbers all on its own for the universe to use. The Almighty was undeterred, ultimately ruling that the system as designed was “good enough”. Many suspected that God’s real reason was that having another thing around capable of generating uncaused events was taken to be a slight to the Divine dignity. Lucifer led several others in resigning from the panel in protest. Following a disruptive sit-in at God’s office, he was cast like lightning from Heaven.
Unsurprisingly, God was right about the system being good enough. After all, the whole point of the system was to prevent humans from predicting events that were meant to be unpredictable without requiring the intervention of miracles.2 One complaint was that the total number of requests to the RNG over the universe’s lifetime might possibly exceed a value at which the RNG would begin to cycle. However, it was shown that collecting enough data to exploit (or even have a chance at detecting) the issue was physically impossible due to the energy constraints of the universe.
Of course, no steps needed to be taken to prevent direct attacks on the CSPRNG’s state, or key recovery, since these were coded into the OS of the universe itself, and life forms in the universe would have no access to them. So that’s the system that was ultimately put in place: every “random” event that ever happens in this universe can ultimately be traced back to its initial state and the single 256 bit key that makes it unique. While other designs based on entropy pools with estimators were considered, God worried about the universe blocking if at some point they forgot to update the pool with new random data. It was determined that the CSPRNG approach provided enough practical security with a single hard-wired key set at the beginning of time.
Part 2: God’s /dev/random
It is well known that God has a phone number.3 What is less commonly known is that when God designed the universe, they added a number of other interfaces intended to be helpful to human beings. The True Believers, for example, have it as an article of their faith that God is listening in all the time on /dev/null. But the most useful interface in God’s /dev is undoubtedly /dev/random.
The design team realized quite early on that humans themselves would need sources of randomness. Since every bit of random data is ultimately generated by God’s RNG anyway, it was decided that /dev/random should just return data straight from the RNG with no scrambling. Although this provided far more direct access to the RNG than its designers had initially anticipated, it was determined that its security margin was sufficiently high to allow for these queries.
Access to /dev/random was provided on Earth in a number of high and holy places. God’s interfaces are so fast that they are able to provide data to human devices at the full speed of any interface any humans have been able to construct so far. Of course, all these interfaces have to get their data ultimately from a single device built into the universal mainframe, but light travel time isn’t a problem since that was a constraint built into the universe’s physical laws, not something that applies to the machine the universe runs on.
For a long time humans were happy to take their devices to the nearest /dev to be filled up with random data. But Lucifer, displeased with the success of the system, tricked one of them into accepting data from an illicit, possibly backdoored source. God was pissed, and things generally went to hell for a while after that. While some authorities wanted to shut down the /dev system entirely, God ultimately decided that since the security of /dev/random hadn’t been compromised in any way, they would leave the system in place. In general, however, access to /dev for ordinary humans became more difficult after this, and many of the high and holy places fell under the control of nation states or were sold off to corporations for extraction of their natural resources.
It gradually came about that humans started to need random numbers more frequently, and even though you could get as many numbers as you needed from /dev/random, the latency caused by having to travel to an accessible holy place was considered unacceptable. Instead, it became common for priests to provide their own sources of random numbers. They would do this by traveling themselves and returning with 256 bits of random data, which they would then use as a key to seed a CSPRNG that was (incidentally) similar to God’s own. While the priests’ computers could provide random data only much more slowly than /dev/random, the latency was much better because people didn’t have to travel so far. This method managed to sustain most civilizations for centuries, resulting in a hierarchy where only the highest ranking bishops had direct access to /dev/random, and local priests would seed their own CSPRNGs from 256 bit keys provided from their RNGs instead of directly from God’s sources.
Cracks emerged. The role of priests in this scheme became widely regarded as suspect. After all, an untrustworthy priest could be providing random bits from a less-than-holy source, and if anyone on the chain between you and God’s RNG was a bad actor, they could potentially uncover your secrets. Protestants began to insist on making the journey to /dev themselves to get their own keys, and rolling-your-own PRNG functions quickly became a widespread practice. A number of televangelists were found to be using keys of unknown origin with less than 32 bits of entropy.
Cryptographers eventually invented solutions for collecting and estimating entropy, and most skeptics stopped caring about having any link back to the “supposedly” holy /dev/random. Instead, their operating systems gathered entropy from secular sources like ordinary “random” events. Of course any key they created was ultimately the result of deterministic processes that had their origin in God’s RNG, but practically speaking this had no effect on their security.
Perhaps most surprising of all was the group of Satanists who insisted on using random numbers generated from secret sources supposedly provided by Lucifer himself. They claim Lucifer has crafted mechanisms for generating Really random numbers, such that every number you get from the Devil’s /dev/random is entirely Real, not backed by a PRNG. Expert theologians and cryptographers currently believe this to be impossible. Even if Lucifer is using some kind of chancy mechanism to generate these numbers, the process must be ultimately deterministic and known to God.
Part 3: Unexpected Consequences
A number of crypto nerds needed to generate 2048 bit keys for use with asymmetric cryptosystems like RSA. Many of them suspected that God’s RNG might be a PRNG or otherwise distrusted it, and decided like the secularists to collect their own sources of entropy from the universe. They relied on only the most conservative estimates of entropy, collecting a full 2048 bits of entropy into their pools before turning that data via convoluted methods into their keys. The irony of this, of course, was that every event in all of space-time put together only contained the 256 bits of true randomness hard coded into it at the moment of creation. Their keys were no better than 2048 bits taken from God’s /dev/random, even no better than 2048 bits taken from a CSPRNG seeded by 256 bits taken from God’s /dev/random.
There is a strange beauty to the fact that all of this was fundamentally secure. No one, no matter how many bits they stored and analyzed from God’s RNG, had any hope of doing better than 50/50 at guessing the next bit that would come out, which someone else could securely use for any purpose. So long as every person in the chain from God’s RNG was trustworthy, each person could take a mere 256 bits from the person who came before to seed a CSPRNG, and every 256 bits that came out of the 10th person’s CSPRNG was just as cryptographically secure as the same amount of data taken from God’s own /dev/random. 256 bits of sufficiently unpredictable data really is enough for everyone, forever.4
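Here’s a minimal sketch of that chain in Python. ChaCha20 isn’t in the standard library, so SHAKE-256 stands in for the story’s CSPRNG; the key values and names are of course invented for the example:

```python
import hashlib

def csprng(seed: bytes, nbytes: int) -> bytes:
    """Expand a 32-byte (256-bit) seed into nbytes of pseudorandom output."""
    assert len(seed) == 32
    return hashlib.shake_256(seed).digest(nbytes)

gods_key = bytes(32)  # stand-in for the single key set at creation

# Each link draws 512 bits from the previous generator: the first 256 bits
# seed the next generator in the chain, and the rest can be handed out.
seed = gods_key
for generation in range(10):
    stream = csprng(seed, 64)
    seed, handout = stream[:32], stream[32:]

# To anyone without the original key, the 10th generator's output is as
# unpredictable as data drawn directly from the first.
print(csprng(seed, 16).hex())
```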
Unfortunately, it didn’t last forever. One of God’s interns introduced a use-after-free into the universe’s code, and a too-clever hacker who found their way into one of the remaining high and holy places managed to root the universal mainframe. In a matter of minutes, they had accidentally triggered a debugging function that had been left in the code, which led to a kernel panic. The universe went out like a light.
To be precise, God used ChaCha20 with what Daniel J. Bernstein calls “fast-key-erasure” here. The point of this isn’t to provide protection against backtracking (key recovery was assumed to be impossible by the design team); rather, it is an efficient and secure way of rekeying, which is required by the ChaCha20 cipher because of its smallish 64 bit counter. God briefly considered AES-256-CTR, but decided against it because of its small block size (128 bits), which makes it possible to distinguish from a random oracle with a sufficient number of requests. In theory fast-key-erasure might be enough to protect against this, even without rekeying with new randomness, but the security margin was deemed insufficient in light of available alternatives. ↩
Additionally, leaving open the possibility (from the human point of view) that the universe was non-deterministic was discovered to have psychological benefits. ↩
It’s 42, as suggested by the philosopher Majikthise in Douglas Adams’ Hitchhiker’s Guide to the Galaxy. Unfortunately, God did not put any audio interfaces in /dev. ↩
Based on my reading of Bernstein’s article here. ↩