Adam Fontenot

How the pursuit of HD video ruined TV for half a decade

16 September 2020


I’ve been watching a lot of early-2000s sci-fi TV recently, and I’ve noticed that just about every show is plagued by a very specific problem.

Take Stargate SG-1 as a pretty typical example. According to an interview with one of its producers1, the first two seasons of the show were shot on 16mm film, and the show was shot on 35mm film from then until season eight. It was finished in standard definition, of course, along with the effects compositing. For most of its runtime it was a full screen show as well. All this means that the first seven seasons of the show aren’t that great looking by today’s standards.

An image from Stargate SG-1 Season 7 Episode 21, "Lost City"

By 2004, however, audiences and networks wanted to see shows in HD, and this meant a different set of expectations and trade-offs. Stargate SG-1 moved to HD digital acquisition for its eighth season. The results are, well…

An image from Stargate SG-1 Season 8 Episode 17, "Reckoning", showing blowout

The choice of this image is obviously selective. What’s happening here is that the dynamic range of the scene (the difference in luminance between the darkest part of the image and the brightest part) is too great to be captured by the camera. The highlights are blown out. This scene is one of the worst looking in the show, but as several other shots indicate, the same problems are obvious in many other scenes too.

An image from Stargate SG-1 Season 10 Episode 8, "Memento Mori"

An image from Stargate SG-1 Season 9 Episode 18, "Arthur's Mantle"

In addition to frequent blow-out issues, these HD shots suffer from quite a bit of dynamic range compression and loss of detail in the highlights. This is visible as a kind of “pastelization” of bright colors, making them “flatter” looking than they ought to be, even though their saturation is at a normal or even raised level.

While the increase in resolution is obviously appreciated, it also comes with quite a bit of quality loss in terms of accurate, film-like colors. Hair and faces tend to glow white or even bluish under any bright light, colors are harsh and crushed, and many shots now have an ugly “plastic” look to them that further exacerbates issues with the CGI and other effects shots. Even though many of these shots wouldn’t look photorealistic without effects, the absence of photorealistic images makes the effects shots feel worse on a subconscious level, the plastic glow giving the show a queasy fever-dream veneer.

Primarily, these issues affect shots in uncontrolled outdoor lighting. However, SG-1’s sister show, Stargate Atlantis, is frequently affected by lighting issues even when shooting indoors.

An image from Stargate Atlantis Season 3 Episode 2, "Misbegotten"

An image from Stargate Atlantis Season 3 Episode 12, "Echoes"

I’m not sure what the issue here was. Perhaps it can be chalked up to different crews and the producers having their primary focus on the last season of SG-1, airing around the same time. However, I’d also point to the Atlantis crew’s unwillingness to compromise on the shots. The SG-1 frames I’ve used as examples are generally exceptions — what the show looks like when it looks noticeably bad. The weird plastic-like appearance of HD remains on all the shots, but they usually managed to avoid too many issues with the highlights.

I don’t have firm figures on how much the filming changed due to HD, but it’s pretty indicative that I had to look through quite a few episodes of SG-1 before I could find the kind of shot I was thinking about. Many episodes from the last three seasons are shot entirely inside with carefully controlled lighting, on some combination of the Stargate Command set and a few others. As I remember it, this marked a clear shift from the kind of “new planet” episode openers many of the earlier seasons depended on.

Even when shots in the later seasons of SG-1 are clearly outside, they tend to be in places where the characters are in shadow, for example in the ubiquitous forest scenes, inside a warehouse, or on the shadowed side of a building or mountain. Anything to control the bright highlights! In other scenes, heavy lighting was used to blend the scene together better and give it a unified aesthetic, removing any harsh glows.

An image from Stargate SG-1 Season 8 Episode 16, "Reckoning Pt. 1"


Quick trivia question: what do Attack of the Clones, Stargate SG-1, and Battlestar Galactica all have in common? Answer: they were all shot on the same camera, the Sony F900 HD series. This was an early professional HD camera which was pretty readily available and affordable even for TV shows. It suffers from a number of issues, most obviously the dynamic range problem mentioned above, but also the fact that anyone working with HD video this early was most likely shooting in the same gamma (e.g. Rec. 709) as the final product, instead of shooting with a log curve as more modern productions would. Even if the image sensor in the F900 had been fantastic, shooting this way would probably have crushed quite a bit of the original detail out of the image, in a way that couldn’t then be recovered in post-production.
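To make the gamma point concrete, here is a toy sketch in Python (illustrative only, nothing from a real camera pipeline): it encodes the same range of linear scene luminance once with the standard Rec. 709 transfer curve, which clips everything above reference white, and once with a made-up log curve that squeezes those extra stops of highlight into the same output range.

    import numpy as np

    # Linear scene luminance, from black up to 4x "diffuse white".
    scene = np.linspace(0.0, 4.0, 9)

    def rec709_oetf(x):
        # The standard Rec. 709 transfer function; anything above 1.0 is simply clipped.
        x = np.clip(x, 0.0, 1.0)
        return np.where(x < 0.018, 4.5 * x, 1.099 * x ** 0.45 - 0.099)

    def toy_log(x):
        # A made-up log curve: maps 0..4 smoothly into 0..1, keeping highlight detail.
        return np.log1p(16.0 * x) / np.log1p(16.0 * 4.0)

    for lin, vid, log_v in zip(scene, rec709_oetf(scene), toy_log(scene)):
        print(f"scene {lin:4.2f} -> Rec. 709 {vid:4.2f}   log {log_v:4.2f}")

Every scene value brighter than reference white lands on the same Rec. 709 code value, which is exactly the blown-out look in the frames above; the log column keeps those levels distinct, so a colorist could still pull them back in the grade.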

To be clear, the F900 wasn’t unique in having these problems; they were largely a side effect of being an early adopter of HD acquisition. That said, while many episodes of SG-1 took care to avoid putting the worst aspects of HD video on clear display, Attack of the Clones and Galactica didn’t work around these limitations, and the results are quite frequently terrible.

An image from Attack of the Clones

An image from Attack of the Clones

An image from Attack of the Clones

The dated CGI of Attack is often blamed for the film’s plastic look, and it certainly contributes, given the number of shots that are almost entirely greenscreened. But as you can see from examining these shots closely, the lighting issues at play with this early HD camera are another significant factor.

But now of course we come to Battlestar Galactica, which more than any other show I can think of illustrates the pitfalls of shooting with HD video cameras.

An image from Battlestar Galactica Season 1 Episode 2

An image from Battlestar Galactica Season 1 Episode 2

Frankly, Battlestar Galactica is so bad it seems deliberate. Almost every single shot in the show looks like this; it’s borderline unwatchable.

An image from Battlestar Galactica Season 1 Episode 13

An image from Battlestar Galactica Season 1 Episode 12

Unfortunately, it does in fact seem to be the case that some of this was intended, as part of the creators playing with what to them was a new format. An article on the use of HD2 in Star Trek Enterprise and Battlestar Galactica describes the difficulties that the creators of Galactica faced when they tried to achieve the right aesthetic for the show, which was “grungy”, “gritty”, with pushed grain. To try to get this result on a TV shooting schedule, DP Steve McNutt developed a realtime color management process for adjusting color on set while using little, if any, post-production color correction.

As part of an attempt to replicate a “documentary-style”, shot-on-film look, McNutt “[pushed] for video noise to emulate grain”, sometimes as high as +18dB. The visual effects supervisor, Gary Hutzel, says that the DP “pushed gamma, crushing the blacks, clipping the highlights”. That’s extremely telling. It’s not worthwhile to try to parcel out blame, in hindsight, for a show shot almost 20 years ago; what’s important, however, is how the specific limitations of shooting in HD forced creators to experiment with novel techniques to try to get the specific looks they wanted, with expectations often keyed to the behavior and appearance of the film stock they were used to working with.

Even with the F900, Battlestar Galactica didn’t have to look as bad as it did, but it’s easy to see how seeking the intensity and immediacy of a grainy film stock led too many directors astray.

Right now you might be thinking of the Battlestar Galactica miniseries that aired the year before, and wondering why it looked so good. Indeed, it does look fantastic. It was shot on 35mm film.

An image from Battlestar Galactica Miniseries Pt. 1

An image from Battlestar Galactica Miniseries Pt. 1

An image from Battlestar Galactica Miniseries Pt. 1

An image from Battlestar Galactica Miniseries Pt. 1

Now you might recall that the Battlestar Galactica miniseries is available on Blu-ray. And it’s not upscaled — it looks great. So if everyone was already shooting on 35mm film before HD video took hold, why not just continue doing so when the networks wanted HD?

It turns out that most producers would have liked to stay on film, but were forced to change due to budgetary demands. For example, SG-1 producer John Lenic said (speaking about follow-up show Stargate Universe):1

Film still is from a look perspective and in my opinion, the best looking format to shoot on. … The only reason that we didn’t go film is financial. Film still is roughly $18,000 more per episode than digital as you have all the developing and transferring of the film to do after it is shot.

This transfer cost was something that producers were willing to bear when it seemed to be necessary, but less so when HD promised a sharper image for less money. In particular, continuing to use film when shooting for HD would have meant having to do the transfers in HD as well, along with the same finishing costs required for HD post-production, including the effects shots. Additionally, the “cleaner” image of HD video, lacking in rough grain from film stock, made effects compositing a bit easier to do in HD.

With Battlestar Galactica as well, director Michael Rymer initially opposed2 shooting with HD cameras. Here too, the reasons were financial, as it wouldn’t have been possible to produce the show at all if it were shot on film. In the previously cited article, Rymer indicates that he noticed many of the issues that I have pointed out. In particular, he says “the video was picking up the fluorescent bars of various consoles on our [spaceship interior] set”, and calls daylight exteriors “less than satisfactory” and “the worst environment for HD”. That said, Rymer did eventually come around to the look of the show on HD, saying that the sensor noise “approximated film grain nicely” and that he could ultimately achieve the desired aesthetic for the show. On this point I have to disagree, but again, we’re looking back with 20 years of hindsight on how badly early HD footage and CGI work has aged.

As it turns out, a ton of productions were shot with the F900 camera, and cameras like it, and the effect on TV production quality was fairly devastating. This isn’t a critique of shooting digitally in general, of course; the technology has vastly improved over the course of the last decade. Some shows, like The Big Bang Theory, are so reliant on internal sets and carefully controlled lighting that the limitations of shooting HD don’t really show. Others, like The Office, make use of the raw harsh lighting to their advantage, creating an alienating and sterile environment that pushes the presence of the camera in your face, achieving a documentary tone that in my opinion Galactica never did.

Once you decide to go with digital acquisition and approximate the final color grade in-camera, there’s basically no hope of an improved release (for home video or otherwise) years down the road. On the other hand, it’s worth pointing out that early seasons of SG-1, because they were shot on film, could be released in HD one day. The difficulty of course is to find someone willing to put up the money to have the original negatives scanned, and possibly recomposite the effects in HD. For very popular shows, like Star Trek: TNG, this has actually been done, but there’s little hope of the same for less popular shows like Stargate.

Here too, then, there are tradeoffs. While we do have HD versions of the later seasons of SG-1 (though only on streaming services, not Blu-ray), we have to deal with the permanent damage to the image quality done by the acquisition method. On the other hand, the earlier seasons have the potential to see beautiful high definition scans, but this will likely never happen because it’s simply too expensive.

Bart Ehrman and the “strictly historical point of view”

12 September 2020


I recently read the book Jesus, Interrupted by Bart Ehrman. It’s a very nice book. However, he has the following very strange thing to say about the method historians follow: “If historians can only establish what probably happened, and miracles by their definition are the least probable occurrences, then more or less by definition, historians cannot establish that miracles have ever probably happened.”

This is the sort of statement that causes me to scratch my head and re-read several times because I get the strong feeling that it has to be wrong. Ehrman claims that historians must accept methodological naturalism. In other words, a historian can never conclude on the basis of their evidence that a supernatural event occurred. They must always conclude (whatever their personal convictions) whatever is most likely, and what is most likely by definition cannot be a miracle.

While this requires some unpacking already, it seems to me to be based on seriously mistaken notions of both supernatural events and likelihood itself. Before getting into that I think it’s worth considering the implications of such a view.

If I read Ehrman correctly, should he miss the second coming, even extensive video evidence of Jesus Christ returning through the clouds to rule the nations would not convince him of its happening. A supernatural event of any kind is simply above the pay grade of historians to comment on. These events are purely the realm of faith. (One wonders if he would believe his own eyes.)

Not only is this odd, it’s already directly in conflict with Ehrman’s own claims about other supernatural events. For example, this excerpt I’m commenting on is from a book which argues “if the findings of historical criticism are right, then some kinds of theological claims are certainly to be judged as inadequate and wrong-headed. It would be impossible, I should think, to argue that the Bible is a unified whole, inerrant in all its parts, inspired by God in every way.”

Let’s consider what the evangelical claim that the Bible was verbally inspired by God entails, counting the instances of miraculous activity. Conservatives claim that each word of the Bible was inspired by God in the original autographs (1), the books were sufficiently preserved by scribal copying over time (2), and the correct books were canonized by the winning side in the theological battles of the early church (3). Ehrman is on the record as thinking these claims are untenable in the light of the historical evidence (e.g. in light of contradictions in the text), and in fact has dedicated this very book to proving that point. How is this possible if all claims of miracles are the proper domain of faith and not history?

In the book, Ehrman also pokes some well-deserved fun at the fundamentalist KJV-only movement. But what this sect claims, on any intelligible version of their views, is that God inspired the work of the translators. If this is a faith-claim, as it certainly must be, what are we to make of Ehrman and other historians hastening to point out that the translators of the KJV based their work on inadequate sources and made many mistakes? Why should that matter?

To try to clarify this, let me quote extensively what Ehrman has to say about miracles.

Miracles, by our very definition of the term, are virtually impossible events. Some people would say they are literally impossible, as violations of natural law: a person can’t walk on water any more than an iron bar can float on it. Other people would be a bit more accurate and say that there aren’t actually any laws in nature, written down somewhere, that can never be broken; but nature does work in highly predictable ways. That is what makes science possible. We would call a miracle an event that violates the way nature always, or almost always, works so as to make the event virtually, if not actually, impossible. The chances of a miracle occurring are infinitesimal. If that were not the case it would not be a miracle, just something weird that happened.

I think this quote points up the problem quite nicely. Ehrman is begging the question by slipping the assumption of naturalism into what ought to be a defense of methodological naturalism. Ehrman’s definition of a miracle is quite acceptable, actually. I’ll quote it again: “[a miracle is] an event that violates the way nature always, or almost always, works…” His conclusion from this is that “The chances of a miracle occurring are infinitesimal.”

This simply does not follow. One cannot simply assume that “the way nature … works” is the same as what actually happens. Miracles are by definition supernatural occurrences! If you assume that the story of what happened at some point in the past can be filled out entirely by causal or quasi-causal chains in the natural order of the universe, you’ve simply assumed that miracles do not happen. That, of course, assumes far too much for Ehrman.

What Ehrman can safely conclude is that the chances of a miracle occurring naturally are infinitesimal. To get from there to the claim that historians are justified in rejecting these explanations in all cases requires much more work. I can think of several ways to do so.

For example, maybe Ehrman wants to position the historian as a sort of scientist, who can’t consider supernatural occurrences precisely because they’re supernatural — they’re in the wrong domain. On this view the job of the historian is to reconstruct the best possible naturalistic explanation for what happened in the past.

One problem with this interpretation is simply that Ehrman doesn’t seem to be saying this. He repeatedly asserts that the problem with miracles for the historian is their unlikelihood. He writes:

Historians can only establish what probably happened in the past. They cannot show that a miracle, the least likely occurrence, is the most likely occurrence.

The bigger problem with this view is that it’s just a silly and arbitrary restriction of the historian’s task. Surely the job of the historian is not like that of the scientist at all! The scientist must construct the most plausible account of the behavior of the natural world. The historian, on the other hand, is tasked with giving the most plausible account of what actually happened in the past.1 If the most plausible account involves some supernatural activity (recall the example I gave of video evidence), so be it.

I think the most likely explanation for Ehrman’s reluctance to consider supernatural explanations is that they’re an enormous unknown. In order to get from the claim that miracles are extremely unlikely to happen naturally to the claim that they’re extremely unlikely full stop, one has to have a prior for how likely non-natural events are. In other words, do miracles happen? It is not impossible to imagine a world in which miracles happen all the time; indeed, some evangelicals believe we live in such a world. Accounts of miraculous healings and near-death trips to heaven appear regularly in media targeted at conservative Christians. Ehrman (as an agnostic) does not take these accounts seriously. Neither he nor I believe we live in a world where miracles are common. That said, there’s an enormous difference between the claim that miracles are (at least) uncommon, and the claim that a miracle is always “the least likely occurrence”.
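To make the point about priors concrete, here is a purely illustrative sketch in Python (the numbers are invented, not anything Ehrman endorses): with the likelihoods held fixed, the posterior probability that a miracle occurred swings over many orders of magnitude depending entirely on the prior you bring to the question.

    def posterior(prior_miracle, p_evidence_given_miracle=0.99, p_evidence_given_natural=0.001):
        # Bayes' rule: P(miracle | evidence) for some fixed, hypothetical piece of evidence.
        p_evidence = (p_evidence_given_miracle * prior_miracle
                      + p_evidence_given_natural * (1.0 - prior_miracle))
        return p_evidence_given_miracle * prior_miracle / p_evidence

    for prior in (1e-3, 1e-9, 1e-15):
        print(f"prior {prior:.0e} -> posterior {posterior(prior):.2e}")

The contested quantity is precisely that prior, which is what Ehrman declines to estimate.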

This leaves a vast gray area. For Ehrman, we have no convincing evidence of miracles that would lead him to set a high enough prior for them to figure in many historical explanations. On the other hand, he is unwilling to take the materialistic stance that miracles are impossible and always ruled out as possible accounts of what really happened. Thus he tries to bracket off these concerns as completely as possible from the historian’s task.

If this is really the best reading of what Ehrman wants to do, it’s hard to have methodological objections. It would be an enormous issue for historians if at every turn they had to speculate endlessly about the appropriate prior likelihood of supernatural intervention in the normal course of the universe. Given that miracles are at least not every-day occurrences, historians seem to be justified in attempting to find straightforward scientific accounts of past events. I have no objection to this.

What I continue to object to is the specific defense offered by Ehrman. He writes about the resurrection of Jesus, as a purported historical miracle:

The resurrection is not least likely because of any anti-Christian bias. It is the least likely because people do not come back to life, never to die again, after they are well and truly dead.

In other words, he follows up his claim that his assignment of low probability to the resurrection is not the result of anti-Christian bias with what’s probably the only blatant expression of anti-Christian bias in the whole book! If it is simply a fact that people do not come back to life, then Jesus did not come back to life.2 No wonder the resurrection has such a low probability in his estimation!

What Ehrman presumably means is that under some unclear historian-centric notion of probability, the probability of the resurrection of Jesus is extraordinarily low. This is a notion of probability that makes naturalistic assumptions, not because the probability of supernatural events is low (which would beg the question), but because it is appropriate for historians to bracket off these types of considerations. In other words, it is because the likelihood of a supernatural explanation’s being true is indeterminable that historians must steadfastly refuse to speculate about them.

This passage remains remarkably unclear. If something like this is what Ehrman means, he would do well to say so directly. Meanwhile, those interested in more general answers to these questions must consider the matter more holistically. What one has to think about is quite simply the plausibility of two more or less complete descriptions of the entire world. Which makes more sense: the view that miracles sometimes occur and are the best explanations for some past events? Or that there have never been miracles and all past events are explained in a naturalistic fashion? This is a difficult question to answer — but it is not a question from which reason must be banished as a matter of faith.

Note:

After writing this, I subsequently read one of Ehrman’s other books, How Jesus Became God. In this book, written about five years after Jesus, Interrupted, Ehrman has a more nuanced take on the historian’s role. For example, Ehrman writes,

It is not appropriate for a historian to presuppose a perspective or worldview that is not generally held. “Historians” who try to explain the founding of the United States or the outcome of the First World War by invoking the visitation of Martians as a major factor of causality will not get a wide hearing from other historians—and will not, in fact, be considered to be engaging in serious historiography. Such a view presupposes notions that are not generally held—that there are advanced life-forms outside our experience, that some of them live on another planet within our solar system, that these other beings have sometimes visited the earth, and that their visitation is what determined the outcome of significant historical events.

This is a useful comment. Something about the historian’s task prevents them from invoking beings or phenomena that are not accepted already by a majority of other historians and scientists. In perhaps the best version of his view in the book, Ehrman continues:

The supernatural explanation, on the other hand, cannot be appealed to as a historical response because (1) historians have no access to the supernatural realm, and (2) it requires a set of theological beliefs that are not generally held by all historians doing this kind of investigation.

Here Ehrman drives the cleanest wedge between the question “what actually happened?”, and the historian’s question, which is (here) seemingly “what is the most plausible historical-naturalistic reconstruction of what happened?” This lends some support to my earlier suggestion that Ehrman wants to bracket off supernatural concerns from history.

Unfortunately, Ehrman is not always so clear in this book. Later, he returns to making almost exactly the same claim that he made in Jesus, Interrupted:

But simply looking at the matter from a historical point of view, any of these views is more plausible than the claim that God raised Jesus physically from the dead. A resurrection would be a miracle and as such would defy all “probability.” Otherwise, it wouldn’t be a miracle. To say that an event that defies probability is more probable than something that is simply improbable is to fly in the face of anything that involves probability. Of course, it’s not likely that someone innocently moved the body, but there’s nothing inherently improbable about it.

Here Ehrman returns to the concern with probability and the strange claim that miracles are inherently the most improbable explanation. This suggests, contra the statements made earlier in the same chapter, that the historian is concerned with the question “what actually happened?” and is simply inferring to the most plausible explanation. As I argued above, anyone truly committed to answering this question cannot simply forswear any non-naturalistic explanations, because there’s simply no way to show a priori that the supernatural will never figure in the most reasonable account of historical events.


  1. As philosophers of science have pointed out, there is some overlap between the fields in Biology. 

  2. Paul noticed this with remarkable clarity. “Now if Christ is proclaimed as raised from the dead, how can some of you say there is no resurrection of the dead? If there is no resurrection of the dead, then Christ has not been raised; and if Christ has not been raised, then our proclamation has been in vain and your faith has been in vain.” (1 Cor 15:12-14 NRSV) The argument here is straightforward and if Paul is right, then the assumption that people do not come back to life is certainly an anti-Christian one. Likewise, Paul would no doubt be surprised to hear that “Believers believe that all these things are true. But they do not believe them because of historical evidence.” Paul (and the Gospel authors) regularly offer such evidence, as Ehrman himself points out in the book.

The False Claim that Bernie Sanders Was Sunk in 2016 by Black Voters

09 September 2020


I’ve heard the claim repeated dozens of times that the reason Bernie Sanders failed to win the 2016 Democratic Primary1 was that he wasn’t able to get enough support from black voters. This has become such a truism among some pundits that attempting to refute it smacks of a conspiracy theory, but I hope to show convincingly in this article that it is actually false. It turns out that the claim hangs on math that is actually fairly unintuitive, so much so that even after doing the calculations for many states, I still found myself unable to guess what level of support Bernie Sanders got among black voters versus white ones in any particular state when looking at exit polling data.

This may sound absurd. After all, the exit polls can look straightforward at first glance. For example, in South Carolina, the exit poll data contains something like the following table:

Candidate White (35%) Black (61%)
Clinton 54% 86%
Sanders 46% 14%

Source: CNN2

Nothing could be simpler, right? 35% of the voters were white, 61% were black, and of the white voters, 54% went for Clinton, 46% went for Sanders. Of the black voters, 86% went for Clinton, 14% went for Sanders. Sanders has a gap among white voters of 8%, and a gap of 72% among black voters. Repeat this process on all 50 states, some of which are much closer than South Carolina, and you can trivially generate your hot take for MSNBC from there.

What’s wrong with this analysis? Well, what I’m interested in when I ask the question “what’s Sanders’ relative support among black and white voters?” is whether, if you asked every single voter leaving a primary election in 2016, a greater proportion of black voters would support Sanders than white voters, or vice versa. Or to put it in simpler terms, if you know 14 random white people, and 14 random black people, which group is going to have the greater number of Sanders supporters? I hope that seems to you, too, like the obvious thing to be interested in.

It will probably surprise you then to learn that the answer in South Carolina is that two in fourteen black voters support Sanders, and only about one in fourteen white voters support Sanders.

How can this be? It’s because of a very simple fact that the exit poll is unintentionally obfuscating: if you know fourteen white people in South Carolina, about one of them will support Sanders, one will support Clinton, and twelve of them are Republicans! This is the kind of demographic fact that exit polls don’t capture, because they’re not designed to. Polling results are divided and reported separately for Democrats and Republicans, even though the elections and exit polls are (usually) held simultaneously.

Fortunately, official election results and exit polls do provide enough data to pretty reliably piece together what the actual political distribution looks like. The actual distribution of South Carolina voters looks like this:

Candidate White Black
GOP sum 84.6% 3.2%
Clinton 8.3% 83.3%
Sanders 7.1% 13.6%

As this table suggests, South Carolina is extraordinarily bifurcated along racial lines. White people in this state are extremely far-right, to such an incredible extent that in a primary election where 75% of voters were white, 61% of Democratic voters were black. While rarely to such an extreme extent, this is true of just about every state, and has a similar distorting effect on the results of exit polls, and therefore a similar distortion on political commentary that is based on those polls.

I went through the exit poll data, and put together a complete summary based on every state I could get data for. Let me briefly explain how the math here is done, using South Carolina as an example. Feel free to skip over this paragraph entirely if you’re not interested in this. The number of votes for each candidate is a matter of public record. I used The Green Papers as my primary source here. This site records that 740,881 votes were cast in the Republican primary, 370,904 votes in the Democratic primary. Now we look at the exit poll data. In South Carolina3, in the Republican primary 96% of voters were white, 1% were black. So we estimate that there were 7409 black Republican voters, 711,246 white Republican voters. The same procedure for the Democrats reveals that 226,251 of their voters were black, 129,816 were white. The exit poll data shows that 46% of white Democrats voted for Sanders, while 14% of black Democrats did. So this means there were about 31,675 black voters for Sanders, and 59,716 white voters. The total number of white voters in the election was 841,0624 and the total number of black voters was 233,660. So 13.6% of black voters went for Sanders, and only 7.1% of white voters did.

Obviously there will be some degree of error in the exit polls, and therefore in these results. But it’s not that severe: for example, if the proportion of Republican voters who were black was changed to 0% or 2% (from 1%), this would make a difference of about half a percent in Sanders’ support among black voters. Taking all states together should have the effect of evening out the errors, although some systematic errors may remain. I’m not too bothered by this, since the point of this article is to counter a false view that the pundits take themselves to have learned from these very exit polls. If the polls themselves are untrustworthy, then their conclusion is unsound too.
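For anyone who wants to check the arithmetic, here is a small Python sketch that re-derives the South Carolina figures from the vote totals and exit poll percentages quoted above (it is not my original spreadsheet, just the same calculation); it also lets you reproduce the roughly half-percent swing from varying the black share of the Republican electorate.

    # Official vote totals (The Green Papers) and exit poll shares quoted above.
    GOP_VOTES, DEM_VOTES = 740_881, 370_904
    DEM_WHITE, DEM_BLACK = 0.35, 0.61          # racial composition of the Dem primary
    SANDERS_WHITE, SANDERS_BLACK = 0.46, 0.14  # Sanders' share among white / black Dems
    GOP_WHITE = 0.96                           # white share of the GOP primary electorate

    def sanders_support(gop_black=0.01):
        """Sanders' share of all white voters and of all black voters."""
        white_total = GOP_VOTES * GOP_WHITE + DEM_VOTES * DEM_WHITE
        black_total = GOP_VOTES * gop_black + DEM_VOTES * DEM_BLACK
        sanders_white = DEM_VOTES * DEM_WHITE * SANDERS_WHITE
        sanders_black = DEM_VOTES * DEM_BLACK * SANDERS_BLACK
        return sanders_white / white_total, sanders_black / black_total

    white, black = sanders_support()
    print(f"white: {white:.1%}, black: {black:.1%}")   # about 7.1% and 13.6%

    # Sensitivity check: vary the (tiny) black share of the GOP electorate.
    for share in (0.00, 0.01, 0.02):
        print(f"GOP black share {share:.0%} -> Sanders among black voters {sanders_support(share)[1]:.1%}")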

Anyway, on to the results. They’re based on the total vote of 21 states in which all of the following were true: they held primaries in 2016, an exit poll was taken in them by the major media organizations, and the exit poll had a sufficient number of black respondents to draw conclusions about who they supported. (The primary qualifier is important because in caucus states like Iowa, the total popular vote count was not the official result reported by the election.) These states are South Carolina, Alabama, Arkansas, Georgia, Oklahoma, Tennessee, Texas, Virginia, Michigan, Mississippi, Florida, Illinois, North Carolina, Ohio, Wisconsin, New York, Connecticut, Maryland, Pennsylvania, and Indiana. I would have liked to include California, but they voted so late in 2016 that the media didn’t take an exit poll. Here is a spreadsheet with the math.

Here are the results:

Candidate White Black
GOP sum 63.9% 12.6%
Clinton 17.5% 67.8%
Sanders 17.9% 18.7%

In other words, my claim holds for all states in which there is data. On the whole, black voters are at least as likely to support Sanders as white voters. (The difference between the two is +0.8% for black voters, but I suspect that’s within the margin of error of this kind of research.)

Now, a certain kind of pundit might be inclined to respond as follows: “If you look just at the relative support for Clinton vs. Sanders among white voters, you’ll see that Sanders edges out, and so it remains true that Sanders lost the race because of poor support among minorities.”

I find this sort of analysis rather unhelpful. To put it simply, what we are imagining is disenfranchising all minorities … in which case, yes, Sanders would have won the 2016 Democratic primary, and then would have gotten utterly crushed in the general election because the Democrats depend on minority support for their basic viability as a party. It’s wrong in another way too: the pundit (at least rhetorically) takes the point of view of Sanders, and decides that “blame” needs to be parceled out to various Democratic primary demographic groups according to the degree to which they failed to support him. (Alternatively, a pundit might take a rhetorical position opposing Sanders and blame him for failing to reach out to these groups.) This isn’t really what’s happening in a primary. The reason that moderate and conservative black voters play such an enormous role in the Democratic primary is that almost two thirds of white voters are so far right that they don’t vote in the Democratic party primary at all!

Now, you might imagine a less racialized (and simpler) country in which the major political parties were basically in alignment with the range of political views along a left-right spectrum. There would be a lot more black Republican voters. The question of why we don’t live in something closer to that world is an interesting one; FiveThirtyEight took this question on directly in a recent article.5 Their conclusion was that “social pressure is what cements that relationship between the black electorate and the Democratic party”. The word “cements” is doing a lot of work here. Social pressure certainly can’t explain the majority of the effect; the same article says that 85% of black respondents identified as Democrats in an online poll where social pressure was not a factor.

It seems plausible to me that another significant factor is a response to the racialized politics of the Republican party, as the extreme proportion of white supporters in its ranks attests. If this is true, though, why wouldn’t the party take the pragmatic approach by toning down its rhetoric to pull in the many conservative minorities who are aligned with them on policy questions? Certainly, part of the answer is that they haven’t needed to so far, and that the rhetoric may serve to energize part of their white base, but what this research may suggest is that it can actually be helpful to a political party to have a large number of people consistently voting to nominate moderates in the opposing party’s primary process.

The promise of Sanders all along, of course, was that the supposed left-right spectrum is a lie. If people (and their candidates) do fall on a simple spectrum like that, then you can trivially show that the Condorcet winner will be a centrist. Even in a complicated two-party system like that of the United States, a centrist is expected to be the strongest candidate the majority of the time. (Obviously, the Electoral College throws a wrench into this.) But Sanders, and Trump to some extent, represent a claim that the true views of most voters are not well represented by the current two-party system, and that in fact someone very far to the left (or right) on the current spectrum might be more acceptable to the median voter than a centrist.

How else to understand Sanders’ candidacy at all? So far, he has not shown signs of being able to win a Democratic primary, suggesting (but not proving) that he’s too far left for many Democrats. If this is true, then he’d be sure to lose a general election that introduces an almost equal number of Republican voters. However, he has surprisingly performed at or near the top of recent head to head polls against Donald Trump, compared with other Democrats. What does that mean?

I suggest that the one explanation that suffices includes multiple factors. One very important reason why Sanders would stand a chance in a general election is polarization. Most regular voters in this country are loyal to one party or the other, and loath to switch parties based merely on the ideology of their candidates. (Moreover, there are slightly more Democratic voters than Republicans.) So if Sanders wins a Democratic primary, most of his support will come from loyal Democrats who don’t necessarily approve of all his policies. That said, it’s notable that Sanders has consistently performed at or near the top of these polls. I suggest this means that there must be some truth to his claim to represent those who do not find themselves cleanly on the left-right American political spectrum.

It’s important to notice that these two explanatory factors pull in opposite directions. On a strict party-loyalty hypothesis, it wouldn’t matter at all who gets nominated. This seems to be mostly true (for the small number of candidates who actually stand some chance of being nominated), but it’s not the whole story. Sanders represents the possibility of pulling support beyond mere party loyalty, and he’s succeeded to some extent at that, but perhaps not enough to win a primary election.

In the final analysis, this shows exactly why the exit poll based criticism of Sanders is misguided. Among Democrats, black voters are much less likely to support Sanders than white voters. But this is largely because of partisan demographics that Sanders can’t help: the Democratic party pulls in a number of surprisingly conservative black voters, while the Republican party presumably has a corresponding effect on many white voters who might be open to Sanders’ policy aims, but are more at home in their party’s racial antagonism.

On the whole, Sanders’ problem is not with black voters; they support him at equal or greater rates than do white voters. His problem is that his promise of pulling voters from both parties and those currently unaligned has not yet come to fruition. He has not been able to shift the majority of white voters away from their Republican or independent allegiances. The hope for left wing Sanders supporters must be that time and voter education will cause a realignment, and that people like Sanders will begin to see support across the political spectrum and from current non-voters. His high level of support from young voters does suggest some promise. But if the American political spectrum did accurately reflect the distribution of its voters, there would be little hope for candidates like Sanders in the near future. America simply has too many white people on the far right for that.


  1. And now the 2020 primary election as well. This article focuses on the 2016 election specifically, because in this election Sanders faced only one challenger, giving a clearer picture of where voters stood than in 2020, when he faced a very broad field. 

  2. https://www.cnn.com/election/2016/primaries/polls/sc/Dem 

  3. https://www.cnn.com/election/2016/primaries/polls/sc/Rep 

  4. This is neglecting third party voters. There are very few of them, and they appear to be disproportionately white, so if they were included, they would further reduce Sanders’ support among white voters. 

  5. https://fivethirtyeight.com/features/why-so-many-black-voters-are-democrats-even-when-they-arent-liberal/ 

God's RNG

09 September 2020


This is a short story that I wrote some time ago. It’s designed to illustrate some interesting properties of CSPRNGs (Cryptographically Secure Pseudorandom Number Generators), which form the bedrock of modern encryption techniques. In particular, the story highlights the fact that a single rather short key is sufficient to generate all the random numbers anyone will ever need. You don’t need a continuous source of pure entropy.

Part 1: The Universal RNG

During the universe’s design stage, it was realized that making some events probabilistic from the point of view of human beings was a greatly desirable property. (Several small proto-universes failed shortly after the intelligent species that populated them was able to work out the deterministic laws behind every event.) For the universe that ultimately went into production, it was decided that making a great many events (including quantum fluctuations) chancy was the safest approach.

True Believers hold that all these chancy events are Really random. That is, they believe that whenever the universe needs a new random number, God uses their infinite power to create it in their mind ex nihilo, and there’s simply nothing more to be said in the way of explanation. Skeptics hold that not even God is capable of acts of creation of this kind, and that there must be some ultimately deterministic story about where these numbers come from.

As it turns out, both are wrong. God is perfectly capable of creating Really random numbers, but although omnipotent is far too lazy to continue doing this all the time. Perhaps God has other universes to tend to, or maybe Heaven needs random numbers for some secret purpose of its own. In any case, the fact is that God only ever bothered to generate 2^8 random bits, in the form of a single 256 bit key embedded into the universe’s core systems. Whenever any “chancy” event needs to happen, the random information is generated by the universe using a CSPRNG that (entirely by coincidence) is exactly equivalent to ChaCha201. So it turns out that the universe is fundamentally deterministic, just not in the way anyone expected.
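For the curious, here is roughly what that design looks like in miniature (a toy Python sketch assuming the third-party cryptography package, and obviously not the universe’s actual source code): a single 256 bit key fed to ChaCha20 yields as much unpredictable-looking output as anyone cares to request.

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms

    key = os.urandom(32)   # stand-in for the one 256-bit key set at the beginning of time
    nonce = bytes(16)      # this library's ChaCha20 takes a 16-byte nonce/counter block

    stream = Cipher(algorithms.ChaCha20(key, nonce), mode=None).encryptor()

    # Encrypting zero bytes returns the raw keystream: "random" data on demand.
    randomness = stream.update(bytes(1024))
    print(randomness[:16].hex())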

A stubborn cohort of angels on the review board insisted that this was an inelegant solution to the problem of randomness. They tried to convince God to create an Oracle that would generate Really random numbers all on its own for the universe to use. The Almighty was undeterred, ultimately ruling that the system as designed was “good enough”. Many suspected that God’s real reason was that having another thing around capable of generating uncaused events was taken to be a slight to the Divine dignity. Lucifer led several others in resigning from the panel in protest. Following a disruptive sit-in at God’s office, he was cast like lightning from Heaven.

Unsurprisingly, God was right about the system being good enough. After all, the whole point of the system was to prevent humans from predicting events that were meant to be unpredictable without requiring the intervention of miracles.2 One complaint was that the total number of requests to the RNG over the universe’s lifetime might possibly exceed a value at which the RNG would begin to cycle. However, it was shown that collecting enough data to exploit (or even have a chance at detecting) the issue was physically impossible due to the energy constraints of the universe.

Of course, no steps needed to be taken to prevent direct attacks on the CSPRNG’s state, or key recovery, since these were coded into the OS of the universe itself, and life forms in the universe would have no access to them. So that’s the system that was ultimately put in place: every “random” event that ever happens in this universe can ultimately be traced back to its initial state and the single 256 bit key that makes it unique. While other designs based on entropy pools with estimators were considered, God worried about the universe blocking if at some point they forgot to update the pool with new random data. It was determined that the CSPRNG approach provided enough practical security with a single hard-wired key set at the beginning of time.

Part 2: God’s /dev/random

It is well known that God has a phone number.3 What is less commonly known is that when God designed the universe, they added a number of other interfaces intended to be helpful to human beings. The True Believers, for example, have it as an article of their faith that God is listening in all the time on /dev/null. But the most useful interface in God’s /dev is undoubtedly /dev/random.

The design team realized quite early on that humans themselves would need sources of randomness. Since every bit of random data is ultimately generated by God’s RNG anyway, it was decided that /dev/random should just return data straight from the RNG with no scrambling. Although this provided far more direct access to the RNG than its designers had initially anticipated, it was determined that its security margin was sufficiently high to allow for these queries.

Access to /dev/random was provided on Earth in a number of high and holy places. God’s interfaces are so fast that they are able to provide data to human devices at the full speed of any interface any humans have been able to construct so far. Of course, all these interfaces have to get their data ultimately from a single device built into the universal mainframe, but light travel time isn’t a problem since that was a constraint built into the universe’s physical laws, not something that applies to the machine the universe runs on.

For a long time humans were happy to take their devices to the nearest /dev to be filled up with random data. But Lucifer, displeased with the success of the system, tricked one of them into accepting data from an illicit, possibly backdoored source. God was pissed, and things generally went to hell for a while after that. While some authorities wanted to shut down the /dev system entirely, God ultimately decided that since the security of /dev/random hadn’t been compromised in any way, they would leave the system in place. In general, however, access to /dev for ordinary humans became more difficult after this, and many of the high and holy places fell under the control of nation states or were sold off to corporations for extraction of their natural resources.

It gradually came about that humans started to need random numbers more frequently, and even though you could get as many numbers as you needed from /dev/random, the latency caused by having to travel to an accessible holy place was considered unacceptable. Instead, it became common for priests to provide their own sources of random numbers. They would do this by traveling themselves and returning with 256 bits of random data, which they would then use as a key to seed a CSPRNG that was (incidentally) similar to God’s own. While the priests’ computers could provide random data only much more slowly than /dev/random, the latency was much better because people didn’t have to travel so far. This method managed to sustain most civilizations for centuries, resulting in a hierarchy where only the highest ranking bishops had direct access to /dev/random, and local priests would seed their own CSPRNGs from 256 bit keys provided by their RNGs instead of directly from God’s sources.

Cracks emerged. The role of priests in this scheme became widely regarded as suspect. After all, an untrustworthy priest could be providing random bits from a less-than-holy source, and if anyone on the chain between you and God’s RNG was a bad actor, they could potentially uncover your secrets. Protestants began to insist on making the journey to /dev themselves to get their own keys, and rolling-your-own PRNG functions quickly became a widespread practice. A number of televangelists were found to be using keys of unknown origin with less than 32 bits of entropy.

Cryptographers eventually invented solutions for collecting and estimating entropy, and most skeptics stopped caring about having any link back to the “supposedly” holy /dev/random. Instead, their operating systems gathered entropy from secular sources like ordinary “random” events. Of course any key they created was ultimately the result of deterministic processes that had their origin in God’s RNG, but practically speaking this had no effect on their security.

Perhaps most surprising of all was the group of Satanists who insisted on using random numbers generated from secret sources supposedly provided by Lucifer himself. They claim Lucifer has crafted mechanisms for generating Really random numbers, such that every number you get from the Devil’s /dev/random is entirely Real, not backed by a PRNG. Expert theologians and cryptographers currently believe this to be impossible. Even if Lucifer is using some kind of chancy mechanism to generate these numbers, the process must be ultimately deterministic and known to God.

Part 3: Unexpected Consequences

A number of crypto nerds needed to generate 2048 bit keys for use with asymmetric cryptosystems like RSA. Many of them suspected that God’s RNG might be a PRNG or otherwise distrusted it, and decided, like the secularists, to collect their own sources of entropy from the universe. They relied on only the most conservative estimations of entropy, collecting a full 2048 bits of entropy into their pools before turning that data via convoluted methods into their keys. The irony of this, of course, was that every event in all of space-time put together only contained the 256 bits of true randomness hard coded into it at the moment of creation. Their keys were no better than 2048 bits taken from God’s /dev/random, indeed no better than 2048 bits taken from a CSPRNG seeded by 256 bits taken from God’s /dev/random.

There is a strange beauty to the fact that all of this was fundamentally secure. No one, no matter how many bits they stored and analyzed from God’s RNG, had any hope of doing better than 50/50 at guessing the next bit that would come out, so anyone else could securely use that output for any purpose. So long as every person in the chain from God’s RNG was trustworthy, each person could take a mere 256 bits from the person who came before to seed a CSPRNG, and every 256 bits that came out of the 10th person’s CSPRNG was just as cryptographically secure as the same amount of data taken from God’s own /dev/random. 256 bits of sufficiently unpredictable data really is enough for everyone, forever.4
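The chain-of-trust property is easy to sketch with the same toy ChaCha20 setup (again assuming the cryptography package; the variable names are of course illustrative): each link seeds its own generator with 256 bits drawn from the link above, and the output at the bottom of the chain is no easier to predict than the output at the top.

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms

    def chacha_stream(key):
        # Returns a function that produces n more bytes of keystream on each call.
        enc = Cipher(algorithms.ChaCha20(key, bytes(16)), mode=None).encryptor()
        return lambda n: enc.update(bytes(n))

    gods_rng   = chacha_stream(os.urandom(32))  # stand-in for the original 256-bit key
    bishop_rng = chacha_stream(gods_rng(32))    # seeded with 256 bits from upstream
    priest_rng = chacha_stream(bishop_rng(32))  # and so on down the hierarchy

    print(priest_rng(16).hex())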

Unfortunately, it didn’t last forever. One of God’s interns introduced a use-after-free into the universe’s code, and a too-clever hacker who found their way into one of the remaining high and holy places managed to root the universal mainframe. In a matter of minutes, they had accidentally triggered a debugging function that had been left in the code, which led to a kernel panic. The universe went out like a light.


  1. To be precise, God used ChaCha20 with what Daniel J. Bernstein calls “fast-key-erasure” here. The point of this isn’t to provide protection against backtracking (key recovery was assumed to be impossible by the design team), but in this case is an efficient and secure way of rekeying, which is required by the ChaCha20 cipher because of its smallish 64 bit counter. God briefly considered AES-256-CTR, but decided against it because of its small block size (128 bits), which makes it possible to distinguish from a random oracle with a sufficient number of requests. In theory fast-key-erasure might be enough to protect against this, even without rekeying with new randomness, but the security margin was deemed insufficient in light of available alternatives. (A short sketch of the fast-key-erasure construction appears after these notes.)

  2. Additionally, leaving open the possibility (from the human point of view) that the universe was non-deterministic was discovered to have psychological benefits. 

  3. It’s 42, as suggested by the philosopher Majikthise in Douglas Adams’ Hitchhiker’s Guide to the Galaxy. Unfortunately, God did not put any audio interfaces in /dev.

  4. Based on my reading of Bernstein’s article here.
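Since footnote 1 leans on the idea, here is a minimal sketch of fast-key-erasure (my own illustration based on Bernstein’s description, again assuming the cryptography package): each refill generates a little extra keystream, keeps the first 256 bits as the next key, and hands out the rest, so the key that produced a given batch of output can be erased immediately afterward.

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms

    def refill(key, n=736):
        """One fast-key-erasure step: return (next_key, n bytes of output)."""
        enc = Cipher(algorithms.ChaCha20(key, bytes(16)), mode=None).encryptor()
        stream = enc.update(bytes(32 + n))
        return stream[:32], stream[32:]   # the old key can now be erased

    key = os.urandom(32)
    key, out1 = refill(key)   # after this call, the key that produced out1 is gone
    key, out2 = refill(key)
    print(out1[:8].hex(), out2[:8].hex())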

The Odds of a Correct First Guess in Clue

09 September 2020


Prompted by a strange dream, I decided to calculate what your odds are of correctly guessing the three pieces of evidence the first time in the game of Clue.

In practice, successfully doing this is likely to provoke accusations of cheating. But a simple calculation will show that this is likely undeserved. In a standard game of Clue1, there are six character cards, six weapon cards, and nine location cards. Without any information at all, that gives the odds of correctly guessing on your first turn at only 1/6 × 1/6 × 1/9 = 1/324, which is frequent enough that anyone who plays Clue many times is likely to encounter it. Keep in mind that each player has these odds on their first guess, which significantly raises the chances of ever seeing it happen in a game.

Of course, in every game of Clue each player will have some evidence, and so the odds of a correct first guess go up quite a bit. How much? That depends on how much evidence (how many cards) you receive.

The rules for the distribution of evidence are pretty simple. The three “correct” cards are removed from the deck of evidence, it’s shuffled, and distributed to players as evenly as possible. The players then proceed to interrogate each other about the cards they have, in order to eliminate live possibilities about the correct combination of person, weapon, and location. You hope to eliminate all but one combination (the correct one) before any other player can do so. In my circles, when children are playing, the players are arranged so that the younger will receive more cards than the older if they can’t be divided evenly.

Not every combination of cards is equally likely. If you are to receive five cards, you’re most likely to receive two cards each from two of the types and one from the third, or three locations, one weapon, and one person. These five-card hands are dealt a combined 58% of the time! In addition, some hands make a correct first guess easier than others: for a five-card hand, the best hand (all characters or all weapons) gives you almost three times better odds than the worst one (four locations, one other card). Note that in actual play, the better hands tend to be heavy in locations, because you have to visit fewer of them; a low-location hand only improves your chances of guessing blindly.

Okay, so what we have to do is figure out the odds of each hand combination (multiset), and multiply that by the chances of a correct first guess for each hand, and sum up the results to get the total odds of a correct first guess (assuming a perfectly shuffled deck). I wrote Python code to do that here.
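The linked script is mine, but the whole computation fits in a few lines, so here is a sketch of it in Python (using the standard Clue card counts; it reproduces the roughly one-in-136 figure for a five-card hand quoted below).

    from math import comb

    def first_guess_odds(hand_size, n_char=6, n_weap=6, n_loc=9):
        # Deck after the three solution cards are removed: 5 characters, 5 weapons, 8 locations.
        dc, dw, dl = n_char - 1, n_weap - 1, n_loc - 1
        total_hands = comb(dc + dw + dl, hand_size)
        odds = 0.0
        for c in range(min(dc, hand_size) + 1):
            for w in range(min(dw, hand_size - c) + 1):
                loc = hand_size - c - w
                if loc > dl:
                    continue
                p_hand = comb(dc, c) * comb(dw, w) * comb(dl, loc) / total_hands
                # Holding c characters, w weapons and loc locations, a blind guess
                # among the remaining possibilities succeeds with probability:
                p_guess = 1 / ((n_char - c) * (n_weap - w) * (n_loc - loc))
                odds += p_hand * p_guess
        return odds

    for cards in range(7):
        print(f"{cards} cards: about 1 in {1 / first_guess_odds(cards):.0f}")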

Now, here are the results! Some of the hands aren’t possible, because standard Clue supports at most six players, which means each player always gets at least three cards, but I’ve included hypothetical games where a player might start out with zero to two instead.

A graph showing the rarity of getting a correct first guess.

So for a five card hand, you’d expect to guess correctly the first time about once every 136 games. With six cards that drops to once every 111 games! Combining these facts with multiple players, you can show that fair games of Clue will end with a player solving the mystery on their first turn about once every 30-40 games.

A Clue bot?

Thinking about this problem made me consider writing a Clue bot, but I ended up deciding against it. It might be an interesting project: you can do a very good approximation of perfect play with a bot that just tabulates its knowledge about every player’s hand and uses a simple pathfinding algorithm to efficiently traverse the board.

However, there are two good reasons not to bother. One is that Clue isn’t a “fair” game: an improved strategy may reduce your win rate rather than improve it. (In this specific sense, both Chess and Candy Land are fair.) The reason for this is that the standard rules of Clue say:

To make a Suggestion, move a Suspect and a Weapon into the Room that you just entered.

Normally, moving around the board is a slow process, since rooms are fairly far apart and you only get to move one d6 each turn. (This also adds quite a bit of luck into the game.) However, because the murder suspects are also other players, the above rule means that each guess (“Suggestion”) you make will instantly teleport one of them into the room with you. This can either aid (by vastly reducing travel time) or harm (by preventing an intended move) another player.

With coordination among the other players, it’s possible to harass one player and make it almost impossible to plan movements. Even without this unfair practice, it’s often in the interest of individual players to harass those of equal or greater skill to them. That’s just clever play! This can backfire, of course, but the better a player (or bot) is, the more likely other players are to attempt it, and it can make intelligent pathfinding impossible.

There’s a simpler reason not to bother with a bot, however, and that’s that close to perfect play is already easily achievable by humans. We’re already pretty good at intuiting optimal routes, and extracting as much information as possible from gameplay is easily done with an algorithm:

The game comes with worksheets for the players to use which list every card in rows, and have several columns (probably intended to save paper over multiple games). Simply assign the first column to yourself, and every succeeding column to the other players in the order of play. The additional columns are used to collect any information you can obtain about what hands the other players have. At the top of each column write the number of cards that player has. Use your own column to summarize everything you know about the solution. An “x” means that you know that a card is not part of the solution, and a box means that you know it is.

For the other columns, a box means that the player does not have the corresponding card. An “x” means that they do (and therefore, that there should also be an “x” in your column, the “solution” column). Whenever a player is not able to show any cards to someone (including you), place a box in each of the three suggested cards’ rows for that player. When a player shows a card to someone besides you, place a tiny number in that player’s column in the row of each suggested card they might have. (Simply increment the number you use in each column every time you need a new one.) Whenever logic forces you to place a box in someone’s column, check to see if only one row sharing a number remains without a box, and if so you can put an “x” there. If you can work out every card that a player has, you can put a box in every other row of their column.

Example: Player 1 suggests Ms. Scarlet, the candlestick, and the ballroom. Player 2 has none of these cards, so you put a box on each one in their column. Player 3 shows a card, so you put a “1” in each of those three rows in their column. Player 2 suggests Ms. Scarlet, the knife, and the kitchen. Player 3 has none of these cards, so you now have a box for Ms. Scarlet in their column. It comes around to your turn, and you suggest Mr. Green, the candlestick, and the library. Player 1 shows you the candlestick. So you put an “x” in their column for the candlestick, which means a box belongs in Player 3’s column for the candlestick. Now you’re only left with one “1” in that column without a box, on the ballroom. So you know Player 3 must have shown Player 1 that card, and you can put an “x” in that row. Now you know the ballroom isn’t part of the correct solution!
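If you would rather see that bookkeeping as code, here is a hypothetical Python sketch of the same logic (the class, method names, and shortened card names are made up for illustration; this is not the bot I decided against writing). Replaying the example above, it correctly deduces that Player 3 holds the ballroom.

    class Notes:
        """Track what each other player has, lacks, or 'showed one of' (the numbered marks)."""

        def __init__(self, players, all_cards):
            self.players = list(players)
            self.all_cards = set(all_cards)
            self.has = {p: set() for p in players}     # "x" marks: cards we know p holds
            self.lacks = {p: set() for p in players}   # box marks: cards we know p lacks
            self.groups = {p: [] for p in players}     # "showed one of these three" constraints

        def showed_nothing(self, player, suggestion):
            self.lacks[player] |= set(suggestion)
            self._propagate()

        def showed_unknown_card(self, player, suggestion):
            self.groups[player].append(set(suggestion))
            self._propagate()

        def showed_me(self, player, card):
            self._mark_has(player, card)
            self._propagate()

        def _mark_has(self, player, card):
            self.has[player].add(card)
            for other in self.players:            # only one player can hold a given card
                if other != player:
                    self.lacks[other].add(card)

        def _propagate(self):
            changed = True
            while changed:
                changed = False
                for p in self.players:
                    for group in self.groups[p]:
                        left = group - self.lacks[p]
                        if len(left) == 1 and not left <= self.has[p]:
                            self._mark_has(p, next(iter(left)))
                            changed = True

        def possible_solution(self):
            held = set().union(*self.has.values())
            return self.all_cards - held

    cards = ["Scarlet", "Green", "candlestick", "knife", "ballroom", "library", "kitchen"]
    notes = Notes(["P1", "P2", "P3"], cards)
    notes.showed_nothing("P2", ["Scarlet", "candlestick", "ballroom"])
    notes.showed_unknown_card("P3", ["Scarlet", "candlestick", "ballroom"])
    notes.showed_nothing("P3", ["Scarlet", "knife", "kitchen"])
    notes.showed_me("P1", "candlestick")
    print(notes.has["P3"])             # {'ballroom'}
    print(notes.possible_solution())   # the candlestick and ballroom are ruled out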

Obviously there’s a bit more improvement you can do with ideal guessing, and it might make sense to keep track of what other players know so you can surmise if they’re about to make a correct accusation, in which case you might want to jump the gun if you have a 50/50 shot. But 95% of strategy can be easily implemented by a player following the approach above.


©2020 Adam Fontenot. Licensed under CC BY-SA.