Adam Fontenot

The False Claim that Bernie Sanders Was Sunk in 2016 by Black Voters

09 September 2020


I’ve heard the claim repeated dozens of times that Bernie Sanders failed to win the 2016 Democratic Primary1 because he wasn’t able to get enough support from black voters. This has become such a truism among some pundits that attempting to refute it smacks of a conspiracy theory, but I hope to show convincingly in this article that it is actually false. It turns out that the claim hangs on math that is fairly unintuitive, so much so that even after doing the calculations for many states, I still found myself unable to guess from exit polling data what level of support Bernie Sanders got among black voters versus white ones in any particular state.

This may sound absurd. After all, the exit polls can look straightforward at first glance. For example, in South Carolina, the exit poll data contains something like the following table:

Candidate   White (35%)   Black (61%)
Clinton     54%           86%
Sanders     46%           14%

Source: CNN2

Nothing could be simpler, right? 35% of the voters were white, 61% were black, and of the white voters, 54% went for Clinton, 46% went for Sanders. Of the black voters, 86% went for Clinton, 14% went for Sanders. Sanders has a gap among white voters of 8%, and a gap of 72% among black voters. Repeat this process on all 50 states, some of which are much closer than South Carolina, and you can trivially generate your hot take for MSNBC from there.

What’s wrong with this analysis? Well, when I ask “what’s Sanders’ relative support among black and white voters?”, what I want to know is: if you asked every single voter leaving a primary election in 2016, would a greater proportion of black voters or of white voters support Sanders? Or to put it in simpler terms: if you know 14 random white people and 14 random black people, which group is going to have the greater number of Sanders supporters? I hope that seems to you, too, like the obvious thing to be interested in.

It will probably surprise you then to learn that the answer in South Carolina is that about two in fourteen black voters support Sanders, and only about one in fourteen white voters do.

How can this be? It’s because of a very simple fact that the exit poll is unintentionally obfuscating: if you know fourteen white people in South Carolina, about one of them will support Sanders, one will support Clinton, and twelve of them are Republicans! This is the kind of demographic fact that exit polls don’t capture, because they’re not designed to. Polling results are divided and reported separately for Democrats and Republicans, even though the elections and exit polls are (usually) held simultaneously.

Fortunately, official election results and exit polls do provide enough data to pretty reliably piece together what the actual political distribution looks like. The actual distribution of South Carolina voters looks like this:

Candidate   White    Black
GOP sum     84.6%    3.2%
Clinton     8.3%     83.3%
Sanders     7.1%     13.6%

As this table suggests, South Carolina is extraordinarily bifurcated along racial lines. White people in this state are extremely far-right, to such an incredible extent that in a primary election where 75% of voters were white, 61% of Democratic voters were black. While rarely to such an extreme extent, this is true of just about every state, and has a similar distorting effect on the results of exit polls, and therefore a similar distortion on political commentary that is based on those polls.

I went through the exit poll data and put together a complete summary based on every state I could get data for. Let me briefly explain how the math is done, using South Carolina as an example. (Feel free to skip this paragraph entirely if you’re not interested.) The number of votes for each candidate is a matter of public record; I used The Green Papers as my primary source here. This site records that 740,881 votes were cast in the Republican primary and 370,904 in the Democratic primary. Now we look at the exit poll data. In South Carolina’s Republican primary3, 96% of voters were white and 1% were black. So we estimate that there were 7,409 black Republican voters and 711,246 white ones. The same procedure for the Democrats reveals that 226,251 of their voters were black and 129,816 were white. The exit poll data shows that 46% of white Democrats voted for Sanders, while 14% of black Democrats did. So this means there were about 31,675 black voters for Sanders and 59,716 white ones. The total number of white voters in the election was 841,0624 and the total number of black voters was 233,660. So 13.6% of black voters went for Sanders, and only 7.1% of white voters did.
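To make the procedure concrete, here’s a minimal Python sketch of the same arithmetic (my own illustration; the vote totals and percentages are the South Carolina figures quoted above):

# South Carolina 2016 primary arithmetic, using the figures cited above.
gop_total, dem_total = 740_881, 370_904

# Exit poll racial shares of each primary electorate.
gop_white, gop_black = 0.96 * gop_total, 0.01 * gop_total
dem_white, dem_black = 0.35 * dem_total, 0.61 * dem_total

# Sanders' share of white and black Democratic primary voters.
sanders_white = 0.46 * dem_white
sanders_black = 0.14 * dem_black

# Sanders' support among all voters of each race, both primaries combined.
print(f"white: {sanders_white / (gop_white + dem_white):.1%}")  # ~7.1%
print(f"black: {sanders_black / (gop_black + dem_black):.1%}")  # ~13.6%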

Obviously there will be some degree of error in the exit polls, and therefore in these results. But it’s not that severe: for example, if the proportion of Republican voters who were black were changed to 0% or 2% (from 1%), this would make a difference of about half a percent in Sanders’ support among black voters. Taking all states together should have the effect of evening out the errors, although some systematic errors may remain. I’m not too bothered by this, since the point of this article is to counter a false view that the pundits take themselves to have learned from these very exit polls. If the polls themselves are untrustworthy, then their conclusion is unsound too.
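That sensitivity claim is easy to check by perturbing the figures in the sketch above:

# Sensitivity check: vary the black share of the GOP electorate.
gop_total, dem_total = 740_881, 370_904
dem_black = 0.61 * dem_total
sanders_black = 0.14 * dem_black
for gop_black_share in (0.00, 0.01, 0.02):
    all_black = gop_black_share * gop_total + dem_black
    print(f"{gop_black_share:.0%}: {sanders_black / all_black:.1%}")
# prints roughly 14.0%, 13.6%, 13.1% -- about half a point either way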

Anyway, on to the results. They’re based on the total vote of 21 states in which all of the following were true: they held primaries in 2016, an exit poll was taken in them by the major media organizations, and the exit poll had a sufficient number of black respondents to draw conclusions about who they supported. (The primary qualifier is important because in caucus states like Iowa, the total popular vote count was not the official result reported by the election.) These states are South Carolina, Alabama, Arkansas, Georgia, Oklahoma, Tennessee, Texas, Virginia, Michigan, Mississippi, Florida, Illinois, North Carolina, Ohio, Wisconsin, New York, Connecticut, Maryland, Pennsylvania, and Indiana. I would have liked to include California, but they voted so late in 2016 that the media didn’t take an exit poll. Here is a spreadsheet with the math.

Here are the results:

Candidate   White    Black
GOP sum     63.9%    12.6%
Clinton     17.5%    67.8%
Sanders     17.9%    18.7%

In other words, my claim holds across the states for which there is data: on the whole, black voters are at least as likely to support Sanders as white voters. (The difference between the two is +0.8% in favor of black voters, but I suspect that’s within the margin of error of this kind of research.)

Now, a certain kind of pundit might be inclined to respond as follows: “If you look just at the relative support for Clinton vs. Sanders among white voters, you’ll see that Sanders edges out, and so it remains true that Sanders lost the race because of poor support among minorities.”

I find this sort of analysis rather unhelpful. To put it simply, what we are imagining is disenfranchising all minorities … in which case, yes, Sanders would have won the 2016 Democratic primary, and then would have gotten utterly crushed in the general election because the Democrats depend on minority support for their basic viability as a party. It’s wrong in another way too: the pundit (at least rhetorically) takes the point of view of Sanders, and decides that “blame” needs to be parceled out to various Democratic primary demographic groups according to the degree to which they failed to support him. (Alternatively, a pundit might take a rhetorical position opposing Sanders and blame him for failing to reach out to these groups.) This isn’t really what’s happening in a primary. The reason that moderate and conservative black voters play such an enormous role in the Democratic primary is that almost two thirds of white voters are so far right that they don’t vote in the Democratic party primary at all!

Now, you might imagine a less racialized (and simpler) country in which the major political parties were basically in alignment with the range of political views along a left-right spectrum. There would be a lot more black Republican voters. The question of why we don’t live in something closer to that world is an interesting one; FiveThirtyEight took this question on directly in a recent article.5 Their conclusion was that “social pressure is what cements that relationship between the black electorate and the Democratic party”. The word “cements” is doing a lot of work here. Social pressure certainly can’t explain the majority of the effect; the same article says that 85% of black respondents identified as Democrats in an online poll where social pressure was not a factor.

It seems plausible to me that another significant factor is a response to the racialized politics of the Republican party, as the extreme proportion of white supporters in its ranks attests. If this is true, though, why wouldn’t the party take the pragmatic approach by toning down its rhetoric to pull in the many conservative minorities who are aligned with them on policy questions? Certainly, part of the answer is that they haven’t needed to so far, and that the rhetoric may serve to energize part of their white base, but what this research may suggest is that it can actually be helpful to a political party to have a large number of people consistently voting to nominate moderates in the opposing party’s primary process.

The promise of Sanders all along, of course, was that the supposed left-right spectrum is a lie. If people (and their candidates) do fall on a simple spectrum like that, then you can trivially show that the Condorcet winner will be a centrist. Even in a complicated two-party system like that of the United States, a centrist is expected to be the strongest candidate the majority of the time. (Obviously, the Electoral College throws a wrench into this.) But Sanders, and Trump to some extent, represent a claim that the true views of most voters are not well represented by the current two-party system, and that in fact someone very far to the left (or right) on the current spectrum might be more acceptable to the median voter than a centrist.

How else to understand Sanders’ candidacy at all? So far, he has not shown signs of being able to win a Democratic primary, suggesting (but not proving) that he’s too far left for many Democrats. If this is true, then he’d be sure to lose a general election that introduces an almost equal number of Republican voters. And yet, compared with other Democrats, he has surprisingly performed at or near the top of recent head-to-head polls against Donald Trump. What does that mean?

I suggest that no single explanation suffices: multiple factors are at work. One very important reason why Sanders would stand a chance in a general election is polarization. Most regular voters in this country are loyal to one party or the other, and loath to switch parties merely because of the ideology of their candidates. (Moreover, there are slightly more Democratic voters than Republican ones.) So if Sanders wins a Democratic primary, most of his support will come from loyal Democrats who don’t necessarily approve of all his policies. That said, it’s notable that Sanders has consistently performed at or near the top of these polls. I suggest this means there must be some truth to his claim to represent those who do not find themselves cleanly on the left-right American political spectrum.

It’s important to notice that these two explanatory factors pull in opposite directions. On a strict party-loyalty hypothesis, it wouldn’t matter at all who gets nominated. This seems to be mostly true (for the small number of candidates who actually stand some chance of being nominated), but it’s not the whole story. Sanders represents the possibility of pulling support beyond mere party loyalty, and he’s succeeded to some extent at that, but perhaps not enough to win a primary election.

In the final analysis, this shows exactly why the exit poll based criticism of Sanders is misguided. Among Democrats, black voters are much less likely to support Sanders than white voters. But this is largely because of partisan demographics that Sanders can’t help: the Democratic party pulls in a number of surprisingly conservative black voters, while the Republican party presumably has a corresponding effect on many white voters who might be open to Sanders’ policy aims, but are more at home in their party’s racial antagonism.

On the whole, Sanders’ problem is not with black voters; they support him at equal or greater rates than do white voters. His problem is that his promise of pulling voters from both parties and those currently unaligned has not yet come to fruition. He has not been able to shift the majority of white voters away from their Republican or independent allegiances. The hope for left wing Sanders supporters must be that time and voter education will cause a realignment, and that people like Sanders will begin to see support across the political spectrum and from current non-voters. His high level of support from young voters does suggest some promise. But if the American political spectrum did accurately reflect the distribution of its voters, there would be little hope for candidates like Sanders in the near future. America simply has too many white people on the far right for that.


  1. And now the 2020 primary election as well. This article focuses on the 2016 election specifically, because in this election Sanders faced only one challenger, giving a clearer picture of where voters stood than in 2020, when he faced a very broad field. 

  2. https://www.cnn.com/election/2016/primaries/polls/sc/Dem 

  3. https://www.cnn.com/election/2016/primaries/polls/sc/Rep 

  4. This is neglecting third party voters. There are very few of them, and they appear to be disproportionately white, so if they were included, they would further reduce Sanders’ support among white voters. 

  5. https://fivethirtyeight.com/features/why-so-many-black-voters-are-democrats-even-when-they-arent-liberal/ 

God's RNG

09 September 2020


This is a short story that I wrote some time ago. It’s designed to illustrate some interesting properties of CSPRNGs (Cryptographically Secure Pseudorandom Number Generators), which form the bedrock of modern encryption techniques. In particular, the story highlights the fact that a single rather short key is sufficient to generate all the random numbers anyone will ever need. You don’t need a continuous source of pure entropy.
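For a flavor of what that looks like in practice, here’s a minimal Python sketch (my own illustration, using the third-party cryptography package): a single 256-bit key, stretched into an arbitrarily long stream of unpredictable bytes by running ChaCha20 as a keystream generator.

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms

key = os.urandom(32)   # one 256-bit seed, generated exactly once
nonce = bytes(16)      # fixed counter/nonce; fine as long as the key is never reused
stream = Cipher(algorithms.ChaCha20(key, nonce), mode=None).encryptor()

# Encrypting zeros with a stream cipher yields the raw keystream: as many
# "random" bytes as you like, all determined by the 32-byte key above.
random_bytes = stream.update(b"\x00" * 64)
print(random_bytes.hex())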

Part 1: The Universal RNG

During the universe’s design stage, it was realized that making some events probabilistic from the point of view of human beings was a greatly desirable property. (Several small proto-universes failed shortly after the intelligent species that populated them was able to work out the deterministic laws behind every event.) For the universe that ultimately went into production, it was decided that making a great many events (including quantum fluctuations) chancy was the safest approach.

True Believers hold that all these chancy events are Really random. That is, they believe that whenever the universe needs a new random number, God uses their infinite power to create it in their mind ex nihilo, and there’s simply nothing more to be said in the way of explanation. Skeptics hold that not even God is capable of acts of creation of this kind, and that there must be some ultimately deterministic story about where these numbers come from.

As it turns out, both are wrong. God is perfectly capable of creating Really random numbers, but although omnipotent is far too lazy to continue doing this all the time. Perhaps God has other universes to tend to, or maybe Heaven needs random numbers for some secret purpose of its own. In any case, the fact is that God only ever bothered to generate 2^8 random bits, in the form of a single 256-bit key embedded into the universe’s core systems. Whenever any “chancy” event needs to happen, the random information is generated by the universe using a CSPRNG that (entirely by coincidence) is exactly equivalent to ChaCha201. So it turns out that the universe is fundamentally deterministic, just not in the way anyone expected.

A stubborn cohort of angels on the review board insisted that this was an inelegant solution to the problem of randomness. They tried to convince God to create an Oracle that would generate Really random numbers all on its own for the universe to use. The Almighty was undeterred, ultimately ruling that the system as designed was “good enough”. Many suspected that God’s real reason was that having another thing around capable of generating uncaused events was taken to be a slight to the Divine dignity. Lucifer led several others in resigning from the panel in protest. Following a disruptive sit-in at God’s office, he was cast like lightning from Heaven.

Unsurprisingly, God was right about the system being good enough. After all, the whole point of the system was to prevent humans from predicting events that were meant to be unpredictable without requiring the intervention of miracles.2 One complaint was that the total number of requests to the RNG over the universe’s lifetime might possibly exceed a value at which the RNG would begin to cycle. However, it was shown that collecting enough data to exploit (or even have a chance at detecting) the issue was physically impossible due to the energy constraints of the universe.

Of course, no steps needed to be taken to prevent direct attacks on the CSPRNG’s state, or key recovery, since these were coded into the OS of the universe itself, and life forms in the universe would have no access to them. So that’s the system that was ultimately put in place: every “random” event that ever happens in this universe can ultimately be traced back to its initial state and the single 256 bit key that makes it unique. While other designs based on entropy pools with estimators were considered, God worried about the universe blocking if at some point they forgot to update the pool with new random data. It was determined that the CSPRNG approach provided enough practical security with a single hard-wired key set at the beginning of time.

Part 2: God’s /dev/random

It is well known that God has a phone number.3 What is less commonly known is that when God designed the universe, they added a number of other interfaces intended to be helpful to human beings. The True Believers, for example, have it as an article of their faith that God is listening in all the time on /dev/null. But the most useful interface in God’s /dev is undoubtedly /dev/random.

The design team realized quite early on that humans themselves would need sources of randomness. Since every bit of random data is ultimately generated by God’s RNG anyway, it was decided that /dev/random should just return data straight from the RNG with no scrambling. Although this provided far more direct access to the RNG than its designers had initially anticipated, it was determined that its security margin was sufficiently high to allow for these queries.

Access to /dev/random was provided on Earth in a number of high and holy places. God’s interfaces are so fast that they are able to provide data to human devices at the full speed of any interface any humans have been able to construct so far. Of course, all these interfaces have to get their data ultimately from a single device built into the universal mainframe, but light travel time isn’t a problem since that was a constraint built into the universe’s physical laws, not something that applies to the machine the universe runs on.

For a long time humans were happy to take their devices to the nearest /dev to be filled up with random data. But Lucifer, displeased with the success of the system, tricked one of them into accepting data from an illicit, possibly backdoored source. God was pissed, and things generally went to hell for a while after that. While some authorities wanted to shut down the /dev system entirely, God ultimately decided that since the security of /dev/random hadn’t been compromised in any way, they would leave the system in place. In general, however, access to /dev for ordinary humans became more difficult after this, and many of the high and holy places fell under the control of nation states or were sold off to corporations for extraction of their natural resources.

It gradually came about that humans started to need random numbers more frequently, and even though you could get as many numbers as you needed from /dev/random, the latency caused by having to travel to an accessible holy place was considered unacceptable. Instead, it became common for priests to provide their own sources of random numbers. They would do this by traveling themselves and returning with 256 bits of random data, which they would then use as a key to seed a CSPRNG that was (incidentally) similar to God’s own. While the priests’ computers could provide random data only much more slowly than /dev/random, the latency was much better because people didn’t have to travel so far. This method managed to sustain most civilizations for centuries, resulting in a hierarchy where only the highest ranking bishops had direct access to /dev/random, and local priests would seed their own CSPRNGs from 256-bit keys provided by their bishops’ RNGs instead of directly from God’s sources.

Cracks emerged. The role of priests in this scheme became widely regarded as suspect. After all, an untrustworthy priest could be providing random bits from a less-than-holy source, and if anyone on the chain between you and God’s RNG was a bad actor, they could potentially uncover your secrets. Protestants began to insist on making the journey to /dev themselves to get their own keys, and rolling your own PRNG quickly became a widespread practice. A number of televangelists were found to be using keys of unknown origin with less than 32 bits of entropy.

Cryptographers eventually invented solutions for collecting and estimating entropy, and most skeptics stopped caring about having any link back to the “supposedly” holy /dev/random. Instead, their operating systems gathered entropy from secular sources like ordinary “random” events. Of course any key they created was ultimately the result of deterministic processes that had their origin in God’s RNG, but practically speaking this had no effect on their security.

Perhaps most surprising of all was the group of Satanists who insisted on using random numbers generated from secret sources supposedly provided by Lucifer himself. They claim Lucifer has crafted mechanisms for generating Really random numbers, such that every number you get from the Devil’s /dev/random is entirely Real, not backed by a PRNG. Expert theologians and cryptographers currently believe this to be impossible. Even if Lucifer is using some kind of chancy mechanism to generate these numbers, the process must be ultimately deterministic and known to God.

Part 3: Unexpected Consequences

A number of crypto nerds needed to generate 2048-bit keys for use with asymmetric cryptosystems like RSA. Many of them suspected that God’s RNG might be a PRNG or otherwise distrusted it, and decided, like the secularists, to collect their own sources of entropy from the universe. They relied on only the most conservative estimates of entropy, collecting a full 2048 bits of entropy into their pools before turning that data via convoluted methods into their keys. The irony of this, of course, was that every event in all of space-time put together only contained the 256 bits of true randomness hard coded into it at the moment of creation. Their keys were no better than 2048 bits taken from God’s /dev/random, indeed no better than 2048 bits taken from a CSPRNG seeded by 256 bits taken from God’s /dev/random.

There is a strange beauty to the fact that all of this was fundamentally secure. No one, no matter how many bits they stored and analyzed from God’s RNG, had any hope of doing better than 50/50 at guessing the next bit that would come out, so those bits could be securely used by anyone for any purpose. So long as every person in the chain from God’s RNG was trustworthy, each person could take a mere 256 bits from the person before them to seed a CSPRNG, and every 256 bits that came out of the 10th person’s CSPRNG was just as cryptographically secure as the same amount of data taken from God’s own /dev/random. 256 bits of sufficiently unpredictable data really is enough for everyone, forever.4
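In modern terms, that chain is just repeated key derivation. A hypothetical sketch, with SHAKE-256 standing in for each person’s CSPRNG:

import hashlib, os

seed = os.urandom(32)  # "God's" 256-bit key (simulated here, of course)
for person in range(10):
    # Each person expands their seed into output and hands 256 bits of it
    # to the next person, who uses it as their own seed.
    seed = hashlib.shake_256(seed).digest(32)
print(seed.hex())      # the 10th seed: still unpredictable to any outsider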

Unfortunately, it didn’t last forever. One of God’s interns introduced a use-after-free into the universe’s code, and a too-clever hacker who found their way into one of the remaining high and holy places managed to root the universal mainframe. In a matter of minutes, they had accidentally triggered a debugging function that had been left in the code, which led to a kernel panic. The universe went out like a light.


  1. To be precise, God used ChaCha20 with what Daniel J. Bernstein calls “fast-key-erasure” here. The point of this isn’t to provide protection against backtracking (key recovery was assumed to be impossible by the design team), but in this case is an efficient and secure way of rekeying which is required by the ChaCha20 cipher because of its smallish 64 bit counter. God briefly considered AES-256-CTR, but decided against it because of its small block size (128 bits), which makes it possible to distinguish from a random oracle with a sufficient number of requests. In theory fast-key-erasure might be enough to protect against this, even without rekeying with new randomness, but the security margin was deemed insufficient in light of available alternatives. 

  2. Additionally, leaving open the possibility (from the human point of view) that the universe was non-deterministic was discovered to have psychological benefits. 

  3. It’s 42, as suggested by the philosopher Majikthise in Douglas Adams’ Hitchhiker’s Guide to the Galaxy. Unfortunately, God did not put any audio interfaces in /dev.

  4. Based on my reading of Bernstein’s article here.

The Odds of a Correct First Guess in Clue

09 September 2020


Prompted by a strange dream, I decided to calculate what your odds are of correctly guessing the three pieces of evidence the first time in the game of Clue.

In practice, successfully doing this is likely to provoke accusations of cheating. But a simple calculation will show that this is likely undeserved. In a standard game of Clue1, there are six character cards, six weapon cards, and nine location cards. Without any information at all, that puts the odds of a correct guess on your first turn at 1/6 × 1/6 × 1/9 = 1/324: rare, but frequent enough that anyone who plays Clue many times is likely to encounter it. Keep in mind that each player has these odds on their first guess, which significantly raises the chances of ever seeing it happen in a game.

Of course, in every game of Clue each player will have some evidence, and so the odds of a correct first guess go up quite a bit. How much? That depends on how much evidence (how many cards) you receive.

The rules for the distribution of evidence are pretty simple. The three “correct” cards are removed from the deck of evidence; the rest is shuffled and distributed to the players as evenly as possible. The players then proceed to interrogate each other about the cards they have, in order to eliminate live possibilities about the correct combination of person, weapon, and location. You hope to eliminate all but one combination (the correct one) before any other player can do so. In my circles, when children are playing, the players are arranged so that the younger ones receive more cards than the older if the cards can’t be divided evenly.

Not every combination of cards is equally likely. If you are to receive five cards, you’re most likely to receive either two each of two of the card types and one of the third, or three locations, one weapon, and one person. These five-card hands are dealt a combined 58% of the time! In addition, some hands make a correct first guess easier than others: for a five-card hand, the best hand (all characters or all weapons) gives you almost three times better odds than the worst one (four locations plus one other card). Note that in actual play the better hands tend to be heavy in locations, because then you have fewer rooms to visit; a low-location hand only improves your chances of guessing blindly.

Okay, so what we have to do is figure out the odds of each hand combination (multiset), multiply that by the chance of a correct first guess given that hand, and sum up the results to get the total odds of a correct first guess (assuming a perfectly shuffled deck). I wrote Python code to do that here.
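Here’s a condensed sketch of that calculation (my own reconstruction, not necessarily identical to the linked code). After the three solution cards are removed, the deck holds 5 characters, 5 weapons, and 8 locations; a hand eliminates its own cards, and a blind guess is uniform over what remains.

from math import comb

def first_guess_odds(hand_size):
    """P(correct blind first guess), averaged over all hands of this size."""
    total = 0.0
    for c in range(min(5, hand_size) + 1):          # character cards held
        for w in range(min(5, hand_size - c) + 1):  # weapon cards held
            l = hand_size - c - w                   # location cards held
            if l > 8:
                continue
            # Chance of being dealt exactly this (c, w, l) combination.
            p_hand = comb(5, c) * comb(5, w) * comb(8, l) / comb(18, hand_size)
            # Guess uniformly among possibilities your hand hasn't eliminated.
            p_guess = 1 / ((6 - c) * (6 - w) * (9 - l))
            total += p_hand * p_guess
    return total

for n in range(7):
    print(f"{n} cards: 1 in {1 / first_guess_odds(n):.0f}")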

Now, here are the results! Some of the hands aren’t actually possible, because standard Clue supports at most six players, which means every player gets at least three cards; but I’ve included the odd cases where a player might start out with zero to two cards instead.

[A graph showing the rarity of getting a correct first guess for each hand size.]

So for a five-card hand, you’d expect to guess correctly the first time about once every 136 games. With six cards that improves to once every 111 games! Combining these facts with multiple players, you can show that fair games of Clue will end with a player solving the mystery on their first turn once every 30-40 games.

A Clue bot?

Thinking about this problem made me consider writing a Clue bot, but I ended up deciding against it. It might be an interesting project: you can do a very good approximation of perfect play with a bot that just tabulates its knowledge about every player’s hand and uses a simple pathfinding algorithm to efficiently traverse the board.

However, there are two good reasons not to bother. One is that Clue isn’t a “fair” game: an improved strategy may reduce your win rate rather than improve it. (In this specific sense, both Chess and Candy Land are fair.) The reason for this is that the standard rules of Clue say:

To make a Suggestion, move a Suspect and a Weapon into the Room that you just entered.

Normally, moving around the board is a slow process, since rooms are fairly far apart and you only get to move one d6 each turn. (This also adds quite a bit of luck into the game.) However, because the murder suspects are also other players, the above rule means that each guess (“Suggestion”) you make will instantly teleport one of them into the room with you. This can either aid (by vastly reducing travel time) or harm (by preventing an intended move) another player.

With coordination among the other players, it’s possible to harass one player and make it almost impossible to plan movements. Even without this unfair practice, it’s often in the interest of individual players to harass those of equal or greater skill to them. That’s just clever play! This can backfire, of course, but the better a player (or bot) is, the more likely other players are to attempt it, and it can make intelligent pathfinding impossible.

There’s a simpler reason not to bother with a bot, however, and that’s that close to perfect play is already easily achievable by humans. We’re already pretty good at intuiting optimal routes, and extracting as much information as possible from gameplay is easily done with an algorithm:

The game comes with worksheets for the players to use, which list every card in rows and have several columns (probably intended to save paper over multiple games). Simply assign the first column to yourself, and every succeeding column to the other players in the order of play. The additional columns are used to collect any information you can obtain about what hands the other players have. At the top of each column write the number of cards that player has. Use your own column to summarize everything you know about the solution. An “x” means that you know that a card is not part of the solution, and a box means that you know it is.

For the other columns, a box means that the player does not have the corresponding card. An “x” means that they do (and therefore, that there should also be an “x” in your column, the “solution” column). Whenever a player is not able to show any cards to someone (including you), place a box in each of the rows for that player. When a player shows a card to someone besides you, place a tiny number in that column in every row they might have shown a card from. (Simply increment the number you use in each column every time you need a new one.) Whenever logic forces you to place a box in someone’s column, check whether only one row sharing a given number remains; if so, you can put an “x” there. If you can work out every card that a player has, you can put a box in every other row.

Example: Player 1 suggests Ms. Scarlet, the candlestick, and the ballroom. Player 2 has none of these cards, so you put a box on each one in their column. Player 3 shows a card, so you put a “1” in each of those three rows in their column. Player 2 suggests Ms. Scarlet, the knife, and the kitchen. Player 3 has none of these cards, so you now have a box for Ms. Scarlet in their column. It comes around to your turn, and you suggest Mr. Green, the candlestick, and the library. Player 1 shows you the candlestick. So you put an “x” in their column for the candlestick, which means a box belongs in Player 3’s column for the candlestick. Now you’re only left with one “1” in that column, on the ballroom. So you know Player 3 must have shown Player 1 that card, and you can put an “x” there. Now you know it’s not part of the correct solution!
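If you’d rather see that bookkeeping as code, here’s a minimal sketch of the same logic (my own illustration; all names are hypothetical, and the deck is abbreviated to the cards in the example). It tracks which cards each player might still hold, records the numbered “showed a card” constraints, and propagates deductions just as above.

# A sketch of the worksheet logic; the example above is replayed at the end.
CARDS = {"Ms. Scarlet", "Mr. Green", "candlestick", "knife",
         "ballroom", "kitchen", "library"}

class Worksheet:
    def __init__(self, players):
        self.players = players
        self.possible = {p: set(CARDS) for p in players}  # cards p might hold
        self.known = {p: set() for p in players}          # cards p surely holds
        self.shown = []  # "tiny number" constraints: (player, suggested cards)

    def has_none(self, player, cards):
        # The player couldn't answer a suggestion: a box in each of those rows.
        self.possible[player] -= set(cards)
        self._propagate()

    def showed_someone(self, player, cards):
        # They showed another player a card: record the numbered constraint.
        self.shown.append((player, frozenset(cards)))
        self._propagate()

    def showed_me(self, player, card):
        # They showed *you* a card: an "x" in their column.
        self.known[player].add(card)
        self._propagate()

    def _propagate(self):
        changed = True
        while changed:
            changed = False
            # A card in one player's hand is in nobody else's.
            for p in self.players:
                for q in self.players:
                    if q != p and self.known[p] & self.possible[q]:
                        self.possible[q] -= self.known[p]
                        changed = True
            # If a numbered constraint has only one live row left, that's an "x".
            for player, cards in self.shown:
                live = cards & self.possible[player]
                if len(live) == 1 and not live <= self.known[player]:
                    self.known[player] |= live
                    changed = True

    def ruled_out(self):
        # Any card known to be in a hand can't be part of the solution.
        return set().union(*self.known.values())

ws = Worksheet(["me", "P1", "P2", "P3"])
ws.has_none("P2", {"Ms. Scarlet", "candlestick", "ballroom"})
ws.showed_someone("P3", {"Ms. Scarlet", "candlestick", "ballroom"})
ws.has_none("P3", {"Ms. Scarlet", "knife", "kitchen"})
ws.showed_me("P1", "candlestick")
print(ws.ruled_out())  # includes the ballroom: it's not in the solution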

Obviously there’s a bit more improvement you can do with ideal guessing, and it might make sense to keep track of what other players know so you can surmise if they’re about to make a correct accusation, in which case you might want to jump the gun if you have a 50/50 shot. But 95% of strategy can be easily implemented by a player following the approach above.

Validating DNSSEC Locally, The 2020 Way

22 March 2020


You can find plenty of old, bad guides on validating DNSSEC online. The worst ones I’ve seen just say to do

% dig example.org

and tell you that the status: NOERROR you see in the response means that DNSSEC was validated (or at least, if it exists for that domain, it was validated).

That’s not true at all. Some resolvers do in fact validate this information for you, like Google’s DNS:

% dig @8.8.8.8 bad.dnssec-or-not.com

does give you status: SERVFAIL. But obviously you shouldn’t be counting on that. A DNS server that doesn’t validate DNSSEC, like Level3’s, will happily answer your query with the NOERROR status.

% dig @4.2.2.1 bad.dnssec-or-not.com

Some slightly better guides tell you to look for the AD flag. This is part of an IETF standard by which a recursive resolver can indicate to you that it has verified the DNSSEC data. So if you run the two commands above on a site with valid DNSSEC data (like example.org), you’ll see that the response from Google includes the ad flag, but the response from Level3 does not.
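The relevant line of dig’s output looks something like this (the counts will vary; the ad at the end of the flags list is the part to look for):

% dig @8.8.8.8 example.org | grep flags
;; flags: qr rd ra ad; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1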

Does this mean that you have verified the DNSSEC data? No. It means that Google says it has verified the DNSSEC data. And the interesting thing is that it’s actually quite difficult to verify it yourself, at least with the traditional tools. The tools you’re using most of the time, including dig, nslookup, and probably your browser too, are not verifying DNSSEC data. They’re relying on you to have configured a resolver with DNSSEC support, and on that resolver to return SERVFAILs if you query a domain with broken DNSSEC. It’s entirely based on trust.

I’ve found one or two guides out there which tell you how to fetch all the DNSSEC data you need and verify it yourself piece by piece. Most of the time you’ll use dig to get the data you need.
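To give a taste of the manual route, these are the kinds of queries such guides have you run (just the first links in the chain, not a complete procedure):

% dig +dnssec example.org A    # the A records plus their RRSIG signatures
% dig example.org DNSKEY       # the zone’s signing keys
% dig example.org DS           # the DS digest published in the parent zone

You then have to check each signature against the right key, hash each key to compare against the DS record, and repeat for every zone up the chain to the root.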

You can certainly do this, as long as you don’t slip up on any part of the process. (Most guides seem woefully incomplete on exactly how you need to do this.) But it turns out that a few years ago the BIND folks added a new tool, delv (alongside their others, nslookup and dig), that does automatically verify DNSSEC. I discovered it by accident while reading a man page. You can get it on Arch Linux in the Extra package bind-tools, and Ubuntu and Debian have it in dnsutils.

The syntax is very similar to dig’s. The rest of this post is pretty self-explanatory. Observe how delv discovers that the site’s DNSSEC is broken, even though it’s using a resolver that doesn’t verify DNSSEC.

% delv @4.2.2.1 +short bad.dnssec-or-not.com
;; validating bad.dnssec-or-not.com/A: no valid signature found
;; RRSIG failed to verify resolving 'bad.dnssec-or-not.com/A/IN': 4.2.2.1#53
;; resolution failed: RRSIG failed to verify

Compare dig:

% dig @4.2.2.1 +short bad.dnssec-or-not.com
173.230.152.222

And with a DNSSEC supporting resolver:

% delv @8.8.8.8 +short bad.dnssec-or-not.com
;; resolution failed: SERVFAIL

And with a site with working DNSSEC:

% delv @4.2.2.1 +nocrypto example.org
; fully validated
example.org.            82231   IN      A       93.184.216.34
example.org.            82231   IN      RRSIG   A 8 2 86400 20200402175057 20200312201336 63865 example.org. [omitted]

Some thoughts on evangelicalism

26 September 2019


I really enjoyed this article called The Evangelical Mind by Adam Kotsko. Parts of it reflect my experience growing up as an evangelical Christian very well, other parts do not. I have a few thoughts on the parts that don’t.

  1. One point of difference is music. Kotsko’s parents complained that their Christian radio station’s programming was “dull and conservative”. Kotsko says elsewhere that his father saw an important place for rock music in Christianity. My experience couldn’t be further from this. Even the most traditional music playing on Christian pop stations would have been regarded as wholly inappropriate for church, and questionable in general.

  2. Kotsko identifies the evangelical movement with the “seeker-sensitive” approach to church growth. Every church I attended as a child was violently opposed to this approach, and many of the pastors would rail against it (by name) from the pulpit. There was a constant fear that anything too friendly or enjoyable would water down the tough message of the gospel. The evangelicals I knew liked to point out that “narrow is the way…”

  3. Additionally, Kotsko accuses evangelicalism of “self-satisfied conformism”. While I think this is appropriate as a political and social point, Kotsko extends it to also mean that for the quintessential evangelical, “nothing could be stupider than expecting people to live by the teachings of Christ”. This would have been big news to my church, where nearly every member knew many verses of Romans 6 by heart. Their willingness to hold themselves to the Bible’s standards was certainly selective (never more so than on those political and social points), but the issue was always taken seriously. And apparently “arcane” points of doctrine like predestination were major issues: they were instrumental in a church split, in fact.

I rehearse this because I think Kotsko would not be surprised by any of it. It’s not simply that there are more serious and extreme evangelicals, as there are in any movement. It’s that this internal dissension is a central part of the evangelical movement itself. Whether you view evangelicalism as primarily a theological response to liberal traditions in the early 20th century, or a political response to the changing fabric of American culture of the 60s (as Kotsko does), it is undeniably characterized by paranoia and reactionary attitudes (as Kotsko says).

These are at the heart of modern evangelicalism’s instinct to eat itself. As Kotsko says, “Evangelical Christians nevertheless regard themselves as a persecuted and misunderstood minority, surrounded by a hostile secular culture that is actively seeking to deceive and corrupt their children.” Those who aren’t familiar with evangelicalism may be surprised to learn that this is no exaggeration. It’s a conspiracy theory as expansive as the Reptilian one, but believed by far more people. Beliefs like this are hard to go halfway on; they tend to consume you. You begin to see lizard people, or black helicopters, or “secularists” everywhere. When I came home from college after my first semester, I was excited to let everyone know there had been a mistake - not every non-evangelical had been a tool of Satan out to eat my soul. This did not go over very well.

When you take this kind of conspiratorial view of the world, it’s hard to stop with just those not in your group. Arguably this is made even harder by the plain fact that the majority of Americans claim to be Christians. If you’re going to maintain your self-understanding as a persecuted minority, while you’re the majority, you’ve got to believe that most of the people who claim to be on your side are actually infiltrators. And so it is: evangelicals are forever splitting into smaller, more specific, and more suspicious groups.

The points of difference, while taken extremely seriously by most evangelicals, are also necessarily created by this process. If you’re going to kick someone with almost identical beliefs out of your group, you need an important reason. What could be more important than a central doctrine like predestination, or not diluting your message with “seeker-friendly” music arrangements? Or what could be a more useful tool for purging your group of the infiltrators? The most serious evangelicals are always trying to purify themselves in this way. Controversies that seem unimportant to outsiders, like whose books Lifeway is selling, are great ways of figuring out who’s on the narrow path and who’s in danger of hellfire. Megachurches, in particular, are widely viewed as suspicious organizations that grift off an evangelical identity without any of its substance.

Once more, I note that I don’t think any of this would surprise Kotsko. This kind of continual purging is central to the evangelical experience, but the particular bugbears that apply to each evangelical subgroup are always unique. Mine viewed movies with suspicion, and thought that seeker-friendly worship was a sinister plot, but didn’t require women to cover their heads, use the KJV version of the Bible, or believe that drinking was inherently sinful. What I’m hoping this illustrates is how Kotsko’s particular experience fits into evangelicalism as a whole - a movement that’s a weird continuation of the paranoia of the reactionary conservatism of a prior generation.

