Adam Fontenot

God's RNG

09 September 2020

This is a short story that I wrote some time ago. It’s designed to illustrate some interesting properties of CSPRNGs (Cryptographically Secure Pseudo Random Number Generators), which form the bedrock of modern encryption techniques. In particular, the story highlights the fact that a single rather short key is sufficient to generate all the random numbers anyone will ever need. You don’t need a continuous source of pure entropy.

Part 1: The Universal RNG

During the universe’s design stage, it was realized that making some events probabilistic from the point of view of human beings was a greatly desirable property. (Several small proto-universes failed shortly after the intelligent species that populated them was able to work out the deterministic laws behind every event.) For the universe that ultimately went into production, it was decided that making a great many events (including quantum fluctuations) chancy was the safest approach.

True Believers hold that all these chancy events are Really random. That is, they believe that whenever the universe needs a new random number, God uses their infinite power to create it in their mind ex nihilo, and there’s simply nothing more to be said in the way of explanation. Skeptics hold that not even God is capable of acts of creation of this kind, and that there must be some ultimately deterministic story about where these numbers come from.

As it turns out, both are wrong. God is perfectly capable of creating Really random numbers, but, although omnipotent, is far too lazy to continue doing this all the time. Perhaps God has other universes to tend to, or maybe Heaven needs random numbers for some secret purpose of its own. In any case, the fact is that God only ever bothered to generate 2^8 random bits, in the form of a single 256 bit key embedded into the universe’s core systems. Whenever any “chancy” event needs to happen, the random information is generated by the universe using a CSPRNG that (entirely by coincidence) is exactly equivalent to ChaCha20.[1] So it turns out that the universe is fundamentally deterministic, just not in the way anyone expected.

A stubborn cohort of angels on the review board insisted that this was an inelegant solution to the problem of randomness. They tried to convince God to create an Oracle that would generate Really random numbers all on its own for the universe to use. The Almighty was unmoved, ultimately ruling that the system as designed was “good enough”. Many suspected that God’s real reason was that having another thing around capable of generating uncaused events was taken to be a slight to the Divine dignity. Lucifer led several others in resigning from the panel in protest. Following a disruptive sit-in at God’s office, he was cast like lightning from Heaven.

Unsurprisingly, God was right about the system being good enough. After all, the whole point of the system was to prevent humans from predicting events that were meant to be unpredictable without requiring the intervention of miracles.[2] One complaint was that the total number of requests to the RNG over the universe’s lifetime might possibly exceed a value at which the RNG would begin to cycle. However, it was shown that collecting enough data to exploit (or even have a chance at detecting) the issue was physically impossible due to the energy constraints of the universe.

Of course, no steps needed to be taken to prevent direct attacks on the CSPRNG’s state, or key recovery, since these were coded into the OS of the universe itself, and life forms in the universe would have no access to them. So that’s the system that was ultimately put in place: every “random” event that ever happens in this universe can ultimately be traced back to its initial state and the single 256 bit key that makes it unique. While other designs based on entropy pools with estimators were considered, God worried about the universe blocking if at some point they forgot to update the pool with new random data. It was determined that the CSPRNG approach provided enough practical security with a single hard-wired key set at the beginning of time.

Part 2: God’s /dev/random

It is well known that God has a phone number.[3] What is less commonly known is that when God designed the universe, they added a number of other interfaces intended to be helpful to human beings. The True Believers, for example, have it as an article of their faith that God is listening in all the time on /dev/null. But the most useful interface in God’s /dev is undoubtedly /dev/random.

The design team realized quite early on that humans themselves would need sources of randomness. Since every bit of random data is ultimately generated by God’s RNG anyway, it was decided that /dev/random should just return data straight from the RNG with no scrambling. Although this provided far more direct access to the RNG than its designers had initially anticipated, it was determined that its security margin was sufficiently high to allow for these queries.

Access to /dev/random was provided on Earth in a number of high and holy places. God’s interfaces are so fast that they are able to provide data to human devices at the full speed of any interface any humans have been able to construct so far. Of course, all these interfaces have to get their data ultimately from a single device built into the universal mainframe, but light travel time isn’t a problem since that was a constraint built into the universe’s physical laws, not something that applies to the machine the universe runs on.

For a long time humans were happy to take their devices to the nearest /dev to be filled up with random data. But Lucifer, displeased with the success of the system, tricked one of them into accepting data from an illicit, possibly backdoored source. God was pissed, and things generally went to hell for a while after that. While some authorities wanted to shut down the /dev system entirely, God ultimately decided that since the security of /dev/random hadn’t been compromised in any way, they would leave the system in place. In general, however, access to /dev for ordinary humans became more difficult after this, and many of the high and holy places fell under the control of nation states or were sold off to corporations for extraction of their natural resources.

It gradually came about that humans started to need random numbers more frequently, and even though you could get as many numbers as you needed from /dev/random, the latency caused by having to travel to an accessible holy place was considered unacceptable. Instead, it became common for priests to provide their own sources of random numbers. They would do this by traveling themselves and returning with 256 bits of random data, which they would then use as a key to seed a CSPRNG that was (incidentally) similar to God’s own. While the priests’ computers could provide random data only much more slowly than /dev/random, the latency was much better because people didn’t have to travel so far. This method managed to sustain most civilizations for centuries, resulting in a hierarchy where only the highest ranking bishops had direct access to /dev/random, and local priests would seed their own CSPRNGs from 256 bit keys provided by their RNGs instead of directly from God’s sources.

Cracks emerged. The role of priests in this scheme became widely regarded as suspect. After all, an untrustworthy priest could be providing random bits from a less-than-holy source, and if anyone on the chain between you and God’s RNG was a bad actor, they could potentially uncover your secrets. Protestants began to insist on making the journey to /dev themselves to get their own keys, and rolling your own PRNG quickly became a widespread practice. A number of televangelists were found to be using keys of unknown origin with less than 32 bits of entropy.

Cryptographers eventually invented solutions for collecting and estimating entropy, and most skeptics stopped caring about having any link back to the “supposedly” holy /dev/random. Instead, their operating systems gathered entropy from secular sources like ordinary “random” events. Of course any key they created was ultimately the result of deterministic processes that had their origin in God’s RNG, but practically speaking this had no effect on their security.

Perhaps most surprising of all was the group of Satanists who insisted on using random numbers generated from secret sources supposedly provided by Lucifer himself. They claim Lucifer has crafted mechanisms for generating Really random numbers, such that every number you get from the Devil’s /dev/random is entirely Real, not backed by a PRNG. Expert theologians and cryptographers currently believe this to be impossible. Even if Lucifer is using some kind of chancy mechanism to generate these numbers, the process must be ultimately deterministic and known to God.

Part 3: Unexpected Consequences

A number of crypto nerds needed to generate 2048 bit keys for use with asymmetric cryptosystems like RSA. Many of them suspected that God’s RNG might be a PRNG or otherwise distrusted it, and decided like the secularists to collect their own sources of entropy from the universe. They relied on only the most conservative estimates of entropy, collecting a full 2048 bits into their pools before turning that data via convoluted methods into their keys. The irony of this, of course, was that every event in all of space-time put together only contained the 256 bits of true randomness hard coded into it at the moment of creation. Their keys were no better than 2048 bits taken from God’s /dev/random, and indeed no better than 2048 bits taken from a CSPRNG seeded by 256 bits from God’s /dev/random.

There is a strange beauty to the fact that all of this was fundamentally secure. No one, no matter how many bits they stored and analyzed from God’s RNG, had any hope of doing better than 50/50 at guessing the next bit that would come out, which someone else could securely use for any purpose. So long as every person in the chain from God’s RNG was trustworthy, each person could take a mere 256 bits from the person who came before to seed a CSPRNG, and every 256 bits that came out of the 10th person’s CSPRNG was just as cryptographically secure as the same amount of data taken from God’s own /dev/random. 256 bits of sufficiently unpredictable data really is enough for everyone, forever.[4]

Unfortunately, it didn’t last forever. One of God’s interns introduced a use-after-free into the universe’s code, and a too-clever hacker who found their way into one of the remaining high and holy places managed to root the universal mainframe. In a matter of minutes, they had accidentally triggered a debugging function that had been left in the code, which led to a kernel panic. The universe went out like a light.

  1. To be precise, God used ChaCha20 with what Daniel J. Bernstein calls “fast-key-erasure” here. The point of this isn’t to provide protection against backtracking (key recovery was assumed to be impossible by the design team), but in this case is an efficient and secure way of rekeying which is required by the ChaCha20 cipher because of its smallish 64 bit counter. God briefly considered AES-256-CTR, but decided against it because of its small block size (128 bits), which makes it possible to distinguish from a random oracle with a sufficient number of requests. In theory fast-key-erasure might be enough to protect against this, even without rekeying with new randomness, but the security margin was deemed insufficient in light of available alternatives. 

  2. Additionally, leaving open the possibility (from the human point of view) that the universe was non-deterministic was discovered to have psychological benefits. 

  3. It’s 42, as suggested by the philosopher Majikthise in Douglas Adams’ Hitchhiker’s Guide to the Galaxy. Unfortunately, God did not put any audio interfaces in /dev.

  4. Based on my reading of Bernstein’s article here
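As an aside to footnote 1, the “fast-key-erasure” pattern is simple enough to sketch. The toy Python below illustrates only the rekeying idea: a keyed PRF expands the current key into blocks, and the first block immediately becomes the next key, so the old key can be destroyed before any output is handed out. BLAKE2b stands in for ChaCha20 here so the example needs nothing beyond the standard library; all the names are mine, and this is emphatically not for real cryptographic use.

```python
import hashlib

class FastKeyErasureRNG:
    """Toy fast-key-erasure RNG: BLAKE2b in counter mode as the PRF."""

    BLOCK = 64  # bytes of PRF output per counter value

    def __init__(self, key: bytes):
        assert len(key) == 32  # a single 256 bit seed, like the story's key
        self._key = key

    def _prf(self, counter: int) -> bytes:
        # Keyed BLAKE2b of the counter: a stand-in for a ChaCha20 block.
        return hashlib.blake2b(counter.to_bytes(8, "little"),
                               key=self._key, digest_size=self.BLOCK).digest()

    def random_bytes(self, n: int) -> bytes:
        # Block 0 is reserved: its first 32 bytes become the next key,
        # so the key used for this call is replaced before output leaves.
        next_key = self._prf(0)[:32]
        out = bytearray()
        counter = 1
        while len(out) < n:
            out += self._prf(counter)
            counter += 1
        self._key = next_key  # overwrite ("erase") the old key
        return bytes(out[:n])
```

Seeded with the same 256 bit key, two instances produce identical streams, which is exactly the sense in which the story’s universe is deterministic yet unpredictable from the inside.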

The Odds of a Correct First Guess in Clue

09 September 2020

Prompted by a strange dream, I decided to calculate what your odds are of correctly guessing the three pieces of evidence the first time in the game of Clue.

In practice, successfully doing this is likely to provoke accusations of cheating. But a simple calculation will show that this is likely undeserved. In a standard game of Clue1, there are six character cards, six weapon cards, and nine location cards. Without any information at all, that gives the odds of correctly guessing on your first turn at only 1/6 × 1/6 × 1/9 = 1/324, which is frequent enough that anyone who plays Clue many times is likely to encounter it. Keep in mind that each player has these odds on their first guess, which significantly raises the chances of ever seeing it happen in a game.

Of course, in every game of Clue each player will have some evidence, and so the odds of a correct first guess go up quite a bit. How much? That depends on how much evidence (how many cards) you receive.

The rules for the distribution of evidence are pretty simple. The three “correct” cards are removed from the deck of evidence, which is then shuffled and dealt to the players as evenly as possible. The players then proceed to interrogate each other about the cards they have, in order to eliminate live possibilities about the correct combination of person, weapon, and location. You hope to eliminate all but one combination (the correct one) before any other player can do so. In my circles, when children are playing, the players are arranged so that the younger will receive more cards than the older if they can’t be divided evenly.

Not every combination of cards is equally likely. If you are to receive five cards, you’re most likely to receive two each of two of the card types and one of the third, or three locations, one weapon, and one person. These five-card hands are dealt a combined 58% of the time! In addition, some hands make a correct first guess easier than others: for a five-card hand, the best hand (all characters or all weapons) gives you almost three times better odds than the worst one (four locations, one other card). Note that in actual play, the better hands tend to be heavy in locations, because you then have fewer rooms to visit; a low-location hand only improves your chances of guessing blindly.

Okay, so what we have to do is figure out the odds of each hand combination (multiset), and multiply that by the chances of a correct first guess for each hand, and sum up the results to get the total odds of a correct first guess (assuming a perfectly shuffled deck). I wrote Python code to do that here.
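The linked script isn’t reproduced in the post, but the calculation is short. In the sketch below (function names are my own), the deck after removing the solution holds 5 characters, 5 weapons, and 8 locations, and a hand of c characters, w weapons, and l locations leaves a 1/((6-c)(6-w)(9-l)) chance of a correct blind guess:

```python
from fractions import Fraction
from math import comb

def hand_probability(c, w, l):
    """Chance of being dealt exactly c characters, w weapons, l locations."""
    k = c + w + l
    return Fraction(comb(5, c) * comb(5, w) * comb(8, l), comb(18, k))

def first_guess_odds(k):
    """Probability of a correct first guess, averaged over all k-card hands."""
    total = Fraction(0)
    for c in range(min(5, k) + 1):
        for w in range(min(5, k - c) + 1):
            l = k - c - w
            if l <= 8:
                # Your own cards can't be in the envelope, so the guess is
                # uniform over the remaining candidates of each type.
                total += hand_probability(c, w, l) * \
                    Fraction(1, (6 - c) * (6 - w) * (9 - l))
    return total
```

For a five-card hand this comes out to roughly 1 in 136, and the same hand-probability function reproduces the 58% figure for the four most common five-card hand shapes.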

Now, here are the results! Some of the hands aren’t possible, because standard Clue only supports six players, which means each of them would get three cards, but I’ve included “odd” games, where a player might start out with zero to two cards instead.

A graph showing the rarity of getting a correct first guess.

So for a five-card hand, you’d expect to guess correctly the first time about once every 136 games. With six cards that drops to once every 111 games! Combining these facts with multiple players, you can show that fair games of Clue will end with a player solving the mystery on their first turn once every 30 to 40 games.

A Clue bot?

Thinking about this problem made me consider writing a Clue bot, but I ended up deciding against it. It might be an interesting project: you can do a very good approximation of perfect play with a bot that just tabulates its knowledge about every player’s hand and uses a simple pathfinding algorithm to efficiently traverse the board.

However, there are two good reasons not to bother. One is that Clue isn’t a “fair” game: an improved strategy may reduce your win rate rather than improve it. (In this specific sense, both Chess and Candy Land are fair.) The reason for this is that the standard rules of Clue say:

To make a Suggestion, move a Suspect and a Weapon into the Room that you just entered.

Normally, moving around the board is a slow process, since rooms are fairly far apart and you only get to move one d6 each turn. (This also adds quite a bit of luck into the game.) However, because the murder suspects are also other players, the above rule means that each guess (“Suggestion”) you make will instantly teleport one of them into the room with you. This can either aid (by vastly reducing travel time) or harm (by preventing an intended move) another player.

With coordination among the other players, it’s possible to harass one player and make it almost impossible for them to plan movements. Even without this unfair practice, it’s often in the interest of individual players to harass those of equal or greater skill. That’s just clever play! This can backfire, of course, but the better a player (or bot) is, the more likely other players are to attempt it, and it can make intelligent pathfinding impossible.

There’s a simpler reason not to bother with a bot, however, and that’s that close to perfect play is already easily achievable by humans. We’re already pretty good at intuiting optimal routes, and extracting as much information as possible from gameplay is easily done with an algorithm:

The game comes with worksheets for the players to use which list every card in rows, and have several columns (probably intended to save paper over multiple games). Simply assign the first column to yourself, and every succeeding column to the other players in the order of play. The additional columns are used to collect any information you can obtain about what hands the other players have. At the top of each column write the number of cards that player has. Use your own column to summarize everything you know about the solution. An “x” means that you know that a card is not part of the solution, and a box means that you know it is.

For the other columns, a box means that the player does not have the corresponding card. An “x” means that they do (and therefore, that there should also be an “x” in your column, the “solution” column). Whenever a player is not able to show any cards to someone (including you), place a box in each of the suggested cards’ rows in that player’s column. When a player shows a card to someone besides you, place a tiny number in their column in each row they might have shown a card from. (Simply increment the number you use in each column every time you need a new one.) Whenever logic forces you to place a box in someone’s column, check whether only one row sharing a number remains; if so, you can put an “x” there. If you can work out every card that a player has, you can put a box in every other row.

Example: Player 1 suggests Ms. Scarlet, the candlestick, and the ballroom. Player 2 has none of these cards, so you put a box on each one in their column. Player 3 shows a card, so you put a “1” in each of those three rows in their column. Player 2 suggests Ms. Scarlet, the knife, and the kitchen. Player 3 has none of these cards, so you now have a box for Ms. Scarlet in their column. It comes around to your turn, and you suggest Mr. Green, the candlestick, and the library. Player 1 shows you the candlestick. So you put an “x” in their column for the candlestick, which means a box belongs in Player 3’s column for the candlestick. Now you’re only left with one “1” in that column, on the ballroom. So you know Player 3 must have shown Player 1 that card, and you can put an “x” there. Now you know it’s not part of the correct solution!
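The bookkeeping above is mechanical enough that you can see how a bot would do it. Here is a rough Python translation of the worksheet method; the class and method names are my own invention, and it models only the deduction rules, not movement or guessing strategy:

```python
class ClueNotes:
    """Worksheet-style deductions: who holds which evidence cards."""

    def __init__(self, players, cards):
        self.players = players
        self.cards = set(cards)
        self.has = {p: set() for p in players}    # "x": player holds card
        self.lacks = {p: set() for p in players}  # box: player lacks card
        self.constraints = []  # (player, possible cards) per shown card

    def no_cards(self, player, suggestion):
        """Player couldn't show any of the suggested cards: box all three."""
        self.lacks[player].update(suggestion)
        self._propagate()

    def showed_unknown(self, player, suggestion):
        """Player showed someone else a card: the numbered-rows trick."""
        self.constraints.append((player, set(suggestion)))
        self._propagate()

    def showed_me(self, player, card):
        """Player showed us a specific card."""
        self.has[player].add(card)
        self._propagate()

    def _propagate(self):
        changed = True
        while changed:
            changed = False
            # A card one player holds gets a box for every other player.
            for p in self.players:
                for q in self.players:
                    if q != p and not self.has[p] <= self.lacks[q]:
                        self.lacks[q].update(self.has[p])
                        changed = True
            # If only one row sharing a number remains live, mark it "x".
            for player, possible in self.constraints:
                live = possible - self.lacks[player]
                if len(live) == 1 and not live <= self.has[player]:
                    self.has[player].update(live)
                    changed = True

    def solution_candidates(self):
        """Cards not known to be in any hand: candidates for the envelope."""
        held = set().union(*self.has.values())
        return self.cards - held
```

Feeding in the example from the previous paragraph ends with the ballroom marked as held by Player 3, and therefore eliminated from the solution.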

Obviously there’s a bit more improvement you can do with ideal guessing, and it might make sense to keep track of what other players know so you can surmise if they’re about to make a correct accusation, in which case you might want to jump the gun if you have a 50/50 shot. But 95% of strategy can be easily implemented by a player following the approach above.

Validating DNSSEC Locally, The 2020 Way

22 March 2020

You can find plenty of old, bad guides on validating DNSSEC online. The worst ones I’ve seen just say to do

% dig

and tell you that the status: NOERROR you see in the response means that DNSSEC was validated (or at least, if it exists for that domain, it was validated).

That’s not true at all. Some resolvers do in fact validate this information for you, like Google’s DNS:

% dig @

does give you status: SERVFAIL. But obviously you shouldn’t be counting on that. A DNS server that doesn’t support DNSSEC, like Level3’s, will happily return your query with the NOERROR status.

% dig @

Some slightly better guides tell you to look for the AD flag. This is part of an IETF standard by which a recursive resolver can indicate to you that it has verified the DNSSEC data. So if you run those two commands I have above on a site with valid DNSSEC data, you’ll see that the response from Google includes the ad flag, but the response from Level3 does not.

Does this mean that you have verified the DNSSEC data? No. It means that Google says it has verified the DNSSEC data. And the interesting thing is that it’s actually quite difficult to verify it yourself, at least with the traditional tools. The tools you’re using most of the time, including dig, nslookup, and probably your browser too, are not verifying DNSSEC data. They’re relying on you to have configured a resolver with DNSSEC support, and on that resolver to return SERVFAILs if you query a domain with broken DNSSEC. It’s entirely based on trust.

I’ve found one or two guides out there which tell you how to fetch all the DNSSEC data you need and verify it yourself piece by piece. Most of the time you’ll use dig to get the data you need.

You can certainly do this, as long as you don’t slip up on any part of the process. (Most guides seem woefully incomplete on how exactly you need to do this.) But it turns out that a few years ago the BIND folks added a new tool, delv (alongside their others, nslookup and dig), that does automatically verify DNSSEC. I discovered it by accident when reading a man page. You can get it on Arch Linux in the Extra package bind-tools, and Ubuntu and Debian have it in dnsutils.

The syntax is very similar to dig. The rest of this post is pretty self explanatory. Observe how delv discovers that the site’s DNSSEC is broken, even though it’s using a resolver that doesn’t verify DNSSEC.

% delv @ +short
;; validating no valid signature found
;; RRSIG failed to verify resolving '':
;; resolution failed: RRSIG failed to verify

Compare dig:

% dig @ +short

And with a DNSSEC supporting resolver:

% delv @ +short
;; resolution failed: SERVFAIL

And with a site with working DNSSEC:

% delv @ +nocrypto
; fully validated
            82231   IN      A
            82231   IN      RRSIG   A 8 2 86400 20200402175057 20200312201336 63865 [omitted]

Some thoughts on evangelicalism

26 September 2019

I really enjoyed this article called The Evangelical Mind by Adam Kotsko. Parts of it reflect my experience growing up as an evangelical Christian very well, other parts do not. I have a few thoughts on the parts that don’t.

  1. One point of difference is music. Kotsko’s parents complained that their Christian radio’s programming was “dull and conservative”. Kotsko says elsewhere that his father saw an important place for rock music in Christianity. My experience couldn’t be further from this. Even the most traditional music playing on Christian pop stations would have been regarded as wholly inappropriate for church, and questionable in general.

  2. Kotsko identifies the evangelical movement with the “seeker-sensitive” approach to church growth. Every church I attended as a child was violently opposed to this idea, and many of the pastors would rail against the idea (by name) from the pulpit. There was a constant fear that anything too friendly or enjoyable would water down the tough message of the gospel. The evangelicals I knew liked to point out that “narrow is the way…”

  3. Additionally, Kotsko accuses evangelicalism of “self-satisfied conformism”. While I think this is appropriate as a political and social point, Kotsko extends it to also mean that for the quintessential evangelical, “nothing could be stupider than expecting people to live by the teachings of Christ”. This would have been big news to my church, where nearly every member knew many verses of Romans 6 by heart. Their willingness to hold themselves to the Bible’s standards was certainly selective (never more so than on those political and social points), but the issue was always taken seriously. And apparently “arcane” points of doctrine like predestination were major issues: they were instrumental in a church split, in fact.

I rehearse this because I think Kotsko would not be surprised by any of it. It’s not simply that there are more serious and extreme evangelicals, as there are in any movement. It’s that this internal dissension is a central part of the evangelical movement itself. Whether you view evangelicalism as primarily a theological response to liberal traditions in the early 20th century, or a political response to the changing fabric of American culture of the 60s (as Kotsko does), it is undeniably characterized by paranoia and reactionary attitudes (as Kotsko says).

These are at the heart of modern evangelicalism’s instinct to eat itself. As Kotsko says, “Evangelical Christians nevertheless regard themselves as a persecuted and misunderstood minority, surrounded by a hostile secular culture that is actively seeking to deceive and corrupt their children.” Those who aren’t familiar with evangelicalism may be surprised to learn that this is no exaggeration. It’s a conspiracy theory as expansive as the Reptilian one, but believed by far more people. Beliefs like this are hard to go halfway on; they tend to consume you. You begin to see lizard people, or black helicopters, or “secularists” everywhere. When I came home from college after my first semester, I was excited to let everyone know there had been a mistake - not every non-evangelical had been a tool of Satan out to eat my soul. This did not go over very well.

When you take this kind of conspiratorial view of the world, it’s hard to stop with just those not in your group. Arguably this is made even harder by the plain fact that the majority of Americans claim to be Christians. If you’re going to maintain your self-understanding as a persecuted minority, while you’re the majority, you’ve got to believe that most of the people who claim to be on your side are actually infiltrators. And so it is: evangelicals are forever splitting into smaller, more specific, and more suspicious groups.

The points of difference, while taken extremely seriously by most evangelicals, are also necessarily created by this process. If you’re going to kick someone with almost identical beliefs out of your group, you need an important reason. What could be more important than a central doctrine like predestination, or not diluting your message with “seeker-friendly” music arrangements? Or what could be a more useful tool for purging your group of the infiltrators? The most serious evangelicals are always trying to purify themselves in this way. Controversies that seem unimportant to outsiders, like whose books Lifeway is selling, are great ways of figuring out who’s on the narrow path and who’s in danger of hellfire. Megachurches, in particular, are widely viewed as suspicious organizations that grift off an evangelical identity without any of its substance.

Once more, I note that I don’t think any of this would surprise Kotsko. This kind of continual purging is central to the evangelical experience, but the particular bugbears that apply to each evangelical subgroup are always unique. Mine viewed movies with suspicion, and thought that seeker-friendly worship was a sinister plot, but didn’t require women to cover their heads, use the KJV version of the Bible, or believe that drinking was inherently sinful. What I’m hoping this illustrates is how Kotsko’s particular experience fits into evangelicalism as a whole - a movement that’s a weird continuation of the paranoia of the reactionary conservatism of a prior generation.

How to netboot a Raspberry Pi 3 from a Linux server with TFTP and NFS

31 January 2018

Update for Raspberry Pi 4:

I originally wrote this guide in 2018, and then I got a Raspberry Pi 4 the same month as they were released. Unfortunately, the Pi 4 was released without initial support for netbooting. I did manage to work out how to get basic functionality, but I still had to keep the /boot partition on the SD card, which also meant remembering to sync that partition with the NFS /boot folder every time there was an OS update.

Even though the SD card wasn’t even mounted 99% of the time, apparently the heat of being connected to a Pi was enough to eventually fry my SD card. I managed to get it working just long enough to update the firmware to a new version, which, as of this month, finally has full support for netbooting. (SELF_UPDATE could previously be enabled manually, but it’s now the default. This means that the Raspberry Pi folks now support doing firmware updates via USB or NFS, not just the SD card.)

Several steps are needed: make sure you’re on the latest (>= 2020-09-03) firmware. Then run the following to enable boot from a TFTP server:

cd ~
cp /lib/firmware/raspberrypi/bootloader/stable/pieeprom-2020-09-03.bin .
rpi-eeprom-config pieeprom-2020-09-03.bin > boot.config
# open boot.config in your editor of choice and change BOOT_ORDER to 0x412
rpi-eeprom-config --out new-pieeprom-2020-09-03.bin --config boot.config pieeprom-2020-09-03.bin
sudo rpi-eeprom-update -d -f new-pieeprom-2020-09-03.bin
# and reboot
sudo reboot

The 0x412 option will look for a TFTP server to boot from via DHCP, then fall back to the SD card, and then fall back to booting from USB. See here for documentation. The default option (as of the 2020-09-03 firmware) is 0x41, which unfortunately doesn’t even try to use netbooting.

Some other guides I found suggest that you disable the automatic update for the eeprom, in order to prevent it from disabling the netboot setting, but this should not be needed. The documentation says:

If you update your bootloader via apt, then any configuration changes made using the process described here will be migrated to the updated bootloader.

Once netbooting is enabled, the remainder of this guide should work just fine. As far as I can find, this is still the only guide to netbooting a Raspberry Pi that doesn’t assume you’re using another Raspberry Pi as a TFTP server.

One change I had to make: I’ve switched from a consumer grade router with custom Tomato-based firmware to my own hardware running OPNsense. The same guidance I provide for how to get your DHCP server to send the Pi to the TFTP server applies, but it seemed to be necessary to use the “Text” type for the server IP address field, not the “IP address or Host” option. Your mileage may vary.

The original guide, for the Raspberry Pi 3:

You can find plenty of guides to booting a Raspberry Pi from the network, but the ones I’ve seen make one or two assumptions. One is that the official guide assumes that you’re booting off of another Raspberry Pi, which you use to generate the boot folder. Another is that you’ll be providing DHCP and TFTP on the same server, e.g. using dnsmasq. This is annoying because you end up running two DHCP servers on the same network, and it has the potential to interfere with the network configuration on the server that’s running it.

So here’s my plan. I’ve already got a Tomato router (running Advanced Tomato), and it’s got dnsmasq as a DHCP server. So in theory I should be able to direct the Pi to my real server, which I’m going to run tftpd-hpa on. The same server also runs my NFS mounts, so I’ll use it to host the root directory for the Pi as well.
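For what it’s worth, the DHCP side of this can be tiny on dnsmasq. The fragment below is illustrative rather than a copy of my config: the IP address is a placeholder for your TFTP server, and the pxe-service vendor string is the one the Pi 3 bootrom is widely documented to look for.

```
# dnsmasq additions (illustrative; 192.168.1.10 stands in for your server)
dhcp-option=66,"192.168.1.10"
pxe-service=0,"Raspberry Pi Boot"
```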

I expected this to be simple, but it turned out to be rather hard. There are a lot of moving pieces, and many of them don’t give you much help when they break. So I’m documenting the process here.

What you need to do this:

  1. A server to host NFS and TFTP
  2. A router with a configurable DHCP daemon, e.g. dnsmasq (Tomato-based routers are great for this.)
  3. A Raspberry Pi 3 (the first model to support netbooting without an SD card) with a working Raspbian installation on an SD card
  4. A working NFS setup (not hard to do, but not described here)
  5. Possibly a lot of patience. (This is very much not a step-by-step guide: I assume that you know how to find the tftpd config file yourself, how to configure your network’s DHCP server, and so on. I’m simply documenting how to configure things.)

The first, and apparently most difficult, part is just getting a working TFTP server. I installed tftpd-hpa on Ubuntu 17.10, made sure port 69 was unblocked, and did some basic configuration. No luck, even over localhost. After some mucking about, I could tell (running tftpd with -v -v -v and watching journalctl) that tftpd was receiving the request, but it never bothered to respond, and it didn’t say why. After hours of trying different suggestions online, I managed to get a working configuration with the following:

TFTP_OPTIONS="--secure -p"

I’m not sure if the -p is needed, but honestly I’m too scared to change anything now. Some of these settings are almost designed to cause frustration. --secure isn’t even a security setting: it makes TFTP_DIRECTORY the default directory (by chrooting into it) and resolves all requested paths relative to it. Yes, that’s right: without --secure, a request for a relative path won’t work. My /boot files are owned by root with umask 022, and that seems to work fine.
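Before involving the Pi at all, it’s worth checking the server end with the tftp-hpa command line client (assuming some file, say test.txt, already exists in the TFTP directory):

```shell
cd /tmp
# Fetch a file; tftp-hpa's client exits silently on success
tftp localhost -c get test.txt
ls -l test.txt
```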

The main thing you need to do for TFTP is set the directory to the path you want to use for the Pi’s boot files. Since I have the NFS server and the TFTP server on the same hardware, I can conveniently point the TFTP server at /boot in the Pi’s root directory under NFS. (Note that you’re making these files available via the TFTP server, so you want to use the real path to where the boot files are located on your file system, not the place they’re made available over NFS.)
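Putting it together, /etc/default/tftpd-hpa ends up looking something like this (the directory path is an example from my setup; yours will differ):

```shell
# /etc/default/tftpd-hpa
TFTP_USERNAME="tftp"
TFTP_DIRECTORY="/srv/nfs/rpi/boot"
TFTP_ADDRESS="0.0.0.0:69"
TFTP_OPTIONS="--secure -p"
```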

But first things first: we have to get the root directory onto the server. The official guide is rather useless here. It assumes that we’re going to run the server off of another Pi, which we can conveniently copy the root from. I went the opposite direction. I set up the NFS server with the options


and, mounting the folder on the client Pi (with Raspbian installed to an SD card), copied the root with rsync to the NFS folder. Conveniently enough, we can point the Pi at the exact same folder and settings later. (Having virtually no security on the root folder of the Pi’s filesystem is pretty concerning, however. I haven’t figured out how to get around this given the limitations of PXE booting on the Pi. You could at least limit the IP addresses the NFS server will answer to, but I recommend only running this on a carefully secured subnet.)
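Concretely, the export and the copy step look something like this (the server name, paths, and subnet are placeholders, and the export options are a sketch of what I described above, not a security recommendation):

```shell
# On the server, in /etc/exports. no_root_squash is what makes this
# so permissive, but the Pi's root user has to own its own files:
#   /srv/nfs/rpi  192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)

# On the Pi, booted from the SD card: mount the export, copy the root.
# -x stays on one filesystem, so pseudo-filesystems like /proc are skipped.
sudo mount -t nfs server:/srv/nfs/rpi /mnt
sudo rsync -aHAXx --numeric-ids / /mnt/
```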

Alright, so I now have a working Raspbian filesystem served over NFS, and I can point my TFTP server at the /boot directory as previously described. I do need to edit /boot/cmdline.txt as described in the official guide to point the boot code at the NFS server. It’s also best to edit /etc/fstab and remove everything but /proc; otherwise Raspbian thinks the boot didn’t succeed.

# cmdline.txt
dwc_otg.lpm_enable=0 console=serial0,115200 console=tty1 root=/dev/nfs 
    nfsroot=,vers=3 rw ip=dhcp rootwait elevator=deadline

The part starting with root=/dev/nfs is the important bit: it tells the kernel to mount its root filesystem over NFS, and nfsroot= takes the server and exported path of the Pi’s root directory (in server:/path form).
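As for the /etc/fstab trimming mentioned above, what’s left in the NFS root is essentially just the proc line (your stock Raspbian fstab may word it slightly differently):

```shell
# /etc/fstab inside the NFS root; the SD card entries have been removed
proc  /proc  proc  defaults  0  0
```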

The remaining task is to get the DHCP server on the network to respond to the Pi’s PXE discovery request. I fought dnsmasq on Advanced Tomato for a long time, and finally settled on setting the appropriate DHCP fields by hand, as defined in RFC 2132.

dhcp-option=43,Raspberry Pi Boot   
dhcp-option=66,<TFTP server IP>

Where the IP address (the option 66 value) belongs to the TFTP server. There are three spaces after “Boot”, as suggested in the official guide, but I don’t know if they’re actually necessary. Surprisingly enough, these two options alone were enough to get it working; I just set them on the DHCP page of Advanced Tomato.

The last step is just to make sure booting over ethernet is enabled on the Pi, as the official guide says (or see above for the Pi 4). After that, removing the microSD card and rebooting, everything should just work!
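If it doesn’t just work, watching the wire from the server is the fastest way to see which stage is failing (the interface name here is an example):

```shell
# DHCP uses ports 67/68, TFTP port 69; you should see the Pi's discovery
# broadcast, then TFTP read requests for bootcode.bin and friends.
sudo tcpdump -ni eth0 port 67 or port 68 or port 69
```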

©2021 Adam Fontenot. Licensed under CC BY-SA.