The Root Causes podcast explores the important issues behind today’s world of PKI, online trust, and digital certificates. In this episode hosts Jason Soroko (CTO of IoT, Sectigo) and Tim Callan (Senior Fellow, Sectigo) discuss the ins and outs of random numbers including their importance in cryptography, the extreme difficulty of generating truly unpredictable numerical values, and the bad consequences that can ensue when you don’t.
(Lightly edited for flow and brevity, this podcast originally appeared July 2, 2019.)
Tim: This time we’re going to talk about random numbers.
Jason: Randomness. That’s super important in the field of PKI and encryption.
Tim: As I think probably everyone understands, the short story of why random numbers are important is that they put unpredictability into your results. If people could figure out how your hashes were generated, they could go generate their own and spoof the whole system. So some sort of randomness is a necessary component for all of this to work.
Jason: Uncertainty, randomness, chaos: why is that a good thing? We’re going to use a few analogies here today, Tim, because this gets into math real quick and we want to avoid doing that. So let’s say you play a lottery.

Or you’re running a lottery and you’re going to pick six numbers. How do you randomly pick those six numbers? It’s really important to have a lot of chaos and uncertainty in how those numbers are chosen, because if there’s some level of bias, that’s a problem.

Let’s say your numbers are between 1 and 50, and for four or five or six straight weeks your draws are skewed toward 50 more than toward 1. Wow. That gives an enormous advantage to the people playing your lottery.

And so you need to take advantage of randomness. Typically that’s done just by having ping pong balls that are rolling around in a seemingly completely random way. The problem, of course, is that if the balls aren’t all weighted identically, that brings in bias, and bias is the opposite of randomness.
Tim: And bias doesn’t imply certainty. It’s not like I have to say, “I know that these are going to be the six numbers.” But if I knew for instance that a certain number was more likely to come up or less likely to come up, even if it doesn’t mean it’s sure to come up, that bias is very important. In the case of the lottery, you would pick the numbers that are more likely to win and you would skew the odds in your favor.
In terms of crypto, I think what you would do is you would pick the numbers or configurations that are more likely to be right and you would reduce the necessary processing time to brute force attack a key.
Jason: That is the exact point because the basis of all of the good encryption algorithms that are out there is that you need to have a computer work through all the millions and billions and trillions of combinations.
And if you happen to know that a certain class or a certain segment of those permutations are not necessary to scan, you’ve reduced the safety factor of your encryption in a massive way. Even though you haven’t given away the answer, the ability to brute force that encryption rises exponentially.
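Jason’s point can be made concrete with a toy sketch in Python. This is an invented illustration, not a real cipher: the 20-bit key size, the `crack` helper, and the "known top bits" shortcut are all assumptions chosen to show how ruling out part of a keyspace shrinks a brute-force search proportionally.

```python
import secrets

# Toy model (not a real cipher): brute-forcing a 20-bit "key" by
# exhaustive search. If bias lets an attacker rule out part of the
# keyspace, the search shrinks proportionally, even though the key
# itself was never given away.
KEY_BITS = 20
key = secrets.randbelow(2 ** KEY_BITS)

def crack(known_top_bits=None):
    """Search the keyspace; knowing the top 8 bits cuts it 256-fold."""
    if known_top_bits is None:
        candidates = range(2 ** KEY_BITS)                   # full space
    else:
        base = known_top_bits << (KEY_BITS - 8)
        candidates = range(base, base + 2 ** (KEY_BITS - 8))
    for guess in candidates:
        if guess == key:
            return guess
    raise RuntimeError("key not in searched range")

assert crack() == key                       # up to 2**20 guesses
assert crack(key >> (KEY_BITS - 8)) == key  # same key, 256x less work
```

The same arithmetic scales up: with a real 128-bit key, excluding half the keyspace costs the defender a full bit of security.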
Tim: And true randomness is hard. You can get a random number generator app to put on your phone and it will give you a number where it’s hard for you or me to predict what the next number will be, but that’s probably not actually random. It’s probably pseudo random, and it’s fine for my friends and me deciding who will go first in a board game, but it wouldn’t be so fine to use that as the basis for some critical digital security system.
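Tim’s distinction between pseudo-random and cryptographically secure generators is easy to demonstrate. A minimal sketch (the seed value 2019 is arbitrary): Python’s `random` module is a seeded Mersenne Twister, fine for board games, while `secrets` draws from the operating system’s entropy source.

```python
import random
import secrets

# Python's random module uses the Mersenne Twister: statistically good,
# but fully determined by its seed. Anyone who learns (or guesses) the
# seed can reproduce every "random" value that follows.
rng_a = random.Random(2019)
rng_b = random.Random(2019)
assert [rng_a.randint(1, 50) for _ in range(6)] == \
       [rng_b.randint(1, 50) for _ in range(6)]  # identical sequences

# For anything security-sensitive, use an OS-backed CSPRNG instead.
token = secrets.token_hex(16)  # 128 bits of unpredictable material
assert len(token) == 32        # 32 hex characters
```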
Jason: Right. So let’s talk about entropy and information, because this is where this all came from, and there are a few things in the news recently we’re going to touch on a little bit. This podcast is going to be a minor tribute to a person who’s probably not talked about enough, and that’s Claude Shannon. Claude E. Shannon, who back in 1948 wrote “A Mathematical Theory of Communication.”
His job back at Bell Labs was to basically deal with signal to noise ratio issues at Bell. You’re trying to put a signal modulating information across a wire and there’s a noise floor below which you can’t read that information any more. He was an expert in really maximizing the amount of information going across a wire and pulling it out of the noise.
His work is the basis of a lot of very important concepts in computing. We wouldn’t have the computers we have today running the way they do. We wouldn’t have the PKI industry probably. All these important concepts of randomness that we just talked about and how to measure entropy, how to measure randomness, Claude’s work is the basis of a lot of it. So I want to reference that. And anybody who’s interested in that really should check it out.
Tim: Ok. Got it.
Jason: So let me tell you about a study that Claude Shannon did. Nothing to do with computers. Just to tell you why the word entropy sometimes has what seems like a double meaning when in reality it’s not. It has a lot to do with, again, that word information, and of course we use that word all the time, but sometimes we don’t use it very precisely.
The precise meaning of information for Claude Shannon: Here’s an analogy to help explain it. You and I right now, Tim, we’re talking in the English language and we’re conveying information to each other and to the listeners.
Jason: And what’s interesting is, we’re using a finite set of words. We’re using certain words over and over again. Sometimes in different contexts they have different meanings. The spellings of those words sometimes make a lot of distinction between words that might sound the same but are spelled differently. We happen to know that. There’s some information there. So Claude Shannon actually did a study on languages to try to determine which languages had the highest amount of information flow.
English came out very high, and there’s a reason for that. Take a look at, say, French or Russian, of which I have some familiarity. (Not a lot.) But French and Russian have the concept of genders for nouns. Genders for nouns, that’s a cultural aspect. It’s not an informational aspect. In other words, if you inflect a word for a masculine or feminine noun, does that convey information? It actually doesn’t.
Jason: So they are low entropy languages. Russian is even worse; it actually has three genders. Go figure that one out. Additionally, one of the reasons why English doesn’t score at the absolute top is that English has a lot of detritus in it, again culturally. For example, the word through has a whole lot of extra letters in it, right? What’s that GH doing in there? So spelled-out English actually has some aspects of its information that fall short of perfect entropy. For something that would be the opposite of that, with almost the highest amount of entropy possible, think about the digits of pi. 3.14, and then every single digit after that is a surprise. It’s a new piece of information.

So it’s an extremely information-rich concept. Now something that would be low information would be, say, just a drumbeat. Can you convey a sentence to me just with a simple downbeat, upbeat, downbeat, upbeat? There’s only so much you can say with that. Morse code, on the other hand, carries information.
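Shannon gave this idea a precise measure: the average number of bits per symbol. A rough sketch in Python (the example strings are invented for illustration) shows a two-symbol drumbeat carrying exactly one bit per beat, while near-uniform digits of pi approach log2(10) ≈ 3.32 bits each.

```python
from collections import Counter
from math import log2

def shannon_entropy(s: str) -> float:
    """Average bits of information per symbol (Shannon, 1948)."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# A two-symbol drumbeat carries exactly one bit per beat...
drumbeat = "ab" * 20
assert abs(shannon_entropy(drumbeat) - 1.0) < 1e-9

# ...while the digits of pi use ten symbols almost uniformly, so each
# digit carries close to log2(10) = 3.32 bits.
pi_digits = "14159265358979323846264338327950288419716939937510"
assert shannon_entropy(pi_digits) > shannon_entropy(drumbeat)
```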
For one of the highest forms of entropy, think about something like white noise, and this is where we’ll circle back to bias. White noise is kind of perfect in a way, because if you start looking into white noise, it’s very, very difficult to predict what any given element of it is going to be. If it’s just a time series, the amount of sound there is, or where it happens to be in frequency, is very, very difficult to predict at any given moment. However, audio engineers play with this stuff to create pink noise and concepts like that, and that’s when bias is introduced into the white noise.

So to get back to computing now, Tim, we’ve seen examples in the Linux world and other places where it wasn’t truly random. It was pseudo random, and there was some kind of bias that was conceptually sort of the same as pink noise: a bias within the random number generator in the operating system.
A lot of people might not realize it, but Linux, for example, uses movements of the mouse and other forms of input that are happening; it uses those seemingly random or chaotic features to derive randomness.

Where we’re actually finding the least amount of randomness, and where we’re running into a lot of trouble, is in IoT. Some people are trying to generate certificates with some algorithm based on inputs happening on a device. The problem is that a lot of these devices don’t have any sort of dedicated hardware random number generator, so where’s the entropy coming from? Where’s the information? Where’s the chaotic randomness coming from?
It turns out it’s not coming from anywhere. It’s an extremely closed-off and homogeneous environment, where each device was probably created in exactly the same way except for a serial number, and that’s not enough randomness.
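The OS-level entropy pool Jason describes is what applications should draw from instead of rolling their own. A minimal sketch, assuming a modern Linux (or any OS exposing a kernel CSPRNG): `os.urandom` and the `secrets` module both read from that pool.

```python
import os
import secrets

# On Linux the kernel mixes unpredictable events (interrupt and input
# timings, hardware RNGs where present) into an entropy pool exposed
# via /dev/urandom; os.urandom() reads from that pool.
seed = os.urandom(32)              # 256 bits of OS-gathered entropy
assert len(seed) == 32

# Two independent draws should essentially never collide.
assert os.urandom(16) != os.urandom(16)

# Higher-level helpers wrap the same source: a lottery-style pick.
pick = secrets.randbelow(50) + 1   # uniform over 1..50
assert 1 <= pick <= 50
```

On a constrained IoT device without such a pool (or a hardware RNG feeding it), there is simply nothing comparable to draw on, which is exactly the problem.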
Tim: We see what happens when these things go wrong. There have been some very high visibility examples.
The first thing that comes to mind, of course, is 2008’s Debian OpenSSL flaw. The OpenSSL function in the Debian flavor of Linux somewhere along the line shipped with a problem where, basically, the seed value was predictable. As soon as people realized that the seed value was predictable, they immediately calculated, say, the first 100,000 values. You then had a list, and so you could probably crack anything that was created with OpenSSL.

If it was done in the first 100,000 values (and by the way, it’s most likely to have been done in the first thousand values), then you could just go down this list. So it basically meant all the certificates where the CSR had been created in OpenSSL had predictable keys. Plus, it was easy to find them, because you just looked for certificates whose keys were on this list.
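The attack Tim describes can be sketched with an illustrative model (this is not the actual OpenSSL code; the `broken_keygen` helper and the use of Python’s Mersenne Twister are stand-ins). In the real Debian bug, the remaining seed entropy was roughly the process ID, which ranged over about 32,767 values, so the entire "keyspace" could be precomputed.

```python
import random

# Illustrative model of the 2008 Debian bug (not the actual OpenSSL
# code): the only entropy left in the seed was the process ID, which
# at the time ranged over roughly 1..32767 possible values.
def broken_keygen(pid: int) -> int:
    rng = random.Random(pid)        # seed fully determined by the PID
    return rng.getrandbits(128)     # stand-in for "generate a key"

# An attacker can precompute every possible key up front...
rainbow = {broken_keygen(pid) for pid in range(1, 32768)}
assert len(rainbow) <= 32767        # the whole "keyspace" fits in a set

# ...and any key actually generated on a vulnerable box is on the list.
victim_key = broken_keygen(4242)
assert victim_key in rainbow
```

That last membership test is exactly how vulnerable certificates were found in the wild: scan for public keys that appear on the precomputed list.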
And it was big. There were tens of thousands of them.
That sort of thing can happen where people don’t realize. It looks like it’s all working correctly and you think it’s fine and then it turns out that it’s not actually random.
Another example was also in 2008. Some security researchers managed to do a collision attack on a VeriSign off-brand certificate. It wasn’t VeriSign branded, but it was from VeriSign, and part of the way that was possible is that they detected a pattern in the serial numbers which allowed them to predict the next serial number. It was a pretty easy pattern: increment by one.

Once they detected that pattern, it made it possible for them to predict the next serial number, and therefore there was no randomness. No entropy. Entropy collapsed down to a single value. And circumstances like those were real-world events with real-world consequences, because the ability to have genuine unpredictability was absent.
Jason: It’s really key. A lot of developers, especially if they’re inexperienced, will take a look at the basic random number generator that is in a lot of languages or operating systems and think, “Hey, that must be really good. That must be really random.”
Tim: Yeah, and it looks random to your eyes, right? But that doesn’t mean it is actually random. Just the fact that the pattern isn’t immediately and obviously detectable by a human eye with few data points is not the same as saying that there is no bias or there is no pattern. Or there is no way to predict what the next one’s going to be.
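Tim’s point, that bias invisible to the eye is trivial to detect statistically, can be shown with a toy generator. This sketch is invented for illustration: the 55% skew and fixed seed are assumptions chosen so a simple frequency count exposes the bias.

```python
import random
from collections import Counter

# A generator can look random to the eye while carrying measurable
# bias. This toy generator favors one outcome 55% of the time, a skew
# you would never spot in a short printout but trivial to detect
# with a frequency count over many samples.
def biased_bit(rng: random.Random) -> int:
    return 1 if rng.random() < 0.55 else 0

rng = random.Random(7)   # fixed seed so the sketch is reproducible
counts = Counter(biased_bit(rng) for _ in range(100_000))

ones_fraction = counts[1] / 100_000
# A fair source would stay close to 0.5; this one measurably does not.
assert ones_fraction > 0.53
```

Real test suites (NIST SP 800-22, dieharder) apply batteries of far subtler checks on the same principle.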
Jason: And the funny thing about random functions is they’re typically not used unless it’s something important, right?
Jason: You’re not just playing around with it. You’re typically either doing cryptography or, again in a non-computing sense, perhaps you’re running a lottery. In all these cases there’s something at stake typically when randomness is being used.
Something in the news very recently, Tim; I just bring it up because it’s interesting. I think it was Cloudflare that formed the League of Entropy.
Tim: The League of Entropy.
Jason: Which is great. I don’t know if everybody listening to this podcast is familiar, but there are entropy beacons out there, randomness beacons. Just like a lot of things in cryptography, Tim: don’t do it yourself, right?
Tim: Absolutely, though Cloudflare does it themselves. They have a randomness beacon.
Jason: Yeah. They work in conjunction with the University of Chile, which is interesting, and they have an entropy beacon. This news event was made public around June 17, I think. All that’s really happening is they’re forming a consortium of these randomness beacons. They’re working together to get the word out about their services, so for those of you who need an entropy service or randomness service, check these things out. They’re pretty cool.
Tim: A couple of these are cool, I have to say. Cloudflare has LavaRand. For LavaRand they have a camera pointed at a wall of lava lamps, and they convert the image the digital camera captures of the continual, chaotic, unpredictable movement of this whole wall full of lava lamps into digital values. You can imagine there are many ways you could do that. And those values are phenomenally unpredictable, right? So that’s a random number generator.
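The "somehow converting" step Tim mentions is typically done by hashing the raw pixel data. A hedged sketch of the idea (not Cloudflare’s actual code; random bytes stand in for a camera frame here):

```python
import hashlib
import os

# Sketch of the LavaRand idea: take a chaotic input (here random bytes
# stand in for a raw camera frame of the lamp wall) and hash it, so
# any change in the scene changes every bit of the output.
frame = os.urandom(64 * 64 * 3)             # stand-in for pixel data
digest = hashlib.sha256(frame).digest()     # 256 bits of seed material
assert len(digest) == 32

# Hashing is deterministic per frame; fresh frames give fresh seeds.
assert hashlib.sha256(frame).digest() == digest
```

The hash acts as an extractor: even if individual pixels are biased, the digest condenses the frame’s unpredictability into uniform-looking seed material.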
I also like U Chile’s: they have something they call Seismic Girl, and Seismic Girl collects randomness from five sources. They are as follows: seismic measurements in Chile (and we know that’s a very seismically active place), a stream from a local radio station, a selection of Twitter posts, data from the Ethereum blockchain, and their own off-the-shelf RNG card.
Tim: It’s just a wonderful collection of things, and you can see how all those getting mashed together is going to give you something that is truly, truly unpredictable.
Jason: Don’t do it yourself, because these are people who’ve studied the math behind it. A lot of these things are absolutely amusing to think about being used, but it makes a lot of sense. A wall full of lava lamps doing their thing is going to be a heck of a lot more random than, say, somebody moving their mouse around on their desk.
Tim: Exactly. And so those things are useable.
Now, we have touched on this topic before, and I don’t want to go over old material, but I do want to reference it. In our Episode 9 we talked about the 63- versus 64-bit serial number concern that occurred earlier this year in the world of public CAs, and that comes back to an entropy concern. The problem was that one of those bits turned out to be predictable, which halved the number of possible serials, leaving 63 bits of entropy instead of 64. Nobody was expecting it, and it got discovered incidentally as people were investigating other things.
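The arithmetic behind that episode is worth spelling out: fixing one bit of a 64-bit serial number halves the count of possible values, which is exactly one lost bit of entropy. A minimal check:

```python
from math import log2

# 64 free bits give 2**64 possible serials; if one bit turns out to be
# predictable, the count of possibilities is halved, i.e. exactly one
# bit of entropy is lost (64 bits down to 63).
full = 2 ** 64
one_bit_fixed = 2 ** 63

assert log2(full) == 64.0
assert log2(one_bit_fixed) == 63.0
assert one_bit_fixed == full // 2   # keyspace cut in half
```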
That’s a perfect example of what we were talking about. You might think I'm hitting the button and getting what looks like a random number, but it’s not always actually a random number.
Jason: Goodness. Even in terms of guessing at large prime numbers, there’s some really amazing math out there that’s using geometry to try to find non-randomness. In other words, turning randomness on its ear and actually being able to use geometry to find where in a large batch of numbers, prime numbers exist without having to have a computer exert the massive horsepower to calculate all of them.
It’s just smart math, and the importance of absolutely real randomness only gets greater and greater through time.
Tim: Let me just reference also, we brushed on this topic in our Episode 6 on quantum-resistant cryptography. If you enjoy this kind of thing and you’re interested in numbers and randomness and the mathematics behind making crypto work, that’s another one I do want to recommend. Go listen to that episode as well because we get into much more depth on the quantum computing challenge in particular and how that needs to be addressed.
Jason: One of the reasons we bring up what might in isolation look like a very esoteric topic is that in reality these things make headlines. Randomness is the basis of a very important part of the security industry, and it’s good to at least have a cursory knowledge of what’s going on.
Tim: I'm certain this will come up again. I think this is a wonderful introduction to the topic. Thank you very much, Jay.
Jason: Thank you, Tim.