Podcast | Aug 29, 2022

Root Causes 239: Post-quantum Cryptography Candidate SIKE Defeated

NIST's round four post-quantum crypto candidate SIKE (Supersingular Isogeny Key Encapsulation) has been defeated and is now out of consideration. In this episode we explain isogeny cryptography, why NIST is seeking additional candidates, and why failures of this kind are expected and healthy for PQC.

  • Original Broadcast Date: August 29, 2022

Episode Transcript

Lightly edited for flow and brevity.

  • Tim Callan

    We’ve been talking after the recent announcement of the NIST final selections for their post-quantum algorithm contest. There have been a bunch of news stories that have broken around that, and part of that original announcement was that NIST kept going with, I think it was four additional algorithms, which they put into what they called Round 4. Round 4 was other things that seemed to have potential and were worthy of being looked at and pounded on by the community, to see if these might be legitimate algorithms to add to the list as well. And there was an interesting news story that broke right around the beginning of August, in which one of these, called SIKE - and SIKE is S-I-K-E; it stands for Supersingular Isogeny Key Encapsulation. We’ll call it SIKE from now on, or at least I will - got basically completely and utterly broken. Broken wide open, to the point where it’s no longer even considered a candidate. Is that correct, Jay?

  • Jason Soroko

    That is correct, and it’s unfortunate for several reasons. We’re gonna go into what supersingular isogeny, or isogeny-based cryptography, is, and then we’re going to talk about why it was vulnerable. But what I’d love to cover first, Tim, is the reason NIST had a Round 4 in their post-quantum cryptographic algorithm race at all. You said it before in previous podcasts: there is a desire to get away from monoculture, to get away from pure lattice-based post-quantum cryptography, which seems to be leading right now mostly because the mathematics is understood so well and it has stood up to the maximum amount of scrutiny. But there’s a desire for something else, to have redundancy.

  • Tim Callan

    We need a backup plan.

  • Jason Soroko

    We need a backup plan because if lattice falls, oh my goodness. We need – it would be good to have something else.

  • Tim Callan

    And if everything is built on it and it’s all lattice, and some genius somewhere finds a whole new way to attack the problem, then all of a sudden lattice is no good. And if everything is built on lattice, all of our crypto is no good, and that’s a bad situation.

  • Jason Soroko

    That would be truly bad. And of course, I’ve spoken about this before, but there is more than one way to do lattice, so there are plenty of implementations. Some have stood the test, and those have moved on toward standardization by NIST, and that’s great. However, isogeny was looked at as a good alternative, and one of the reasons, Tim, is that the bit lengths needed to make it secure are reasonably small.

    It is compute-intensive, so there’s latency, but the belief was that, given enough time, computers would catch up. It’s like Moore’s Law; it’s something we’ve known forever with computers. As computers get more and more powerful, that little bit of extra latency due to the computation of the isogeny would basically get soaked up, and it wouldn’t be a problem. But other than that, you don’t have the problem of Classic McEliece, which is those large bit lengths, and so isogeny was looked at as a shining light, the big hope, because it had so many good properties to it. Not the least of which, Tim - I’ll try to mention two things. One is the fact that it was based on elliptic curves, which is something we do know a whole lot about; there has been plenty of tire kicking on elliptic curves. But also, because of the way in which the private key is generated, isogeny lends itself toward perfect forward secrecy. In other words, ephemeral private keys, Tim, which is quite interesting, isn’t it?

    So, it’s got a lot of really great properties, and that’s why it was the big hope. And SIKE, of course, is just one implementation - just like the lattice-based schemes are each singular implementations of one particular brand of math within lattice, SIKE was just one way to do things in isogeny. So, with all that being said, I wanted to spend a little bit of time, and I’m going to, just like last time, attempt not to fail utterly and miserably right away. But here we go.

    Let’s break this apart a little bit - like this term, supersingular isogeny. Supersingular refers to the actual elliptic curve, the shape of the elliptic curve, because in ECC - something we use today with certificates and PKI - you’re talking about ordinary elliptic curves. In the case of isogeny cryptography, supersingular refers to the fact that you’re not using ordinary elliptic curves; you’re using supersingular elliptic curves. And I’m going to leave that right there in terms of why that is.
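
    [A standard textbook characterization, for the curious – not specific to SIKE: over a field of characteristic p, a curve is supersingular exactly when it has no points of order p, which forces its endomorphism ring to be non-commutative, whereas an ordinary curve’s endomorphism ring is commutative:]

        E \text{ is supersingular} \iff E[p](\overline{\mathbb{F}}_p) = \{\mathcal{O}\}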

    Just in extremely plain English, the reason we are using a different kind of elliptic curve is that what you want to be different between ECC and isogeny is the isogeny. In other words, you’re using ordinary elliptic curves for ECC, where the tough problem is a discrete logarithm. The tough problem being solved with isogeny is a mapping. Think of just a mapping between two curves, between two elliptic curves that happen to be supersingular curves. I think sometimes the words get in the way of what is essentially a very simple idea. And this mapping - you might call it a function mathematically, but essentially it’s an equation where you’re trying to equate one curve to another along this path. And whenever you’re talking about two things that are equal, that’s why this term isogeny came up, cause that’s essentially what the word means: an equality between two things. That’s, again, an extreme oversimplification, but on the other hand, it does explain what’s going on. It’s very tough for a quantum computer, or any kind of computer, to solve an isogeny between two elliptic curves, and that’s an interesting fact about it. So, with that now explained - what the tough math problem between these two elliptic curves is - the Achilles heel, Tim, has to do with the fact that - and I’m going to back this up, because I don’t think you’re going to read this just anywhere on the internet - a gentleman by the name of Galbraith, who is probably one of the deepest thinkers in elliptic curves and has proposed attacks on things like SIKE, supersingular isogeny cryptography, has a blog out right now that you can all go search that gives a high-level explanation. But I’m going to try to go one level higher than he did and say the problem is these auxiliary points. And what the heck does that mean? Because that’s essentially what this SIKE attack targeted - this is what broke it, to the point where it only took a regular computer, what is it, just a few hours to break the SIKE encryption.
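
    [In symbols, the mapping being described is, roughly, a non-constant map between two curves that also respects their group structure – a sketch in standard notation, not the exact SIKE formulation:]

        \varphi : E_1 \to E_2, \qquad \varphi(\mathcal{O}) = \mathcal{O}, \qquad \varphi(P + Q) = \varphi(P) + \varphi(Q)

    [The presumed-hard problem: given two supersingular curves E_1 and E_2, find such a \varphi.]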

  • Tim Callan

    It took an hour or something.

  • Jason Soroko

    It was just crazy, just crazy. So the big bugbear in the math has to do with this, Tim. Imagine this in your mind for a moment. There’s this concept in mathematics called commutativity, which is a great big word that basically means a × b = b × a. If you think about that simple operation, it’s commutative in the sense that the order of the operands is irrelevant; the equation still holds. That is an important mathematical feature. That is, believe it or not, important to Diffie-Hellman, and supersingular isogeny Diffie-Hellman - SIDH - is in fact the underlying protocol of SIKE, and it requires this commutative property within the isogeny. And so what that means, Tim, is that in order to deliver this commutative property, which is not inherent in mapping two supersingular elliptic curves to each other, you have to basically abstract it out. You have to kind of create this commutative property, and the way the mathematicians coming up with these cryptographic functions decided to do it is to create what are called auxiliary points. And the problem is that these auxiliary points are kind of giving the bad guy a little bit of information about the nature of the secret isogeny.
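
    [A rough sketch of the commutative structure being described, in standard SIDH notation, simplified from the actual SIKE specification. Starting from a public curve E_0, Alice and Bob compute secret isogenies:]

        \varphi_A : E_0 \to E_A, \qquad \varphi_B : E_0 \to E_B

    [To make the two paths commute, Alice also publishes the auxiliary points \varphi_A(P_B) and \varphi_A(Q_B) – the images of Bob’s public basis points under her secret isogeny – and Bob does likewise. Each side can then complete the square and land on isomorphic curves, whose j-invariant is the shared secret:]

        E_{AB} \cong E_{BA}, \qquad \text{shared secret} = j(E_{AB})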

    In other words, as this function unfolds - there are plenty of mathematical terms I could use here, but think of it like this: it’s not a function that you write out ahead of time, it’s a function that unfolds as you follow the paths of these two elliptic curves together. As you’re following this path, when you’re creating your private keys, there are distinct bits of information that need to be put into the input in order to create this commutative property. So you’re giving away a tiny bit of the secret in order to be able to maintain this property. And the thought was, well, that information is not enough to unravel it. But back in 1997, people said, I bet the best attack against something like SIKE would be against those auxiliary points - try to see whether or not they are giving up enough information to figure out the secret, to basically cause a key recovery, which is a catastrophic failure of the cryptographic function. There was an original paper in 1997, and then there was another paper, I believe in 2016, and then finally, of course, SIKE was short-listed for Round 4 of NIST. And so the mathematicians were like, okay, well, we never fully proved out those previous papers that proposed the attack against those auxiliary points. And now you have the answer.
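
    [For context: the 2022 break is the Castryck–Decru key-recovery attack, which builds on Ernst Kani’s 1997 work on isogenies between products of elliptic curves; the 2016 paper is most likely the Galbraith–Petit–Shani–Ti analysis of SIDH’s auxiliary points. In terms of the sketch above, what an attacker sees is:]

        \text{public: } E_A, \quad \varphi_A(P_B), \quad \varphi_A(Q_B), \quad \deg\varphi_A \text{ known}

    [Castryck and Decru showed that this is enough to recover the secret \varphi_A on an ordinary classical computer.]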

  • Tim Callan

    That’s interesting, cause that means this basic idea has been around since 1997, in principle. Probably more work could have been done back then, and SIKE never would have made it this far, but nobody concentrated on it because it was a low-stakes problem. And then suddenly it became a high-stakes problem, and it got a bunch of attention, and it turned out that there was this weakness where the ground was already laid; we just had to kind of work through it. Is that right?

  • Jason Soroko

    Nobody understood how bad the weakness was. I think the mathematical community knew the weakness was there; it’s just that nobody had proved it out. Nobody had written the software, nobody had taken just a classical computer - and, Tim, this is something I’m asked quite often: does it require a quantum computer to break a post-quantum algorithm, and the answer is absolutely no way. You’re talking about just a regular computer you could go and buy, with good software that is basically an implementation of these proposed attacks against the specific algorithm. And the math works out that the fear was justified: the auxiliary points that create this commutative property are giving away just, just enough information for it to have a failure.

  • Tim Callan

    I mean, there’s a bunch of directions we can go with this. One of the interesting points here is that this is a discovery process, and before we can really feel good about our algorithms, we kind of have to feel good about the fact that this discovery process has occurred. We went from all of us thinking SIKE was a legitimate candidate to thinking it very much wasn’t a legitimate candidate, because that’s what was discovered. And so for all of these things, you mentioned - -

  • Jason Soroko

    Failure is healthy.

  • Tim Callan

    But you mentioned earlier in this very episode that one of the advantages of the lattice-based stuff is that lattice has gotten a lot of attention, and so people feel that this kind of a-ha moment is considerably less likely in the case of that scheme.

  • Jason Soroko

    Yes, I would say so, especially given that the number theory in lattice is mature enough that - and remember, we actually podcasted about this, and it might have gotten buried in a lot of the detail, but lattice is probably going to be very strong through time. Not because it cannot fail. In fact, this is an interesting one, where catastrophic failure of lattice is much more difficult because you can enumerate just how incredibly difficult the problem is. Knowing how difficult it can be, and having that be measurable, is something that is well known in number theory, and it has been thought about for a very long time. With isogeny, the parameters are much less mature. What I mean by parameters is that you have to hit this sweet balancing point between the scheme being secure and being practically usable, and so people have to very carefully choose: what are my bit sizes, and how much information do I give away in the auxiliary points - because if I give more, things become easier in the computation, la di da di da. There’s just so much here to consider, and therefore these parameters have to be carefully chosen. So, yes, it has failed, and it obviously will not survive the current round. But on the other hand, I sincerely think that isogeny as a cryptographic possibility still has a lot of merit in it. It just means we have a lot of work to do with respect to choosing good parameters. And that may change its usability or its pragmatism - those are other factors that nobody has been talking about yet. But I do know that people are already on the path of, alright, these auxiliary points are now a terrible bugbear; how do we fix that? So, there’s a lot going on.

  • Tim Callan

    So, your point being, Jay, that though SIKE itself is not usable in its current form, it doesn’t mean that the isogeny strategy is not usable. It’s just that it may have to be reformed and re-encapsulated so that we can get around this particular vulnerability. Is that right?

  • Jason Soroko

    That is absolutely true, and I think that maybe that is the biggest take-away of this podcast. Keep in mind that I think isogeny is attractive enough that it is going to be worth the work to get it right - to get it to where it will not fail like this under simple circumstances - but obviously there’s a lot more work to do.

  • Tim Callan

    So, do we need a NIST Round 5?

  • Jason Soroko

    I think, Tim, after NIST Round 4, I wouldn’t be shocked if NIST basically says, we’re going to be constantly on the lookout for good algorithms to standardize. Why would they shut it down? Why would they shut down good ideas? It just might mean we don’t call it a Round 5. It might just be future consideration, and they might have a name for what could end up being a permanent program for cryptographic agility.

  • Tim Callan

    Or academia could take that on, cause this is what academia is about: advancing understanding. And it would be easy enough to say, okay, we’re going to take this, and I will work on it. I’m a professor somewhere; I have an obligation to do original research, and I’m going to work on this problem. And that could go a long way, too.

  • Jason Soroko

    I think it would. I think at a baseline, that will naturally happen, even if it doesn’t get any press. There’s a lot of people who work in ivory towers who do this kind of yeoman’s work, and that’s absolutely true. But on the other hand, I do think that NIST might be involved with some kind of a program going forward because this redundancy idea is just too important. If we don’t get the redundancy we need after Round 4, they will come up with something.

  • Tim Callan

    I think you’re right, absolutely, cause you can’t have all your eggs in that basket - you can’t, for obvious reasons. Alright, well, very interesting. So SIKE is out. There are other algorithms in Round 4, so that’s still not over, and we’ll see where all this develops.

  • Jason Soroko

    Sorry to heap all the math on everybody, but I think it is interesting enough. Like I said in the previous podcast on lattice, I at least wanted to get people to the point where they can say, right off the top of their tongue, what the tough math problem being solved is. I think we explained that for lattice, and I think now we’ve explained that for supersingular isogeny, and even though it makes for a lot of words, they are at least cool ideas.