The Root Causes podcast explores the important issues behind today’s world of PKI, online trust, and digital certificates. The world recently witnessed the previously unprecedented strategy of a government attempting to force its own trusted root on its citizens for surveillance purposes. In this episode hosts Jason Soroko (CTO of IoT, Sectigo) and Tim Callan (Senior Fellow, Sectigo) discuss this action, its dangers, and some of the potential pitfalls in blocking it.
(Lightly edited for flow and brevity, this podcast originally appeared July 29, 2019. Since this discussion, Google, Apple, and Mozilla have decided to enforce distrust for this private root. Tim Callan has subsequently published an essay on the weaponization of PKI in Computer Business Review.)
Tim: I get to pick the topic today and the topic I have chosen is Kazakhstan.
Jason: Kazakhstan. It evokes images of the Great Game.
Tim: Former Soviet republic in Central Asia. Population of about 18 million people, and the reason we're talking about Kazakhstan today is that an event is going on that, in my experience, is unprecedented. We’ve seen a new weaponization of PKI by a government. In particular, the government of Kazakhstan is using the country’s ISPs to pressure, force—I'm not sure quite what the right word is—let’s say force the citizens of the country to add trust into their browsers for a local root that the government of Kazakhstan owns.
Jason: I'm thinking of the man-in-the-middle attacks there, Tim.
Tim: That’s absolutely right. The way they’re doing this is, if you want to connect to the ISP, you’re not going to be able to do it without this root. People want to have the internet, so what are they going to do? They’re going to install this root, and once they do, it enables widespread man-in-the-middle attacks by the government on basically anybody who is using that ISP for internet access.
Jason: So many questions that come up here, Tim, just in terms of how they might do that. Perhaps the most obvious one is, have they asked the browsers to put this as a trusted root in the root store?
Tim: That was phase one. In 2016 they went to the browsers and said, “Hey, we’re going to be a public CA. Here’s our paperwork. Put us in your root stores.” Then there was a lively community discussion and some investigation by the major browsers at the time and it was chosen not to give them root store access.
Much of the reason was what we just talked about, a belief that it was not intended to be used in the spirit of a trusted root in a browser. The browsers, of course, own their root stores. Even though they have guidelines, they are allowed to choose not to give access if they deem that it’s bad for their users and bad for the security of the internet on the whole. That’s what happened.
Everybody thought that episode was over, and I guess for a couple of years it was. But now, three years later, in 2019, Kazakhstan is taking another run at it, and they’ve decided that they can just force the whole thing to happen on their end.
Jason: I'm thinking about how this would happen on a more targeted scale rather than as mass surveillance of a whole country. If you were a bad guy and you wanted to snoop on somebody’s communication, especially their web communication, you might convince or socially engineer the victim into downloading a certificate of your choice and then position yourself in the middle of the communication between the victim and what they think is their destination. You would then have your own SSL connection to the ultimate destination, perhaps a bank or a social media site or a site where people were sharing activist information, let’s say.
And then, because the victim is essentially connected to you with a certificate that you had previously convinced them to install into their root store, you can look at all the communication, because you have the keys to both sides.
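The mechanics Jason describes can be sketched in code. The following is a minimal, hypothetical illustration in Go (all names, such as "bank.example" and "Example National Root", are invented for this sketch): whoever holds the private key of a root in your trust store can mint a certificate for any hostname, and it will chain up and validate cleanly.

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"time"
)

// forgedCertVerifies builds a self-signed root, uses its private key to mint
// a certificate for a site the root owner does not control, then checks the
// forged certificate against a trust store containing that root.
func forgedCertVerifies() bool {
	now := time.Now()

	// 1. The root the population is pressured to install.
	rootKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	rootTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "Example National Root"},
		NotBefore:             now,
		NotAfter:              now.AddDate(10, 0, 0),
		IsCA:                  true,
		BasicConstraintsValid: true,
		KeyUsage:              x509.KeyUsageCertSign,
	}
	rootDER, _ := x509.CreateCertificate(rand.Reader, rootTmpl, rootTmpl,
		&rootKey.PublicKey, rootKey)
	rootCert, _ := x509.ParseCertificate(rootDER)

	// 2. A forged leaf certificate for a hostname the root owner does not control.
	leafKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "bank.example"},
		DNSNames:     []string{"bank.example"},
		NotBefore:    now,
		NotAfter:     now.AddDate(0, 3, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, rootCert,
		&leafKey.PublicKey, rootKey)
	leafCert, _ := x509.ParseCertificate(leafDER)

	// 3. With the root in the trust store, the forgery validates cleanly.
	pool := x509.NewCertPool()
	pool.AddCert(rootCert)
	_, err := leafCert.Verify(x509.VerifyOptions{Roots: pool, DNSName: "bank.example"})
	return err == nil
}

func main() {
	fmt.Println("forged certificate accepted:", forgedCertVerifies())
}
```

The victim's browser sees a certificate for the exact hostname they typed, issued by a root they trust, so no warning ever appears.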
Tim: Exactly right. It’s incredibly insidious on many levels. One of which is, there’s no real good way that anybody would know that this was being done. So, you can’t sit and say, “I'm not being man-in-the-middled right now.” You can never know that.
How is it expected that they will use it? The government of Kazakhstan is a dictatorship, and the widespread expectation is that this will be used to persecute political enemies, human rights activists, journalists, and really anybody who is not acting to the advantage of the sitting government.
Jason: I think of activists almost first and foremost here because they obviously need to communicate, and they might have sites where they go to, essentially social media sites or whatever the equivalent is for these activists. I'm sure that a government who wanted to snoop on its citizens would love to just partake in all that communication. Even if that communication happens to be on a server in a different country, what this means is the government might be able to listen in on that.
Tim: Certainly that’s the desire. Even if they don’t manage to successfully listen in on everybody, if they listen in on many people, that will still allow them to do a lot of harm. You know I might be completely buttoned up and use a VPN, and there’s no way anyone’s listening to me, but if I'm trying to chat online with you, I don’t know what you’ve done. Maybe they’re hearing my conversation by watching your traffic.
Jason: You just stole exactly what I was going to say, Tim. This just makes you want to run out and use your nearest VPN for all communications because goodness knows. But you’re exactly right. The second half, whoever it is you’re communicating with might not be on VPN, so therefore whatever you’re saying is probably in the clear to someone else.
Tim: If you think about the real targets of this, the real target is probably not stealing your bank account login. It probably is finding the people who are agitating for freedom so that we can imprison them. And in that regard, it’s fundamentally person-to-person communication-based activity, and taking away their ability to use the internet hamstrings them tremendously.
Jason: Mass surveillance is almost never…
Tim: Almost never benevolent.
Jason: Almost never benevolent. Also almost never used for fraud. It might be used to try to catch actual bad guys, but I guess bad guys is defined by the government.
Tim: What’s your definition of a bad guy? The person who wants to vote for leaders instead of just keeping me in is a bad guy, and therefore let’s go catch them.
There is a lively discussion on the Mozilla board about this right now. People are proposing alternatives and actions, and the trouble is they’re all fraught with problems. There’s no easy solution. You might say, “Well, why don’t we just do a manual one-off distrust of this root, so that in our browser the root is distrusted regardless of what you do as an end user?” But the response to that is, “Well, then they’ll go to a different browser.” And then you go, “Okay, fine. Let’s say we get all the major browsers to agree and all the major browsers do it.”
So that seems like a neat solution, right? But popular browsers, including Firefox and Chrome, are open source. So you just branch the code, stub that part out, release it locally, and call it KazakhFox. And by the way, if you don’t use KazakhFox, you can’t connect to the internet. Bam.
So it’s not that easy. Then people talk about, “Well, maybe it’s better to have them using KazakhFox than Firefox, but maybe it isn’t.” Difficult, thorny stuff. There isn’t an obvious, neat, clean solution that can be done on the browser side.
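For a sense of what a "manual one-off distrust" looks like mechanically: browsers typically key their blocklists on a hash of a certificate's SubjectPublicKeyInfo, so re-issuing the same root under a new name doesn't evade the block. A hypothetical Go sketch of that check (this is an illustration of the technique, not any browser's actual implementation):

```go
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"fmt"
)

// spkiHash identifies a certificate by the SHA-256 of its
// SubjectPublicKeyInfo, so the block follows the key, not the name.
func spkiHash(cert *x509.Certificate) string {
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	return hex.EncodeToString(sum[:])
}

// chainDistrusted reports whether any certificate in a chain appears on the
// blocklist, regardless of what roots the user has installed locally.
func chainDistrusted(chain []*x509.Certificate, blocklist map[string]bool) bool {
	for _, cert := range chain {
		if blocklist[spkiHash(cert)] {
			return true
		}
	}
	return false
}

func main() {
	// In a real client the blocklist entries ship with the browser and the
	// chain comes out of TLS verification; both are placeholders here.
	blocklist := map[string]bool{}
	fmt.Println("distrusted:", chainDistrusted(nil, blocklist))
}
```

This is exactly the kind of check that a forked "KazakhFox" build could simply delete, which is Tim's point about why browser-side blocking is not a complete answer.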
Jason: Tim, this is why I stick to the private trust side of the world because your public trust out in the world is just complicated as heck.
Tim: It gets political. It gets all kinds of crazy things.
Here’s another thing to understand. There are lots of great reasons for trust of a private root to be added to browsers. For example, I might want all of the browsers inside my enterprise firewall to trust my own root, so that I can serve my own pages under that root using my self-signed CA and have you able to access them. That way the pages are secure, not in the clear, and defended against attackers getting in and putting up their own false pages. There are really good reasons to have this stuff.
Plus there are actually benevolent man-in-the-middle mechanisms. For instance, I might have a firewall and need to be able to monitor data coming in and out, to take out malware and prevent removal of data that’s not supposed to leave my network. If I work for a company and part of what the firewall does is make sure my W-2 information isn’t being sent out, then that’s actually good for me. That’s benevolent. But how do you know, when you’re building a technology platform? You can’t build into the code that this can only be used for things that are good and not things that are evil. That’s not a command that you can write.
Jason: Couldn’t agree more. There are absolutely legitimate reasons to have man-in-the-middle. In fact, you know our friends over at SonicWall: they have all kinds of reasons why they would want to decrypt communication going in and out, for data loss prevention and checking for malware in communication. Their statistics tell us that a very large majority of the bad things going on in your network are actually within an encrypted stream. Therefore, as a company it’s really important to be able to know what’s going on, and so they have toolsets to do that. However, these are extremely well engineered, extremely protected, extremely secure systems that are essentially a very, very good front door. And we’ve always talked about these front door versus back door problems, Tim.
So thankfully, a lot of the mechanisms that do the legitimate work are typically very, very well-engineered, and they have to be exactly for the reason you just said.
Tim: And most enterprises and most individuals—consumers and small businesses—are prepared to trust that these large security vendors are getting the job right. Even though they do sometimes have a flaw or a zero-day, they have a very good track record. I think that is almost all of the time a safe assumption. Not all the time, but almost all of the time.
It’s been a good working assumption, but now all those same mechanisms basically are being used by a crafty government in a way nobody intended. And of course, one of the questions that comes up next is, “Well if this works, do we think we’re going to see other governments doing the exact same thing?”
Jason: It’s a possibility, isn’t it? Kazakhstan’s starting it. It’s not surprising to see it come out of a non-western government like this, one with very different laws.
Tim: And cultural tradition. But what about when it’s Russia? What about when it’s China?
Jason: Well, they already have their means. I mean, China has a very controlled internet overall. The Great Firewall of China, it’s called.
Tim: The Great Firewall of China.
Jason: How about a reminder to everyone, if you see a certificate warning come up on your browser that says, “Hey do you want to install this certificate to move forward in whatever it is you’re doing?” Maybe you’re checking out your great stuff on Etsy. You know you’re shopping. Who knows? And you’re just like, “Man I want to buy that teacup. I'm just going to click on this thing so I can buy my teacup and get on with my life.” If you see that warning pop up in your browser, that’s there for a good reason.
Tim: The only real scenario that I would say yes to that is if I was on a corporate intranet and it was the corporate trusted root. Other than that, if you’re out in the world, if you’re in the DMZ anywhere and you get asked that question, I am hard pressed to think of the circumstances where the answer is yes.
Jason: So, this is just my own self-interest here, Tim, obviously, and I'm biased in this case. But even if you are a legitimate Director of IT and you have some reason to have somebody trust a certificate that you want to push out to the enterprise, doesn’t it make sense to have that certificate ultimately rooted to a trusted third-party CA? Because at least that will be part of the messaging, right?
In other words, if it roots to something that’s in the trust store already, then first of all your employees are going to feel a lot more comfortable, and secondly, the whole process is a lot easier. Remember, when we first started talking about this subject, the browsers themselves in the CA/Browser Forum made the decision not to allow Kazakhstan to become a trusted third-party CA.
And so self-signed certificates are problematic, because if you’re being asked to download one and put it into your trust store, it may very well be malicious. As a Director of IT who is just trying to get the job done, keep in mind that when you’re training your employees to download something, you’re also training them to download things that may be malicious.
Tim: I think you’re totally right on that, Jay. These are legacies of a time when the degree of IT savvy and the powerful enterprise PKI applications that exist today were not there. Basic concepts like "add my own root" go back to 1995. Well, 1995 was a very different day, and nobody knew exactly how all this stuff was going to shape up. There was a very real reason why your average Netscape user, who was a very technically savvy person, might choose to add roots, but that is not the world we’re in anymore.
I'm not saying there aren’t valid reasons for this functionality, because there are. But to your point, for almost all use cases there is a better way, so why don’t you use the better way instead?
Jason: This makes me think that we might have to do a podcast or two on how we got here. You know, concepts such as certificate pinning, which is used often in mobile applications. There are really good reasons why that exists and why it’s a best practice. A lot of people don’t even talk about it or think about it much anymore, but these are vital things we need to revisit once in a while.
While you and I are talking, I’ll throw it in just as a reminder to people because there was a Wild West day that was bad, and thankfully we’ve moved on from it, but it’s easy to make mistakes.
Tim: I do a lot of that too. “Well now remember the reason we do this is because back in blah, blah, blah, blah, blah this is what happened.” Because a lot of that does govern our decisions, and one of the things that I always want to remind people about security is every old threat is potentially a new threat if we stop defending against it. So even the things that we think are solved become unsolved the day that our known solutions stop being used. It is important to have that stuff.
But in this case, you know, I do agree with you. I think that a private CA on a publicly trusted root is just a much cleaner approach. It solves a lot of problems and really is the thing to do.