The Root Causes podcast explores the important issues behind today’s world of PKI, online trust, and digital certificates. Research from two esteemed universities shows that sites with EV SSL certificates are much less likely to be engaged in criminal behavior like malware and phishing. And yet, leading browsers are reducing or removing EV information from the interface.
In this episode, our hosts Jason Soroko (CTO of PKI, Sectigo) and Tim Callan (Senior Fellow, Sectigo) explain the research results, this paradoxical browser behavior, and its likely effect on consumer security.
(Lightly edited for flow and brevity, this podcast originally appeared August 15, 2019.)
Tim: As we record this there’s a conference going on in Santa Clara, California called USENIX. USENIX had an interesting piece of research that I’d like to talk about today regarding phishing and SSL certificates.
Jason: Thanks, Tim. You sent me the link to the report. Some of the numbers were interesting.
Tim: A team of researchers from RWTH Aachen University (and I don’t know much about German universities so I had to look this up, but it’s the Rheinisch-Westfälische Technische Hochschule and according to Wikipedia it is the largest technical institution in Germany) went out and obtained about 10,000 certificates that were used on known phishing sites. Then they got about 40,000 certificates from what they call benign sites, non-phishing sites. They have their methodologies all spelled out in their paper, and the idea is that the two groups are supposed to be homogeneous.
They compared the groups of certificates against each other, and there are a number of interesting conclusions in there. Some conclusions jumped out at me that speak directly to the idea of certificates and their value in providing identity for a site and providing information that people can use.
One is that on the whole, when they compared the phishing certs to the domains that they were trying to replicate, they say, “These phishing certs do not typically replicate issuer and subject information from the sites they’re trying to imitate.” In other words, the certs that are used on the phishing sites are very unlikely to actually contain information that is imitative of the site that they are trying to spoof. So if I'm trying to spoof a major bank, I don’t have a cert on there that has the name of the major bank in the organization field, by way of example.
Of course, that says something about the original purpose of SSL certificates, which was to tell you who you’re connecting with. Even now, when we see these certs on these phishing sites, they use it to get the lock icon, but that original purpose still potentially has value.
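The "subject information" Tim refers to is concretely visible in code. The sketch below mimics the nested-tuple shape that Python's `ssl.SSLSocket.getpeercert()` returns for a connection's certificate; the certificate contents themselves are hypothetical examples, not real sites.

```python
# Sketch: what "the phishing cert doesn't imitate the target's subject info"
# looks like in practice. The dicts mimic the structure returned by
# ssl.SSLSocket.getpeercert(); all names here are made up for illustration.

def subject_org(peercert):
    """Pull the organizationName out of a getpeercert()-style dict, if present."""
    for rdn in peercert.get("subject", ()):
        for attr, value in rdn:
            if attr == "organizationName":
                return value
    return None

# An EV/OV cert for a legitimate bank carries the vetted legal entity:
bank_cert = {
    "subject": (
        (("countryName", "US"),),
        (("organizationName", "Example Bank, Inc."),),
        (("commonName", "www.examplebank.com"),),
    )
}

# A typical DV cert on a phishing lookalike carries no organization at all:
phishing_cert = {
    "subject": ((("commonName", "examplebank.login-verify.example"),),)
}

print(subject_org(bank_cert))      # Example Bank, Inc.
print(subject_org(phishing_cert))  # None
```

The Aachen finding is essentially that the second shape dominates on phishing sites: the organization field, the part a relying party could actually use for identity, is simply absent rather than forged.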
Jason: I’ll agree with that, Tim. I guess EV certs come to mind when you think about identity because of the full Extended Validation of the owner of the site, or perhaps more accurately, the legal entity that owns the domain, right?
Tim: Correct. The reason we have EV is because the Organization Validation concept was fundamentally unpoliced. At the time, therefore, it was hard for a relying party to turn around and say, “I'm prepared to trust this OV information” because there was no transparency and no real consistency between CAs. Maybe not even consistency over time. An individual CA might change its practices.
So it was hard for relying parties to say, “I'm prepared to trust that this cert is telling me who this is.” However, EV is different because it’s codified, it’s transparent, it’s consistent, and ultimately it’s highly reliable. It’s very accurate information.
This comes to finding number two from the Aachen study that I thought was interesting. They compared the percentage of the known phishing certs that were EV to the percentage of known benign certs that were EV.
On the phishing sites, the percent that were EV was 0.4%. 0.4% of these roughly 10,000 phishing sites were EV. When you get over to the benign sites, of the roughly 40,000 benign certs, 7% of them were EV. That’s showing that there is an order of magnitude greater propensity for a legitimate site—a non-phishing site—to have an EV cert than a phishing site. I think that’s a very important finding.
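The gap Tim describes is easy to quantify from the two percentages he quotes:

```python
# Quick check on the Aachen numbers: EV share among benign vs. phishing certs.
ev_share_phishing = 0.004   # 0.4% of ~10,000 phishing certs
ev_share_benign = 0.07      # 7% of ~40,000 benign certs

ratio = ev_share_benign / ev_share_phishing
print(f"Benign sites are ~{ratio:.1f}x more likely to carry an EV cert")
```

That works out to roughly 17.5x, which is the "order of magnitude greater propensity" Tim refers to.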
Jason: It’s difficult to get EV. Therefore why would the bad guys want to use EV? That makes sense.
Tim: A lot of people like to talk about the Stripe example. What happened in the Stripe experiment last year was a guy named Ian Carroll actually created a company in Kentucky that he called Stripe Incorporated and then got an EV cert for Stripe Incorporated in Kentucky, which was a real company, and then proceeded to make a big deal about it.
The fallacy with the conclusion that a lot of people want to come to—which is that we must throw out EV—is that it was trivially easy to figure out what that company was and who the owner was. I’m sure Ian Carroll didn’t actually commit any fraud with this cert, but hypothetically if he had, it would’ve been trivially easy to find out who had committed that fraud.
That’s one of the points behind authentication. A CA can’t necessarily tell you that somebody isn’t going to go do something bad. We can’t predict the future. This isn’t Minority Report.
But the CA can tell you who did that bad thing. At that point we have alternatives like law enforcement and the legal system, and you can arrest somebody or you can sue somebody. Those are very real deterrents from people being fraudulent or dishonest or otherwise malicious.
Jason: I'm trying to imagine if somebody’s trying to commit a phishing attack, what would be most advantageous is the velocity at which you could obtain a certificate and then utilize it. Especially something that doesn’t require a payment because payment typically will connect an identity. That’s the last thing a bad guy wants.
So free and easy, right? Free and easy is what the bad guys are typically going to want. EV is neither of those two things.
Tim: The economics of it are very different. Part of the reason phishing works is it’s incredibly cheap. It’s automated. It can be run from anywhere in the world, and most phishing schemes today are run by some sort of script kiddie. People who aren’t really creating anything original themselves. They’re taking attacks that have already been created for them and just running them.
Under these circumstances, part of the reason that it’s economically viable to do this is that it’s really, really cheap, and once you start doing things like making them create a business… Again, Ian Carroll says, “Oh, it only cost me $100.00 to buy my business license.” Yeah sure, but $100.00 changes the economics of this radically in a phishing world. Especially if it’s $100.00 per domain. That would make your classic, broad phishing attempt economically infeasible.
Jason: The numbers will show it, right? That’s what we’re concluding from this paper.
Tim: This ties into some other recent research that came out of Georgia Tech’s Cyber Forensic Innovation Lab (CyFI). CyFI gathered every domain it could find that had had an EV cert going back to 2010. They came up with a list of 2.6 million domains. They compared those domains to a couple of specific forms of bad action. Phishing wasn’t one of them, but malware distribution was, and activity on the “Dark Web,” as they like to say (criminal online marketplaces), was another.
They tried to see how EV was connected to these forms of bad action, and they came back with a finding that the propensity for an EV cert to be clean of these behaviors was 99.99%. I think these two data points in conjunction are important because those are the two halves of it. Malware distribution and classic phishing are the biggest threats facing ordinary consumers who are just trying to go to their banks or the places where they shop online. If you take those two pieces of research in conjunction, what they’re showing is that where an EV cert is present, the likelihood of running into these problems is exceedingly small.
Jason: That paper out of Georgia Tech seemed to be quite a large sample size as well.
Tim: They didn’t even try to sample. If you look at the Aachen University research, they’re going to have some discussion of their methodology and how they came up with their sample. CyFI, Georgia Tech, had a different approach, which is “We’re not going to sample; we’ll just get all of them.”
Jason: Yeah, all is still a sample size.
Tim: All is still a sample size, yes. So, we don’t have to have a debate about how the sample was collected.
I think those are very strong data points to talk about the fundamental point that in the event that an EV cert is present on a site, it is much less likely that that site is malicious than if an EV cert is not present.
Jason: That’s the data point coming out of both papers. It’s not difficult to imagine that that’s because of the economics of hacking, as you said. We’re starting to see some data that actually is showing that.
Tim: Accountability as well. You mentioned accountability, and that’s hugely important here, because if there’s at least a chance that I'm going to be arrested, that’s going to change behavior, even if it’s a fairly small chance.
Jason: If you’re going to commit a crime, the last thing you want is for somebody to be able to identify you as being part of that crime. That’s why we lift fingerprints.
So EV requires two different things: There has to be a registered business, obviously. In some jurisdictions, maybe that’s not as comprehensive as it is in the United States or Canada or others, but on the other hand, a lot of the businesses that are likely to be spoofed are based in North America and Europe and places where there is a more comprehensive connection to an actual person’s identity.
And failing all of that, these things have to be paid for. There’s no such thing as a free EV certificate that I know of, and therefore some payment has to be made, and that obviously potentially connects to an identity as well.
Tim: Some form of non-automated authentication has to take place. So once again, if I'm just going to set up my scripts and run this as a massive, cheap effort where my CPUs are just spinning as fast as they can, that doesn’t work. It doesn’t lend itself to that kind of application.
For all those reasons, it winds up being highly negatively correlated with those activities.
Now, this brings us to another development that’s coming, and this is a real disservice to users, which is that we have seen the systematic de-emphasis of certificate identity information in popular browsers over time, especially with regard to EV.
The most visible example is Chrome. Chrome did take the green away, so that where the company name used to be green, now it’s gray, and in doing so, it became less visible. Chrome has announced that in its release 77, which is coming in October, there will be no EV indicator in the address bar at all.
Then we saw an announcement from Firefox just a few days ago that their Version 70, which is also coming in October, is going to do the same thing. It’s going to strip out the EV indicator, and you’ll have to click on the lock to even know that there’s a difference, that there is identity information there at all. A DV and an EV cert will look identical, and in none of these browsers will you even have the opportunity to see whether identity information is available until you actually click down a level.
And I just don’t get it. I think they’re doing something really bad for their users, and because these browsers between the two of them have a majority of global market share, they feel they can get away with it. And unfortunately, they probably can.
But it will be bad for internet security. It’s going to be bad for the safety of the user.
Jason: I don’t know how it helps. I simply don’t know why taking away that indicator helps. Some users understand what that is. Some may not. The argument that some users don’t understand what that indicator is so therefore take it away from everyone, that argument to me just doesn’t work.
Tim: It’s an absurd argument. It would be like saying, not everybody pays attention to cancer warnings on cigarette packs and therefore take them off. Why would you do that? Some people do pay attention.
You know I use a lot of offline safety analogies in general. I like to talk a lot about seatbelts, so let’s talk about seatbelts. It would be like saying not everybody wears his seatbelt so therefore don’t give anybody the opportunity to wear a seatbelt. Why would you do that? It would be the wrong way to make a car. And yet, that’s essentially the argument that Google has made.
Jason: Honestly, I don’t think I'm that rare of a user in that when I go to my banking site or a government site, somewhere where I'm going to do something of any kind of importance or perhaps do commerce, if I don’t see an Extended Validation UI, the good old green bar, there’s a feeling of discomfort. I have to admit. I’ve probably been running browsers as long as you have, Tim, and EV’s been around quite a long time. That green bar has become a familiar part of my usage, and now its absence is a bit creepy, you know?
Thankfully there are browsers that still have it. Therefore, I use those browsers, but you know it just feels a little bit uncomfortable to be on a site that I know should have that green bar and now it does not.
Tim: You touched on a point that also gets brought up as an argument, and this comes up a lot. Any time somebody says, “Look, I used this because blah, blah, blah, blah, blah,” what you hear from the opponents of the green address bar, which includes people from Google, is, “Well, you’re not typical. You’re unusual.”
My response to that is, “Ok, so what? Clearly the amount of benefit is more than zero. There are people out there that benefit from this, and we know that because they get vocal every time this happens. So, the amount of benefit is more than zero.” Nobody has been able to articulate the benefit of taking it away. There is no benefit of taking it away, so that means that this is a net negative for safety.
Jason: In terms of any form of security, Tim, we’ve been around security an awfully long time, and there’s no such thing ever as a silver bullet. And therefore, that’s why we have layered security.
Tim: Exactly right, it’s a layered security approach.
How do we defeat phishing? We do a bunch of things. We make it harder for the phishers to send emails. We make it harder for those emails to get delivered to inboxes. We take down their sites, and we make it harder for them to exactly imitate a site. You do all these things.
That last one is what we’re talking about. It doesn’t mean that you stop doing the others, but use a good, layered defense. And part of that is give the user at least the opportunity to save themselves and give them the tools they need to know the difference and then focus on making those tools better and more visible.
You know it’s funny to hear, “Well these people aren’t recognizing my indicators so I'm scrapping it.” If we were talking to the people at the Google online properties instead of the people who run the Chrome browser, if we were talking to the people who run Google Shopping and you were saying, “Well they can’t find the Buy button, therefore take away the Buy button,” I guarantee you the Google Shopping people would say, “No, we’re going to find a way to make the Buy button more visible.” Because at that point they have some skin in the game. And those people recognize that they’re able to do that. Those people say, “Look we can do a better job of interface design, and better interface design enables end users to do the things they want to do.” They know that.
Jason: But Tim, let’s push it even further with that whole entire address bar, because what we’re essentially talking about is the user interface that includes an address bar. Browsers have had that since day one simply because of how addressing works on the internet. One of the arguments I’ve heard is, once again, “Some users are unfamiliar with the technicalities of subdomains and domains themselves. Therefore, we should push everything through search.” To me that just smacks of a pretty obvious motivation of commercialization and taking away what is a fundamental part of the user interface of a browser.
Tim: Who wins under those circumstances? Oh, what do you know? It happens to be the guy who owns the browser and is making the decision. I'm sure that’s a complete coincidence.
Jason: Well yeah, exactly. Because it probably isn’t a coincidence. But goodness gracious, I think that any browser that takes away an address bar is not a browser that I'm using ever again. Period.
Tim: It’s disturbing. It’s sad. EV is not perfect, but EV is light years ahead of anything else we have to let people truly identify who they’re dealing with on a website. Taking that away sets us back many steps. And if browsers had put any kind of effort into UX design around EV over the past decade and tried to make it better and more effective, there’s no way we would have wound up here.
Jason: And by the way Tim, if we’re going to wait for a perfect solution, then at the same time perhaps we should take a big broom and sweep away the entire expo floor at RSA.
Tim: And everything else. We’d go back to my seatbelt analogy. If your dependency on a safety alternative is for it to be perfect, then as long as people still die in vehicular accidents or may die in vehicular accidents, why have seatbelts at all? Why have airbags at all? And of course, we all recognize the complete and utter absurdity of that in that context, which is why I use this example. And yet basically we’re hearing the exact same thing about our browsers.
Jason: And you know what I haven’t heard, Tim, is what’s the alternative? What’s the bright idea that’s going to help stop phishing? Especially from the browser standpoint.
Tim: As I said, there are things you can do to make it harder. There are secure email gateways, and there’s DMARC, and there are other things that make it harder for phishing to work on the other side, but why are we not addressing the browser-based social engineering aspect of this to the best degree that we’re able? We absolutely have tools available to us to do this, and we’re deliberately walking away from them.
Jason: Perhaps this takes us slightly too far away from this discussion, but I think it’s worth mentioning. You know back in the day there was a problem with users being socially engineered to download certificates for the purposes of man in the middle, especially for things like mobile devices and very, very specifically banking done on mobile devices.
One of the solutions was to use a certificate pinning technique to essentially only allow a specific certificate to be used by the actual native application in the mobile device. That solved a lot of that kind of grief. Do a mind experiment and say to yourself, “Well, if I'm going to X bank, and the browser knows I'm going there because perhaps it was typed in a certain way, and therefore only a certain specific certificate would be accepted,” right?
In other words, some form of certificate pinning. I think one of the problems we’d run into there, Tim, and it’s probably a valid argument, is the addressing. Once you hard-code an address to a specific site, it becomes very inflexible. Unfortunately, that’s just not the way these sites work. They need some form of flexibility because of how they load balance or however they want to be addressed on the internet.
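The core mechanism Jason describes, and why a hard-coded pin is so brittle, can be sketched roughly like this. Real HPKP-style pinning hashes the certificate's SubjectPublicKeyInfo; for simplicity this sketch fingerprints the whole DER-encoded certificate, and all site names and byte strings are hypothetical.

```python
# Minimal sketch of the pinning idea: the client keeps a known-good
# fingerprint per site and rejects any certificate that doesn't match it.
import hashlib

# Pins the browser/app ships with (site -> expected SHA-256 fingerprint).
PINS = {}

def fingerprint(cert_der: bytes) -> str:
    return hashlib.sha256(cert_der).hexdigest()

def pin(site: str, cert_der: bytes) -> None:
    PINS[site] = fingerprint(cert_der)

def accept(site: str, cert_der: bytes) -> bool:
    """Reject any cert whose fingerprint differs from the pinned one."""
    expected = PINS.get(site)
    return expected is not None and expected == fingerprint(cert_der)

# Hypothetical stand-ins for real DER-encoded certificates.
bank_cert = b"...DER bytes of X Bank's real certificate..."
spoof_cert = b"...DER bytes of a phisher's certificate..."

pin("xbank.example", bank_cert)
print(accept("xbank.example", bank_cert))   # True
print(accept("xbank.example", spoof_cert))  # False
```

The brittleness Jason points out falls straight out of this design: the moment the site rotates its certificate or serves a different one from another load-balanced endpoint, `accept` returns False for a perfectly legitimate connection.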
Tim: Absolutely. They might be repatriating to another environment, and they don’t want to be limited to using the exact same certs they used at the old environment, etc.
Jason: So, as a technical solution it probably wouldn’t work here simply for that reason. On the other hand, one of the advantages is, if the argument is we need to take away the user from the decision-making process of whether or not a site is legitimate then perhaps there is some form of technical solution where the browsers and the CAs can come together hand in hand.
The CAs are already doing the good, hard work of identity verification. There are other pieces of information beyond what’s done today in EV with respect to things such as the legal entity. Perhaps it needs to be a more generic set of company information that’s vetted in a certain way. This isn’t fully baked yet, but I think there’s still some hope here to come together on a way of doing this correctly. I think throwing the baby out with the bath water is not really the greatest solution.
Tim: That’s exactly right. There is a standards body called the CA/Browser Forum. It is populated by many CAs, more CAs than you would imagine, and many browsers, more browsers than you would imagine, and other interested parties. There’s a lot of brain power, and there’s no reason why this group couldn’t be getting together and identifying all kinds of ways to use that information better. One of the reasons this has been a challenge is because the CA/Browser Forum is fundamentally a voluntary organization. It doesn’t really have enforcement teeth when it comes to browsers. So, the browsers ultimately get to do whatever they want, and browsers historically have been uninterested in changing their behavior based on what the CA/Browser Forum wants. That’s a big part of how we’ve gotten here.
Jason: It’s been a long road. And I don’t think we’re going down the right path.
Tim: Yeah. I agree. So we’ve probably belabored this enough. Thank you for indulging me, Jay. This has been on my mind and I wanted an opportunity to vent, and you know hopefully this is informative for our listeners.