Root Causes 472: AI Offensive Modeling
AI tools are now available to perform red-teaming activity for DevSecOps, and such tools will soon be table stakes in the constantly escalating IT security arms race. Join us to learn more.
- Original Broadcast Date: February 26, 2025
Episode Transcript
Lightly edited for flow and brevity.
-
Tim Callan
So Jason, we're here in Toronto. We're doing Toronto sessions season two, it's been a great session so far. What are we going to talk about today?
-
Jason Soroko
Tim, I think one of our look-forwards was about AI - a lot of exciting AI stuff. My goodness. Something we've said during these sessions already, a couple of times now, is that something has to change on the defender side. The world needs to defend better.
-
Tim Callan
Bad guys are winning.
-
Jason Soroko
Their edge is growing. You and I had a great conversation last night when we both came into Toronto, and one of the things we talked about was how AI can act as a multiplier in the PKI world, where researchers who are trying to do bad things against RSA - -
-
Tim Callan
They're trying to find the vulnerabilities in the cryptography, and they could use this to vastly extend the surface of what they can explore.
-
Jason Soroko
If you only have a small handful of people in the world who truly understand how to do that kind of research in the math, I would argue that a really, really well-trained AI right now can probably 10x to 100x a team, and that would completely change the way this research is done, because a lot of those researchers - you said it best - they've got kids, they've got dogs, they've got parks to go to and dinners to eat. Well, if eight people have 800 researchers working for them, which is what AI might give them the equivalent of, it'll change everything. And then think of the defenders. It's an old, old war idea that there's an asymmetric problem: the bad guys can choose how they attack - -
-
Tim Callan
The defenders have to defend everything.
-
Jason Soroko
The Blue Team has a hard job.
-
Tim Callan
And the hackers only need to get through one place.
-
Jason Soroko
The Blue Teamers. Who would ever want to be on a Blue Team? Because right now, you're going to lose. Well, the Blue Teamers need something to help fight against this asymmetric situation, and I'm going to tell you, I think it's going to be this. Nothing makes a Blue Team learn faster than working with a really good Red Team. The thing is, I think that's iterating too slowly. Good Blue Team-Red Team interactions are too slow.
Blue Team needs to learn faster. Blue Team needs to be better prepared on the first iteration, instead of just getting slaughtered until 100 iterations in, when they finally reach a baseline. What happens if you could change that and make the first iteration of Blue Teaming as good as the 100th? That's the game.
-
Tim Callan
That sounds good. How do we do it?
-
Jason Soroko
There are now uncensored - this is a key word - uncensored AI models that are optimized for building offensive code, building offensive structures, and automating offensive tactics against Blue Teams. If a Blue Team has this tool - -
-
Tim Callan
So I can take this, and I can throw it at my defenses, and it will very quickly explore a lot of attack surfaces and attack methods just because it's going to be faster than a Red Team and because, like you said, it's not going to have to take breaks for food and sleep and kids’ birthday parties and all the rest.
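The loop being described here can be sketched in a few lines. Below is a minimal, hypothetical harness: `generate_payloads` stands in for a call to an offensive model such as White Rabbit Neo (the actual model integration is deployment-specific and not shown), and `defense_holds` stands in for the system under test. Both are illustrative assumptions, not any real tool's API.

```python
# Minimal sketch of an automated red-team loop for a Blue Team harness.
# generate_payloads() is a placeholder for calls to an offensive model;
# here it returns canned probes so the harness structure runs on its own.

def generate_payloads(target_description: str) -> list[str]:
    """Placeholder for a model call that proposes attack inputs."""
    return [
        "' OR '1'='1",                 # classic SQL injection probe
        "<script>alert(1)</script>",   # reflected XSS probe
        "../../etc/passwd",            # path traversal probe
    ]

def defense_holds(payload: str) -> bool:
    """Stand-in for the system under test: True if the input is blocked."""
    blocked_fragments = ("' OR", "<script>", "../")
    return any(f in payload for f in blocked_fragments)

def red_team_iteration(target_description: str) -> dict:
    """Run one generate-and-test cycle; record which probes got through."""
    results = {"blocked": [], "passed": []}
    for payload in generate_payloads(target_description):
        bucket = "blocked" if defense_holds(payload) else "passed"
        results[bucket].append(payload)
    return results

report = red_team_iteration("login form of a web app")
print(f"{len(report['blocked'])} blocked, {len(report['passed'])} passed")
```

The point of the structure is the iteration speed: a model-backed `generate_payloads` can propose new probe batches continuously, so the Blue Team sees "100th-iteration" coverage on day one.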
-
Jason Soroko
Now I'm gonna throw out a name. It's called White Rabbit Neo, but there are gonna be others. Guaranteed. Let's explore this for a moment. They're uncensored models. I don't know if you've ever played with offline large and small language models, but sometimes you can get these uncensored models, and you might wonder, well, what kind of nefarious stuff are you doing with an un - - that word, uncensored, makes me uncomfortable. Well, if you're asking a large language model, build me malware code, you can't use the censored tools, because they're going to go, oh, my - -
-
Tim Callan
They refuse to.
-
Jason Soroko
My policies say I can't do this. So we need carefully constructed, uncensored models that are built to do just this. Blue Teamers now have tools where maybe even my AI is smarter than the Red Team I'm going to be up against, and I can throw everything at it before my first iteration. In other words, Tim, one of the things I'm personally excited about with AI right now is that it's helping me learn faster and faster. That's what a Blue Team could use this for: to learn faster. I give Blue Teams full credit for building good defense. The problem is that their imagination simply isn't big enough.
-
Tim Callan
I mean, fundamentally, that's the problem: how do you deal with the unforeseen? It's unforeseen. So the more of that you can expose, more readily and more quickly, the better off you are, because now you can have a sense of how to defend.
-
Jason Soroko
Absolutely. So you might ask me, well, what's the category of Blue Teamers that might benefit from White Rabbit Neo and the other tools coming out at the moment? Right now, it's DevSecOps teams. If you're the security analyst Blue Teaming a DevOps system, my goodness, check into these tools, because your ability to learn from an absolutely rock-solid Red Team teammate, in order to test the armor you've built - my goodness, that era is now here. You can learn faster, Blue Team, and start to fight against this asymmetric situation that otherwise you're always just going to lose.
-
Tim Callan
So I don't see any reason why this would need to be limited to DevSecOps at all.
-
Jason Soroko
No, but right now these tools, these models, are optimized for that.
-
Tim Callan
That's what this is today. I get it. But I guess I'm trying to say this: we should expect this principle to show up in all aspects of security. I like all of this. I think learning faster is always better. The problem with the security situation is you never know if tomorrow is the day you're going to be owned, so getting that hole plugged ASAP is extremely important. Do you feel that that need for speed increases as the opponents are surely doing their version of the exact same thing?
-
Jason Soroko
The Red Team can use Red Team tools, too. Absolutely.
-
Tim Callan
And not even the Red Team. The real bad guys.
-
Jason Soroko
Absolutely. Therefore, if Blue Team isn't using this, it just gets worse. Blue Teams, my goodness, some of you are incredible. Some of you can do orthogonal thinking. The problem is you can only put your fingers in so many places. And this helps. This helps to automate the testing of your defensive strategies. And fantastic. What a great way to use AI.
-
Tim Callan
What a great way to use AI. I love it. This feels like a story we want to follow, and we'll probably return to because this is early in this industry. And we're going to want to see how it develops and what it does.
-
Jason Soroko
Add agentic AI to these adversarial models, and it's quite something.
-
Tim Callan
You start combining agentic AI with the rest of this, and you see where both the stakes and the speed of everything are magnified.
-
Jason Soroko
I'm going to say this one more time, just because it's fun. Tim, you and I work in the world of Certificate Lifecycle Management, and we have people screaming on Reddit about, I don't want to automate my certificate renewal. I think it's hilarious that we live in a world where a legitimate Sys Op complains about automating a cert renewal while hyper, hyper, hyper automation is all around them. It's hilarious to me: we have literally extreme Luddites while we're going supernova with how we do automation.
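For context on how small the automation being resisted actually is: a typical renewal setup is a single scheduled job. This is a hedged config sketch, assuming certbot manages the host's certificates and nginx serves them; the schedule, hook, and web server are illustrative and should be adapted to your own stack.

```shell
# Crontab entry (illustrative): run certbot twice daily; it renews any
# certificate nearing expiry and reloads nginx only when a renewal happened.
0 3,15 * * * certbot renew --quiet --deploy-hook "systemctl reload nginx"
```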
-
Tim Callan
It is funny. It's odd when you think about that break. This is something that's straightforward, basic, and obvious - you'd really have to have a Luddite attitude not to want to do it - and then, right adjacent to it, we've got this drive for hyper-acceleration of technology adoption. That's going to create problems, those two things existing simultaneously.
-
Jason Soroko
You brought up a term earlier in the Toronto sessions, and I love what you said. You talked about being agility native, or perhaps even automation native. Those younger folks are going to come into a world where systems automating other systems is just going to be natural. And there's a whole generation of us who didn't come in that way, who think, I have to be at a command line and do this manually. No. This is just another example, Tim. That's all.