Root Causes 285: Can ChatGPT Write Malware?
In our ongoing exploration of the security implications of AI, in this episode we examine the suitability of ChatGPT as a malware-writing tool and possible future directions for AI in software creation.
- Original Broadcast Date: March 14, 2023
Episode Transcript
Lightly edited for flow and brevity.
-
Tim Callan
So, of course, we continue to return to the topic of ChatGPT because there is just so much interesting stuff going on, and once again we have a topic that is right up our alley. In particular, I think you called my attention to this one. You were looking at a Schneier on Security article. Bruce Schneier is, of course, often a source of information for us, and this is a January 10, 2023 post with the headline ChatGPT-Written Malware, which is a very provocative headline.
-
Jason Soroko
It very much is. The article by Bruce Schneier is actually very short, and he is quoting some work by Check Point Research, so I will give them some credit as well.
Really, Tim, I wanted to bring up ChatGPT on this podcast because a lot of people are asking, oh my goodness, can this thing write malware? Well, the answer is, yes, of course.
-
Tim Callan
Sure. It can write anything. It can pass the Bar.
-
Jason Soroko
It has passed the Bar, in fact, as far as I know.
So, Tim, what is malware? Maybe that’s the big question. I think a lot of people, maybe everybody listening to this podcast, is way too smart, obviously, to have to ask that question, but malware really is just software. It’s software like any other, and it’s often using operating systems, desktop and mobile operating systems, as they were intended to be used. A lot of people think of hackers as breaking something, and hacking and breaking sound like there’s something incredibly subversive going on, but really, a lot of times malware is just utilizing a stolen credential and then automating a set of tasks to perform an end goal against a target. That’s all it’s doing. It’s just software. In fact, often it’s just automation software.
-
Tim Callan
Sure. Find this information, exfiltrate it to here.
-
Jason Soroko
So, the example that Bruce Schneier was talking about from the Check Point Research was really some Python code apparently written by ChatGPT which combined various cryptographic functions, “including code signing, encryption and decryption. One part of the script generated a key using elliptic curve cryptography. Another part used a hard-coded password to encrypt system files using Blowfish, and a third used RSA keys and digital signatures, message signing and the blake2 hash function to compare various files.” So, those are things that normal software does. It’s stuff that malware does. Why wouldn’t ChatGPT help you to write that kind of software?
ChatGPT doesn’t know that you are doing something potentially malicious, and, in fact, if I were to do a code review with a lot of people and look at software, it would be difficult for many of them to tell whether or not that software was going to be used maliciously without having that context given ahead of time, Tim.
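To make that point concrete, here is a minimal Python sketch of the kind of ordinary cryptographic routine the Check Point example describes: generating an elliptic curve key and comparing files with BLAKE2 hashes. It is purely illustrative and assumes the third-party cryptography package; the function names and structure are ours, not Check Point's findings or ChatGPT's actual output.

```python
# Illustrative sketch only -- ordinary crypto building blocks, not malware.
# Assumes the third-party "cryptography" package (pip install cryptography);
# blake2b is in the Python standard library.
import hashlib
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import serialization

def generate_ec_key() -> bytes:
    """Generate an elliptic curve private key and return its PEM encoding."""
    key = ec.generate_private_key(ec.SECP256R1())
    return key.private_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PrivateFormat.PKCS8,
        encryption_algorithm=serialization.NoEncryption(),
    )

def blake2_digest(path: str, chunk_size: int = 65536) -> str:
    """Compute a BLAKE2b hash of a file, reading it in chunks."""
    h = hashlib.blake2b()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def files_match(path_a: str, path_b: str) -> bool:
    """Return True if two files have identical contents."""
    return blake2_digest(path_a) == blake2_digest(path_b)
```

Nothing in the snippet is malicious on its own; as Jason says, it is the intent and the surrounding context that make the same building blocks malware.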
-
Tim Callan
Absolutely. Malware is just software used maliciously. Yes. That’s where the name comes from.
-
Jason Soroko
It’s so funny because now we are relearning a lot of very old lessons, but somehow popular culture is still shocked that ChatGPT can do these things. Oh my goodness, ChatGPT is writing malware. Well, no. ChatGPT is writing software. It’s not super great at it yet, but it might become really great in the future, and maybe even the malware it can write will be better down the road.
-
Tim Callan
The fact that ChatGPT is writing software is the headline here. I mean, malware is just another kind of software, and how effective that malware is, is a function of how effective any software is that ChatGPT is creating. And of course it’s gonna get better. The leaps and strides we’ve seen on all of this stuff are amazing, and of course it’s gonna get better. I’m reminded of a kind of old science-fiction trope. There was a book by Neal Stephenson a long time ago called The Diamond Age where he’s got this situation where the bots are at war with each other, and they are updating themselves and creating themselves, and humans can no longer understand what they are doing. The bots are doing it all themselves against each other, and they are in this constant, very fast war that isn’t being engineered by a person, because a person simply couldn’t keep up with the pace of the change. Good job, Neal, writing about this in the 1990s, but you can really imagine that as the outcome of this kind of thing. Not just ChatGPT but its ilk, as progress continues. We are gonna get to the point where the software writes its own software as well as we do and more quickly, and then ultimately better than we do and more quickly.
-
Jason Soroko
It very well could be, Tim, and think about how well a computer could write software that avoids things like memory exceptions, because it can hold a lot more things in its head, to use the human analogy.
-
Tim Callan
Its attention never lapses. Yes.
-
Jason Soroko
And its ability to not make logical errors is substantial. I mean, unless you forcibly ask it to make them, it probably will write things in such a way that it has thought about a lot of things ahead of time.
So, Tim, I gotta tell you, I had my own experiences in terms of getting ChatGPT to author software for me. I was more than amused. In fact, I was kind of impressed. I asked it to do something fairly basic, which was to give me example scripts in Python and Go language and a few other languages that I’m kinda handy in. I’m not an expert in any of them, but I’m certainly handy in them and I asked it to actually do an evaluation of an SSL certificate from a website. So, in other words, reach out to a website, grab the certificate and give me attributes about it. Parse it and tell me what is in there.
And I looked at the code and, darn tootin’, Tim, it was very accurate. Like I could literally copy and paste that code and become very productive with it if I didn’t have those functions already stashed away in my files. So, I can see why this is used, because I can use human language, English that is easy for me to say and type, and the output is just some darn good code. Now, of course, we already have millions and millions of posts on message boards where I can copy and paste very good code that was written by a human being, but there I am depending on search engines, and I am depending on a lot of reading and sorting out. ChatGPT kind of short-circuits a lot of that and just allows me, in my own language, to enter input into a prompt and get out code snippets and sometimes fully baked code, long sections of code that are not bad and sometimes really good. So, Tim, I’m not saying this from a position of, I’ve never used this before, it seems interesting. It’s like, no. I’ve used this and it’s more than interesting. In fact, on my next little coding expedition, I’m opening up a ChatGPT prompt right away.
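For the curious, here is a minimal sketch of the kind of certificate-inspection script Jason describes, using only the Python standard library: connect to a site, grab the certificate, and report a few attributes. The host name and the exact fields shown are illustrative assumptions, not the code ChatGPT actually produced for him.

```python
# Rough sketch: fetch a site's TLS certificate and print a few attributes.
# Standard library only; host name and selected fields are just examples.
import socket
import ssl

def describe_certificate(hostname: str, port: int = 443) -> dict:
    """Perform a TLS handshake and return key attributes of the server cert."""
    context = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()  # parsed dict for the validated certificate
    return {
        "subject": dict(rdn[0] for rdn in cert["subject"]),
        "issuer": dict(rdn[0] for rdn in cert["issuer"]),
        "not_before": cert["notBefore"],
        "not_after": cert["notAfter"],
        "serial_number": cert.get("serialNumber"),
        "subject_alt_names": cert.get("subjectAltName", ()),
    }

if __name__ == "__main__":
    # Example host is illustrative only.
    for field, value in describe_certificate("example.com").items():
        print(f"{field}: {value}")
```

Run it against any HTTPS host and it prints the subject, issuer, validity dates, serial number, and subject alternative names of the certificate the server presents.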
-
Tim Callan
I love that. And of course, one of the things you’ve talked about is that you looked at it, you scrutinized it, and you made sure that you felt it was working correctly. I think there is a lot of that going on right now with people, but I could also imagine getting to the point where ChatGPT has such a high success rate that we are perhaps prepared to take the gamble that it got it wrong, which it usually didn’t, and just trust it. I can even imagine getting further than that. Humans are prone to error too, and we could get to the point where ChatGPT is actually more accurate than your average human, which doesn’t mean that it’s not able to commit an error, but it does mean that it’s less likely to commit an error than you are. These are realistic benchmarks that we will get to.
-
Jason Soroko
Absolutely, Tim. I think for non-software developers, when you hear about asking ChatGPT to write software for you, or to write malware, same thing, a lot of people assume you are expecting it to write fully-baked software from front to back, and that’s probably not the way it’s going to be used. It will probably be used the same way I used it, which is, hey, just give me a snippet, because I could go and look this up, but if I can just type my request into a ChatGPT prompt and get it quickly and have it be pretty darn accurate, well, I’m gonna use that. I’m betting that ChatGPT is less used to produce fully-baked software or malware right now. It’s really just speeding up the research process and speeding up the reference process within software making.
-
Tim Callan
But you could also imagine that this would lower the bar for what you need to develop these kinds of things. You and I talk a lot about script kiddies. The basic idea behind a script kiddie is that you have a certain amount of computer skill, because you are able to do this, but you don’t necessarily have the skills to develop the attack on your own. So what you do is you buy the attack from someone else who developed it. That person is in the business of developing attack kits, if you will, toolkits, and selling those toolkits to other people; that’s how they make their money. The other people purchase the toolkits and actually go execute them in the real world. So you have to have a certain level of computer science ability, but it is less than what is required to develop the attack from scratch. I could imagine a scenario where something like ChatGPT is a direct contributor to that. I’m not a complete naif. I’m able to bodge some things together and make them basically work and bodge my way through it, but for the hard stuff, I go to ChatGPT and I get something and I try it, and if it’s working, then I say, oh great. If it isn’t, I rewrite my prompt and I try it again, and I get it to the point where it’s working, and now it’s kind of like you have a homemade attack toolkit.
-
Jason Soroko
Absolutely. I would even say ChatGPT is phenomenal at helping to speed up your bodging, too, honestly.
So, just for the developers who are listening right now: Tim, you and I not long ago talked about Microsoft making code signing a first-class citizen right within Visual Studio, the big IDE, the development environment for Microsoft. And I foresee, if Microsoft hasn’t done this already, you can credit me with this idea, but I would say with Microsoft’s gigantic investment in ChatGPT recently, which you can go and do an internet search on, I wouldn’t be surprised if Microsoft puts ChatGPT, or whatever other AI algorithm they want to use, directly into Visual Studio so you don’t even have to leave Visual Studio to be doing this kind of software development, Tim.
-
Tim Callan
Wow. Sure. Yes. I didn’t think of it until you said it but it makes perfect sense. Perfect sense.
-
Jason Soroko
So, right now, Visual Studio has capabilities such as IntelliSense, which are incredibly productive tools that help to finish typing out variable names and all these things. Being able to enter two letters and have your variable fully typed out, to me, isn’t even just about making you a faster software developer. It reduces typos. Reduces errors. It’s fantastic. It’s extreme autocorrect, but for software developers. And I can see how AI could even be used to be like, oh, I see what you are doing here. It’ll take whatever you’ve got as just a big input and constantly be sending it back to the AI, and then your AI is like, oh, I know what to do next. Can I suggest the next three, four, five lines of code? Can I suggest the next 30 lines of code?
-
Tim Callan
A step ahead. Software development type-ahead.
-
Jason Soroko
I think this is what’s coming, Tim. And so, therefore, it’s not just about, geez, I wonder if I can type in a human prompt and have it come back with some code. I think we are gonna enter the era of, oh no, it’s literally gonna become your fourth and fifth and sixth hands while you are typing out code, and it’s gonna make software developers incredibly productive. That’s my hope.
-
Tim Callan
Wow. Ok. Yes. Once again, every time you turn around there is like a new implication that we haven’t thought of before and I think this is a very interesting one.
-
Jason Soroko
You got it, Tim. And I think with this subject, we’ve unleashed the real power of what computers can do for us. I know a lot of people are scared. Oh my goodness, my job. My poor job. But you know what? I think all the way back to the Industrial Revolution, where these questions were asked, it just shifts what human beings are doing toward what they are truly best at. And that just never ends.
-
Tim Callan
It’s removing the rote work and moving people up to the higher, more difficult work. The stuff that really requires our special, amazing brains. Absolutely. That certainly is a function of what’s going on here. And of course, at the same time, that means we have to adjust to it. And if we don’t adjust to it, that’s how we can be hurt.
-
Jason Soroko
You got it, Tim. For all those people out there who are language cultists, and you all know who you are, I would like to make the argument that, you know what, I’ll just pick whatever language is best for the job, because of its capabilities and the way that it’s angled, and then basically just utilize AI even if I’m not a true expert in the language. Maybe I’m not a programmer within a language, but I’m very, very good at logic. Maybe, maybe, Tim, in the farther future, rather than being programmers, we become logicians, if you will. That to me is the deeper future with AI.
-
Tim Callan
You’ve said this a number of times in the past, Jay, and I agree with you, which is that learning how to write prompts suddenly becomes a very valuable skill. Figuring out how to write a prompt that gets you exactly what you want out of the AI becomes the new skill, and then, of course, I think, can I ask an AI to write the prompt? And where does that end? It’s turtles all the way down.
-
Jason Soroko
Tim, this is it. Right now we are stuck with the prompt, but I think where we are gonna go is we are absolutely gonna enter a world where even the prompts are written for us, and the human being is where they truly belong, which is at the meta level. The highest level of thinking: intention. What is my intent? What is the desired outcome? I will describe my outcome, and AI will help to get me there. That’s where human beings belong.
-
Tim Callan
I love it. Alright. So, we started with malware and we wind up with some very, very deep thoughts but that’s what we’re for. We’re humans. We are supposed to have the deep thoughts. Let the computers do the easy stuff.