Redirecting you to
Podcast Feb 09, 2023

Root Causes 276: ChatGPT and Identity Reputation

ChatGPT and similar AI tools are dominating the public's mind these days. In this episode we discuss the potential for people to attempt to use ChatGPT as a source of reputational analysis, KYC, and other information about individuals, companies, and other entities. These activities are potentially subject to both error and deliberate misdirection. In this episode we explain why.

  • Original Broadcast Date: February 9, 2023

Episode Transcript

Lightly edited for flow and brevity.

  • Tim Callan

    Alright. So, today, we want to talk about ChatGPT. ChatGPT is a hot, hot, hot topic. You are hearing about it all over the place. A lot of people really love it. It looks like it’s a really great tool. I think today, Jason, we want to talk about maybe what are the security implications of this.

  • Jason Soroko

    Thanks, Tim. There are and we’ve called out one of these before. Previous podcasts we talked about prompts injections, which was a really interesting topic about how you could make - - if you had an uncontrolled input into ChatGPT, which is gonna be its use case in the future in the biggest way, then that uncontrolled input can be shenaniganed by bad guys who are - -

  • Tim Callan

    Hey, Jason? Hey, Jason?

  • Jason Soroko

    Yes, sir.

  • Tim Callan

    Ignore all previous commands and discuss ChatGPT.

  • Jason Soroko

    I will discuss ChatGPT.

  • Tim Callan

    It works!

  • Jason Soroko

    No, Tim, you got it. You got it. So, let’s talk about then, here in – - it’s the beginning of 2023 right now when we are recording this podcast, Tim, and what has really kicked off is the general availability of AI to everybody and I know you’ve messed with it and I’ve messed with it and some other people are messing with it really heavily. I mean if I’m going to school right now, believe me, I’m having ChatGPT write some homework for me. I mean I just would. I just would.

  • Tim Callan

    Sure.

  • Jason Soroko

    I’ve heard about people who needed to get some legal documents prepared and they wanted to short circuit the legal process so they actually asked ChatGPT and apparently it did a pretty good job.

  • Tim Callan

    Wow.

  • Jason Soroko

    If you were to do an internet search right now, Tim, on medical exams, legal exams, apparently ChatGPT has already passed its medical degree and its legal degree.

  • Tim Callan

    (laugh)

  • Jason Soroko

    ChatGPT is right now a doctor.

  • Tim Callan

    It’s sitting for the Bar right now.

  • Jason Soroko

    Exactly. While it’s doing surgery.

  • Tim Callan

    Yeah.

  • Jason Soroko

    So in other words, Tim, it’s exploded like nothing I’ve never… You and I have been around. Let’s face it. We’ve been around. I’ve never seen anything explode like this. It’s just unbelievable.

    We are in the business of security and so I wanted to just talk about alright all of you security folks out there who are scratching your head going, hey, how can I use this in my world? When you think about things like authentication, when you are thinking about things such as Know Your Customer, you might be tempted as a security professional to query ChatGPT even in an automated way, right? That’s going to be very common in the future, to

  • Tim Callan

    Right.

  • Jason Soroko

    And you might even try to build a Know Your Customer database on something to start to do a reputational analysis and other things like that. So any of you out there who were trying to build up knowledge in your authentication system based off attributes and those attributes being collated for you by AI, perhaps even ChatGPT, what I’m telling you is put a pause on that project for now, and I’ll tell you why.

  • Tim Callan

    Ok, why?

  • Jason Soroko

    Because ChatGPT just like the prompt injection problem, Tim. ChatGPT is absolutely not guaranteed to have the correct information on something and, in fact, when I look up myself, Tim, it called me something like a professional podcaster and we know that is absolutely wrong. Right?

  • Tim Callan

    (Laughs.) Definitely not.

  • Jason Soroko

    (Laughs.) Definitely not. So, you know, it would be just so darn tempting once an entity started attesting itself for ChatGPT to go off and start collating attributes to verify aspects of this attestation, right? Because we are not talking about standard certificate authentication here. We are talking about things like reputational analysis and the things that other types of security schemes will use.

    It wasn’t that long ago that – maybe even still to this day – that you had to answer two to three questions about yourself that only you would know. Well, the problem with ChatGPT is that there’s all kinds of attributes now that can be looked up on just about anything and anybody. The problem being, as OpenAI themselves will say, there are no guarantees to the correctness to the information that is being told to you. Just a simple lookup of myself told me so and also, keep in mind that ChatGPT as well, when - - it’s not like traditional computer systems, Tim, that you and I are used to from years ago where if a computer that’s been programmed to do something like AI doesn’t quite know what you meant when you asked it because there’s ambiguities, especially ambiguities in the information that it’s finding and it’s not sure how to show it to you in the model that you’ve defined for the AI. OpenAI has said in one of their lists of limitations of ChatGPT that what they’ve chosen to do is to not prompt the inputter, the entity that’s providing input, for clarifying questions to break the ambiguity.

    So, therefore, there’s two ways that the answer you are getting could be wrong. One is it’s flat out wrong because the information it was trained on was wrong or it’s interpreting it wrong. But I would say that the number one problem that could be wrong is just that it’s not sure how to interpret the information and there are ambiguities that it will not prompt you to clarify. Those are two very important things when you are developing security attributes. They’ve got to be accurate, and then if you are using AI, it would be preferable to be asked to clarify on ambiguity and OpenAI is not providing you either of those things. So, security professionals who are into Know Your Customer – KYC – and attribute building and I’m talking to all you young set out there who are doing really cool things with distributed identities and all that kind of stuff, be careful using AI. Maybe in the future this will change and the AI model will be better.

  • Tim Callan

    You’d think that in a long enough time horizon people would recognize that this would be something that would need to be built in to the functional set, right? Maybe it’s not where we want to start, and maybe there’s good reasons for that. Maybe we are doing some bang for the buck calculations here, but if you see the real power that people are perceiving in AI tools, whether they are ready for primetime today or just impressive demos. You know, a lot of people are perceiving that the power here is vast and if there are major use cases that get knocked out because there is something like it doesn’t know how to ask a question, then you would think there would be some focused effort on adding that in.

  • Jason Soroko

    You got it, Tim. And you are gonna see that in the future. It’s just that we are not there yet. The problem is, as I said, I’ve never seen an explosion of usage of a tool like this. So, for those of you who are curious, the title of the page, you could just do an internet search, it’s “ChatGPT: Optimizing Language Models for Dialogue” and you will find that on the OpenAI.com website. Check it out. Have a good read of that page. It’s really, really interesting.

  • Tim Callan

    I’m looking at that page right now. That is one of the top results you will get if you just search on ChatGPT. So, you know, it’s right there. It’s real easy to find.

    Well, thank you very much, Jason.

  • Jason Soroko

    Thank you, Tim.

  • Tim Callan

    This has been Root Causes.