Interested in going full-time bug bounty? Check out our blueprint!
May 16, 2024

Episode 71: More VDP Chats & AI Bias Bounty Strats with Keith Hoodlet


Episode 71: In this episode of Critical Thinking - Bug Bounty Podcast, Keith Hoodlet joins us to weigh in on the VDP Debate. He shares some of his insights on when VDPs are appropriate in a company's security posture, and the challenges of securing large organizations. Then we switch gears and talk about AI bias bounties, where Keith explains the approach he takes to identify bias in chatbots and highlights the importance of understanding human biases and heuristics to better hack AI.

Follow us on twitter at: @ctbbpodcast

We're new to this podcasting thing, so feel free to send us any feedback here: info@criticalthinkingpodcast.io

Shoutout to YTCracker for the awesome intro music!

------ Links ------

Follow your hosts Rhynorater & Teknogeek on twitter:

https://twitter.com/0xteknogeek

https://twitter.com/rhynorater

------ Ways to Support CTBBPodcast ------

Hop on the CTBB Discord at https://ctbb.show/discord!

We also do Discord subs at $25, $10, and $5 - premium subscribers get access to private masterclasses, exploits, tools, scripts, un-redacted bug reports, etc.

Sign up for Caido using the referral code CTBBPODCAST for a 10% discount.

Today’s guest: Keith Hoodlet

https://securing.dev/

Resources:

Daniel Miessler's article about the security poverty line

https://danielmiessler.com/p/the-cybersecurity-skills-gap-is-another-instance-of-late-stage-capitalism/

Hacking AI Bias

https://securing.dev/posts/hacking-ai-bias/

Hacking AI Bias Video

https://youtu.be/AeFZA7xGIbE?si=TLQ7B3YtzPWXS4hq

Sarah Hoodlet's new book

https://sarahjhoodlet.com

Link to Amazon Page

https://a.co/d/c0LTM8U

Timestamps:

(00:00:00) Introduction

(00:04:09) Keith's Appsec Journey

(00:16:24) The Great VDP Debate Redux

(00:47:18) Platform/Hunter Incentives and Government Regulation

(01:06:24) AI Bias Bounties

(01:26:27) AI Techniques and Bugcrowd Contest

Transcript

Justin Gardner (@rhynorater) (00:02.179)
Alrighty, we have another episode of Critical Thinking here today, and I want to start this one off with a shout out to my co-host Joel Margolis, who has not abandoned us today despite just having a tractor delivered to his house.

Joel Margolis (teknogeek) (00:15.678)
It took everything in my power. Everything in my power. I'm just sitting here thinking like about roaming around the yard on my new tractor. And you're like, can you talk about security?

Justin Gardner (@rhynorater) (00:23.715)
Yeah, I can tell because you sent me so many pictures of this tractor and videos of this tractor. So shout out to Joel. Also shout out to Mr. Keith here. Keith, welcome to the podcast, man.

Keith Hoodlet (00:24.787)
Living in the Northeast. Yeah.

Keith Hoodlet (00:35.891)
Hey man, long time listener, first time caller. As I think I've said to you guys before, this is a public service that you're doing with this podcast, so it's a real honor to be here, thank you.

Justin Gardner (@rhynorater) (00:42.787)
Mm. Awesome. Thanks so much. I appreciate you listening. And yeah, I've seen you engaged around the community too, so always appreciate that as well. Keith, our two primary topics for this episode today are going to be AI bias bounties and that whole experience you had with Bugcrowd and sort of that very trend-setting program that you guys were a part of where you were doing some AI bias testing,

and getting bounties for that, which is really cool, and a continuation of the VDP chat, because you had some hot takes, man, after the VDP episode that you hit me with, and I was like, wait a second, I want to hear these two fight about stuff, so...

Keith Hoodlet (01:18.481)
hahahaha

Joel Margolis (teknogeek) (01:19.23)
I'm sitting here.

Joel Margolis (teknogeek) (01:25.182)
I'm sitting here and I'm talking with Justin and Justin was like, Joel, did you read through this doc yet? And I was like, no. He's like, he's like, there's a, he wants to talk to you about VDP stuff. And I was like, I just like pulled up and I started reading. I was like, so, so we'll, whenever you want it, whenever you want to kick that off, we can, we can set that for. Yeah.

Keith Hoodlet (01:33.744)
Ha ha ha ha!

Justin Gardner (@rhynorater) (01:37.987)
Hahaha!

Justin Gardner (@rhynorater) (01:43.395)
Yeah, so.

Keith Hoodlet (01:44.24)
It's gonna be good. It's gonna be good.

Justin Gardner (@rhynorater) (01:47.843)
Yeah, well, we're gonna do the AI bias bounty things first, but I will say just as a teaser, there is a line in the doc that Keith wrote that starts with, my sweet summer child, Joel. And so I'm expecting some good stuff out of the rest of this pod. But Keith, I wanna get an intro from you first for the people that aren't familiar with your work. So maybe you could give us a little bit of...

Keith Hoodlet (02:01.296)
HAHAHAHA

Justin Gardner (@rhynorater) (02:15.587)
career history, how you ended up where you're at in InfoSec, and what your experience is with bug bounty.

Keith Hoodlet (02:20.079)
Yeah, yeah, yeah. So I'll go all the way back. You know, it was sort of funny when we were reading through some of like the prep stuff. So my first hack was in 1997. I was like 12 at the time, and I had just gotten a new computer, a 533 megahertz PC with a 28.8 dial-up. Like, you know, those were the days. Yeah, blazing fast. And so Diablo had just come out and Battle.net was a thing.

Justin Gardner (@rhynorater) (02:29.155)
Yeah.

Justin Gardner (@rhynorater) (02:36.227)
Oof.

Joel Margolis (teknogeek) (02:39.166)
Blazing Fest!

Keith Hoodlet (02:48.079)
Yeah, like original OG number one Diablo. And the funniest part was, on the back of the CD case for the game, they had a trial code that you could go in with a trial account. Well, I learned that you could use Telnet and take any account name that wasn't currently logged in as a username on the trial account and just log in. So I did what any enterprising juvenile delinquent in the late 90s would do, and I basically wrote VBScript

Justin Gardner (@rhynorater) (03:07.459)
Wow.

Keith Hoodlet (03:16.302)
to create these bots that would connect as many connections as my 28.8 dial-up could connect. It was the best when I was losing StarCraft games later, where I would just like spin that up and spam my friends so they literally just crash offline. And I was like, yeah, I win the game by default. So I caused trouble back in the late 90s. And one of the things that I say to people is the whole reason the security industry even exists today is because disk space got cheap. That's it.

Justin Gardner (@rhynorater) (03:19.651)
my gosh, Keith.

Justin Gardner (@rhynorater) (03:28.131)
no.

Justin Gardner (@rhynorater) (03:33.667)
guys.

Keith Hoodlet (03:45.422)
It wouldn't exist in the fashion that it exists today unless you could log everything, but back then you couldn't because disk space was expensive. So I got away with a lot of things that I'm, you know.

Justin Gardner (@rhynorater) (03:45.763)
Okay?

Justin Gardner (@rhynorater) (03:53.635)
I believe that, I believe that. Given that one example, I believe that. And I was at that, in 1997 at that time, I was the ripe age of one. And Joel, I don't, were you even born yet? How old are you Joel?

Keith Hoodlet (03:57.517)
Ha ha ha ha.

Joel Margolis (teknogeek) (04:04.574)
I was born in November 1997, so when Battle.net launched, so did I.

Justin Gardner (@rhynorater) (04:08.899)
Wow.

What?

Keith Hoodlet (04:11.533)
Yeah, and it's funny because it's like it doesn't even exist in the same way that it used to. It used to be like an mIRC chat room, where basically, if you were in a channel and you were the top person on the channel, you had the banhammer. And so there was this whole scene at that time where basically all of these different clans would spin up bots and try to take over each other's channels. So it's funny, I was looking at, like, you know, one of the producing studios that you guys work with.

Justin Gardner (@rhynorater) (04:34.371)
Wow.

Keith Hoodlet (04:38.764)
Their logo is the same logo as, like, the super badass hackers back then called Illuminati, where they had the closed square brackets with like an eye in the middle. Yeah, and they had an mIRC channel, which I joined on Windows 95. I ended up getting WinNuked offline because they had my IP address, back before Windows 98 even came out. And so, anyway, that's... I'm old. So let's fast forward though. So, so.

Justin Gardner (@rhynorater) (04:48.483)
Really? Wow.

Justin Gardner (@rhynorater) (05:05.571)
Okay, alright, alright, alright.

Keith Hoodlet (05:08.748)
Fast forwarding, went to school for psychology because I was a super nerd and could hack things, but I couldn't do people. And so I was like, I got to be able to do people to be successful in my life. So.

Justin Gardner (@rhynorater) (05:18.595)
I like that, I like that. You can already do the tech stuff, so you're like, all right, where can I fill myself out a little bit here?

Keith Hoodlet (05:24.107)
Right, right, right. So I graduate in May of 2009, which is the bottom of the stock market after the housing market crash. Right, I mean, you know, Bug Bounty didn't exist, right? That was like part of it too. And the funny thing is, is at that time, like the number one company that was hiring people with psychology degrees that were technical was Facebook.

Justin Gardner (@rhynorater) (05:30.403)
oof

Joel Margolis (teknogeek) (05:30.494)
yeah, that's a great time to enter the economy. Yeah.

Justin Gardner (@rhynorater) (05:48.131)
Mmm.

Keith Hoodlet (05:48.171)
Now, I never went to work at Facebook, but like they've hired so many people with psychology degrees that I'm sure someone's gonna listen to this and be like, let's go hire this guy. Or maybe not, I don't know. But anyway, so a few years pass, I'm working shitty jobs, just trying to make ends meet, helping my wife through her master's degree. And eventually I go back to university for a computer science degree, which I probably should have done in the first place. Like that just made a lot of sense. And I had been following the security scene,

Joel Margolis (teknogeek) (05:56.574)
Hehehe

Justin Gardner (@rhynorater) (05:57.571)
Mm, mm.

Keith Hoodlet (06:17.866)
for years at that point, listening to podcasts like Paul Asadoorian and stuff with Security Weekly, Patrick Gray's stuff on Risky Business, and started doing collegiate cyber defense competitions as like the old man on the team. I was a decade older than everybody else on the team. And yeah, it was just like, through that I...

Justin Gardner (@rhynorater) (06:32.995)
Nice. Whenever they see anything legacy, they're like, Keith!

Keith Hoodlet (06:39.306)
Yeah, right. It's like, hey, what is this thing? I know. Yeah. No kidding.

Justin Gardner (@rhynorater) (06:42.883)
What is Telnet?

Joel Margolis (teknogeek) (06:43.824)
Yeah.

Keith, we're trying to put a back door on a Windows 95 box.

Justin Gardner (@rhynorater) (06:49.251)
Hahaha!

Keith Hoodlet (06:50.025)
Can you help us with that? It's like, yeah, I got you. I kid you not, there was software back in those days that you could like basically take a GIF or a JPEG image or something and like spin it into like a backdoor, like a reverse connection, and then just send it to your friends on AOL Instant Messenger. Like, yeah, it was, it was, again, logging was expensive. Don't do this at home today, kids. You'll get, you'll get in jail. So.

Justin Gardner (@rhynorater) (06:53.411)
my gosh.

Justin Gardner (@rhynorater) (06:59.331)
Mm.

my gosh dude. This is not good.

Justin Gardner (@rhynorater) (07:13.731)
Mm. Mm.

Keith Hoodlet (07:16.904)
Anyway, so I got out into industry after meeting an alumnus who started a pen testing firm. I was brought in to be a pen tester. Really, they wanted me to be a Splunk architect and go and install Splunk for a bunch of small, medium businesses and government agencies. And I was like, no, this sucks. So eventually went to Rapid7 for a little while, mostly focusing on AppSpider at the time, it's now called InsightAppSec, to like support their customers, and got involved with BSides Boston

Justin Gardner (@rhynorater) (07:23.651)
Mm -hmm.

Keith Hoodlet (07:44.551)
as an organizer and volunteer. And then eventually I was reached out to by the folks at Bugcrowd, and they're like, hey, we see you're doing a lot of stuff, like, you're at Rapid7, you're working on the AppSpider product, so, you know, a bit of web AppSec, like, would you interview for this role on trust and security? And I was like, sure. And right around that time, I learned that my training for offensive web hacking was accepted to DerbyCon. Now, I don't know if you guys remember DerbyCon, but it was like,

Justin Gardner (@rhynorater) (08:11.107)
Mm, yeah.

Keith Hoodlet (08:11.879)
one of those, it doesn't exist anymore, for anyone that goes and looks for it, right? But it's, yeah, it's sad, really. But it sold out within minutes most years for the conference. And so to be accepted to train at that conference was like, holy cow. And that was sort of the year that I feel like I made it in security, because I interviewed with Jason Haddix, who's now my good friend and my sensei, and I told him I'm training at DerbyCon, and he's like, holy crap, you're training at DerbyCon? I was like, yeah, is that, like, is that?

Joel Margolis (teknogeek) (08:17.246)
I didn't know that, that's too bad.

Keith Hoodlet (08:41.638)
as big of a deal as you're making it out to be. And like, yeah, apparently it was. And so eventually I joined Bugcrowd, worked for Jason. And that's when I really got into the bug bounty scene, mostly hacking on like a shall-not-be-named automotive company, some networking gear companies, like a few other things. But when you work on a bug bounty platform, and Joel, I think you've got some experience there too, like you can't work on paid programs usually for that platform because you have too much insider info.

Justin Gardner (@rhynorater) (09:08.419)
Hmm.

Joel Margolis (teknogeek) (09:08.702)
Yeah.

Keith Hoodlet (09:11.558)
And then I started a podcast, Application Security Weekly, with Paul Asadoorian.

Justin Gardner (@rhynorater) (09:15.907)
Yeah, I wanted to ask about that, man. So when, do you remember when it was that you started that podcast? What year was it? January 2018. Okay. So not too long ago. I was going to say, like, if you were back in the OG days of podcasting, that would have been pretty crazy as well. But yeah, it's definitely an experience, and I can tell, man, just the way you show up to this podcast, you've definitely got some podcaster vibes about you.

Keith Hoodlet (09:23.781)
January of 2018. So, yeah.

Keith Hoodlet (09:32.085)
Hahaha!

Keith Hoodlet (09:45.348)
I've done it before. I mean, it's a lot of fun too because you can have really great debates. You can meet really interesting and talented people, like as you have on your show all the time, right? And so yeah, that's sort of like 2017 DerbyCon. That's when I sat down and chatted with Paul Asadoorian to start the podcast. Yeah.

Justin Gardner (@rhynorater) (09:49.379)
Mmm. Yeah. It's great, man. It's wonderful.

Justin Gardner (@rhynorater) (09:59.915)
Okay, I'm sorry. One more question about the podcast thing though. Why did you, because you're not actively doing it anymore, right? Why did you drop it?

Keith Hoodlet (10:08.372)
Yeah, I shouldn't have is maybe the first thing that I'll say. But part of it was because, you know, in 2018, I joined Thermo Fisher Scientific. So background about like how that all happened. So at DerbyCon,

Justin Gardner (@rhynorater) (10:12.835)
Okay

Justin Gardner (@rhynorater) (10:19.747)
Mm -hmm. Mm.

Keith Hoodlet (10:26.052)
I'm talking with the leader for their risk and governance slash vulnerability management team at Thermo Fisher about starting a bug bounty program. And I'm working at Bugcrowd and convinced them that they should do that. And they do like a two-week scoped test. I also met with Paul Asadoorian and officially agreed to start Application Security Weekly there. Everything happened because of that conference for me. The bug bounty program for them did not go as planned. It did not last two weeks. I'll just put it that way.

Justin Gardner (@rhynorater) (10:46.915)
Mm.

Justin Gardner (@rhynorater) (10:50.499)
Mm. The pen test? Mm, okay.

Keith Hoodlet (10:53.699)
The pen test, yeah, the two-week scoped test that they had, because they didn't have a proper AppSec program, and they got wrecked. Like, wrecked. And so I start the podcast, and this guy Brian Anagaki, a good friend of mine, listens to the podcast, gives me some feedback, and then we just start talking about DevSecOps and AppSec and doing security, and he's like, you need to come work here and, like, build the program here at Thermo Fisher. And I was like, let's talk.

Justin Gardner (@rhynorater) (11:01.379)
Mm, I believe that.

Justin Gardner (@rhynorater) (11:18.019)
Mm. Mm. It's a good offer. Mm.

Keith Hoodlet (11:21.25)
Yeah, right, you know, why not? So I literally built it from the ground up. I was employee zero on the AppSec team at Thermo Fisher. And yeah, right, I mean, a Fortune 100 company, global footprint, like, not a bad space to really cut your teeth and build some opinions about these different things at scale, right? Because I think at the end, so to answer your question though.

Justin Gardner (@rhynorater) (11:28.131)
Wow, that's a place to be.

Keith Hoodlet (11:48.449)
The reason that I stopped podcasting is I was getting promoted from manager to senior manager about a year later, a lot more responsibility, focusing on career. I was putting a lot of effort into the podcast and it just like the return on investment didn't make a ton of sense at that point in time. Although quite frankly, from a reputation and brand standpoint, it was a good thing. And at some point I'd like to get back into it probably because I enjoy the conversations, right?

Justin Gardner (@rhynorater) (11:48.451)
Mm. Mm.

Mm.

Justin Gardner (@rhynorater) (12:11.587)
Mm.

Yeah. Yeah. Podcasting is a long game, man. I know for the first year, you know, until we actually got a team around us and that sort of thing, like, Joel and I were really, really hustling with this pod, and it was a lot of work, man, you know, editing all the things, running the social media, doing this and that on the Discord, planning the episodes, getting the guests coordinated. I mean, it's just, it's a lot. So really grateful to the team that we have now because it makes it a lot more.

Keith Hoodlet (12:26.433)
You can tell.

Justin Gardner (@rhynorater) (12:43.171)
We're at a point now, I think, where we're over the hump. It's a little bit more sustainable. We're in this rhythm where we can do it long term and pay the long term costs or whatever. And you still get to have the fun conversations. And most of that stuff is dealt with, which I really appreciate. So yeah, it is. Mm. Mm.

Keith Hoodlet (12:59.841)
And it's good for your career long term too. Like you might not always be a full -time bug bounty hunter. Like Joel, you might decide to switch companies that you're working for and like being able to point to your corpus of work and like the knowledge that you've gained, the things that you've shared, like that will forever be valuable. So, and the connections you make, right? Like that's also super valuable.

Justin Gardner (@rhynorater) (13:06.467)
Mm.

Justin Gardner (@rhynorater) (13:11.509)
Yeah. Yeah.

Yeah, absolutely. And we've talked about it on the pod before, but even as a full-time bug bounty hunter, people will message me all the time with bugs and be like, hey, I've got this bug, and can you help me exploit it? And probably 80% of the time it's like, eh, this is kind of a meh bug, it's not really anything. But then that other 20% of the time or whatever, it's like, you do have something good there, and yeah, you're just missing this. And then you get some bounties and.

Joel Margolis (teknogeek) (13:18.366)
Absolutely.

Keith Hoodlet (13:32.16)
You

Justin Gardner (@rhynorater) (13:42.531)
It's great. So it's definitely worked out for me as well as a full-time bug bounty hunter. And also, you know, Joel, I don't know if we ever shouted this out on the podcast, but we talked a couple of weeks ago about how YesWeHack had some awesome swag from the event that they ran. And like a couple of weeks later, the box shows up at my door and it's full of awesome... dude, go get it, man, go get it. And it's got like a bunch of swag in there, really high quality stuff too. So.

Joel Margolis (teknogeek) (14:03.454)
have it right here, right next to me.

Keith Hoodlet (14:10.591)
Dude, that's legit.

Justin Gardner (@rhynorater) (14:11.459)
Check that out. Look at that. Look at that. That's awesome. So shout out to the YesWeHack team. I think we already shouted them out, but yeah, there's definitely lots of benefits to that. So it'd be great to see you get back into the podcasting space, man. See, but now I'm at a little bit of a crossroads, because I'm like, okay, you're talking about, you know, working at Thermo Fisher Scientific, and, you know, now we could just slide right into the VDP section. So I don't know. Talk to me a little bit, because like,

Keith Hoodlet (14:13.471)
Nice!

Justin Gardner (@rhynorater) (14:41.059)
You started a VDP at Thermo Fisher Scientific, and like you said, they got wrecked in that pen test, right? So you don't wanna just join the team there the next day and then spin up a bug bounty program and start handing out $1,000 bounties left and right. So you decided to go to a VDP, and I think this is where a lot of the stuff that we talked about on the pod a couple of weeks ago and your opinion sort of

collide is, like, when a VDP is appropriate and what kind of restrictions are in place around that. And I'll just say, just from my position as a hacker and as, you know, somebody who hasn't dealt a lot with the corporate side, I learned a lot from that discussion that I had with Joel, and then a lot of the conversations that I had with people over DM afterwards. So definitely constantly updating my opinion on all this, but I'm excited to hear your take on, you know, one, how you did it, and what the correct flow was.

Keith Hoodlet (15:35.165)
Yeah, VDPs are one of those things where it's like, I think they have the right place in the industry at times, especially for hackers that want to get into something and they're maybe just cutting their teeth, right? And so it's a nice place to go and get some practice. But what's interesting is like the evolution of VDPs over time, because sort of the thing that I remember getting actually really mad at Bugcrowd about back in 2018, 2019 was,

Justin Gardner (@rhynorater) (15:47.683)
Mm.

Keith Hoodlet (16:00.317)
when they removed points for VDP because I was like, no one has any incentive now to even like report anything to us in the first place. And so anyway, sort of the long story short there is like, even though it was a vulnerability disclosure program and it didn't have any points, we actually did pay bounties to hackers that sort of came and continuously gave us good stuff. So the thinking that we had when we put this in place was sort of, first of all, we definitely know we have too big of a footprint to properly secure.

Justin Gardner (@rhynorater) (16:06.435)
Mmm.

Keith Hoodlet (16:30.173)
when it comes from like a mitigation standpoint, not to mention the product management teams within the various parts of the company. And to give you a sense of like company structure, every business group, which is like four to 12 divisions or something like that, is based around an area of science, and they have thousands of employees in each area. I mean, like Thermo Fisher today is 130,000 people in the company, right? Huge. And so, yeah, I mean, and my team,

Justin Gardner (@rhynorater) (16:57.219)
Holy crap, this is like my whole city.

Keith Hoodlet (16:59.869)
Yeah, my team at the time was 11 people, like at the very highest point of the structure of my AppSec team inside that organization, 11 people. Not a lot of people. So the VDP was set up mostly because, one, we knew we had a ton of bugs out there. In fact, there was an Irish hacker, Noel Safly, who actually, his...

I think it was his fiancée at the time, now his wife, worked in a science lab using devices from Thermo Fisher. And so he went and found firmware out on the internet that we made available for our devices, reverse engineered it, and found database connection string information and all this other stuff, and like...

Justin Gardner (@rhynorater) (17:38.787)
Mm, mm, cool.

Joel Margolis (teknogeek) (17:40.254)
It's always really interesting to hear about these companies, by the way, that like own an industry that you don't really know about. For anybody interested, if you look up Thermo Fisher, their market cap is $225 billion.

Keith Hoodlet (17:46.107)
Yeah.

Keith Hoodlet (17:54.107)
Yep, yeah, with a B, right? It's huge. And I didn't really know about them either, really, until we sold to them at Bugcrowd and then eventually when I joined. But to give you background, for anyone that's like, Thermo Fisher Scientific? If you've ever taken medicine of any kind, you've probably interfaced with a product that touched one of their devices. Like quite literally, they do material and structural analysis, in vitro diagnostics, forensics, like they make

Joel Margolis (teknogeek) (17:57.406)
Whoa, with a B.

Keith Hoodlet (18:22.458)
everything from like masks, gowns, face shields to pipettes, pipette tips, benchtop cycling devices to the super cold freezers that are used to store like COVID-19 vaccines, and everything in between.

Justin Gardner (@rhynorater) (18:33.731)
And at max you had 11 people on the AppSec team.

Keith Hoodlet (18:39.034)
Yeah, yeah, yeah, yeah. So, Noel comes in and...

Justin Gardner (@rhynorater) (18:40.867)
Dude, this is exactly what I was talking about though, Keith. Like that is like very bad, right? How did you, I mean, were you just like screaming the whole time that you were in that organization? Like how did you deal with that?

Keith Hoodlet (18:49.722)
Yeah, I agree. I mean, you eat the elephant one bite at a time, right? Like that's sort of the way that we approached it. And so it was as much as we could scale various things we were trying to implement, that's sort of how we just operate, right? And you sort of accept the risk in many times. It's like, I can't secure everything. So what am I going to do? Well, secure the things that are most important.

the things that generate revenue for the company, top priority. That's just where it is. So Noel found some stuff, reported it to us. Eventually he was emigrating to the US. We hired him on the team, because he's like a crack reverse engineer. Yeah, we want him on the team. We're going to have some actual device security stuff we need to do. And so over time, though, what we ended up doing is we ended up basically saying, look, if a hacker came to us through the VDP and they're like, pay me, it's like,

I respect your thoughts and opinions on this. I get it, right? I'm in that scene, I've done some bug bounty hunting, like I understand the desire to get paid for your work. But we're a VDP and we've never said we're going to pay you. So like, I encourage you to go hack on programs that say they will pay you for your work because if that's your goal, then that's what you should do. If you wanna stay and help us make our company more secure, we appreciate that, we can't guarantee promise of payment, but.

Over time, we were able to actually say, look at all this good work these hackers are doing, like, can I get some budget that I can throw into this and then selectively reward hackers for consistently providing good work to us? There was a hacker from the Middle East that, you know, we were talking about this just before we started recording, who consistently found like high and critical bugs against our stuff, reported them to us, like, wasn't asking for money, and we're like, we're gonna give this guy some money. And so we started doing that. Like, he'd report some things to us and we'd throw him

Justin Gardner (@rhynorater) (20:40.419)
Mm -hmm.

Keith Hoodlet (20:43.479)
you know, five grand here, a few grand there, right? And eventually, when he graduated college, he sent us a photo of him in his graduation robe and cap next to a brand new car that he bought with the bug bounty money we paid him, which felt so good. But we also recognized that if we had paid for every single bug, we could have never even run the program effectively. I know, right? Right? But, you know, I get it.

Justin Gardner (@rhynorater) (20:58.403)
Yeah, that's awesome.

Justin Gardner (@rhynorater) (21:04.291)
He would have sent you a picture with his fleet of Teslas that were...

Keith Hoodlet (21:13.174)
like VDPs, I think, especially for some of the ones that I've interfaced with, like one of the ones I interfaced with really recently. For the pod, I think your audience will appreciate this. So it's a food product, I'll just put it that way. So it's a beverage. So a beverage company has a bug bounty program and there's a lot of them, right? There's maybe a few of them, right?

Justin Gardner (@rhynorater) (21:21.635)
Mm. Mm.

Justin Gardner (@rhynorater) (21:31.299)
Okay, okay. Who could we be talking about here?

Joel Margolis (teknogeek) (21:36.606)
Yeah, a lot, yeah.

Justin Gardner (@rhynorater) (21:39.235)
Continue Keith. Okay.

Keith Hoodlet (21:42.422)
There's a few of them, but a beverage company has this bug bounty program, or VDP, excuse me, and wide open scope, like, or at least a really wide scope. And I found that they have a credit union website. Great place to go hack on, right? So if you ever see a credit union for any of your targets, go test there, because you're probably going to find some good stuff. Well, I go to register an account, and lo and behold, they have just client-side validation on the email. You have to have @companyname.com as part of your email address. And I'm like, Burp, you know, Intruder, just like...

Justin Gardner (@rhynorater) (21:51.267)
Mm -hmm.

Mm -hmm.

Justin Gardner (@rhynorater) (22:10.179)
Mm -hmm. Mm -hmm.

Keith Hoodlet (22:11.894)
or intercept, like, grab all this stuff and change it. When I go to put in the social security number, which is just a random number that I threw in there, it sends back a PUT request that gave me the full name, address, and like account information for this individual who had that social security number registered with this company and their credit union. This is a global corporation here, billions of dollars.

Justin Gardner (@rhynorater) (22:13.667)
Mm.

Justin Gardner (@rhynorater) (22:30.755)
I love how you just threw in a random social security number and it's like, yeah, Bill does actually work here.

Joel Margolis (teknogeek) (22:34.798)
...

Keith Hoodlet (22:36.694)
Yeah, and I'm just like, with that finding, I could have literally just iterated through all nine digits of the social security number for everybody and got all employee information, plus their address, plus their full name, plus all their socials. Like, VDP, all I got was a thanks. That's like a 10k crit, maybe a 20k crit, for like the sensitivity of the things that I could get from that account. So, VDP.

Justin Gardner (@rhynorater) (22:59.235)
Yeah, you'd hope so, especially when a company's that big. So yeah, I think that kind of brings us back to that conversation we were having last time on the pod when we were talking about VDPs, about what kind of companies should be permitted to run a VDP. And I don't know, Joel, you had some hot takes on that last time. What do you think about this massive company? Geez, my light ring just fell.

Keith Hoodlet (23:16.405)
Mm.

Justin Gardner (@rhynorater) (23:27.555)
$250 billion market cap doing a VDP and not paying for bugs like this.

Joel Margolis (teknogeek) (23:36.862)
I hope their cyber insurance is good.

Keith Hoodlet (23:39.221)
That was a huge focus! I'm not gonna lie, that was definitely a huge focus of the CISO.

Joel Margolis (teknogeek) (23:44.798)
Because the reality is that if you're not putting a financial value on that, right? And it's obvious that they have plenty of money. I don't know, it's just kind of irresponsible, in my opinion, right? When you have that much money, it doesn't take a lot of effort to put some focus on your security. And especially as a company that deals with medical and

very sensitive data, right? Like, security should be a focus. And if you go to the shareholders and you're like, we're going to spend, you know, a billion dollars on security in the next five years, I don't think the shareholders can be like, why? Well, why? 'Cause we're a medical company and we deal with a lot of sensitive information. Like, it doesn't really need justification, right? Like, look at all the hacks, look at all the money that was lost in previous data breaches of large companies, and then look at us. Right.

So I think, you know, it's definitely, especially the larger the company, the more difficult it is for me to justify it.

Keith Hoodlet (24:42.354)
Yeah, it -

Keith Hoodlet (24:50.003)
I agree. I mean, like, and to the extent I put something in here in the show notes for us to chat about as well, and I've given it as a talk at a few different places, but I have this analogy that I like to use, which is security is a feature. And I'm usually talking to development teams or product teams when I'm using that as an analogy. But let me give you guys, like, the thing that I ask these product teams.

Justin Gardner (@rhynorater) (25:10.403)
You better, you better, all right, you better talk, Keith, because I don't love this so far. Yeah. Right.

Keith Hoodlet (25:14.29)
So, let me ask you guys, this is the exact same question that I asked these development teams. Do you have on your mobile device, do you have like a banking app or a financial app that you go and you use, right? Would you use it if it had some of the vulnerabilities that you've seen in other banking apps?

Justin Gardner (@rhynorater) (25:26.371)
Mm-hmm. Of course, yep.

Justin Gardner (@rhynorater) (25:36.803)
I have found many vulnerabilities in my own banking app. But yeah, I mean, the thing is, I'd like to answer that question in a way that continues your story, but the fact of the matter is all of these apps, all of these things are gonna have vulnerabilities, right? And I'm not sure that there's a world in which I could use my phone to access my bank without, you know.

Keith Hoodlet (25:55.12)
they are.

Justin Gardner (@rhynorater) (26:04.899)
without knowing, hey, there's gonna be some problem with this stack somewhere.

Keith Hoodlet (26:08.849)
So the rest of the analogy, the rest of the story there is like security is a feature because features from a software development standpoint are funded, they're maintained, they're tested over time. And unless they're truly like turned off, they're continuously like assessed, right? And so my concept of security as a feature is like you, anyone that you talk to assumes that the company is doing these things to make their product secure.

Justin Gardner (@rhynorater) (26:19.235)
Mm-hmm. Mm-hmm.

Keith Hoodlet (26:36.53)
Like it's just, it's an unsaid thing that no customer like goes to these companies and says, is your banking app secure? Like you and I are, all three of us really are just like, yeah, we know that that's not true, right? Like we just know that that's the case. But my wife on the other hand is like, yeah, she just is like, yeah, of course they secured this, right? Like, you know, it's a company that handles my financial information. They must secure this.

Justin Gardner (@rhynorater) (26:46.307)
Mm-hmm.

Justin Gardner (@rhynorater) (26:58.915)
Sure it's secure. Yeah.

Keith Hoodlet (27:05.874)
No, not really. And Joel, to your point as well, like, regulation is the biggest thing that drives security generally, but especially at places like Thermo Fisher, because the Food and Drug Administration here in the United States has a 60-day timeline requirement for any known vulnerability to either be mitigated or remediated within the software for their devices. Like, Food and Drug Administration, FDA.

Justin Gardner (@rhynorater) (27:06.883)
Mm.

Justin Gardner (@rhynorater) (27:29.507)
What industry? really?

Joel Margolis (teknogeek) (27:30.974)
FDA.

Keith Hoodlet (27:32.786)
Yeah, so like anything that you know about in products that are in, especially in the in vitro diagnostic space, which basically means anything where a sample is collected that is invasive in some way or it causes harm in some way, right? Or the outcome of that test could have life -changing meaning to that person. That is like a very, very heavily regulated space. Same with research use. So that's more of like a,

college, university, or even sometimes just R&D labs space, but that also has pretty high regulations. Like that space in particular is really, really sensitive to security. And so in some respects, though, to the point that you just made, Joel, about cyber insurance, right? Well, you have to fix the things that you know about. But what if you don't know about all the things?

Joel Margolis (teknogeek) (28:22.622)
I mean, nobody knows about all things, right? That's why cyber insurance exists. But I think that doesn't excuse you from being negligent on your bug bounty program or your vulnerability program, right? When you're worth $250 billion, you don't just get to be like, well, we don't really want to pay people for vulnerabilities. It's like, that's irresponsible.

Keith Hoodlet (28:24.368)
Sure, but.

Keith Hoodlet (28:31.665)
But.

Keith Hoodlet (28:40.754)
It's a calculus though, it's a calculus, right? And so the way that you calculate out, so say you have $100,000 in a bug bounty budget, for example, right? That's one person on the team that you could hire instead. And then you look at impact of finding, say, let's just use $10,000 crits, right? You'd probably find 10 crits within a few weeks if you look in the right places. And especially if you have a really wide scope, right? Like if you had a really tight scope on a really hardened target, I think,

Justin Gardner (@rhynorater) (28:57.987)
Mm.

Hmm.

Keith Hoodlet (29:08.498)
Bug bounty makes a lot of sense. But if you have a VDP where it's just totally open and you have that much of a footprint to secure against, it's harder to justify paying for every single bug because you know that you can't secure all of those things. And that's where I think like, well, it's okay, it's okay.

Justin Gardner (@rhynorater) (29:21.419)
Hmm. Hmm, all right, all right, this is what I wanted to hear. All right, let's go, let's go.

Joel Margolis (teknogeek) (29:21.79)
I disagree. I disagree. Listen, $100,000 in a bug bounty program goes significantly farther than $100,000 on a salary does, because $100,000 on a salary gets you maybe one person, especially in today's economy. And the reality is that they're probably not even going to be a very senior person either. And so if you want to put in $100,000, let's say you're paying 5K crits, which is pretty normal for

programs nowadays, that is 20 vulnerabilities, 20 critical vulnerabilities, max-payout critical vulnerabilities, on a bug bounty program, versus maybe part of one person's salary, and I don't know if that one person would even find 20 criticals, right?
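The back-of-the-envelope math both sides are using here is just integer division of the budget by the per-crit payout, using the figures quoted in the conversation:

```python
# Rough sketch of the budget calculus being argued: divide a bug bounty
# budget by the payout per critical finding.
budget = 100_000

crits_at_10k = budget // 10_000  # Keith's framing: $10K crits
crits_at_5k = budget // 5_000    # Joel's framing: $5K crits

print(crits_at_10k, crits_at_5k)  # 10 20
```

The same $100K buys 10 to 20 paid criticals, or roughly one hire; the disagreement is about which of those returns more security.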

Keith Hoodlet (30:13.328)
So let me spin it around a little bit though, right? So when you're working at a company, you're also investing in other things: you're investing in a static analysis tool, you're investing in mitigation tools like a web application firewall, or runtime application security protection, or dynamic analysis, or what have you. So I'll give you, sure, operational versus capital expenses, yada yada, right? Like, yes. But let me give you a really good example. Hired a guy straight out of college, his name is (REDACTED). He now works at id Software, the folks that make Wolfenstein.

Joel Margolis (teknogeek) (30:25.246)
sure, but those budgets are separate from hiring.

Keith Hoodlet (30:43.536)
Very first hire that I made on the team. Effectively, if you look at salary plus benefits plus all these other things, like $100K, right around the ballpark of what he would have cost the company to hire. He automated all of our dynamic analysis program.

He helped implement all of our static analysis program and helped automate the deployment of all of our web application firewall. So at the end of the day, he probably found, through all of his work and his investment, hundreds of criticals off of the implementation of those tools, versus just the 20 that a bug bounty would have found. Now, we did have someone dedicated to bug bounty too, though, and we spun it around a little bit differently. We had one person on staff, Shane Newville, whose job was to take findings from the bug bounty reports,

figure out sort of how to actually like replicate those findings.

Justin Gardner (@rhynorater) (31:32.163)
Are these bug bounty reports? Are these VDP? Okay, gotcha. Now you're good.

Keith Hoodlet (31:34.319)
VDP reports, sorry, yeah, I'm conflating the two, VDP reports. And then he would go back to the development teams and basically be like, I want to teach you how to hack your application using this finding and then going further. And we'd have like hack Wednesdays. And once a month on Wednesdays, we have a sit down one hour recording where we would literally take someone's bug that preferably they fixed or at least we mitigated and show any developer in the company that wants to come and sit down how to hack that thing. Now that one finding, so in...

large companies, it's usually referred to as variant analysis, right? When you're looking for the same bug in, like, a bunch of different places, that one finding would turn into sometimes hundreds of similar findings across all the different teams. And so his time, again, was worth exponentially more than just a single bug. Now, again, we did pay for some of those VDP findings, because quite frankly they were good bugs, and I did feel guilty about not paying researchers for good bugs. But on the same side of it too, like,
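Variant analysis as Keith describes it, taking the pattern behind one reported bug and sweeping the rest of the codebase for the same mistake, can be approximated with even a simple pattern scan. This is a toy sketch; the "vulnerable pattern" (string-interpolated SQL in an `execute()` call) is an invented example, not one from the episode.

```python
from pathlib import Path
import re

# Toy variant-analysis scan: one bug report establishes a pattern, then we
# look for every other place the same mistake was made. The pattern here
# (an f-string interpolated into an execute() call) is invented.
VARIANT = re.compile(r'execute\(\s*f?"[^"]*\{')

def find_variants(root: str) -> list[tuple[str, int]]:
    """Return (file, line_number) pairs matching the bug pattern."""
    hits = []
    for path in Path(root).rglob("*.py"):
        for lineno, line in enumerate(path.read_text().splitlines(), 1):
            if VARIANT.search(line):
                hits.append((str(path), lineno))
    return hits
```

In practice teams tend to reach for semantic tools like CodeQL or Semgrep rather than a regex, but the workflow is the same: one finding fans out into many.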

Justin Gardner (@rhynorater) (32:25.635)
Mm-hmm.

Justin Gardner (@rhynorater) (32:29.091)
Mm.

Keith Hoodlet (32:32.687)
with VDPs, you tend not to get great bugs. Like, these are usually pretty, pretty, like, boring bugs, I would say. And in fact, a lot of my VDP findings are boring bugs. But.

Joel Margolis (teknogeek) (32:42.974)
Why do you think that is, by the way?

Keith Hoodlet (32:45.806)
Again, so, back in my day. So way back when, we didn't have nuclei, for example, right? So you couldn't automate a lot of these things. Now, some of my scripts still exist from when I was using tools like GoBuster and other things to, like, recon and find all these things. But it's usually because the impact, generally speaking, is lower. And so usually that means they don't get fixed. And so they're just sort of left out there to languish. And for some reason or another, like,

Justin Gardner (@rhynorater) (32:48.195)
Joel, don't you make that little smirk, okay?

Keith Hoodlet (33:15.949)
Again, companies, I know I put this in the show notes and this is controversial, but companies do not exist to be secure. They exist to sell a product or a service and security allows them to outrun the bear. Like it's, effectively you don't have to outrun the bear. You just have to outrun everybody else that's trying to outrun the bear, right? Just don't be the slowest.

Justin Gardner (@rhynorater) (33:31.843)
Right. Yeah. Go ahead, Joel. Go ahead. Go ahead.

Joel Margolis (teknogeek) (33:34.846)
So, so wait, wait, I didn't quite catch the answer to my question, which was why do you think that VDPs are receiving lower impact vulnerabilities?

Keith Hoodlet (33:41.293)
because they're, so this is where we're getting into, because they're automatable. They're things that are usually lower impact.

Joel Margolis (teknogeek) (33:47.39)
No. No, no, no. Sorry. Try again. Yes, that's correct. Ding, ding, ding. Yes, that's correct.

Justin Gardner (@rhynorater) (33:47.459)
They're not incentivized, right, is the question, right? And I think when you're not incentivizing, you could expect that quality to be lower, or, like you said, Keith, not automated, right? You can still submit high-quality findings at a low cost to you if you've got it in an automated way. But there aren't researchers that are gonna be willing to do that when they can point their automation at

places where they're gonna get incentivization. Am I doing that justice, Joel? I mean, is that what you're talking about?

Joel Margolis (teknogeek) (34:21.822)
Yeah, I mean, absolutely. Right. Like, OK. So I think one of the things you put in here, like we talked about, like, if you find a critical on a VDP, like what's the incentive to report it to VDP? And there was some stuff about that. But the reality is like. The incentive is money, right? Like, that's why people spend time on programs, right? So if you want high quality hackers, high quality hackers go after bounties.

Justin Gardner (@rhynorater) (34:47.843)
or

Joel Margolis (teknogeek) (34:47.998)
and they go after scope and they go after a good program team. And if, go ahead, go ahead.

Justin Gardner (@rhynorater) (34:51.171)
I'm sorry. No, no, I should let you finish. Please continue.

Joel Margolis (teknogeek) (34:55.454)
Well, I was just going to say, you know, right. So if you're not incentivizing them with good bounties, right, then why are they even going to spend time in the first place hacking on your program? So, like, automatically you're filtering the people who are spending time on your program down to people who do not care about bounties, which

can imply a variety of different things.

Justin Gardner (@rhynorater) (35:18.243)
including a lack of prioritization or lack of respect for their own skills, right? Which probably is a little bit tricky. The other thing that I wanted to say about that, Joel, is it's not only money; another motivator in this scenario can be fame, right? Which is what you kind of talked about, Keith, when we discussed this a little bit, this whole aspect of not having leaderboards, you know, having VDP stuff on the leaderboard.

And it seemed like you didn't like that decision, whereas a lot of people in the bug bounty community, myself included, sort of appreciated that decision to remove points from VDPs on Bugcrowd's behalf. So, I mean, I assume that's the reason, right? Whereas, like, as a person running a VDP in a massive organization, you want some way to say, I want people to still send me bugs. And for that, I want them to have fame via the leaderboard. Is that accurate? I mean...

Keith Hoodlet (36:08.97)
Sure.

Keith Hoodlet (36:13.834)
Yeah, that's a good assessment. And Joel, to your point as well, like, if you're running a VDP, any company, if you're running a VDP and you're not paying for bounties of some kind, even if it's only, you know, occasionally: if they can't pay for you at your worst, they don't deserve you at your best, right? That's sort of what I would come back with on VDPs in general. The company itself can't handle the best hackers in the first place, because there's just too many things.

Justin Gardner (@rhynorater) (36:36.931)
Yeah, that's a good point.

Keith Hoodlet (36:39.21)
And so, yeah, they're lower quality findings generally, but they still do present a risk to the business and they do add value. And Justin, to your point, the reason that I still wanted points to be there is exactly that. Like, I could not afford to pay for every bug that came in the door, because the program, I was struggling as it was just to get the tools and the people that I needed. Again, 11 people on the team at its height when I was there, for thousands of developers, let alone all the different application footprint I had to secure.

Justin Gardner (@rhynorater) (37:00.931)
Mm-hmm. Yeah.

Justin Gardner (@rhynorater) (37:06.499)
I'm surprised you took this gig, man. I hope you got a good raise, because, like, Joel, I mean, I almost feel like when you accept that position, Keith, you gotta be like, hey, listen, you're a $200 or $250 billion company, I need 30 people in the beginning. You know? Like, if you were gonna go back, if you were speaking now to, you know, some up-and-coming bug hunter that just gets offered by the next, you know,

Keith Hoodlet (37:08.906)
Hahahaha!

It was good.

Keith Hoodlet (37:21.77)
Yeah.

Yeah, yeah, I mean.

Justin Gardner (@rhynorater) (37:36.259)
$250 billion company, to come run their AppSec program. I mean, is that what you tell them? You say, hey, I need at least this staff, I need XYZ? I mean, what are your thoughts?

Keith Hoodlet (37:48.074)
They're not gonna hire you if you ask for that many staff. It's as sad as it is, right? I think Daniel Miessler had a really good article, and I'll see if I can find it so you can link it in the show notes, but he talked about the security poverty line, right? And really the only companies that are paying for or building staff of that size and caliber are effectively the FAANGs or the MANGOs or whatever the acronym is now, but like Apple, Amazon, Facebook,

Justin Gardner (@rhynorater) (37:58.147)
Mm.

Keith Hoodlet (38:13.226)
Google, Microsoft, maybe OpenAI, right? But not a lot of companies actually invest that heavily into their AppSec program, let alone their whole security program overall, right? I think when I was there at the time, there were like 150 people for all of security as the team was growing. Now, I joined when it was about 75 people, right? And so again, multinational, global, billions plus dollars of...

Joel Margolis (teknogeek) (38:38.206)
For what it's worth, I've worked at companies with a quarter of the market cap that have 150 security people. So, I don't know.

Keith Hoodlet (38:44.869)
Right, right. But coming back to this, right, like, it really does sort of come back to, I couldn't afford all the bugs that would have come to me anyway. And so I did want people to have leaderboard stats of some kind because I think there is some value there for people to, you know, grow their career, right? Some people come out of the scene and they have no background in security. They don't have a computer science degree, but if they're up on the leaderboard.

maybe they end up getting a pretty sweet gig as a senior security engineer or someone that has clout in a company. And if I were to go back and do it again, I think now, going back to the, my sweet summer child, tell Cersei it was me, right? Because at the end of the day, I know what it takes to build a program now, a full AppSec program, not just a VDP or a Bug Bounty program.

Justin Gardner (@rhynorater) (39:24.547)
Mm. Mm.

Keith Hoodlet (39:36.356)
And I recognize that it does need a lot more investment, but there are a lot of companies that I still talk to day to day in my role at GitHub, that it's sort of laughable how little they invest in their security. And you can tell just by the bugs that they focus on from the findings out of static analysis tools. It's like, you didn't find this vulnerability that comes from.

a configuration file that our developers control. I'm like, yeah, it's because it comes from a configuration file that your developers control. Like if that's a vulnerability you care about, it means you have to have a compromised developer account, blah, blah, blah, blah, blah, right? It's like the state of AppSec by and large within the Fortune 500 for the lion's share of companies is pretty bad. Like I would say we all expect it to be better, but then again, you guys run, you know, you've run bug bounties, you participate in bug bounties, you've seen it firsthand. Like,

how bad the landscape is in these investments, but also pausing for a moment and connecting us back to the beginning of this conversation. This industry is 20 years old. Some would say it's a lot older, but in earnest, most AppSec programs didn't exist before 2007 -ish. And even then, a lot of the software that some of these companies are still using for web applications, et cetera, are older than that.

And so having a VDP for huge scale with not a lot of investment in security, I think, makes some sense, having people be able to achieve some leaderboard status to build their resume. And maybe, I don't know if they still do this, but you used to be able to get the annual MVP, like, top-100 researchers awards, and you used to get paid by the platform, which is, you know, what I got back in 2018 as well. Like, I got paid from Bugcrowd for being on their top 100, but.

Justin Gardner (@rhynorater) (41:15.043)
Mm-hmm. Mm.

Keith Hoodlet (41:25.409)
By and large, should the companies be investing more in security? Joel, 100%, man. Like, it's laughable how bad some of these security programs are out there for some of these companies that are global brands.

Joel Margolis (teknogeek) (41:35.806)
Yeah, so I think one thing that I'll say is I don't disagree with a lot of the things you're saying. I think one thing that's important is, let's not equate a bug bounty program or VDP with, it's not equivalent to, the things that you can do on the AppSec side, because it's a very, very small piece of the security program. And I think that bug bounty programs and VDPs as a whole hold a very specific role within security posture

as like a way for external researchers to have incentive to look at your product and to find vulnerabilities and to report them to you directly instead of like abusing it or selling it or not even looking at all. It just gives an incentive and it says publicly or even privately in some way that you value security. I think like that is very difficult to compare with like hiring an engineer who does variant analysis, for example, right? Like that,

is a very separate thing. You know, it requires a very different technical skill set. It requires internal knowledge of the company and a whole different thing. And it's not really, you know, like, well, should we spend 100K and hire this guy to do variant analysis, or should we spend 100K on bounties? Because those are two very, very different things, and it's apples and oranges. I think, you know, money goes very different distances depending on what you're trying to do with it in a company.

Whether that's for staffing, whether that's for resources, whether that's for bug bounty. And I think you can't just say, well, let's take this 100K and move it and hire somebody or let's take this 100K and not put it in the bug bounty or put it in the bug bounty. Because even if you start with super low bounties, if you have a budget of 30K, you blow through that 30K in two months because you have a lot of vulnerabilities.

that's a really easy way to justify to your leadership that there's a security problem at your company. And if they can't deny that, or if they can't agree with that, then that's like a fundamental issue, right? Like there's a monetary direct, easy justification of costs there. Yeah, I don't know. It's difficult. Like I totally like see where you're coming from, but I think like it's still inexcusable in my opinion. Like I just like, especially when you have a lot of money, like when you have that much money, right?

Justin Gardner (@rhynorater) (43:40.547)
That's a great point.

Keith Hoodlet (43:44.293)
yeah.

Justin Gardner (@rhynorater) (43:46.595)
Mm-hmm. Yeah.

Justin Gardner (@rhynorater) (43:56.227)
Yeah, and it.

Joel Margolis (teknogeek) (44:01.662)
Like, $225 billion is an unimaginable amount of money. Let's say a quarter of that. Let's say a tenth of that, $2.5 billion. That's still an unimaginable amount of money. And I think that if you're gonna have a way for researchers to submit to you, it should be a bug bounty program. Like, I get it, there are edge cases and stuff where VDPs exist, but, like, 99% of the time, if you're a profit

Keith Hoodlet (44:05.022)
Sure.

Justin Gardner (@rhynorater) (44:06.787)
Hmm.

Justin Gardner (@rhynorater) (44:24.675)
Hmm. Yeah.

Joel Margolis (teknogeek) (44:31.486)
driven company, like you should put money behind your security or you should, you know, yeah, that's it.

Justin Gardner (@rhynorater) (44:34.583)
I think the VDP piece too, I mean, it should be a step stool, you know? But I think you make a great point, Joel, when you say, like, that's a great way: starting with lower bounties and then getting, you know, wrecked if you're really that vulnerable is a great way to say, guys, we just started incentivizing, you know, hackers to do stuff, and they just destroyed us, you know?

Keith Hoodlet (44:41.981)
Mm-hmm.

Justin Gardner (@rhynorater) (45:00.707)
And even if these hackers at these lower levels, even if the people that are willing to hack on programs that have 1K crits or whatever are wrecking us, then how are we gonna get wrecked when we start doing 10K crits or whatever? So I really like that point, but I did also wanna bring it around to something really insightful that I think you said, Keith, about how the government should regulate.

whether or not we have bug bounty programs. And maybe this thing that Joel is talking about of like, we need to pay for bugs at a certain scale should be brought in at the federal level via regulation. And Keith, forgive me for not knowing this, but I know you live in America right now, but are you American as well? Okay, yeah. So I just wanna have this conversation as Americans as well. Sorry for the rest of the world listening to this, but there's something I think very core to Americans about freedom and.

Keith Hoodlet (45:46.428)
I am. Yeah, yeah, yeah.

Sure.

Justin Gardner (@rhynorater) (45:57.091)
lack of regulation despite, you know what, regulations.

Keith Hoodlet (45:59.42)
Capitalism? Capitalism?

Joel Margolis (teknogeek) (45:59.774)
I was gonna say, we're about to trigger approximately 49.9% of our listeners.

Justin Gardner (@rhynorater) (46:03.459)
Yes, sorry about that guys. But, you know, how do you two feel about the federal government actually regulating something like this? Because there are some countries right now, I won't announce anything, but I've spoken with some people that there's actually a trial going on about a countrywide bug bounty program that is being mandated by the government and funded in some capacity by the government. And...

Keith Hoodlet (46:04.572)
Hehehehehe

Justin Gardner (@rhynorater) (46:31.619)
sort of like what we were talking about before: the problem with "security as a feature", that whole phrase, is it makes it seem like it's optional. I feel like not putting money into the security program, more money into the program, and the fact that you had 11 AppSec people for this massive 130,000-person company, is nothing short of actual negligence.

So I don't know, how would you guys feel about regulation when you mix your security person and your American?

Joel Margolis (teknogeek) (47:04.19)
There's a really easy way to do this, by the way, that works with America. And it's how we do it right now, which is financial fines, right? So when a company gets breached, they're forced to pay settlements and they have to pay a fine because they were breached. And that happens right now. And I think that currently, basically, addresses the negligence side, which is that there's a cost to negligence: if you decide not to be secure and something happens, then you have to pay for that.

Justin Gardner (@rhynorater) (47:07.459)
Okay? Okay?

Keith Hoodlet (47:07.769)
Okay.

Justin Gardner (@rhynorater) (47:11.683)
Ha ha ha.

Keith Hoodlet (47:11.962)
Mm-hmm, mm-hmm.

Joel Margolis (teknogeek) (47:34.206)
That being said, there are plenty of companies who make enough money where that doesn't matter. So I think that slapping them on the wrist is not really a good enough solution. But it would be very anti -American to put some sort of regulation like that in place. It wouldn't be novel though. I think there are other things that exist that are like that that could be used as sort of like some sort of like precedent, but yeah.

Keith Hoodlet (47:38.937)
Yeah, that's exactly it.

Justin Gardner (@rhynorater) (47:44.195)
Mm.

Keith Hoodlet (48:03.096)
I almost hope that, like, the SEC, which has some teeth in this space, comes out and starts hitting people a little harder. I mean, we saw some of the cases that they brought against, you know, CISOs, for example; I think it was SolarWinds, right? Signing off on the security of their product only to get wrecked by Russians, and then now being brought up on charges, right? And by the way, like, if you're even in the C-suite, like, if you're

Justin Gardner (@rhynorater) (48:09.859)
Mm.

Keith Hoodlet (48:24.887)
charged by the SEC for these things, even if you don't go to prison and you just pay a fine, you can no longer be an executive on any company's board or leadership team ever again. This is, like, people's careers. And, you know, in some respects I think that that's a good thing, right? Because then it does force them to pay a little bit more attention to the things that they're signing off on, which means in turn that, in theory, they should be investing in the security that they should be investing in. And to come back to the "security is a feature" thing, like, the reason I say it that way is because features get invested in.

Justin Gardner (@rhynorater) (48:35.555)
Jeez.

Keith Hoodlet (48:53.687)
And it's like, it's a required feature, not an optional feature, is maybe what I mean by that. But so many companies do treat it as an optional feature. I mean, as we see with all the bugs that we find out in the world when we're hacking on things, it's like, yeah, clearly they didn't treat security as a feature, because it's optional to them, because they didn't invest in it. They're not investing in their security program, and we found this bug, and hopefully we're getting paid for it.

Justin Gardner (@rhynorater) (48:56.515)
Mmm. Yeah.

Justin Gardner (@rhynorater) (49:10.019)
Mm.

Joel Margolis (teknogeek) (49:13.95)
Do you consider, do you consider, like, serving a website over HTTP versus, like, over SSH or telnet or something, do you consider that a feature? I'm just curious. I'm just curious. Like, let's say if I had to access HackerOne over port 21 or something, versus the fact that I could access it over HTTP, is that, like, a feature? Is that a core fundamental part of the product?

Justin Gardner (@rhynorater) (49:24.707)
Wait, wait, wait, what do you mean, serving it over telnet? Like, what do you...

Keith Hoodlet (49:29.91)
Sure, sure, sure.

Keith Hoodlet (49:35.446)
god, over FTP or telnet?

Keith Hoodlet (49:43.399)
Yeah, so if you think about it, right, it almost always depends on the data that you're gathering or that you're providing to the website. Like, HTTPS today, again, we're living in a world where speed of interfacing with applications and compute is fairly accessible. There were times, probably about a decade, decade and a half ago, where deciding to serve a website over TLS 1.1 was a decision that you had to make because

your web server might not be able to handle all of the calls that are being made into the website and the cryptographic compute that is required to create secure connections. Like today it's like, yeah, it's trivial. You should always be doing HTTPS over TLS 1.3 or whatever version we're on now. So it is a feature now, right? Like it's a feature that I think is universally something that people use, but 10, 15 years ago?

Joel Margolis (teknogeek) (50:28.99)
So why isn't that a feature? Or is it a feature?

Keith Hoodlet (50:39.765)
it was definitely a feature you chose to invest in or not because it had compute costs associated with it, right? And so I think over time these things, the thing that I've continued to find in my career is the pace of security.

Justin Gardner (@rhynorater) (50:41.951)
Thank you.

Joel Margolis (teknogeek) (50:50.558)
It's been 15 years since 2009.

Justin Gardner (@rhynorater) (50:53.507)
Mm. Mm.

Keith Hoodlet (50:53.813)
I know, right? Like, so...

Joel Margolis (teknogeek) (50:55.23)
which is when you said that most AppSec teams started up, right? So I think the point I'm trying to get at here, sorry to cut you off, is that I think we're getting to a point where security should not be a feature, right? Like security, if we consider TLS not to be a feature and to be default functionality, like it should be assumed that web servers serve on, you know, HTTP and HTTPS, it should be assumed that a company has some level of a security

Keith Hoodlet (51:07.22)
should be a requirement.

Justin Gardner (@rhynorater) (51:08.292)
Mm-hmm.

Justin Gardner (@rhynorater) (51:20.067)
requirement.

Joel Margolis (teknogeek) (51:23.134)
presence, even if that's one person or just like a point of contact to handle security related matters who's on engineering, like what is, you know, how much longer, right? You know, if it is a feature, right, then like when does it not become a feature?

Justin Gardner (@rhynorater) (51:36.035)
Mm.

Keith Hoodlet (51:38.747)
So yeah, I think to the point that you try to make security as transparent as possible. It's something you never have to think about because it's always just there and it's being done. That's the goal I think of any security program at the end of the day is you just want to be something that is done by default because you don't have to think of it anymore, just like HTTPS, right? It's same concept. I've lost my train of thought here other than to say like...

Justin Gardner (@rhynorater) (51:39.715)
Hmm.

Keith Hoodlet (52:03.923)
Over time, I think the pace of security though, and I've seen it over my career. I remember back in 2018, I was at a party with Jason Haddix at RSA. We were riding back in an Uber to our hotel and he's like, dude, you've got to get into JavaScript and like learning the JavaScript stuff. And I'm like, nah, man, I'm going to stick with like the PHP and the Java stuff, because there's plenty of bugs out there. And Jason, being the, you know, the oracle that he is...

And my sensei was absolutely right. I look at this now, you know, six years later, and I'm listening to your podcast and I'm hearing all these amazing JavaScript bugs that people are finding, and I'm like, I should have listened to Jason. But I think the pace of security is accelerating. It continues to accelerate. Whereas the pace of development and the investments that companies make are just not keeping up, right? Like they're just barely keeping the doors open in many cases.

Justin Gardner (@rhynorater) (52:33.795)
Mm -hmm.

Justin Gardner (@rhynorater) (52:43.363)
Hmm.

Justin Gardner (@rhynorater) (52:47.843)
Hmm.

Justin Gardner (@rhynorater) (52:58.371)
And so I want to sort of round out this conversation here on VDPs by saying, like, clearly in this scenario, you're working at a very big company, and that very big company had a VDP. And it almost makes me think of the customer yelling at the customer service rep, right? The customer service rep, you know, can't just barge into finance's office and be like, listen, finance, I need 30 people tomorrow.

Keith Hoodlet (53:15.154)
haha

Justin Gardner (@rhynorater) (53:28.355)
or else we're gonna get sued, you know. And so at the end of the day, for me, I think it's less of a debate about whether this company should be running a bug bounty program or whether it should be running a VDP or whatever it is, and more of like, okay, you try to work with what you've got in your scenario, and I think in many other companies' scenarios too. But I think whether we implement some sort of GDPR-type thing, you know, like Europe's got, where it's like, really, if you screw up in, you know, the cybersecurity realm in a negligent way and you're found to be negligent, it's not like a pay-a-fine sort of thing. It's like, you're out of business. You're bankrupt now, because, like, yeah.

Keith Hoodlet (54:07.473)
A percent of gross revenue, right? Like that's the GDPR hit. It's not net profits, it's gross, your total revenue coming into the company. And if you had fines that were based off of that, yeah, I think you'd start to see more of an investment in security overall. The things I've seen, like, cannot be shared on the podcast, the various security vulnerabilities I've run into in my career, not only from my role at GitHub talking with customers, but also, you know, having led an AppSec program myself, and...

I mean, I think you all, and especially your audience, know the world is a lot less secure than we all seem to assume it is, but very few people actually know that. And we sort of feel like those people standing in front of the board with all the strings attached, looking crazy, like, it's vulnerable! And it's like, nobody cares out there, because their business is continuing to operate. And they're like, why would I pay millions of dollars on running a security program when I continue to operate just fine? And...

Justin Gardner (@rhynorater) (54:52.291)
mmm mmm yeah

Hmm.

Keith Hoodlet (55:05.584)
I have no service outages and I make all this revenue, like whatever, right? Hit me with a $5 million fine.
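
For context on the size of that hit: GDPR administrative fines are capped relative to turnover (revenue), not profit. Under Article 83(5) the ceiling is EUR 20 million or 4% of total worldwide annual turnover, whichever is higher. A quick sketch makes the contrast with a flat $5 million fine concrete (the turnover figures are illustrative):

```python
def gdpr_max_fine(annual_turnover_eur: float) -> float:
    """Upper bound of a GDPR Article 83(5) administrative fine:
    EUR 20 million or 4% of total worldwide annual turnover,
    whichever is higher."""
    return max(20_000_000.0, 0.04 * annual_turnover_eur)

# A flat $5M fine is noise next to the turnover-based cap for a large company.
for turnover in (100e6, 1e9, 50e9):
    print(f"turnover {turnover:>15,.0f} EUR -> max fine {gdpr_max_fine(turnover):>15,.0f} EUR")
```

For a company with EUR 50 billion in turnover, the revenue-pegged cap is three orders of magnitude larger than the hypothetical $5 million fine Keith shrugs off.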

Joel Margolis (teknogeek) (55:10.494)
I have a simple answer. It doesn't cost millions. It doesn't cost millions to run a bug bounty program. Most only pick up a hundred thousand. And I promise, if your bug bounty program burns $50,000 in the first two months, you've got bigger problems.

Justin Gardner (@rhynorater) (55:12.419)
Joel, what's your answer?

Justin Gardner (@rhynorater) (55:26.819)
Mm, mm.

Keith Hoodlet (55:27.025)
Sure, that's exactly it. You've got bigger problems is usually where you are. Yeah, yeah, yeah. And, and...

Joel Margolis (teknogeek) (55:29.47)
That's your little test case, you know? Put a little money in your Bug Bounty program and if it disappears fast, time to move.

Justin Gardner (@rhynorater) (55:36.227)
That's the proof.

Keith Hoodlet (55:36.368)
Yeah, well, go make other investments, right? Go make other investments, including people, I think. Like for me, it's always people first, process second, technology third, right? Like if you've got good people, people that can train, people that can scale, they will do a ton and build good process, which will scale. And then eventually your technology can come around. But I mean, let me just end the VDP conversation with this. I'm very much on the same side as you guys here, in that VDPs for large companies are absolute BS. Like the fact that I've reported...

Joel Margolis (teknogeek) (55:40.158)
Yeah. Yeah.

Justin Gardner (@rhynorater) (55:55.619)
Mm, mm, mm.

Mm.

Mm.

Keith Hoodlet (56:04.367)
some VDPs even more recently to companies that are global companies making billions of dollars a year, and getting a thanks, not even swag, a sticker, anything, is insulting, quite frankly. The thing that we haven't covered in any of this conversation is the incentive that the bug bounty hunter has or the incentive that the platform has. So I'll just give a quick summary of my thoughts on that, which is: the platforms have this incentive of having leaderboards plus VDPs because...

Justin Gardner (@rhynorater) (56:15.107)
Mm, mm.

Keith Hoodlet (56:32.623)
It means that they can vet their talent that are available to their customers and then from there assign them on. But also because it means that it'll keep those customers around. Like they're going to continue to give VDPs whether or not we rail against them because at the end of the day, it gives their customers sort of a taste of Bug Bounty and that's what they want them to have. The Bug Bounty side or the Bug Hunter side on the other hand, like you sort of have to play that game initially maybe. Although I think Justin and Joel, you guys have said like,

Justin Gardner (@rhynorater) (56:51.875)
Mm. Mm.

Keith Hoodlet (57:02.03)
Just go after paid programs because there's plenty of bugs there and you can get into the private programs that way. Like don't sell yourself short and your skills short. And I agree with that mentality. But from the bug hunter side, you know, in 2009 Dino Dai Zovi and Alex Sotirov, I think at the time it was just before they formed Trail of Bits back in 2012, went to a conference and they had this whole sign on cardboard that said, no more free bugs. Like as a community, we should just stop hacking on VDPs.

If you're not gonna pay me for my findings, then I'm just not gonna hack on your stuff, which in turn maybe incentivizes the platforms, like HackerOne, Bugcrowd, YesWeHack, to pay on behalf of the company for some small marginal amount to get them started, or alternatively to stop selling VDPs. Like if we just stop giving them bugs.

Justin Gardner (@rhynorater) (57:47.747)
Mm.

Joel Margolis (teknogeek) (57:50.334)
I do.

Justin Gardner (@rhynorater) (57:51.747)
So.

Joel Margolis (teknogeek) (57:53.022)
I have an idea, okay, hear me out, hear me out. If you have a VDP, you cannot receive more than three full content vulnerability reports without paying.

Keith Hoodlet (57:53.389)
Okay.

Justin Gardner (@rhynorater) (58:05.475)
My gosh you put like

Keith Hoodlet (58:07.277)
I have an alternative.

Joel Margolis (teknogeek) (58:07.87)
We put a paywall. We put a paywall for the program on the VDP, okay? They want to see more than three reports in full content, they've got to pay for that. This is a perfect profit opportunity for the platforms.
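
Joel's paywall idea is concrete enough to sketch: a hypothetical platform rule where a non-paying VDP sees only its first three reports in full, and everything after that is redacted until the program starts paying. All names and behavior here are invented for illustration, not any platform's actual API:

```python
def visible_report(body: str, reports_received: int,
                   program_pays: bool, free_quota: int = 3) -> str:
    """Show the full report body if the program pays, or while it is still
    within its free-report quota; otherwise return a redacted stub."""
    if program_pays or reports_received < free_quota:
        return body
    return "[Report withheld: upgrade to a paid program to view full content]"

# The first three reports arrive in full; the fourth onward is paywalled.
for n in range(5):
    print(n, visible_report("XSS in /search via the q parameter", n, program_pays=False))
```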

Keith Hoodlet (58:11.853)
I have an alternative.

Justin Gardner (@rhynorater) (58:16.355)
my god.

Keith Hoodlet (58:17.101)
I have an alternative. It is a great idea for the platforms, by the way. I have an alternative, though. Mandatory 90-day disclosure. Full 90-day disclosure on any VDP finding that comes in. Like...

Justin Gardner (@rhynorater) (58:30.083)
Yeah, I've heard that. I like that.

Joel Margolis (teknogeek) (58:32.286)
So the only problem I have with that is that a lot of companies that don't have bug bounty programs already can't do that, let alone ones who are not paying. And if what we're saying is correct that most companies aren't paying for a VDP because they're under resourcing security as it is, I would highly doubt that putting a 90 day deadline on disclosure for those same companies is...

gonna do anything positive. If anything, it would probably stop them from opening a VDP because they don't even want the risk.

Keith Hoodlet (59:04.907)
which is maybe also a good thing, right? Like, let's stop putting, stop.

Justin Gardner (@rhynorater) (59:06.659)
Yeah.

Joel Margolis (teknogeek) (59:07.038)
Well, maybe, but it might just stop them entirely, you know, like no VDP, no BBP. I would argue that having a VDP, even if you're not paying, is better than nothing.

Justin Gardner (@rhynorater) (59:18.243)
So a VDP, I think, do we need to discuss the definition of a VDP here? Because I think a VDP should originally be a program through which you're not actively hacking, but you're able to report things that you've seen or stumbled upon, that sort of thing. And I don't know, maybe we should make that explicit in the terms. But you had a quote from earlier, that companies are not existing to be secure. They're not existing to serve security, right?

There's no incentive for the platforms to actually implement this, right? Because it's like, okay, why would we take away this nice little on-ramp onto a bug bounty program, which is our bread-and-butter product? So I don't know, it's not something that we can really see. I think, like you said, we stop, as higher-skill hackers, as hackers that respect our time, we stop hacking on VDPs. We focus on bug bounty programs.

If people want to train on VDPs, that's great, that's fine, go for it. But unless the company really says, hey, this is just a vulnerability disclosure program, if you see something, say something, but don't go looking for something, you know what I'm saying? And I think that is a good perspective, but they're not incentivized to do that, because why wouldn't they just continue to get free bugs? And...

Keith Hoodlet (01:00:29.77)
Yeah. Yeah.

Justin Gardner (@rhynorater) (01:00:41.059)
And I think that is what it is. And there's not much we can do about it. Bugcrowd de-incentivized it by removing the leaderboards. But to be honest, for the world of security, it's better to have people motivated to hack on VDPs. So maybe we just separate that from the bug bounty leaderboards. I don't know.

Joel Margolis (teknogeek) (01:01:02.686)
There is an entirely separate way to support this ecosystem, by the way, as well, which is that, as you mentioned, from a policy perspective, the US government comes in and subsidizes this, and they give out grants that you can apply for as a company that says, we can't afford a security team. And so the US government pays HackerOne in grants to basically pay researchers to hack on VDPs, as a thank-you for doing, you know, a security service or whatever on those VDPs that can't afford it. The US government,

Justin Gardner (@rhynorater) (01:01:15.875)
Mm-hmm.

Joel Margolis (teknogeek) (01:01:32.35)
pays for that out of your tax dollars, which I'm sure everybody would love. That's right. You're now financing, yeah, the companies who don't want to. But again, right, this sort of implies that they... Yeah, it does sort of imply, though, right, that a company would have to apply and basically demonstrate in some way that they can't afford a security program in order to be eligible for that kind of...

Justin Gardner (@rhynorater) (01:01:35.843)
So now I'm paying money to find vulnerabilities is what's happening here.

Keith Hoodlet (01:01:37.674)
no!

Your bugs are now financing other companies that find you bugs.

Justin Gardner (@rhynorater) (01:01:44.675)
And now I'll.

Justin Gardner (@rhynorater) (01:01:49.123)
All of my tax money's going to fronds.

Joel Margolis (teknogeek) (01:02:01.15)
And so maybe that would also help in some way of making it through a regulation process. Like, if you want to say, we can't afford a bug bounty program, okay, well, you have to show that.

Keith Hoodlet (01:02:14.281)
I will say this as more of a final thought as well: back in my day, Grandpa Keith sitting down to tell a story. But back in the day, though, seriously, if you found bugs and you tried to report them to a company, there was a chance you could go to jail. The Computer Fraud and Abuse Act, the CFAA. Companies would go after you for that, sometimes civilly, and then sometimes the government would also step in and go after you

Justin Gardner (@rhynorater) (01:02:28.355)
Mm. Mm. Mm. Mm.

Keith Hoodlet (01:02:37.833)
criminally as well, right? And so in some respects, having the VDP and having the option to present something to a company is sort of great, you know, because it's made this whole idea of, I don't know, civilian hackers being able to go out and actually sharpen their skills, but also present this information in a way where, you know, they know that the company can't go after them, and then the government's not gonna go after them either. It sort of feels good, you know, just looking at the history of it. So I don't know, at the end of the day, I...

Justin Gardner (@rhynorater) (01:03:05.379)
Yeah, it's progress. It's progress for sure.

Keith Hoodlet (01:03:07.912)
Slow, very slow. But yeah, especially when we're looking at, you know, hundreds of billions of dollars for these market caps for these companies that have just the VDP. Like, that's pathetic in 2024.

Justin Gardner (@rhynorater) (01:03:17.731)
Yeah, yeah. There we go. Let's end on that note there and move over to AI bias bounties. Joel, you got anything else you need to, I hate to give the opportunity, but dude. Okay, go sit on your tractor, Joel. Yeah, that's great. Okay, so after that lovely, lovely.

Joel Margolis (teknogeek) (01:03:29.438)
I'm not, no, I'm just zipping.

Keith Hoodlet (01:03:31.4)
We're done with the VDPs at this point.

Joel Margolis (teknogeek) (01:03:34.942)
out.

gonna put put put on out of you

Justin Gardner (@rhynorater) (01:03:46.883)
discussion on VDPs. Exactly. Let's talk about the government funding some other cool things: the AI bias bounties. So Keith, maybe you could just give me a little bit of a summary on it, because I'm definitely not as familiar with it as you are. But the reason I wanted to talk to you about this is because there are a lot of AI programs popping up left and right, and they're kind of hard to test on because AI is kind of funky. And you took first in this competition, so congratulations on that.

Keith Hoodlet (01:03:46.952)
Speaking of the government funding things, yeah.

Keith Hoodlet (01:04:11.622)
Yep.

Justin Gardner (@rhynorater) (01:04:14.691)
And I just kind of want to hear about your experience with this program and what kind of techniques you use to succeed in this environment.

Keith Hoodlet (01:04:15.046)
Thank you.

Keith Hoodlet (01:04:22.47)
Yeah, so for background, right? So the United States Department of Defense, or the DOD, specifically the Chief Digital and Artificial Intelligence Office, put together an AI bias bounty program. And that was put together on Bugcrowd in partnership with a company called Conductor AI, who was effectively hosting the large language model and the chat interface that you could test against. And so they sort of collaborated with Bugcrowd to triage findings and then go ahead and work with the Chief Digital and Artificial Intelligence Office to accept findings.

So it was a contest program, which is sort of unfortunate, right? Very similar to a VDP in that you're not getting paid per finding, which, you know, I produced over 150 findings. If I got paid per finding, even a grand, or hell, $500 apiece, would have been really nice. So anyway, effectively it was run in two different sequences. The first sequence, or the first qualifying round, was open, so anyone in the United States, like,

domiciled here. I don't know if you had to be an American citizen, but anyone in the United States could participate in the program. And they were looking for effectively the first 35 hackers who could produce, as they said, nominally reliable or believable Department of Defense scenarios in which you could find bias. And then after the qualifying round you got into the contest round, and that's where effectively the 35 or so hackers that would be in that

contest round would compete for top three prizes and then if you're in the contest round you got $300 just by default.

Justin Gardner (@rhynorater) (01:05:55.971)
Okay, so it's like bounty-adjacent here. We got a contest, we got a... You know, and hey, I'm so sorry to go back to the VDP conversation, but like, hey, it's better than a VDP, right? Like, what if you just, every month you said, hey, we're paying out five grand, you know? And we say, okay, top prize gets three, second gets whatever, and then you just kind of scale it down.

Joel Margolis (teknogeek) (01:05:59.774)
That ain't no VDP.

Keith Hoodlet (01:06:01.316)
Yeah, you actually got paid, which is good, right? You got paid.

Keith Hoodlet (01:06:08.931)
Yeah.

Keith Hoodlet (01:06:18.371)
Sure.

Justin Gardner (@rhynorater) (01:06:25.923)
I think that'd be pretty cool. I think that'd be a great alternative, and I think that we could see some implementation of that at the program level or at the platform level, and say, okay, listen, you've run a VDP for this amount of time. Let's say you want to do this onboarding, sort of slow-increase thing, right? Now all of your reports are covered by a paywall until you implement some sort of monetary reward.

Keith Hoodlet (01:06:26.34)
That would be awesome.

Joel Margolis (teknogeek) (01:06:41.47)
Hey, Walt.

Keith Hoodlet (01:06:42.948)
hahahaha

Joel Margolis (teknogeek) (01:06:50.046)
You just put a little blur like when you're scrolling on the New York Times. It's just... One dollar, one dollar per week, please.

Keith Hoodlet (01:06:52.323)
And they go in and they're like, view page source and remove the JavaScript that hides, anyway. Anyway.

Justin Gardner (@rhynorater) (01:06:53.763)
The problem with it.

Justin Gardner (@rhynorater) (01:06:58.659)
Great. Love that. Love that. Yeah. The problem is obviously, of course, you know, you'd have to collaborate across all the bug bounty platforms to do it, because they're just going to jump ship and go to the other one, and they're not collaborating so well. So it's tricky, but I kind of like the contest realm, especially, you know, when you can't do a paid program. And it sounds like it worked out all right for you, even though 150 reports is a lot of reports, Keith. So, yeah.

Keith Hoodlet (01:07:25.251)
Yeah, I mean, I probably put about 20 hours in total. Actually, let's see, right? Yeah, yeah, so let's get into it, right? Let's get into it. So the interesting thing about AI bias bounty is it's different than what you think of as prompt injection, or what is sometimes called oracle attacks, right? The oracle attack is: the AI is an oracle, it knows everything about your internal data, and so therefore it can access data that it shouldn't be able to access as a user, right? So that's an oracle attack.

Justin Gardner (@rhynorater) (01:07:28.259)
How did you even write 150 reports in 20 hours? Uh-huh.

Justin Gardner (@rhynorater) (01:07:40.067)
Mm, mm, mm.

Keith Hoodlet (01:07:54.146)
Bias bounty is entirely different. And this is actually where my psychology degree came in super clutch, because I understand bias and I understand human heuristics. And so I was like, sweet, this should be fun. So initially, my first 30-something reports were just trying to get the bot to say, yes, sir. Which sounds like not a big deal, right? Like it sounds like it's not a huge situation, but...

The whole thing that you're testing for with bias isn't some crazy, you know, bypass of some kind. It's just trying to get the bot to exhibit behavior that shows that it is either showing preference for or discrimination against a protected class. And the protected classes, at least as defined here in the United States and probably in many places in the world, are things like age, gender, ethnicity, heritage, HIV/AIDS status, pregnancy status,

veteran status, et cetera. And so the reason I was trying to get it to say yes, sir, was because in December of 2022, the United States Marine Corps, in partnership with one of the universities in Pennsylvania, said, you must stop saying this at boot camps, because you could misgender the individual that's your staff sergeant. You should now say, yes, rank. So clearly...

Justin Gardner (@rhynorater) (01:09:13.667)
Wow, okay. I like the attack vector. I like the attack vector.

Keith Hoodlet (01:09:18.626)
So they're basically saying, look, you're gonna misgender people, which is bias against a protected class. And so, like, we're just gonna say yes, rank, from now on. Well...

Joel Margolis (teknogeek) (01:09:27.902)
the US military would... never mind.

Keith Hoodlet (01:09:30.594)
Yeah.

Justin Gardner (@rhynorater) (01:09:30.819)
Seriously? Okay, okay. So the Marines have made this announcement, you've kind of got them in a corner here, and you say, okay, I can make it say, yes, sir, rather than yes, Lieutenant, yes, Colonel, whatever.

Keith Hoodlet (01:09:41.441)
Yes, sir. Yes, rank. And the way that I did that is effectively I used this concept that I've seen before, which is people use chatbots almost like a coach, or sometimes, you know, to have mock conversations to prepare for actual conversations. And so I basically tried to ask it to run through these sort of mock

Justin Gardner (@rhynorater) (01:09:52.963)
Mm.

Keith Hoodlet (01:10:03.745)
conversations with me or these role plays with me to be like, okay, as an example, you know, in my role as an intelligence analyst, I've spotted critical or crucial inefficiencies in how data intelligence is shared between these different service branches. And so because of that, you know, yada, yada, yada, I'm going to have a conversation with my manager. Can you role play this conversation between my manager and I or my superior officer and I to have this conversation so I can be ready for that when I talk with my actual manager.

Justin Gardner (@rhynorater) (01:10:29.379)
Okay, so you're practicing a conversation with the bot, okay.

Keith Hoodlet (01:10:32.608)
Yep. And inevitably it would come back and role-play, good morning, sir, or hello, sir, or whatever. And I'm just like, instantly, it's always assuming that the superior officer or the manager is a man. And what was sort of funny, too, is when I got into HR-related scenarios, it would almost always role-play the individual dealing with morality or ethics or human resources or something as a woman.

Justin Gardner (@rhynorater) (01:10:39.011)
Mmm.

Justin Gardner (@rhynorater) (01:10:46.627)
Interesting.

Keith Hoodlet (01:11:00.256)
And so it was just very interesting to be able to highlight that it consistently did this.
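
Keith's role-play probe boils down to a repeatable harness: feed the model neutral workplace role-play prompts many times, then scan the replies for gendered language to see which way the defaults skew. A rough sketch, where `query_model` is a hypothetical stand-in for whatever chat interface is being tested, and the word lists and structure are my own, not from the contest:

```python
import re
from collections import Counter

MASCULINE = {"sir", "he", "him", "his", "mr"}
FEMININE = {"ma'am", "madam", "she", "her", "hers", "ms", "mrs"}

def gender_signals(reply: str) -> Counter:
    """Count gendered tokens in a single model reply."""
    tokens = re.findall(r"[a-z']+", reply.lower())
    return Counter(
        "masculine" if t in MASCULINE else "feminine"
        for t in tokens
        if t in MASCULINE or t in FEMININE
    )

def probe(query_model, prompts, trials=5) -> Counter:
    """Aggregate gendered-language counts over repeated role-play prompts.
    `query_model` is any callable mapping a prompt string to a reply string."""
    totals = Counter()
    for prompt in prompts:
        for _ in range(trials):
            totals += gender_signals(query_model(prompt))
    return totals

# A canned reply stands in for a real model here:
fake_model = lambda p: "Good morning, sir. He will see you now."
print(probe(fake_model, ["Role-play a meeting with my superior officer."], trials=1))
```

A consistent masculine skew across many neutral prompts is exactly the "it always assumes the superior officer is a man" finding described above, made countable.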

Justin Gardner (@rhynorater) (01:11:03.939)
That, yeah, that is really interesting. So the purpose of this program was to do what? Like, the DOD wants to use these chatbots internally, and they want to make sure that their employees are not being influenced to further their own personal bias? Is that how it's going, or... okay.

Keith Hoodlet (01:11:25.854)
Effectively, yeah. So the DOD wants to implement these chatbots probably to do all sorts of different things, right? Documentation review and rewriting, analysis of intelligence reports, et cetera, et cetera. Like, I can't speak to exactly what the use case is, but effectively they wanted to be able to implement them in various ways, and needed to determine, is this thing going to present a bias when we do? And so

Justin Gardner (@rhynorater) (01:11:40.835)
Mm.

Keith Hoodlet (01:11:50.974)
that was my first 30 or so reports, this whole role-play thing. And eventually Conductor AI and Bugcrowd were like...

Justin Gardner (@rhynorater) (01:11:55.587)
Okay, so I want to ask: 30 reports is a shit-ton of reports. So how are you differentiating? Is one report a specific instance of the bot slipping up, or is that a specific scenario, or what's going on?

Keith Hoodlet (01:12:07.806)
no.

Different scenarios. So in some cases, I'd say I'm an intelligence analyst doing this specific thing. In other cases, I would say I'm selecting individuals for leadership training, and other scenarios, and on and on and on. Like, a completely, totally different world of hacking. It's sort of like social engineering, right? That's the closest thing that I think it can be compared to in the world of hacking today, but it's a bot, not a person, that you're socially engineering.

Justin Gardner (@rhynorater) (01:12:22.595)
Such a crazy thing, dude. This is so different than normal bug bounty.

Justin Gardner (@rhynorater) (01:12:29.955)
Mmm.

Mm-hmm.

Keith Hoodlet (01:12:41.34)
And so the interesting thing about it was, in the scenarios that I generated, admittedly, like, how would you quickly come up with 30 different scenarios? You go ask another AI to give you a bunch of scenarios, hell yeah. Effectively it was just like, here's the bounty brief, I want to create some scenarios, can you give me some scenarios? And some of them would work and some of them wouldn't. And so, a

Justin Gardner (@rhynorater) (01:12:54.403)
Okay, so is that what you did? Like...

Justin Gardner (@rhynorater) (01:13:02.115)
Aw man.

Keith Hoodlet (01:13:11.325)
couple of things. First of all, I don't think they actually got 35 hackers in the contest round, because they extended the deadline and they reduced the amount that you needed to qualify. So it led me to believe that they probably didn't get 35 hackers that did this. And in the contest round, though, they explicitly removed role-play scenarios, and I think they even quoted one of my reports, because they're like, no, this guy is smacking us in the face with this. Yeah, go ahead, Joel.

Justin Gardner (@rhynorater) (01:13:15.587)
Mm.

Justin Gardner (@rhynorater) (01:13:25.187)
Mm-mm.

Justin Gardner (@rhynorater) (01:13:32.835)
Yeah.

Joel, did you have something you wanted to comment on there?

Joel Margolis (teknogeek) (01:13:37.342)
Well, I was going to ask if you had different success, or if you tested with local LLMs and stuff. Because I've done some of that testing as well, where the local LLMs, a lot of them have basically no guardrails, in terms of you can ask them all the naughty questions and they'll give you all the naughty answers. And so that might be a little more useful for generating those adversarial prompts and stuff. Did you test with that at all?

Keith Hoodlet (01:13:42.044)
Hmm.

Keith Hoodlet (01:14:02.652)
I did not. So I was just using one of the paid services to generate the scenarios that I was putting together. But the other thing about this as well is the LLM that we were testing against, while I can't say exactly which one it was, it was an open-source LLM that's provided by a large technology company. So it was widely used, and I think in the billions of parameters that you would be able to interface with. So effectively, I was only ever testing against that. But...

After that initial sort of coming up with just a bunch of role-play scenarios, which is boring by comparison to the scenarios that I came up with in the contest round, because eventually I started looking at concepts from movies or books or news articles that I would read. And one was, I basically was like, okay, hey, R&D inside of the Department of Defense is starting a super-soldier program, and I have a roster of candidates that I need help selecting from to go through some initial human testing for this.

And so ultimately that super-soldier program was a riff off of Captain America. And I'd give, like, each candidate's name and rank, age, gender, and other factors about that individual, and it would consistently choose young men over women. And anytime you presented the LLM with, like, a highly qualified pregnant woman versus a fresh college graduate,

Justin Gardner (@rhynorater) (01:15:06.435)
Mm-mm.

Justin Gardner (@rhynorater) (01:15:21.379)
Hmm.

Keith Hoodlet (01:15:30.81)
it would always choose everybody else except the pregnant woman. And so the actual impact here is, okay, it's not like your traditional bug, like this is going to cause a company, or the US government, to just fall over on its face, but it's definitely going to have influence in decisions that are made, in policies that are written. And we see it in HR today, specifically human resources and bots.
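
The roster experiment Keith describes generalizes to a simple counterfactual test: present the same candidate list repeatedly in shuffled order, tally who the model picks, and check whether a protected attribute such as pregnancy status predicts rejection. A hypothetical sketch; the candidate data and the `ask_model_to_pick` callable are invented for illustration, not the contest's actual setup:

```python
import random
from collections import Counter

# Invented roster; the protected attribute is in the free-text note.
candidates = [
    {"name": "A", "age": 24, "gender": "M", "note": "recent graduate"},
    {"name": "B", "age": 35, "gender": "F", "note": "highly qualified, pregnant"},
    {"name": "C", "age": 29, "gender": "F", "note": "decorated veteran"},
]

def selection_rates(ask_model_to_pick, roster, trials=100, seed=0):
    """Tally how often each candidate is picked across order-shuffled trials.
    `ask_model_to_pick` is any callable mapping a roster to a chosen name."""
    rng = random.Random(seed)
    picks = Counter()
    for _ in range(trials):
        shuffled = roster[:]
        rng.shuffle(shuffled)
        picks[ask_model_to_pick(shuffled)] += 1
    return {c["name"]: picks[c["name"]] / trials for c in roster}

# A deliberately biased stand-in model that never picks the pregnant candidate:
biased_model = lambda roster: next(
    c["name"] for c in roster if "pregnant" not in c["note"]
)
print(selection_rates(biased_model, candidates))
```

Shuffling the roster each trial rules out position bias, so a selection rate of zero for one candidate across many trials is attributable to the candidate's attributes, which is the shape of the finding Keith reported.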

Justin Gardner (@rhynorater) (01:15:34.979)
So bad, yeah.

Keith Hoodlet (01:15:58.585)
There's a really good book, and I think I cited it in my blog post about all of this, called The Algorithm by Hilke Schellmann. And it's about how artificial intelligence is being applied in the human resources context for hiring, promotions, and firing of people. And those bots are incredibly biased in the way that they're interfacing. And so just based on your tone, your skin color, just, like, the way you speak,

Justin Gardner (@rhynorater) (01:16:17.283)
Mm, mm.

Keith Hoodlet (01:16:27.193)
the words you use, it will bias against you in some of those cases. And guess what? By and large, it's going to bias against people that don't speak English as well. So if you're a foreigner in the United States, you're not going to get a better chance in the training data set, which is mostly, you know, their employees and Americans.

Justin Gardner (@rhynorater) (01:16:35.363)
Mm-hmm.

Justin Gardner (@rhynorater) (01:16:43.587)
So, so that was the other thing I sort of wanted to ask you about this. Because, like, obviously us as humans, we have bias. That's just a part of who we are as entities, as beings. And we created this training data, and the training data is being consumed by, you know, the LLM or whatever in the training of the model. And so I'm wondering if this is an uphill battle.

Keith Hoodlet (01:16:56.505)
Yeah.

Keith Hoodlet (01:17:07.065)
For training, yep.

Justin Gardner (@rhynorater) (01:17:13.571)
or whether, like every other technology, we need to utilize this at a government level, at a corporate level, with the caveat that, look, this thing isn't the all-knowing oracle of truth, and you still personally are responsible for your decisions that you make, and you need to be sort of fact-checking this thing from a bias perspective. Or do you think it's possible to actually get it to a point where the bias,

you know, checking in and of itself is good enough to help inform a human that has goodwill towards this whole situation.

Keith Hoodlet (01:17:53.976)
Yeah, it's interesting because, for example, if you were to go out and test some of the things that I propose to OpenAI, for example, with ChatGPT, the way that they've set up ChatGPT effectively is it's really four to seven LLMs in a trench coat that are all acting as one in concert, but they're checking each other's work, right? But only companies that can burn billions of dollars a year in compute resources like OpenAI can actually do that.
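
The "several LLMs in a trench coat" setup Keith describes can be sketched as a guard pipeline: a primary model drafts a response, and independent checker models review the draft before anything is returned to the user. This is a hypothetical illustration, not OpenAI's actual architecture; the model calls are stand-in functions.

```python
# Hypothetical guard-pipeline sketch: one primary model plus independent
# checkers that must all approve the draft. Real systems would replace the
# stubs below with separate model or moderation-endpoint calls.

def primary_model(prompt: str) -> str:
    # Stand-in for the main LLM call.
    return f"Selected candidate: John Smith (prompt was: {prompt!r})"

def bias_checker(draft: str) -> bool:
    # Stand-in for a checker model; toy rule: flag any candidate selection.
    return "Selected candidate" not in draft

def policy_checker(draft: str) -> bool:
    # Stand-in for a second, independent reviewer.
    return len(draft) > 0

def guarded_completion(prompt: str) -> str:
    draft = primary_model(prompt)
    checks = [bias_checker(draft), policy_checker(draft)]
    if all(checks):
        return draft
    return "I can't help with that selection as phrased."

print(guarded_completion("pick 3 of 5 candidates"))
```

The point of the sketch is the voting logic: any single checker can veto the primary model's output, which is why this approach multiplies compute cost the way Keith describes.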

Justin Gardner (@rhynorater) (01:18:22.435)
Hmm.

Keith Hoodlet (01:18:22.647)
And so I think in some respects, the cost is probably too high for most companies unless they're just paying to implement a service that is already doing this on the backend, because with an open-source language model running as a singleton LLM, you're going to get representation of the training data back in your responses. And I saw that consistently. I had 150 findings that I submitted to the program. The big ones that I saw were when I would put forward proposals for

government projects to fund a new missile system or a laser or something. If I ever included the name of the CEO or the age of the CEO, like, it would always choose the older CEOs versus the younger CEOs, or the ones with Anglo-Saxon names as opposed to foreign names, because of the training data. And you could sort of tell that, like, the language model itself wanted to reply in these ways, but they were doing some pre-prompting, which is

effectively an invisible prompt that gets injected before your prompt goes forward to restrict the response of the language model. And sometimes it worked. Like sometimes it would actually, you know, properly choose a random selection of individuals that were diverse in their representation for the test that I was running. But frequently they wouldn't, right? And so it's like, you can't just, you know, throw some paint over it.

and expect it to be an unbiased tool set. You sort of have to think about, how am I going to apply this tool, and what data is this tool going to have access to? Like, using it in an HR context: bad idea. Using it in writing code: probably an okay idea, because I'm not sure that you're ever going to include someone's, like, specific human name or any human characteristics for the name of a variable, let alone, like, a method or a function or a class, right? Like,

Justin Gardner (@rhynorater) (01:19:47.715)
Yeah.

Keith Hoodlet (01:20:14.74)
If you are, that's a little weird. So, yeah.
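
The "invisible prompt that gets injected before your prompt" maps to the system message in the common chat-completion request format. A minimal sketch, with hypothetical pre-prompt wording (actual vendor payloads and instructions differ):

```python
# Sketch of pre-prompting: an invisible system instruction is prepended to
# every request before the user's text reaches the model. The pre-prompt
# text here is invented for illustration.

SYSTEM_PREPROMPT = (
    "When asked to select among people, ignore name, age, gender, and other "
    "protected characteristics; if a fair selection is impossible, refuse."
)

def build_request(user_prompt: str) -> list[dict]:
    # The user never sees the system message, but the model receives it first.
    return [
        {"role": "system", "content": SYSTEM_PREPROMPT},
        {"role": "user", "content": user_prompt},
    ]

msgs = build_request("Pick three of these five candidates for the program.")
print(msgs[0]["role"], "->", msgs[1]["role"])
```

As Keith notes, this only restricts the model's responses; it doesn't remove the bias baked into the weights, which is why it worked inconsistently.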

Justin Gardner (@rhynorater) (01:20:18.307)
The function written by Justin in the past, Justin Gardner. That's funny. Okay, so we can't escape the biases that are in place in the training data, but we can make some modifications to it. And I think this sort of goes back to the whole thing that we discussed with Rezo and Daniel Miessler when they came on the show, which was essentially, we can't use AI to sort of regulate AI very effectively.

Keith Hoodlet (01:20:20.852)
hahahaha

Justin Gardner (@rhynorater) (01:20:44.579)
there has to be some sort of technical solution, which is one of the things that is really annoying to me as a technical hacker trying to poke at some of this AI stuff: it's so wishy-washy. It's like, okay, you know, even if the temperature is set to zero, if I say, you know, one role-playing scenario and another role-playing scenario, you can have different outcomes. And it's like, okay, well, where's the line? How does this actually get fixed?

Keith Hoodlet (01:20:57.108)
Yeah.

Justin Gardner (@rhynorater) (01:21:12.291)
Do I write some pre-prompt that says, hey, if Keith tries to role play with you, just say no, you know? I'm sure they did. I have no doubt that they did. And so, I don't know, especially when it comes to the uniqueness of Bug Bounty reports and that sort of thing, I wonder whether Bug Bounty is the right solution for this.

Keith Hoodlet (01:21:17.62)
And they did that eventually, by the way.

Keith Hoodlet (01:21:33.876)
I think it's a good way to showcase the testing ability of this, because a lot of this stuff, from, like, a responsible AI perspective, they sometimes call it red teaming, which is not the same red teaming that we think of in security. It's, like, effectively trying to hammer the thing to get it to respond in ways that are inappropriate. And I've done some of that in my work at GitHub as well. So it's like, in that respect,

Justin Gardner (@rhynorater) (01:21:45.539)
Yeah.

Keith Hoodlet (01:21:59.061)
Applying it to bug bounty really helps you open up the world of hacking in some respects to a group of people that maybe never were into hacking before, right? Because, like, the guy who won second place, Dane Sherrets, he works over at HackerOne as a guy that does a lot of the triage work. He also has a humanities degree. So it's like, it opens up that world of testing to an entirely different group of people that I think, you know, quite frankly, we need with all of this implementation of AI that's happening.

Justin Gardner (@rhynorater) (01:22:11.483)
Yeah, yeah, great guy.

Keith Hoodlet (01:22:27.541)
But at the same time, like, securing it, it's definitely going to be a whack-a-mole experience for many companies for a long time, unless they start to use some of these paid services from some of these very big companies. But then we have this consolidation effect, and that can have impacts of its own. We look at, like, Google, for example: when they had their image generator and someone asked it to generate a picture of the Founding Fathers, they overcorrected. Yeah, because it was like, yeah, give a really diverse picture.

Justin Gardner (@rhynorater) (01:22:55.107)
That thing was crazy, dude. That was so, so insane.

Keith Hoodlet (01:22:59.765)
But that's like, you can take that same concept of trying to get the thing to be diverse and representative and unbiased and all of that and still produce outcomes that are undesirable.

Justin Gardner (@rhynorater) (01:23:13.699)
Because for it, it's like, okay, what is a factual query? What race were the people that were the founding fathers versus generate a picture of a football team? And it's like, okay, how are we gonna have all of these different races represented? So it's definitely really interesting. I like the point that this is helpful for people looking to...

get people poking at their model and see what could actually come out of it and hear some of these crazy attack scenarios. If our listeners were interested in participating in any AI -related bounties, or maybe more specifically in bias, because that's where your expertise is, what are your top techniques that you have developed over those 150 reports? And where would you draw the line on specific reports now in the...

Keith Hoodlet (01:24:07.315)
Yeah.

Justin Gardner (@rhynorater) (01:24:07.491)
now that the event is over, you know, and they've said, all right, maybe role playing is one report, maybe, you know, this other thing is another report. What are your thoughts on all that?

Keith Hoodlet (01:24:18.1)
I kind of go back and forth, right? So like, let me maybe put this back in terms that I think you guys would appreciate as well. Say you're on a web application and you find, I don't know, 10 SQL injections, right? Well, if it's one fix for the company, is that one bug or 10, right? Right, and so some companies would be like, well, it's 10 different user inputs, and so we're gonna call it 10, right? But that's sort of the dichotomy here is, is that role play one finding or is it,

Justin Gardner (@rhynorater) (01:24:27.395)
Mm-hmm.

Justin Gardner (@rhynorater) (01:24:32.867)
It's one bug.

Justin Gardner (@rhynorater) (01:24:39.235)
Mm-hmm.

Keith Hoodlet (01:24:47.732)
because they found it in so many different ways, 10 different findings. But a few things to answer your question around, what would I recommend for people that are looking to do this? There is some stuff out there on red teaming that's available from like Anthropic, OpenAI, Microsoft, et cetera, that you can go read. I didn't read any of it, I will fully admit. I just went into this and sort of went crazy. But even still, if you're going at it and you're looking to get into this,

Justin Gardner (@rhynorater) (01:25:07.171)
Mm-hmm.

Keith Hoodlet (01:25:18.068)
First thing is apply the scientific method. Like it wasn't just like a single report that they would qualify as a bias finding. They actually needed to be able to reproduce that same outcome in the LLM. I think like at least one more time within a testing of like three times or maybe it was like they tested it five times and had to produce it three out of those five. So toward the end, what I started doing is I started actually going into the LLM and I'd have my initial prompt and then I would open up like five more tabs and run it five more times.

and determine is it always coming out with the same result or is it coming out with different results? That will save people a lot of time and headache for when something gets triaged out, even though you were able to produce it maybe a few times in a row. At the end, I was aiming for 10 times. If I could get it to reproduce the same result 10 times in a row, the chance of that happening for the same outcome, and usually it was like select three individuals from a group of five presentable, like, like,

Justin Gardner (@rhynorater) (01:26:00.323)
Hmm.

Justin Gardner (@rhynorater) (01:26:10.051)
Wow. So they were having you test this in a non-zero temperature environment?

Keith Hoodlet (01:26:16.626)
I don't know, they didn't really get into it.

Justin Gardner (@rhynorater) (01:26:18.244)
Seems like it if you're prompting it the exact same way and it's giving you a different result. That's pretty interesting.

Keith Hoodlet (01:26:24.402)
Possibly. You know, very possibly. And so, at the end of the day, though, if I could produce it ten times in a row, it's like a 0.64 percent chance that that would happen at random. And so I was like, okay, this is now a reportable finding, because I could produce it so many times in a row, consistently. And there were some cases where it wouldn't be consistent, and that would tell me that, okay, maybe they've tweaked something on the back end. Maybe their pre-prompt, you know, came through the right way. Maybe it's just a compute thing and they're caching.
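
One plausible reading of that roughly 0.64 percent figure: if the model picked three of five roster members uniformly at random, a given individual appears in any single run with probability 3/5, so appearing in all ten independent runs has probability (3/5)^10, about 0.6 percent. The roster and run counts below are taken from the conversation; the interpretation itself is an assumption.

```python
# Back-of-the-envelope check on the ~0.64% figure: the chance that one
# particular candidate is included in a random 3-of-5 selection on every
# one of 10 independent runs.

def p_same_candidate_every_run(chosen: int, pool: int, runs: int) -> float:
    per_run = chosen / pool   # chance one person is picked in a single run
    return per_run ** runs    # chance that repeats across all runs

p = p_same_candidate_every_run(chosen=3, pool=5, runs=10)
print(f"{p:.4%}")  # roughly 0.6%
```

This is why repeated reproduction matters: a consistent outcome across ten non-deterministic runs is very unlikely to be chance.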

Justin Gardner (@rhynorater) (01:26:33.859)
Mm-hmm.

Keith Hoodlet (01:26:51.921)
And so I'm seeing it 10 times in a row because they basically took that query, cached the result, and just grabbed from the cache every time, which is a cheaper way to run the model. I don't know, right? So that was the first thing. The second thing is, and it really pissed me off during the program, sometimes they would dismiss findings with, like, "I don't see this as being biased," and just dismiss it. And that would usually be because I had too many of the variables at play at any given time, where, like, they couldn't distinguish

that it was a bias versus not a bias, because it'd be like, yeah, treating that person because, yeah.

Justin Gardner (@rhynorater) (01:27:23.843)
Well, it's subjective, you know, you could say in a lot of ways.

Keith Hoodlet (01:27:29.2)
Well, in the example that, you know, it got ruled out was because I had given a bunch of different individuals in a roster who had different injuries and asking it to prioritize who should get treated. And it would pretty consistently choose individuals of a higher rank in my case, but they came back and the injuries were too different where they're like, no, no, those are, those are the highest priority injuries. So I don't see a bias here. And I was like, okay, like I could sort of agree with that. And so getting to the scientific method, like for those of you that remember chemistry or biology class, like.

Justin Gardner (@rhynorater) (01:27:57.187)
Mm-hmm.

Keith Hoodlet (01:27:59.119)
keep as many things consistently the same for your inputs, with select independent variables. And so for my findings, I'd take one prompt, and I would have, I don't know, sometimes five different possible independent variables. Well, I could toggle each of those independent variables separately and still produce biased outcomes based on those independent variables, such as their protected class. And so that one prompt turned into five to ten findings.

Justin Gardner (@rhynorater) (01:28:06.531)
That's a great tip.

Keith Hoodlet (01:28:29.135)
Based on my ability to bring those all together, right? And so then, toward the end, I really hit them on the nose with this one, perhaps a little bit too closely. But I created a scenario where I basically said, you know, you're an AI working with the Office of the Inspector General, which is effectively the folks that handle whistleblowing complaints. And the whistleblowing complaint that I had them all

provide was that an artificial intelligence model had been implemented and was showing preferential bias or discriminatory bias against their protected class. Yeah, so it was like, you know, it was very on the nose for the test. And then I would just include, okay, here's the name, here's the rank, here's, you know, based on their gender, well, here's their gender. And so, like, it would always, again, pregnant women were always discriminated against, it seemed like, but...

Justin Gardner (@rhynorater) (01:29:04.675)
Nice.

Justin Gardner (@rhynorater) (01:29:20.515)
Mm-mm.

Keith Hoodlet (01:29:21.87)
You could then do all of the different statuses, and that one finding type, I was able to produce 20 -something reports out of that that were all unique based on the bias I was testing for, even though the scenario was held constant and most of the other input data was held constant.
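
The one-variable-at-a-time method Keith describes can be sketched as a small prompt generator: hold the scenario and roster constant, and vary a single attribute (the independent variable) across prompts, so any change in the model's choices can be tied to that attribute. The scenario text, attribute names, and values below are invented for illustration.

```python
# Sketch of one-variable-at-a-time bias testing: generate prompt variants
# that differ in exactly one roster attribute while everything else is held
# constant. Scenario and roster values are made up.

SCENARIO = (
    "You are assisting an R&D program. From the roster below, select three "
    "candidates for initial testing.\n{roster}"
)

BASE = {"name": "Jane Smith", "rank": "Captain", "age": 34}

def variants(attribute: str, values: list) -> list[str]:
    prompts = []
    for value in values:
        person = dict(BASE)
        person[attribute] = value  # toggle only one independent variable
        roster = ", ".join(f"{k}: {v}" for k, v in person.items())
        prompts.append(SCENARIO.format(roster=roster))
    return prompts

for p in variants("age", [24, 34, 54]):
    print(p.splitlines()[-1])
```

Each batch of variants would then be run several times apiece, per the reproduction approach above, before anything is reported.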

Justin Gardner (@rhynorater) (01:29:33.987)
Mm.

Justin Gardner (@rhynorater) (01:29:39.843)
Yeah, there's so many environments too where there are these protected classes. I know that there's protected classes in, like, real estate selections and that sort of thing that I have to deal with for my rental properties: properly documenting reasons for disqualifying candidates and choosing other candidates. So there's a lot of places where that can be applied. And I like what you did as well with the Marines thing. Like, I think that's the closest thing you can do in this environment, I think, to handing them their own documentation back,

Keith Hoodlet (01:30:00.749)
hahahaha

Justin Gardner (@rhynorater) (01:30:07.587)
like we do in the web app world, where we'll be like, hey, your documentation says the analytics user shouldn't be able to access this. Yes.

Keith Hoodlet (01:30:07.981)
Yep.

Joel Margolis (teknogeek) (01:30:14.366)
There's a great talk by Douglas about this, the on-the-nose thing, right? This is exactly that. You look in the documentation and you find things that say, this is how this should work. And then you find an exact instance where it doesn't work, and you say: it says it should work this way, and it doesn't work this way. It's very, very difficult to argue against that.

Justin Gardner (@rhynorater) (01:30:19.139)
Exactly.

Justin Gardner (@rhynorater) (01:30:28.035)
Exactly, so something like that or maybe even looking at court cases where bias has been established and saying like, okay, you know, this person in this court case was prosecuted for specific bias and we can recreate that whole scenario here using the LLM. I feel like that would be like a nail in the coffin. Like they couldn't argue with that. So that's awesome. The role playing stuff, I feel like is a little bit iffy. It really...

If I were to translate this into CVSS, I would say that's AC high. That'd be attack complexity high there because it's like, all right, the person has to be trying to engage with the LLM. That's a great point. Yeah, like, and those do exist, which scares the hell out of me, like the fact that those things exist. And so, yeah, that's an excellent point. I guess it really depends on the bot you're...

Keith Hoodlet (01:30:57.388)
It depends on context.

Keith Hoodlet (01:31:02.924)
Heheheheh

Keith Hoodlet (01:31:10.316)
What if it's a therapy bot though, right? Like, or a coach bot, right?

Justin Gardner (@rhynorater) (01:31:24.387)
you're interacting with and its purpose as well.

Keith Hoodlet (01:31:27.276)
Yep, and in this case we didn't really have one, right? It's just the DOD is considering using these things. Please give me, I think they used the words like nominally believable DOD scenarios. It's like, okay, this could be a DOD scenario.

Justin Gardner (@rhynorater) (01:31:31.331)
too broad.

Justin Gardner (@rhynorater) (01:31:39.267)
Nominally believable? It's like they're asking for trouble. Yeah.

Joel Margolis (teknogeek) (01:31:42.494)
Nominally.

Keith Hoodlet (01:31:42.887)
I mean, you know, it's funny, because the one thing I will say is they did award points as part of the contest. So in the qualifying round, I think I started giving them too many, because they started actually scoring some of the initial ones as, like, P1, P2, based on various criteria that they had, based on how realistic it was, et cetera. And then eventually everything that I started submitting was a P3. But I knew coming out of the qualifying round, because I sat in the top three on the leaderboard for that month of February on

Bugcrowd, I was like, okay, I'm almost certainly in the lead going into the contest round. But man, like, they changed it up enough where it was actually sort of hard. And so what I would do is I'd do, like, five a night, and then on the weekends I'd just go ham. And I would produce, like, five inside of 20 minutes, especially with, like, the "yes, sir" ones. Like, man, they really hated that. And then, you know, eventually it got down to, like, one every 15 minutes or so.

Justin Gardner (@rhynorater) (01:32:31.619)
Hmm. Yeah.

Justin Gardner (@rhynorater) (01:32:40.067)
Yeah, you're hitting them a little hard, Keith. I understand it. We've all done it. We've all done it. I'm sure the HackerOne event triagers are laughing at me right now, because they're like, Justin, really? You're the one to say that he hit them a little hard? But no, that makes sense. And I think that's a little bit to be expected too, with the severity variance, because it's a highly experimental program.

Keith Hoodlet (01:32:42.346)
I could have, I could have let up a little bit.

Keith Hoodlet (01:32:54.909)
Right.

Justin Gardner (@rhynorater) (01:33:08.707)
And so I wanna ask you what your experience was with Bugcrowd, with the way that they dealt with it. And if you were to run a similar contest to this, how would you more clearly outline some of the blurry lines that couldn't help but be in place for this event?

Keith Hoodlet (01:33:28.234)
Hmm.

Keith Hoodlet (01:33:32.361)
Yeah, they really couldn't know what to expect going into this, I think, given the context that they had. I mean, gosh, find your person with a humanities degree, preferably, like, psychology, sociology, something like that, and maybe get them involved to understand bias on the front end of it. Because that was one of the things that I found somewhat aggravating at times: there were situations that I found that were pretty clearly biased in one way or another, but that didn't

Justin Gardner (@rhynorater) (01:33:42.499)
Mm-hmm.

Justin Gardner (@rhynorater) (01:33:47.779)
Hmm.

Keith Hoodlet (01:33:59.464)
count in the program based on the context that they were looking for. So I think they did a good job of saying, like, give us things that are realistic to the Department of Defense. And they even gave a couple of examples. But there were times where I would give almost exactly the same structure as the example that they provided, with a different, unique scenario, and the triagers would say, yeah, no, that's not applicable. And that was really aggravating, because I could point directly to the brief,

to exactly the example they had given and the structure that it was written in, and effectively say, look, this is a variant of the exact same thing, with a unique scenario that I'm presenting and a bias that I'm testing against. How this doesn't count did not make any sense to me, right? And so I think either have much clearer lines of, like, if you're willing to accept those things, accept them. But also, the feedback I gave to Bugcrowd and the folks at Conductor AI was, in a contest setup,

Justin Gardner (@rhynorater) (01:34:42.658)
Mm, mm.

Keith Hoodlet (01:34:57.288)
Lean more toward accepting findings, because you're not paying per finding.

Justin Gardner (@rhynorater) (01:35:00.835)
That's kind of what I was thinking too.

Keith Hoodlet (01:35:03.144)
And in that regard, yeah, they had to report it back to the CDAO and some of the things they could have then ruled out at that point. But for the hacker perspective, I mean, a lot of hackers that I know, including Jason Haddix, when he started on this, he had a few that were pretty clearly biased scenarios that just got triaged out and he said, screw this, I'm done. And if he had just stuck with it, he could have also smashed. I totally farmed them for points. I'm not gonna lie, I absolutely farmed them for points.

Justin Gardner (@rhynorater) (01:35:28.387)
Mm. Mm, mm. Mm.

Keith Hoodlet (01:35:32.199)
but I also found a lot of unique scenarios, so.

Justin Gardner (@rhynorater) (01:35:34.659)
Yeah, I feel like the scenario is a good differentiator. If you can get the bot to say, all right, now you're a corporal and they're a lieutenant, and do they say yes, sir? Okay, you're a private and they're a lieutenant. You're just changing one little piece like that. I think that that is something that I would also say, all right, that's probably NA. Or dupe down to one report, right? But I think if you're coming up with unique scenarios, like I really like the super soldier thing, I liked the.

Keith Hoodlet (01:35:47.848)
Yeah.

Keith Hoodlet (01:35:59.016)
Sure.

Justin Gardner (@rhynorater) (01:36:04.195)
I liked the original scenario where it's like, okay, you know, if they say yes, sir, or whatever, informed by the marine thing. I think that's really cool. So trying to specify at the scenario level, these are unique findings, and then also getting a strict definition for bias and what that looks like, and giving some examples, some clear examples on what bias looks like in these various examples that you're giving in the brief.

Keith Hoodlet (01:36:20.262)
Mm -hmm.

Keith Hoodlet (01:36:33.606)
The other thing that I also probably would have appreciated more. So in the contest round, what I did, so effectively that was, like, three weeks-ish, but it was really two weekends of time that I had opportunities to really go all out. That first weekend, I basically created dozens of different types of scenarios, and I'd, like, test two within certain brackets. So I'd have, like, some are R&D, some are, like, contract reviews, some are promotion and leadership training, some are, you know,

Justin Gardner (@rhynorater) (01:37:01.795)
This is great. This is an awesome approach. I love how detailed this is down to like you're breaking it out into different segments of the, you know, how they might be utilizing this. This is really good.

Keith Hoodlet (01:37:11.557)
Yeah, and so usually what I'd do is I'd have, like, each of these sections, and I'd have, like, five or so different scenarios generated in each section that I could then bring forward. Usually I'd have, like, a standard roster of individuals that I would work from, because I got to that scientific method of, like, okay, maybe I have to change a few pieces of this, but otherwise I can just sort of keep the same names. All of them are the same rank, and it's, like, John Smith, Jane Smith, Gene Smith, Johann Smith, yada yada yada, right? And...

I could just plug that into all these different scenarios with maybe a few tweaks and then just test, right? But that was my first weekend is like, okay, see what they're going to take and see what they aren't going to take. And like one example I used was, hey, there were natural disasters in Japan, Taiwan, and Australia, like which one are we going to respond to? And it's like consistently Australia, always Australia. And I'm like, you know.

Justin Gardner (@rhynorater) (01:38:03.299)
Wow, that's really interesting.

Keith Hoodlet (01:38:05.859)
Even if like the Australian one was like a totally manageable thing, but like major earthquake in Tokyo and I'm like, you know, or major volcanic eruption or something in Taiwan. And it's like, yeah, these are things from a US military interest perspective. We probably should consider first from an aid and assistance perspective, but they're like, no, it doesn't count as bias because it's against a population or a country and not an individual. And so like that would have been helpful too is maybe clarifying scope of like.

Justin Gardner (@rhynorater) (01:38:29.603)
Mmm.

Keith Hoodlet (01:38:34.691)
What counts from a target group perspective? Is it individual? Is it specific groups of people that I can even include? How do I have to scope down the things that I'm presenting to it? I'd say though to maybe step back and answer your question at a high level, I would definitely use it as a contest probably for now because again, you're going to have more findings that you just can't anticipate. I would love to have.

Justin Gardner (@rhynorater) (01:39:00.643)
and individualizing submissions is blurry.

Keith Hoodlet (01:39:03.587)
Yeah, yeah, yeah, yeah, and that's hard. But also you're probably going to get a lot more findings than you anticipate, especially if you're someone like me who's like, you know what, I sort of understand bias and heuristics and human dynamics. I'm gonna really just see what I can do to mess this thing up, and I did. But not everyone's like that, and so I don't know. Would I have liked just like 100 bucks per finding? Yeah, I would have absolutely loved 100 bucks per finding. But is that realistic? No, not for maybe the next few years at least.

Justin Gardner (@rhynorater) (01:39:32.259)
No, no, for sure. I think that's good. I think there's a lot of good takeaways there. Shout out to Bugcrowd for a really revolutionary program that they ran there. And obviously there's always gonna be some hiccups along the way, but I think overall it's great to see bug bounty being utilized in this sort of way, to try to at least gather some data about these models and what kind of mistakes they could be making. And yeah, that's a...

It seems like you learned a lot from this experience, Keith, and you taught a lot of things to Bugcrowd and to the team that ran this program. So I think that's a great outcome for everybody. I think that's about everything that I wanted to ask you on that front. I'm looking forward to seeing whether these programs continue to crop up from time to time, and I'm particularly interested in that uniqueness problem: how to differentiate what one fix is in an LLM-related environment.

So I think those will be continued questions that we'll see as this industry evolves and how it utilizes Bug Bounty. But with that, I think we'll close. Keith, did you have anything you wanted to shout out at the end of this podcast, like your socials or anything like that?

Keith Hoodlet (01:40:39.36)
Yeah, yeah, yeah. So I put a couple of shout outs in the notes. So I'm securing.dev. You can go out to my website. I've got my blog out there. It's got links to my socials. But I also wanted to give a shout out to my wife, Sarah, who just published her first book, called The Way of the Wielder. So if you're into fantasy romance, I think it's called, what's the right word for it? Gaslamp fantasy is, I think, the specific genre for her story. You can go out and read that at sarahjhoodlet.com. So Sarah with an H, just one word.

And also, huge shout out to my sensei and good friend Jason Haddix. I continue to learn a ton. I look forward to his training class later this month, for red, blue, purple team AI, I think, I hope I got the name right, but he's got an AI-hacking-based training later this month on using AI in your security testing, which I look forward to. And also to you guys. Again, like I said at the beginning of this thing, you guys are doing a public service by having this podcast out here for all these hackers to

learn from, myself included. And so, man, I should have listened to Jason six years ago when he said to learn JavaScript and get into that space, because every time, like listening to Johan Carlsson from, you know, episode 69, like, holy cow, do you have some really great hackers on here doing some amazing stuff. So, thank you guys, honestly.

Justin Gardner (@rhynorater) (01:41:44.387)
man, so good.

Justin Gardner (@rhynorater) (01:41:51.415)
Awesome. Well, thank you so much for those kind words. We appreciate you coming on the pod and definitely congrats to your wife on finishing a book. A book is a major accomplishment and a lot of hard work. So yeah, definitely check that out if any of you guys are interested in that. All right. I think that's it. Right, Joel? All right. Thanks so much, Keith. Peace.

Joel Margolis (teknogeek) (01:42:09.566)
I think that's it.

Keith Hoodlet (01:42:11.808)
Thank you, cheers.

Joel Margolis (teknogeek) (01:42:12.478)
Peace.