WEBVTT
00:00:00.080 --> 00:00:01.360
These things you actually can't fully stop.
00:00:01.360 --> 00:00:04.160
The reality is you're never gonna fully stop it.
00:00:04.160 --> 00:00:07.919
Like zero fraud losses is not realistic.
00:00:07.919 --> 00:00:14.720
These very large companies, like Capital One, for example, are provisioning hundreds of millions of dollars away on their balance sheets.
00:00:14.720 --> 00:00:15.919
They know it's gonna happen.
00:00:15.919 --> 00:00:23.760
It's just exciting to think about a time where we can be a little bit more proactive with these things.
00:00:23.760 --> 00:00:28.239
Don't think that just because a major attack isn't happening now, it won't happen to you.
00:00:28.239 --> 00:00:42.079
It probably will eventually. Everybody, welcome back to the Risk and Reason podcast.
00:00:42.159 --> 00:00:44.320
Uh I'm Eli, your host from Footprint.
00:00:44.320 --> 00:00:46.880
This episode is brought to you by our friends at Loan Pro.
00:00:46.880 --> 00:00:55.280
And we have an awesome guest today, Zach Trotsky, founding BizOps Hire at Kamun by way of Mercury, by way of Capital One.
00:00:55.280 --> 00:01:01.840
You've worked at some of the most fascinating companies in risk: some of the largest, and now some of the fastest growing.
00:01:01.840 --> 00:01:03.119
Thanks for joining us on the show.
00:01:03.359 --> 00:01:03.840
Yeah, of course.
00:01:03.840 --> 00:01:06.159
Thank you for having me on and excited to have a good discussion.
00:01:06.560 --> 00:01:10.319
Zach, as we were talking about before: how does one end up in this world?
00:01:10.319 --> 00:01:17.200
This doesn't seem like a childhood dream; you know, kids look up at NASA, at NBA players.
00:01:17.200 --> 00:01:20.640
Did you have posters of compliance professionals on your wall?
00:01:20.640 --> 00:01:21.840
You know, what drew you in?
00:01:21.840 --> 00:01:23.280
Yeah, like the Michael Jordan of compliance.
00:01:23.280 --> 00:01:24.959
The Michael Jordan of compliance, exactly.
00:01:25.200 --> 00:01:33.359
No, it's uh I feel like a lot of people might say this, but I think for me it was probably a little bit of an accident the way I fell into this career path.
00:01:33.359 --> 00:01:39.359
It actually goes back to when I first interned at Capital One after my junior year of college.
00:01:39.359 --> 00:01:42.480
Um, it was a rotational corporate strategy program.
00:01:42.480 --> 00:01:46.640
And for the internship, they place you, embed you, within a specific team.
00:01:46.640 --> 00:01:50.239
I was, you know, only just starting to learn about fintech.
00:01:50.239 --> 00:01:55.519
I was not really knowledgeable about the space, definitely not knowledgeable about um risk compliance and fraud.
00:01:55.519 --> 00:02:06.640
And my interviewer when I was interviewing for the internship at Capital One, someone running my case interview, actually, was a VP in fraud, first party fraud specifically.
00:02:06.640 --> 00:02:18.800
And I remember her telling me about some of her stories, like stopping fraud at scale for this like obviously very large Fortune 100 company, and how kind of they were on the forefront of machine learning, artificial intelligence.
00:02:18.800 --> 00:02:20.400
And this is back in 2018.
00:02:20.400 --> 00:02:22.719
And I obviously didn't have a background in this.
00:02:22.719 --> 00:02:24.000
I didn't, you know, know a lot about it.
00:02:24.000 --> 00:02:25.120
And it sounded fascinating.
00:02:25.120 --> 00:02:28.240
I'm like, this kind of sounds like working for the FBI.
00:02:28.240 --> 00:02:28.479
Yeah.
00:02:28.479 --> 00:02:31.759
And I was just kind of enamored by the opportunity.
00:02:31.759 --> 00:02:34.240
So after that, I'm like, okay, I love this.
00:02:34.240 --> 00:02:38.560
And then when I got the offer, I was like, okay, my interviewer worked in fraud.
00:02:38.560 --> 00:02:41.680
Fraud sounds super cool, you know, busting the bad guys.
00:02:41.680 --> 00:02:46.319
And I requested to be a part of that team, and sure enough, they placed me on it.
00:02:46.319 --> 00:02:49.520
And that's kind of where I got my first exposure to it.
00:02:49.520 --> 00:02:51.680
Um, and obviously it was really cool.
00:02:51.680 --> 00:02:59.520
You were working at the forefront of data, at the forefront of machine learning and all these advanced quantitative techniques, and I loved it.
00:02:59.520 --> 00:03:04.319
And then um fast forward, you know, obviously working for Mercury and then my current job.
00:03:04.319 --> 00:03:09.599
Um, it's, you know, I've gotten the opportunity to kind of play a more active role in building those programs from the ground up.
00:03:09.599 --> 00:03:25.120
But yeah, it kind of happened by accident, but I just really fell in love with, you know, the advanced nature of fighting fraud, and how fun it sometimes is to chase the bad guys.
00:03:25.439 --> 00:03:27.439
What did Day One look like at Cap One?
00:03:27.439 --> 00:03:29.599
Uh iconic, very innovative company.
00:03:29.599 --> 00:03:32.560
QED led our Series A; they're the founders of Capital One.
00:03:32.560 --> 00:03:34.800
We've very much heard the origin stories.
00:03:34.800 --> 00:03:35.199
Yeah.
00:03:35.199 --> 00:03:42.000
Were you given a textbook of "this is how you go and learn about first-party fraud," or is it osmosis?
00:03:42.000 --> 00:03:42.960
How did you think about that?
00:03:42.960 --> 00:03:45.039
And how do you think someone should think about that?
00:03:45.360 --> 00:03:45.680
Yeah, yeah.
00:03:45.680 --> 00:03:50.400
I mean, basically, within the team there are so many resources.
00:03:50.400 --> 00:03:58.000
Think about it: Capital One has many, many millions of customers moving billions of dollars every single day.
00:03:58.000 --> 00:04:02.240
The sheer amount of data they have is unlike anything I've ever seen.
00:04:02.240 --> 00:04:05.120
And like I think fraud largely is a data problem.
00:04:05.120 --> 00:04:14.400
And so the team had developed such an expertise on the various MOs, the data, you know, the various techniques.
00:04:14.400 --> 00:04:16.160
There are all these like PowerPoint decks.
00:04:16.160 --> 00:04:22.240
And I think like one of the things that they're really good at is transferring knowledge to people like within teams, like within the company.
00:04:22.240 --> 00:04:32.720
I think part of why Capital One is so good is that it's very talent-dense, but also they're very good at passing information around the company and to people who are growing within it.
00:04:32.720 --> 00:04:47.600
So literally by finding mentors on my team who were a little bit more senior and who had developed all these resources, and also, quite frankly, by just playing with the data, I was able to start to learn about the MOs, start to learn about what the data looks like.
00:04:47.600 --> 00:04:54.319
My first project as an intern actually was to work with this novel geolocation data set that they just had.
00:04:54.319 --> 00:05:08.160
They had just purchased it from some vendor, and it was literally a data set that had, you know, the time zone of a user, the operating system time zone, all this geolocation-based data.
00:05:08.160 --> 00:05:17.439
And like my job was essentially to just like play with it and understand it and try to come to a reasonable conclusion by the end of the project on how this can add business value.
00:05:17.439 --> 00:05:26.399
And that was kind of dipping my toes into that world; data exploration and hypothesis testing was really how I learned about it.
00:05:26.399 --> 00:05:41.600
You can learn about it by talking with people, or by reading resources, but getting in there, understanding it, and eventually seeing fraud happen and starting to pattern match using the data was how I learned about the space most effectively.
00:05:41.600 --> 00:05:42.639
So it's very interesting.
00:05:42.959 --> 00:05:47.040
You bring up this FBI analogy, that you're a detective in a way.
00:05:47.040 --> 00:05:48.240
What does it actually look like?
00:05:48.240 --> 00:05:52.000
So you you bring up this scenario, you buy a geolocation data set.
00:05:52.000 --> 00:05:57.120
I know that was at Capital One, so you maybe weren't the one given the budget to buy that data set.
00:05:57.120 --> 00:06:03.839
How do you think about this cat-and-mouse game? Do you think about buying tools proactively?
00:06:03.839 --> 00:06:08.959
Or do you think, we just got hit by this fraud, it all came from the state of Georgia,
00:06:08.959 --> 00:06:10.800
We need to be better at geofencing.
00:06:10.800 --> 00:06:13.040
How do you think about putting those pieces together?
00:06:13.040 --> 00:06:17.199
Also knowing that once you've caught someone, they're probably gonna come back a different way.
00:06:17.439 --> 00:06:19.120
Yeah, it's really difficult, obviously.
00:06:19.120 --> 00:06:23.279
And I think companies still aren't very good at it.
00:06:23.279 --> 00:06:27.439
And I think this is actually more relevant to some of my experience at smaller companies.
00:06:27.439 --> 00:06:35.360
Um, you know, when you don't have the level of sophistication, the level of scale that uh, you know, obviously a company like Capital One does.
00:06:35.360 --> 00:06:46.560
When you're at the ground level, you start to actually look at individual cases, you know, see people starting to commit fraud.
00:06:46.560 --> 00:06:50.959
And that really is where you start to like, you know, understand the user behavior.
00:06:50.959 --> 00:07:08.800
And when you don't have these sophisticated techniques yet, and you're relying on a patchwork of vendors and other defenses, like static rules, the way you learn is almost by getting hit the first time, you know, and then making sure the blast radius of that initial fraud attack is mitigated.
00:07:08.800 --> 00:07:11.040
Learn from that and respond to that.
00:07:11.040 --> 00:07:17.120
Obviously, you know, that's very reactive, but the reality is fraudsters are consistently evolving their approach, like you said.
00:07:17.120 --> 00:07:22.079
Um, they are consistently trying new strategies, like using new technologies.
00:07:22.079 --> 00:07:23.600
And it's a balance.
00:07:23.600 --> 00:07:31.920
You have to balance, you know, trying your best to anticipate and put yourself in the psychology of the fraudster.
00:07:31.920 --> 00:07:37.680
I think one of the things I underestimated was just how psychologically driven fraud is.
00:07:37.680 --> 00:07:45.199
You have to put yourself in their shoes: understand where they might attack, how they might find vulnerabilities, what type of resources they have at their disposal.
00:07:45.199 --> 00:07:53.759
And then you can start to, you know, procure different defenses, build a team that can proactively prevent that type of attack.
00:07:53.759 --> 00:08:18.240
But the reality is that will have to be balanced by reactive monitoring: making sure that you are watching the right things, and, if you're launching a new product where you suspect you have some vulnerabilities, literally watching cases come in one by one, reviewing them, and trying to respond as fast as possible.
00:08:18.240 --> 00:08:28.240
So you don't want to be fully reactive, but it's almost impossible to be 100% proactive, because fraudsters change all the time.
00:08:28.240 --> 00:08:30.800
They understand your vulnerabilities, they shift their approach.
00:08:30.800 --> 00:08:32.159
You have to kind of balance the two.
00:08:32.159 --> 00:08:33.679
And it's an ongoing balance between the two.
00:08:33.679 --> 00:08:34.399
And it's very difficult.
00:08:34.799 --> 00:08:38.720
When you get into the psychology of a fraudster, are you joining Telegram chats?
00:08:38.720 --> 00:08:44.240
Are you, you know, reading news articles on Huione in Cambodia?
00:08:44.240 --> 00:08:45.440
How how do you think about that?
00:08:45.440 --> 00:08:57.200
Because I think this is such a misunderstood point: we spend so much time talking about defenses that we don't spend enough time talking about the fact that, like it or not, these are real people and this is a full-time job.
00:08:57.200 --> 00:08:57.600
Yeah, yeah.
00:08:57.600 --> 00:09:05.440
And we probably all disagree with the ethics of the full-time job, but yeah, they're also multinational companies at this point with thousands of employees.
00:09:05.440 --> 00:09:09.679
We're very far gone from the days of someone in their mom's basement.
00:09:09.679 --> 00:09:15.919
But how do you think about getting into their psychology and trying to almost predict what they would want to do next?
00:09:16.240 --> 00:09:17.360
Yeah, you you make a good point.
00:09:17.360 --> 00:09:30.399
At first, a lot of people who aren't familiar with the space won't believe that there are people whose literal job is to have many different devices, you know, access to the dark web, access to all the data breaches.
00:09:30.399 --> 00:09:39.279
If you're not familiar with the space, you don't know that there's this insane flow of information on the dark web from various data breaches at some of the largest companies in the world.
00:09:39.279 --> 00:09:40.559
And it's flowing around.
00:09:40.559 --> 00:09:44.720
Like there's an active market of people buying this information, using it to commit fraud.
00:09:44.720 --> 00:09:46.399
And it's it's very real.
00:09:46.399 --> 00:09:50.720
Um, and that I think is is obviously important for people to recognize.
00:09:50.720 --> 00:10:01.120
And yeah, I think the key is to work with people who have access to this type of information, like at various jobs I've had in the past.
00:10:01.120 --> 00:10:08.720
Like we've worked with advisors, you know, who are very deep in the space and they'll be like, oh yeah, we heard about chatter about your company on the dark web.
00:10:08.720 --> 00:10:19.279
And the reality is, when you recognize that this is a full-time job for some people, you also have to recognize that they are constantly active and very opportunistic.
00:10:19.279 --> 00:10:29.600
They're going to look for every opportunity to attack you if you're a fintech company with vulnerabilities, and you have to be extremely careful.
00:10:29.600 --> 00:10:43.279
Like obviously, you know, if you're about to launch a new product or if you're about to like start something new, you have to realize that someone whose full-time job it is to commit financial fraud will be, you know, on the prowl for these types of opportunities.
00:10:43.279 --> 00:10:52.000
And I think just understanding that is incredibly important for making sure you have the right level of safety for whatever product you're launching.
00:10:52.000 --> 00:10:55.440
Because I think it's one thing to know that that's happening.
00:10:55.440 --> 00:11:00.720
And it's another thing to see them hop in and have that start, you know, cannibalizing your P&L.
00:11:00.720 --> 00:11:15.519
So you have to think about what they are watching out for, from a psychological perspective, and then make sure that your defenses are at the point where they can actually stop them.
00:11:15.919 --> 00:11:19.440
It's the fraudster's full-time job to make your life unfortunate,
00:11:19.440 --> 00:11:21.120
you know, by trying to take money.
00:11:21.120 --> 00:11:27.039
And the growth team's full-time job is probably to bug you by saying, why are you adding all of these defenses?
00:11:27.039 --> 00:11:34.159
Which puts someone in your shoes in a pretty unique and difficult position: there are two groups whose goals are somewhat at odds.
00:11:34.159 --> 00:11:34.480
Yeah.
00:11:34.480 --> 00:11:36.639
This leads to, I guess, a two-part question.
00:11:36.639 --> 00:11:38.799
One, do you think it's actually possible to get rid of all fraud?
00:11:38.799 --> 00:11:45.759
And two, how do you think about the balance of the checks you're adding versus the friction?
00:11:46.080 --> 00:11:47.279
This is the age-old question.
00:11:47.279 --> 00:11:50.720
I was joking with my girlfriend the other day.
00:11:50.720 --> 00:11:56.399
She's like, I feel like answering this question, the balance between fraud and growth, has been your full-time job at three different companies now.
00:11:56.399 --> 00:11:59.759
So it's very personal.
00:11:59.759 --> 00:12:04.080
But yeah, the reality is it's a healthy debate.
00:12:04.080 --> 00:12:16.720
It's a very healthy dynamic to think about how you can grow versus how you can defend yourself from fraud and from, you know, P&L cannibalization.
00:12:16.720 --> 00:12:32.480
And I think there's not really a right answer, more so just making sure that it's a healthy, ongoing debate between those two teams with two different underlying incentives, but ultimately the same goal: to put the company in the best position to succeed.
00:12:32.480 --> 00:12:34.000
And I don't know.
00:12:34.000 --> 00:12:38.240
I think it's very interesting.
00:12:38.240 --> 00:12:52.720
And the reality, which we try to remind ourselves of every day, is you want to put yourself in the best position to fight fraud, and especially to see it: to know that it's happening, to have the right data, the right visibility.
00:12:52.720 --> 00:12:56.480
But the reality is you're never gonna fully stop fraud.
00:12:56.480 --> 00:13:00.240
Zero fraud losses is just not realistic.
00:13:00.240 --> 00:13:03.679
And I think to be honest, like zero fraud losses means zero revenue.
00:13:03.679 --> 00:13:05.919
Like you're you're not gonna be able to grow.
00:13:05.919 --> 00:13:18.960
There's a reason that these very large companies, like Capital One, for example, are provisioning hundreds of millions of dollars away on their balance sheets each earnings call for, you know, fraud losses, credit losses, et cetera.
00:13:18.960 --> 00:13:21.600
Credit's a little adjacent, but they know it's gonna happen.
00:13:21.600 --> 00:13:24.320
They know that there are going to be losses incurred.
00:13:24.320 --> 00:13:25.759
It's just a part of growing.
00:13:25.759 --> 00:13:39.759
And it's less about making sure you have zero fraud, and more about making sure fraud is within budget, and that you have the data, can learn from it, and can consistently adapt to make it less and less of a problem.
00:13:39.759 --> 00:13:42.000
So, like the reality is you're never gonna get rid of it.
00:13:42.000 --> 00:13:43.919
Like the goal is to grow, obviously.
00:13:44.159 --> 00:13:53.679
It's a crazy statement. With Capital One this is public: you know, hundreds of millions of dollars on their annual earnings calls are set aside for fraud.
00:13:53.679 --> 00:13:56.080
Do you think that should be a paradigm?
00:13:56.080 --> 00:14:02.159
When you hear something like that, does it make you think these systems are fundamentally flawed?
00:14:02.159 --> 00:14:09.840
The other thing I think about is legal expenses, where public companies will just set aside an amount: we're going to get sued, and this is what we'll have to spend on it.
00:14:09.840 --> 00:14:17.440
Does that not almost make you drive your head into a wall? That we've just accepted this has to be the case?
00:14:17.759 --> 00:14:18.000
Yeah.
00:14:18.000 --> 00:14:19.039
I mean, it does.
00:14:19.039 --> 00:14:24.159
When you think about it, it's, why are you kind of settling for defeat, I guess.
00:14:24.159 --> 00:14:27.759
I mean, when you're Capital One, you're also, you know, earning a lot.
00:14:27.759 --> 00:14:28.879
Yeah, yeah.
00:14:28.879 --> 00:14:29.600
They earn a lot.
00:14:29.600 --> 00:14:38.480
And they also have other processes in place that they're amazing at, you know, like recovering fraud losses, and obviously they're an extremely profitable, extremely successful company.
00:14:38.480 --> 00:14:40.320
So they can get away with it.
00:14:40.320 --> 00:15:10.320
But I think the reality is that the current state of fraud, the current way fraud is committed, is very much a reminder of the, I don't want to say legacy, but, you know, the financial infrastructure, the payment stack: the way all these products have been built has stayed relatively similar for the past maybe 30 or 40 years, and we're starting to see more evolution in the fintech space around the underlying infrastructure, you know, around crypto, around tokenization.
00:15:10.320 --> 00:15:24.159
And part of me wonders if this concept of, oh, fraud is gonna happen and we just have to budget for it, let it happen, and learn from it, is reminiscent of, you know, the old guard of fintech infrastructure.
00:15:24.159 --> 00:15:26.159
Um, but it's a really compelling question.
00:15:26.159 --> 00:15:31.440
And I think both sides of the coin could potentially be true.
00:15:31.440 --> 00:15:35.679
You know, you have to learn from fraud, and it's gonna happen.
00:15:35.679 --> 00:15:45.519
But at the same time, you shouldn't settle for it, because when you do see it, and I know when I've seen this happening at certain jobs, I'm like, this could have been prevented.
00:15:45.519 --> 00:15:50.720
If we had the right systems in place, we could have identified this pocket of users, you know, allowed the good users to grow.
00:15:50.720 --> 00:15:53.440
And there's no reason we should just settle for this.
00:15:53.440 --> 00:15:54.559
So it's interesting.
00:15:54.559 --> 00:15:57.919
But I think, again, I'm not so sure there's a right answer just yet.
00:15:58.480 --> 00:16:04.559
What's the most unique thing you've seen a fraudster or a group do to try to compromise the system?
00:16:04.799 --> 00:16:05.039
Yeah.
00:16:05.039 --> 00:16:08.559
I mean, this was at a previous job, but I've seen a lot.
00:16:08.559 --> 00:16:10.879
I mean, you wouldn't be surprised.
00:16:10.879 --> 00:16:15.039
Other people would be surprised at how sophisticated these people can get.
00:16:15.039 --> 00:16:43.679
But literally, we had something happen at a previous job a while ago where someone had some sort of machine learning, rapid automation system where they just spammed BINs, bank identification numbers, into cards, literally guessing credit card numbers in rapid, sequential order to try to figure out which card number and expiration date combo was correct.
00:16:43.679 --> 00:16:51.360
Obviously, when you think about the different permutations of a credit card number, an expiration date, a security code, there's an astronomical number of permutations.
00:16:51.360 --> 00:17:05.680
We had a fraudster who literally had an automated system guessing these, many thousands per second in rapid sequence, just trying to get authorizations, to get transactions to go through a fraudulent merchant.
00:17:05.680 --> 00:17:09.519
So I think just that story was... So were they successful?
00:17:09.519 --> 00:17:23.359
Partially. For the most part, no: a lot of them were declined, a lot of them didn't work. But they were still able to get some through; it probably goes to the P&L of that fraudster, where they're willing to spend a certain amount on compute to get a certain number of transactions.
00:17:23.359 --> 00:17:24.000
Yeah, exactly.
00:17:24.000 --> 00:17:29.039
And they definitely weren't just doing this to us; they were probably doing this to a lot of other companies.
00:17:29.039 --> 00:17:41.039
That was my first reminder that even fraudsters targeting smaller fintechs have these crazy capabilities, are very sophisticated, are leveraging machine learning.
00:17:41.039 --> 00:17:54.559
And that, I think, was just a reminder for me of how creative they can get, and how, when it's a fraudster's full-time job and they want to be state of the art, they have a lot of capabilities.
00:17:55.680 --> 00:17:57.599
Machine learning there makes a lot of sense.
00:17:57.599 --> 00:18:00.160
Uh using it to guess these permutations.
00:18:00.160 --> 00:18:02.720
Very hot topic these days, artificial intelligence.
00:18:02.720 --> 00:18:02.960
Yeah.
00:18:02.960 --> 00:18:18.319
Another tool that's very good for fraudsters, in that now, for the first time ever, you can fairly cheaply and quickly generate hundreds or thousands of compelling fake documents, whether they're driver's licenses, bank statements, utility bills, or nurtured synthetic identities.
00:18:18.319 --> 00:18:25.279
What are you seeing on the other side of ways that you can leverage artificial intelligence in your seat to try to defeat fraud?
00:18:25.599 --> 00:18:27.119
Yeah, there are a lot of different ways.
00:18:27.119 --> 00:18:42.720
I will say, first of all, a lot of it right now is, I don't want to say hit or miss, but there are a lot of really interesting conceptual ways in which we can prevent fraud.
00:18:42.720 --> 00:18:47.599
There are a few that we're seeing that show some immediate viability.
00:18:47.599 --> 00:19:12.160
I think one of them is actually, as we were talking about before the episode, that fraud traditionally has involved some element of manual review: having agents, fraud agents specifically, actually going through and reviewing customers, reviewing documents, looking into our systems, to ask, is this customer, is this activity they've done on our platform, fraud?
00:19:12.160 --> 00:19:13.359
Is it not fraud?
00:19:13.359 --> 00:19:17.359
You know, they submitted this document to us as proof that they're not committing fraud.
00:19:17.359 --> 00:19:18.799
Is this document legit?
00:19:18.799 --> 00:19:32.160
And companies all over the world are still using manual agents to review back-office documents, conduct investigations, and leave an audit trail on various customer activity.
00:19:32.160 --> 00:19:49.920
And we think that AI can do a lot of these reviews for us: help us create a feedback loop between what our systems are seeing and what defenses we're building, and make that feedback loop a lot more efficient, a lot cheaper, and obviously a lot faster.
00:19:49.920 --> 00:19:58.559
So that's one immediate area where we're using it, and we see it as being incredibly valuable.
00:19:58.559 --> 00:20:02.480
Um, there's a lot of other really interesting potential use cases here too.
00:20:02.480 --> 00:20:09.279
I think going back to first party fraud, um, obviously I learned about it a lot from my time at Capital One working on the specific team.
00:20:09.279 --> 00:20:15.599
And it was kind of crazy that they had a whole business team specifically dedicated to that one type of fraud.
00:20:15.599 --> 00:20:36.640
But behavioral intelligence, behavioral analytics: literally looking at, you know, cookies and the way you click through the application and the web portal. There are a lot of artificial intelligence capabilities around understanding user behavior and putting in proactive defenses based on it.
00:20:36.640 --> 00:20:40.079
So those are two ways in which we're seeing it, but I think there's a ton.
00:20:40.079 --> 00:20:44.079
Obviously, as I mentioned before, fraud is a big data problem.
00:20:44.079 --> 00:20:46.720
I think another thing that comes to mind is data labeling.
00:20:46.720 --> 00:20:59.920
You need to be able to actually label the fraud after it's happened in your systems, learn from it, and have that data feed machine learning models so you get more proactive and more sophisticated.
00:20:59.920 --> 00:21:08.720
So, using artificial intelligence to help label data and make sure that the feedback loops going into our machine learning models are accurate.
00:21:08.720 --> 00:21:12.000
So I think there's a ton of interesting potential here.
00:21:12.000 --> 00:21:13.680
And we're really just at the frontier.
00:21:13.680 --> 00:21:20.000
A lot of the capabilities here are very nascent and very promising.
00:21:20.000 --> 00:21:24.400
And there's so much, I think, to be discovered in the space, and it's very exciting.
00:21:24.799 --> 00:21:28.880
We're very bullish on this first bucket you mentioned, manual review.
00:21:28.880 --> 00:21:33.440
At the same time, the biggest question we hear is hallucinations.
00:21:33.440 --> 00:21:33.839
Yeah.
00:21:33.839 --> 00:21:36.640
Uh these are big decisions.
00:21:36.640 --> 00:21:42.000
And if you make the wrong call on the wrong person, it could have a pretty catastrophic impact.
00:21:42.000 --> 00:21:42.319
Yeah.
00:21:42.319 --> 00:21:58.640
How do you think about building in protections there? Or, more broadly, how do you think about getting to a level of confidence, for you or any partner banks you work with, given that even if you ask the LLMs what they got wrong, they won't know?
00:21:58.960 --> 00:21:59.200
Yeah, yeah.
00:21:59.200 --> 00:22:01.440
This is the hard part, obviously.
00:22:01.440 --> 00:22:03.440
I think one of the really difficult things.
00:22:03.440 --> 00:22:07.839
And to be honest, I don't even know if we have a good answer yet. We don't yet either.
00:22:07.839 --> 00:22:08.960
Yeah, yeah.
00:22:08.960 --> 00:22:13.599
But one of the things is just, obviously, how regulated fintech is.
00:22:13.599 --> 00:22:15.279
Like everything must be documented.
00:22:15.279 --> 00:22:19.039
Every decision you make must be, you know, auditable, traceable.
00:22:19.039 --> 00:22:26.079
And that, you know, obviously is a little bit difficult sometimes, especially in the nascent era of artificial intelligence.
00:22:26.079 --> 00:22:42.880
In two different jobs now, I've worked on problems where we needed to do something for a partner bank, or there was an audit from a regulator or a partner bank, and we needed to provide some documentation.
00:22:42.880 --> 00:22:53.839
And obviously, you know, partner banks and United States regulators don't necessarily always have the reputation of being so technologically advanced.
00:22:53.839 --> 00:22:58.720
I think the concept of artificial intelligence is sometimes a little bit scary to them.
00:22:58.720 --> 00:23:14.720
And so there's always been a really delicate balance between trying to be forward-thinking, as efficient and as accurate and as technologically advanced, using artificial intelligence as much as possible.