March 26, 2026

Zero Fraud Means Zero Revenue w/ Zach from Comun


In this episode of Risk and Reason, Eli Wachs sits down with Zach Trunsky, Founding Business Operations at Comun, to explore how early-stage fintechs build fraud and risk programs from the ground up. Drawing on his experience at Capital One, Mercury, and now Comun, Zach shares why zero fraud losses is an unrealistic goal, how fraudsters operate as sophisticated full-time professionals, and why the era of massive BPO review teams may be coming to an end.

Chapters 

(0:00) Can You Actually Eliminate All Fraud? 

(1:25) From Capital One Intern to Founding Risk Hire 

(5:42) The Cat-and-Mouse Game With Fraudsters 

(11:15) Why Zero Fraud Means Zero Revenue 

(18:25) AI as a Weapon on Both Sides 

(24:06) Why BPOs Are Holding Fintechs Back 

(27:38) Rating BPOs vs. AI Copilots 

(33:11) Advice for New Risk Leaders and 2026 Predictions

Follow Zach Trunsky
LinkedIn: https://www.linkedin.com/in/zach-trunsky-436879125/ 

Follow Eli Wachs
LinkedIn: https://www.linkedin.com/in/eliwachs/

Check out Footprint
https://www.onefootprint.com

Footprint helps fintechs and banks verify identity, prevent fraud, and manage compliance with end-to-end onboarding and risk infrastructure.

00:00 - Why Zero Fraud Is Impossible

00:28 - Welcome And Zach’s Background

05:41 - Proactive Tools Versus Getting Hit

11:15 - Fraud Versus Growth Tradeoffs

15:58 - The Strangest Fraud Tactics

18:03 - Using AI To Fight Fraud

25:41 - Manual Review Copilots And BPOs

31:22 - Privacy, Audit Trails, And Regulators

33:10 - Day-One Advice For Fraud Leads

36:00 - 2026 Predictions And Closing

WEBVTT

00:00:00.080 --> 00:00:01.360
These things you actually can't fully stop.

00:00:01.360 --> 00:00:04.160
The reality is like you're never gonna fully stop that.

00:00:04.160 --> 00:00:07.919
Like zero fraud losses is like not realistic.

00:00:07.919 --> 00:00:14.720
These like very large companies, like Capital One, for example, on the balance sheet are provisioning billions of dollars away.

00:00:14.720 --> 00:00:15.919
They know it's gonna happen.

00:00:15.919 --> 00:00:23.760
It's just exciting to think about, like, a time where we can be a little bit more proactive with these.

00:00:23.760 --> 00:00:28.239
Don't think that because a major attack isn't happening to you.

00:00:28.239 --> 00:00:42.079
I don't want to jinx it, but it probably will eventually. Hey everybody, welcome back to the Risk and Reason podcast.

00:00:42.159 --> 00:00:44.320
Uh I'm Eli, your host from Footprint.

00:00:44.320 --> 00:00:46.880
This episode is brought to you by our friends at Loan Pro.

00:00:46.880 --> 00:00:55.280
And we have an awesome guest today, Zach Trunsky, founding BizOps hire at Comun, by way of Mercury, by way of Capital One.

00:00:55.280 --> 00:01:01.840
Uh you've worked at some of the most fascinating companies in Risk, some of the largest and now some of the fastest growing.

00:01:01.840 --> 00:01:03.119
Thanks for joining us on the show.

00:01:03.359 --> 00:01:03.840
Yeah, of course.

00:01:03.840 --> 00:01:06.159
Thank you for having me on and excited to have a good discussion.

00:01:06.560 --> 00:01:10.319
Zach, we were talking about before, uh, how does one end up in this world?

00:01:10.319 --> 00:01:17.200
Uh, this doesn't seem like, you know, a childhood dream. You're looking at NASA, uh, you're looking at NBA players.

00:01:17.200 --> 00:01:20.640
Uh, did you have posters of compliance professionals?

00:01:20.640 --> 00:01:21.840
You know, what what drew you in?

00:01:21.840 --> 00:01:23.280
Yeah, like the Michael Jordan of compliance.

00:01:23.280 --> 00:01:24.959
The Michael Jordan of compliance, exactly.

00:01:25.200 --> 00:01:33.359
No, it's uh I feel like a lot of people might say this, but I think for me it was probably a little bit of an accident the way I fell into this career path.

00:01:33.359 --> 00:01:39.359
Um, actually it went back to when I first interned at Capital One after my junior year of college.

00:01:39.359 --> 00:01:42.480
Um, it was a rotational corporate strategy program.

00:01:42.480 --> 00:01:46.640
And for the internship they kind of place you, embed you, within a specific team.

00:01:46.640 --> 00:01:50.239
Um, I obviously, you know, was only starting to learn about fintech.

00:01:50.239 --> 00:01:55.519
I was not really knowledgeable about the space, definitely not knowledgeable about um risk compliance and fraud.

00:01:55.519 --> 00:02:06.640
And my interviewer when I was interviewing for the internship at Capital One, someone running my case interview, actually, was a VP in fraud, first party fraud specifically.

00:02:06.640 --> 00:02:18.800
And I remember her telling me about some of her stories, like stopping fraud at scale for this like obviously very large Fortune 100 company, and how kind of they were on the forefront of machine learning, artificial intelligence.

00:02:18.800 --> 00:02:20.400
And this is back in 2018.

00:02:20.400 --> 00:02:22.719
And I obviously didn't have a background in this.

00:02:22.719 --> 00:02:24.000
I didn't, you know, know a lot about it.

00:02:24.000 --> 00:02:25.120
And it sounded fascinating.

00:02:25.120 --> 00:02:28.240
I'm like, this sounds like kind of like working for the FBI.

00:02:28.240 --> 00:02:28.479
Yeah.

00:02:28.479 --> 00:02:31.759
And and like I was just kind of enamored by the opportunity.

00:02:31.759 --> 00:02:34.240
So after that, I'm like, okay, I love this.

00:02:34.240 --> 00:02:38.560
Like, and then when I got the offer, I was like, okay, my interviewer worked in fraud.

00:02:38.560 --> 00:02:41.680
This fraud sounds super cool, you know, busting the bad guys.

00:02:41.680 --> 00:02:46.319
And I like requested to be a part of that team, and and sure enough, they placed me on that team.

00:02:46.319 --> 00:02:49.520
And that's kind of where I got my first exposure to it.

00:02:49.520 --> 00:02:51.680
Um, and obviously it was really cool.

00:02:51.680 --> 00:02:59.520
You were working on the forefront of data and on the forefront of machine learning and and all these advanced kind of quantitative techniques, and I loved it.

00:02:59.520 --> 00:03:04.319
And then um fast forward, you know, obviously working for Mercury and then my current job.

00:03:04.319 --> 00:03:09.599
Um, it's, you know, I've gotten the opportunity to kind of play a more active role in building those programs from the ground up.

00:03:09.599 --> 00:03:25.120
But yeah, I think it kind of happened by accident, but I just really fell in love with, you know, the advanced nature of fighting fraud and just kind of how fun it sometimes is to chase the bad guys, you know.

00:03:25.439 --> 00:03:27.439
What did Day One look like at Cap One?

00:03:27.439 --> 00:03:29.599
Uh iconic, very innovative company.

00:03:29.599 --> 00:03:32.560
Uh, QED led our Series A; they're the founders of Capital One.

00:03:32.560 --> 00:03:34.800
We've very much heard the the origin stories.

00:03:34.800 --> 00:03:35.199
Yeah.

00:03:35.199 --> 00:03:42.000
Were you given a textbook of, this is how you should go and learn about first-party fraud, or was it osmosis?

00:03:42.000 --> 00:03:42.960
How did you think about that?

00:03:42.960 --> 00:03:45.039
And how do you think someone should think about that?

00:03:45.360 --> 00:03:45.680
Yeah, yeah.

00:03:45.680 --> 00:03:50.400
I mean, basically the team, there are so many resources.

00:03:50.400 --> 00:03:58.000
Like you think about it, Capital One has like many, many, many millions of customers, like moving billions of dollars um every single day.

00:03:58.000 --> 00:04:02.240
The sheer amount of data they have is unlike anything I've ever seen.

00:04:02.240 --> 00:04:05.120
And like I think fraud largely is a data problem.

00:04:05.120 --> 00:04:14.400
And, you know, so the team had just developed such an expertise on the various MOs, the data, the, you know, various techniques.

00:04:14.400 --> 00:04:16.160
There are all these like PowerPoint decks.

00:04:16.160 --> 00:04:22.240
And I think like one of the things that they're really good at is transferring knowledge to people like within teams, like within the company.

00:04:22.240 --> 00:04:32.720
I think part of why Capital One is so good is like they have it's very talent-dense, but also like they're very good at passing information between the company and to people who are growing within the company.

00:04:32.720 --> 00:04:47.600
So just literally by finding mentors on my team who were a little bit more senior and who had developed all these resources, and also quite frankly, by just playing with the data, I was able to kind of start to learn about the MOs, start to learn about what the data looks like.

00:04:47.600 --> 00:04:54.319
My first project as an intern actually was to work with this novel geolocation data set that they just had.

00:04:54.319 --> 00:05:08.160
They had just purchased it from some vendor, and it was literally a data set that had, you know, the time zone of a of a user, you know, the operating system time zone, like all this like geolocation-based data.

00:05:08.160 --> 00:05:17.439
And like my job was essentially to just like play with it and understand it and try to come to a reasonable conclusion by the end of the project on how this can add business value.

00:05:17.439 --> 00:05:26.399
And that was just kind of, you know, dipping my toes in that world of data and exploration hypothesis testing was kind of really how I learned about it.

00:05:26.399 --> 00:05:41.600
Like you can learn about it by talking with people, you can learn about it by by reading resources, but I think just like getting in it and understanding it and eventually like seeing fraud happen and starting to pattern match, like using the data, I think was kind of, you know, my how I learned about the space in the most effective way.

00:05:41.600 --> 00:05:42.639
So it's very interesting.

00:05:42.959 --> 00:05:47.040
You bring up this FBI analogy, though, like you're a detective in a way.

00:05:47.040 --> 00:05:48.240
What does it actually look like?

00:05:48.240 --> 00:05:52.000
So you you bring up this scenario, you buy a geolocation data set.

00:05:52.000 --> 00:05:57.120
I I know this is first proud of the internet, so you maybe weren't given the budget to buy that data set to give them to.

00:05:57.120 --> 00:06:03.839
How do you think about this cat-and-mouse game? Do you think about buying tools proactively?

00:06:03.839 --> 00:06:08.959
Or do you think, we just got hit by this fraud, it all was from the state of Georgia?

00:06:08.959 --> 00:06:10.800
We need to be better at geofencing.

00:06:10.800 --> 00:06:13.040
How do you think about putting those pieces together?

00:06:13.040 --> 00:06:17.199
Also knowing that once you've caught someone, they're probably gonna come back a different way.

00:06:17.439 --> 00:06:19.120
Yeah, it's it's really difficult, obviously.

00:06:19.120 --> 00:06:23.279
And I think, like, companies still aren't very good at it.

00:06:23.279 --> 00:06:27.439
And I think this is actually more relevant to some of my experience at smaller companies.

00:06:27.439 --> 00:06:35.360
Um, you know, when you don't have the level of sophistication, the level of scale that uh, you know, obviously a company like Capital One does.

00:06:35.360 --> 00:06:46.560
Um I think like there, when you're kind of on the ground level, I think you start to actually look at individual cases, you know, see people starting to commit fraud.

00:06:46.560 --> 00:06:50.959
And that really is where you start to like, you know, understand the user behavior.

00:06:50.959 --> 00:07:08.800
And I think when you don't have these sophisticated techniques yet, and you're relying on kind of a patchwork of vendors and other defenses, like static rules, the way you learn is almost by getting hit the first time, you know, and just mitigating, making sure the blast radius of that initial fraud attack is contained.

00:07:08.800 --> 00:07:11.040
Learn from that and respond to that.

00:07:11.040 --> 00:07:17.120
Obviously, you know, that's very reactive, but the reality is fraudsters are consistently evolving their approach, like you said.

00:07:17.120 --> 00:07:22.079
Um, they are consistently trying new strategies, like using new technologies.

00:07:22.079 --> 00:07:23.600
And it's a balance.

00:07:23.600 --> 00:07:31.920
You have to balance, you know, trying your best to anticipate and put yourself in the psychology of the fraudster.

00:07:31.920 --> 00:07:37.680
Like I think that was one of the parts that I underestimated was just like how psychologically driven, you know, fraud is.

00:07:37.680 --> 00:07:45.199
You have to put yourself in their shoes, understand, like, where might they attack, how might they find vulnerabilities, what type of resources do they have at their disposal.

00:07:45.199 --> 00:07:53.759
And you can start to like, you know, procure different defenses, you know, build a team that can, you know, proactively prevent against that type of attack.

00:07:53.759 --> 00:08:18.240
But the reality is that will have to be balanced by just reactive monitoring, you know, making sure that you are watching the right things, making sure that literally, you know, if you're launching a new product and you might suspect you have some vulnerabilities, literally watching transactions and cases come in one by one, reviewing those, and trying to respond as fast as possible.

00:08:18.240 --> 00:08:28.240
So I think, like, you don't want to be fully reactive, but it's almost impossible to be 100% fully proactive, because fraudsters change all the time.

00:08:28.240 --> 00:08:30.800
They understand your vulnerabilities, they shift their approach.

00:08:30.800 --> 00:08:32.159
You have to kind of balance the two.

00:08:32.159 --> 00:08:33.679
And it's an ongoing balance between the two.

00:08:33.679 --> 00:08:34.399
And it's very difficult.

00:08:34.799 --> 00:08:38.720
When you get into psychology of a fraudster, are you joining Telegram chats?

00:08:38.720 --> 00:08:44.240
Are you, you know, watching, you know, reading news articles on Huione in Cambodia?

00:08:44.240 --> 00:08:45.440
How how do you think about that?

00:08:45.440 --> 00:08:57.200
Because I think this is such a misunderstood point, maybe of we spend so much time talking about defenses that we don't spend enough time talking about, like it or not, these are real people and this is a full-time job.

00:08:57.200 --> 00:08:57.600
Yeah, yeah.

00:08:57.600 --> 00:09:05.440
And we probably all disagree with the ethics of the full-time job, but yeah, they're they're also multinational companies at this point with thousands of employees.

00:09:05.440 --> 00:09:09.679
We're probably we're very far gone from the days of someone in their mom's basement.

00:09:09.679 --> 00:09:15.919
But how do you think about getting in at psychology and trying to almost predict what they would want to do next?

00:09:16.240 --> 00:09:17.360
Yeah, you you make a good point.

00:09:17.360 --> 00:09:30.399
Like at first, a lot of people who aren't familiar with the space won't believe that, you know, there are people whose literal job it is to have many different devices, many different, like, access points to the dark web, access to all the data breaches.

00:09:30.399 --> 00:09:39.279
If you're not familiar with the space, you don't know that there's this, like, insane flow of information on the dark web from various data breaches from some of the largest companies in the world.

00:09:39.279 --> 00:09:40.559
And it's flowing around.

00:09:40.559 --> 00:09:44.720
Like there's an active market of people buying this information, using it to commit fraud.

00:09:44.720 --> 00:09:46.399
And it's it's very real.

00:09:46.399 --> 00:09:50.720
Um, and that I think is is obviously important for people to recognize.

00:09:50.720 --> 00:10:01.120
And yeah, I think just the key is to, like, work with people who have access to this type of information, like at various jobs I've had in the past.

00:10:01.120 --> 00:10:08.720
Like we've worked with advisors, you know, who are very deep in the space and they'll be like, oh yeah, we heard about chatter about your company on the dark web.

00:10:08.720 --> 00:10:19.279
And the reality is, like, when you recognize that this is a full-time job for some people, you also have to recognize that they're very opportunistic.

00:10:19.279 --> 00:10:29.600
They're going to look for every opportunity to, you know, attack you if you're if you're a fintech company, if you have vulnerabilities and you have to be extremely careful.

00:10:29.600 --> 00:10:43.279
Like obviously, you know, if you're about to launch a new product or if you're about to like start something new, you have to realize that someone whose full-time job it is to commit financial fraud will be, you know, on the prowl for these types of opportunities.

00:10:43.279 --> 00:10:52.000
And just I think understanding that is incredibly important for, you know, making sure that you have the right level of safety for whatever product you're launching.

00:10:52.000 --> 00:10:55.440
Cause I think it's one thing to know that that's happening.

00:10:55.440 --> 00:11:00.720
And then it's another thing to, you know, see that happen and have it start, you know, cannibalizing your P&L.

00:11:00.720 --> 00:11:15.519
So, um, you have to think about what they are watching for from a psychological perspective and then, like, you know, make sure that your defenses are to the point where they can actually stop them.

00:11:15.919 --> 00:11:19.440
A fraudster's full-time job is to make your life unfortunate.

00:11:19.440 --> 00:11:21.120
Um, by trying to take money.

00:11:21.120 --> 00:11:27.039
Uh, growth team's full-time job is to also probably bug you by saying, Are you adding all of these defenses?

00:11:27.039 --> 00:11:34.159
Which puts someone in your shoes in a pretty unique and difficult position, that there are two people whose goals are somewhat at odds.

00:11:34.159 --> 00:11:34.480
Yeah.

00:11:34.480 --> 00:11:36.639
This leads to, I guess, two more questions.

00:11:36.639 --> 00:11:38.799
One, do you think it's actually possible to get rid of all fraud?

00:11:38.799 --> 00:11:45.759
And two, how do you think about the balance of checks that you're adding versus friction?

00:11:46.080 --> 00:11:47.279
This is this is the age-old question.

00:11:47.279 --> 00:11:50.720
I feel like I I've I've been j I was joking with my girlfriend the other day.

00:11:50.720 --> 00:11:56.399
She's like, I feel like this answering this question, the balance between fraud and growth has been your full-time job at three different companies now.

00:11:56.399 --> 00:11:59.759
So it's very like it's very, it's very personal.

00:11:59.759 --> 00:12:04.080
But I think, yeah, the reality is there's it's a healthy debate.

00:12:04.080 --> 00:12:16.720
It's a very healthy dynamic to think about how you can grow versus, you know, how you can defend yourself from fraud and from, you know, P&L cannibalization.

00:12:16.720 --> 00:12:32.480
Um, and I think there's not really a right answer, more so than just making sure that it's a healthy ongoing debate between those two teams with, like, two different, you know, underlying incentives, but ultimately the same goal: to put the company in the best position to succeed.

00:12:32.480 --> 00:12:34.000
And I don't know.

00:12:34.000 --> 00:12:38.240
I think it's very like it's very interesting.

00:12:38.240 --> 00:12:52.720
And the reality, that we try to remind ourselves of every day, is you wanna put yourself in the best position to fight against fraud, especially you want to put yourself in the best position to see it, like know that it's happening, like have the right data, have the right visibility.

00:12:52.720 --> 00:12:56.480
But I think the reality is like you're never gonna fully stop fraud.

00:12:56.480 --> 00:13:00.240
Like zero fraud losses is like not realistic.

00:13:00.240 --> 00:13:03.679
And I think to be honest, like zero fraud losses means zero revenue.

00:13:03.679 --> 00:13:05.919
Like you're you're not gonna be able to grow.

00:13:05.919 --> 00:13:18.960
Um, there's a reason that these very large companies, like Capital One, for example, on their balance sheet are provisioning billions of dollars away, like each earnings call, for, you know, fraud losses, credit losses, et cetera.

00:13:18.960 --> 00:13:21.600
Credit's a little adjacent, but they know it's gonna happen.

00:13:21.600 --> 00:13:24.320
They know that there are going to be losses incurred.

00:13:24.320 --> 00:13:25.759
It's just a part of growing.

00:13:25.759 --> 00:13:39.759
And it's less about like making sure you have zero fraud, but more so making sure fraud is like within budget and that you have the data and you can learn from it and consistently adapt to make it like less and less of a problem.

00:13:39.759 --> 00:13:42.000
So, like the reality is you're never gonna get rid of it.

00:13:42.000 --> 00:13:43.919
Like the goal is to grow, obviously.

00:13:44.159 --> 00:13:53.679
Um, it's an ongoing— Crazy statement. Capital One, this is public, you know, hundreds of millions of dollars on their annual earnings calls are set aside for fraud.

00:13:53.679 --> 00:13:56.080
Do you think that should be a paradigm?

00:13:56.080 --> 00:14:02.159
You know, like that it when you hear something like that, does that make you think these systems are fundamentally flawed?

00:14:02.159 --> 00:14:09.840
The the other things I could think about is legal expenses where public companies will just set aside an amount, we're gonna get sued, and this is what we have to spend on it.

00:14:09.840 --> 00:14:17.440
Like, does that not almost make you drive your head into a wall, which is, you know, that we've just accepted that this has to be the case?

00:14:17.759 --> 00:14:18.000
Yeah.

00:14:18.000 --> 00:14:19.039
I mean it it does.

00:14:19.039 --> 00:14:24.159
Like when you think about it, it's why are you kind of settling for defeat, I guess.

00:14:24.159 --> 00:14:27.759
I mean, when you're Capital One, you're also, you know, making a ton.

00:14:27.759 --> 00:14:28.879
Yeah, yeah.

00:14:28.879 --> 00:14:29.600
They make a lot.

00:14:29.600 --> 00:14:38.480
And they also have like other processes in place that they're amazing at, you know, like recovering fraud, you know, and obviously they're an extremely profitable, extremely successful company.

00:14:38.480 --> 00:14:40.320
So they can they can get away with it.

00:14:40.320 --> 00:15:10.320
But I think the reality is is that, you know, I think that the current state of fraud, the current way fraud is committed is very much a reminder of kind of the, I don't want to say legacy, but just, you know, financial infrastructure, payment stack, the way that all these products have been built have been, you know, underlying like relatively similar for the past maybe like 30, 40 years, and we're starting to see more evolution in the fintech space around the underlying infrastructure, you know, around like crypto, around tokenization.

00:15:10.320 --> 00:15:24.159
And part of me wonders if like this concept of like, oh, fraud is gonna happen and we just have to like make the right budget for it and let it happen and learn from it, if that's like reminiscent of kind of, you know, the old guard of fintech infrastructure.

00:15:24.159 --> 00:15:26.159
Um, but it's a really compelling question.

00:15:26.159 --> 00:15:31.440
And I think, you know, both sides of the coin could potentially be true.

00:15:31.440 --> 00:15:35.679
Like you have to, you you know, you have to learn from fraud and it's gonna happen.

00:15:35.679 --> 00:15:45.519
But at the same time, like, you shouldn't settle for it, because when you do see it, like, I know when I see this happening at certain jobs, I'm like, you know, this could have been prevented.

00:15:45.519 --> 00:15:50.720
If we had the right systems in place, we could have identified this pocket of users, you know, allowed the good users to grow.

00:15:50.720 --> 00:15:53.440
And like, there's no reason we should just like settle for this coming in.

00:15:53.440 --> 00:15:54.559
So it's interesting.

00:15:54.559 --> 00:15:57.919
But I think that, again, I'm not so sure there's a right answer just yet.

00:15:58.480 --> 00:16:04.559
What's the most unique thing you've seen a fraudster or a group do to try to compromise the system?

00:16:04.799 --> 00:16:05.039
Yeah.

00:16:05.039 --> 00:16:08.559
I mean, this this was at a at a previous job, but um I've seen a lot.

00:16:08.559 --> 00:16:10.879
Like, I mean, you wouldn't be surprised.

00:16:10.879 --> 00:16:15.039
Other people would be surprised at how sophisticated these people can get.

00:16:15.039 --> 00:16:43.679
But like literally, we had something happen at a previous job a while ago where someone had some sort of, you know, machine learning, rapid automation system where they just spammed BIN numbers, like bank identification numbers, into cards, and literally was just guessing credit card numbers in rapid, like, sequential order to try to guess what card and expiration date combo was correct.

00:16:43.679 --> 00:16:51.360
Obviously, you know, when you think about the different permutations of a credit card number, an expiration date, like a security code, there's an enormous number of permutations.

00:16:51.360 --> 00:17:05.680
Like we had a fraudster once who literally had an automated system that was just guessing these, like, many thousands per second in rapid sequential order, just trying to get authorizations, to get transactions to go through a fraudy merchant.

00:17:05.680 --> 00:17:09.519
So I think just that story was— So were they successful?

00:17:09.519 --> 00:17:23.359
Partially. For the most part, not: like, a lot of them were declined, a lot of them didn't work, but they still were able to, like, you know— It probably goes to the P&L of that fraudster, where they're willing to spend a certain amount on compute to get a certain amount of transactions.

00:17:23.359 --> 00:17:24.000
Yeah, exactly.

00:17:24.000 --> 00:17:29.039
And like, they definitely weren't just doing this to us, you know, they were doing this probably to a lot of other companies.

00:17:29.039 --> 00:17:41.039
And I think it just goes to show, that was my first reminder of, like, you know, even fraudsters targeting smaller fintechs have these crazy capabilities, are very sophisticated, are leveraging, like, machine learning.

00:17:41.039 --> 00:17:54.559
And um, I think that, you know, was just a reminder for me of, like, how creative they can get and how, you know, when it's a fraudster's full-time job and they want to be, you know, state of the art, then they have a lot of capabilities.

00:17:55.680 --> 00:17:57.599
Machine learning there makes a lot of sense.

00:17:57.599 --> 00:18:00.160
Uh using it to guess these permutations.

00:18:00.160 --> 00:18:02.720
Very hot topic these days, artificial intelligence.

00:18:02.720 --> 00:18:02.960
Yeah.

00:18:02.960 --> 00:18:18.319
Another tool that's very good for fraudsters, and that now for the first time ever, you can fairly cheaply and quickly generate hundreds or thousands of compelling fake documents, whether they're driver's licenses, bank statements, utility bills, or nurture synthetic identities.

00:18:18.319 --> 00:18:25.279
What are you seeing on the other side of ways that you can leverage artificial intelligence in your seat to try to defeat fraud?

00:18:25.599 --> 00:18:27.119
Yeah, there's there's a lot of different ways.

00:18:27.119 --> 00:18:42.720
I will say, first of all, a lot of it right now is, you know, it's very, I don't want to say hit or miss, but I think there are a lot of really interesting conceptual ways in which we can prevent fraud.

00:18:42.720 --> 00:18:47.599
There are a few that we're seeing that show some immediate viability.

00:18:47.599 --> 00:19:12.160
I think one of them is actually, you know, as as we were talking about before the episode, you know, fraud traditionally has involved like some, you know, element of manual review, um, having like, you know, agents, fraud agents specifically, you know, actually going through and reviewing customers, reviewing documents, like looking into our systems, like to try to like look, oh, is this customer, is this activity they've done on our platform, is it fraud?

00:19:12.160 --> 00:19:13.359
Is it not fraud?

00:19:13.359 --> 00:19:17.359
Um, you know, they submitted this document to us as proof that they're not doing fraud.

00:19:17.359 --> 00:19:18.799
Is this document legit?

00:19:18.799 --> 00:19:32.160
Um, and uh companies all over the world are still using like manual agents to to go and you know, review back office documents and and conduct investigations and and leave an audit trail on on you know various customer activity.

00:19:32.160 --> 00:19:49.920
And, you know, we think that AI can, you know, do a lot of these reviews for us, like help us create a feedback loop between what our systems are seeing and what defenses we're building, make that feedback loop a lot more efficient, a lot cheaper, obviously a lot faster.

00:19:49.920 --> 00:19:58.559
And so that's one immediate area where we are kind of using it and we, you know, see this as being like incredibly valuable.

00:19:58.559 --> 00:20:02.480
Um, there's a lot of other really interesting potential use cases here too.

00:20:02.480 --> 00:20:09.279
I think going back to first party fraud, um, obviously I learned about it a lot from my time at Capital One working on the specific team.

00:20:09.279 --> 00:20:15.599
And it was kind of crazy that they had a whole business team specifically dedicated to that one type of fraud.

00:20:15.599 --> 00:20:36.640
Um, but like behavioral intelligence, behavioral analytics, like literally looking at, you know, cookies and the way that you click on the application and on, like, the web portal. So there are a lot of artificial intelligence capabilities around understanding just user behavior and, you know, putting in proactive defenses against that.

00:20:36.640 --> 00:20:40.079
So those are two ways in which we're seeing it, but I think there's a ton.

00:20:40.079 --> 00:20:44.079
Like obviously, as I mentioned before, fraud is a is a big data problem.

00:20:44.079 --> 00:20:46.720
I think another thing that comes to mind is data labeling.

00:20:46.720 --> 00:20:59.920
You need to like be able to actually label the fraud after it's happened in your systems to learn from it and take that data and have it feed machine learning models to, you know, get more proactive and get more sophisticated.

00:20:59.920 --> 00:21:08.720
So I think like, you know, using artificial intelligence to like help label data and like help, you know, make sure that our feedback loops like going into our machine learning models are accurate.

00:21:08.720 --> 00:21:12.000
So I think there's a ton of interesting potential in here.

00:21:12.000 --> 00:21:13.680
And we're really just at the frontier.

00:21:13.680 --> 00:21:20.000
I think, like, you know, it's just that a lot of the capabilities here are very nascent and very promising.

00:21:20.000 --> 00:21:24.400
And there's so much, I think, to be discovered in the space, and it's very exciting.

00:21:24.799 --> 00:21:28.880
We're very bullish on this first piece you mentioned about manual review.

00:21:28.880 --> 00:21:33.440
At the same time, the biggest question we hear is hallucinations.

00:21:33.440 --> 00:21:33.839
Yeah.

00:21:33.839 --> 00:21:36.640
Uh these are big decisions.

00:21:36.640 --> 00:21:42.000
And if you make the wrong call on a person, it could be a pretty catastrophic impact.

00:21:42.000 --> 00:21:42.319
Yeah.

00:21:42.319 --> 00:21:58.640
How do you think about building in protections there, or just more broadly, how do you think about getting to a level of confidence, whether for you or any partner banks you work with, given that even if you ask the LLMs what part they're wrong about, they won't know?

00:21:58.960 --> 00:21:59.200
Yeah, yeah.

00:21:59.200 --> 00:22:01.440
This is the hard part, obviously.

00:22:01.440 --> 00:22:03.440
I think it's one of the really difficult things.

00:22:03.440 --> 00:22:07.839
And, to be honest, like, I don't even know if we have a good answer yet. But, like, we don't yet either.

00:22:07.839 --> 00:22:08.960
Yeah, yeah, yeah, yeah, yeah.

00:22:08.960 --> 00:22:13.599
But like one of the things is just, you know, obviously how regulated fintech is.

00:22:13.599 --> 00:22:15.279
Like everything must be documented.

00:22:15.279 --> 00:22:19.039
Every decision you make must be, you know, auditable, traceable.

00:22:19.039 --> 00:22:26.079
And that, you know, obviously is a little bit difficult sometimes, especially in the nascent era of artificial intelligence.

00:22:26.079 --> 00:22:42.880
And I think the thing that I found, um, in two different jobs now, I've worked, you know, on problems where we needed to do something for a partner bank, or there was an audit from a regulator or a partner bank or whatever, and we needed to provide some documentation.

00:22:42.880 --> 00:22:53.839
And obviously, you know, I think partner banks, United States regulators, they don't necessarily always have the reputation of being so, like, technologically advanced.

00:22:53.839 --> 00:22:58.720
I think like the concept of artificial intelligence is sometimes like a little bit scary.

00:22:58.720 --> 00:23:14.720
And so I think there's always been a really delicate balance between, you know, trying to be forward thinking, trying to be as efficient and as accurate as possible, and, you know, being as technologically advanced, using artificial intelligence as much as possible.

00:23:14.720 --> 00:23:24.400
But at the same time, like, you know, we can't do anything that necessarily puts us at risk with like regulators, partner banks, et cetera.

00:23:24.400 --> 00:23:27.039
And I think we're really trying to find that balance.

00:23:27.039 --> 00:23:30.079
I was recently meeting with regulators about this.

00:23:30.160 --> 00:23:38.079
And yeah, one thing that came up is the idea that we can definitely put AI in a spotlight to say, well, what about this issue?

00:23:38.079 --> 00:23:42.559
On the flip side, as I'm sure you can speak about from different companies.

00:23:42.559 --> 00:23:43.119
Yeah.

00:23:43.119 --> 00:23:49.759
The flip side is, I'm guessing Cap One, Mercury, hundreds, thousands of different humans who do the review.

00:23:49.759 --> 00:23:52.160
You don't have consistency there either, right?

00:23:52.160 --> 00:23:52.559
Yeah.

00:23:52.559 --> 00:23:56.480
Could you maybe walk through, like, what does the paradigm look like today?

00:23:56.480 --> 00:24:06.480
And maybe, you know, we can talk for many hours about what could go wrong, but what's going wrong today that actually could go right if we use these tools correctly?

00:24:06.799 --> 00:24:10.240
Yeah, I think just, like, the consistency is one of them.

00:24:10.240 --> 00:24:32.640
And obviously, you know, I think the other thing is just, thinking from a business perspective, the sheer cost and the sheer lack of mobility you have sometimes when there are so many manual reviews. Like, I think that holds you back a little bit as a business.

00:24:32.880 --> 00:24:37.279
Because manual reviews translate to the bottom line in that you're not onboarding as many people or businesses.

00:24:37.599 --> 00:24:37.839
Yeah.

00:24:37.839 --> 00:24:46.000
Or, like, a lot of, you know, partner banks, a lot of regulators, they see the manual review as, like, a vote of confidence.

00:24:46.000 --> 00:24:52.480
Like, you have a sign of confidence in this customer; we want to see all these customers manually reviewed because we trust the eye of the human.

00:24:52.480 --> 00:24:56.720
However, like I think, you know, manual reviews are not always perfect.

00:24:56.720 --> 00:24:58.720
Like, so I don't know if that's true, number one.

00:24:58.720 --> 00:25:10.880
But number two, if you're in a place where you have to use your manual reviews to prove something to the regulator, prove something to the partner bank, like, that holds you back.

00:25:10.880 --> 00:25:11.759
It's not very fast.

00:25:11.759 --> 00:25:20.720
You have to like complete those before you can go, you know, launch new products, launch new business units, gain the trust of regulators, you know, like grow as a business, like do the things that you want to do.

00:25:20.720 --> 00:25:36.480
But when you have to use these BPOs, like business process outsourcing companies, to go and review your customers, review your products, it feels like you're living 20 years in the past, and obviously it's expensive and it's slow.

00:25:36.480 --> 00:25:41.279
And I think there's probably a future where you don't have to use those companies at all.

00:25:41.599 --> 00:25:46.240
Do you think about kind of a copilot approach, where maybe you have a 14-step review process?

00:25:46.240 --> 00:25:50.240
AI is doing 12, and you get to look at what it puts out and do the final two.

00:25:50.240 --> 00:25:56.160
Is there a world where then you can actually give to banks this notion of everybody who's manually reviewed?

00:25:56.400 --> 00:25:56.799
Yeah, yeah, yeah.

00:25:56.799 --> 00:25:58.160
I think it's that too.

00:25:58.160 --> 00:26:05.119
And I think, like, the copilot, you know, is very promising, and obviously the copilot is an efficiency gain.

00:26:05.119 --> 00:26:14.880
And, um, we've been thinking about using some manual review copilots as well, just to, you know, augment the review, like help with accuracy, help with efficiency, drive down cost, et cetera.

00:26:14.880 --> 00:26:22.640
And I think the way this goes is, I don't want to say that this will go to a completely autonomous, human-out-of-the-loop system.

00:26:22.640 --> 00:26:31.759
I think the human feedback loop is very important, but what I've seen at fintech companies so far is utilizing these, like, generalized outsourcing companies.

00:26:31.759 --> 00:26:33.440
And these are huge companies.

00:26:33.440 --> 00:26:36.799
This was a world I had no idea about before I entered, you know, fintech.

00:26:36.799 --> 00:26:46.319
And then I realized, you know, there's all these massive companies with armies of people offshore, and their whole job is to just plow through documents, plow through reviews, plow through a checklist.

00:26:46.319 --> 00:27:25.359
I think what this enables is you can move from that generalized model to a more focused, subject-matter-expert-specific model. Like, instead of having an army of 50 people who are all generalists, you can have, you know, four or five people who are specialists, who are trained in financial crimes, trained in finance, fluent in regulation, understand what to do, add real business value in addition to just completing the investigation, and equip them with, you know, a copilot to be able to be much faster and much more effective.

00:27:25.359 --> 00:27:26.160
And I think so.

00:27:26.160 --> 00:27:37.759
We'll probably, when it comes to manual reviews, move towards that model, and that'll probably help the business, you know, reduce costs and just make the general system a lot more effective.

00:27:38.640 --> 00:27:48.640
On a scale of one to ten, how confident would you be in an arbitrary decision made by the BPO army, versus, uh, an AI copilot?

00:27:48.960 --> 00:27:49.200
Wow.

00:27:49.200 --> 00:27:51.759
That's a very good question.

00:27:51.759 --> 00:28:04.799
Um, I might get in trouble for saying this, but I think, I don't know, the BPO is probably a three or four. And the AI copilot, I would generally trust.

00:28:04.799 --> 00:28:08.079
I wouldn't fully trust, but I'd probably put that around like a seven or eight.

00:28:08.400 --> 00:28:10.079
And the cost of the BPO is real.

00:28:10.240 --> 00:28:10.720
Yeah.

00:28:10.720 --> 00:28:12.960
And it's probably not the most enjoyable work.

00:28:12.960 --> 00:28:14.480
Yeah, no, definitely not.

00:28:14.480 --> 00:28:15.839
It's not the most enjoyable work.

00:28:15.839 --> 00:28:23.839
And obviously, you know, I think from the business side, when you think about it, it's expensive.

00:28:23.839 --> 00:28:35.839
And then there's also overhead: in addition to just paying for the agents, paying for the BPO, you have to hire a team to manage the BPO, and you have to train the BPO every time you do something new.

00:28:35.839 --> 00:28:40.000
There's switching costs from having them switch from one thing to another thing.

00:28:40.000 --> 00:28:45.279
Um, and then if you want to launch a new product, you have to retrain them on the new product.

00:28:45.279 --> 00:28:56.960
Um, so I think just from a cost optimization perspective, and then, you know, a broader question of efficiency more generally, it's not so ideal.

00:28:56.960 --> 00:29:08.799
Because obviously, you know, even at my past job at Mercury, when you're at a company that all of a sudden grows from maybe 150, 200 people to much bigger, the BPO kind of scales with it.

00:29:08.799 --> 00:29:09.119
Yeah.

00:29:09.119 --> 00:29:11.759
And then all of a sudden you have this like huge army of agents.

00:29:11.759 --> 00:29:15.599
Like can you talk about the switching costs there?

00:29:15.599 --> 00:29:16.319
Yeah, yeah.

00:29:16.319 --> 00:29:17.759
It's fairly sticky.

00:29:17.759 --> 00:29:19.920
And it gets stickier the bigger it gets.

00:29:19.920 --> 00:29:21.920
Um I guess you've trained them on your processes.

00:29:21.920 --> 00:29:28.400
Yeah, you've trained them on your processes, you've trained them to do things a certain way, think a certain way.

00:29:28.400 --> 00:29:34.319
And the reality of working in early stage or mid-stage tech and fintech is things move very fast.

00:29:34.319 --> 00:29:36.079
The the market moves very fast.

00:29:36.079 --> 00:29:46.319
You know, you have to consistently launch new products, and the bigger you get, the more products you launch; you start to kind of create a bundle around your core target customer.

00:29:46.319 --> 00:29:54.319
People forget that if you're a fintech company, you have to also train your agents on all these products.

00:29:54.319 --> 00:30:04.559
The reality is, every new fintech product that's launched, or a lot of new fintech products that are launched, come with their own set of back office processes that have to be done.

00:30:04.559 --> 00:30:11.440
So if you're launching a new product, for example, and it has a different level of KYC, you still have to do KYC.

00:30:11.440 --> 00:30:13.119
It's a regulatory obligation.

00:30:13.119 --> 00:30:21.119
And you are likely going to have some element of manual review required for that new onboarding that you have.

00:30:21.119 --> 00:30:24.079
And new agents have to be trained in a different way.

00:30:24.079 --> 00:30:28.160
They have to undo the mental model they were using, get trained on something new.

00:30:28.160 --> 00:30:28.559
Yep.

00:30:28.559 --> 00:30:30.240
And, you know, that takes time.

00:30:30.240 --> 00:30:36.240
It slows down your ability to roll out the product, you know, it might lead to inaccuracies, it might lead to a poor customer experience.

00:30:36.240 --> 00:30:52.480
And obviously, if you're growing, sometimes, instead of figuring out, oh, how can we use AI to not grow back office operational headcount, like, that takes time, that takes scoping, that takes resources.

00:30:52.480 --> 00:30:57.920
Sometimes the easiest answer is like, oh, let's just hire more agents, we'll deal with it, and then we'll cut back down later.

00:30:57.920 --> 00:31:00.160
Well, the cutting back down doesn't always happen.

00:31:00.160 --> 00:31:11.440
So I think, in theory, it's nice to think about a world where you don't have these switching costs, where you don't have this major item on your P&L growing over time.

00:31:11.440 --> 00:31:22.160
And it's just exciting to think about a time when we can be a little bit more autonomous and quicker moving, um, using artificial intelligence.

00:31:22.559 --> 00:31:25.200
We try to be balanced on the Risk and Reason podcast.

00:31:25.200 --> 00:31:27.680
How do you weigh the following two concerns?

00:31:27.680 --> 00:31:31.680
For BPOs, it's no secret, they're pretty international, overseas.

00:31:31.680 --> 00:31:36.799
So from a privacy perspective, you're sending data outside the US and then you're bringing it back to a bank.

00:31:36.799 --> 00:31:46.720
Flip side, for models, you're sending sensitive information to large language models that aren't supposed to train on it, uh, but the auditability there is a bit loose.

00:31:46.720 --> 00:31:50.000
Yeah, how do you think about both of those from a privacy and security perspective?

00:31:50.319 --> 00:31:51.359
Yeah, it's tough.

00:31:51.359 --> 00:31:57.759
I think I'm not like a huge expert in privacy and security pertaining to data.

00:31:57.759 --> 00:32:02.400
But what I do know is that the BPO model is very proven.

00:32:02.400 --> 00:32:08.960
These are very, very large companies, and they have ways to deal with data security.

00:32:08.960 --> 00:32:17.119
Um, from my understanding, the data security element of LLMs as it comes to auditability, you know, it's an ongoing thing.

00:32:17.119 --> 00:32:20.000
It's an ongoing discovery, like how to make that most effective.

00:32:20.000 --> 00:32:30.079
And I think the thing that we just consistently put front of mind, in all our jobs, is just, you know, making sure we have very clear decision making, very clear audit trails.

00:32:30.079 --> 00:32:47.680
And I think the reality is, right now for us, if we're trying to use artificial intelligence to automate some process, to make some area of our business a little bit more efficient, we're not at a place where we can have an LLM be fully autonomous.

00:32:47.680 --> 00:32:58.880
It operates within this kind of broader box of some person managing the process, and the LLM is just one part of that, partially because of the auditability piece.

00:32:58.880 --> 00:33:01.759
So I think like I'm sure that problem will be figured out.

00:33:01.759 --> 00:33:03.279
I'm sure that problem will be solved.

00:33:03.279 --> 00:33:10.400
Yeah, but it's hard for me to say that that's at a point where, you know, it can be fully solved right now, if that makes sense.

00:33:10.799 --> 00:33:17.039
As we get to the end here, what would your advice be to someone taking your seat at a different role?

00:33:17.039 --> 00:33:26.559
And by that I mean if I'm the founding uh BizOps kind of fraud person at a fast growing startup, yeah, what would you do on day one?

00:33:26.559 --> 00:33:36.480
And if I'm joining a large public FI to establish risk systems, but still with a couple hundred million on the balance sheet, to do fraud, yeah.

00:33:36.480 --> 00:33:37.279
What would I do?

00:33:37.519 --> 00:33:38.720
Yeah, it's a great question.

00:33:38.720 --> 00:33:41.759
For the for the first part, just talk to people.

00:33:41.759 --> 00:33:48.079
Like, it could be a lonely job. If you try to solve it alone, if you try to fight fraud alone, you're probably gonna fail.

00:33:48.079 --> 00:33:51.440
Um like definitely like leverage your network.

00:33:51.440 --> 00:33:54.720
Fraudsters often, you know, coordinate across multiple companies.

00:33:54.720 --> 00:34:11.039
Obviously, you know, there's competition, businesses are competing against each other, but at the same time, I think businesses can unite around the shared common goal of wanting to fight fraud, and make sure that it's not cannibalizing anyone's ability to operate.

00:34:11.039 --> 00:34:21.280
So I think definitely talk to people, share knowledge, um, understand latest trends in the industry, understand what other people are seeing when it comes to fraudsters.

00:34:21.280 --> 00:34:27.360
Because if a fraudster has hit another company, but it hasn't hit you yet, it's probably gonna come soon.

00:34:27.360 --> 00:34:31.280
So I think just if you're standing something up, that that's my advice.

00:34:31.280 --> 00:34:38.480
And also, just don't think that because a major attack hasn't happened to you, it won't.

00:34:38.480 --> 00:34:40.239
Like it probably will eventually.

00:34:40.239 --> 00:34:41.840
It's a nature of being in fintech.

00:34:41.840 --> 00:34:53.679
In my career, I think almost all exciting product launches have come with, you know, fraud, particularly credit products, prepaid products.

00:34:53.679 --> 00:34:56.320
It all it all happens, so it will come.

00:34:56.320 --> 00:35:16.320
And yeah, if you're joining a bigger company, I think obviously you're afforded kind of the, you know, safety net of having a company that's doing really well, probably making a lot of money, where, you know, every incremental $10,000, $20,000 in fraud loss isn't existential.

00:35:16.320 --> 00:35:22.800
But take advantage of that, you know, cushion to really understand how to build a state of the art system.

00:35:22.800 --> 00:35:34.960
Like, where can you invest your money into building the best, most frontier, forward-facing systems, and understand what it looks like to be really sophisticated at scale.

00:35:34.960 --> 00:35:37.679
But again, the same principle applies.

00:35:37.679 --> 00:35:40.079
Like it will happen, like it will continue to scale.

00:35:40.079 --> 00:35:50.719
So if you're working at a really large company, then just, you know, take the opportunity you have there to think about: how can I build something really, truly sophisticated and state of the art?

00:35:50.719 --> 00:35:59.920
Because then, if you want to go from there and become, like, the head of fraud or head of risk at a smaller company, you can take the principles and start to build your own system from the ground up.

00:36:00.400 --> 00:36:05.039
Final question: any predictions for 2026 when it comes to fraud?

00:36:05.760 --> 00:36:07.119
It's a good question.

00:36:07.119 --> 00:36:18.639
Um, just as we're talking about AI being used to help fight fraud, we'll see AI, you know, potentially starting to commit fraud.

00:36:18.639 --> 00:36:30.559
We've already, you know, talked with some vendors about the issue of people using deepfakes to get access to bank accounts, like AI-generated people trying to get access to bank accounts.

00:36:30.559 --> 00:36:49.360
We haven't seen too much of that yet, but if I had to guess in some way or some form, like we'll probably have these fully AI created synthetic identities that are just, you know, completely autonomously signing up for bank accounts, you know, committing fraud, not tied to a person because it's all AI.

00:36:49.360 --> 00:36:49.679
Yeah.

00:36:49.679 --> 00:36:54.480
Maybe that's beyond 2026, like a little bit further, but I think we'll start to see that soon.

00:36:54.480 --> 00:36:57.920
And I don't know how we're gonna stop that, but we'll find a way.

00:36:57.920 --> 00:36:59.039
There's work to do.

00:36:59.039 --> 00:37:00.400
Fighting AI with AI.

00:37:00.400 --> 00:37:03.199
So it becomes a little meta, but very interesting.

00:37:03.519 --> 00:37:04.079
There's work to do.

00:37:04.079 --> 00:37:05.679
We appreciate your role in stopping it.

00:37:05.679 --> 00:37:07.119
And thanks so much for coming on the podcast.

00:37:07.440 --> 00:37:08.079
For trying, yeah.

00:37:08.079 --> 00:37:08.800
Thank you for having me.