Sofia Kung

Two questions every fraud team is quietly asking about AI

Just presented at ACFE Singapore 2026 on leveraging AI for fraud detection.

The room was full of fraud practitioners — investigators, auditors, risk managers — who understand the threat deeply but are still figuring out how to operationalise AI. That gap is real.

Two questions came up that I think every fraud team is quietly asking:

1. Which LLM is best for fraud detection?

The right LLM depends on your specific fraud use case, your data, and your constraints.

The only way to know is to run evals. Define what good looks like for your fraud problem — accuracy on ambiguous cases, reasoning quality, latency, cost — then test models against your actual data. The model that wins your eval is the right model for you.
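A minimal sketch of what "run evals" can mean in practice: score candidate models on a labeled set of your own cases and let the numbers pick the winner. Everything here is illustrative — the case descriptions, the stub `model_a`/`model_b` functions standing in for real LLM calls, and the single accuracy metric (a real eval would also track reasoning quality, latency, and cost).

```python
# Hypothetical eval-harness sketch. The "models" are stand-ins for
# real LLM calls; the labeled cases are invented for illustration.

# Labeled eval set: (case description, ground-truth label)
eval_set = [
    ("Refund requested 3 minutes after purchase, new account", "fraud"),
    ("Repeat customer, address matches card, normal basket", "legit"),
    ("Gift cards bought with 5 different cards in one session", "fraud"),
]

def model_a(case: str) -> str:
    # Stand-in for one LLM: flags gift-card and refund patterns
    text = case.lower()
    return "fraud" if "gift card" in text or "refund" in text else "legit"

def model_b(case: str) -> str:
    # Stand-in for a second LLM: flags only new-account activity
    return "fraud" if "new account" in case.lower() else "legit"

def accuracy(model, cases) -> float:
    # Fraction of cases where the model's label matches ground truth
    hits = sum(model(desc) == label for desc, label in cases)
    return hits / len(cases)

candidates = {"model_a": model_a, "model_b": model_b}
scores = {name: accuracy(fn, eval_set) for name, fn in candidates.items()}
winner = max(scores, key=scores.get)
```

Swap the stubs for real API calls and the toy cases for a few hundred of your hardest historical cases, and the same loop tells you which model earns its place in your pipeline.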

2. Can LLMs catch fraud they’ve never seen before?

Think about it this way — if fraudsters are already using LLMs to brainstorm new attack vectors, we as defenders can use the same capability to ask: where are our vulnerabilities before they find them?

LLMs have already absorbed a broad base of fraud knowledge from their training data. Feed in your fraud taxonomy — your product features, your user flows, your existing controls — and ask the LLM to map where the gaps are. It can surface attack patterns your team hasn’t encountered yet, because it’s reasoning from a far wider base of fraud knowledge than any single analyst or team has.

The same tool fraudsters use to plan attacks, you can use to stress test your defences. That’s the defender’s advantage — if you use it well.
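As a concrete sketch of "feed in your taxonomy": assemble your product features, user flows, and existing controls into a structured prompt and ask the model where the gaps are. The taxonomy fields, example values, and prompt wording below are all hypothetical, not a prescribed format.

```python
# Hypothetical gap-mapping prompt builder. All field names and
# example values are illustrative, not a real system's taxonomy.

taxonomy = {
    "product_features": ["instant payouts", "referral bonuses"],
    "user_flows": ["signup", "KYC", "withdrawal"],
    "existing_controls": ["velocity limits", "device fingerprinting"],
}

def build_gap_prompt(taxonomy: dict) -> str:
    # Render each taxonomy section as a bullet, then ask the LLM
    # to red-team the gaps between features and controls.
    lines = ["You are a fraud red-teamer. Given this system:"]
    for section, items in taxonomy.items():
        lines.append(f"- {section.replace('_', ' ')}: {', '.join(items)}")
    lines.append(
        "List attack patterns our existing controls would miss, "
        "ranked by likelihood."
    )
    return "\n".join(lines)

prompt = build_gap_prompt(taxonomy)
```

The point is less the prompt itself than the discipline: your taxonomy becomes a reusable input you can re-run every time a feature, flow, or control changes.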


The gap between how well fraudsters are using AI and how well defenders are using it is real.

Closing that gap is what I’m focused on.

The deck

Pitch deck ↗