Insights

Roundtable insights: The most useful agentic AI use cases in insurance right now

Written by Jerrie Craig | Sep 26, 2025 8:48:55 AM

 

For the second time this year, Insurtech UK, facilitated by NashTech, brought together 37 insurance leaders for a virtual roundtable with a practical mission: share lessons learned, surface blockers, and identify opportunities for agentic AI use cases in insurance. Introduced by Melissa Collett of Insurtech UK and led by NashTech’s James Loveridge, the discussion focused on where agentic AI works today and how to deploy it safely in a highly regulated sector that can’t afford to “move fast and break things”. 

What follows is an executive summary of the key ideas, tensions, and takeaways for businesses considering agentic AI use cases in insurance.  

Agentic AI in insurance: Microsoft’s playbook for building a “frontier firm” 

The roundtable delegation was introduced first to Dan Henry, Strategic Account CTO for Insurance and Investments at Microsoft. Dan outlined a pragmatic success framework: the frontier firm, which Microsoft describes as “a next-generation insurer that blends human judgement with AI agents so teams can scale faster, work smarter, and unlock new value.” The focus is on on-demand intelligence from hybrid human–agent teams, with AI doing the legwork and humans staying accountable for outcomes.  

The idea is that frontier firms will create use cases for agentic AI in workflows that deliver real value, such as:  
  1. Enriching employee experiences. 
  2. Reinventing customer engagement. 
  3. Reshaping business processes. 
  4. Bending the curve on innovation.  
A simple maturity path helps firms get started: 


To become a frontier firm, Dan explained, many organisations start with AI assistants that help people work better and faster. Next, agents join teams as “digital colleagues”, taking on human-led tasks within agent-operated workflows: humans set the direction; agents run the process and check in when needed. This often spans multiple collaborating agents, using reasoning models and communicating with each other to complete the task, and escalating when help is required. 
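The escalation pattern described above can be sketched in a few lines. This is a minimal illustration, not any vendor's framework: the names (`run_step`, `CONFIDENCE_FLOOR`) and the fixed confidence score are invented placeholders for a real reasoning-model call and a real review queue.

```python
from dataclasses import dataclass

CONFIDENCE_FLOOR = 0.8  # below this, the agent checks in with a human


@dataclass
class StepResult:
    output: str
    confidence: float


def run_step(task: str) -> StepResult:
    # Stand-in for a call to a reasoning model.
    return StepResult(output=f"processed: {task}", confidence=0.95)


def human_review(result: StepResult) -> str:
    # Placeholder for a real review queue; here the reviewer simply approves.
    return result.output


def run_workflow(tasks: list[str]) -> list[str]:
    """Humans set direction (the task list); agents run each step and
    escalate to a person whenever confidence drops below the floor."""
    outcomes = []
    for task in tasks:
        result = run_step(task)
        if result.confidence < CONFIDENCE_FLOOR:
            outcomes.append(human_review(result))  # escalate to a human
        else:
            outcomes.append(result.output)  # agent proceeds autonomously
    return outcomes
```

The point of the sketch is the control flow, not the scoring: the agent owns the routine path, and the human owns every case the agent is unsure about.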

Michael McLaughlin of Open Dialog AI asked, “Do you ever foresee a phase beyond human in the loop where you can safely trust AI, and what would need to be true for that to be the case?”.  

Dan’s answer was honest, “Honestly, we haven’t considered that stage; there are many unknowns right now. Agentic agents only came out in 2025, so it's too risky at this stage to leave them to their own devices. That might change, but right now, human-in-the-loop isn’t a compliance checkbox; it’s a philosophical stance about accountability. When things hallucinate, you are the person using that tool… You are still responsible.”  

That caution isn’t just cultural. It’s a regulatory reality. One delegate cut to the chase: “Automated decision making in GDPR becomes very relevant as you introduce ever more autonomous agents. In sectors governed by the UK Senior Managers and Certification Regime (SM&CR), you’d be very brave to relinquish control any time soon.” 

Kimberley Miles, Head of AI Governance at Howden Group, added that moving beyond human-in-the-loop “would be a challenge with the evolving regulatory landscape, not just AI regulations but data protection obligations also.”  

Use cases for frontier firms 

Frontier firms across the insurance sector are looking to build agents to improve performance and efficiency while reducing costs and risks.  

From simple agentic AI use cases in general insurance, such as:  

  1. Claim status agents: Show the latest updates, repair bookings, and payments on a specific claim or claims. 
  2. Pricing and rate change agent: Retrieve the last 12 months of filed rate changes by postcode and loss cost drivers. 

To more advanced agentic AI use cases: 

  1. FNOL triage agent: Ingest crash photos and telematics, validate the policyholder, pre-classify severity, and open a claim with supplier routing. 
  2. Quote assist agent: Pre-fill a motor quote from a driving licence image and prior policy data; surface key risk factors and required statements. 
  3. Weather surge readiness agent: Monitor severe-weather alerts, predict surge by area, pre-book adjusters and repairers, and push customer guidance. 

Agentic AI use cases that land 

Across personal, commercial, and specialty lines, the group converged on a simple rule: start where the output is verifiable and the work is document-heavy. 

  1. Claims and processing stand out. As Dan noted, “Processing of information highlights the biggest opportunity for change.” Think first notice of loss (FNOL) triage, document ingestion, fraud flags, and subrogation prep: flows where an agent proposes a decision with evidence and a human confirms it. 
  2. Commercial underwriting operations are ripe: extract structured data from submissions, reconcile with risk appetite and pricing rules, and surface gaps for underwriters. 
  3. Back-office finance is standardised and auditable, ideal for scale. 

The “verifiable output” lens matters. As one delegate put it, when an agent flags a discrepancy and a human reviews it, “there’s not going to be a negative outcome of false positives… and when it has a true positive… that’s clearly an enhancement to an existing human-based fallible process.” 
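The asymmetry the delegate describes can be made concrete. In the toy sketch below (the record shape and checking rule are assumptions for illustration), the agent's only power is to flag records for review, so a false positive costs a reviewer a quick look, while a true positive catches an error a purely manual process might miss.

```python
def agent_flag_discrepancies(records: list[dict]) -> list[dict]:
    """Agent pass: flag records whose claimed total doesn't match the
    itemised sum. The agent proposes; it never acts on its own."""
    return [r for r in records if sum(r["items"]) != r["total"]]


def human_confirm(flagged: list[dict]) -> list[dict]:
    # Stand-in for a reviewer workflow; here every flag is upheld.
    return [dict(r, confirmed=True) for r in flagged]


records = [
    {"id": "C-1", "items": [100, 50], "total": 150},  # consistent
    {"id": "C-2", "items": [200, 80], "total": 260},  # discrepancy
]
confirmed = human_confirm(agent_flag_discrepancies(records))
```

Because the output is verifiable (the sum either matches or it doesn't), a reviewer can check any flag in seconds, which is exactly what makes this class of use case safe to start with.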

Hastings’ journey: starting in finance to build trust and scale 

Many companies are experimenting with using agentic AI to reimagine an entire end-to-end experience. Some insurance companies are going as far as looking at how they can completely reinvent themselves using agentic AI. But when ROI might not yet be realised, what’s the best way of getting investment and buy-in, and what does the journey look like?  

Coralyn Co (Coco), Head of Finance Transformation, shared how Hastings is building a world-class finance function as part of its wider digital transformation. With budgets usually reserved for front-end experiences, back-office functions are often at the back of the queue for investment. Hastings aims to change that by starting with agentic AI in finance, where recurring processes and decisions based on standard operating procedures suit human-in-the-loop automation.  

Take accounts payable, for example. Agents can pick up invoices from emails, validate details, route them for approval, post them to the ERP and, on the due date, prepare items for payment, which humans can then review for exceptions.  

Coco explained, “We’re embarking on an ERP upgrade, so Accounts Payable is just not ready for agentic AI yet. Rather than pause, we asked ourselves, what can we do now to build trust in agentic AI to deliver a successful proof of concept? So, we went back to the drawing board and identified several possible use cases. The team landed on a risk and loss analysis problem: automating categorisation of the type of peril. We chose this because agentic AI can perceive, reason, act, and then learn from consistent data, and also because misclassification creates downstream inconsistency and manual rework, so this would remove that.” 
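The peril-categorisation pattern Coco describes can be sketched with a stub classifier. The category list and keyword matching below are invented placeholders for a real model; the part that matters is the escalation rule, which keeps low-confidence guesses out of downstream data rather than letting misclassification create rework.

```python
PERILS = ("flood", "fire", "theft", "storm")


def categorise_peril(description: str) -> tuple[str, float]:
    """Return (category, confidence). A real system would call a model
    here; low confidence routes the case to a human instead of guessing."""
    text = description.lower()
    for peril in PERILS:
        if peril in text:
            return peril, 0.9
    return "unknown", 0.0  # nothing matched: escalate rather than guess
```

Tracking how often the function returns "unknown" (or disagrees with reviewers) is one simple way to measure the performance, and surface the hallucinations, that the team plans to monitor.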

The approach is refreshingly disciplined: “Start by writing ideas down, otherwise they stay as ideas and never get actioned.” The team documents the problem statement, objectives, workflow (human vs agent), and how to track performance, including how to detect hallucinations in the system. “We’ll be engaging our governance committee to scrutinise what the agent does and why.” 

Getting executive sponsorship for agentic AI use cases, without the ROI to back it up, can be challenging. But mission matters. Hastings has a vision to modernise finance and scale without simultaneously scaling costs. 

Business cases can be framed around vision, revenue, cost, and unique value. As Coco put it: “Business cases for agentic AI projects have to align with the company vision. Focus on scalability, i.e., increasing transactions and products to outrun the competition. All propositions for projects should be centred around value creation and better and faster insights. More automation and more data points enable better data-led decisions.” That’s language that all CFOs will understand. 

Explainability is central. Coco plans to evidence decision logic, show how historical patterns yield the framework, track performance and surface anomalies, and use real-time dashboards for both financials and tooling performance. 

Dan added, “whilst it comes down to testing and observability, businesses also need to start shifting to the term ‘explainability’, actually getting the AI model itself to explain how it came to that decision, with citations and references to source documents so reviewers can see exactly why an agent recommended an action.”  
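One way to make "citations by default" concrete is to require evidence structurally, so a recommendation without sources is not reviewable at all. The schema below is an assumption for illustration, not a specific vendor format.

```python
from dataclasses import dataclass


@dataclass
class Citation:
    document: str
    excerpt: str


@dataclass
class Recommendation:
    action: str
    rationale: str
    citations: list[Citation]

    def is_reviewable(self) -> bool:
        # Enforce "citations by default": no evidence, no recommendation.
        return len(self.citations) > 0


rec = Recommendation(
    action="approve_claim",
    rationale="Reported damage matches a peril covered by the policy.",
    citations=[Citation("policy_schedule.pdf", "Section 4.2: storm damage covered")],
)
```

Carrying the rationale and source excerpts alongside the action is what lets a reviewer see exactly why the agent recommended it, which is the explainability shift Dan describes.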

What to do next 

  1. Prioritise verifiable outputs. Start with claim triage, AP matching, reconciliation, policy data extraction, and MI production. 
  2. Write down the logic. Use Coco’s template: problem, objectives, workflow, and performance tracking. 
  3. Build in explainability. Require citations and rationale by default. 
  4. Treat data readiness as a product. Fund quality, lineage, and access controls. 
  5. Socialise a decision framework. Define risk thresholds, control points, and escalation rules. 
  6. Use the FCA sandbox early. Socialise proposals before costs and opinions harden. 
  7. Sell the right business case. People and data drive cost; benefits come from scalable capacity and faster insight. Remove risk barriers upfront. 

The punchline 

Don’t wait for perfect. As Dan Henry warned, firms moving fastest “recognise that if they don’t do anything they are going to fall behind.” Start small, prove it, write it down, then scale what works. 

Ready to turn talk into traction? Join the Insurtech UK × NashTech community and help shape the next wave of agentic AI use cases. Or catch up on roundtable #1 to get the full story. 

Join the community | Read roundtable #1