
Inside the NashTech and Insurtech UK agentic AI in insurance workshop


On 1 October 2025, NashTech and Insurtech UK reunited the agentic AI working group for an in-person deep dive, with coffee, croissants, and open debate included. 

After an hour of networking over breakfast, Melissa Collett (CEO, Insurtech UK) and James Loveridge (Client Director, NashTech) set the tone for the day as a practical, outcomes-driven, and honest look at both the possibilities and pitfalls. 

The AI legal and regulatory landscape  

First up, Charlotte Gregory (Capital Law) framed the day with an overview of the legal and regulatory landscape around agentic AI in insurance, touching upon the UK's evolving AI principles, data and IP obligations, employment policy implications, and the need for robust, auditable compliance frameworks.

Charlotte said, "You can't simply sack AI when errors occur, so accountability must be assigned." 

Panel session: Implementation and risk associated with agentic AI 

Moderated by Chris Weston (Senior Technology Consultant, NashTech), a panel featuring Charlotte Gregory, Usha Badrintha (CDO at Mosaic Insurance), Coco Co (Head of Finance Transformation at Hastings), and Josh Hart (Co-founder & CPTO at Yu Life) got straight into the realities of agentic AI adoption, with discussion around: 

  • Cost vs compliance vs customer outcomes: how to balance them without losing momentum. 
  • Human oversight: non-negotiable. Know what tasks the AI is supporting, why it’s making decisions, and what happens when it’s wrong. 
  • Operating protocols: the importance of establishing clear playbooks for errors and escalation before you scale. 
  • Starting simple: prioritising predictable-outcome use cases with a clear right or wrong answer (finance and controls were popular starting points). 

One speaker summed it up neatly: “We have a duty of care to all stakeholders, so we implement data protection protocols before proof of concept.” 

Technology strategies: picking the right levers 

Thomas Pointer (Senior Technology Consultant, NashTech) walked delegates through agentic AI implementation options, weighing the advantages and disadvantages of prompt engineering, RAG, and fine-tuning, with an in-depth look at the trade-offs between accuracy, speed, and domain expertise. The key message: the technique you choose shapes everything downstream, so align it with your risk profile and the specific problem you're aiming to solve. 
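As a rough illustration of the retrieval-augmented option discussed in the session, the sketch below shows the basic retrieve-then-prompt pattern in Python. It is a minimal, self-contained example: the document snippets, the keyword-overlap retriever, and the `call_llm` stub are hypothetical placeholders, not part of any NashTech or vendor tooling.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# Everything here is illustrative: the snippets, the retriever,
# and call_llm() are placeholders, not a production pipeline.

from typing import List

# Hypothetical knowledge base of policy/underwriting snippets.
DOCUMENTS = [
    "Commercial property policies exclude flood damage unless the flood extension is purchased.",
    "Claims above 50,000 GBP must be escalated to a senior underwriter for review.",
    "Renewal quotes should reference the prior year's loss ratio and any open claims.",
]


def retrieve(query: str, docs: List[str], top_k: int = 2) -> List[str]:
    """Rank documents by naive keyword overlap with the query."""
    query_terms = set(query.lower().split())
    scored = [(len(query_terms & set(d.lower().split())), d) for d in docs]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored[:top_k] if score > 0]


def build_prompt(query: str, context: List[str]) -> str:
    """Combine retrieved context with the user question into one prompt."""
    context_block = "\n".join(f"- {c}" for c in context)
    return (
        "Answer using only the context below. If the context is insufficient, say so.\n"
        f"Context:\n{context_block}\n\nQuestion: {query}"
    )


def call_llm(prompt: str) -> str:
    """Stand-in for a real model call; swap in your provider's API."""
    return f"[model response to prompt of {len(prompt)} characters]"


if __name__ == "__main__":
    question = "When does a claim need senior underwriter review?"
    context = retrieve(question, DOCUMENTS)
    print(call_llm(build_prompt(question, context)))
```

The other two levers swap out the retrieval step: prompt engineering relies on instructions and examples alone, while fine-tuning bakes domain knowledge into the model's weights, trading flexibility for accuracy on narrower tasks.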

Hands-on workshop: use cases that ship 

To close, Robert Stenzel (Senior Consultant, NashTech) led an interactive session where teams built out practical use cases, mapping benefit levers, cost levers, and business cases. A recurring theme: insurance is fundamentally about empathy, and AI needs clear guardrails when claims are sensitive and human. 

Group highlights included: 

Use case one: Underwriting data 

  • Wrangling structured and unstructured data into usable formats. 
  • Using historical data points to prioritise conversion-likely opportunities. 
  • Target outcome: reduce processing times from 10 days to ~3, with human checks to prevent model drift and rework. 

Use case two: Fraud escalation to MGAs 

  • Automating report generation and narrative assembly to cut admin ping-pong between handlers, admins, underwriters, and agents. 
  • Benefits: faster approvals and fewer hand-offs. 
  • Watch-outs: reviewer workload, data gaps, retraining, and auditing responsibilities. 

Use case three: Meeting teams where they are 

  • Meet underwriting where the maturity is today; educate leadership on ROI and responsible implementation. 
  • Shift the focus from commodity productivity to decisioning, compliance, risk management, and sanctions. 
  • Personas emerging as a useful design tool. 

Use case four: Metrics that matter 

  • Optimising book size, agent management, and renewal rates. 
  • Reducing repetitive interactions between brokers and underwriters; “trust but verify” on broker inputs. 
  • Faster customer response through chatbots and call agents, paired with human oversight. 

Overall, the event gave a clear, cross-functional view of how to introduce agentic AI safely: with legal guardrails, focused use cases, and measurable outcomes. Plus, a practical community of peers who are sharing playbooks rather than reinventing the wheel in silos. 

Want in on the next session? 

We’d love that. Here’s how to get involved: 

  1. Register your interest in joining the next agentic AI working group by emailing james.loveridge@nashtechglobal.com  
  2. Connect with NashTech for a discovery chat on your use cases (underwriting, fraud, compliance, and customer service are great starters). https://www.nashtechglobal.com/contact-us/
