On 1 October 2025, NashTech and Insurtech UK reunited the agentic AI working group for an in-person deep dive, with coffee, croissants, and open debate included.
After an hour of networking over breakfast, Melissa Collett (CEO, Insurtech UK) and James Loveridge (Client Director, NashTech) set the tone for the day as a practical, outcomes-driven, and honest look at both the possibilities and pitfalls.
First up, Charlotte Gregory (Capital Law) framed the day with an overview of the legal and regulatory landscape around agentic AI in insurance, touching upon the UK’s evolving AI principles, data and IP obligations, employment policy implications, and the need for robust, auditable compliance frameworks.
Charlotte put it bluntly: “You can’t simply sack AI when errors occur, so accountability must be assigned.”
Moderated by Chris Weston (Senior Technology Consultant, NashTech), a panel featuring Charlotte Gregory, Usha Badrintha (CDO at Mosaic Insurance), Coco Co (Head of Finance Transformation at Hastings), and Josh Hart (Co-founder & CPTO at Yu Life) got straight into the realities of agentic AI adoption.
One speaker summed it up neatly: “We have a duty of care to all stakeholders, so we implement data protection protocols before proof of concept.”
Thomas Pointer (Senior Technology Consultant, NashTech) explained agentic AI implementation options, guiding delegates through the advantages and disadvantages of prompt engineering, RAG, and fine-tuning, with an in-depth look at the trade-offs between accuracy, speed, and domain expertise. The key message: the technology you choose shapes everything downstream, so align it with your risk profile and the specific problem you’re aiming to solve.
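To make that distinction concrete, here is a minimal, hypothetical Python sketch (not from the session, and not a NashTech implementation) showing where domain knowledge lives under each option: in handcrafted instructions for prompt engineering, in retrieved policy text for RAG, or in the model weights for fine-tuning. All names in it (POLICY_SNIPPETS, the build_* functions) are illustrative assumptions.

```python
# Illustrative sketch only: how each implementation option changes the prompt an
# agent sends to a model. The snippets and function names are hypothetical.

POLICY_SNIPPETS = {
    "flood": "Flood cover applies only where the property is the policyholder's main residence.",
    "subsidence": "Subsidence claims require a structural engineer's report within 30 days.",
}


def build_prompt_engineering(question: str) -> str:
    """Prompt engineering: all domain guidance lives in handcrafted instructions."""
    return (
        "You are an insurance claims assistant. Answer cautiously, cite the policy "
        "wording you rely on, and escalate to a human handler if unsure.\n\n"
        f"Question: {question}"
    )


def build_rag_prompt(question: str) -> str:
    """RAG: retrieve relevant policy wording first, then ground the prompt in it."""
    # Naive keyword matching stands in for a vector store in this sketch.
    retrieved = [text for key, text in POLICY_SNIPPETS.items() if key in question.lower()]
    context = "\n".join(retrieved) or "No matching policy wording found; escalate to a human."
    return (
        "Use only the policy wording below when answering.\n\n"
        f"Policy wording:\n{context}\n\nQuestion: {question}"
    )


# Fine-tuning has no prompt-side trick: domain expertise is baked into the model
# weights during training, so the runtime prompt can stay short, but updating that
# knowledge is slower and costlier than editing instructions or a document store.

if __name__ == "__main__":
    q = "Is flood damage covered on a rental property?"
    print(build_prompt_engineering(q))
    print("---")
    print(build_rag_prompt(q))
```

Even at this toy scale, the trade-off Thomas described is visible: prompt engineering is fast to change but carries no grounding, RAG trades some latency for auditable, source-backed answers, and fine-tuning moves the expertise out of the prompt entirely.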
To close, Robert Stenzel (Senior Consultant, NashTech) led an interactive session where teams built out practical use cases, mapping benefit levers, cost levers, and business cases. A recurring theme: insurance is fundamentally about empathy, and AI needs clear guardrails when claims are sensitive and human.
Group highlights include:
Use case one: Underwriting data
Use case two: Fraud escalation to MGAs
Use case three: Adoption
Use case four: Metrics that matter
Overall, the event gave a clear, cross-functional view of how to introduce agentic AI safely: with legal guardrails, focused use cases, and measurable outcomes. It also reinforced a practical community of peers sharing playbooks rather than reinventing the wheel in silos.
We’d love you to be part of it. Here’s how to get involved: