AI: predictability, regulation and ethics
At the 2026 NashTech Connect Conference, NashTech’s Senior Technology Consultant Chris Weston and Tia Cheang, a globally recognised leader in data, AI and digital transformation, delivered a breakout session on AI predictability, regulation and ethics.
These topics are shaping how organisations build and deploy AI right now, and each theme could have filled a full-day workshop. However, the session focused on practical clarity: what’s changing, where the risks sit and what business leaders should be thinking about before scaling AI any further.
If you’re building, buying or embedding AI into products and operations, this is where the real conversation starts.
Predictability: deterministic vs generative AI
The session opened with a distinction that often gets blurred.
Chris said, “Ask a calculator what two plus two is, and you get four, every single time. That’s deterministic computing: rule-based, consistent and predictable. Ask generative AI to write a marketing headline and you’ll get a different result each time. That’s because it’s non-deterministic.” And that variability isn’t a flaw; it’s a feature.
Generative AI is designed to be creative. It produces novel outputs. But that same creativity becomes risky when your use case requires:
- Auditability
- Consistency
- Regulatory compliance
- Legal defensibility
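The deterministic/non-deterministic distinction can be sketched in a few lines. This is a hypothetical illustration, not code from the session: `add` is a stand-in for rule-based computing, and `generate_headline` mimics a generative model by sampling from templates.

```python
import random

def add(a, b):
    # Deterministic: the same inputs always produce the same output.
    return a + b

def generate_headline(product, seed=None):
    # Stand-in for a generative model: sampling makes the output vary
    # from run to run unless a seed is fixed.
    rng = random.Random(seed)
    templates = [
        f"Discover {product} today",
        f"Why everyone is talking about {product}",
        f"{product}: built for what comes next",
    ]
    return rng.choice(templates)
```

Calling `add(2, 2)` returns 4 every time; calling `generate_headline("Acme Cloud")` can return a different headline on each run, which is exactly what makes auditability and consistency harder to guarantee.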
In regulated environments, unpredictability quickly becomes a liability. So, organisations need a hybrid ‘deterministic and generative’ model.
The hybrid model that most organisations are adopting
Rather than choosing between creativity and control, many organisations are building hybrid systems:
- A deterministic “wrapper” or guardrail framework
- A generative engine operating within defined boundaries
The guardrails determine what must happen, what must not happen and what needs to be checked. The generative layer produces ideas, language or analysis inside those limits.
When designed well, this approach balances innovation with control. When designed poorly, it either increases exposure to faulty outputs or constrains the model so tightly that it isn’t worth the compute.
Chris’s message here was that the architecture around generative AI matters as much as the model itself.
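A minimal sketch of the hybrid pattern, assuming a hypothetical policy list and a pluggable `generate_fn`: the deterministic layer defines what must not happen, and the generative layer proposes candidates inside those limits.

```python
BANNED_TERMS = {"guaranteed returns", "risk-free"}  # hypothetical policy list

def guardrail(text, max_len=80):
    # Deterministic layer: what must happen, what must not happen,
    # and what gets checked. Same input, same verdict, every time.
    issues = []
    if len(text) > max_len:
        issues.append("too long")
    for term in BANNED_TERMS:
        if term in text.lower():
            issues.append(f"banned term: {term}")
    return issues

def generate_with_guardrails(generate_fn, prompt, retries=3):
    # Generative layer proposes candidates; the deterministic layer
    # accepts or rejects them. Nothing leaves without passing the checks.
    for _ in range(retries):
        candidate = generate_fn(prompt)
        if not guardrail(candidate):
            return candidate
    raise ValueError("no compliant output within the retry budget")
```

The design choice worth noting: the guardrail is auditable and testable on its own, independently of whichever model sits behind `generate_fn`.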
When technical errors become business consequences
To ground the discussion, Chris pointed to real-world examples:
- One major supermarket chain misreported financial returns due to a spreadsheet error, resulting in fines and a significant share price drop.
- One AI-powered robotic technology platform saw a 35% share decline linked to accounting system issues.
These weren’t generative AI failures. But as AI systems become embedded in reporting, decision-making and automation, similar errors could scale faster and become harder to trace.
Trust is fragile. Markets are unforgiving. And regulation is becoming increasingly important.
Regulation: one technology, very different rules
AI regulation is evolving, but not uniformly.
European Union
The EU AI Act takes a highly prescriptive approach. It defines detailed categories of risk and prohibited uses. Much like GDPR, it will take years to be fully tested in the courts, but it is already in force. If you operate in the EU, compliance isn’t optional.

United States
There is no cohesive federal framework. Instead, over 200 state-level bills were introduced in a single year. This creates a fragmented, evolving and legally complex environment. For businesses, that means uncertainty.

United Kingdom
The UK has opted for a principles-based model. Rather than rigid rulebooks, it sets high-level expectations around safety, fairness and accountability. This offers flexibility, but also risk. Until tested in court, companies are operating on interpretation rather than certainty.

China
China’s model includes state-directed controls and mandatory labelling requirements. While Western companies may not directly deploy Chinese AI systems, the geopolitical and regulatory implications still matter globally.
Hallucinations and the confidence problem
One of the most intriguing sections of the session addressed hallucinations.
AI systems can generate outputs that are incorrect, fabricated or entirely fictional and deliver them with complete confidence.
Chris cited a Stanford University study which found that legal AI tools marketed as “hallucination-free” still produced error rates between 17% and 33%. That’s in products explicitly marketed as reliable. Public AI tools vary depending on usage and prompting, but the risk remains.
The challenge isn’t simply that AI can be wrong. Humans are wrong too.
The difference is that AI outputs are plausible, confident and difficult to trace. With a spreadsheet, you can audit the cell. With a large language model, you can’t easily inspect the reasoning path.
That makes governance and validation critical.
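One concrete form validation can take is checking model output against a trusted source of record. The sketch below is hypothetical: `KNOWN_CASES` stands in for an authoritative register (a real system would query a proper database), and the regex is a simplified stand-in for citation parsing.

```python
import re

# Hypothetical trusted register; a real system would query an
# authoritative legal database rather than a hard-coded set.
KNOWN_CASES = {"Smith v Jones [2019]", "Rex v Taylor [2021]"}

def extract_citations(text):
    # Pull anything shaped like "Name v Name [year]" out of the output.
    return set(re.findall(r"[A-Z][a-z]+ v [A-Z][a-z]+ \[\d{4}\]", text))

def flag_unverified(text):
    # Citations the model produced that cannot be traced to the register.
    return sorted(extract_citations(text) - KNOWN_CASES)
```

The point is not the regex; it is that a confident, plausible output can be mechanically cross-checked before anyone relies on it.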
Accountability: if it’s on your platform, it’s your problem
The session explored what happens when AI-driven decisions go wrong.
In a high-profile US case, a major HR and recruitment technology provider faced legal action after its automated hiring system was alleged to have repeatedly rejected applications from a candidate who was disabled and African American. The court determined that the SaaS provider could be held legally responsible for the system’s outcomes, not only the organisations using the software.

Similarly, one Canadian airline was ordered to honour a discounted fare quoted by its chatbot. The tribunal ruled that customers cannot reasonably distinguish between reliable and unreliable parts of a company’s own website.
The implication for software builders is significant: liability may extend beyond the end user.
Human in the loop - but how human?
Many organisations assume that human oversight solves these problems. The session challenged that idea.
Chris argued that if a person is reviewing thousands of AI-generated outputs, fatigue sets in. Automation bias creeps in. Oversight can become superficial.
Human review is essential. But it is not a silver bullet. It needs to be deliberate, empowered and meaningful.
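One way to keep oversight meaningful rather than superficial is to triage what humans actually see. This is a hypothetical routing sketch, not a prescription from the session: low-confidence outputs always go to a reviewer, and a random audit sample of the rest guards against automation bias.

```python
import random

def route_for_review(items, confidence_threshold=0.9, audit_rate=0.05, seed=0):
    # Triage sketch: `items` is a list of (output, confidence) pairs.
    # Low-confidence outputs always go to a human; a random sample of
    # the rest is audited so reviewers see a manageable, meaningful queue.
    rng = random.Random(seed)
    review, auto = [], []
    for output, confidence in items:
        if confidence < confidence_threshold or rng.random() < audit_rate:
            review.append(output)
        else:
            auto.append(output)
    return review, auto
```

Tuning `confidence_threshold` and `audit_rate` is itself a governance decision: it sets how much human attention the process demands, and where.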
Ethics: what you choose to do
Beyond compliance lies ethics. Compliance tells you what you must do. Ethics is about what you choose to do.
Many organisations still lack a clearly articulated AI ethics policy. Where frameworks exist, they typically focus on four principles:
- Fairness
- Transparency
- Accountability
- Safety
Yet these principles often conflict. Greater transparency may expose competitive advantage. Stricter safety controls may limit innovation. Designing for fairness in one dimension may create an imbalance in another.
To help navigate these tensions, Chris posed four critical questions business leaders should ask themselves:
- What would you refuse to do with AI, even if it were legal and profitable?
- Who bears the cost when your AI is wrong?
- To whom do you owe explainability beyond regulatory requirements?
- Where is human oversight non-negotiable?
These are board-level conversations, not engineering checklists.
Public trust in AI companies has reportedly fallen significantly, while only a minority of organisations have formal AI governance frameworks in place. That gap presents both risk and opportunity.
Organisations that can clearly articulate how they govern, monitor and take responsibility for AI systems are better positioned to retain trust as others falter.
Ethics vs compliance (and why they are not the same)
Towards the end of the plenary, the discussion shifted from ethics and responsibility to regulation, explainability, risk and operating models, led by Tia Cheang. Tia is known for turning complex challenges into strategic advantage. With over 20 years of senior experience across health, government, telecoms and finance, she has delivered some of the most ambitious and high-impact data programmes in the world.
Tia argued that as AI systems increasingly influence real-world decisions, organisations must move beyond narrow compliance and take full ethical responsibility for how these technologies are used.
Central to her view is accountability: organisations cannot outsource responsibility to vendors, foundation models, or “off-the-shelf” tools. If an AI system rejects an application, sets a price, or makes a hiring decision, the organisation deploying it must be able to explain who authorised that decision, why it was made, and who is accountable when things go wrong.
Explainability is a duty, not a “nice to have”
Cheang also stressed that explainability and human judgement are non-negotiable, particularly in high-impact use cases. Automated decision-making must include clear audit trails, defined points of human oversight, and the ability to justify outcomes to those affected. Rather than seeing regulation as a blocker, she framed it as a practical guide to building trust, transparency, and operational discipline, especially for organisations operating across multiple jurisdictions.
Her message was clear: effective AI governance is not a one-off policy exercise, but a continuous process of monitoring, testing, and accountability, and the organisations that take this seriously will be far better positioned to deploy AI safely, sustainably, and at scale.
The bigger picture
The key messages of the breakout session were:
- AI problems are rarely new problems. They are longstanding issues (trust, accountability, governance and transparency) amplified by speed, scale and automation.
- Generative AI doesn’t remove responsibility. A human is still responsible for its outputs.
- Organisations that address predictability, regulatory complexity and ethical decision-making early won’t just reduce legal exposure. They’ll build resilience.
- As AI becomes infrastructure, trust becomes the differentiator. Yet trust doesn’t happen by accident; it’s engineered, deliberately.
Ready to transform complex AI challenges into strategic opportunities? Partner with NashTech’s expert AI teams to ensure your organisation is equipped for the future with trusted, transparent and accountable solutions. Contact us today to discover how we can help.