Your Board Has an AI Strategy. What It Probably Doesn't Have Is the Governance Framework That Makes It Work.


Katie King, leading AI strategist, World Economic Forum board-level reference author, and member of the UK's All-Party Parliamentary Group on AI, argues that the governance gap in healthcare AI is not a technology problem. It is a boardroom problem. Most dental organisations deploying AI are not failing because they chose the wrong tools. They are failing because they deployed the right tools into organisations that were never structurally or culturally ready to absorb them. The returns being promised by AI vendors will not materialise until boards understand that distinction and treat governance not as a compliance overhead but as the operating discipline that makes AI commercially viable at scale.


Listen to the full episode: 🎧 Spotify 🎧 Apple Podcasts ▶️ YouTube


What Is the Governance Gap and Why Does It Matter Now?

There are 125,736 dental professionals registered with the GDC. Every major dental group has an AI strategy. Or at least, every major dental group says it does.

The distinction between having an AI strategy and having AI activity turns out to be the most commercially significant question in dental technology right now. Katie King, who has advised global organisations on AI integration for 35 years, describes the same failure mode across every sector she works in. Organisations confuse the two. The result is significant investment and underwhelming returns.

"Most organisations confuse AI activity with having an AI strategy. So it might be some pockets of experimentation, a chatbot over here or a pilot project there. But when there's no coherent plan that's actually connected all of these initiatives to business outcomes, that's the problem. A real AI strategy has got clear ownership at board level, it's got defined use cases that are tied to measurable value, and thirdly, a governance layer that's ensuring that really important accountability."

In dentistry, that absence of governance has consequences that go beyond strategic underperformance. Deploying AI in a patient-facing clinical environment means making decisions about patient data, clinical recommendations, consent, and liability. Those are not IT decisions. They are governance decisions. The boards treating them as IT decisions are accumulating unrecognised regulatory and legal exposure.

The regulatory direction of travel makes this urgent. The EU AI Act is the world's first comprehensive binding law on AI, and it classifies most healthcare applications as high-risk. The UK is developing its own frameworks. The organisations waiting for regulation to arrive before they act will be the ones scrambling to retrofit governance after the fact.


What Must Exist Before Any Patient-Facing AI Deployment?

Asked what a CEO needs in place before any patient-facing deployment, Katie King sets out a minimum viable AI governance framework for a multi-site dental group in five points.

First, a named AI lead at board level. Not delegated to middle management. Not buried in the IT function. A named executive who carries accountability for AI governance across the whole organisation and reports to the board on it directly.

Second, an AI use case register: what AI is being used across the organisation, where it is being used, and by whom. This sounds simple. Most organisations cannot produce it.

Third, a risk classification system. Not all AI carries the same risk. The AI that surfaces a scheduling reminder is categorically different from the AI that informs a clinical recommendation. Organisations need to triage accordingly, with proportionate oversight for each category.

Fourth, mandatory bias and impact assessments before any patient-facing deployment. Not after a problem surfaces. Before deployment begins.

Fifth, a continuous feedback loop. Governance is not a one-off audit. It is an ongoing discipline.

"Without those in place, you're flying blind when it comes to patient data. You need clear accountability when, and I say when not if, when the AI gets it wrong, you need the feedback loop so that the system can keep improving."
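The second and third elements, the use case register and the risk classification system, can be sketched as a minimal data structure. This is an illustrative sketch only: the tier names, fields, and the single `bias_assessed` gate are assumptions, not a framework Katie King specifies in the episode.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    ADMINISTRATIVE = 1   # e.g. a scheduling reminder
    PATIENT_FACING = 2   # e.g. recall or triage messaging
    CLINICAL = 3         # e.g. AI informing a clinical recommendation

@dataclass
class UseCase:
    name: str
    owner: str           # the named accountable person (element one)
    tier: RiskTier
    bias_assessed: bool = False  # element four: assessed *before* deployment

def deployment_blockers(register):
    """Return use cases that fail the minimum pre-deployment check:
    anything above administrative risk needs a bias/impact assessment."""
    return [
        uc for uc in register
        if uc.tier is not RiskTier.ADMINISTRATIVE and not uc.bias_assessed
    ]

register = [
    UseCase("Appointment reminders", "Ops Director", RiskTier.ADMINISTRATIVE),
    UseCase("Caries detection overlay", "Clinical Director", RiskTier.CLINICAL),
]
print([uc.name for uc in deployment_blockers(register)])
# → ['Caries detection overlay']
```

The point of even this toy version is the one the article makes: the register forces the organisation to name what is deployed, who owns it, and what tier of oversight it requires, before anything reaches a patient.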


Is "Responsible AI First" a Sophisticated Way to Do Nothing?

The devil's advocate position on AI governance is a legitimate one. Most organisations treat governance as the thing that slows AI down. Governance frameworks are often captured by the compliance function rather than strategic leadership, and can become a sophisticated way to avoid making any decisions at all.

Katie King's response is direct.

"Without it, you have lost the essence of what you're trying to achieve as a healthcare provider. You could look at other parts of the world where it's much more lax and I think it is going to backfire. We've got to do things properly. If we're using AI across our stack, it's going to future-proof us. It's going to differentiate us from other dentists that aren't using it. But we've got to do it in a way that's structured and responsible and safe."

The organisations that have made the most commercial progress with AI in dentistry are often the early adopters. But Katie distinguishes between first-mover advantage and durable competitive advantage. An organisation might get early benefit from moving fast. The ability to deploy AI at scale, across marketing, patient retention, and service delivery, requires the governance infrastructure underneath it. Without it, early gains compound into fragility rather than structural advantage.


What Does Responsible AI Actually Look Like Operationally?

The most important distinction Katie makes is between responsible AI as a published policy and responsible AI as an operating discipline. Most organisations have the former. Very few have the latter.

"They'll publish a beautiful AI ethics statement. It will sit in a poster in the office or on their website. But nothing changes operationally. In practice, responsible AI means every team deploying AI has a check. They know the model's been tested for bias. They know how they can reach out and explain its different outputs. They know what happens when a patient challenges a decision that's been made. Where it really comes into fruition is where it's embedded into the different workflows, the procurement decisions, the performance reviews."

The commercial argument for transparency is one that most dental organisations have not properly made to themselves. In a consolidating market where patient trust is increasingly difficult to earn and easy to lose, the practices that can demonstrate how their AI was tested, what human oversight exists, and how a clinical recommendation was reached will have a differentiator that cannot be purchased quickly.

"Transparency is not a cost, it's a trust builder. The dental practices that can hand on heart say, we use AI responsibly and here's how, they're going to be the ones that are going to win the patient trust."


What Is the Single Most Broken Thing AI Will Expose?

The organisations most at risk from AI are not the ones that refuse to adopt it. They are the ones that adopt it without fixing what is already broken. Put directly to Katie King: what is the single most broken thing inside most organisations that AI will expose before it creates any advantage?

"For a lot of organisations, what's broken is that they have a very tactical approach to the way that they run their organisations. It might be that they're constantly jumping on the bandwagon, trying to catch up with others, without thinking about what are we trying to achieve here. Those that don't do their research, those that constantly try to protect themselves by investing in the next tool out of fear of obsolescence. What's often broken is a way of running their organisations that is very, very tactical. And if they then invest in AI, it just exposes that further."

The AI-mature organisations Katie describes share three characteristics. They treat AI as a business-wide capability, not a departmental project. They invest in data readiness before selecting a tool. And they measure outcomes against specific KPIs, not vanity metrics. Not how many models have been deployed. What measurable value has been delivered.

The governance and cultural infrastructure required to reach that level of AI maturity is explored in detail in The Intelligence Layer: What a DPO Inside a Growing UK Dental Group Knows About AI That Most Leaders Don't.


What Will Historians Say About the Organisations That Got This Right?

When historians write about this period of AI adoption in healthcare, Katie King argues they will say the successful organisations were those that treated patient and customer data respectfully, building and maintaining trust as their primary operating principle.

"They will say that the successful organisations were those that treated their customer or client data respectfully so that they maintained and built the trust with them. They will say that they were organisations that treated it holistically across the whole organisation, didn't treat it as the next tech iteration, but saw it as something that was helping them to run their businesses in a profitable way across every stack."

The bifurcation is already in progress. The dental groups building governance infrastructure now will have a structurally different operating position in five years from the ones that spent the same period buying tools and hoping the returns would follow.

Governance is what makes AI commercially viable, not what slows it down. The organisations that treat it as the price of admission rather than a bureaucratic delay will be the ones that can deploy at scale and deliver measurable returns. The regulatory environment is about to demand what the best organisations are already building. The EU AI Act classifies most healthcare AI applications as high-risk. The UK is developing its own frameworks. The organisations that act now will not just be compliant. They will be positioned as thought leaders in a conversation that will shape UK healthcare AI policy for the next decade.

The compliance ceiling is real, and it is approaching faster than most organisations are pricing in. The organisations that survive it will be built on something compliance box-ticking cannot replicate: genuine governance infrastructure that connects AI deployment to measurable clinical and commercial outcomes.

The question is not whether your organisation is using AI. It is whether your organisation is ready for what AI will reveal about how it is actually run.

For further reading on related themes from TechDental, the workforce engagement and professional development infrastructure argument explored in The Profession Needs a Passport: What Dentinal Tubules Tells Us About the Future of Professional Development Infrastructure in UK Dentistry connects directly to the cultural readiness argument Katie King makes here.


Key Takeaways

Most dental organisations do not have an AI strategy. They have AI activity. The commercial distance between those two things is the most significant unpriced exposure in dental technology right now. A coherent AI strategy has named board-level ownership, use cases tied to measurable outcomes, and a governance layer that ensures accountability. Very few organisations have all three in place at once.

Governance is not what slows AI down. It is what makes AI commercially viable at scale. The organisations treating governance as a compliance overhead are building fragility into their AI investments. The organisations treating it as operating infrastructure are building the conditions in which AI can actually deliver the returns being promised.

The five-element framework is the minimum, not the ceiling. A named AI lead at board level, a use case register, a risk classification system, mandatory bias and impact assessments before patient-facing deployment, and a continuous feedback loop. Organisations that cannot demonstrate all five are not ready to deploy AI in a clinical environment, regardless of what the vendor has told them.

Responsible AI is an operating discipline, not a published policy. An ethics statement on a website changes nothing. What changes things is governance embedded into procurement decisions, performance reviews, and clinical workflows. The practices that can demonstrate how their AI was tested, what human oversight exists, and how a recommendation was made will win patient trust in a consolidating market where that trust is increasingly difficult to earn and easy to lose.

AI will expose what is broken before it creates any advantage. In a well-run organisation, AI accelerates what works. In a poorly run organisation, it accelerates the dysfunction. The single most broken thing in most dental groups is a fundamentally tactical approach to how they are run. Deploying AI into that environment does not fix it. It makes it more expensive and more visible.

The regulatory environment is tightening faster than most procurement cycles. The EU AI Act classifies most healthcare AI applications as high-risk. The UK is developing its own frameworks. The dental groups building governance infrastructure now will not only be compliant when enforcement arrives. They will be positioned as the operators who understood what AI deployment actually requires, at the moment most of their competitors were still running pilots.


About TechDental

TechDental is a strategic intelligence platform for founders, executives, operators and investors shaping the future of dentistry. Through high-level analysis and systems-focused conversations, we explore how AI, governance frameworks and operating model design influence performance, scalability and enterprise value in dental organisations.

www.techdental.com

info@techdental.com

LinkedIn

The future belongs to those who deploy technology with discipline.


© 2026 RIG Enterprises Limited. All Rights Reserved. TechDental® is a trading name of RIG Enterprises Limited (Company No. 11223423), incorporated in England and Wales on 23 February 2018, registered at 1a City Gate, 185 Dyke Road, Hove, England, BN3 1TL. All editorial content, analysis, synthesis and intellectual property contained within this article are the original work of the author and remain the exclusive property of RIG Enterprises Limited. Opinions and statements attributed to named guests reflect the views of those individuals as expressed during recorded interviews and are reproduced here for editorial and informational purposes. No part of this article may be reproduced, distributed, transmitted, republished, or otherwise exploited in any form or by any means without the prior written consent of RIG Enterprises Limited.