
The Intelligence Layer: What a DPO Inside a Growing UK Dental Group Knows About AI That Most Leaders Don't
Direct answer: The most common strategic error in UK dental group AI adoption is not choosing the wrong tool. It is misclassifying the decision itself. AI adoption is not a technology decision. It is an organisational design decision, and the consequences of treating it as anything less are predictable: confusion at practice level, staff resistance, fragmented data and technical debt that compounds with every acquisition.
John Grainger, Data Protection Officer, IT and Communications Lead at Riverdale Healthcare, sits at the exact intersection where these consequences either materialise or get prevented. His perspective is not that of a product evangelist or a vendor. It is that of the person inside a growing, PE-backed UK dental group who is responsible for data integrity, system infrastructure and the governance architecture that either enables AI to deliver value or ensures it delivers chaos. The lessons he articulates are the ones most leaders encounter too late.
What Are Most People Fundamentally Misunderstanding About AI in Dentistry?
Grainger's opening observation strips away the product marketing that dominates dental AI conversations and replaces it with the operational reality.
"The technology is still very new. Every single supplier, I get multiple a day, is saying have you checked this new AI software out? Have you not seen this new product that we've got on the way? And it's all really cool stuff. But when you actually look at implementing it, it's how the people on the ground are actually going to take it up. Are they going to like it? Is it going to fit in with their daily routine?"
Two failures run in parallel in most dental group AI rollouts. The first is the mismatch between the supplier demo and the live deployment reality: the integrations that weren't shown, the compatibility gaps that only appear at practices 2 and 3, the workflows that look clean in a controlled environment and fracture under daily clinical pressure. The second is the compounding damage to staff confidence when a pilot fails.
"Those staff on the ground, they've gone through it. They've actually had to experience that tech going wrong. And then that makes them nervous for the next pilot that might be up and coming. It's dangerous. I'd say dangerous might be a bit harsh, but I think it is a little bit dangerous."
The word choice is deliberately careful and all the more credible for it. A failed AI pilot does not just waste budget. It raises the activation energy required for every subsequent initiative. In a multi-site group where clinical staff have experienced technology that was promised as a solution but delivered additional friction, the residual scepticism becomes a structural barrier to future adoption that no product demonstration can easily overcome.
What Breaks First When a Group Scales Quickly and Layers Technology Without Structural Alignment?
Grainger's answer addresses the question that boards and PE investors in the UK dental consolidation market rarely ask until the answer is already visible in their operational performance numbers.
"When you introduce technology into a business, you have to look at: what is that going to replace, something that you've got in place? How long is it going to take for certain people to adapt to that change? And once it's actually in, has it worked? Is it actually for the betterment? The data might suggest that it is helping with what we wanted it to help with. But when we look at it on the ground, actually, is there a net gain to what we've done? Are the receptionists or the clinicians spending more time trying to figure out the process or the technology than they are benefiting from it?"
The net gain question is the one most technology decisions in dental groups skip entirely. Throughput metrics, adoption rates and login frequency get reported. The real-world cost in staff cognitive load, patient interaction quality and time-per-patient rarely does. Grainger's framing introduces a rigorous test that should precede any AI procurement decision: measured against the workflow it replaces, does this system produce a genuine net operational gain for the people using it daily?
We examined how the gap between tool activation and genuine operational integration determines AI outcomes across dental groups in Why AI Doesn't Fix Broken Dental Practices - It Exposes Them.
How Does Fragmented Data Across Acquired Practices Work Against AI, Even When the Tools Are Strong?
The data infrastructure problem in UK dental consolidation is documented but rarely quantified by those managing it from inside. Grainger's account is specific and instructive.
"When we have acquired practices, we might have left the tech that's already there. And when we're trying to introduce a new project, it's a lot easier to do it when everyone's on the same page. We've got some practices on a server-based PMS system, which makes it very hard to implement any AI technology right into the PMS, if at all. But we've got others that are cloud-based that we can quite easily integrate different stuff in with. That often makes it hard to see what it looks like at scale and if it can be scaled."
The divergence between server-based and cloud-based practice management systems within a single group is not an edge case in UK dental consolidation. It is the standard condition of any group that has grown through acquisition rather than greenfield development. Each acquired practice carries its own legacy infrastructure, its own imaging protocols and its own data architecture. When an AI tool is evaluated at group level, the question of whether it can actually be deployed consistently across mixed infrastructure environments is almost never answered in the supplier demo.
"Say we have two PMS systems that integrate with the thing that we want to integrate with. That's then two bits of data that we need to look at. Do they push out the data we want in the same way? So that's something that often you have to look at. The productivity in terms of: I'm now checking two systems rather than one and getting two sets of data that need to merge."
This is the hidden tax on AI investment in acquisition-driven dental groups. The productivity promised by the tool is partially or wholly consumed by the data reconciliation work required to make it function consistently across a heterogeneous infrastructure. Groups that do not audit and standardise their data architecture before AI deployment do not eliminate this cost. They simply discover it later.
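The reconciliation cost Grainger describes can be made concrete with a minimal sketch. Assuming two hypothetical PMS exports whose field names and date formats are invented purely for illustration (no real PMS specification is referenced), the "two sets of data that need to merge" problem reduces to writing and maintaining a normalisation layer per system, work that grows with every distinct platform the group acquires:

```python
# Illustrative sketch only: field names, date formats and structures
# are assumptions, not real PMS export specifications.
from datetime import datetime

def normalise_server_pms(record: dict) -> dict:
    # e.g. a legacy server-based system exporting DD/MM/YYYY dates
    return {
        "patient_ref": record["PatientID"],
        "visit_date": datetime.strptime(record["ApptDate"], "%d/%m/%Y").date().isoformat(),
        "site": record["Branch"],
    }

def normalise_cloud_pms(record: dict) -> dict:
    # e.g. a cloud-based system already exporting ISO-format dates
    return {
        "patient_ref": record["patient_id"],
        "visit_date": record["appointment_date"],
        "site": record["practice_name"],
    }

def merge_feeds(server_rows: list, cloud_rows: list) -> list:
    """Combine both feeds into one consistently shaped dataset."""
    return ([normalise_server_pms(r) for r in server_rows]
            + [normalise_cloud_pms(r) for r in cloud_rows])
```

Every additional PMS in the estate adds another `normalise_*` function to write, test and maintain, which is precisely the productivity tax the tool's headline figures never include.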
We examined how infrastructure decisions made at the point of acquisition compound into either competitive advantage or technical debt at scale in Scaling Dentistry Without Breaking It.
How Should Leaders Rethink the DPO Role in an AI-Enabled Dental Organisation?
In most UK dental groups, the Data Protection Officer is a compliance function. The role is associated with GDPR policies, data subject access requests and ICO reporting. Grainger's account of what the DPO role actually requires in an AI-active environment reframes it as something considerably more strategic.
"As the DPO, you need to look at how data's being used across the organisation, who we're giving it to, are they abroad? A lot of these companies are really excelling abroad, in the US or parts of Europe. We need to look at where our data's stored, what they're doing with it, are they selling it onto anyone else? All this is a process that we have to go through. And a lot of the time, the answer's no, to be honest with you. Or we put very stringent safeguards in place. But again, that eats into productivity."
For a UK dental group operating under NHS contracts and subject to CQC oversight, the data sovereignty question is not abstract. Patient data processed by a US-headquartered AI vendor is subject to different legislative frameworks, different storage standards and different downstream use rights than data processed by a UK or EU provider. The DPO's assessment of each AI tool's data handling architecture is not a box-ticking exercise. It is the prerequisite for any compliant deployment.
Grainger illustrates the practical complexity with the AI note-taking use case currently generating significant interest across UK dental groups.
"One of the ones we've got at the minute is AI note-taking software. Have we got the patient's explicit consent? Do they know that they're doing that for every single patient being told in the chair? By the time you've discussed what the software does, have you then wasted the time it would've taken the dentist to write up the notes?"
This is the kind of second-order thinking that separates governance maturity from governance theatre. The GDPR compliance question around ambient AI in the dental surgery is not resolved by a consent form in a new patient pack. Explicit consent for AI-mediated transcription must be obtained in context, for each patient encounter, in a way that does not itself consume the efficiency the tool was deployed to create. Groups that do not resolve this at policy level before practice-level deployment will face the same problem in every surgery they deploy to.
Which Governance Decisions Matter Most Before AI Is Deployed at Scale in a UK Dental Group?
Grainger's governance priority is expressed with a clarity that reflects both his professional role and the operational experience of managing it in a growing group.
"The biggest governance decision for me is who's got hold of our data and are they doing anything with it that we don't want them to do? We want a very simple governance procedure where we say: yes, you can have access to this data. Our patients are aware that this is the data they're going to be using and this is what we plan to do with it. In our opinion it should stop there. We need to make sure that our policies, both internal and external, are showing what it is we want to achieve and with what tech. I want to be completely open with everyone in the organisation, patients included, about where we want to go and what software we want to use."
The transparency principle Grainger articulates is both an ethical position and a strategic one. Dental patients are increasingly informed about data rights. NHS dental contracts carry specific obligations around data use and patient consent. A group that communicates openly about its AI strategy, from board to surgery to patient-facing policy, builds the trust infrastructure that makes adoption sustainable. A group that deploys AI tools without visible policy frameworks creates exposure at every level simultaneously.
His answer on what good data actually looks like in practice is equally grounded.
"It's a mix of anecdotal feedback from the sites and what the hard data shows, and then seeing where you end up on a net."
Structured data without qualitative context produces metrics that look clean and mislead. Qualitative feedback without structured data produces opinions that feel true and also mislead. The combination is what allows a group to distinguish between a tool that is statistically performing and one that is genuinely improving the experience of the people using it and the patients receiving care through it.
Why Do Infrastructure Decisions Made Early in a Group's Growth Compound Once AI Enters the Picture?
Grainger's reflection on this question is candid in a way that makes it immediately actionable for any UK dental group currently in the 5 to 20-practice growth phase.
"I think part of the problem with scaling is: are the things that you are introducing going to still work when you are bigger and when you have more sites? And, to be completely honest, I think that's something that I've been guilty of: looking at what's going to help in terms of AI, looking at what policies we can put in place that are going to help with this particular pilot, now, with these five sites. Once you get to 10, 15, 20, you realise: oh, okay, we don't actually have the infrastructure ready for that sort of scale. Or we need to completely change the way that we now think of this in order to match that scale."
This is the infrastructure trap that recurs across UK dental consolidation. The pilot is designed for the group's current size. The evaluation criteria are calibrated against the group's current complexity. The governance framework is built around the current estate. None of these things scale automatically. Each acquired practice adds another variable to the data architecture, another configuration question for the AI tool, and another layer of change management to the deployment.
"These conversations are going on all the time of: if we were to roll this particular AI software out across the whole of the group, what would that look like? Who would it impact? Who would control it? Can the AI handle it? That's a big one. I think a lot of these suppliers are desperate to get all of your sites running on it. And then you have to really look at that and think: in practice, how does that actually look?"
The supplier's incentive is full estate deployment. The group's interest is estate-wide value. These are not the same thing. A tool that performs well at 5 practices and creates significant operational friction at 20 is not a scalable investment. Evaluating AI tools against the group's projected 3-year size rather than its current size is the infrastructure discipline that most groups apply too late.
If Leaders Could Only Fix One Part of Their Infrastructure Before Deploying AI, Where Should They Start?
Grainger's answer is the most operationally important contribution of the entire conversation, and it is not the answer most technology leaders would expect.
"Staff buy-in. Or internal stakeholder buy-in. By that: the clinicians and the other dental staff. And then every layer of the organisation I think needs to be aware of what this software can do before you proceed. Get the staff's input as you go along. So there's no group think, because we can sit in a meeting room and think, oh, this is fabulous, it's going to be absolutely brilliant. But if you get a receptionist in or a nurse and they go, actually, that's not the way we work at this site, you'll need to look at how you can accommodate that. With everything that we do, we're doing it for the benefit of the team on the ground. And if it's not helping them, then what's the point?"
The word "group think" is precise and deliberate. AI decisions in dental groups are predominantly made by people who are not using the software daily. Clinical directors and IT leads who evaluate tools in boardroom demos are not experiencing the friction of a morning appointment list, a nervous patient, a system that requires three additional steps to generate a compliant note or an integration that breaks when the PMS updates. The receptionist or dental nurse who surfaces that friction in a pre-deployment consultation is providing the information the board cannot access any other way.
"The dental groups that win will be the ones that listen to their staff."
This is not a motivational statement. It is a governance principle. The groups that create structured feedback mechanisms between frontline staff and technology decision-makers, before deployment rather than after, will avoid the failure modes that Grainger describes. The groups that do not will rediscover them, at greater cost and with greater organisational damage, at every subsequent implementation.
We examined how people-first thinking determines whether AI initiatives succeed or collapse across dental organisations in People-First AI: Why Most AI Projects Fail in Dentistry.
How Does AI Change Consistency Across Clinicians and Locations When Deployed Properly?
Grainger's experience with consistency at practice level reflects a broader truth that large-scale diagnostic AI deployments have confirmed across UK and international contexts. Achieving consistency is not the same as mandating uniformity, and conflating the two is one of the most common governance errors in multi-site AI rollouts.
"Consistency's a really difficult one with clinicians because clinicians have a very specific way of working and what they want to use. Let's take AI note-taking software, for example. Some clinicians don't want to use it and we're not going to force clinicians to use it. So it makes it hard to have consistency in that sense because everyone in that practice might not be using it. I see our job as facilitating it so that the dentist who doesn't want to use it thinks, oh no, we can see the benefit here, I want to use that software now."
The facilitation model, rather than the mandate model, reflects a fundamental difference in how clinical AI adoption works sustainably. A clinician who adopts a tool because the evidence of its benefit is demonstrated to them through their colleagues' experience brings genuine engagement to its use. A clinician who adopts a tool because it has been mandated by a group policy brings compliance at best and active resistance at worst. Neither is a sound foundation for the governance quality the tool is meant to deliver.
"I haven't heard that much pushback on that point, which did surprise me slightly. What I found with a lot of the software at the minute is that they operate very heavily on: these are recommendations, can you review this first before you make a decision? I haven't come across a software, certainly not one that we use, that says it's definitely this and it's definitely that. Because the clinicians quite rightly still need that control over the treatment they give their patients."
The recommendatory architecture of current clinical AI, its insistence on surfacing findings for clinician review rather than automating clinical decisions, is the design choice that has most effectively managed the autonomy concern in practice. Suppliers who have maintained this architecture have earned the trust that allows adoption.
What Is the Single Most Uncomfortable Truth About AI in UK Dental Groups That Most Leaders Are Not Ready to Hear?
"It's not quite ready yet."
This is not a counsel of inaction. Grainger is explicit that AI is the right direction and that the sector is moving correctly. His position is more precise: that the current moment is a genuinely uncertain one in which the technology, the regulatory frameworks, the data infrastructure and the organisational cultures required for safe and effective AI deployment are all in simultaneous development. Leaders who treat the current state as more resolved than it is will make procurement and deployment decisions they will regret. Leaders who use this period to build the infrastructure, governance and staff readiness that the next generation of AI tools will require are investing in something real.
"I really think AI is fantastic. We are moving in the right direction. It's this funny bit we're in at the minute where all this stuff, if we're honest with ourselves, it's still not figured out. We still don't really know."
The honest acknowledgement of uncertainty from someone managing AI adoption inside a live, growing UK dental group carries more strategic value than most vendor case studies. Certainty in the AI discourse is cheap. This kind of measured, grounded perspective is where genuine strategic intelligence begins.
We examined how the distinction between AI as a tool layer and AI as an intelligence infrastructure determines which dental groups scale with clarity and which scale with chaos in The Intelligence Gap: Why 550 Dental AI Tools Exist and Most Practices Are Still on Paper and Pen.
Key Takeaways
AI in dentistry is an organisational design decision, not a technology decision. The question is not which tool to buy. It is whether the data infrastructure, governance architecture, staff culture and clinical workflows are mature enough to support it. Groups that skip this distinction will learn it through failed pilots.
Failed pilots create lasting damage beyond wasted budget. Staff who have experienced technology going wrong become resistant to subsequent initiatives. The cost of a failed AI pilot must be calculated to include the trust deficit it creates across the frontline team, not just the sunk procurement cost.
Fragmented PMS infrastructure across acquired practices is the primary constraint on AI scaling in UK dental groups. Server-based practices often cannot integrate current AI tools at all; cloud-based practices usually can. A group that has not audited and begun standardising its infrastructure before AI procurement will discover the incompatibility after deployment, not before.
The DPO is a strategic architecture role, not a compliance function. In an AI-active organisation, the DPO's assessment of data sovereignty, storage location, vendor data use rights and patient consent frameworks is the prerequisite for any compliant and sustainable AI deployment. UK dental groups with NHS contract obligations face specific obligations in this area that most vendor contracts do not address by default.
Explicit patient consent for AI-mediated clinical tools must be obtained in context, per encounter. The assumption that a new patient consent form covers ambient AI transcription or diagnostic AI analysis is legally insufficient and practically unreliable. Policy frameworks must resolve this at group level before practice-level deployment.
Staff buy-in is the infrastructure investment that most leaders deprioritise and most rollouts fail without. The receptionist or dental nurse who surfaces a workflow incompatibility in a pre-deployment consultation is providing intelligence the boardroom cannot access. Groups that build structured frontline feedback into every technology evaluation will avoid the failure modes that those who don't will repeatedly encounter.
The facilitation model of AI adoption produces better clinical governance outcomes than the mandate model. A clinician who adopts a tool because its benefit has been demonstrated through peer experience uses it with genuine engagement. A clinician who adopts it under policy mandate uses it with compliance at best.
"It's not quite ready yet" is the most strategically honest assessment of UK dental AI in 2026. This is not a reason to delay investment in infrastructure, data maturity and governance frameworks. It is a reason to invest in those foundations now, so the organisation is structurally positioned to extract value when the tools mature, rather than scrambling to retrofit readiness after procurement.
About TechDental
TechDental is a strategic intelligence platform for founders, executives, operators and investors shaping the future of dentistry.
Through high-level analysis and systems-focused conversations, we explore how AI, governance frameworks and operating model design influence performance, scalability and enterprise value in dental organisations.
If you are building, scaling or investing in dentistry and want independent, systems-level insight into AI, governance and capital readiness:
www.techdental.com | info@techdental.com
LinkedIn: https://www.linkedin.com/in/dr-randeep-singh-gill-576580357/
For strategic advisory, board briefings or keynote speaking enquiries: www.techdental.com/advisory
The future belongs to those who deploy technology with discipline.
© 2026 RIG Enterprises Limited. All Rights Reserved.
This article was authored by Dr. Randeep Singh Gill and is published under the TechDental brand, a trading name of RIG Enterprises Limited (Company No. 11223423), incorporated in England and Wales on 23 February 2018, registered at 1a City Gate, 185 Dyke Road, Hove, England, BN3 1TL. All content, analysis, opinions and intellectual property contained within this article are the original work of the author and remain the exclusive property of RIG Enterprises Limited. No part of this article may be reproduced, distributed, transmitted, republished, or otherwise exploited in any form or by any means, whether electronic, mechanical, or otherwise, without the prior written consent of RIG Enterprises Limited.
Unauthorised reproduction or use of this content may constitute an infringement of copyright under the Copyright, Designs and Patents Act 1988.
