Your business is already using AI. The question is whether anyone is governing it.
A professional services firm I know lost a place on a shortlist last quarter. Not on price. Not on capability. The procurement team asked three questions about how AI is used in service delivery, what data flows into which tools, and what governance sits around it. The firm had no documented answer to any of them.
They are a good business. Capable people. Genuine expertise. And they were using AI well — for research, for drafting, for analysis. The problem was not the AI use. The problem was that nobody had thought about how to explain it, govern it, or answer questions about it in a formal setting.
That conversation is starting to happen more often than most business leaders realise. And it is happening to businesses that are doing everything else right.
Ungoverned AI is what this looks like in practice: AI tools adopted across a business without a documented inventory, without a policy on acceptable use, without evaluation criteria for new tools, and without a process for when something goes wrong. It is not a dramatic failure. It is the gap between adopting AI in good faith and being able to account for that adoption when someone outside the business asks.
Shadow AI is the subset that carries the most immediate risk: personal AI accounts (ChatGPT, Claude, Gemini, Copilot) being used for business purposes on logins the organisation has never seen or approved. In most UK SMEs with 20 to 200 employees, the shadow AI layer is now larger than the sanctioned one.
What does ungoverned AI look like in a small business?
Most organisations with 20 to 200 people would recognise at least three of these four patterns.
No inventory of AI tools in use. Nobody in the business has a complete, current list of every AI tool being used, who is using it, what data it accesses, or how outputs are being applied. The tools were adopted individually, by different teams, at different times, for different reasons. Each decision made sense on its own. The cumulative picture has never been assembled.
No policy on data and AI. Most businesses have some form of data handling policy. Fewer have thought about what happens when an employee pastes client data into a personal ChatGPT account to get a faster answer, or when a sales team uploads a prospect list to an AI enrichment service, or when a finance team feeds revenue data into an AI forecasting model hosted on someone else's infrastructure. The data governance frameworks that exist were designed before AI tools were part of the daily workflow.
No evaluation criteria for AI tool selection. When someone in the team finds an AI tool that saves them time, the decision to adopt usually comes down to whether it works and what it costs. Rarely does the evaluation include questions like what data it retains, who owns the outputs, what happens if the provider changes their terms, or whether any of this is consistent with what the business has told its clients about how it handles their information.
No process for when AI gets it wrong. This is the one that carries the most commercial risk. AI tools produce wrong outputs. They fabricate statistics in proposals, generate customer communications with incorrect information, make recommendations based on patterns that do not apply. In most businesses, when this happens, the person who noticed it corrects it and moves on. There is no process for understanding why, whether it has happened before, or what the business should do differently.
Can employees use personal AI accounts for business work?
This is one of the most common questions business leaders are asking right now, and the honest answer is that most have not decided yet, which means employees are deciding for themselves.
When a team member uses a personal ChatGPT or Claude account to draft a client proposal, summarise a meeting, or research a prospect, the data they input is processed under that provider's consumer terms, not a business agreement. The business has no visibility of what was entered, no control over how the provider retains or trains on that data, and no way to demonstrate to a client or regulator that their information was handled in accordance with any stated policy.
The question is not whether to allow it or ban it. Banning personal AI use outright tends to push it further underground. The practical answer is to establish a clear, written AI use policy that distinguishes between acceptable and unacceptable uses, specifies what types of data can and cannot be entered into AI tools, and provides sanctioned alternatives for the tasks people are currently using personal accounts for.
A one-page AI use policy is not bureaucracy. It is operational clarity. Most businesses could write one in an afternoon if they decided it mattered. The businesses that have done it describe it as permission: their team can now use AI confidently because the boundaries are visible and the risks are named.
A future post in this series will cover what a practical AI use policy looks like for a UK SME, including a starting template.
What are the commercial risks of ungoverned AI?
The risks are not catastrophic scenarios. They are everyday business situations where ungoverned AI creates tangible problems.
Fabricated content in client deliverables. A professional services firm uses AI to draft client deliverables, and the AI introduces a fabricated case study, complete with a company name and a percentage improvement that never happened. The deliverable goes to the client unchecked. The credibility damage is severe and difficult to repair.
Bias in automated screening. A recruitment firm uses an AI screening tool to filter CVs, and over time the tool develops a systematic bias against candidates from certain educational backgrounds. Nobody notices because the screening criteria are opaque. An unsuccessful candidate asks for an explanation, and the firm cannot provide one that would satisfy an employment tribunal.
Silent quality degradation. A manufacturing business uses AI-powered quality inspection. The system flags fewer defects after a software update and production speeds up, but three months later customer returns increase significantly. The connection between the software update and the quality change is only identified after the damage is done.
These are not edge cases. They are the kinds of situations that emerge naturally when capable technology is deployed without governance. The people involved are acting in good faith and the tools are genuine productivity enhancers. What is missing is the structure around them.
Does AI use affect professional indemnity insurance?
This is a question more business leaders should be asking their broker, and one that more brokers are starting to raise with their clients.
Professional indemnity insurance typically covers losses arising from professional negligence, errors, or omissions in service delivery. When AI tools are used in that delivery, the question of where the error originated becomes more complex. If an AI-assisted report contains a material inaccuracy and the client suffers a loss as a result, the PI policy may respond, but the insurer will want to understand what checks were in place, whether the business knew AI was being used in that process, and whether there was a documented procedure for validating AI-generated content.
Commercial insurance brokers are beginning to include AI governance in their standard risk conversations. "How does your business govern its use of AI?" is not on every renewal questionnaire yet, but it is appearing. The businesses that have something credible to say are not necessarily doing more than those who do not. They have just thought about it and written it down.
The practical implication: if your business uses AI in any part of its service delivery and you cannot explain to your insurer how that use is governed, you may be carrying a coverage gap you have not identified. A conversation with your broker about AI use is a sensible first step, and having a documented AI policy strengthens your position significantly.
How are procurement teams and clients asking about AI governance?
This used to be an internal conversation. It is becoming an external one.
Procurement. RFPs are starting to include sections on AI use in service delivery: what tools are used, what controls are in place, whether the business can explain its AI-assisted processes to a client if asked. Businesses that leave those sections blank or write something vague are losing shortlist places to competitors who have structured answers.
Clients. The question "is this AI-generated?" is now being asked in professional services relationships. Not always formally. Sometimes over coffee. But the businesses that can give a clear, confident answer about how AI is used in their work, what safeguards are in place, and what human oversight looks like are in a stronger position than those who stumble through it.
Regulation. The EU AI Act is in force. The UK government has published its AI regulation framework. Neither requires ISO 42001 certification. But both create expectations that businesses using AI should be able to explain how they govern it. The regulatory direction is clear, even where the specific requirements are still forming.
The common thread: external parties are starting to ask questions that most UK businesses cannot currently answer with confidence. That gap is already affecting commercial outcomes.
How do AI governance gaps affect the rest of the business?
One of the patterns I see consistently in commercial work is that problems in one part of the business rarely stay contained. Sales issues create forecasting problems. Forecasting problems create cash flow tension. Cash flow tension creates board-level stress.
AI governance gaps follow the same pattern. A sales team using AI-assisted proposals without review processes creates a client trust issue, which becomes a retention problem, which becomes a revenue problem that lands on the MD's desk with no obvious connection to the AI tool that someone adopted nine months ago.
The cross-functional nature of AI use means that governance gaps in one area can surface as commercial problems in another. By the time the problem is visible at leadership level, the root cause is often several steps removed from where the impact is felt.
What does AI governance actually mean for an SME?
If you are reading this and recognising your business, you are already ahead of most. The majority of businesses in this position have not yet had the conversation — not because they do not care but because the adoption happened organically and nobody has had reason to step back and look at the whole picture.
Here is the part worth sitting with.
The businesses without AI governance are not the ones moving fastest. They are the ones holding back. Leadership hesitates on the bigger AI investments because nobody can articulate the risk profile of the ones already in play. The AI-assisted service offering that would differentiate them in a pitch stays in the "not yet" pile. AI use in client conversations gets played down when it could be talked about as a strength. The absence of governance does not create speed. It creates a ceiling.
AI governance for an SME means knowing what AI tools are in use, understanding the risks they carry, having clear policies on acceptable use and data handling, and being able to answer the questions that clients, insurers, procurement teams, and your own board are going to ask. It does not mean building a compliance department. It means building enough structure that the business can commit to AI with confidence.
Do that, and two things happen. The ceiling on how far you can commit to AI lifts. And the next time someone asks how your business governs its AI use, you have a confident, credible answer instead of a silence.
How much structure is enough? Far enough that it holds water. Not so far that it holds up sales. That is the balance.
The international standard for AI management systems, ISO/IEC 42001, provides a structured framework for finding it. In the next post in this series, I will look at something most of the AI governance conversation misses entirely: why governance applies to businesses that use AI in their operations, not just businesses that build and sell AI products.
This is post 2 of 6 in the From Exposure to Advantage series. Read the previous post: From sales transformation to AI governance.
Coming next in the series
Post 3 will examine why AI governance applies to every business that uses AI in its operations, not just those that build or sell AI products — and why that distinction matters more than most leaders realise.