The AI questions boards are really asking - answered

The most important AI decision a board needs to make is who owns AI governance. Without an owner, AI adoption happens department by department – without accountability, controls or visibility into what is actually running across the business.

AI adoption for business is one of the most discussed and least resolved topics in boardrooms right now. Everyone is talking about it; very few organisations have a clear plan. Here, AJ Thompson, Chief Commercial Officer at Northdoor, answers the questions business owners and directors are genuinely wrestling with. Not the surface-level commentary, but the real concerns – around control, accountability, data, jobs and ethics. AJ sits on IBM’s Worldwide Security Advisory Council and works with organisations at every stage of their AI journey. What follows are his answers, as he gave them.

Why AI adoption feels uncomfortable for business leaders

Q: What are the key issues posing a challenge for AI adoption – the fears of business owners and directors?

The anxiety is not really about AI at all. It’s about control and comprehension. Business owners and directors are accustomed to making defensible decisions, and AI introduces a layer of opacity that makes people uncomfortable. When a system generates an output and you cannot pinpoint what produced it, why, or where it came from, that is deeply disconcerting for people accountable to boards, shareholders or regulators.

Three anxieties run in parallel. The first is competitive: the fear that a competitor is already using AI to undercut your price, speed or service quality, and that you will not see it coming until it is too late. The second is reputational: just one highly publicised AI failure, be it a discriminatory output, a data breach or hallucinated responses sent to clients, can undo years of hard work. The third is existential and more intimate: directors are privately asking themselves whether the skills and instincts that brought them here will still count in five years. That’s not something you want to say in a board meeting.

Underlying all of this is a data problem that nobody talks about. For decades, many companies have been accumulating data in silos, in inconsistent formats and with no governance framework. AI exposes that immediately – rubbish in, rubbish out. No tool, however clever, can produce reliable answers from bad data.

The most important AI decision your board needs to make

Q: What is the most important AI decision a board must make in the next twelve months – and what are the risks of getting it wrong or doing nothing?

The single most important decision is this: who owns AI governance in the organisation – and who actually wants to own it.

That means deciding who carries responsibility for how AI is deployed, supervised and governed across the business. Without that answer, everything else is guesswork.

The danger of inaction is that AI adoption happens anyway: department by department, through shadow tools brought in by enthusiastic employees, through third-party vendors – all without formal governance.

And that’s how customer data ends up going to a large language model that hasn’t been vetted, or a procurement decision gets based on an AI output nobody bothered to question. Ungoverned AI doesn’t mean no AI – it means hidden AI with no accountability.

The other failure mode is having a governance structure in writing but not the control to implement it. Boards should approach this with the same seriousness they gave GDPR – and, if anything, learn from those who did not take that seriously enough in 2018.

Where to actually start with AI

Q: Where do we start? How should companies treat AI and use it?

Start with problems, not technology. The worst AI projects I have seen started with ‘we need to do something with AI’ and zero clarity about what problem they were solving. The best ones started with a process that was slow, costly, error-prone or simply annoying, and a genuine question about whether AI could improve it.

Practically, I’d suggest three steps. First, audit your data. AI is only as good as its input, and at this point most – actually all – businesses learn the hard way that their data is messier and dirtier than they thought. Second, choose two or three high-friction, low-risk internal processes – document summarisation, first-draft report generation, answering routine requests for commonly needed information – and pilot them in a controlled manner. Third, establish a governance framework before you scale up. Specify who can use which tools, what data may be processed, how outputs are reviewed and who is liable if something goes wrong. Without that framework you are exposed to breaches and internal data leakage – nobody wants staff able to see colleagues’ health information.
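
For readers who want to see what ‘audit your data’ means in practice, here is a minimal sketch of a first-pass audit – assuming tabular data in a CSV file (the file name is hypothetical) and using the open-source pandas library; the checks are illustrative, not exhaustive:

```python
# Minimal first-pass data audit: the aim is not to fix anything yet,
# only to measure how messy the data actually is before an AI pilot.
import pandas as pd

df = pd.read_csv("customers.csv")  # hypothetical dataset

report = {
    "rows": len(df),
    "duplicate_rows": int(df.duplicated().sum()),
    # Percentage of missing values per column - gaps that an AI model
    # will silently paper over with plausible-sounding answers.
    "missing_pct": (df.isna().mean() * 100).round(1).to_dict(),
    # Free-text columns are where inconsistent formats tend to hide.
    "text_columns": list(df.select_dtypes(include="object").columns),
}

for check, result in report.items():
    print(f"{check}: {result}")
```

Even a crude report like this tends to surface the silos and inconsistent formats described above, and it gives the board a measurable baseline for any remediation work.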

The temptation is to leap straight to client-facing applications, because that is where the commercial return looks clearest. But that is also where your reputational exposure is most pronounced, so prove the technology internally and earn your employees’ trust first.

Skills, jobs and the over-reliance problem

Q: What new skills will be required? How are jobs reshaped? How do you excite, not frighten? Are we already seeing over-reliance?

The most important skills will not be technical; they will be critical and interpretive: the ability to assess an AI output, to frame the right question, to understand what a model can and cannot reliably do. This is sometimes called ‘prompt literacy’. Knowing what to ask, how to ask it and how to check that the answer is what you actually needed – these are new skills that need honing.

Jobs are not so much disappearing as being reimagined – which is small consolation if your role is the one being reshaped. The truth is that routine cognitive work – drafting repeatable documents, converting structured information, triaging basic requests – will go the way of the horse and carriage. But jobs that involve managing relationships, making ethical judgements, solving creative problems or understanding the specific context of a client or market become more valuable, not less.

To bring staff along on the AI journey, involve them early and be transparent about what the company is trying to achieve and how it will help them, rather than leaving them to worry about their futures. Engaged employees make for a far more successful project.

Over-reliance is not yet an issue for most. But the most worrying pattern I have seen is what I would term ‘verification abdication’: people accepting AI outputs without scrutiny, because the output looks so credible and we are all tempted to save time. It is as much a cultural problem as a technical one, and it has to be addressed at the governance level. We have all seen it in action – barristers arguing legal cases on hallucinated precedent, research that has the appearance of an official document yet is pure fiction.

AI in customer service – the benefits and the risks

Q: Customer relationships and customer service – in what way can AI be beneficial, and what harm could it do?

The positive case is real and already being proven. AI can provide consistency at scale. It can surface context instantly, meaning a customer service rep walks into a conversation already aware of the history and the likely issue. And it can cut response times for common queries dramatically, leaving human agents free to handle the complex, sensitive issues that genuinely need a person.

The negative case is just as real and more often overlooked. The greatest risk is not the AI getting things flatly wrong – it is the AI giving an answer that is technically correct but inappropriate for the context. Situations that call for empathy or care – abilities AI has yet to master – still require a human touch, and when AI is deployed in those moments without proper escalation routes, the reputational cost is high and permanent.

It is also apparent that, at present, people usually know when they are engaging with an AI bot. Sometimes that is acceptable, but there are times when a customer just wants to speak to a person, and frustration escalates very quickly. How many times have we all been stuck with a bot that drives us up the wall?

The AI risk that keeps directors up at night

Q: Setting aside data protection and confidentiality, what is the one AI risk that gives you sleepless nights as a business owner or director?

Accountability without visibility. More precisely, that an AI-driven decision causes harm – commercial, regulatory, or reputational – and no one in the organisation can clearly say how that decision was reached and who authorised it.

Directors are personally accountable. As AI becomes more embedded into operations, the chain of accountability gets harder to follow. If an AI system helped with a credit decision, a hiring shortlist, a pricing model or procurement recommendation and that decision then turns out to be faulty, biased or damaging – the question ‘who is responsible?’ becomes very difficult to answer. Regulators are already asking it.

Third-party dependency is a risk dimension of its own. Most businesses don’t build AI – they consume it, through software vendors, platforms and APIs. If those underlying models change – say a vendor is acquired, or a model is retrained on different data – the business consuming it may not even be aware. Very few boards have formally addressed that invisible dependency.

Relationship agents, deepfakes and where the line is

Q: The ‘relationship agent’ plus deepfake voice questions – service enhancement or surveillance? Where’s the ethical line?

This is where I think the industry needs to get real, not bullish.

An AI agent that listens in on client calls, evaluates tone and sentiment, and briefs you ahead of the business conversation? I would call that augmented service – as long as the client knows it is happening. The value is tangible: it may pick up that a client feels frustrated or disengaged where a busy account manager might miss the signs, and responding to that is simply good relationship management.

Transparency is where the ethical line sits. The moment you use that capability without the client knowing, it strays from augmented service into surveillance. And if a client finds that out, trust is broken irreparably.

On deepfake voice, I would say anything that convincingly leads a client to believe they are speaking with a human when they are not deserves the term. If a synthetic version of my voice delivers a pre-recorded update – a product briefing, say – that is plainly identified as such, that is acceptable, if only for the efficiency gain. But using it to simulate a live conversation, to create the impression that I spoke personally? That violates the very foundation of a professional relationship.

Too much too soon – data, pace and getting it right

Q: Too much too soon? Is there a danger of being overwhelmed by all this? Have companies given sufficient thought to the quality of their data?

All three concerns are valid, and they are interlinked.

On ‘too much too soon’ – the velocity of AI development really is remarkable, and the pressure on business leaders to react is real. But rushing adoption is how companies end up with technical debt, security holes and failed implementations. The companies in the best shape three years from now will not be the ones that adopted fastest, but the ones that adopted with clarity and discipline.

Data quality is the most underappreciated problem in AI adoption, full stop. Businesses consistently overrate the quality and structure of their data. AI does not just require data – it requires verified, up-to-date, consistently formatted data under appropriate governance. A six-month data remediation programme would do more for many businesses than any AI tool available today.

And psychologically, this technology is very different from past waves of automation. It operates in territory people thought was exclusively human – language, reasoning, creativity, judgement. It is unrealistic to expect people to accept such a dramatic change without fear. Organisations that acknowledge that fear and communicate honestly will be the ones best placed to deliver AI programmes successfully.

If any of this sounds familiar, we can help

Whether you need to put governance in place, get your data in order, or simply understand where AI can add real value in your organisation – talk to us. Northdoor works with businesses at every stage of AI adoption. We’ll be straight with you about what makes sense and what doesn’t.

Looking for more on what Northdoor does in AI?

Request a demo or contact sales on: 0207 448 8500

Explore our AI services
