Andrew Laird

Be both bold and careful when adopting AI in public services

Updated: Dec 21, 2023

Artificial Intelligence is developing at breakneck speed, with the standard Silicon Valley attitude of "move fast and break things". This creates a serious dilemma for public service leaders. They cannot simply license the widespread adoption of a technology that cannot explain itself, sometimes "hallucinates" and is often biased. But neither can they afford to ignore the potential productivity and analytical benefits. MV's Andrew Laird argues that public service leaders can find a sensible middle ground where they are both bold and careful.

At the start of December, Mutual Ventures hosted a webinar discussion exploring a “values driven approach to AI in public services”, which featured Newcastle City Council who we have been working with. This article reflects on that discussion.

Until relatively recently you could have been forgiven for thinking that artificial intelligence, that vague and distant concept, was something you could opt in or out of. I don't think anyone believes that is the case now.

The past six to 12 months have seen a huge advance in the technology made available to the public. Most people will now be familiar with ChatGPT (or at least what it does), but there is also Google Bard, along with a range of increasingly popular AI "virtual assistant" applications such as Microsoft Copilot and Google Duet.

Against this rapidly evolving backdrop, there are several key reasons why public service leaders need to get on the front foot.

The first is of a practical nature. Staff are already using these tools whether there is a policy in place or not. One of the key questions I ask Council Chief Executives is: how certain are you that ChatGPT isn't being used to produce Cabinet reports? Or how sure are you that staff aren't uploading council data to an AI tool (most of which are open systems) to produce summary reports? Unless public service organisations have systems and policies in place, the honest answer is that they cannot be sure.

The second reason is more fundamental. You may have been observing the bizarre events unfolding at OpenAI (the developer of ChatGPT), where Chief Executive Sam Altman was dramatically fired and then rehired. This was more than just a corporate drama. It went to the heart of the debate about the future of AI.

There are two emerging factions in the AI debate: the "utopians" and the "doomers".

The utopians are convinced that AI is overwhelmingly positive, and that even if AI ends up smarter than humans it will be a good thing. This group is relentlessly pursuing what is known as Artificial General Intelligence (AGI). This is essentially a multi-modal, multi-skilled AI that can consider everything a human can consider, only faster and with the ability to solve more complex problems - think HAL in 2001: A Space Odyssey. Utopians seem generally relaxed about AI becoming more powerful and influential than humans. Google founder Larry Page famously called Elon Musk a "speciesist" when debating this risk. In terms of how this impacts us mere mortals, the pursuit of AGI often puts huge pressure on companies and developers to release products which have undergone only limited vetting.

Then there are those (the so-called doomers) who are more cautious and want to ensure that AI is always ultimately there to serve humans, is regulated, and that the risk of AGI is contained. They also believe AI needs to be more transparent and explainable.

The naming of these groups is obviously overly sensational and, as with everything, there is a sliding scale - but you get the point.

For public services, there is only one side of this argument to come down on. Public services need to be accountable, and where a decision impacts an individual it needs to be transparent, explainable and challengeable. To illustrate why this is a problem: OpenAI has not revealed the data sets on which GPT-4 was trained. Does that matter? You might argue that you wouldn't ask a new member of your team to list every book they have ever read and every movie they have ever watched in order to assess the bias in their opinions. But a human can be asked to explain or justify an opinion or decision. All an AI model can do is point to the billions of pieces of data it has consumed and say that this is the most probable answer to your question.

So how do we proceed?

Digital automation began transforming the way local government operates long before ChatGPT arrived on the scene. Yet the ever-increasing availability of AI with genuine learning capability takes the potential opportunities and risks to a new level. There are many issues around data bias, transparency, accountability and decision-making that need exploring.

We have been doing some of this work with Newcastle City Council (see case study), giving them a full grounding in AI basics as well as an ethics and values base on which to develop their AI journey. Public services cannot afford to ignore the potential benefits of AI in terms of productivity and data insight - but this must come from a position of understanding the risks and limitations as well as the exciting bits.

Through our work with Newcastle, we have developed a framework to support public service organisations in assessing the opportunities and risks, as well as understanding their baseline readiness. It focuses on the key potential use cases of AI in public services and the organisational enabling factors needed to fully harness its potential. See diagram below:

AI can have multiple uses across public service delivery and organisational functions. You need to be strategic about how you want to use AI, identifying the potential use cases that would deliver the best value for your organisation and your local communities. The first part of the diagnostic framework establishes your starting position on your AI transformation journey and where your focus should be going forward.

You also need to act holistically to harness the power of AI. The enabling factors described in the second part of the diagnostic framework represent the key conditions for change.

We recently held a webinar discussing all of this. You can catch up on it via this link: Catch up on our webinar: A Values-Driven Approach to AI in Public Services.

Drop me a line if you want to hear more or discuss your own AI journey.
