
AI is everywhere in the headlines at the moment, and I’ve lost count of the number of rooms I’ve sat in where someone says, “We need to do something with AI,” without being able to say exactly what problem they want to solve. That’s the crux of the issue. If public sector organisations start from “how can we use AI?” rather than “what are we trying to achieve?”, they are already on the back foot.
Perago is a Swansea-based change consultancy that works with organisations across both the private and public sectors on pretty much any kind of change. Our management team and staff base are rooted in the public sector, and we understand the challenges those organisations are facing. At the same time, we work with a fair number of private sector clients, which means we can cross-pollinate ideas between both worlds. Crucially, we’re supplier-agnostic: we’re not tied to any particular technology or platform.
That shapes our attitude to AI. When we talk to organisations about AI readiness we encourage them to approach it in the same way they would approach any other major change. That starts with taking the time to work out what it is you’re trying to achieve, and developing a clear corporate narrative so everyone understands the problems you’re trying to solve. If you can’t clearly define those problems, you can’t credibly assess whether AI – or anything else – is actually helping.
Once you have properly defined the problems, you can start to look at a range of solutions. AI might be one of them. There may also be other technologies, or improvements to processes, or changes in culture and behaviour that provide better answers. The point is that AI should sit in the mix as a potential solution to specific, well-understood issues, not as a product you feel obliged to adopt because it is fashionable.
Government has been very active in encouraging the use of AI. Over the past 18 months we’ve seen both the UK Government and the Welsh Government set up structures specifically aimed at promoting AI across public services. The intent behind that is positive. But it has, at times, landed in a way that feels a bit like: “AI is here, it’s there for you to use, go forth and adopt it.” That can create pressure on public services to move quickly, perhaps prematurely, without really taking stock of what they are trying to achieve and where AI could be most fruitful.
There are, however, certainly parts of Welsh public services where the right approach is taking hold. When I talk to organisations like South Wales Fire and Rescue Service, what stands out is that they are taking the time to think about their “why”: efficiency, safety, productivity, value for the people who fund them. They’re setting up AI working groups, looking at cybersecurity, data, confidentiality, ethics and transparency. They are also grappling with the reality of very diverse internal audiences – different levels of digital confidence, different attitudes to AI, people working in very different environments. Creating a culture where people can safely experiment, learn at varying levels and voice their concerns is a big part of making AI work in practice.
For us, ethics and governance come into the conversation as soon as we move from problem definition into potential solutions. We tend to work through four broad stages with organisations. First, discovery: understanding where the organisation is now, the landscape, and the issues it is facing. Second, definition: refining those problems and starting to think about possible solutions. Third, development: taking a small number of those solutions and really working them up. This is where ethics becomes critical. Just because you can do something with AI does not mean you should. It may not be right for your users, and it may not safeguard their needs – including needs they don’t explicitly articulate, such as keeping their data secure at all times. Finally, delivery: making sure those ethical considerations stay on top of the risk register as you implement, so you’re not creating unintended harms as a side-effect of innovation.
I also understand why many people in the public sector can feel overwhelmed. AI is huge, and it is not going anywhere. It can feel a bit like the early days of the internet, when the idea of being an “expert” in it was almost meaningless because it was so broad. I don’t think any of us are going to be experts in every aspect of AI. That’s why I keep coming back to the same basic advice.
Don’t get dazzled by the shiny products or the sheer volume of choice. Start with what you know best: your organisation, your people and the communities you serve. Be clear about the problems you are trying to solve. Use AI as one of a range of tools that might help, applied responsibly and ethically. Build a solid foundation rather than rushing to badge something as “AI-enabled”. If public sector bodies in Wales can do that, they’ll be in a far stronger position than those who simply set out to “do something with AI” and hope the benefits follow.
Chris Elias talks about this and more in the Government and Not for Profit podcast episode Ready for AI. Listen here.