
GUEST COLUMN:
Rhian Moore
Head of Communications and Engagement
South Wales Fire & Rescue Service

AI is becoming part of everyday conversation, and like many public sector organisations we are exploring what it could mean for the way we work. But rather than rushing towards tools or products, we are starting with understanding why we would use AI, what problems it could help us solve and how it could support both our frontline operations and our back-office teams.
This measured approach is a deliberate choice to build the foundations properly so that any technology we adopt genuinely improves the service we provide.
Like many organisations with long histories and complex roles, we have a lot of legacy systems. Some of our staff are offline for most of their working day. Firefighters don’t sit behind laptops; they work on stations and attend incidents. Much of our communication is traditional, via printed material, briefings and face-to-face conversations. That context matters. It affects how new technology can be introduced and how it will be used. For AI to work in our environment, it has to be rooted in people’s real roles, real constraints and real needs.
That is why we have started with a digital transformation programme that looks first at the basics. Before you even think about AI, you need systems that talk to each other, up-to-date equipment and an intranet that works well for everyone. Many of our biggest gains will come not from AI itself but from getting those fundamentals right. Once we have a more coherent digital environment, AI becomes one of the tools we can consider alongside others.
At the moment we are working through our “why”. We are looking at how AI could streamline the work we do, improve productivity and free up time. AI has potential both at incidents and in the back office. On the operational side, it could help us better understand a building before crews arrive, drawing on available data to support their situational awareness. On the prevention side, it could help us identify vulnerable communities that would benefit most from home safety checks or targeted campaigns. And in our day-to-day work, it could support tasks such as administration or content creation, giving skilled staff more time to focus on what really matters.
Crucially, AI cannot replace the human element of our service. When someone calls 999, they want people to turn up who can cut them out of a car or put out a fire. Technology can support that work, but it cannot do it. So we are not asking how AI can take over. We are asking how it can help us work more effectively and safely.
To make sure we approach this in a structured way, we have set up an AI working group with representation from across the organisation, including operations, data protection, information technology and communications. The group is looking at the art of the possible and, just as importantly, the parameters we need around that. Cybersecurity, confidentiality, ethics, transparency and the question of how we disclose when AI has been used all matter. We want to build a clear framework for what we will use AI for and what we won’t.
Change on this scale also depends on culture. Our workforce is diverse, with different levels of digital confidence and different attitudes towards new tools. Some colleagues are already using AI informally. Part of our challenge is to support that curiosity while making sure it is done safely and consistently.
Training will be central. As communicators, my team will lead the work of taking people through the “hearts and minds” part of the journey, helping them understand why we are doing this, how AI can make their lives easier and how it fits with our wider transformation. People want to know that booking leave will be easier, or that entering expenses will be simpler, because those things matter to them. Demonstrating early, practical benefits builds trust.
We are also looking at our own use of AI within communications. It can help with content creation and campaign analysis, allowing us to go deeper and produce more targeted, data-driven work. If we can use it to free up some of the time-consuming tasks that often push evaluations down the list, we can improve the quality of our campaigns and our understanding of what works.
As a safety-critical service, we must consider risks carefully. There are questions about data and confidentiality, and about ensuring information isn’t uploaded into systems where it could be used elsewhere without our knowledge. There are also questions about public expectations. When residents want to reach us, they may not want to interact with a chatbot. They may want to speak to someone. So we need to consider both the opportunities and the boundaries.
We are still at the start of this journey, but I see that as a strength. We know we want to be more effective, more efficient and better value for the people we serve. We know AI can support that, but only if we are clear about the purpose. By beginning with the fundamentals – our systems, our people, our processes and our values – we are giving ourselves a strong base to build on. This is not about adopting technology for its own sake. It is about making thoughtful decisions that support the work we already do and help us serve our communities even better.
Rhian Moore talks about this and more in the Government and Not for Profit podcast episode Ready for AI.