AI in Public Services: A Human-First Reality Check


Sam Altman watches over a family consuming his AI. The living-room vignette is framed by power-tool imagery, and the words “Behaviour Power” sit at the top of the artwork, framing the intent of the piece.
Bart Fish & Power Tools of AI / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/

By Hannah Tempest, Director of Design & Strategy at GAIN Experience


The Big Robotic Elephant in the Room

On the final day of SDinGov, I went to Steph Wright’s keynote, where she succinctly addressed the big robotic elephant in the room: AI.

 

I’ll tell anyone who’ll listen that I’ve been made obsolete at least four times in my design career. I graduated dreaming of being an edgy print designer, only to discover that job had evaporated by 2003. So I retrained in something very high-tech - Flash. Then the iPhone arrived and Flash was promptly binned. Being clever, I pivoted to app design. That worked… for a while. But the days of little £30k projects you could knock out in a fortnight gave way to vast product teams. So I thought: “UX! That’s safe. Humans can’t be made obsolete.” Right?

 

So this is the lens through which I’m approaching AI. I’m no Luddite; I love new technology, and I use it daily. Did I use AI to write this? Absolutely not (otherwise the grammar would be better). But we can use AI to power ingenuity and let it act as a catalyst for human innovation. Augmenting teams with AI tools lets the public sector spend more effectively and innovate to solve the knotty problems of citizens and society. We are seeing genuine change for good through the use of these tools, and as the hype fades and real use cases solidify, I believe they will simply become part of the furniture, in the same way mobile and web have.

  

When the Stakes Are Human

But in public services, the stakes are higher than anywhere else. Mistakes aren’t abstract: they can mean loss of dignity, livelihoods, even lives. And just as we all discussed with social media algorithms in the early 2010s, AI brings the “black box” problem. Systems are opaque. Who’s accountable when things go wrong? Where can people appeal? A promise to “keep humans in the loop” isn’t enough if that human has no authority to enact change, so we need humans in control, not just in the loop. I believe we will see organisations develop a more nuanced understanding of what’s in the loop versus what’s in control in the coming months, as we all grapple with the pressures of efficiency and quality.


 

A pink and yellow abstract image of an office with people working, chatting and walking around. Above their heads are clouds of network connections. It was painted with gouache and drawn with pencils.
Jamillah Knowles & Digit / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/

Designing for the Margins

What gave me hope at SDinGov was our community of service designers and UX professionals: people who design with care and always consider the unintended consequences of services and products. Our discipline gives me hope that factors such as accents, skin tone, and temporary or permanent disabilities will be fully considered by our teams, and that synthetic users won’t replace real human interactions. As Steph pointed out, if we design for the average, we exclude those in the margins. And it’s the margins who most need these systems; it’s the margins where we will all live at some point in our lives. I LOVE the margins! Designing for the edge cases makes services better for everyone, and unless it’s for everyone, it’s not good design!

 

Participation and Power

Steph also asked important questions about participation and power. AI isn’t neutral; it reflects its creators and its context. Within our toolkit, we can call on participatory methods to work with communities, and these should be a first port of call for any AI product we’re making. Headlines about AI psychosis show why observing real people’s actual behaviour with tools, not just their reported behaviour, is integral to avoiding deeply troubling consequences. Pilots need to be about partnerships, not test subjects – participants, not users.

 

The collage shows multiple repeated tabs with the words "people who liked this also liked...". In the middle, there is a painting of a woman feeding a goat. The goat has been highlighted in a yellow bounding box. The background is a painted nature scene.
Dominika Čupková & Archival Images of AI + AIxDESIGN / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/

Governance and Responsibility

It’s easy to see AI as either magic or doom – wizards or prophets. But if we strip it back, good AI should be like good design: thoughtful, iterative, user-centred. We already have the tools to make it work as long as we keep to our roots and don’t exclude people from power and participation.

 

And yes, governance matters. Not just red tape, not just to spoil the fun, but to protect people. Stopping must be a choice – we can say “no”. Transparency and working in the open are baked into service design; they must be baked into AI. “Move fast and break things” doesn’t work when the things being broken are people’s lives.

 

In the public sector, where we are not in service of pure profit but of wider goals such as participation, access, and fairness, we have a real opportunity to shape the conversation around what good could look like.

 

A North Star for AI

Steph’s call for a North Star really resonated. We need a shared vision of accountability and human-first design. Services that include everyone, not just the “average.” Because AI in public services isn’t just about technology. It’s about trust, dignity, and inclusion.

 

What do you think our North Star should be?


Read More:

More great thinking on training and AI from Steph Wright: https://www.linkedin.com/in/steph-wright/ 


Humanoid robots, glowing brains, outstretched robot hands, blue backgrounds, and the Terminator: these stereotypes are not just overworked, they can be surprisingly unhelpful. Better Images of AI is a non-profit collaboration researching, creating, curating and providing better images of AI: https://betterimagesofai.org (thanks, Steph, for sharing)

 

We'd love to hear from you...
