
Who’s really in charge? The hidden risk of AI in the public service


When Assistant Minister Patrick Gorman told the Australian Public Service that “AI wrote it, not me” is the new “the dog ate my homework”, he drew a clear line in the sand: human accountability isn’t negotiable.

He went on to state that “AI is no longer optional” and that public servants are “personally accountable” for their use of generative AI.

It sounds clear enough. But Canberra leadership coach Kim Vella says there’s a quiet force working against leaders who want to toe that line — and most of them don’t even know it’s happening.

“Patrick Gorman’s comments echo AI strategies across government, which highlight a desire to leverage AI while ensuring human oversight and accountability are maintained,” says Kim.

“It sounds simple enough. However, there are subtle dynamics at play which can make this more difficult to achieve than leaders might realise.”

Kim notes that one such dynamic is ‘automation bias’: a well-documented cognitive phenomenon in which people over-rely on AI and come to trust its judgment more than their own.

“Research tells us that automation bias will cause a person to favour automated recommendations over their own judgment, even when contradictory and more accurate information is available,” she explains.

Describing it as “an unsurprising mental shortcut” that is reinforced every time we turn to AI explanations in online search or have a ‘conversation’ with generative AI, she believes the productivity push in workplaces is inadvertently entrenching this bias.

“Generative AI has become the go-to for drafting documents, analysing data, and making decisions. For leaders and teams that are already decision-fatigued and just want to get through everything faster, it’s a welcome relief.”

“However, if we move to a point where automation bias sets in, we end up with an organisation full of people who don’t trust their own judgement – yet are still accountable for the judgements they are relying on AI to make.”

Kim suggests that one way to stop automation bias from taking hold is to normalise challenging AI outputs and promote a healthy dose of scepticism within teams.

“When time-poor leaders constantly direct their team to AI for answers, they are losing opportunities to help staff build their own decision-making and critical thinking skills. Then, you have senior executives reading pre-briefs before important meetings, unsure whether the information provided is accurate and contextually realistic. This all leads to an erosion of trust in ourselves, in others, and across the organisation.

“This is why it’s essential not to pass up those teaching moments in leadership, and encourage teams to add a layer between asking AI a question and accepting its response. That layer is how we continue flexing our own decision-making and critical thinking muscles, so they don’t atrophy.”

Where over-reliance on AI becomes a widespread cultural issue, Kim says it will require vigilance from senior executives, who may need to step in and reset the tone.

“If executives or APS staff find themselves in a position where they feel unsafe to challenge automated responses, this must be addressed at the highest levels. We all know what can happen when automation goes unchecked, and it runs contrary to every APS value. With Patrick Gorman himself stating that ‘every APS value applies with every use of AI’, it’s clear this type of approach will no longer be tolerated.

“We know AI isn’t going away; that’s also clear. However, the question now is not whether we should be using AI. It’s how do you hold your own judgment in the presence of AI? That’s the real challenge.”

For information on leadership coaching with Dr Kim Vella, visit Kim Vella Coaching.
