AI adoption in the shadow of robodebt
Robodebt showed what can happen when technological systems lack human accountability. For AI to succeed in government, we must learn from its mistakes.
First published in the Mandarin.
AI is set to become an essential part of public service governance and decision-making. Across Australia and worldwide, governments are grappling with how to use AI effectively to boost efficiency, expand capacity, and deliver services at the pace communities now expect.
For the APS, AI offers more than just automation; it presents an opportunity to rethink how the public service learns, makes decisions, and acts in real time. It can analyse large datasets, identify fraud or errors invisible to humans, predict demand spikes, and provide insights for complex policy choices. In an era of climate shocks, cyberattacks, and demographic pressures, these tools are no longer luxuries; they are becoming essential.
However, for the APS and other public services, AI adoption does not start from a clean slate; it is happening under the long shadow cast by robodebt.
Robodebt as a cautionary tale
We live within the stories we tell about events just as much as within the institutions where we work. The narratives we share about what has happened and what it means shape our moral imagination, influencing not only our thoughts but also our actions in the world.
Robodebt has become more than just a failed welfare program; colloquially — and not always accurately — it now represents what happens when governments automate processes without proper oversight. In government, failures echo, and robodebt will continue to have repercussions within the public service for years to come.
As a story embedded in APS culture, robodebt risks increasing resistance to artificial intelligence, encouraging defensive decision-making, and fostering a climate where incremental change feels safer than reform. It is no longer just a name for a program, but a warning: technology, unchecked by ethics and responsibility, can amplify human flaws rather than fix them.
This raises the unavoidable question: Will the public sector’s naturally cautious stance, strengthened by the robodebt experience, slow down the adoption of AI?
A cultural scar
Robodebt is a cultural touchstone. The associations that quickly come to mind are harmful automated systems, lost accountability, over-responsiveness to ministers, and people reduced to data points.
The royal commission found that the failures were not just technical but also moral. Commissioner Catherine Holmes described it as an “extraordinary saga” of “venality, incompetence and cowardice”.
This matters for AI adoption because public servants are active participants in it, not passive recipients. They carry institutional memory: lessons passed down through training, case studies, and stories told in corridors all shape their risk appetite. Robodebt reinforces the idea that technology can erode public trust and put lives at risk.
The challenge for AI adoption isn’t just technical or legal; it’s social and cultural. Can the public service find a way to learn, adapt, and move forward at the pace AI now demands?
Historical fears, modern risks
Resistance to AI will also draw strength from older traditions of unease with mechanisation.
Max Weber described the “iron cage” of bureaucracy, where rational systems stifle individuality and accountability. Jacques Ellul warned that once a technical system is established, it develops unstoppable momentum, independent of human values. In literature, Mary Shelley’s Frankenstein serves as a parable about the dangers of creating without responsibility. These stories about people and technology consistently remind us that technology is never neutral; it is always influenced by human values.
AI in government is not neutral code but reflects cultural choices: whether to design for transparency or opacity, and whether to support human judgment or replace it.
People often ask, “Is there a better way to run government?” Usually, the focus is on technology. The next crucial question for public servants should be, “How do our core ethical values, culture, norms, and practices adapt to the new technological environment?” That question matters particularly now, in a world where AI is pervasive.
Defensive decision-making as a cultural drag
One of the lasting cultural impacts of robodebt could be a tendency towards defensive decision-making. In professions prone to legal action, this happens when choices are made more for self-protection than for effectiveness. In the APS, it can show up as decisions escalated up the chain, pilots that never scale, and an overreliance on external vendors for reassurance.
Defensive choices put safety ahead of success, favour internal politics over practical realities, and cling to outdated models instead of embracing innovation. The shadow of robodebt could make this trend worse. Public servants, worried about scandal, might learn to prioritise defensibility over delivering real value.
In trying to prevent another robodebt, the public service risks becoming both structurally and culturally brittle. Innovation stalls, capability gaps grow, and communities continue to receive outdated services. Cultural resistance to AI adoption, in the shadow of robodebt, may further deepen reluctance to reform, even as the government and community push for change.
The silence after the storm
If defensive decision-making is the behavioural legacy of robodebt, silence in the immediate aftermath was its cultural echo. A study by ANU’s Daniel Casey and Maria Maley found that in the six months after the royal commission, only 55% of agencies communicated with staff about its findings. More than a quarter of public servants received no communication from their leaders about how to interpret the crisis or its lessons.
Of the agencies that did communicate, many focused on procedural issues or simply relayed central talking points without reflection. Some messages were dismissive, denying cultural problems in their agency and overlooking the core cause of the crisis. Few addressed the most damaging issue: over-responsiveness to ministers.
For new technologies to be adopted successfully, rules and safeguards alone aren’t enough; cultural honesty must accompany them. Without it, implementing AI risks repeating the cycle seen with robodebt.
AI adoption cannot wait
Standing still is not an option. AI is advancing rapidly in the private sector, and people expect government services to keep pace with the rest of their digital lives. Fraud, cyberattacks, and disinformation already operate at machine speed. Without deliberate and prompt adoption of AI, public services risk dependence on vendors, erosion of sovereignty, and a growing trust gap.
AI adoption in public service is inevitable; the challenge is implementing it effectively. Will public servants act with caution, ethics, and transparency? Or will they become hurried, defensive, and easily swayed? The administrative and cultural issues that led to robodebt have been exposed, but they still pose a risk.
What is clear is that public servants will need to work at the speed of AI.
A vision grounded in culture
To adopt AI responsibly, public servants must view technology as a design issue. This involves three key commitments rooted in public service values and craft.
Transparency: Public servants and the community must be able to understand and challenge automated decisions.
Meaningful oversight: Humans must be responsible for judging fairness, context, and values.
Ethical architecture: Systems — organisational and technological — should be designed around people rather than forcing people to fit the technology.
AI adoption is more than just a technical project; it involves a cultural shift. It requires leaders who demonstrate openness, speak honestly about failure, challenge deep-seated norms, and ensure the right lessons are learned from history.
A positive outlook
The philosopher Hannah Arendt observed that “the essence of human action is to begin”. For the public service, AI adoption is exactly this: a way of engaging with AI that learns from past mistakes and gives the institution every opportunity to improve.
Robodebt warns of what happens when systems lack accountability and when cultures of compliance overshadow core values. The risks of AI adoption are both technical and social: public servants must consistently build trust, demonstrate integrity, and show that technology can support policy design and implementation instead of undermining them.
The shadow of robodebt will persist, but it doesn’t have to determine the future.