The Australian Public Service’s (APS) official culture of AI adoption is not found in the glossy press releases of its ministers. But we’ll get to that shortly.

The real story of AI adoption is revealed in the quiet, conflicting, and deeply secretive actions of its most senior bureaucrats. A case in point: Hamish Hansford. Hansford is not a mid-level manager; he is the Department of Home Affairs’ Head of National Security, the Commonwealth Counter-Terrorism Coordinator, and the National Counter Foreign Interference Coordinator.

In August 2025, Hansford gave a public speech, lauding the “foundationally amazing” capabilities of Microsoft’s AI chatbot, Copilot. He assured his audience that Copilot “didn’t write this speech, but it gives me lots of ideas”. FOI requests subsequently proved this to be, at best, a misleading trivialisation. Documents obtained by Crikey show that on the morning of his speech, Hansford had directed the AI: “Write an analysis of the critical infrastructure environment where Australia has conceptually come from since 1980 to today; outline the emerging threats…”. The resulting 900-word extrusion formed the “bones” of his address, with sections on the history of Australia’s critical infrastructure used “near verbatim”. He even prompted the tool for “an analogy”.

This act of outsourcing the core intellectual labour of a national security chief is revealing, but it is not the critical failure. The true failure, the one that provides a perfect, contained microcosm of the government’s entire AI strategy, is the secrecy and self-preservation that followed… When the FOI request was lodged, the “authorised decision-maker” who signed the letter responding to it was old mate himself. It’s always a good sign when the subject of an inquiry becomes its gatekeeper, right?

Hansford exempted twelve documents from release, asserting they were “personal interactions with members of his team”, which is a concerning claim given they were generated on an official, logged government system. Hansford’s official response letter argued that transparency itself is a threat. He suggested that “any reduction of the Department’s capacity to use AI functions” (presumably such as that caused by public disclosure) “could be reasonably expected to have a substantial adverse effect on the proper and efficient conduct of the operations of this department”. So public accountability is not a pillar of governance, but a harmful obstacle to efficient operations.

The government has just released its “Whole of Government AI Plan”, and the launching minister, Katy Gallagher, speaks of “strong governance and transparency”, but Hansford proves that the bureaucracy’s default setting is to shield itself from scrutiny. It is a preview of how the entire plan will function: as a black box, protected by the folks who benefit from its opacity. The plan’s reliance on internal committees and self-reported transparency is doomed to fail, because old mate Hansford shows us what officials do when they are asked to govern themselves.

I’m really mad at all of this, so I’m going to go into this a bit more. Sorry.

The case of a national security chief using generative AI to write the “bones” of his speeches is not just a question of poor optics; it is a pretty-bloody-huge ethical and legal failure. The bloke in question is a Senior Executive Service (SES) Band 3 employee, a classification that places him amongst the big boys of the APS. His total remuneration runs to hundreds and hundreds of thousands of dollars, and it is not compensation for an ability to formulate prompts; it is for the human-centric skills the public service is built on.

The APSC defines what an SES Band 3 is hired to do in its Integrated Leadership System (ILS). The core capabilities of this role are not administrative; they are cognitive. They include “Shapes strategic thinking”, “Applies intellect and knowledge to weigh up information and identify critical factors”, and “Demonstrates effective judgement”. They do not include vibe speeching.

When Hansford prompted an AI to “Write an analysis of the critical infrastructure environment… outline the emerging threats”, he was outsourcing the very “intellect” and “judgment” that define his half-million-dollar role. This is not delegation, which involves the strategic transfer of outcome ownership; it is an “abdication of critical thinking”. It creates a governance gap where the executive is potentially accountable only for the activity of prompting, not the result or reasoning of the final product.

We already know that this de-skilling of the executive function leads to “skill atrophy” and cognitive inactivity. Over-reliance on AI reduces critical thinking and makes users less capable of spotting errors, creating a dangerous feedback loop of automated, unverified, and high-stakes work. Research published in the Journal of the American Medical Informatics Association found that erroneous AI advice increased the risk of incorrect decisions by 26% compared to control groups without automated decision support. This “automation bias” manifests in government settings where bureaucrats over-rely on AI recommendations even when incorrect, raising concerns about arbitrary decision-making in high-stakes administrative contexts. This is in direct conflict with the government’s own AI guidance, which states public servants must critically assess generative AI outputs for accuracy and bias.

This abdication is a clear breach of the legal standards that govern old mate’s employment. The Public Service Act 1999 (PS Act) and the Public Governance, Performance and Accountability Act 2013 (PGPA Act) establish a non-negotiable framework of duties.

  1. Failure of the SES “Personal Example” Duty (PS Act): The PS Act creates a special, higher standard for the SES. Section 35 states the function of the SES is to provide “APS-wide strategic leadership of the highest quality” and requires each SES employee, “by personal example and other appropriate means”, to promote compliance with the Code of Conduct. By publicly championing a high-risk tool while privately using it to bypass his core cognitive functions, Hansford has set a “personal example” of negligence, not diligence. He has failed this statutory duty.
  2. Failure of the “Care and Diligence” Duty (PGPA Act): As an official, Hansford is bound by the general duties of the PGPA Act. Section 25 requires him to exercise his powers with the “degree of care and diligence that a reasonable person would exercise” in his circumstances. Finance guidance clarifies that failing this duty includes “not taking reasonable steps to inform yourself about significant issues before making a decision” and “undertaking an unfamiliar task without checking legislative requirements”. In a post-Robodebt world, where the risks of automation are known, using a foreign-owned AI for national security analysis before a legal framework exists is a definitive failure of care.
  3. Improper Use of Commonwealth Resources (PS Act): The APS Code of Conduct in Section 13(8) of the PS Act mandates that an employee must “use Commonwealth resources in a proper manner and for a proper purpose”. The APSC’s guidance on the Code of Conduct confirms “Commonwealth resources” explicitly includes “the salary costs of APS employees” and their time at work. Using taxpayer-funded time and a massive salary to outsource the very strategic analysis he is paid to perform is not a “proper use” of that resource.

Australian administrative law, as established in the Pintarich v Deputy Commissioner of Taxation case, requires a “mental process” by an authorised human officer for a decision to be legally valid. An automated output without this “requisite mental process” is not a legal decision. The government, by “driving adoption” of these tools, is systematically engineering a new generation of legally invalid decisions.

Robodebt is back, baby!

The APS AI plan, released today, is bonkers. And it contains a massive chasm between its public rhetoric and its actual architecture.

Minister Gallagher, who is also the Minister for the Public Service, has publicly acknowledged that the Robodebt scandal was not a technology failure but “a failure of leadership, ethical decision-making, and proper oversight”. Armed with that precise acknowledgement, the government has proceeded to draft an AI plan that replicates these exact failures. The plan is a masterclass in governance theatre: a voluntary compliance regime masquerading as mandatory oversight.

The plan’s “strong governance” rests on two hollow pillars: an “AI Review Committee” and “transparency statements.” Both are designed to create the appearance of accountability while preventing its practice.

The AI Review Committee is designated to provide “whole-of-government oversight” and “expert advice on higher risk uses of AI”. The plan discusses a committee that has no published charter, no statutory powers, and no evidence it can halt harmful deployments. It is an administrative body created by executive decision, not legislation, meaning it has little to no parliamentary oversight and can be disbanded at will. And it cannot overrule agency secretaries, impose penalties, or conduct surprise audits. It also won’t be operational for ages, but we’ll get to that.

The second pillar is “AI transparency statements” that agencies must publish. In practice, these are self-reported with no external verification. A review of statements already published by departments like Home Affairs and Health suggests they contain little more than generic, high-level descriptions such as “exploring AI” or “participating in a Copilot trial”. Riveting.

These statements provide zero technical detail on how systems work, what training data is used, accuracy rates, bias testing results, or individual use case specifics. For an Aussie affected by an AI-driven decision (say, a denied welfare payment or a failed visa application), these statements are useless. They offer no data, no algorithmic explanation, and no pathway to challenge a decision, perfectly mirroring the opaque wall of automation bias that defined Robodebt. Exciting!
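
For the sake of argument, here is a rough sketch, entirely mine and not anything the plan or the DTA prescribes, of the minimum machine-readable detail a transparency statement could carry for each use case (the field names are illustrative), next to what the current statements effectively boil down to:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AIUseCaseDisclosure:
    """One record per AI use case; the level of detail the published statements don't go near."""
    system_name: str                      # e.g. "GovAI Chat", "Copilot trial"
    agency: str
    purpose: str                          # which decisions or work the system touches
    model_provider: str                   # vendor and model family
    training_data_summary: str            # what the model was trained or fine-tuned on
    accuracy_rate: Optional[float]        # measured on the agency's own cases, not asserted
    bias_testing_results: Optional[str]   # methodology and findings, or an honest "not done"
    human_review_point: str               # where a human actually checks the output
    review_pathway: str                   # how an affected person can challenge an outcome

# What the statements described above amount to today:
typical_statement_today = AIUseCaseDisclosure(
    system_name="Copilot trial",
    agency="Home Affairs",
    purpose="exploring AI",
    model_provider="Microsoft",
    training_data_summary="not disclosed",
    accuracy_rate=None,
    bias_testing_results=None,
    human_review_point="unspecified",
    review_pathway="none published",
)
```

Even a register this thin would give an affected person something concrete to point at. The statements we actually have give them nothing.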

The plan’s true priority is revealed in its timeline. It follows a sequence that guarantees technology deployment precedes accountability.

A safe, responsible, and logical rollout of high-risk technology might follow a clear sequence:

  1. Establish a Legal Framework.
  2. Create a Governance Body with Statutory Power.
  3. Appoint Accountable Officers.
  4. Pilot and Test the Technology.
  5. Deploy the Technology.

The APS AI Plan inverts this logic to prioritise speed and adoption:

  1. Deploy Technology: The GovAI platform is scheduled to become “operational” by April 2026.
  2. Appoint Accountable Officers: Agencies must appoint their Chief AI Officers (CAIOs) by July 2026 (that’s three months after the system is already live, which seems less than ideal).

This ensures that the governance structure arrives after critical technology decisions have already been made. The CAIOs are not being appointed to act as gatekeepers or to ask whether a system should be built at all; they are clearly being appointed to rubber-stamp. Minister Gallagher’s own press release confirms this, stating the CAIOs’ role is to “promote adoption”.

This structure is the “failure of leadership” that caused Robodebt, now codified as a new national plan. It ensures that no genuine, empowered oversight body exists before the system is turned on, guaranteeing that by the time “governance” arrives, it is already too late.

The Robodebt Royal Commission provided one overwhelmingly clear directive for all future government automation: Recommendation 17.1. It called for the “reform of legislation” to create a “consistent legal framework” for automated decision-making (ADM), including “a clear path for those affected by decisions to seek review” and a requirement that “business rules and algorithms should be made available, to enable independent expert scrutiny”.

The Australian Government’s official response to this was to “accept” the recommendation. The APS AI Plan is the definitive proof that this “acceptance” was a meaningless public relations gesture. The plan has not just been built without this legal framework; it has been deliberately timed and structured to circumvent it.

The GovAI platform is set to be rolled out across the APS in April 2026. However, the Privacy and Other Legislation Amendment Act 2024, which contains the new, mandatory disclosure obligations for automated decision-making, does not commence (for some of its provisions regarding ADM) until December 2026.

This leaves a window of at least eight months after GovAI goes live (and longer still counting from today) in which the government can probably deploy and run high-risk, black-box AI slop machines without the mandatory disclosure or appeal rights that the Robodebt Royal Commission demanded. During this window, AI can influence critical decisions on welfare, visas, and taxes (and god knows what else), and Australians will have no legal right to know or challenge the algorithmic basis of their decision.

If the intention here is to grandfather in a massive new ADM ecosystem under the old, failed, pre-Robodebt rules, then… great success!

This has to be a deliberate feature, not a bug, designed to allow the APS to proceed without the friction of human rights, legal rights, or worker consultation. Where the plan does mention safeguards, they are part of a voluntary compliance regime dressed as governance:

  • Non-Mandatory Assessments: Privacy Impact Assessments (PIAs) for GovAI Chat, a system to be trained on sensitive, cross-agency government data, are merely recommended, not mandatory.
  • Voluntary Procurement: The Digital Transformation Agency’s (DTA) AI Model Clauses for procurement are voluntary guidance with no enforcement mechanism, allowing agencies to pick and choose provisions with no consequences for omission.
  • No Penalties: Breaches of the new AI policy trigger no automatic penalties under the Public Governance, Performance and Accountability Act 2013.

This voluntary framework extends to the workforce itself. The plan is silent on employment law implications. It includes no mandatory union consultation before deployment, only encouraging engagement. This is in direct defiance of CPSU surveys showing 85% of public servants are concerned about AI’s use in recruitment and promotion (amongst all sorts of other interesting things), and ignores recommendations from the Senate Select Committee on Adopting AI.

The government is repeating the exact legal and ethical arrogance that defined Robodebt: the belief that the public service can operate above the law in pursuit of efficiency.

Architecting Exclusion

The APS AI Plan claims to be “human-centred”, with Minister Gallagher assuring the public it will free up staff for work requiring “human insight, empathy, and judgment”. This rhetoric is a cruel inversion of the plan’s actual design. The plan is, in fact, an architecture of exclusion, systematically ignoring every single vulnerable population that was disproportionately harmed by Robodebt.

The plan will inevitably create a two-tier system of citizenship: a “seamless” automated future for the digitally literate and data-rich, and a new, faster, and more opaque system of discrimination for the data-poor.

The plan exhibits an alarming silence on Indigenous considerations. It contains zero mention of Indigenous data sovereignty frameworks, even as one is being developed, and no consultation mechanisms with Aboriginal and Torres Strait Islander communities. This is despite known issues of AI models misrepresenting Indigenous knowledges and the high risk of perpetuating colonial patterns of data collection and algorithmic bias.

Given the documented, severe, and disproportionate harm that Robodebt inflicted on Indigenous welfare recipients, this silence is inexcusable. This is a willful decision to deploy powerful new systems without consideration of data sovereignty, cultural appropriateness, or algorithmic justice.

On top of it all, in one of the plan’s most cynical (or hilarious, depending on your mood) contradictions, it contains no gender impact analysis, despite being championed by the Minister for Women. This omission is made all the more damning by the government’s own internal report on its Microsoft Copilot trial, which was released at the same time as the AI Plan. That report explicitly warned: “women could be disproportionately impacted as they currently comprise most APS administration staff”.

The government was in possession of a direct warning that its chosen technology would disproportionately harm women’s jobs, yet the final AI plan contains no job security guarantees, retraining commitments, or redundancy protections. It also fails to address how algorithmic bias disproportionately affects women in areas like welfare or loan approvals.

The negligence extends to all other vulnerable groups (because of course it does)…

  • Disability: The plan shows minimal integration with the APS’s own Disability Employment Strategy and contains no accessibility requirements for AI-enhanced services or reasonable adjustments provisions.
  • CALD Communities: The plan is also silent on CALD considerations, with no provisions for language accessibility or cultural bias assessment. This is a high-risk omission, given we know that AI systems can have 10-100x higher error rates for non-white faces and demographics, amongst many other things.
  • The Digital Divide: The plan also assumes universal digital capability and access. It contains no requirements for maintaining non-digital service channels. This is a direct repudiation of a key Robodebt Royal Commission recommendation, which found that “more face-to-face customer service support options should be available for vulnerable recipients”.

The plan is not human-centred. It is system-centred. By creating automated barriers with no human alternative, it guarantees a new layer of exclusion that will, by design, harm those least able to fight back.

Sovereignty Surrender

To mask its deep governance failures, the government has anchored its AI plan in the language of security and control. Minister Gallagher announced the creation of the “GovAI platform” and “Gov AI Chat,” which are repeatedly described as “secure, in-house AI tools”.

The “in-house” label is a misnomer. This is another large-scale vendor capture of the Australian government. The “in-house” GovAI platform is “built on the foundation of GovTEAMS”, which is “hosted on Microsoft Azure and Microsoft Office 365 platforms”. It is “completely dependent on foreign foundation models”, with 73% from the US and 0% from Australia. The government has even quietly launched an onshore instance of OpenAI’s GPT-4o for the APS, embedded within this Microsoft-based service.

This is not an AI strategy; it is a Microsoft procurement plan. The fix was in long before the plan was announced. The government’s large-scale trial of Microsoft 365 Copilot was not a test; it was the first stage of procurement. It entrenched the Microsoft ecosystem, integrating workflows and creating the demand that the “in-house” platform would later be “built” to satisfy. Now, the “plan” codifies this capture, locking the entire APS into a single foreign vendor from which it has no planned escape.

This “sovereignty surrender” is not a theoretical risk; it is a profound and immediate national security failure, exemplified by the very official promoting it. The man we discussed earlier, using this tool for national security analysis, is Hamish Hansford, the National Counter Foreign Interference Coordinator. The supreme irony is that the official paid to counter foreign interference is actively exposing sensitive “intergovernmental” and national security analysis to a foreign jurisdiction. What fun times we live in!

The Department’s technical defence that “Contractual arrangements are established with Microsoft” is a hollow and legally naive shield. The US CLOUD Act empowers US authorities to compel data disclosure from any US-based company, like Microsoft, regardless of the data’s physical storage location. The “onshore” hosting of data is a sovereignty illusion; the nationality of the company, not the data center, is what matters. This jurisdictional overreach places Australian government data in a state of legal conflict, rendering the entire “secure” platform a lie.

Implementation Fantasy

The final, fatal flaw of the APS AI Plan is that it is an implementation fantasy. The plan is to be executed by a public service that lacks the skills, the infrastructure, and the track record to do so safely (or at all…).

The plan calls for a full rollout to more than 100,000 public servants. Yet CPSU data reveals that only 6% of public servants have received AI training, and most have not read the available guidance. The government is handing one of the most complex, high-risk technologies in history to a workforce that is mostly untrained in its use.

The guidance they would receive is, itself, useless. The DTA’s “AI Technical Standard” (July 2025) is guidance, not a requirement, and contains zero specific accuracy benchmarks or numerical thresholds, relying instead on subjective, unenforceable terms like “appropriate” levels of accuracy. How appropriate!

Meanwhile, the plan requires finding 100+ SES-level Chief AI Officers by July 2026, a recruitment crisis in a talent-scarce market where the private sector pays 2-3 times the public-sector salary. The likely outcome is that agencies will simply re-label existing CIOs or CTOs, creating title inflation rather than capability uplift.

The plan’s promises of “productivity” and “efficiency” are the central lie justifying its reckless speed. These claims are aspirational projections, contradicted by even a brief look at the evidence.

The OECD’s 2025 “Governing with Artificial Intelligence” report stated plainly that “while the promise is great, there is so far little empirical evidence that AI has or will deliver on these benefits” in public sector settings. The APS plan ignores this, instead relying on unverified, misleading, and industry-funded claims.

  • Fabricated Figures: The plan’s claim of “$19 billion in annual value by 2030 for public sector” is completely unverified and likely false. The Productivity Commission (PC) projects approximately $11.6 billion annually from AI for the entire Australian economy (they suggest $116 billion over the next decade, so a tenth of that as a rough annual guide), not $19 billion for the public sector alone; see the back-of-envelope sums after this list.
  • Misleading Citations: The claim of “4.3 per cent labour productivity growth” omits the PC’s own critical caveats of “considerable uncertainty” and the fact these gains are potential, not guaranteed. This is directly contradicted by meta-analyses showing “NO robust… relationship between AI adoption and aggregate productivity gains”. The claim of a “13% lift” is grossly misleading, as this was the rejected upper bound of a range the PC reviewed, not their actual estimate.
  • Conflicted Sources: The plan leans on figures that appear to come from the “Merom, 2025” report, which is not independent research but an un-peer-reviewed promotional report directly funded by OpenAI. OpenAI has hired Bourke Street Advisory as its Australian lobbyist, with spending increasing from $260,000 in 2023 to $1.76 million in 2024. The company secured two government contracts in 2025 without competitive tender, both as the sole invited bidder. OpenAI’s own “AI in Australia: Economic Blueprint” explicitly pushed Australia toward infrastructure investment using foreign models rather than developing sovereign capability, stating “Building local LLMs [is] almost the distraction.”
  • Contradictory Evidence: The plan’s claims of improved service delivery are unproven. UK studies show only 8% of public sector AI projects demonstrate measurable benefits, and 78% of leaders “struggle to measure AI impacts”. Claims of “helping manage employee workloads” are directly contradicted by empirical research. One study found 77% of employees report AI added to their workload (review, moderation, and learning), while another found AI-exposed workers work longer hours. This is the “Automation Paradox”: AI creates new, often invisible, review and repair work. Harvard Business Review’s research documented the phenomenon of “workslop”: AI-generated output that appears serviceable but lacks substance. Their September 2025 study found 41% of workers have encountered AI-generated workslop, costing nearly 2 hours of rework per instance. The estimated cost averages $186 per employee per month in time spent cleaning up inadequate AI content.
  • Implementation Failure: The plan ignores the micro-macro gap (task-level gains do not translate to aggregate gains) and the consistent massive failure rate of AI projects. Deloitte’s survey across 14 countries found that 78% of government leaders struggle to measure impacts from generative AI, significantly higher than other sectors. This measurement gap creates a vicious cycle where agencies cannot demonstrate return on investment, limiting their ability to justify further investment or identify what actually works. An OECD report concluded that “most implemented cases have not scaled beyond their original contexts” and “there is so far little empirical evidence that AI has or will deliver on these benefits.” The UK experience reveals the structural problem: digital and data projects are 60% more likely to report ‘Red’ status (successful delivery unlikely) than non-tech projects in the Government Major Projects Portfolio, representing £60 billion in programme costs.
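
To make the gap concrete, the back-of-envelope sums below use only the figures cited above. The arithmetic is mine; the 100,000 headcount is simply the plan’s own rollout scale, and scaling HBR’s per-employee “workslop” estimate across it is a rough illustration, not a forecast:

```python
# Rough sums only: all dollar figures are the ones cited above.

pc_decade_estimate = 116e9        # Productivity Commission: ~$116b over a decade, whole economy
pc_annual_economy_wide = pc_decade_estimate / 10
plan_public_sector_claim = 19e9   # the plan: "$19 billion in annual value by 2030" for the public sector

print(f"PC estimate, whole economy, per year:  ${pc_annual_economy_wide / 1e9:.1f}b")
print(f"Plan's claim, public sector alone:     ${plan_public_sector_claim / 1e9:.1f}b")
print(f"Plan claim vs whole-economy estimate:  {plan_public_sector_claim / pc_annual_economy_wide:.1f}x")

# The other side of the ledger: HBR's "workslop" clean-up cost, scaled naively to the rollout.
aps_rollout_headcount = 100_000          # the plan's own "100,000+ public servants"
workslop_cost_per_employee_month = 186   # HBR's per-employee estimate, cited above
annual_cleanup_cost = aps_rollout_headcount * workslop_cost_per_employee_month * 12
print(f"Naive annual clean-up bill:            ${annual_cleanup_cost / 1e6:.0f}m")
```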

The plan also seems to assume AI can be layered on top of existing infrastructure. This is naive. Key agencies like Services Australia (the home of Robodebt!) still run systems from the 1980s-1990s including COBOL-based applications on brittle, old mainframes. If it ain’t broke, don’t fix it. But also don’t layer AI on top of it.

The government’s own Copilot trial proved the AI tools are inaccurate (60% of users made “moderate to significant” edits), unpredictable, and prone to “inappropriately accessing sensitive information”.

The plan is now to connect this known-to-be-faulty black box to known-to-be-brittle legacy systems, and to have it operated by an untrained workforce. Perfection. This is not a productivity gain; it is a time bomb. And it creates a great opportunity for more compounding failure, just like our old friend, Robodebt…

  1. The AI will generate errors (as the trial proved).
  2. The legacy systems will misinterpret, corrupt, or be broken by these errors.
  3. The untrained human-in-the-loop, suffering from automation bias, will be unable to spot or correct the failure.

This is not just a repeat of Robodebt. It is Robodebt at a speed and scale the original scheme’s architects could only have dreamed of! Innovation! And it is being attempted by an organisation with a litany of failures in large-scale IT projects, including the 2016 Census crash, the $27.2M Australian Apprentice Management System write-off, and the scrapped National Biometric Database, to name but a few.

Anyway…

The Australian Public Service AI Plan is not just a flawed plan. It is a document of profound institutional amnesia and policy negligence. It is irrefutable evidence that the government has learned nothing from the “venality, incompetence and cowardice” of the Robodebt scandal. All the key structural failures that enabled Robodebt are replicated:

  • Weak oversight: An AI Review Committee that is basically advisory theatre.
  • Legal vacuum: A deployment window that opens well before mandatory ADM disclosure obligations commence, plus a “voluntary compliance” regime.
  • Australians bearing the burden: Performative transparency and no clear path for review.
  • Pressure for efficiency over accuracy: Rebranded as productivity gains, claims that are unverified, exaggerated, and contradicted by empirical data.

Minister Gallagher’s assurance that AI “is not about bringing AI in to decide that someone gets a payment or not” is a ministerial statement, not enforceable policy. It is the exact same kind of empty, non-binding verbal promise that allowed Robodebt to fester for years while ministers denied its reality.

The critical difference is this: Robodebt was a simple, rules-based system that was, after years of human suffering, eventually proven unlawful. The APS AI Plan will deploy black-boxes that are infinitely more “powerful”, inherently unexplainable, and may never be fully understood or tested in a court. It replaces an illegal system with an incomprehensible one.

This plan is not ready. Complain about it.


Cover Image modified by me. Sorry.