Related: This article examines state-level AI governance in Australia. For analysis of the Commonwealth’s AI governance failures, see Slop for the People. For context on the newly announced federal AI Safety Institute, see Australia’s AI Safety Institute: Lessons from the UK and US.
The Commonwealth government attracts most of the attention on AI policy, but state and territory governments are also deploying AI systems in education, health, policing, transport, and social services.
Each state governs AI differently. Some are ahead of the Commonwealth. Others barely engage with it. None do it well.
New South Wales
NSW is the most active Australian jurisdiction on AI governance. NSW’s Artificial Intelligence Strategy sets “the direction on the development and use of AI by NSW Government agencies”. It’s been in place since 2022, well before the Commonwealth’s recent activity.
NSW agencies must follow the AI Ethics Policy, updated on 1 July 2024, and the AI Assessment Framework (AIAF), revised the same month to address “new and emerging risks and opportunities”. NSW departments must “establish minimum governance and assurance standards” for AI deployments.
NSW also released an Agentic AI Guide for AI systems that take autonomous actions.
NSW had mandatory policies before the federal government released its first APS AI Plan. The state’s been working on this for years.
NSW’s policies have the same problems as the Commonwealth’s. They’re not legislation—agencies must comply, but there’s no statutory enforcement. The frameworks check process compliance, not outcomes. NSW won’t tell you what AI systems it actually uses.
Queensland
Queensland released its Artificial Intelligence Governance Policy in September 2024. The policy “focuses on ensuring agency strategic planning for AI demonstrates a structured and consistent approach when evaluating AI solutions for transparency, accountability, and risk.”
Queensland also warned about Microsoft Copilot:
“Data sources accessed by M365 Copilot may contain personal, protected, sensitive, or official information that has been misclassified or not secured appropriately. This may lead to the uncontrolled or unauthorised exposure of data.”
Queensland takes Copilot more seriously than the Commonwealth. The state warns about data exposure risks that the federal Copilot trial found but ignored.
Queensland’s policy is less comprehensive than NSW’s, but at least it addresses actual technical risks.
Victoria
Victoria adopted the National Framework for the Assurance of Artificial Intelligence in Government, released by Australia’s Data and Digital Ministers on 21 June 2024. The framework “sets a governance template for Victorian public sector organisations” covering AI adoption, data governance, risk, alignment with AI standards (AS ISO/IEC 42001:2023, AS ISO/IEC 23894:2023, AS ISO/IEC 38507:2022), and procurement.
Victoria didn’t develop its own policy. The state just adopted the national framework.
Other States and Territories
South Australia has an AI Strategy focused on economic opportunity and innovation, not governance and risk.
Western Australia has almost no AI-specific governance publicly available.
Tasmania, the ACT, and the Northern Territory have almost nothing publicly visible.
The Coordination Problem
Australian AI governance is not coordinated between levels of government.
States deploy AI in schools, hospitals, policing, and transport. The Commonwealth’s governance covers none of it.
Citizens deal with both levels of government. You might get an AI decision from Centrelink and your state housing authority in the same week. Different governance, different risks, different transparency.
Nothing prevents states from using systems the Commonwealth won’t touch. Assessment methodologies aren’t consistent across jurisdictions. Citizens’ rights differ depending on which government uses AI on them. Lessons learned in one jurisdiction aren’t shared with others.
The National Framework is voluntary and high-level. States can ignore it.
What Happens When Laws Conflict?
The Commonwealth’s Privacy Act automated decision-making (ADM) provisions (commencing December 2026) impose transparency requirements on federal agencies. State agencies using AI on the same citizens have no equivalent obligations.
If the Commonwealth eventually regulates “high-risk AI”, that regulation won’t apply to state government AI.
This creates regulatory arbitrage. If the Commonwealth makes AI governance harder, states become attractive venues for deployment. If states are more permissive, Commonwealth policy is undermined.
The Constitution doesn’t help. The Commonwealth has legislative power over corporations, trade, and a handful of other specific areas, but not over state governments’ own use of AI. States have to choose to align.
The Service Delivery Reality
State governments run services that matter to people’s lives:
- Education: student assessment, resource allocation, learning support.
- Health: hospital resource allocation, diagnostic support, treatment recommendations.
- Policing: predictive policing, facial recognition, evidence analysis.
- Transport: traffic management, public transport scheduling, road safety.
- Child protection: risk assessment, case prioritisation, placement decisions.
- Housing: eligibility assessment, allocation, waitlist management.
AI is already deployed in several of these areas. NSW Police has used facial recognition. Victorian child protection has used risk assessment tools. Queensland Health has explored diagnostic support.
The governance frameworks reviewed above don’t consistently apply to these deployments. Documentation is inconsistent. Public transparency is minimal.
What States Should Do
Legislate. NSW has the most developed framework, but it’s just policy. Policy can be ignored or changed whenever. Laws can’t.
Make AI systems public. Citizens should know what AI their state government uses on them. NSW won’t tell you. That needs to change.
Coordinate nationally. Eight different frameworks mean gaps and inconsistencies. States should work through COAG/National Cabinet to develop actual national standards.
Reflect local context. Queensland’s tropical health challenges differ from Victoria’s urban density challenges. Don’t copy-paste national templates.
Fund enforcement. Governance without enforcement is just paper. States need oversight bodies with real powers and real budgets, not self-assessment checklists.
Involve affected communities. State AI deployments hit vulnerable populations hardest: people in social housing, child protection clients, people dealing with police. These communities should have a say in how AI is used on them.
The Federal-State Gap
The Commonwealth publishes strategies, establishes institutes, and attracts media coverage. State governments quietly deploy AI in schools, hospitals, and police stations.
AI governance failures will happen at the state level first. Discriminatory outcomes. Harmful decisions. State governance is less developed, less resourced, less visible.
Federal AI governance alone won’t work. The states matter. Right now, most of them aren’t governing AI seriously.
State AI governance documentation: NSW’s Digital.NSW, Queensland’s ForGov, Victoria’s VGSO. Other states don’t make it easy to find.
Cover photo by Joey Csunyo on Unsplash.
