On 25 November 2025, Minister for Industry and Innovation Tim Ayres announced the establishment of the Australian AI Safety Institute (AISI). With a $29.9 million commitment and operations commencing “early 2026”, Australia will become the latest nation to join the International Network of AI Safety Institutes.

The UK’s AI Safety Institute (now rebranded as the “AI Security Institute”) has been operational for over two years. The US AI Safety Institute has been gutted by the Trump administration and rebranded as the “Center for AI Standards and Innovation”, with “safety” explicitly excised from both its name and mission. Australia is entering a field where the two supposed leaders are heading in opposite directions.

The UK: Actually Doing Things

The UK AI Safety Institute was established in November 2023, following the Bletchley Park AI Safety Summit. With an initial £100 million investment (the largest public AI safety commitment globally at the time), the UK has had a meaningful head start.

They’ve tested more than 30 of the world’s most advanced AI models. Joint pre-deployment evaluations with the US (back when the US cared about this) covered OpenAI’s o1 model and Anthropic’s latest systems. End-to-end biosecurity red-teaming with OpenAI and Anthropic revealed dozens of vulnerabilities, including new universal jailbreak paths. They ran the largest study of backdoor data poisoning to date with Anthropic.

In May 2024, the UK released Inspect, an open-source evaluation framework under an MIT licence. Inspect provides standardised testing techniques, over 100 pre-built evaluations, and tools for monitoring and visualising results. As Ian Hogarth, chair of the UK institute, put it: “Successful collaboration on AI safety testing means having a shared, accessible approach to evaluations.” The framework is now used by governments, companies, and academics globally.
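To make that concrete, here is a minimal sketch of what an Inspect task can look like, based on the framework’s published Python API. The dataset and scorer shown are illustrative, and module details may differ between versions.

```python
# Minimal Inspect evaluation: a dataset of samples, a solver that queries
# the model under test, and a scorer that checks each response.
from inspect_ai import Task, task
from inspect_ai.dataset import Sample
from inspect_ai.scorer import includes
from inspect_ai.solver import generate


@task
def sample_eval():
    return Task(
        dataset=[
            Sample(
                input="What is the capital of Australia?",
                target="Canberra",
            )
        ],
        solver=generate(),   # send each prompt to the model being evaluated
        scorer=includes(),   # mark correct if the response contains the target
    )
```

A task like this can then be run against any supported model from the command line (for example, inspect eval sample_eval.py --model openai/gpt-4o), with results logged for later inspection. Real evaluations swap in larger datasets, multi-step solvers, and domain-specific scorers, but the structure is the same.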

The December 2025 Frontier AI Trends Report drew on two years of evaluations to document capability trajectories. Success rates on cybersecurity tasks rose from under 9% in 2023 to around 50% in 2025, and for the first time a model completed an expert-level cyber task that would typically demand up to 10 years of professional experience. Software engineering task completion rose from below 5% to over 40%.

Testing systems, publishing findings, building shared infrastructure, identifying risks before they materialise at scale—this is what an AI safety institute looks like when it works.

The US: What Happens When Politics Wins

The US AI Safety Institute was established in November 2023 under Biden’s Executive Order on AI, housed within NIST. By August 2024, it had signed MOUs with Anthropic and OpenAI for pre-deployment access to major new models. The AI Safety Institute Consortium grew to over 290 members. In November 2024, they launched the Testing Risks of AI for National Security (TRAINS) Taskforce.

Within his first week back in office, Trump signed an executive order revoking Biden’s AI governance directives. The US refused to sign the final communique at the February 2025 AI Action Summit in Paris. On 3 June 2025, Commerce Secretary Howard Lutnick announced the transformation of the AI Safety Institute into the “Center for AI Standards and Innovation” (CAISI).

Lutnick was explicit: “For far too long, censorship and regulations have been used under the guise of national security. Innovators will no longer be limited by these standards.”

CAISI’s new mission is to “guard against burdensome and unnecessary regulation of American technologies by foreign governments” and “ensure US dominance of international AI standards”. The focus shifted from understanding and mitigating risks to “enhancing US competitiveness” and preventing other countries from regulating American AI companies.

The word “safety” was deliberately removed because it implied constraints on industry, as TechPolicy.Press noted. The institute that was supposed to test frontier AI systems for catastrophic risks is now primarily concerned with helping those systems reach market faster.

Where Does Australia Fit?

Australia’s $29.9 million commitment sits awkwardly between these two models. It’s a fraction of the UK’s £100 million, but at least it exists.

The ministerial announcement says the AISI will “provide the capability to assess the risks of this technology dynamically over time” and that “we need to make sure we are keeping Australians safe from any malign uses of AI”.

The AISI will join the International Network of AI Safety Institutes, which now includes Australia, Canada, the EU, France, Japan, Kenya, Korea, Singapore, the UK, and (nominally) the US. This network was launched at the Seoul Summit in May 2024 and has since been renamed the “International Network for Advanced AI Measurement, Evaluation and Science” (presumably to keep the Americans onside).

Australia isn’t building frontier AI systems. The UK and US institutes exist partly to test their own companies’ models before deployment. The UK has DeepMind. The US has OpenAI, Anthropic, Google, Meta, and a dozen others. Australia has imported models running on Microsoft Azure.

This creates a fundamental question about scope. Will Australia’s AISI test models before Australian deployment, which in practice means evaluating systems after they have already been released globally (less prevention than documentation)? Will it contribute to international evaluation efforts, which requires deep technical expertise that takes years to build? Or will it focus on Australian-specific deployment risks, a narrower task that might be more tractable but requires different capabilities?

The job ads currently posted for the AISI suggest the government is thinking about this. They’re recruiting for roles covering “CBRN misuse, enhanced cyber capabilities, loss-of-control scenarios, information integrity and influence risks, and broader systemic risks arising from the deployment of increasingly capable general-purpose AI systems”.

That’s a broad remit for a $29.9 million institute that won’t be operational until 2026.

What Australia Should Learn

The UK’s Inspect framework exists because they invested in software engineers who could build it. Their evaluations have teeth because they hired researchers who know how to probe AI systems. Over 30 technical staff, including senior alumni from OpenAI, Google DeepMind, and Oxford, weren’t hired to write policy papers. They were hired to do science. Australia needs to decide if it’s building a research organisation or a policy shop, and fund accordingly.

The US AI Safety Institute collapsed because it existed at executive discretion. When the executive changed, so did the mission. If Australia wants an AISI that survives changes of government (and given the Coalition’s likely approach to AI regulation, this matters), it needs legislative grounding, not just ministerial announcements.

International cooperation requires something to offer. The UK can contribute evaluations, open-source tools, and technical findings to international networks. The US (under Biden) could offer access to frontier labs and testing infrastructure. What does Australia bring? If the answer is “a willingness to participate”, that’s insufficient. Australia needs a niche, whether that’s specific risk domains, deployment context expertise, or evaluation methodologies suited to smaller nations importing AI capabilities.

The Timeline Problem

Australia’s AISI won’t be operational until early 2026. By then, the UK will have been running for well over two years. The US will have completed its transformation into an industry-friendly standards body. The GovAI platform will already be live across the Australian Public Service. Multiple federal agencies will have deployed AI systems in citizen-facing services.

The AISI is arriving after the decisions have been made. It’s being established to provide “expert capability to monitor, test and share information”, but the technology it’s supposed to monitor will already be embedded in government operations.

The UK AISI was established after GPT-4 was released, and still managed to build valuable capabilities. Catching up from behind is possible, but it requires the institute to move fast, hire well, and resist pressure to become another advisory body producing reports nobody reads.

What Actually Matters

The test of Australia’s AISI won’t be its press releases or international memberships. It will be whether it can answer basic questions that no Australian institution can currently answer. What capabilities do the AI systems deployed in Australian government services actually have? What are their failure modes? How do they perform on Australian-specific contexts—Indigenous names, regional accents, local regulatory frameworks? What happens when they’re wrong?

In two years, the Australian AISI will have either tested models, published findings, and built tools that other organisations actually use—or it will have produced glossy reports about “AI opportunities” and “responsible innovation frameworks” while the real decisions happen elsewhere.

The UK has shown what’s possible. The US has shown what happens when politics overrides substance. Australia gets to choose.


The UK AISI’s Inspect framework and Frontier AI Trends Report are both publicly available. The US AISI’s transformation into CAISI is documented in Commerce Department statements. Australia’s AISI announcement is here.

Cover image: Landru revealed, a screenshot from the Star Trek: The Original Series episode “The Return of the Archons”.