Survey Assessing Risks from AI

Understanding Australian public perceptions of AI risks and support for AI governance.
SARA surveys Australian adults about their concerns regarding AI risks, support for AI development and regulation, and priority governance actions to address these risks.


About SARA

The Survey Assessing Risks from AI (SARA) is an annual representative survey of Australian adults investigating:

  • Public perceptions of AI risks (from current harms to potential catastrophic risks)
  • Support for AI development and regulation
  • Priority governance actions to address AI risks

SARA generates ‘evidence for action’ to help public and private actors make informed decisions about safer AI development and use.

This project is a collaboration between Ready Research and The University of Queensland.

Latest Findings

2025 Survey (933 Australians):

  • Australians expect AI to be as safe as commercial aviation—at least 4,000 times safer than current risk estimates
  • There is strong public demand for government to better manage AI risks
  • Many proposed risk controls would increase public trust in AI

Explore the full 2025 survey findings →

2024 Survey (1,141 Australians):

  • Australians judge “preventing dangerous and catastrophic outcomes from AI” to be the #1 priority for the Australian Government on AI
  • 9 in 10 Australians support creating a new regulatory body for AI
  • 8 in 10 support the idea that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”

Explore the full 2024 survey findings →

What This Means for Australia

These findings reveal a clear public mandate for stronger AI governance in Australia. Australians expect the same rigorous safety standards for AI that we apply to aviation and other critical technologies.

The research shows that:

  • Public expectations for AI safety are high: Australians want AI systems to meet world-class safety standards
  • Government action is needed: There is broad support for regulatory intervention to manage AI risks
  • Trust can be built: Implementing appropriate safety controls would increase public confidence in AI technology

This evidence base can inform policy decisions, regulatory frameworks, and governance approaches for AI in Australia.

AI Governance

AI governance encompasses the norms, policies, laws, processes, and institutions that guide responsible decision-making about AI development, deployment, and use. Effective governance is crucial for managing both current harms and potential catastrophic risks from AI, including risks from misuse, accidents, or loss of control.

Australian Organisations

Good Ancestors Policy — A policy advocacy organisation focused on AI safety and governance to ensure beneficial outcomes for future generations.

Tech Policy Design Institute — An independent Australian institute conducting research and advocacy on technology policy, including AI governance.

Human Technology Institute — Based at the University of Technology Sydney, this institute conducts applied research and consulting to support corporate and government decision-making about AI.

Centre for AI and Digital Ethics — Based at the University of Melbourne, this centre researches ethical, technical, regulatory, and legal issues relevant to AI.

Gradient Institute — A Sydney-based independent institute conducting applied research and consulting to improve the safety, ethics, accountability, and transparency of AI systems.

International Organisations

Centre for the Governance of AI — This organisation conducts and convenes research dedicated to helping humanity navigate the transition to a world with advanced AI.

BlueDot Impact — Offers cohort-based courses including AI Governance, covering the policy landscape, regulatory tools, and institutional reforms needed for beneficial AI outcomes.

The Institute for AI Policy and Strategy — This organisation focuses on understanding and managing risks from advanced AI systems, with emphasis on AI policy and standards, compute governance, and international governance.

Contact

Contact Dr Alexander Saeri to discuss the research project and its findings.


