Our Work
AiXist was founded on the recognition of the existential risks posed by AI development, particularly in the context of nuclear, biological, and autonomous weapons. As AI technologies continue to advance, so too do the associated threats, including accidental launches, unintended escalations, and ethical dilemmas. Our consortium aims to collaborate, educate and advocate for a global treaty mitigating AI-related existential risks.

At AiXist, we are identifying, compiling and organising current research on the implications of AI in nuclear, biological and autonomous weapons, including potential risks and vulnerabilities.

Enhanced Understanding: A deeper understanding of the existential risks posed by AI in nuclear, biological and autonomous weapons, informed by rigorous research and analysis.

We are engaging with governments, international bodies, and stakeholders to advocate for responsible AI development and the establishment of robust arms control mechanisms.

Policy Impact: Advocacy and engagement leading to the development of responsible policies, international agreements, and ethical guidelines to mitigate AI-related existential risks.

We are raising global awareness about AI-related existential risks in nuclear, biological and autonomous weapons through public webinars, workshops, and knowledge dissemination.

Global Awareness: Increased global awareness and knowledge about the potential consequences of AI advancements in the realm of security and the importance of responsible AI development.

We are establishing a network of experts, policymakers, academics, and organisations to facilitate collaboration, information sharing, and the development of risk mitigation strategies.

Risk Mitigation: Practical risk mitigation strategies and best practices for integrating AI into security systems, emphasising verification, accountability, and ethical considerations.