- Research
At AiXist we are identifying, compiling and organising current research on the implications of AI in nuclear, biological and autonomous weapons, including potential risks and vulnerabilities.
Enhanced Understanding: Deeper insight into the existential risks posed by AI in nuclear, biological and autonomous weapons, informed by rigorous research and analysis.
To enhance accessibility and collaboration, we’ve developed the Knowledge Depository—a dynamic platform that visualises and organises this critical research. By offering an interactive and comprehensive resource, the Depository empowers stakeholders to explore key themes, identify gaps, and contribute to safeguarding humanity’s future.
- Policy
We are engaging with governments, international bodies, and stakeholders to advocate for responsible AI development and the establishment of robust arms control mechanisms.
Policy Impact: Advocacy and engagement leading to the development of responsible policies, international agreements, and ethical guidelines to mitigate AI-related existential risks.
- Outreach
We are raising global awareness about AI-related existential risks in nuclear, biological and autonomous weapons through public webinars, workshops, and knowledge dissemination.
Global Awareness: Broader public understanding of the potential consequences of AI advancements in the security domain, and of the importance of responsible AI development.
- Collaboration
We are establishing a network of experts, policymakers, academics, and organisations to facilitate collaboration, information sharing, and the development of risk mitigation strategies.
Risk Mitigation: Practical risk mitigation strategies and best practices for integrating AI into security systems, emphasising verification, accountability, and ethical considerations.