AiXist unites a broad spectrum of experts, organisations, and stakeholders to collectively tackle the existential risks posed by artificial intelligence (AI) development and its convergence with advanced weapon systems, specifically nuclear, biological and autonomous weapons.
We envision a world where humanity harnesses AI for positive purposes, ensuring that these technologies contribute to global stability rather than undermine it or, at worst, contribute to our extinction.
Collaborate, educate, and advocate towards achieving a global treaty mitigating AI-related existential risks.
- Objectives
- Research and Analysis
Identify, compile, and organise current research on the implications of AI in nuclear, biological and autonomous weapons, including potential risks and vulnerabilities.
- Policy Advocacy
Engage with governments, international bodies, and stakeholders to advocate for responsible AI development and the establishment of robust arms control mechanisms.
- Education and Awareness
Raise global awareness about AI-related existential risks through public webinars, workshops, and knowledge dissemination.
- Stakeholder Engagement
Establish a network of experts, policymakers, academics, and organisations to facilitate collaboration, information sharing, and the development of risk mitigation strategies.
- Global Treaty Initiative
Work diligently to achieve a comprehensive international treaty that addresses the responsible development and use of AI in nuclear, biological and autonomous weapons.
- Values
- Responsibility
We believe in the responsible development and use of AI to minimise risks and maximise benefits for humanity.
- Collaboration
We prioritise collaboration and cooperation among diverse stakeholders to address complex challenges.
- Transparency
We are committed to transparency in our actions, decisions, and communications.
- Ethical Considerations
We uphold ethical principles and strive to promote AI development that respects the dignity and welfare of the planet and its people.
AiXist was founded on the recognition of the existential risks posed by AI development, particularly in the context of nuclear, biological, and autonomous weapons. As AI technologies continue to advance, so too do the associated threats, including accidental launches, unintended escalations, and ethical dilemmas. Our consortium collaborates, educates, and advocates towards achieving a global treaty mitigating AI-related existential risks.
- Focus Areas
- AI Development
Implications of AI development on existential risks, including potential dangers, challenges, and mitigation strategies.
- AI & Nuclear Weapons
Intersection between AI and nuclear weapons, including the risks of accidental launches, unintended escalations, and ethical dilemmas.
- AI & Biological Weapons
Risks posed by AI in the context of biological weapons, such as the potential for AI-driven bioterrorism or inadvertent release of biological agents.
- AI & Autonomous Weapons
Analysis of AI-powered autonomous weapons systems, including concerns about decision-making algorithms, human oversight, and accountability.
- Activities
- Research
Conduct comprehensive studies and scenario analyses to evaluate the risks posed by AI in nuclear, biological and autonomous weapons, producing policy-relevant insights.
- Policy
Organise and participate in international policy forums, workshops, and conferences to develop ethical frameworks, guidelines, and international agreements related to AI-driven existential risks.
- Public Awareness Campaigns
Host public seminars, webinars, and media campaigns to disseminate knowledge about AI’s role in existential risks, featuring experts and thought leaders.
- Collaborative Projects
Facilitate collaborative research projects and innovation initiatives to develop practical solutions for responsible AI integration in security contexts.
- Treaty Advocacy
Lead a global initiative to gather support from governments and organisations to create a binding treaty addressing AI-related existential risks in nuclear, biological and autonomous weapons.
- Outcomes
- Enhanced Understanding
A deeper understanding of the existential risks posed by AI in nuclear, biological and autonomous weapons, informed by rigorous research and analysis.
- Policy Impact
Advocacy and engagement leading to the development of responsible policies, international agreements, and ethical guidelines to mitigate AI-related existential risks.
- Global Awareness
Increased global awareness and knowledge about the potential consequences of AI advancements in the realm of security and the importance of responsible AI development.
- Risk Mitigation
Practical risk mitigation strategies and best practices for integrating AI into security systems, emphasising verification, accountability, and ethical considerations.
- Global Treaty Achievement
The successful negotiation and implementation of a global treaty that addresses AI-related existential risks in nuclear, biological and autonomous weapons.
- Why AiXist
- Unique Opportunity
Joining the consortium provides a unique opportunity to shape the future of AI and global security.
- Access to Expertise
Gain access to a diverse community of experts and organisations dedicated to addressing AI-related existential risks.
- Impactful Collaboration
Collaborate with stakeholders from around the world to develop practical solutions and advocate for responsible AI development.
Your support is vital to our mission of mitigating existential risks posed by AI development. With your donation, we can advance crucial research, advocacy efforts, and outreach initiatives. Together, we can build a more secure future for humanity. Donate today and make a difference.
Stay up-to-date with the latest news, research, and events from AiXist by subscribing to our newsletter. Be the first to know about our initiatives, partnerships, and opportunities for involvement. Join our community and help shape the future of AI governance.