1. Future of Life Institute: The Future of Life Institute is a nonprofit organization that focuses on reducing existential risks from AI and other emerging technologies. They provide resources and information about AI safety and risks, including a list of dangerous AI applications.
2. Center for the Study of Existential Risk: The Center for the Study of Existential Risk (CSER) is a research center at the University of Cambridge that studies global catastrophic risks, including risks from AI. They provide research and policy recommendations to mitigate these risks.
3. OpenAI: OpenAI is a research organization that aims to ensure that artificial intelligence is developed in a safe and beneficial way. They research and develop AI technologies and also provide resources and information about AI safety and risks.
4. IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems: The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems is a global effort to ensure that AI and autonomous systems are developed in a safe and ethical way. They provide guidelines and resources for the responsible development of AI.
5. AI Now Institute: The AI Now Institute is a research institute at New York University that studies the social implications of AI. They publish research on the ethical and social implications of AI, including its risks in areas such as criminal justice and labor automation.
6. The Guardian: The Guardian is a news organization that covers a wide range of topics, including AI and its risks. They have a section dedicated to AI, where you can find articles about the risks and dangers of AI, as well as updates on developments in AI research.
7. The Machine Ethics Podcast: The Machine Ethics Podcast is a podcast that explores the ethical and safety implications of AI. They interview experts in the field and discuss topics such as AI transparency, fairness, and control.
8. AI Impacts: AI Impacts is a research organization that studies the long-term impacts of AI on society. They research and provide information on the potential risks and dangers of AI, such as the possibility of an intelligence explosion.
9. The Center for Human-Compatible AI: The Center for Human-Compatible AI is a research center at the University of California, Berkeley, that aims to ensure that AI is developed in a way that is safe and beneficial for humans. They provide research and resources on the safety and control of AI.
10. The Partnership on AI: The Partnership on AI is a nonprofit organization that brings together academics, researchers, and industry professionals to collaborate on the development of safe and beneficial AI. They provide resources and information on the ethics and safety of AI.
11. AI Alignment Forum: The AI Alignment Forum is an online community of researchers, academics, and enthusiasts who discuss and debate topics related to the safety and control of AI. They provide resources and information on topics such as AI alignment, corrigibility, and value alignment.
12. The Verge: The Verge is a news organization that covers a wide range of topics, including AI and its risks. They have a section dedicated to AI, where you can find articles about the risks and dangers of AI, as well as updates on developments in AI research.
13. OpenAI GPT-3 Concerns: OpenAI's GPT-3 language model is one of the most advanced AI models currently available, and it has raised concerns about the potential dangers of AI. OpenAI has published a paper detailing their concerns and outlining steps to mitigate these risks.
14. The Bulletin of the Atomic Scientists: The Bulletin of the Atomic Scientists is a publication that covers global security issues, including the risks associated with emerging technologies such as AI. They publish articles and analysis on the potential dangers of AI, including the risks of autonomous weapons and the impact of AI on nuclear weapons.
15. Center for Security and Emerging Technology: The Center for Security and Emerging Technology (CSET) is a research organization at Georgetown University that studies the national security implications of emerging technologies. They provide research and analysis on the potential risks of AI and other emerging technologies.
16. The Machine Intelligence Research Institute: The Machine Intelligence Research Institute (MIRI) is a nonprofit research organization that focuses on reducing the risks associated with AI. They provide research and resources on topics such as AI alignment, decision theory, and decision-making in complex environments.
17. The Institute for Ethical AI and Machine Learning: The Institute for Ethical AI and Machine Learning is a nonprofit organization that promotes ethical and responsible AI development. They provide resources and information on AI safety and ethics, including the risks of AI and the importance of transparency and accountability in AI development.
18. The Center for Humane Technology: The Center for Humane Technology is a nonprofit organization that aims to align technology with human values. They provide resources and information on the potential dangers of AI, including its impact on mental health, privacy, and democracy.