COVID-19 & Catastrophic Risk – Future of Life Institute
What can we learn from COVID-19 about pandemic preparedness, and preparedness for other catastrophic risks? Here’s what the experts had to say.
Nuclear winter visualizations made by Prof. Max Tegmark using state-of-the-art simulation data from these scientific papers: Lili Xia, Alan Robock, Kim Scherre…
Dan Hendrycks joins the podcast to discuss evolutionary dynamics in AI development and how we could develop AI safely. You can read more about Dan’s…
Despite comparisons of artificial intelligence to nuclear bombs, the US approach to regulating bioweapons and biotechnology is better-suited to AI.
View this post as a PDF. The view that “mitigating the risk of extinction from AI should be a global […]
Roman Yampolskiy joins the podcast to discuss various objections to AI safety, impossibility results for AI, and how much risk civilization should accept fro…
This post discusses how rogue AIs could arise, in order to stimulate thinking and investment in both technical research and societal reforms aimed at…
“Is halting AI development the right aim for Europe’s AI policy?” — a video by Stiftung Neue Verantwortung, hosted on Vimeo.
The Future of Life Institute (FLI) is hiring a Chief Financial Officer to manage our finances and oversee our investments. The CFO will report directly…
The rapid growth of artificial intelligence technology could put the future of humanity at risk, according to most Americans surveyed in a Reuters/Ipsos poll published…