• Mashup Score: 7

    Nuclear winter visualizations made by Prof. Max Tegmark using state-of-the-art simulation data from these science papers: Lili Xia, Alan Robock, Kim Scherre…

    Tweets with this article
    • This 👇 A nuclear war would cause soot clouds to spread around the world and block out sunlight for several years - wrecking food production. The soot clouds currently menacing North America can be taken as a warning. What a nuclear winter looks like: https://t.co/HQaiikejAu https://t.co/DO9E5aZfDo

  • Mashup Score: 3

    Dan Hendrycks joins the podcast to discuss evolutionary dynamics in AI development and how we could develop AI safely. You can read more about Dan’s work at…

    Tweets with this article
    • On this FLI podcast, @GusDocker is joined by @DanHendrycks, director of @ai_risks. Dan looks at AI from the perspective of evolution, and explores how we might develop AI safely. Watch the full episode here: https://t.co/WbUlF8YC4Y https://t.co/cIUGCyiEYn

  • Mashup Score: 6

    Despite comparisons of artificial intelligence to nuclear bombs, the US approach to regulating bioweapons and biotechnology is better suited to AI.

    Tweets with this article
    • AI is at a crossroads. Two directions are on offer: the nuclear road and the bio road. Regulators should take AI down the biotech road, argues @FLIxrisk's Emilia Javorsky for @Gizmodo. Why? (🧵1/5) Read the piece here: https://t.co/u3jfVCXz9V

  • Mashup Score: 1

    Roman Yampolskiy joins the podcast to discuss various objections to AI safety, impossibility results for AI, and how much risk civilization should accept fro…

    Tweets with this article
    • In the latest episode of the FLI podcast, @GusDocker interviews @romanyam about skepticism regarding AI risk and how best to judge the amount of risk society should accept from emerging technologies. Catch the full episode here: https://t.co/zUM9nyZhzy https://t.co/2PRIp0yv2m

  • Mashup Score: 0

    Tweets with this article
    • A new report by @ArmsControlNow's @sbugos explores how emerging technologies could increase the possibility of nuclear weapons use by • increasing the pace of conflict • increasing uncertainty • reducing human input, and • incentivizing arms racing https://t.co/uyXLeSWwGL

  • Mashup Score: 8

    This post discusses how rogue AIs could potentially arise, in order to stimulate thinking and investment in both technical research and societal reforms aimed at minimizing such catastrophic outcomes.

    Tweets with this article
    • Turing Award winner and deep learning pioneer Yoshua Bengio presents "a set of definitions, hypotheses and resulting claims about AI systems which could harm humanity" and then discusses "the possible conditions under which such catastrophes could arise." https://t.co/JzxiqlMByf

  • Mashup Score: 3

    This is “Is halting AI development the right aim for Europe’s AI policy?” by Stiftung Neue Verantwortung on Vimeo, the home for high quality videos…

    Tweets with this article
    • FLI President @tegmark recently spoke to @snv_berlin's @pegahbyte about our open letter calling for a pause on giant AI experiments and how Europe should deal with potentially powerful and risky AI models. Watch the full discussion here: https://t.co/NH2XSQRRyB

  • Mashup Score: 0

    The Future of Life Institute (FLI) is hiring a Chief Financial Officer to manage our finances and oversee our investments. The CFO will report directly to FLI’s Executive Director. FLI works to reduce global catastrophic risks from transformative technologies and develop optimistic yet realistic visions of the future.

    Tweets with this article
    • We're hiring! FLI is hiring a Chief Financial Officer to manage our finances and serve as a strategic financial advisor to the Executive Director and Board of Directors. 📍Location: Remote 📆 Deadline: June 11, 2023 👉Apply here: https://t.co/8YtY5rYrlE https://t.co/XPye4oxqjX

  • Mashup Score: 3

    The swift growth of artificial intelligence technology could put the future of humanity at risk, according to most Americans surveyed in a Reuters/Ipsos poll published on Wednesday.

    Tweets with this article
    • "We view the current moment similar to the beginning of the nuclear era, and we have the benefit of public perception that is consistent with the need to take action." - FLI's US policy lead Landon Klein on a poll about American attitudes towards AI. https://t.co/9cUxhHq9So