• Mashup Score: 3

    At a 2017 FLI conference, AI scientists and researchers developed the highly influential Asilomar AI governance principles. Add your signature.

    Tweets with this article
    • We believe that powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. We share this view with over 5000 leading scientists and concerned individuals. (6/8) https://t.co/acibvc1AdX

  • Mashup Score: 0

    At a 2017 FLI conference, AI scientists and researchers developed the highly influential Asilomar AI governance principles. Add your signature.

    Tweets with this article
    • This is a good time to replug the Asilomar AI principles that thousands of scientists have signed: https://t.co/acibvc1AdX "5) Race Avoidance: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards." https://t.co/qKuJhPmewl

  • Mashup Score: 0

    Show notes: On this special episode of the podcast, Emilia Javorsky interviews Sean Ekins about regulating AI drug discovery. Timestamps: 00:00 Introduction 00:31 Ethical guidelines and regulation of AI drug discovery 06:11 How do we balance innovation and safety in AI drug discovery? 13:12 Keeping dangerous chemical data safe 21:16 Sean’s personal story of voicing concerns about AI drug…

    Tweets with this article
    • AI models can now create deadly pathogens as easily as life-saving drugs. How can the pharma industry, computer scientists & lawmakers mitigate risks while promoting innovation? Listen to @ejjavorsky & @collabchem's conversation to find out: https://t.co/MB0lVqOLwl