December 2018 Newsletter
Edward Kmett has joined the MIRI team! Edward is a prominent Haskell developer who popularized the use of lenses for functional programming, and he currently maintains many widely used libraries in the Haskell ecosystem.
I’m also happy to announce another new recruit: James Payor. James joins the MIRI research team after three years at Draftable, a software startup. He previously studied math and CS at MIT, and he holds a silver medal from the International Olympiad in Informatics, one of the most prestigious CS competitions in the world.
In other news, today we released a new edition of Rationality: From AI to Zombies with a fair number of textual revisions and (for the first time) a print edition!
Finally, our 2018 fundraiser has passed the halfway mark on our first target! (And there’s currently $136,000 available in dollar-for-dollar donor matching through the Double Up Drive!)
- A new paper from Stuart Armstrong and Sören Mindermann: “Occam’s Razor is Insufficient to Infer the Preferences of Irrational Agents.”
- New AI Alignment Forum posts: Kelly Bettors; Bounded Oracle Induction
- OpenAI’s Jack Clark and Axios discuss research-sharing in AI, following up on our 2018 Update post.
- A throwback post from Eliezer Yudkowsky:Should Ethicists Be Inside or Outside a Profession?
- New from the DeepMind safety team: Jan Leike’s Scalable Agent Alignment via Reward Modeling (arXiv) and Victoria Krakovna’s Discussion on the Machine Learning Approach to AI Safety.
- Two recently released core Alignment Forum sequences: Rohin Shah’s Value Learning and Paul Christiano’s Iterated Amplification.
- On the 80,000 Hours Podcast, Catherine Olsson and Daniel Ziegler discuss paths for ML engineers to get involved in AI safety.
- Nick Bostrom has a new paper out: “The Vulnerable World Hypothesis.”