June 2020 Newsletter
MIRI researcher Evan Hubinger reviews “11 different proposals for building safe advanced AI under the current machine learning paradigm”, comparing them on outer alignment, inner alignment, training competitiveness, and performance competitiveness.
Other updates
- We keep being amazed by new shows of support — following our last two announcements, MIRI has received a donation from another anonymous donor totaling ~$265,000 in euros, facilitated by Effective Giving UK and the Effective Altruism Foundation. Massive thanks to the donor for their generosity, and to both organizations for their stellar support for MIRI and other longtermist organizations!
- Hacker News discusses Eliezer Yudkowsky's There's No Fire Alarm for AGI.
- MIRI researcher Buck Shlegeris talks about deference and inside-view models on the EA Forum.
- OpenAI unveils GPT-3, a massive 175-billion-parameter language model that can figure out how to solve a variety of problems without task-specific training or fine-tuning. Gwern Branwen's pithy summary:
GPT-3 is terrifying because it's a tiny model compared to what's possible, trained in the dumbest way possible on a single impoverished modality on tiny data, yet the first version already manifests crazy runtime meta-learning — and the scaling curves still are not bending!
Further discussion by Branwen and by Rohin Shah.
- Stuart Russell gives this year's Turing Lecture online, discussing “provably beneficial AI”.