All MIRI Publications
Recent and Forthcoming Papers
Inadequate Equilibria: Where and How Civilizations Get Stuck
E Yudkowsky (2017)
When should you think that you may be able to do something unusually well? When you’re trying to outperform in a given area, it’s important to have a sober understanding of your relative competencies. The story only ends there, however, if you’re fortunate enough to live in an adequate civilization.
Eliezer Yudkowsky’s Inadequate Equilibria is a sharp and lively guidebook for anyone questioning when and how they can know better, and do better, than the status quo. Freely mixing debates on the foundations of rational decision-making with tips for everyday life, Yudkowsky explores the central question of when we can (and can’t) expect to spot systemic inefficiencies, and exploit them.
Rationality: From AI to Zombies
E Yudkowsky (2015)
When human brains try to do things, they can run into some very strange problems. Self-deception, confirmation bias, magical thinking—it sometimes seems our ingenuity is boundless when it comes to shooting ourselves in the foot.
Map and Territory and the rest of the Rationality: From AI to Zombies series ask what a “martial art” of rationality would look like. In this series, Eliezer Yudkowsky explains the findings of cognitive science, and the ideas of naturalistic philosophy, that help provide a useful background for understanding MIRI’s research and for generally approaching ambitious problems.
Smarter Than Us: The Rise of Machine Intelligence
S Armstrong (2014)
What happens when machines become smarter than humans? Humans steer the future not because we’re the strongest or the fastest, but because we’re the smartest. When machines become smarter than humans, we’ll be handing them the steering wheel. What promises—and perils—will these powerful machines present? Stuart Armstrong’s new book navigates these questions with clarity and wit.
Facing the Intelligence Explosion
L Muehlhauser (2013)
Sometime this century, machines will surpass human levels of intelligence and ability. This event—the “intelligence explosion”—will be the most important event in our history, and navigating it wisely will be the most important thing we can ever do.
Eminent thinkers such as Alan Turing, I. J. Good, Bill Joy, and Stephen Hawking have warned us about this. Why do we think Hawking and company are right, and what can we do about it?
Facing the Intelligence Explosion is Muehlhauser’s attempt to answer these questions.
The Hanson-Yudkowsky AI-Foom Debate
R Hanson and E Yudkowsky (2013)
In late 2008, economist Robin Hanson and AI theorist Eliezer Yudkowsky conducted an online debate about the future of artificial intelligence, and in particular about whether generally intelligent AIs will be able to improve their own capabilities very quickly (a.k.a. “foom”). James Miller and Carl Shulman also contributed guest posts to the debate.
The original debate took place in a long series of blog posts, which are collected here. This book also includes a transcript of a 2011 in-person debate between Hanson and Yudkowsky on this subject, a summary of the debate written by Kaj Sotala, and a 2013 technical report on AI takeoff dynamics (“intelligence explosion microeconomics”) written by Yudkowsky.
- Blog – MIRI publishes some of its most substantive research to its blog.
- Conversations – MIRI interviews a diverse array of researchers and intellectuals on topics related to its research.
Resources for Researchers
- A public BibLaTeX file.
- A public Mendeley group.
- A public GitHub repo.