How effectively can we plan for future decades? (initial findings)

||Analysis

MIRI aims to do research now that increases humanity’s odds of successfully managing important AI-related events that are at least a few decades away. Thus, we’d like to know: To what degree can we take actions now that will predictably have positive effects on AI-related events decades from now? And, which factors predict success and failure in planning for decades-distant events that share important features with future AI events?

Or, more generally: How effectively can humans plan for future decades? Which factors predict success and failure in planning for future decades?

To investigate these questions, we asked Jonah Sinick to examine historical attempts to plan for future decades and summarize his findings. We pre-committed to publishing our entire email exchange on the topic (with minor editing), just as Jonah had done previously with GiveWell on the subject of insecticide-treated nets. The post below is a summary of findings from our full email exchange (.pdf) so far.

We decided to publish our initial findings after investigating only a few historical cases. This allows us to gain feedback on the value of the project, as well as suggestions for improvement, before continuing. It also means that we aren’t yet able to draw any confident conclusions about our core questions.

The most significant results from this project so far are:

  1. Jonah’s initial impressions about The Limits to Growth (1972), a famous forecasting study on population and resource depletion, were that its long-term predictions were mostly wrong, and also that its authors (at the time of writing it) didn’t have credentials that would predict forecasting success. Upon reading the book, its critics, and its defenders, Jonah concluded that many critics and defenders had seriously misrepresented the book, and that the book itself exhibits high epistemic standards and does not make significant predictions that turned out to be wrong.
  2. Svante Arrhenius (1859-1927) did a surprisingly good job of climate modeling given the limited information available to him, but he was nevertheless wrong about two important policy-relevant factors. First, he failed to predict how quickly carbon emissions would increase. Second, he predicted that global warming would have positive rather than negative humanitarian impacts. If more people had taken Arrhenius’ predictions seriously and burned fossil fuels faster for humanitarian reasons, then today’s scientific consensus on the effects of climate change suggests that the humanitarian effects would have been negative.
  3. In retrospect, Norbert Wiener’s concerns about the medium-term dangers of increased automation appear naive, and it seems likely that even at the time, better epistemic practices would have yielded substantially better predictions.
  4. Upon initial investigation, several historical cases seemed unlikely to shed substantial light on our core questions: Norman Rasmussen’s analysis of the safety of nuclear power plants, Leo Szilard’s choice to keep secret a patent related to nuclear chain reactions, Cold War planning efforts to win decades later, and several cases of “ethically concerned scientists.”
  5. Upon initial investigation, two historical cases seemed like they might shed light on our core questions, but only after many hours of additional research on each of them: China’s one-child policy, and the Ford Foundation’s impact on India’s 1991 financial crisis.
  6. We listed many other historical cases that may be worth investigating.

The project has also produced a chapter-by-chapter list of some key lessons from Nate Silver’s The Signal and the Noise, available here.

Further details are given below. For sources and more, please see our full email exchange (.pdf).

Read more »

The Hanson-Yudkowsky AI-Foom Debate is now available as an eBook!

||News

In late 2008, economist Robin Hanson and AI theorist Eliezer Yudkowsky conducted an online debate about the future of artificial intelligence, and in particular about whether generally intelligent AIs will be able to improve their own capabilities very quickly (a.k.a. “foom”). James Miller and Carl Shulman also contributed guest posts to the debate.

The debate is now available as an eBook in various popular formats (PDF, EPUB, and MOBI). It includes:

  • the original series of blog posts,
  • a transcript of a 2011 in-person debate between Hanson and Yudkowsky on this subject,
  • a summary of the debate written by Kaj Sotala, and
  • a 2013 technical report on AI takeoff dynamics (“intelligence explosion microeconomics”) written by Yudkowsky.

Comments from the authors are included at the end of each chapter, along with a link to the original post.

Head over to www.hdjkn.com/ai-foom-debate/ to download a free copy.

Stephen Hsu on Cognitive Genomics

||Conversations

Stephen Hsu is Vice-President for Research and Graduate Studies and Professor of Theoretical Physics at Michigan State University. Educated at Caltech and Berkeley, he was a Harvard Junior Fellow and held faculty positions at Yale and the University of Oregon. He was also founder of SafeWeb, an information security startup acquired by Symantec. Hsu is a scientific advisor to BGI and a member of its Cognitive Genomics Lab.

Luke Muehlhauser: I’d like to start by familiarizing our readers with some of the basic facts relevant to the genetic architecture of cognitive ability, which I’ve drawn from the first half of a presentation you gave in February 2013:

Read more »

MIRI’s November 2013 Workshop in Oxford

||News


From November 23-29, MIRI will host another Workshop on Logic, Probability, and Reflection, for the first time in Oxford, UK.

Participants will investigate problems related to reflective agents, probabilistic logic, and priors over logical statements / the logical omniscience problem.

Participants confirmed so far include:

If you have a strong mathematics background and might like to attend this workshop, it’s not too late to apply! And even if this workshop doesn’t fit your schedule, please do apply, so that we can notify you of other workshops (long before they are announced publicly).

Transparency in Safety-Critical Systems

||Analysis

In this post, I aim to summarize one common view on AI transparency and AI reliability. It’s difficult to identify the field’s “consensus” on AI transparency and reliability, so instead I will present a common view so that I can use it to introduce a number of complications and open questions that (I think) warrant further investigation.

Here’s a short version of the common view I summarize below:

Black box testing can provide some confidence that a system will behave as intended, but if a system is built such that it is transparent to human inspection, then additional methods of reliability verification are available. Unfortunately, many of AI’s most useful methods are among its least transparent. Logic-based systems are typically more transparent than statistical methods, but statistical methods are more widely used. There are exceptions to this general rule, and some people are working to make statistical methods more transparent.
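
To make the first claim concrete, here is a minimal sketch of black-box testing in Python (my illustration, not part of the post; `opaque_sort`, the trial count, and the properties checked are all hypothetical). The tester probes the system only through its inputs and outputs, so passing many trials builds some confidence without proving correct behavior on all possible inputs.

```python
# A minimal sketch of black-box testing (illustrative; not from the post).
# `opaque_sort` stands in for any component whose internals we cannot inspect;
# we gain confidence only by checking input/output properties.
import random
from collections import Counter

def opaque_sort(xs):
    """Hypothetical black-box component under test."""
    return sorted(xs)

def black_box_test(trials=1000):
    for _ in range(trials):
        xs = [random.randint(-100, 100) for _ in range(random.randint(0, 20))]
        out = opaque_sort(list(xs))
        # Property 1: output is in non-decreasing order.
        assert all(a <= b for a, b in zip(out, out[1:]))
        # Property 2: output is a permutation of the input.
        assert Counter(out) == Counter(xs)
    print("All trials passed: some confidence, but not a proof of correctness.")

if __name__ == "__main__":
    black_box_test()
```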

The value of transparency in system design

Nusser (2009) writes:

…in the field of safety-related applications it is essential to provide transparent solutions that can be validated by domain experts. “Black box” approaches, like artificial neural networks, are regarded with suspicion – even if they show a very high accuracy on the available data – because it is not feasible to prove that they will show a good performance on all possible input combinations.

Unfortunately, there is often a tension between AI capability and AI transparency. Many of AI’s most powerful methods are also among its least transparent:

Methods that are known to achieve a high predictive performance — e.g. support vector machines (SVMs) or artificial neural networks (ANNs) — are usually hard to interpret. On the other hand, methods that are known to be well-interpretable — for example (fuzzy) rule systems, decision trees, or linear models — are usually limited with respect to their predictive performance.1

But for safety-critical systems — and especially for AGI — it is important to prioritize system reliability over capability. Again, here is Nusser (2009):

strict requirements [for system transparency] are necessary because a safety-related system is a system whose malfunction or failure can lead to serious consequences — for example environmental harm, loss or severe damage of equipment, harm or serious injury of people, or even death. Often, it is impossible to rectify a wrong decision within this domain.
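
To illustrate the contrast Nusser describes, here is a minimal sketch in Python using scikit-learn (my illustration, not from Nusser’s paper; the Iris dataset, the depth limit, and the kernel choice are arbitrary assumptions). The fitted decision tree can be printed as explicit if-then rules that a domain expert could audit, while the RBF-kernel SVM exposes only support vectors and kernel weights, with no comparable rule listing to validate.

```python
# A minimal sketch (assumed example) contrasting a transparent model with a
# black-box one, using scikit-learn and the Iris dataset.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.svm import SVC

iris = load_iris()
X, y = iris.data, iris.target

# Transparent model: the fitted tree can be printed as human-readable rules,
# which a domain expert could inspect and validate.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=iris.feature_names))

# Black-box model: an RBF-kernel SVM often predicts more accurately, but its
# decision function is a weighted sum over support vectors in a transformed
# feature space, so there is no rule listing to inspect.
svm = SVC(kernel="rbf", gamma="scale").fit(X, y)
print("SVM support vectors:", svm.support_vectors_.shape[0])
```

In a safety-related setting, the common view sketched above would favor the first kind of model unless the black box’s extra accuracy could be verified by other means.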

Read more »


  1. Quote from Nusser (2009). Emphasis added. The original text contains many citations which have been removed in this post for readability. Also see Schultz & Cronin (2003), which makes this point by graphing four AI methods along two axes: robustness and transparency. Their graph is available here. In their terminology, a method is “robust” to the degree that it is flexible and useful on a wide variety of problems and data sets. On the graph, GA means “genetic algorithms,” NN means “neural networks,” PCA means “principal components analysis,” PLS means “partial least squares,” and MLR means “multiple linear regression.” In this sample of AI methods, the trend is clear: the most robust methods tend to be the least transparent. Schultz & Cronin graphed only a tiny sample of AI methods, but the trend holds more broadly.

Holden Karnofsky on Transparent Research Analyses

||Conversations

Holden Karnofsky is the co-founder of GiveWell, which finds outstanding giving opportunities and publishes the full details of its analysis to help donors decide where to give. GiveWell tracked ~$9.6 million in donations made on the basis of its recommendations in 2012. It has historically sought proven, cost-effective, scalable giving opportunities, but its new initiative, GiveWell Labs, is more broadly researching the question of how to give as well as possible.

Luke Muehlhauser: GiveWell is respected for its high-quality analyses of some difficult-to-quantify phenomena: the impacts of particular philanthropic interventions. You’ve written about your methods for facing this challenge in several blog posts, for example (1) Futility of standardized metrics: an example, (2) In defense of the streetlight effect, (3) Why we can’t take expected value estimates literally, (4) What it takes to evaluate impact, (5) Some considerations against more investment in cost-effectiveness estimates, (6) Maximizing cost-effectiveness via critical inquiry, (7) Some history behind our shifting approach to research, (8) Our principles for assessing research, (9) Surveying the research on a topic, (10) How we evaluate a study, and (11) Passive vs. rational vs. quantified.

In my first question I’d like to ask about one particular thing you’ve done to solve one particular problem with analyses of difficult-to-quantify phenomena. The problem I have in mind is that it’s often difficult for readers to know how much they should trust a given analysis of a difficult-to-quantify phenomenon. In mathematics research it’s often pretty straightforward for other mathematicians to tell what’s good and what’s not. But what about analyses that combine intuitions, expert opinion, multiple somewhat-conflicting scientific studies, general research in a variety of “soft” sciences, and so on? In such cases it can be difficult for readers to distinguish high-quality and low-quality analyses, and it can be hard for readers to tell whether the analysis is biased in particular ways.

Read more »

2013 Summer Matching Challenge Completed!

||News

Thanks to the generosity of dozens of donors, on August 15th we successfully completed the largest fundraiser in MIRI’s history. All told, we raised $400,000, which will fund our research going forward.

This fundraiser came “right down to the wire.” At 8:45pm Pacific time, with only a few hours left before the deadline, we announced on our Facebook page that we had only $555 more to raise to meet our goal. At 8:53pm, Benjamin Hoffman donated exactly $555, finishing the drive.

Our deepest thanks to all our supporters!

Luke at Quixey on Tuesday (Aug. 20th)

||News


This coming Tuesday, MIRI’s Executive Director Luke Muehlhauser will give a talk at Quixey titled Effective Altruism and the End of the World. If you’re in or near the South Bay, you should come! Snacks will be provided.

Time: Tuesday, August 20th. Doors open at 7:30pm. Talk starts at 8pm. Q&A starts at 8:30pm.

Place: Quixey Headquarters, 278 Castro St., Mountain View, CA. (Google Maps)

Entrance: You cannot enter Quixey from Castro St. Instead, please enter through the back door, from the parking lot at the corner of Dana & Bryant.