December 2012 Newsletter


Greetings from the Executive Director


Dear friends of the Singularity Institute,

This month marks the biggest shift in our operations since the Singularity Summit was founded in 2006. Now that Singularity University has acquired the Singularity Summit (details below), and SI’s interest in rationality training is being spun off into the separate Center for Applied Rationality, the Singularity Institute is making a major transition. For 12 years we’ve largely focused on movement-building — through the Singularity Summit, Less Wrong, and other programs. This work was needed to build a community of support for our mission and to provide a large pool of potential researchers for our unique interdisciplinary research.

Now the time has come to say, “Mission accomplished well enough to pivot to research.” Our community of supporters is now large enough that qualified researchers are available for us to hire, if we can afford to pay them. Having published numerous research papers and dozens more original research articles on Less Wrong, we certainly haven’t neglected research. But in 2013 we plan to pivot so that a much larger share of the funds we raise is spent on research. If you’d like to help with that, please contribute to our ongoing fundraising drive.

Onward and upward,

Luke



Singularity Summit Conference Acquired by Singularity University


The Singularity Summit conference, founded by SI in 2006, has been acquired from SI by Singularity University. As part of the agreement, the Singularity Institute will change its name (to reduce brand confusion), but will remain a co-producer of the Singularity Summit for some succeeding years. We are pleased that we can transition the conference to an organization with a strong commitment to maintaining its quality as it grows.

Most of the funds from the Summit acquisition will be placed in a separate fund for a “Friendly AI team,” and thus will not support our daily operations or other programs.

We wish to thank everyone who participated in making the Singularity Summit a success, especially past SI president Michael Vassar (now with Panacea Research) and Summit organizer Amy Willey.


2012 Winter Matching Challenge!


We’re excited to announce our 2012 Winter Matching Challenge. Thanks to the generosity of several major donors,† every donation made to the Singularity Institute between now and January 5th, 2013 will be matched dollar-for-dollar, up to a total of $115,000!

Now is your chance to double your impact while helping us raise up to $230,000 to help fund our research program.

Note that the new Center for Applied Rationality (CFAR) will soon be running its own, separate fundraiser.

Please read our blog post for the challenge for more details, including our accomplishments over the last year and our plans for the next 6 months. Please support our transition from the movement-building phase to the research phase.

† Edwin Evans, Mihály Barasz, Rob Zahra, Alexei Andreev, Jeff Bone, Michael Blume, Guy Srinivasan, and Kevin Fischer provided the $115,000 in total matching funds.


A Week of Friendly AI Math at the Singularity Institute


From Nov. 11th-17th, SI held a Friendly AI math workshop at our headquarters in Berkeley, California. The participants — Eliezer Yudkowsky, Marcello Herreshoff, Paul Christiano, and Mihály Barasz — tackled a particular problem related to Friendly AI. We held the workshop mostly to test hypotheses about ideal team size and the problem’s tractability, while allowing that there was some small chance the team would achieve a significant result in just one week.

Happily, it seems the team did achieve a significant result, one the participants estimate would be equivalent to 1-3 published papers. More details are forthcoming.


SI’s Turing Prize Awarded to Bill Hibbard for “Avoiding Unintended AI Behaviors”



This year’s AGI-12 conference, held in Oxford, UK, included a special track on AGI Impacts. A selection of papers from this track will be published in a special volume of the Journal of Experimental & Theoretical Artificial Intelligence in 2013.

The Singularity Institute had previously announced a $1000 prize for the best paper from AGI-12 or AGI Impacts on the question of how to develop safe architectures or goals for AGI. At the event, the prize was awarded to Bill Hibbard for his paper “Avoiding Unintended AI Behaviors.”

SI’s Turing Prize is awarded in honor of Alan Turing, who not only discovered some of the key ideas of machine intelligence, but also grasped its importance, writing that “…it seems probable that once [human-level machine thinking] has started, it would not take long to outstrip our feeble powers… At some stage therefore we should have to expect the machines to take control…” The prize is awarded for work that not only increases awareness of this important problem, but also makes technical progress in addressing it.

SI researcher Carl Shulman also presented at AGI Impacts. You can read the abstract of his talk, “Could we use untrustworthy human brain emulations to make trustworthy ones?”, here. If video of the talk becomes available later, we’ll link to it in a future newsletter.


New Paper: How We’re Predicting AI — or Failing To


Note: The findings in this paper are affected by a dataset error. For details, see https://aiimpacts.org/error-in-armstrong-and-sotala-2012/

A new paper by Stuart Armstrong (FHI) and Kaj Sotala (SI) has now been published (PDF) as part of the Beyond AI conference proceedings. Some of these results were previously discussed here. The original predictions data are available here. The Less Wrong thread is here. We thank Stuart and Kaj for their valuable meta-research on AI predictions.

For the study, Stuart Armstrong and Kaj Sotala examined a database of 257 AI predictions, made in a period spanning from the 1950s to the present day. This database was assembled by researchers from the Singularity Institute (Jonathan Wang and Brian Potter) systematically searching through the literature. Of these, 95 are considered AI timeline predictions.

The paper examines several folk theories about AI predictions, including the “Maes-Garreau law” (that people predict AI arriving near the end of their own lifetime) and the claim that “AI is always 15-25 years into the future.” Systematic analysis of the database revealed support for the second theory but not the first. Many of the predictions were concentrated around 15-25 years in the future, and this trend held whether the predictions were made in the 1950s or the 2000s. Predictions were not observed to cluster around the predictors’ expected end of lifetime, a result which contradicts the Maes-Garreau hypothesis. The study also found that experts’ predictions do not differ in any distinct way from non-experts’; i.e., there seems to be little evidence that experts make better AI predictions than non-experts. At Singularity Summit 2012, Stuart Armstrong summarized the results of the study in an on-stage talk.
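To make those two tests concrete, here is a minimal sketch (in Python, using a toy dataset and hypothetical field names, not the paper’s actual code or data) of how one could check both patterns against a set of timeline predictions: compare each prediction’s horizon against the 15-25 year window, and compare the predicted AI date against the predictor’s expected end of life.

    from statistics import median

    # Hypothetical records: the year the prediction was made, the predicted year
    # of human-level AI, and (when known) the predictor's expected end of life.
    # Field names and values are illustrative only.
    predictions = [
        {"made": 1965, "ai_year": 1985, "life_end": 2010},
        {"made": 1993, "ai_year": 2023, "life_end": 2040},
        {"made": 2005, "ai_year": 2025, "life_end": 2060},
    ]

    # Test 1: "AI is always 15-25 years away" -- look at prediction horizons.
    horizons = [p["ai_year"] - p["made"] for p in predictions]
    in_window = sum(1 for h in horizons if 15 <= h <= 25)
    print(f"median horizon: {median(horizons)} years; "
          f"{in_window}/{len(horizons)} fall in the 15-25 year window")

    # Test 2: Maes-Garreau -- does the predicted AI date track the predictor's
    # expected end of life? Gaps clustered near zero would support the law.
    gaps = [p["ai_year"] - p["life_end"] for p in predictions if "life_end" in p]
    print(f"median gap between predicted AI date and end of life: {median(gaps)} years")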

The major take-away from this study is that predicting the arrival of human-level AI is such a fuzzy endeavor that we should take any prediction with a large grain of salt. The rational approach is to widen our confidence intervals — that is, recognize that we don’t really know when human-level AI will be developed, and make plans accordingly. Just as we cannot confidently state that AI is near, we can’t confidently state that AI is far off. (This was also the conclusion of an earlier Singularity Institute publication, “Intelligence Explosion: Evidence and Import.”)


Michael Anissimov Publishes Responses to P.Z. Myers and Kevin Kelly


SI media director Michael Anissimov has published blog posts responding to biologist P.Z. Myers on whole brain emulation and to bestselling author Kevin Kelly on AI takeoff.

In a blog post, P.Z. Myers rejected the idea of whole brain emulation in general, stating “It won’t work. It can’t.” However, his objections focus on scanning living brains with current technology. In response, Anissimov concedes that Myers’ criticisms make sense in the narrow context in which he makes them, but argues that Myers misunderstands whole brain emulation, which refers to a wide range of possible scanning approaches, not just the reductionistic straw man of “scan in, emulation out.” So, while Myers’ critique applies to certain types of brain scanning approaches, it does not apply to whole brain emulation in general.

Anissimov’s response to Kevin Kelly addresses a blog post from four years ago, “Thinkism.” Kelly’s post is notable as one of the most substantive critiques of the fast AI takeoff idea by a prominent intellectual. Kelly argues that the idea of scientific and technological research and development proceeding faster than “calendar time” is not credible, because inherently time-limited processes, such as cellular metabolism, constrain progress on difficult research problems such as indefinitely extending human healthspans. Anissimov argues that faster-than-human, smarter-than-human intelligence could overcome the human-characteristic rate of innovation through superior insight, by breaking problems into their constituent parts, and by making experimentation massively accelerated and parallel.


Original Research on Less Wrong


SI executive director Luke Muehlhauser, with help from our research team, has compiled a list of original research produced by the web community Less Wrong. Though many of the posts on Less Wrong are summaries of previously published research, there is also a substantial amount of original expert material in philosophy, decision theory, mathematical logic, and other fields.

Examples of original research on Less Wrong include Eliezer Yudkowsky’s “Highly Advanced Epistemology 101 for Beginners” sequence, Wei Dai’s posts on his original decision theory (UDT), Vladimir Nesov’s thoughts on counterfactual mugging, and Benja Fallenstein’s investigation of the problem of logical uncertainty. In all, the list compiles over 50 examples of original research on Less Wrong stretching from 2008 to the present. Of particular interest is original research that contributes toward solving open problems in Friendly AI.

Original research on Less Wrong continues to be pursued in several discussion threads on the site.


How Can I Reduce Existential Risk From AI?


Another recent Less Wrong post by Luke Muehlhauser is “How can I reduce existential risk from AI?” “Existential risk,” a term coined by Oxford philosopher Nick Bostrom, refers specifically to risks to humanity’s survival. The Singularity Institute considers advanced artificial intelligence to be an existential risk to the future of the human species. “Existential risk” generally refers to the total destruction of the human species, rather than risks which threaten 90% or 99% of the population; the latter would constitute global catastrophic risks but not true existential risks.

Since its founding in 2000, the Singularity Institute has argued that smarter-than-human, self-improving Artificial Intelligence is an existential risk to humanity. The concern is that, at some point over the next hundred years, advanced AI will be created that can manufacture its own sophisticated robotics and threaten to displace human civilization, not necessarily through deliberate action but merely as a side-effect of exploiting resources required for our survival, such as carbon, oxygen, or physical space. For additional background, please read our concise summary, “Reducing Long-Term Catastrophic Risks from Artificial Intelligence.”

The post outlines three main categories of work for reducing AI risk: (1) meta work, such as earning money to fund Friendly AI research; (2) strategic work, aimed at developing a better strategic understanding of the challenges we face; and (3) direct work, such as technical research, political action, or particular kinds of technology development. All three are essential for building an “existential risk reduction ecosystem”: a collaborative effort by hundreds of people to better understand AI risk and do something about it.


New Research Associates


The Singularity Institute is pleased to announce four new research associates: Benja Fallenstein, Marcello Herreshoff, Mihály Barasz, and Bill Hibbard.

Benja Fallenstein is interested in the basic research necessary for the development of safe AI goals, especially from the perspective of mathematical models of evolutionary psychology, and also in anthropic reasoning, decision theory, game theory, reflective mathematics, and programming languages with integrated proof checkers. Benja is a mathematics student at the University of Vienna, with a focus in biomathematics.

Marcello Herreshoff has worked with the Singularity Institute on the mathematics of Friendly AI from time to time since 2007. In high school, he was a two-time USACO finalist, and he published a novel combinatorics result, which he presented at the Twelfth International Conference on Fibonacci Numbers and Their Applications. He holds a B.S. in Mathematics from Stanford University, where he earned two honorable mentions in the Putnam mathematics competition and submitted an honors paper to the Logic Journal of the IGPL. His research interests include mathematical logic and its use in formalizing coherent goal systems.

Mihály Barasz is interested in functional languages and type theory and their application in formal proof systems. He cares deeply about reducing existential risks. He has an M.Sc. summa cum laude in Mathematics from Eötvös Loránd University, Budapest, and currently works at Google.

Bill Hibbard is an Emeritus Senior Scientist at the University of Wisconsin-Madison Space Science and Engineering Center, currently working on the problems of AI safety and unintended AI behaviors. He holds degrees in mathematics and computer science, including a Ph.D., from the University of Wisconsin-Madison.


AI Risk-Related Improvements to the LW Wiki


The Singularity Institute has greatly improved the Less Wrong wiki with new entries, featuring topics from seed AI to moral uncertainty and more. Over 120 pages were updated in total. The improvements were prompted by earlier proposals for a dedicated scholarly AI risk wiki, and they provide the background knowledge needed for publishing short, clear, scholarly articles on AI risk.

Some articles of interest include the 5-and-10 problem, AGI skepticism, AGI Sputnik moment, AI advantages, AI takeoff, basic AI drives, biological cognitive enhancement, Coherent Extrapolated Volition, computing overhang, computronium, differential intellectual progress, economic consequences of AI and whole brain emulation, Eliezer Yudkowsky, emulation argument for human-level AI, extensibility argument for greater-than-human intelligence, evolutionary argument for human-level AI, complexity of human value, Friendly artificial intelligence, Future of Humanity Institute, history of AI risk thought, intelligence explosion, moral divergence, moral uncertainty, Nick Bostrom, optimal philanthropy, optimization process, Oracle AI, orthogonality thesis, paperclip maximizer, Pascal’s mugging, recursive self-improvement, reflective decision theory, singleton, Singularitarianism, Singularity, subgoal stomp, superintelligence, terminal value, timeless decision theory, tool AI, utility extraction, value extrapolation, value learning, and whole brain emulation.


Featured Volunteer: Ethan Dickinson


This month, we thank Ethan Dickinson for his volunteer work transcribing videos from the Singularity Summit 2012 conference. When we talked with Ethan about his work, he mentioned how one talk, Julia Galef’s (embedded below), even inspired a teary-eyed emotional climax. Ethan became involved in volunteer work for SI after several years of developing an increasing interest in rationality, originally introduced via Harry Potter and the Methods of Rationality. Today, he feels he is using the full powers of his imagination to understand the Singularity. Paraphrasing Jaan Tallinn, he uses science fiction such as William Gibson’s Neuromancer and Isaac Asimov’s novels to imagine worlds “much more optimistic and much more pessimistic” than many of the middle-ground scenarios for the future widely assumed today.


Featured Summit Video: Julia Galef


Four decades of cognitive science have confirmed that Homo sapiens are far from “rational animals.” Scientists have amassed a daunting list of ways that our brain’s fast-and-frugal judgment heuristics fail in modern contexts for which they weren’t adapted, or stymie our attempts to be happy and effective. Hence the project we’re undertaking at the new Center for Applied Rationality (CFAR) — training human brains to run algorithms that optimize for our interests as autonomous beings in the modern world, not for the interests of ancient replicators. This talk explores what we’ve learned from that process so far, and why training smart people to be rational decision-makers is crucial to a better future.


News Items



Killer robots? Cambridge brains to assess AI risk
CNET, November 26, 2012


DARPA’s Pet-Proto Robot Navigates Obstacles
YouTube, October 24, 2012