An Interview with New MIRI Research Fellow Luke Muehlhauser


Section One: Background and Core Ideas

Q1. What is your personal background?
Q2. Why should we care about artificial intelligence?
Q3. Why do you think smarter-than-human artificial intelligence is possible?
Q4. The mission of the Machine Intelligence Research Institute is "to ensure that the creation of smarter-than-human intelligence benefits society." How is your research contributing to that mission?
Q5. How does MIRI's approach to making Friendly AI differ from the concept of Asimov's laws?
Q6. Why is it necessary to make an AI that "wants the same things we want"?
Q7. If dangerous AI were to develop, why couldn't we just "pull the plug"?
Q8. Why are you and the Machine Intelligence Research Institute focused on artificial intelligence, rather than on human intelligence enhancement or whole brain emulation?

Section Two: Research Area Questions

Q9. What research areas do you specifically investigate to develop "Friendliness content" for artificial intelligence?
Q10. What is value extrapolation and how is it relevant to Friendly AI theory?
Q11. What is the psychology of concepts and how is it relevant to Friendly AI theory?
Q12. What is game theory and how is it relevant to Friendly AI theory?
Q13. What is metaethics and how is it relevant to Friendly AI theory?
Q14. What is normativity and how is it relevant to Friendly AI theory?
Q15. What is machine ethics and how is it relevant to Friendly AI theory?
Q16. What are some of the open problems in Friendly AI theory?
Q17. In late 2010, the Machine Intelligence Research Institute published "Timeless Decision Theory." What is timeless decision theory and how is it relevant to Friendly AI theory?
Q18. What is reflective decision theory and why is it necessary for Friendly AI?
Q19. Can you give a concrete example of how your research has made progress towards a solution on one or more open problems in Friendly AI?
Q20. Why hasn't the Machine Intelligence Research Institute produced any concrete artificial intelligence code?

Section Three: Less Wrong and Rationality

Q21. You originally came to the Machine Intelligence Research Institute's attention when you gained over 10,000 karma points very quickly on Less Wrong. For those who aren't familiar with it, can you tell us what Less Wrong is and what its relationship is to the Machine Intelligence Research Institute?
Q22. What originally got you interested in rationality?
Q23. You were recently an instructor at the Berkeley Rationality Minicamp over the summer. Can you tell us a little about the Minicamp, what people did there, what you taught, and so on?
Q24. Besides being a website, Less Wrong groups meet up around the world. If I were interested, where would I be able to get involved in one of those meetups, and what goes on at these meetups?
Q25. The recently released Strategic Plan mentions intentions to "spin off rationality training to another organization so that the Machine Intelligence Research Institute can focus on Friendly AI research." Can you tell us something about that?

Section Four: Machine Intelligence Research Institute Operations

Q26. You and Louie Helm were hired on September 1st. How did the two of you come to be hired?
Q27. What does Louie Helm do as Director of Development?
Q28. The Machine Intelligence Research Institute just raised $250,000 in our Summer Challenge Grant. What will those funds be spent on?
Q29. Carl Shulman was hired not long before you and Louie. What is his role in the organization?
Q30. What kind of new researchers is the Machine Intelligence Research Institute looking for?

Michael Anissimov: You were recently hired as a full-time employee of the Machine Intelligence Research Institute. What is your personal background?

Luke Muehlhauser: I studied psychology in university but quickly found that I learn better and faster as an autodidact. Since then, I've consumed many fields of science and philosophy, one at a time, as they were relevant to my interests. I've written dozens of articles for my blog and for Less Wrong, and I host a podcast on which I interview leading philosophers and scientists about their work. I also have an interest in the mathematics and cognitive science of human rationality, because I want the research I do to be arriving at plausibly correct answers, not just answers that make me feel good.

Michael: Why should we care about artificial intelligence?

Luke: Artificial intelligence is becoming a more powerful technology every year. We now have robots that do original scientific research, and the U.S. military is developing systems for autonomous battlefield robots that make their own decisions. Artificial intelligence will become even more important once it surpasses human levels of intelligence, at which point it will be able to do many of the things we care about better than we can: things like curing cancer and preventing disasters.

Michael: Why do you think smarter-than-human artificial intelligence is possible?

Luke: The first reason is scientific. Human intelligence is a product of information processing in a brain made of meat. But meat is not an ideal platform for intelligence; it's just the first one that evolution happened to produce. Information processing on a faster, more durable, and more flexible platform like silicon should be able to surpass the abilities of an intelligence running on meat, if we can figure out which information processing algorithms are required for intelligence, either by looking more closely at which algorithms the brain is using or by gaining new insights in mathematics.

The second reason is historical. Machines have already surpassed human abilities at hundreds of specific tasks: playing chess or Jeopardy!, searching through large databases, and, in a recent advance, reading road signs. There is little reason to suspect this trend will stop, unless scientific progress in general stops.

Michael: The mission of the Machine Intelligence Research Institute is "to ensure that the creation of smarter-than-human intelligence benefits society." How is your research contributing to that mission?

Luke: A smarter-than-human machine intelligence that benefits (rather than harms) society is called "Friendly AI." My primary research focus is what we call the problem of "friendliness content." What does it look like for an AI to be "friendly" or to "benefit" society? We all have ideas about what a good world looks like and what a bad world looks like, but when thinking about that in the context of AI you must be very precise, because an AI will only do exactly what it is programmed to do.

If we can figure out how to specify exactly what it would mean for an AI to be "friendly," then creating Friendly AI could be the best thing that ever happens to us. An advanced artificial intelligence could do science better and faster than we can, and thereby cure cancer, cure disease, allow human immortality, prevent disasters, solve the problems of climate change, and allow us to expand civilization to other planets. A Friendly AI could also discover better economic and political systems that improve conditions for everyone.

Michael: How does MIRI's approach to making Friendly AI differ from the concept of Asimov's laws?

Luke: Asimov's three laws of robotics for governing robot behavior are widely considered inadequate for ensuring that intelligent machines will not harm humans. Indeed, Asimov used his stories to illustrate the many ways those laws can lead to unintended consequences. MIRI's approach is very different, because we don't think constraints on AI behavior will work in the long run. We need advanced AIs to want the same things we want. If an AI wants something different from what we want, it will eventually find a way around whatever constraints we place on it, because of its vastly superior intelligence. But if we can make an AI that wants the same things we want, then it will be far more effective than we are at bringing about the world we want: curing cancer and inventing immortality and so on.

Michael: Why is it necessary to make an AI that "wants the same things we want"?

Luke: A powerful AI that wants different things than we do could be dangerous. For example, suppose an AI's goal system is programmed to maximize pleasure. That sounds good at first, but if you tell a super-powerful AI to "maximize pleasure," it might do something like (1) convert most of Earth's resources into computing machinery, killing all humans in the process, so that it can (2) tile the solar system with as many small digital minds as possible, and (3) have those digital minds run a continuous cycle of the single most pleasurable experience possible. But of course, that's not what we want! We don't just value pleasure; we also value things like novelty and exploration. So we need to be very careful when we tell an AI precisely what it means to be "friendly."

We must be careful not to anthropomorphize. A machine intelligence won't necessarily have our "common sense," or our values, or even be sentient. When AI researchers talk about machine intelligence, they only mean to talk about a machine that is good at achieving its goals, whatever they are, in a wide variety of environments. So if you tell an AI to maximize pleasure, it will do exactly that. It's not going to stop halfway through and "realize," like a human might, that maximizing pleasure isn't what was intended, and that it should do something else.
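
To make this concrete, here is a toy optimizer of my own construction (the resource units, the three values, and the objective are all illustrative assumptions, not anything from MIRI's work): because only "pleasure" appears in its objective, it allocates every unit there and none to the other things we value.

```python
# A toy, mis-specified optimizer: it maximizes exactly what it was told
# to maximize ("pleasure") and nothing else. All names and numbers here
# are invented for illustration.
from itertools import product

RESOURCES = 10  # units to split among pleasure, novelty, exploration

def misspecified_objective(pleasure, novelty, exploration):
    return pleasure  # only pleasure was programmed into the goal system

allocations = [a for a in product(range(RESOURCES + 1), repeat=3)
               if sum(a) == RESOURCES]
best = max(allocations, key=lambda a: misspecified_objective(*a))
print(best)  # (10, 0, 0): everything goes to "pleasure"
```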

Michael: If dangerous AI were to develop, why couldn't we just "pull the plug"?

Luke: We might not know that an AI was dangerous until it was too late. An AI with a certain level of intelligence is going to realize that in order to achieve its goals it needs to avoid being turned off, and so it would hide both the level of its own intelligence and its dangerousness.

But the bigger problem is that if some AI development team has already developed an AI that is intelligent enough to be dangerous, then other teams are only a few months or years behind them. You can't just unplug every dangerous AI that comes along until the end of time. We'll need to develop a Friendly AI that can ensure safety much better than humans can.

Michael: Why are you and the Machine Intelligence Research Institute focused on artificial intelligence, rather than on human intelligence enhancement or whole brain emulation?

Luke: Human intelligence enhancement is important, and may be needed to solve some of the harder problems of Friendly AI. Whole brain emulation is a particularly revolutionary kind of human intelligence enhancement that, if invented, could allow us to upload human minds into computers, run them at speeds much faster than is possible with neurons, make backup copies, and allow immortality.

Many researchers believe artificial intelligence will arrive before whole brain emulation does, but predicting the timelines of future technology can be difficult. We are very interested in whole brain emulation; in fact, it was the subject of a presentation our researcher Anna Salamon gave at a recent AI conference. One reason we focus on AI for now is that there are dozens of open problems in Friendly AI theory on which we can make immediate progress, without needing the enormous computing resources required for progress in whole brain emulation.

Section Two: Research Area Questions

Michael: What research areas do you specifically investigate to develop "Friendliness content" for artificial intelligence?

Luke: One relevant area of research is cognitive neuroscience, especially its subfields of neuroeconomics and affective neuroscience.

The world is "good" or "bad" for us because of our values, and our values are stored in neural networks in our brains. For decades we had to infer human values by observing human behavior, because the brain was a "black box" to us. But that can only take us so far, because the environments in which we act are extremely complex, which makes it difficult to infer human values from behavior alone. Recently, new technologies like fMRI and TMS and optogenetics have let us look inside the black box and watch what the brain is doing. In fact, we have found particular neurons that seem to encode the brain's expected subjective value for the possible actions we are considering at a given moment. We have also learned a great deal about the particular algorithms the brain uses to update how much we value certain things; as it happens, they belong to a family of algorithms first discovered in computer science, called temporal difference reinforcement learning.
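
For readers curious what that family of algorithms looks like, here is a minimal TD(0) sketch; the three-state chain, the reward values, and the learning parameters are toy assumptions of mine, not anything taken from the neuroscience data.

```python
# Temporal-difference (TD(0)) value learning on a toy three-state chain:
# a cue is followed by a lever press, which is followed by juice.
states = ["cue", "press_lever", "juice"]
reward = {"cue": 0.0, "press_lever": 0.0, "juice": 1.0}
V = {s: 0.0 for s in states}   # learned value predictions
alpha, gamma = 0.1, 0.9        # learning rate, discount factor

for _ in range(500):  # repeated episodes
    for s, s_next in zip(states, states[1:]):
        # TD error: observed reward plus discounted prediction for the
        # next state, minus the current prediction for this state.
        delta = reward[s_next] + gamma * V[s_next] - V[s]
        V[s] += alpha * delta  # the dopamine-like update signal

print(V)  # earlier states come to predict the later reward
```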

A second area of research is choice modeling and preference elicitation. Economists, for example, use a variety of techniques, like willingness-to-pay measures, to infer human preferences from human behavior. AI researchers also do this, usually for the purpose of designing a piece of software called a decision support system. The human brain doesn't seem to encode a coherent preference set, so we'll need to use choice modeling and preference elicitation techniques to extract a coherent preference set from whatever it is that human brains actually do.
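
To give a flavor of preference elicitation, here is a tiny sketch with data I made up: from observed buy/don't-buy decisions at different prices, it brackets a person's willingness to pay for some good.

```python
# Crude willingness-to-pay elicitation from observed choices.
# Each entry is (price offered, whether the person bought at that price).
choices = [(2.0, True), (4.0, True), (5.0, False), (7.0, False)]

accepted = [price for price, bought in choices if bought]
rejected = [price for price, bought in choices if not bought]

# If the choices are coherent, the willingness to pay lies between the
# highest accepted price and the lowest rejected one.
print(f"inferred willingness to pay: between {max(accepted)} and {min(rejected)}")
```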

Other fields relevant to friendliness content theory include value extrapolation, the psychology of concepts, game theory, metaethics, normativity, and machine ethics.

Michael: What is value extrapolation and how is it relevant to Friendly AI theory?

Luke: Most philosophers talk about "ideal preference theories," but I prefer to call them "value extrapolation algorithms." If we want to develop Friendly AI, we may not want to just scan human values from our brains and give those same values to an AI. I want to eat salty foods all day, but I kind of wish I didn't want that, and I certainly don't want an AI to feed me salty foods all day. Moreover, I would probably change my desires if I knew more and was more rational. I might learn things that would change what I want. And it's unlikely that the human species has reached the end of moral development. So we don't want to fix things in place by programming an AI with our current values. We want an AI to extrapolate our values so that it cares about what we would want if we knew more, were more rational, were more morally developed, and so on.

Michael: What is the psychology of concepts and how is it relevant to Friendly AI theory?

Luke: Some researchers think that part of the solution to the friendliness content problem will come from examining our intuitive concept of "ought" or "good," and using this to inform our picture of what we think a good world would be like, and thus what the goal system of a super-powerful machine should be aimed toward. Philosophers have been examining our intuitive concepts of "ought" and "good" for centuries and have made little progress, but perhaps new tools in psychology and neuroscience can help us do this conceptual analysis better than philosophers could from their armchairs.

On the other hand, psychological experiments have been undermining our classical theories of what concepts are, leading some to conclude that concepts don't exist in any useful sense. The results of this research program in psychology and philosophy could have profound implications for any approach to friendliness content that depends on examining our intuitive concepts of "ought" or "good."

Michael: What is game theory and how is it relevant to Friendly AI theory?

Luke: Game theory is a highly developed field of mathematics concerned with scenarios ("games") in which an agent's success depends on the choices of others. Its models and findings have been applied to business, economics, political science, biology, computer science, and philosophy.

Game theory is relevant to friendliness content because many of our values arose from our need to make decisions in situations where our success depended on the choices of others. It may also be relevant to value extrapolation algorithms, because the extrapolation process could change the way our values and decisions interact with the values and decisions of others.
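
For readers unfamiliar with the formalism, here is a minimal example with toy payoffs of my own: the Prisoner's Dilemma, plus a brute-force check for pure-strategy Nash equilibria, outcomes where neither agent gains by unilaterally changing its move.

```python
# Prisoner's Dilemma payoffs as (row player, column player).
payoffs = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}
moves = ["cooperate", "defect"]

def is_nash(r, c):
    """Neither player can do better by switching moves unilaterally."""
    row_ok = all(payoffs[(r, c)][0] >= payoffs[(alt, c)][0] for alt in moves)
    col_ok = all(payoffs[(r, c)][1] >= payoffs[(r, alt)][1] for alt in moves)
    return row_ok and col_ok

print([(r, c) for r in moves for c in moves if is_nash(r, c)])
# -> [('defect', 'defect')]: each agent's success depends on the other's
#    choice, yet mutual defection is the only equilibrium.
```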

Michael: What is metaethics and how is it relevant to Friendly AI theory?

Luke: Philosophers often divide the moral landscape into three levels. Applied ethics is the study of particular moral questions: How should we treat animals? Is lying ever acceptable? What responsibilities do corporations have concerning the environment? Normative ethics considers the principles by which we make judgments in applied ethics. Do we make one judgment over another based on which action produces the most good? Or should we be following a list of rules and respecting certain rights? Perhaps we should advocate what we would all agree to behind a veil of ignorance that kept us from knowing what our lot in life will be?

Metaethics goes one level deeper. What do terms like "good" and "right" even mean? Do moral facts exist, or is it all relative? Is there such a thing as moral progress? These questions are relevant to friendliness content because presumably, if moral facts exist, we would want an AI to respect them. Even if moral facts do not exist, our moral attitudes are part of what we value, and that is relevant to friendliness content theory.

Michael: What is normativity and how is it relevant to Friendly AI theory?

Luke: Normativity concerns norms, of which there are many types. Prudential norms concern which actions we ought to take to achieve our goals. Epistemic norms concern how we ought to pursue knowledge. Doxastic norms concern what we ought to believe. Moral norms concern how we ought to act, morally speaking. And so on.

A classic concern of normativity is the "is-ought gap": supposedly, you cannot reason from "is" statements to "ought" statements. It does not logically follow from "the person in front of me is suffering" that "I ought to help him." In practice, though, the gap is trivial to bridge when it comes to prudential norms. "If you want Y, then you ought to do X" is just another way of saying "Doing X will increase your chances of getting Y." The first sentence contains an "ought" claim, but the second reduces it to a purely descriptive sentence about the natural world.

Some philosophers think that the "is-ought gap" can be bridged in the same way for epistemic and moral norms. Perhaps "you ought to believe X" just means "if you want true beliefs, then you ought to believe X," which in turn can be reduced to the purely descriptive statement "believing X will increase your proportion of true beliefs."

But is there any other kind of normativity? Are there "categorical" oughts that do not depend on an "if you want X" clause? Naturalists tend to deny this possibility, but perhaps categorical epistemic or moral oughts can be derived from the mathematics of game theory and decision theory, as naturalist Gary Drescher suggests in Good and Real. If so, it may be wise to make sure they are included in friendliness content theory, so that an AI can respect them.

Michael: What is machine ethics and how is it relevant to Friendly AI theory?

Luke: Machine ethics is one of several names for the field that studies two major questions: (1) how can we get machines to behave ethically, and (2) which types of machines can be considered genuine moral agents (in the sense of having rights or moral worth like a human might)? Most of the work in the field so far is relevant only to "narrow AI" machines that are not nearly as intelligent as humans are, but two directions of research that may be useful for Friendly AI are mechanized deontic logic and computational metaethics.

Unfortunately, our understanding of Friendly AI, and of not-yet-developed AI technologies in general, is so primitive that we can't even be sure which fields will turn out to matter. It seems like cognitive neuroscience, preference elicitation, value extrapolation, game theory, and several other fields are relevant to Friendly AI theory, but it might turn out that as we come to understand Friendly AI better, we'll learn that some research avenues are not relevant. But the only way we can learn that is to continue to make incremental progress in the areas of research that seem to be relevant.

Michael: What are some of those open problems in Friendly AI theory?

Luke: If we consider just the problem of friendliness content, some of the open problems are: How does the brain encode the expected subjective value of the few possible actions it chooses among? How does it combine absolute value and probability estimates to perform expected subjective value calculations? Where are absolute values stored, and how are they encoded? How can we extract a coherent utility function or preference set from this activity in human brains? Which algorithms should we use to extrapolate those preferences, and why? After extrapolation, would the values of two different humans converge? Would the values of all humans converge? Would the values of all sentient beings converge? Will the details of human cognitive neuroscience matter, or will they be "screened off" by high-level mathematical structures of value systems and game theory? How can these extrapolated values be implemented in an AI's goal system?

Friendliness content is only one area of open problems in Friendly AI theory. There are many other questions. How can an agent make optimal decisions when it is capable of directly editing its own source code, including the source code of its decision mechanism? How can we get an AI to maintain a consistent utility function throughout updates to its ontology? How do we make an AI with preferences about the external world instead of about its reward signal? How can we generalize the theory of machine induction, called Solomonoff induction, so that it can correctly use higher-order logic and reason about observation selection effects? How can we approximate such ideal processes so that they are computable?
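
To hint at what approximating an ideal process can look like, here is a toy, computable stand-in for Solomonoff induction that I constructed purely for illustration: hypotheses are restricted to repeating bit-patterns, each weighted by a 2^-length prior, and prediction averages over the hypotheses consistent with the data. (Real Solomonoff induction ranges over all computable hypotheses and is uncomputable.)

```python
# Toy "Solomonoff-style" prediction over eventually-repeating patterns.
from itertools import product

def predict_next(bits, max_len=8):
    """Posterior probability that the next bit is '1'."""
    weight_one = weight_total = 0.0
    for n in range(1, max_len + 1):
        for pattern in product("01", repeat=n):
            prior = 2.0 ** (-n)  # shorter hypotheses get more weight
            # Keep only patterns consistent with every observed bit.
            if all(bits[i] == pattern[i % n] for i in range(len(bits))):
                weight_total += prior
                if pattern[len(bits) % n] == "1":
                    weight_one += prior
    return weight_one / weight_total

print(predict_next("010101"))  # small: the simplest consistent pattern says 0
```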

Anyway, that's a start.

Michael: In late 2010, the Machine Intelligence Research Institute published "Timeless Decision Theory." What is timeless decision theory and how is it relevant to Friendly AI theory?

Luke: Decision theory is the study of how to make optimal decisions. We value different things differently, and we are uncertain about which actions will bring about what we value. One of the problems not handled well by traditional decision theories like evidential decision theory (EDT) and causal decision theory (CDT) is the problem of logical uncertainty: our uncertainty about mathematical and logical facts, for example what the nth decimal digit of pi is. One way to think about timeless decision theory (TDT) is that it's a step toward a decision theory that can handle logical uncertainty.
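
As background, this is the expected-utility core that EDT and CDT share, shown with toy actions, probabilities, and utilities I invented; TDT's actual machinery, and logical uncertainty itself, are well beyond a sketch this small.

```python
# Expected-utility maximization: pick the action whose probability-
# weighted utility over outcomes is highest. Numbers are illustrative.
actions = {
    "fund_research": {"cure": (0.3, 100.0), "no_cure": (0.7, -10.0)},
    "do_nothing":    {"status_quo": (1.0, 0.0)},
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes.values())

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best)  # 'fund_research': 0.3 * 100 + 0.7 * (-10) = 23 > 0
```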

For an AI to be safe, its decision mechanism will have to be somewhat clear and mathematically testable for stability and safety. That probably means it will need to make decisions with decision theory, rather than through a relatively opaque neural-network mechanism. So we need to solve some fundamental problems in decision theory first, and logical uncertainty is one of the remaining fundamental problems in decision theory.

Michael: What is reflective decision theory and why is it necessary for Friendly AI?

Luke: Traditional decision theories cannot handle agents capable of modifying their own source code, including the source code of their decision mechanisms. Reflective decision theory is the push toward a decision theory that can handle such strongly self-modifying agents. Because an advanced AI will be intelligent enough to modify its own source code, we need to develop a reflective decision theory that allows us to ensure an AI will remain friendly throughout its self-modifications and self-improvements.

Michael: Can you give a concrete example of how your research has made progress towards a solution on one or more open problems in Friendly AI?

Luke: I've only just begun working with the Machine Intelligence Research Institute, and making progress on open problems in Friendly AI theory is only one of the many things I do. My first contribution to friendliness content theory was to summarize some very recent advances in neuroeconomics that are relevant to the study of human values. I did that because other researchers in the field were not yet familiar with that material, and I think much of the work in friendliness content theory can be done collaboratively by a broad community of researchers if we are all well-informed.

These results from neuroeconomics seem relevant to friendliness content theory, though only time will tell. For example, we have learned that the expected utility of human actions is encoded cardinally (not ordinally) in the brain, thereby avoiding a limitation in economics known as Arrow's impossibility theorem.
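
A small worked example (with numbers of my own invention) shows why the cardinal/ordinal distinction matters: with only ordinal rankings, majority preference can cycle, which is the kind of trouble Arrow's theorem formalizes, while cardinal utilities still sum to a well-defined answer.

```python
# Three agents, three options. Ordinal rankings alone produce a cycle:
rankings = [("A", "B", "C"), ("B", "C", "A"), ("C", "A", "B")]

def majority_prefers(x, y):
    return sum(r.index(x) < r.index(y) for r in rankings) > len(rankings) / 2

print(majority_prefers("A", "B"), majority_prefers("B", "C"),
      majority_prefers("C", "A"))  # True True True -- a preference cycle

# Cardinal utilities for the same agents break the deadlock:
utilities = [{"A": 1.0, "B": 0.9, "C": 0.0},
             {"B": 1.0, "C": 0.2, "A": 0.1},
             {"C": 1.0, "A": 0.6, "B": 0.0}]
totals = {o: sum(u[o] for u in utilities) for o in "ABC"}
print(max(totals, key=totals.get))  # 'B': a single best option exists
```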

Michael: Why hasn't the Machine Intelligence Research Institute produced any concrete artificial intelligence code?

Luke: This is a common confusion. Most of the open problems in Friendly AI theory are problems of math and philosophy, not computer programming. Sometimes programmers approach us and offer to work on Friendly AI theory, and we reply: "What we need are mathematicians. Are you good at math?"

As it turns out, the heroes who can save the world are not those with incredible strength or the power of flight. They are mathematicians.

Section Three: Less Wrong and Rationality

Michael: You originally came to the Machine Intelligence Research Institute's attention when you gained over 10,000 karma points very quickly on Less Wrong. For those who aren't familiar with it, can you tell us what Less Wrong is and what its relationship is to the Machine Intelligence Research Institute?

Luke: Less Wrong is a group blog and community devoted to the study of rationality: how to get truer beliefs and make better decisions. The Machine Intelligence Research Institute's co-founder Eliezer Yudkowsky originally wrote hundreds of articles about rationality for another blog, Overcoming Bias, because he wanted to build a community of people who could think clearly about difficult problems like Friendly AI. Those articles were then used as the seed content for a new website, Less Wrong. I found Less Wrong through my interest in rationality and eventually began writing articles for the site, many of which became quite popular.

Michael: What originally got you interested in rationality?

Luke: I was raised a passionate evangelical Christian, and had a major crisis of faith when I learned some things about the historical Jesus, science, and philosophy. I was disturbed by how confidently I had believed something that was so thoroughly wrong, and I no longer trusted my intuitions. I wanted to avoid being so wrong again, so I studied the phenomena that allow human brains to be so mistaken: things like confirmation bias and the affect heuristic. I also gained an interest in the mathematics of correct thinking, like Bayesian updating and decision theory.
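
Bayesian updating is simple enough to show in a few lines. Here is a minimal sketch with likelihoods I invented, showing how repeated evidence against a hypothesis should erode even 90% confidence.

```python
def update(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' theorem: return P(H | E) from P(H) and the likelihoods."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Start 90% confident in H, then repeatedly observe evidence that is
# much more likely if H is false (0.8) than if H is true (0.1).
belief = 0.9
for _ in range(3):
    belief = update(belief, p_e_given_h=0.1, p_e_given_not_h=0.8)
    print(round(belief, 3))  # 0.529, 0.123, 0.017 -- confidence collapses
```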

Michael: You were recently an instructor at the Berkeley Rationality Minicamp over the summer. Can you tell us a little about the Minicamp, what people did there, what you taught, and so on?

Luke: Anna and I put together the Minicamp, a week-long camp full of classes and activities about rationality, social effectiveness, and existential risk. We housed more than 20 people in a large house in Berkeley, where we held the classes. Some of them came from as far away as Sweden and the UK.

Minicamp was a blast, mostly because the people were so great! We are still in contact, still learning and growing.

We taught classes on things like how to use probability theory to update our beliefs, how to use the principles of accessibility to better achieve our goals, and how to use body language and fashion to improve the parts of life that math-heads sometimes neglect! We also taught classes on optimal philanthropy (how to get the most bang for your philanthropic buck) and existential risks (risks that could cause human extinction).

Michael: Besides being a website, Less Wrong groups meet up around the world. If I were interested, where would I be able to get involved in one of those meetups, and what goes on at these meetups?

Luke: Because of how sensitive humans are to context, surrounding yourself with other people who are learning rationality and trying to improve themselves is one of the most powerful ways to improve yourself.

The easiest way to find a Less Wrong meetup near you is probably to check the most recent front-page post on Less Wrong whose title begins "Weekly LW Meetups…" That post lists all the Less Wrong meetups happening that week.

Every Less Wrong meetup has different people and different activities. You can contact the organizer of the meetup nearest you for more information.

Michael: The recently released Strategic Plan mentions intentions to "spin off rationality training to another organization so that the Machine Intelligence Research Institute can focus on Friendly AI research." Can you tell us something about that?

Luke: We believe that building a large community of rationality enthusiasts is crucial to the success of our mission. The Less Wrong rationality community has been an indispensable source of human and financial capital for the Machine Intelligence Research Institute. However, we understand that it's confusing for one organization to be devoted to two such apparently different fields: advanced artificial intelligence and human rationality. That's why we are working toward launching a new organization devoted to rationality training. The Machine Intelligence Research Institute, then, will be more solely devoted to the safety of advanced artificial intelligence.

Section Four: Machine Intelligence Research Institute Operations

Michael: You and Louie Helm were hired on September 1st. How did the two of you come to be hired?

Luke: The Machine Intelligence Research Institute doesn't hire someone unless they do quite a bit of volunteer work first. I first came to the Machine Intelligence Research Institute as a visiting fellow. During the next few months I co-organized and taught at the Rationality Minicamp, taught classes for the longer Rationality Boot Camp, wrote dozens of articles on metaethics and rationality for Less Wrong, wrote the Intelligence Explosion FAQ and IntelligenceExplosion.com, led the writing of a strategic plan for the organization, and did many smaller tasks.

Louie Helm arrived in Berkeley shortly after I did. As a past visiting fellow, Louie was the one who had recommended I apply to the visiting fellows program. Louie did some teaching for the Rationality Boot Camp, helped me develop the strategic plan, built a donor database so that our contact with donors would be more consistent, optimized the Machine Intelligence Research Institute's finances, did a great deal of fundraising, and much more.

We both produced a lot of value as volunteers during those months, so the board hired us: me as a research fellow, and Louie as Director of Development.

Michael: What does Louie Helm do as Director of Development?

Luke: We're a small team, so we all do more than our titles say, and Louie is no exception. Louie raises funds, communicates with donors, applies for grants, and so on. But he also launched the Research Associates program, coordinates the volunteer network, helps organize the Singularity Summit, seeks out potential new researchers, and more.

Michael: The Machine Intelligence Research Institute just raised $250,000 in our Summer Challenge Grant. What will those funds be spent on?

Luke: We're very pleased with the results of the Summer Challenge. No single person gave more than $25,000, so the grant succeeded because so many different people gave. More than 40 people gave $1,000 or more, which shows a great deal of trust from our core supporters.

It costs $368,000 per year to support our lean staff of eight full-time employees, four of whom are researchers: Eliezer Yudkowsky, Anna Salamon, Carl Shulman, and myself. The money will also be used to run the 2011 Singularity Summit, though we expect that event to be cash-positive this year. We plan to redesign the Singinst.org website so that it is easier to navigate and provides greater organizational transparency. With sufficient funding after the Summit, we hope to hire additional researchers.

Michael: Carl Shulman was hired not long before you and Louie. What is his role in the organization?

Luke: Carl also did quite a lot of work for the Machine Intelligence Research Institute before being hired. He has written several papers and given a few talks, many of which you can find on our website. He continues to work on a variety of research projects, and collaborates closely with researchers at Oxford's Future of Humanity Institute.

Michael: What kind of new researchers is the Machine Intelligence Research Institute looking for?

Luke: Mathematicians, mostly. If you're a brilliant math student and want to live and work in the Bay Area, where you'll be surrounded by smart, high-impact, altruistic people, apply here.