A reply to Francois Chollet on intelligence explosion


This is a reply to Francois Chollet, the inventor of the Keras wrapper for the Tensorflow and Theano deep learning systems, on his essay “The impossibility of intelligence explosion.”

In response to critics of his essay, Chollet tweeted:

If you post an argument online, and the only opposition you get is braindead arguments and insults, does it confirm you were right? Or is it just self-selection of those who argue online?

And he earlier tweeted:

Don’t be overly attached to your views; some of them are probably incorrect. An intellectual superpower is the ability to consider every new idea as if it might be true, rather than merely checking whether it confirms/contradicts your current views.

Chollet’s essay seemed mostly on-point and kept to the object-level arguments. I am led to hope that Chollet is perhaps somebody who believes in abiding by the rules of a debate process, a fan of what I’d consider Civilization; and if his entry into this conversation has been met only with braindead arguments and insults, he deserves a better reply. I’ve tried here to walk through some of what I’d consider the standard arguments in this debate as they bear on Chollet’s statements.

As a meta-level point, I hope everyone agrees that an invalid argument for a true conclusion is still a bad argument. To arrive at the correct belief state, we want to sum up all the valid support, and only the valid support. To get there, we need the notion of judging arguments on their local structure and validity, and of not excusing fallacies if they happen to support a side we agree with for other reasons.

My reply to Chollet doesn’t try to carry the entire case for the intelligence explosion as such. I am only going to discuss my take on the validity of Chollet’s particular arguments. Even if the statement “an intelligence explosion is impossible” happens to be true, we still don’t want to accept any invalid arguments in favor of that conclusion.

Without further ado, here are my thoughts in response to Chollet.

The basic premise is that, in the near future, a first “seed AI” will be created, with general problem-solving abilities slightly surpassing that of humans. This seed AI would start designing better AIs, initiating a recursive self-improvement loop that would immediately leave human intelligence in the dust, overtaking it by orders of magnitude in a short time.

I agree that this is more or less what I meant by “seed AI” when I coined the term back in 1998. Today, nineteen years later, I would talk about the general problem of “capability gain”, or how the power of a cognitive system scales with increased resources and further optimization. The idea of recursive self-improvement is just one input into the general question of capability gain. For example, we have recently seen some impressively fast scaling of capabilities without anything I would consider to involve seed AI. That said, I think most of Chollet’s concerns about “self-improvement” apply to capability gain in general, so I won’t quarrel with the topic of conversation.

Proponents of this theory also regard intelligence as a kind of superpower, conferring its holders with almost supernatural capabilities to shape their environment

A good description of a human from the perspective of a chimpanzee.

From a certain standpoint, the civilization of the year 2017 could be said to have “magic” from the perspective of 1517. We can more precisely characterize this gap by saying that we in 2017 can solve problems using strategies that 1517 couldn’t recognize as a “solution” if described in advance, because our strategies depend on laws and generalizations not known in 1517. E.g., I could show somebody in 1517 a design for a compressor-based air conditioner, and they would not be able to recognize this as “a good strategy for cooling your house” in advance of observing the outcome, because they don’t yet know about the temperature-pressure relation. A fancy term for this would be “strong cognitive uncontainability”; a metaphorical term would be “magic”, although of course we are not doing anything genuinely supernatural. A similar but larger gap holds between a human and a smaller brain (aka a chimpanzee).

It would not be entirely unprecedented for a large gap in cognitive ability to correspond to a large gap in pragmatic capability. I think many people would agree with the description of intelligence as the human superpower, independently of their views on the intelligence explosion hypothesis.

- As seen, for example, in the science fiction movie Transcendence (2014).

I agree that the public impression of things is something somebody should care about. If I take a ride-share and mention that I do anything involving AI, half the time the driver says, “Oh, like Skynet!”, for understandable reasons. But if we are trying to figure out the factual question of whether an intelligence explosion is possible and probable, then what matters is weighing the best arguments on every side of each relevant point, not the popular arguments. For that purpose, it doesn’t matter whether Deepak Chopra has written at greater length about quantum mechanics than any actual physicist.

Thankfully, Chollet doesn’t particularly attack Kurzweil in the rest of the essay, so I’ll leave that there.

The intelligence explosion narrative equates intelligence with the general problem-solving ability displayed by individual intelligent agents — by current human brains, or future electronic brains.

I don’t see what work the word “individual” is doing in this sentence. From our perspective, if something viewed from the outside appears to behave in a coherent, goal-directed way, it doesn’t matter whether it is best regarded as a hundred agents or a single agent; the pragmatic consequences are the same. I do think it’s fair to say that what I have in mind is an “agency”, something that from our outside perspective appears to act in a coherent, goal-directed way.

The first issue I see with the intelligence explosion theory is a failure to recognize that intelligence is necessarily part of a broader system: a vision of intelligence as a “brain in a jar” that can be made arbitrarily intelligent independently of its situation.

I don’t know of myself, Nick Bostrom, or any of the other major technical voices in this field claiming that problem-solving ability can be scaled up independently of situation or environment.

That said, some systems do well across a broad variety of structured low-entropy environments. E.g., a human brain functions much better than other primate brains across a very wide range of environments, including many that natural selection did not explicitly optimize for. We still function on the Moon, because the Moon has enough in common with the Earth on a sufficiently deep meta-level; e.g., our past experience continues to operate there. Now, if you toss us into a universe whose future bears no relation to its past, we indeed do not do very well in that “situation”, but this is pragmatically irrelevant to the impact of AI on our own real world, where the future does bear a relation to the past.

In particular, there is no such thing as “general” intelligence. On an abstract level, we know this for a fact via the “no free lunch” theorem — stating that no problem-solving algorithm can outperform random chance across all possible problems.

Scott Aaronson’s reaction: “Citing the ‘No Free Lunch Theorem’—i.e., the (trivial) statement that you can’t outperform brute-force search on random instances of an optimization problem—to claim anything useful about the limits of AI, is not a promising sign.”

It seems worth spelling out an as-simple-as-possible special case of this point in mathy detail, since it looked to me like a central issue given the rest of Chollet’s essay. I expect this math isn’t new to Chollet, but reprise it here to establish common language and for the benefit of everyone else reading along.

Laplace’s Rule of Succession, invented by Thomas Bayes, gives us a simple rule for predicting future elements of a binary sequence based on previously observed elements. Let’s take this binary sequence to be a series of “heads” and “tails” generated by some sequence generator called a “coin”, not assumed to be fair. In the standard problem setup yielding the Rule of Succession, our state of prior ignorance is that we think there is some frequency \(\theta\) with which the coin comes up heads, and for all we know \(\theta\) is equally likely to take on any real value between \(0\) and \(1\). We can do some Bayesian inference and conclude that after seeing \(M\) heads and \(N\) tails, we should predict that the odds for heads : tails on the next coinflip are:

$$\frac{M + 1}{M + N + 2} : \frac{N + 1}{M + N + 2}$$

(See Laplace’s Rule of Succession for the proof.)

This rule yields predictions such as: “If you have not yet observed any coinflips, assign probability 50-50 to heads and tails,” or “If you have seen four heads and no tails, assign probability 1/6, rather than 0, to the next flip being tails,” or “If you have seen the coin come up heads 150 times and tails 75 times, assign roughly 2/3 probability to heads on the next flip.”
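As a minimal sketch (my own illustration in Python, not anything from Chollet’s essay), here is the rule and the three example predictions above:

```python
from fractions import Fraction

def laplace_rule(heads, tails):
    """Posterior probability of heads on the next flip, given a uniform
    prior over the coin's bias and the observed counts."""
    return Fraction(heads + 1, heads + tails + 2)

print(laplace_rule(0, 0))     # 1/2     -> 50-50 before any observations
print(laplace_rule(4, 0))     # 5/6     -> i.e. probability 1/6 that the next flip is tails
print(laplace_rule(150, 75))  # 151/227 -> roughly 2/3 probability of heads
```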

Now this rule does not do super-well in any possible kind of environment. In particular, it doesn’t do any better than the maximum-entropy prediction “the next flip has a 50% probability of being heads, or tails, regardless of what we have observed previously” if the environment is in fact a fair coin. In general, there is “no free lunch” on predicting arbitrary binary sequences; if you assign greater probability mass or probability density to one binary sequence or class of sequences, you must have done so by draining probability from other binary sequences. If you begin with the prior that every binary sequence is equally likely, then you never expect any algorithm to do better in general than maximum entropy, even if that algorithm does better on one particular random draw.

On the other hand, if you start from the prior that every binary sequence is equally likely, you never notice anything a human would consider an obvious pattern. If you start from the maxentropy prior, then after observing a coin come up heads a thousand times, and tails never, you still predict 50-50 on the next draw; because on the maxentropy prior, the sequence “one thousand heads followed by tails” is exactly as likely as “one thousand heads followed by heads”.

The inference rule instantiated by Laplace’s Rule of Succession does better in generic low-entropy universes. It doesn’t start out with specific knowledge; it doesn’t start out assuming the coin is biased toward heads, or biased toward tails. If the coin is biased toward heads, Laplace’s Rule will learn that. If the coin is biased toward tails, Laplace’s Rule will also quickly learn that from observation. And if the coin is in fact fair, Laplace’s Rule will rapidly converge on probabilities in the 50-50 range, doing not much worse per coinflip than if we had started with the max-entropy prior.

Can you do better than Laplace’s Rule of Succession? Sure; if the environment’s probability of generating heads is equal to 0.73 and you start out knowing that, then you can guess on the very first round that the probability of seeing heads is 73%. But even with this non-generic and highly specific knowledge built in, you do not do very much better than Laplace’s Rule of Succession, unless the first few coinflips are unusually important to your future survival. Laplace’s Rule will probably figure out that the answer is around 3/4 within the first few dozen rounds, and that the answer is around 73% after a few hundred rounds, and if the answer isn’t 0.73 it can handle that case too.
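A quick simulation of that claim (again a sketch of my own, not anything from the essay; the parameters are arbitrary): compare the average log loss of the predictor that always says 0.73 against Laplace’s Rule, on a coin whose true bias really is 0.73.

```python
import math
import random

random.seed(0)
TRUE_P = 0.73
flips = [random.random() < TRUE_P for _ in range(10_000)]  # True = heads

def log_loss(prob_heads, outcome):
    # Bits of surprise at what actually happened.
    return -math.log2(prob_heads if outcome else 1 - prob_heads)

fixed_total = laplace_total = 0.0
heads = tails = 0
for flip in flips:
    fixed_total += log_loss(TRUE_P, flip)                                # knows the bias in advance
    laplace_total += log_loss((heads + 1) / (heads + tails + 2), flip)   # Laplace's Rule
    heads, tails = heads + flip, tails + (not flip)

print(f"fixed-0.73 rule: {fixed_total / len(flips):.4f} bits/flip")
print(f"Laplace's Rule : {laplace_total / len(flips):.4f} bits/flip")
# The two averages agree to within a few thousandths of a bit; the whole
# advantage of knowing the bias in advance is spent in the first few dozen flips.
```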

Is Laplace’s Rule the most general rule for inferring binary sequences? Obviously not; for example, if you saw the initial sequence…

$$HTHTHTHTHTHTHT\ldots$$

…then you would probably guess with high though not infinite probability that the next element generated would be \(H\). This is because you have the ability to recognize a kind of pattern which Laplace’s Rule does not, i.e., alternating heads and tails. Of course, your ability to recognize this pattern only helps you in environments that sometimes generate a pattern like that—which the real universe sometimes does. If we tossed you into a universe which just as frequently presented you with ‘tails’ after observing a thousand perfect alternating pairs, as it did ‘heads’, then your pattern-recognition ability would be useless. Of course, a max-entropy universe like that will usually not present you with a thousand perfect alternations in the initial sequence to begin with!

One extremely general but utterly intractable inference rule is Solomonoff induction, a universal prior which assigns probability to every computable sequence (or computable probability distribution over sequences) in proportion to its algorithmic simplicity, that is, in inverse proportion to the exponential of the size of the program required to specify the computation. A Solomonoff inductor can learn from observation any sequence that can be generated by a compact program, with the choice of universal computer making at most a bounded difference in the amount of evidence required or the number of errors made. Of course, to whatever extent a universe is hypothesized to have structure that avoids algorithmically compressible sequences, a Solomonoff inductor will do slightly worse than the max-entropy prior, though not much worse. Thankfully, we don’t live in a universe like that.
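To make “in proportion to its algorithmic simplicity” concrete, the universal prior is usually written along the following lines (the standard formalism as I understand it, not notation from Chollet’s essay): the prior probability of observing a sequence beginning with \(x\) is

$$M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)},$$

where the sum ranges over all programs \(p\) that cause the universal computer \(U\) to output a sequence starting with \(x\), and \(\ell(p)\) is the length of \(p\) in bits; longer programs are penalized exponentially, which is what makes the inductor favor compactly describable sequences.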

It would then seem perverse not to recognize that for large enough milestones we can see an informal ordering from less general inference rules to more general inference rules, those that do well in an increasingly broad and complicated variety of environments of the sort that the real world is liable to generate:

The rule that always assigns probability 0.73 to heads on each round performs optimally within the environment where each flip independently has a 0.73 probability of coming up heads.

Laplace’s Rule of Succession will come to do equally well as this, given a couple of hundred initial coinflips to see the pattern; and Laplace’s Rule also does well in many other low-entropy universes besides, such as those where each flip has 0.07 probability of coming up heads.

A human is more general still, and can also spot patterns like \(HTTHTTHTTHTT\) where Laplace’s Rule would merely converge to assigning probability 1/3 of each flip coming up heads, while the human becomes increasingly certain that a simple temporal process is at work which allows each succeeding flip to be predicted with near-certainty.

If anyone ever happened across a hypercomputational device and built a Solomonoff inductor out of it, the Solomonoff inductor would be more general than the human and do well in any environment with a programmatic description substantially smaller than the amount of data the Solomonoff inductor could observe.
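A toy way to see this ordering in action (my own sketch; the “period detector” below is a hypothetical stand-in for the human pattern-spotter, not a standard algorithm): score three predictors by average log loss on the repeating \(HTT\) sequence from above.

```python
import math

def maxent(prefix):
    return 0.5  # probability of 'H', regardless of history

def laplace(prefix):
    return (prefix.count('H') + 1) / (len(prefix) + 2)

def period_detector(prefix, max_period=8):
    # Hypothetical stand-in for the human pattern-spotter: if the history so
    # far repeats with some short period, predict the continuation with high
    # (though not infinite) confidence; otherwise fall back to Laplace's Rule.
    for period in range(1, max_period + 1):
        if len(prefix) >= 2 * period and all(
            prefix[i] == prefix[i % period] for i in range(len(prefix))
        ):
            return 0.99 if prefix[len(prefix) % period] == 'H' else 0.01
    return laplace(prefix)

def avg_log_loss(predict, sequence):
    total = 0.0
    for i, outcome in enumerate(sequence):
        p = predict(sequence[:i])
        total += -math.log2(p if outcome == 'H' else 1 - p)
    return total / len(sequence)

seq = 'HTT' * 100
for name, predict in [('maxent', maxent), ('Laplace', laplace), ('period detector', period_detector)]:
    print(f'{name:15s}: {avg_log_loss(predict, seq):.3f} bits/flip')
```

On this sequence the max-entropy predictor pays one bit per flip, Laplace’s Rule converges toward the roughly 0.92 bits per flip of a 1/3-biased coin, and the period detector pays almost nothing after the first couple of repetitions; on a genuinely max-entropy sequence all three would pay about one bit per flip.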

None of these predictors needs to do much worse than the max-entropy prediction even in the case where the environment really is max-entropy. The lunch may not be free, but it isn’t very expensive even by the standards of the hypothesized random universe. Not that this matters for anything, since we don’t live in a max-entropy universe and therefore don’t care how much worse we would do if we did.

Some earlier informal discussion of this point can be found in No-Free-Lunch theorems are usually irrelevant.

If intelligence is a problem-solving algorithm, then it can only be understood with respect to a specific problem.

Some problems are more general than others; not relative to the max-entropy prior, which treats every subclass of problems on an equal footing, but relative to the low-entropy universe we actually live in, a universe in which a million observed heads are more likely to be followed by another H than by a T on the next round. Similarly, relative to the problem classes our low-entropy universe throws at us, “figure out what simple computation generates this sequence” is more general than the human, and the human is more general than “figure out the frequency of heads or tails in this sequence.”

Human intelligence is a problem-solving algorithm that can be understood with respect to a specific problem class, which in the pragmatic sense can be very, very broad.

In a more concrete way, we can observe this empirically in that all intelligent systems we know are highly specialized. The intelligence of the AIs we build today is hyper-specialized in extremely narrow tasks, like playing Go, or classifying images into 10,000 known categories. The intelligence of an octopus is specialized in the problem of being an octopus. The intelligence of a human is specialized in the problem of being human.

The problem that humans solve is a lot more general than the problem that octopi solve, which is why we can walk on the Moon and octopi can’t. We are not absolutely general; the Moon still has a certain something in common with the Earth. Scientific induction still works on the Moon. It is not the case that when you arrive on the Moon, the charge on the next electron is unrelated to all the charges previously observed. And if you did toss a human into a universe like that, the human would stop working. But the problem humans solve is general enough to pass from oxygen environments to the vacuum.

What would happen if we were to put a freshly created human brain in the body of an octopus, and let it live at the bottom of the ocean? Would it even learn to use its eight-legged body? Could it survive past a few days? … The brain has hardcoded conceptions of having a body with hands that can grab, a mouth that can suck, eyes mounted on a moving head that can be used to visually follow objects (the vestibulo-ocular reflex), and these preconceptions are required for human intelligence to start taking control of the human body.

It could be the case that in this sense a human’s motor cortex is analogous to an inference rule that always predicts heads with 0.73 probability on each round, and cannot learn to predict 0.07 instead. It could also be that our motor cortex is more like a Laplace inductor that starts out with 72 heads and 26 tails pre-observed, biased toward that particular ratio, but which can eventually learn 0.07 after another thousand rounds of observation.

This is an empirical question, but I’m not sure why it is a very relevant one. The human motor cortex may well be specialized, starting with prior knowledge rather than from scratch, given an ancestral environment in which we were never randomly plugged into octopus bodies. But so what? If you sat some humans down at game consoles and gave them a robot as weird as an octopus to learn to control, I would expect their deliberate, whole-brain learning ability to do better at this than the raw motor cortex would. Humans using their whole intelligence, plus some simple controls, can learn to drive cars and fly airplanes, even though there were no cars or planes in our ancestral environment.

Nor do we have any reason to believe that the human motor cortex is the limit of what is possible. If we had sometimes been handed randomly generated bodies, I would expect us to already have motor cortices that could adapt to an octopus. Maybe an analogue of AlphaGo Zero could put in three days of self-play at controlling randomly generated bodies, and come out able to rapidly learn any body in that class. Or a human allowed to use Keras could figure out how to control an octopus arm using ML. That last case is the one most analogous to a hypothetical seed AI.

Empirical evidence is relatively scarce, but from what we know, children who grow up outside the nurturing environment of human culture don’t develop any human intelligence. Feral children raised in the wild from their earliest years effectively become animals, and can no longer acquire human behaviors or language when they return to civilization.

The human visual cortex develops poorly without visual input. This does not mean that our visual cortex is a simple blank slate, with all the information for processing vision stored in the environment and the visual cortex merely adapting to it from scratch; if that were true, we would expect it to control octopus eyes just as easily. The visual cortex requires visual input because of the logic of evolutionary biology: if you make X an environmental constant, the species is liable to acquire genes that assume the presence of X. It has no reason not to. The expected result is that the visual cortex contains a large amount of genetic complexity that makes it much better at vision than a generic cortical sheet, but some of that complexity requires visual input during childhood in order to unfold correctly.

But if in the ancestral environment children had grown up in total darkness 10% of the time, before seeing light for the first time in adulthood, it seems extremely likely that we could have evolved to not require visual input in order for the visual cortex to wire itself up correctly. E.g., the retina could have evolved to send in simple hallucinatory shapes that would cause the rest of the system to wire itself up to detect those shapes, or something like that.

Human children reliably grow up around other humans, so it would not be surprising if humans’ basic intelligence-bootstrapping processes were built in a way that assumes the environment contains this information. We therefore cannot infer how much information must be “stored” in the environment, or that the information needed to bootstrap an intelligent control process could not have been stored genetically instead. This is not a problem that evolution had any reason to try to solve, so we cannot infer from the lack of an evolved solution that such a solution is impossible.

And even if there’s no evolved solution, this doesn’t mean you can’t intelligently design a solution. Natural selection never built animals with steel bones or wheels for limbs, because there’s no easy incremental pathway there through a series of smaller changes, so those designs aren’t very evolvable; but human engineers still build skyscrapers and cars, etcetera.

In humans, the art of Go is stored in a vast repository of historical games and other humans; the Go masters among us grew up playing Go against superior human masters as children, rather than inventing the entire art from scratch. Nor would you expect even the most talented human, reinventing the gameplay entirely on their own, to be able to win a match against a first-dan pro.

But then, AlphaGo was initialized on this vast repository of games in stored form, rather than needing to actually play against human masters.

And then less than two years later, AlphaGo Zero taught itself to play at a vastly human-superior level, in three days, by self-play, from scratch, using a much simpler architecture with no ‘instinct’ in the form of precomputed features.

Now, one could perhaps postulate some sharp and total distinction between the problem AlphaGo Zero solved and the more general problems humans solve, such that our vast edifice of Go knowledge can be surpassed by a self-taught system, but our general cognitive problem-solving abilities can neither be compressed into a database for initialization nor taught by self-play. But why believe that? Human civilization taught itself by some form of self-play; we did not learn from aliens. More importantly, I don’t see a sharp distinction between Laplace’s Rule, AlphaGo Zero, humans, and Solomonoff inductors; they just learn successively more general problem classes. If AlphaGo Zero can waltz past all human knowledge of Go, I don’t see a strong reason why AGI Zero can’t waltz past the human grasp of how to reason well, or how to perform scientific investigations, or how to learn from the data in online papers and databases.

This point could perhaps be counterargued, but it hasn’t yet been counterargued to my knowledge, and it certainly isn’t settled by any theorem of computer science known to me.

If intelligence is fundamentally linked to specific sensorimotor modalities, a specific environment, a specific upbringing, and a specific problem to solve, then you cannot hope to arbitrarily increase the intelligence of an agent merely by tuning its brain — no more than you can increase the throughput of a factory line by speeding up the conveyor belt. Intelligence expansion can only come from a co-evolution of the mind, its sensorimotor modalities, and its environment.

It is not obvious to me why this matters. Suppose an AI takes three days to learn to use an octopus body. So what?

That is: We agree that it’s a mathematical truth that you need “some amount” of experience to go from a broadly general prior to a specific problem. That doesn’t mean that the required amount of experience is large for pragmatically important problems, or that it takes three decades instead of three days. We cannot casually pass from “proven: some amount of X is required” to “therefore: a large amount of X is required” or “therefore: so much X is required that it slows things down a lot”. (See also: the harmless supernova fallacy, “bounded, therefore harmless.”)

If the gears of your brain were the defining factor of your problem-solving ability, then those rare humans with IQs far outside the normal range of human intelligence would live lives far outside the scope of normal lives, would solve problems previously thought unsolvable, and would take over the world, just as some people fear smarter-than-human AI will do.

“von Neumann? Newton? Einstein?” —Scott Aaronson

More importantly: Einstein et al. didn’t have brains that were 100 times larger than a human brain, or 10,000 times faster. By the logic of sexual recombination within a sexually reproducing species, Einstein et al. could not have had a large amount of de novo software that isn’t present in a standard human brain. (That is: An adaptation with 10 necessary parts, each of which is only 50% prevalent in the species, will only fully assemble 1 out of 1000 times, which isn’t often enough to present a sharp selection gradient on the component genes; complex interdependent machinery is necessarily universal within a sexually reproducing species, except that it may sometimes fail to fully assemble. You don’t get “mutants” with whole new complex abilities a la the X-Men.)
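Spelling out the arithmetic behind that “1 out of 1000” figure: if each of the 10 necessary parts is independently present with 50% probability, the chance of all of them co-occurring in one individual is

$$0.5^{10} = \frac{1}{1024} \approx \frac{1}{1000}.$$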

Humans are metaphorically all compressed into one tiny little dot in the vastness of mind design space. We’re all the same make and model of car running the same engine under the hood, in slightly different sizes and with slightly different ornaments, and sometimes bits and pieces are missing. Even with respect to other primates, from whom we presumably differ by whole complex adaptations, we have 95% shared genetic material with chimpanzees. Variance between humans is not something that thereby establishes bounds on possible variation in intelligence, unless you import some further assumption not described here.

The standard reply to anyone who deploys, e.g., a Gödelian argument for the impossibility of AGI is to ask, “Why doesn’t your argument rule out humans?”

Similarly, a standard question that someone arguing against the possibility of superhuman general intelligence needs to answer is: “Why doesn’t your argument rule out humans exhibiting pragmatic cognitive performance so much greater than that of chimpanzees?”

Specialized to this case, we’d ask, “Why doesn’t the fact that the smartest chimpanzees aren’t building rockets let us infer that no human can walk on the Moon?”

No human, not even John von Neumann, could have reinvented the gameplay of Go on their own and gone on to stomp the world’s greatest Masters. AlphaGo Zero did so in three days. It’s clear that in general, “We can infer the bounds of cognitive power from the bounds of human variation” is false. If there’s supposed to be some special case of this rule which is true rather than false, and forbids superhuman AGI, that special case needs to be spelled out.

Intelligence is not a superpower. Exceptional intelligence does not, on its own, confer you with proportionally exceptional power over your circumstances.

…says a Homo sapiens, surrounded by countless powerful artifacts whose abilities, let alone mechanisms, would be utterly incomprehensible to the organisms of any less intelligent Earthly species.

A high-potential human 10,000 years ago would have been raised in a low-complexity environment, likely speaking a single language with fewer than 5,000 words, would never have been taught to read or write, would have been exposed to a limited amount of knowledge and to few cognitive challenges. The situation is a bit better for most contemporary humans, but there is no indication that our environmental opportunities currently outpace our cognitive potential.

Does this mean that technology 100 years from now should be no more advanced than today’s? If not, in what sense are we currently grasping every opportunity our environment offers?

Is the idea that opportunities can only be taken in sequence, one after another, so that today’s technology only offers the possibilities of today’s advances? Then why couldn’t a more powerful intelligence run through them much faster, and rapidly build up those opportunities?

A smart human raised in the jungle is but a hairless ape. Similarly, an AI with a superhuman brain, dropped into a human body in our modern world, would likely not develop greater capabilities than a smart contemporary human. If it could, then exceptionally high-IQ humans would already be displaying proportionally exceptional levels of personal attainment; they would achieve exceptional levels of control over their environment, and solve major outstanding problems — which they don’t in practice.

It can’t eat the Internet? It can’t eat the stock market? It can’t crack the protein folding problem and deploy arbitrary biological systems? It can’t get anything done by thinking a million times faster than we do? All this is to be inferred from observing that the smartest human was no more impressive than John von Neumann?

I don’t see the strong Bayesian evidence here. It seems easy to imagine worlds such that you can get a lot of pragmatically important stuff done if you have a brain 100 times the size of John von Neumann’s, think a million times faster, and have maxed out and transcended every human cognitive talent and not just the mathy parts, and yet have the version of John von Neumann inside that world be no more impressive than we saw. How then do we infer from observing John von Neumann that we are not in such worlds?

We know that the rule of inferring bounds on cognition by looking at human maximums doesn’t work on AlphaGo Zero. Why does it work to infer that “An AGI can’t eat the stock market because no human has eaten the stock market”?

However, these billions of brains, accumulating knowledge and developing external intelligent processes over thousands of years, implement a system — civilization — which may eventually lead to artificial brains with greater intelligence than that of a single human. It is civilization as a whole that will create superhuman AI, not you, nor me, nor any individual. A process involving countless humans, over timescales we can barely comprehend. A process involving far more externalized intelligence — books, computers, mathematics, science, the internet — than biological intelligence…

Will future superhuman AIs be able to develop AIs greater than themselves only over centuries, and no more than any single one of us could?

The premise is that brains of a particular size and composition, running a particular kind of software (the human kind), can only solve problem X (where in this case X equals “build an AGI”) if they cooperate in a group of size N, run for a certain length of time, and build up some amount Z of external cognitive prostheses. Fine. Humans were not particularly specialized by natural selection for the AI-building problem. Why wouldn’t an AGI with larger brains, running faster, using less insane software, containing its own high-speed programmable cognitive hardware to which it could interface directly in a high-bandwidth way, and perhaps specialized on computer programming in exactly the way that human brains aren’t, get more done on net than human civilization? Human civilization tackling Go devoted a lot of thinking time, parallel search, and cognitive prostheses in the form of playbooks, and then AlphaGo Zero blew past it in three days, etcetera.

To strengthen this argument:

We could start from the premise, “For all problems X, if human civilization puts a lot of effort into X and gets as far as W, no single agency can get significantly further than W on its own,” and from this premise deduce that no single AGI will be able to build a new AGI shortly after the first AGI is built.

However, this premise is obviously false, as even Deep Blue bears witness. Is there supposed to be some special case of this generalization which is true rather than false, and which says something about the “build an AGI” problem that it doesn’t say about the “win a chess game” problem? Then what is that special case, and why should we believe it?

Also relevant: In the game of Kasparov vs. The World, the world’s best player Garry Kasparov played a single game against thousands of other players coordinated in an online forum, led by four chess masters. Garry Kasparov’s brain eventually won, against thousands of times as much brain matter. This tells us something about the inefficiency of human scaling with simple parallelism of the nodes, presumably due to the inefficiency and low bandwidth of human speech separating the would-be arrayed brains. It says that you do not need a thousand times as much processing power as one human brain to defeat the parallel work of a thousand human brains. It is the sort of thing that can be done even by one human who is a little more talented and practiced than the components of that parallel array. Humans often just don’t agglomerate very efficiently.

However, future AIs, much like humans and the other intelligent systems we’ve produced so far, will contribute to our civilization, and our civilization, in turn, will use them to keep expanding the capabilities of the AIs it produces.

This takes in the premise “AIs can only output a small amount of cognitive improvement in AI abilities” and reaches the conclusion “increase in AI capability will be a civilizationally diffuse process.” I’m not sure that the conclusion follows, but would mostly dispute that the premise has been established by previous arguments. To put it another way, this particular argument does not contribute anything new to support “AI cannot output much AI”, it just tries to reason further from that as a premise.

Our problem-solving abilities (in particular, our ability to design AI) are already constantly improving, because these abilities do not reside primarily in our biological brains, but in our external, collective tools. The recursive loop has been in action for a long time, and the rise of “better brains” will not qualitatively affect it — no more than any previous intelligence-enhancing technology.

From Arbital’s harmless supernova fallacy page:

  • Precedented, therefore harmless: “Indeed, we’ve had these for a while: there are already devices that produce ‘supernova-level’ heat by fusing elements from the periodic table; they’re called thermonuclear weapons. Society has proven able to regulate existing thermonuclear weapons and keep them out of the hands of terrorists; there is no reason the same should not hold for supernovas.” (Noncentral fallacy / continuum fallacy: putting supernovas on a continuum with hydrogen bombs does not make them manageable by similar strategies, nor does finding a category such that it contains both supernovas and hydrogen bombs.)

Our brains themselves were never a significant bottleneck in the AI-design process.

A startling assertion. Let’s say we could speed up AI-researcher brains by a factor of 1000 within some virtual uploaded environment, not permitting them to do new physics or biology experiments, but still giving them access to computers within the virtual world. Are we to suppose that AI development would take the same amount of sidereal time? I for one would expect the next version of Tensorflow to come out much sooner, even taking into account that most individual AI experiments would be less grandiose because the sped-up researchers would need those experiments to complete faster and use less computing power. The scaling loss would be less than total, just like adding CPUs a thousand times as fast to the current research environment would probably speed up progress by at most a factor of 5, not a factor of 1000. Similarly, with all those sped-up brains we might see progress increase only by a factor of 50 instead of 1000, but I’d still expect it to go a lot faster.

Then in what sense are we not bottlenecked on the speed of human brains in order to build up our understanding of AI?

Crucially, the civilization-level loop of intelligence improvement has only resulted in linear progress in our problem-solving ability over time.

I obviously don’t consider myself a Kurzweilian, but even I have to object that this seems like an odd assertion to make about the past 10,000 years.

Does recursively improving X mathematically result in X growing exponentially? No; in short, because no complex real-world system can be modeled as `X(t + 1) = X(t) * a, a > 1`.

This seems like a really odd assertion, refuted at a glance by world GDP. Note that this can’t be an isolated observation, because it also implies that every necessary input into world GDP is managing to keep up, and that every input which isn’t managing to keep up has been economically bypassed, at least with respect to recent history.

We don’t have to speculate about whether an “explosion” would happen the moment an intelligent system starts optimizing its own intelligence. As it happens, most systems are recursively self-improving. We’re surrounded with them… Mechatronics is recursively self-improving — better manufacturing robots can manufacture better manufacturing robots. Military empires are recursively self-expanding — the larger your empire, the greater your military means to expand it further. Personal investing is recursively self-improving — the more money you have, the more money you can make.

If we define “recursive self-improvement” to mean merely “causal process containing at least one positive loop” then the world abounds with such, that is true. It could still be worth distinguishing some feedback loops as going much faster than others: e.g., the cascade of neutrons in a nuclear weapon, or the cascade of information inside the transistors of a hypothetical seed AI. This seems like another instance of “precedented therefore harmless” within the harmless supernova fallacy.

Software is just one cog in a bigger process — our economies, our lives — just like your brain is just one cog in a bigger process — human culture. This context puts a hard limit on the maximum potential usefulness of software, much like our environment puts a hard limit on how intelligent any individual can be — even if gifted with a superhuman brain.

“A chimpanzee is just one cog in a bigger process—the ecology. Why postulate some kind of weird superchimp that can expand its superchimp economy at vastly greater rates than the amount of chimp-food produced by the current ecology?”

To be specific, suppose an agent is smart enough to crack inverse protein structure prediction, i.e., it can build its own biology, and the laws of physics permit any amount of post-biological molecular machinery. In what sense does it still depend on most of the economic outputs of the rest of human culture? Why not just start building von Neumann machines?

Beyond contextual hard limits, even if one part of a system has the ability to recursively self-improve, other parts of the system will inevitably start acting as bottlenecks. Antagonistic processes will arise in response to recursive self-improvement and squash it.

Smart agents will try to deliberately bypass these bottlenecks and often succeed, which is why the world economy continues to grow at an exponential pace instead of having run out of wheat in 1200 CE. It continues to grow at an exponential pace despite even the antagonistic processes of… but I’d rather not divert this conversation into politics.

Now to be sure, the smartest mind can’t think faster than light, and if our characterization of the laws of physics is anywhere near correct, its exponential growth will eventually bottleneck on atoms and similar physical limits. But to say that it is therefore not a concern is the “bounded, therefore harmless” variant of the harmless supernova fallacy. A supernova isn’t infinitely hot, but it is still quite hot, and you can’t survive one just by wearing a Nomex jumpsuit.

When it comes to intelligence, inter-system communication arises as a brake on any improvement of the underlying modules: a brain made of smarter parts would have greater trouble coordinating them;

Why doesn’t this prove that humans can’t be much smarter than chimps?

What we can infer about the scaling laws that were governing human brains from the evolutionary record is a complicated topic. On this particular point I’d refer you to section 3.1, “Returns on brain size”, pp. 35–39, in my semitechnical discussion of the returns on cognitive investment. The conclusion there is that we can infer from the increase in equilibrium brain size over the last few million years of hominid history, plus the basic logic of population genetics, that over this time period there were increasing marginal returns to brain size with increasing time and presumably increasingly sophisticated neural ‘software’. I also remark that human brains are not the only possible cognitive computing fabrics.

It is perhaps no coincidence that people of very high intelligence are more likely to suffer from certain mental illnesses.

I’d expect very-high-IQ chimps to be more likely to suffer from some neurological disorders than typical chimps. This doesn’t tell us that chimps are approaching the ultimate hard limit of intelligence, beyond which you can’t scale without going insane. It tells us that if you take any biological system and try to operate under conditions outside the typical ancestral case, it is more likely to break down. Very-high-IQ humans are not the typical humans that natural selection has selected-for as normal operating conditions.

Yet modern scientific progress is linear. I wrote about this phenomenon in detail in a 2012 essay titled “The Singularity is not coming.” We did not make greater progress in physics over 1950–2000 than over 1900–1950; we did, arguably, about as well. Mathematics is not advancing significantly faster today than it did in 1920. Medical science has been making linear progress on essentially all of its metrics, for decades.

I broadly agree with respect to recent history. I tend to see this as an artifact of human bureaucracies shooting themselves in the foot in a way that I would not expect to apply within a single unified agent.

We may well be exhausting the accessible fruit within a finite supply of physics. That doesn’t mean our current material technology is anywhere near the limits of possible material technology, which at the least includes anything that any biological or hybrid biological system can rapidly manufacture.

As scientific knowledge expands, the time and effort that have to be invested in education and training grows, and the field of inquiry of individual researchers gets increasingly narrow.

Our brains don’t scale to hold it all, and every time a new human is born you have to start over from scratch instead of copying and pasting the knowledge. It does not seem to me like a slam-dunk to generalize from the squishy little brains yelling at each other to infer the scaling laws of arbitrary cognitive computing fabrics.

Intelligence is situational — there is no such thing as general intelligence. Your brain is one piece in a broader system which includes your body, your environment, other humans, and culture as a whole.

True of chimps; didn’t stop humans from being much smarter than chimps.

No system exists in a vacuum; any individual intelligence will always be both defined and limited by the context of its existence, by its environment.

True of mice; didn’t stop humans from being much smarter than mice.

Part of the argument above was, as I would perhaps unfairly summarize it, “There is no sense in which a human is absolutely smarter than an octopus.” Okay, but pragmatically speaking, we have nuclear weapons and octopi don’t. A similar pragmatic capability gap between humans and unaligned AGIs seems like a reasonable thing to be concerned about. If you don’t want to call it an intelligence gap, call it whatever you like.

At the moment, our environment, not our brains, is acting as the bottleneck on our intelligence.

I don’t see what observation about our present world licenses the conclusion that speeding up brains tenfold would produce no change in the rate of technological advancement.

Human intelligence is largely externalized, contained not in our brain but in our civilization. We are our tools — our brains are modules in a cognitive system much larger than ourselves.

This fact is supposed to imply slower progress by an AGI that has a continuous, high-bandwidth interaction with its own onboard cognitive tools?

A system that has already been self-improving for a very long time.

True if we redefine “self-improving” as “any positive feedback loop whatsoever”. A nuclear fission weapon is also a positive feedback loop in neutrons triggering the release of more neutrons. The elements of this system interact on a much faster timescale than human neurons fire, and thus the overall process goes pretty fast on our own subjective timescale. I don’t recommend standing next to one when it goes off.

Recursively self-improving systems, because of contingent bottlenecks, diminishing returns, and counter-reactions arising from the broader context in which they exist, cannot achieve exponential progress in practice. Empirically, they tend to display linear or sigmoidal improvement.

Falsified by a graph of world GDP on almost any timescale.

In particular, this is the case for scientific progress; science is possibly the closest thing to a recursively self-improving AI that we can observe.

I think we’re mostly just doing science wrong, but that would be a much longer discussion.

Fits-on-a-T-Shirt rejoinders would include “Why think we’re at the upper bound of being-good-at-science any more than chimps were?”

Recursive intelligence expansion is already happening, at the level of our civilization. It will keep happening in the age of AI, and it will proceed at a roughly linear pace.

If this were to be true, I don’t think it would be established by the arguments given.

Robin Hanson and I went back and forth on this at length in the “AI Foom debate.” I would expect that even Robin Hanson, who broadly took the opposite side from me in that debate, would cough loudly at the idea that progress in all systems is limited to a roughly linear rate.

For more reading I recommend my own semitechnical essay on what our current observations can tell us about the scaling of cognitive systems with increasing resources and increasing optimization, “Intelligence Explosion Microeconomics.”
