Josef Urban on Machine Learning and Automated Reasoning


Josef Urban is a postdoctoral researcher at the Institute for Computing and Information Sciences of the Radboud University in Nijmegen, the Netherlands. His main interest is development of combined inductive and deductive AI methods over large formal (fully semantically specified) knowledge bases, such as large corpora of formally stated mathematical definitions, theorems and proofs. This involves deductive AI fields such as Automated Theorem Proving (ATP) and inductive AI fields such as Machine Learning. In 2003, he made the largest corpus of formal mathematics – the Mizar library – available to ATP and AI methods, and since then has been developing the first approaches, systems and benchmarks for AI and automated reasoning over such corpora. In 2013, his Machine Learner for Automated Reasoning (MaLARea) solved, by a large margin, the most problems in the mathematically oriented Large Theory division of the CADE Automated Systems Competition (CASC). The system is based on several positive-feedback loops between deducing new proofs and learning how to guide the reasoning process based on successful proofs. Such AI methods are also being deployed as systems that assist interactive theorem proving and formal verification in systems like Mizar, Isabelle and HOL Light.

He received his PhD in Computer Science from the Charles University in Prague in 2004, and an MSc in Mathematics at the same university in 1998. In 2003, he co-founded the Prague Automated Reasoning Group, and from 2006 to 2008 he was a Marie Curie fellow at the University of Miami.

Luke Muehlhauser: In Urban & Vyskocil (2013) and other articles, you write about the state of the art in both the formalization of mathematics and automated theorem proving. Before I ask about machine learning in this context, could you quickly sum up for us what we can and can’t do at this stage, when it comes to automated theorem proving?


Josef Urban: The best-known result is the proof of the Robbins conjecture found automatically by Bill McCune’s system EQP in 1996. Automated theorem proving (ATP) can be used today for solving some open equational algebraic problems, for example in the quasigroup and loop theory. Proofs of such open problems may be thousands of inferences long, and quite different from the proofs that mathematicians produce. Solving such problems with ATP is often not a “push-button” matter. It may require specific problem reformulation (usually involving only a few axioms) suitable for ATP, exploration of suitable parameters for the ATP search, and sometimes also guiding the final proof attempt by lemmas (“hints”) that were useful for solving simpler related problems.

ATP is today also used for proving small lemmas in large formal libraries developed with interactive theorem provers (ITPs) such as Mizar, Isabelle and HOL Light. This is called “large-theory” ATP. In the past ten years, we have found that even with the large mathematical libraries (thousands of definitions and theorems) available in these ITPs, it is often possible to automatically select the relevant previous knowledge from such libraries and find proofs of small lemmas automatically without any manual pre-processing. Such proofs are usually much shorter than the long proofs of the equational algebraic problems, but even that is already a great help to the people who use ITPs to formally prove more advanced theorems.

An illustrative example is the Flyspeck (“Formal Proof of Kepler”) project led by Tom Hales, who proved the Kepler conjecture in 1998. The proof is so involved that the only way to ensure its correctness is to formally verify it with an ITP. Tom Hales has recently written a 300-page book about the proof, with the remarkable property that most of the concepts, lemmas and proofs have a computer-understandable “Flyspeck” counterpart that is stated and verified in HOL Light. Hales estimates that this has taken about 20 man-years. Recently, we have shown that the proofs of about 47% of the 14,000 small Flyspeck lemmas can be found automatically by the large-theory ATP methods. This means that the only mental effort needed is to formally state those lemmas in HOL Light. Again, the caveat is that such lemmas have to be easily provable from other lemmas that already exist in the library. On the other hand, it means that once we have such a large corpus of formally stated everyday mathematics, the corpus can be used together with the large-theory ATP methods to automatically solve many simple formally stated mathematical problems.

Another interesting recent development is ATP methods that efficiently combine “reasoning” and “computing”. This has been a long-time challenge for ATP systems, particularly when used for software and hardware verification. Recent SMT (Satisfiability Modulo Theories) systems such as Z3 integrate many decision procedures for linear and nonlinear arithmetic, bit vectors, and so on. I am not an expert in this field, but my impression is that the growth of such systems is making complete formal verification of complex hardware and software systems more and more realistic.


Luke: As you see it, what are the most important factors limiting the success and scope of ATP algorithms?


Josef: (1) Their low reasoning power, particularly in large and advanced theories, and (2) lack of computer understanding of current human-level (math) knowledge.

The main cause of (1) is that so far the general (“universal”) reasoning algorithms often use too much brute force, without much smart guidance, specific methods for specific subproblems, and methods for self-improvement. Even on mathematical problems that are considered quite easy, such brute-force algorithms then often explode. (2) is a major obstacle because even if the strength of ATP algorithms could be already useful for simple query-answering applications in general mathematics, the systems still do not understand problems posed using the human-level mathematical language and concepts. (1) and (2) are connected: human math prose can be arbitrarily high-level, requiring the discovery of long reasoning chains just for parsing and filling the justification gaps.

One path to improving (1) is through better (and preferably automated) analysis of how mathematicians think. In other words, through mining and understanding what they know, how they use and produce it, and how they improve their knowledge and reasoning methods. If this is right, then it should be very useful to get large corpora of mathematical knowledge into (possibly “annotated”) formats that can be at least correctly parsed, and then further analyzed and possibly used for building all kinds of self-improving ATP algorithms. To some extent this has already started in the past ten years, even though manual (“theory-driven”) engineering of ATP algorithms still largely prevails. Unfortunately, exactly because of (2), i.e., the gap between human-level and computer-level understanding and reasoning, such corpora are today still rare and expensive to produce. Needless to say, such doubly annotated human/computer math corpora should also be (together with decent ATP) crucial for “bootstrapping” the solution of (2), i.e., for learning better and better computer understanding of human-level mathematical texts. Thanks to the fact that mathematics is the only domain with fully formal semantics, research on such corpora might also eventually give us better semantic understanding of natural language in other sciences, and stronger AI.

In software and hardware verification, (1) is probably the main problem, even though, for example, semantic annotation of large code bases is also a major effort. Solving (1) is still hard, but it seems that “code” is usually not as diverse as “math”, and that it is possible to get quite far with a smaller number of manually programmed techniques.


Luke: How much of human mathematical knowledge has been formalized such that automated theorem provers can use it? At what pace is that base of formalized mathematics growing?


Josef: One metric that we have is the list of 100 theorems created in 1999 and tracked for formal systems by Freek Wiedijk. Currently, 88% of these 100 theorems are proved in at least one proof assistant, and 87% in HOL Light alone. Much of the undergraduate curriculum is probably already largely covered: the basics of real/complex calculus, measure theory, algebra and linear algebra, general topology, set theory, category theory, combinatorics and graph theory, logic, etc. But I think the PhD-level coverage is still far away.

There are some advanced mathematical formalizations like the recent formal proof of the Feit-Thompson theorem in Coq, the already mentioned Flyspeck project done in HOL Light, the formalization of more than a half of the Compendium of Continuous Lattices in Mizar, and many smaller interesting projects. Some overview pages of the theorems and theories formalized in these systems are MML Query (Mizar), the Archive of Formal Proofs (Isabelle), the Coq contribs, and the 100-theorems project in HOL Light.

The rate of growth of formal math is not very high, but also the number of authors is quite low. My very rough estimate is about 10k-20k top-level lemmas per year with some 100-300 more important theorems among them. But there is quite a lot of repetition and duplication between various systems, their various libraries, and sometimes even inside one library. People are trying various approaches to formalization, and sometimes prefer to design their own formalization from scratch rather than trying to re-use the work of somebody else. This is quite similar to code libraries. Making the libraries as re-usable as possible is a nontrivial effort.

Strictly speaking, only Mizar, Isabelle and HOL Light are currently accessible to large-theory ATP systems that really try to use the whole libraries at once to prove new conjectures. Coq’s logic is more different from the logics used by the strongest ATP systems and there is so far no sufficiently complete large-theory link between them. But there is also a lot of smaller-scale programmable proof automation inside Coq (and also other interactive provers) already.


Luke: The prospect of having large databases of formalized mathematical knowledge naturally leads to the question: “Might we be able to use machine learning algorithms for improved automated theorem proving?” What is the current state of machine learning in the context of automated theorem proving?


Josef: Most of the machine learning applications are today in the context of large-theory ATP. The task that has the most developed learning methods is selection of the most relevant theorems and definitions from a large library when proving a new conjecture. This is important for two reasons: (i) ATPs can today often get quite far when given the right axioms that are sufficient, not very redundant, and reasonably close to the conjecture, and (ii) adding irrelevant axioms to the ATP search space often quickly diminishes the chances of finding the proof within reasonable time. This selection problem became known as “premise selection”, and a number of learning and non-learning algorithms have been tried for it already and combined in various ways. The oldest one, tried since 2003, is naive Bayes, but more recently we have also tried kernel methods, various versions of distance-weighted k-nearest neighbor, random forests, and some basic ensemble methods.
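To make the premise-selection task more concrete, here is a minimal sketch of distance-weighted k-nearest-neighbor premise selection over symbol features, one of the method families mentioned above. It is only an illustration under simplifying assumptions: the feature sets, lemma names and toy data are hypothetical, and this is not the code of MaLARea or any other real system.

```python
# A minimal, hypothetical sketch of distance-weighted k-NN premise selection.
# Each fact is characterized by the set of symbols occurring in it; for already
# proved facts we also know which premises their proofs used. Toy data only.
from collections import Counter

def knn_premise_selection(conjecture_features, solved, k=3, top_n=5):
    """solved: list of (feature_set, premises_used_in_proof) pairs."""
    def jaccard(a, b):
        return len(a & b) / len(a | b) if a | b else 0.0

    # The k previously proved facts most similar to the new conjecture.
    neighbours = sorted(solved,
                        key=lambda s: jaccard(conjecture_features, s[0]),
                        reverse=True)[:k]

    # Each neighbour votes for the premises its proof used, weighted by similarity.
    votes = Counter()
    for features, premises in neighbours:
        weight = jaccard(conjecture_features, features)
        for premise in premises:
            votes[premise] += weight
    return [premise for premise, _ in votes.most_common(top_n)]

# Toy usage: a new conjecture mentioning plus, succ, zero and nat.
solved_facts = [
    ({"plus", "zero", "nat"}, ["add_0_r", "nat_ind"]),
    ({"plus", "succ", "nat"}, ["add_succ_r", "nat_ind"]),
    ({"mult", "one", "nat"},  ["mul_1_r"]),
]
print(knn_premise_selection({"plus", "succ", "zero", "nat"}, solved_facts))
```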

A major issue is the choice of good characterization of mathematical objects such as formulas and proofs for the learning algorithms. In a large diverse library, just using the set of symbols as formula features is already useful. This has been extended by methods ranging from purely syntactic n-gram analogs (terms and subterms and their parts, normalized in various ways), to more semantic but still quite syntactic features such as addition of type information and type hierarchies, to strongly semantic features such as the validity of formulas in a large pool of diverse models. Feature preprocessing methods such as TF-IDF and latent semantic analysis help a lot, and quite likely more can be done still.
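As a small illustration of the feature-weighting point (the feature strings below are made up, and real systems use much richer term-based features), formulas can be treated as bags of symbol/term features and the rarer features up-weighted with TF-IDF, so that ubiquitous symbols such as equality contribute little:

```python
# Hypothetical toy corpus: each named fact is reduced to a bag of features.
import math
from collections import Counter

corpus = {
    "add_comm": ["plus", "nat", "eq", "plus(X,Y)", "plus(Y,X)"],
    "mul_comm": ["mult", "nat", "eq", "mult(X,Y)", "mult(Y,X)"],
    "len_app":  ["append", "length", "list", "eq", "plus"],
}

def tfidf_weights(corpus):
    n = len(corpus)
    # Document frequency: in how many facts each feature occurs.
    df = Counter(f for features in corpus.values() for f in set(features))
    return {
        name: {f: tf * math.log(n / df[f]) for f, tf in Counter(features).items()}
        for name, features in corpus.items()
    }

for name, weights in tfidf_weights(corpus).items():
    # "eq" occurs everywhere and gets weight 0; rarer features dominate.
    print(name, {f: round(w, 2) for f, w in weights.items()})
```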

Another major issue is the proofs used for training. Some proofs are easy for humans but hard for today’s ATPs, so it is usually better to find ATP proofs and learn from them whenever possible, rather than trying to learn from proofs that are formally correct but infeasible for ATPs. This gives rise to exploratory AI systems that alternate deductive investigation with learning from the proofs found. A positive feedback loop emerges: the more ATP proofs (and disproofs) we find, the better the next learning iteration is, and the more ATP proofs we then typically find in the next proof-finding pass advised by the learning. In a sense, one cannot easily do “just inductive AI (learning)” or “just deductive AI (ATP)”: the strongest methods for large-theory ATP simply do theorem proving by learning from proofs. This should not be surprising, since nobody does science by pure induction or pure deduction alone. But to me it seems really useful to have today at least one AI domain where such combinations can be experimented with technically and pragmatically.
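The feedback loop just described can be sketched schematically as below. This is only a toy restatement of the idea, not the interface of any real system: run_atp, train_ranker and rank_premises are assumed black boxes.

```python
# Schematic proving/learning loop: alternate between attempting ATP proofs with
# learned premise rankings and re-training the ranker on the proofs found so far.

def proving_learning_loop(conjectures, library, run_atp, train_ranker,
                          rank_premises, iterations=5, slice_size=64):
    proofs = {}          # conjecture -> premises used in the ATP proof found for it
    model = None         # no learned guidance is available in the first pass
    for _ in range(iterations):
        newly_solved = 0
        for conjecture in conjectures:
            if conjecture in proofs:
                continue
            # Premise selection: learned ranking if a model exists, else a default slice.
            premises = (rank_premises(model, conjecture, library)[:slice_size]
                        if model else library[:slice_size])
            used = run_atp(conjecture, premises)   # premises used in a proof, or None
            if used is not None:
                proofs[conjecture] = used
                newly_solved += 1
        if newly_solved == 0:
            break                                  # the loop has saturated
        model = train_ranker(proofs)               # learn from all proofs found so far
    return proofs
```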

A somewhat similar idea applies when we stop treating ATPs as black boxes and start to look inside them. Instead of evolving an inductive/deductive system that is good at selecting relevant premises for conjectures (MaLARea is an example), we may try to evolve a system that has a number of specialized problem-solving techniques (ATP strategies) for common classes of problems, and some intuition about which techniques to use for which problem classes. An example is the Blind Strategymaker, which over a large library of diverse problems co-evolves a population of ATP strategies (treated as sets of search-directing parameters) with a population of solvable training problems, with the ultimate goal of having a reasonably small set of strategies that together solve as many of the problems as possible. The initial expert strategies (“predators”) are mutated and quickly evaluated on their easy prey (the easy training problems they specialize in), and if the mutations show promise (faster solutions) on such a training subset, they undergo a more expensive (more time allowed) evaluation on a much wider set of problems, possibly solving some previously unsolvable ones and making them into further training targets. Again, just random mutating on the so-far-unsolved problems is quite inefficient, so one really needs the intuition about which training data the mutations should be grown on. This again yields a fast “inductive” training phase, followed (if successful) by a slower “hard thinking” phase, in which the newly trained strategies attempt to solve some more problems, making them into further training data. The intuition and the deductive capability co-evolve again; doing just one of them does not work so well.
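A very rough sketch of that co-evolution loop follows, under the simplifying assumptions that a strategy is just a dictionary of numeric search parameters and that a black-box solves(strategy, problem, time_budget) test is available. It only illustrates the cheap-screening-then-expensive-evaluation shape of the idea and is not the Blind Strategymaker’s actual algorithm.

```python
import random

def mutate(strategy):
    # Toy mutation: perturb one numeric search-directing parameter.
    s = dict(strategy)
    key = random.choice(list(s))
    s[key] = max(1, s[key] + random.choice([-1, 1]))
    return s

def evolve_strategies(strategies, problems, solves, cheap_budget, full_budget,
                      generations=10):
    """solves(strategy, problem, budget) -> True if the problem is solved in time."""
    for _ in range(generations):
        for strategy in list(strategies):
            # "Easy prey": training problems this strategy already solves quickly.
            prey = [p for p in problems if solves(strategy, p, cheap_budget)]
            mutant = mutate(strategy)
            # Cheap screening: the mutant must keep (most of) its parent's prey.
            kept = sum(1 for p in prey if solves(mutant, p, cheap_budget))
            if prey and kept >= 0.9 * len(prey):
                # Expensive phase: evaluate on the whole set, hoping to solve
                # previously unsolved problems and turn them into training targets.
                solved_now = [p for p in problems if solves(mutant, p, full_budget)]
                if len(solved_now) > len(prey):
                    strategies.append(mutant)
    return strategies
```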

And going again from black-box to white-box, one can also ask what the ATP strategies and deductive techniques really are, and how to steer/program them more than just by finding good combinations of predefined search parameters for problem classes. There are several learning methods that go beyond such parameter settings; the most prominent and successful is probably the “hints” method by Bob Veroff (done a bit differently also by Stephan Schulz), which steers the proof search towards lemmas (hints) that were found useful in proofs of related problems. I believe that learning such internal guidance is still quite an unexplored territory that we should be working on. One might not only direct the search towards previously useful lemmas, but also look at methods that suggest completely new lemmas and concepts, based on analogies with other problems, deeper knowledge about the problem state, etc. Depending on one’s idea about what is and what is not machine learning, one could also relate here to Douglas Lenat’s seminal work on concept and lemma-finding systems such as the Automated Mathematician and Eurisko.
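In the spirit of such hint-style internal guidance (a toy caricature, not Veroff’s or Schulz’s actual implementations), one can picture a given-clause loop in which clauses matching a hint lemma from a related proof receive a much lower weight and are therefore processed earlier; the string-based clause representation and the infer function below are assumed placeholders.

```python
import heapq
import itertools

def given_clause_loop(axioms, negated_goal, infer, hints, max_steps=10000):
    """Toy saturation loop: clauses are opaque strings, infer(clause, processed)
    returns new clauses, and "FALSE" stands for the derived empty clause."""
    tie = itertools.count()       # tie-breaker so the heap never compares clauses

    def weight(clause):
        base = len(clause)                              # crude symbol-count heuristic
        return base * 0.1 if clause in hints else base  # strongly prefer hint matches

    processed = []
    unprocessed = [(weight(c), next(tie), c) for c in axioms + [negated_goal]]
    heapq.heapify(unprocessed)
    for _ in range(max_steps):
        if not unprocessed:
            return None                         # saturated without finding a proof
        _, _, given = heapq.heappop(unprocessed)
        if given == "FALSE":
            return processed                    # empty clause derived: proof found
        for new_clause in infer(given, processed):
            heapq.heappush(unprocessed, (weight(new_clause), next(tie), new_clause))
        processed.append(given)
    return None
```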


Luke: What trends in this area do you expect in the next 20 years? Do you expect that in 20 years a much greater share of research mathematicians will use ATPs, at least in a highly interactive way (a la Lenat with Eurisko)? Do you expect learning (rather than non-learning) ATPs to predominate in the future? Do you expect things to advance so far that the jobs of some research mathematicians will be effectively replaced, just as some paralegal jobs have (supposedly) been replaced by better software for searching through legal databases?


Josef: First, AI predictions usually happen, but often take longer than expected. 20 years is not really much in these low-profile fields: Flyspeck itself has taken about 20 person-years, and Mizar took 40 years from a visionary talk by Andrzej Trybulec in 1973 to the current 1200 articles in the Mizar library. There are not many large projects in these areas and often they are pushed only by the extreme determination of single researchers.

One AI breakthrough that I believe is quite reachable within 20 years is deep automated semantic understanding (formalization, “machine reading”) of most of LaTeX-written mathematical textbooks. This has for a long time been blocked by three factors: (i) lack of annotated formal/informal corpora to train such semantic parsing on, (ii) lack of a sufficiently large repository of background mathematical knowledge needed for “obvious-knowledge gap-filling”, and (iii) lack of sufficiently strong large-theory ATP that could fill the reasoning gaps using the large repository of background knowledge. This has changed a lot recently, and unless we are really lazy I believe it will mostly happen. It might not be immediately perfect and on the level of manual formalization, but it will gradually improve both by using more data and by better algorithms, a bit similarly to what has been happening with Google Translate between languages like English, Spanish and German. Once it starts happening automatically, it might snowball faster and faster, again thanks to the positive feedback loop between more “reasoning data” becoming accessible and the strength of the data-driven large-theory ATP and AI methods trained on such data.

I don’t think mathematicians have any reason to “fear AI” (at least at the moment), quite the contrary: more and more of them are wondering about the current low level of computer understanding and assistance of math. Mathematicians like von Neumann, Turing and Goedel are the reason that we have computers and AI as a discipline in the first place. One of their motivations was Hilbert’s program and the Leibniz-style dreams of machines assisting mathematics and scientific reasoning. It is even a bit of a shame for computer science and AI that in the hundred years since Turing’s birth they have done so little for their “parent” science and that the progress is so slow. At this moment, anybody who has tried writing formal mathematics prays for better automation. Without it, formalization will remain a slow, costly and marginal effort. So it is more the opposite: in order for deep semantic treatment of math and sciences to take off, automation has to improve. But once such deep semantic treatment of sciences starts to develop, it also means easier access for mathematicians. Law is just one example; I really like John McCarthy’s futuristic note on formal proof becoming a strong criterion for all kinds of policy making. Complex mathematical proofs that require formal checking are just the tip of the iceberg: today we are building and relying on more and more complex machinery, hardware, software, legal, political and economical systems, and they are all buggy and easy to hack and break. Fixing all this and allowing us to correctly implement much more complex designs is a great future opportunity and market for mathematically thinking people equipped with automation tools that allow them to make fast progress without sacrificing correctness.

I do expect that ATPs will in general have better pattern-detection capabilities in 20 years than they have now, and that they will be able to better accumulate, process and efficiently use previous knowledge and better combine brute-force search with all kinds of guidance on various levels. Specifically in more advanced mathematics, high-level heuristic human-inspired proving/thinking methods might start to be more developed. One way to try to get them automatically is again through basic computer understanding of LaTeX-written mathematical texts, and learning what high-level concepts like “by analogy” and “using diagonalization” exactly semantically mean in various contexts. This is also related to the ability to reformulate problems and map them to a setting (for current ATPs, the best is purely equational) where the theorem proving becomes easier. And another related work that needs to be done is “explaining back” the long ATP proofs using an understandable mathematical presentation.

The word “learning” has itself many meanings. A recent breakthrough in SAT solving (propositional ATP) is called “conflict-driven clause learning” (CDCL), but it is just a correct (resolution) inference of a new lemma implied by the axioms. The generalization aspect of “learning” is quite limited there. I don’t really know how good the ATPs will be in twenty years; improvements may come from various sides, not just from “learning”. The main ATP calculi are so far so simple and brute-force that it is also possible that somebody will come up with a completely new approach which will considerably improve the strength of ATP systems. For example, instantiation-based ATPs like iProver relying on the CDCL-extended SAT solvers have improved quite a lot in recent years.

Another trend I expect is more “manual” work on all kinds of targeted decision procedures (dealing with numbers, lists, bitvectors, etc.), particularly when useful for software and hardware verification, and their integration with ATP calculi. I also hope that we will start to be able to detect such targeted algorithms automatically from the successful reasoning patterns done by ATPs. In some sense, we need better automated compilation of “search” into “computing” (we could call it “learning of decision procedures”). There is no rigid distinction between these two: in ATP-inspired programming languages like Prolog, the two things largely coincide. The better we know how to direct the proof search in certain (e.g. numerical) problem classes, the more the proof search turns into efficient computation.

Finally, I might venture one concrete and falsifiable prediction about the strength of large-theory ATP in 20 years: on the Flyspeck resp. Mizar libraries, learning-assisted large-theory ATP can today prove about 47% resp. 40% of the top-level lemmas. In 20 years, using the same hardware and resources (i.e., not relying on Moore’s Law), we will be able to prove automatically 80% of both (measured on the same versions of the libraries).


Luke: Thanks, Josef!