Thomas Bolander on self-reference and agent introspection


Thomas Bolander, Ph.D., is an associate professor at DTU Compute, Technical University of Denmark.

He does research in logic and artificial intelligence, with a primary focus on the use of logic to model human-like planning, reasoning and problem solving.

Of special interest is the modelling of social phenomena and social intelligence with the aim of creating computer systems that can interact intelligently with humans and other computer systems.

Luke Muehlhauser: Bolander (2003) and some of your subsequent work study paradoxes of self-reference in the context of logical/computational agents, as does e.g. Weaver (2013). Do you think your work on these paradoxes will have practical import for AI researchers who are designing computational agents, or are you merely using the agent framework to explore the more philosophical aspects of self-reference?


Thomas Bolander: First of all, let me explain why I even started to study self-reference in the context of logical/computational agents. In my PhD I was studying how to formalise agent introspection, that is, agents representing and reasoning about their own mental states. One of the main motivations for this is that agents need introspection in order to understand their own role in social contexts, e.g. to make sure not to “be in the way of others” and to reason about how one can help other agents to achieve their goals. Another motivation is that introspection is essential in learning: realising the shortcomings of one’s own knowledge and routines is required for a deliberate, goal-directed approach to gaining new knowledge and improving one’s problem-solving skills.

Introspection can be into one’s own knowledge, beliefs, intentions, plans, etc. In my PhD I focused on knowledge. Representing a propositional attitude such as knowledge in a logical formalism can be done in two distinct ways, either by a predicate or by a modal operator. In most logical settings, the predicate solution is more expressive than the modal operator solution, but with high expressivity sometimes also comes trouble, like paradoxical sentences leading to inconsistency. When formalising knowledge as a predicate in a first-order predicate logic, assuming a fairly reasonable, though idealised, set of properties (axioms) of knowledge leads to trivialising inconsistency (this is Montague’s theorem, see e.g. my entry on Self-reference in the Stanford Encyclopedia of Philosophy). The inconsistency arises because the logic becomes sufficiently expressive to express paradoxical self-referential sentences concerning knowledge like “this sentence is not known”. To remedy this problem, there are several possible routes to take: 1) move to a non-classical logic, e.g. a paraconsistent one; 2) restrict the set of axioms of knowledge or their applicability; 3) choose a modal operator approach instead.
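
To see concretely how this kind of inconsistency arises, here is a compressed derivation of the knower paradox, one standard way of making the argument precise; the exact axioms and their formulation vary between presentations, so this should be read as an illustrative sketch rather than as a statement of Montague’s theorem itself.

```latex
% Knowledge as a predicate K on (Gödel codes of) sentences, with two idealised principles:
%   (T)    K(\ulcorner \varphi \urcorner) \rightarrow \varphi                          (factivity)
%   (Nec)  if \vdash \varphi, then \vdash K(\ulcorner \varphi \urcorner)               (necessitation)
% Diagonalisation yields a sentence \lambda ("this sentence is not known") with
%   \vdash \lambda \leftrightarrow \neg K(\ulcorner \lambda \urcorner)
\begin{enumerate}
  \item Assume $K(\ulcorner \lambda \urcorner)$. By (T), $\lambda$; by the fixed point,
        $\neg K(\ulcorner \lambda \urcorner)$, contradicting the assumption.
  \item Hence $\vdash \neg K(\ulcorner \lambda \urcorner)$, and by the fixed point $\vdash \lambda$.
  \item Applying (Nec) to $\vdash \lambda$ gives $\vdash K(\ulcorner \lambda \urcorner)$,
        contradicting step 2, so the theory is inconsistent.
\end{enumerate}
```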

In my PhD, I primarily considered option 2. Today my preferred option is 3, using modal logic instead of first-order predicate logic. Option 3 currently seems to be the preferred option in both the logic and AI communities, with massive amounts of work on epistemic logic, temporal epistemic logic and dynamic epistemic logic based on modal logic. So why do people choose the modal logic approach? Well, one reason is certainly to avoid the problems of self-reference and inconsistency just mentioned. Another is that the modal logic approach often has lower computational complexity, which is essential for practical applications. So in a sense, to answer the question of whether these paradoxes will have practical import for AI researchers, a first answer could be: they already have, in the sense that people have shifted attention to other logics where these problems don’t appear.

This answer is of course not entirely satisfactory, though. The way modal logic approaches solve problems of self-reference is by choosing logics in which self-referential sentences cannot even be expressed. So the next question becomes: Are we going to miss them (the self-referential sentences)? Do they have an important role to play in reasoning that will eventually be required to build computational agents with strong introspective capabilities? This question is much harder to answer. The modal approach implies that knowledge becomes explicitly stratified. On the modal approach one can have first-order knowledge (knowledge about ontic facts), second-order knowledge (knowledge about knowledge about ontic facts), third-order knowledge (knowledge about second-order knowledge), etc., but there is no fixed point allowing knowledge statements to refer to themselves. This stratification seems to be cognitively plausible, however. From studies in cognitive science, in particular the so-called false-belief tasks, it is known that children only start to master second-order knowledge around the age of 4, and only third-order knowledge around the age of 7-8. And even adults seldom reason beyond third- or at most fourth-order knowledge. This seems to speak in favour of some kind of explicit stratification also in the case of humans, and with a fairly fixed limit on the depth of reasoning. This picture fits better with the modal approach, where the stratification is also explicit (modal depth/depth of models). Furthermore, in the modal approach it is more straightforward to mimic limits on the depth of reasoning. Of course, having a fixed limit on one’s depth of reasoning means that there will be certain problems you can’t solve, like the muddy children puzzle with more children than your depth limit. But even within such limitations, the experience from human cognition suggests that it is still possible to do quite advanced social cognition and introspection.
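
As an illustration of this stratified modal picture, and of how a fixed limit on the depth of reasoning can be enforced, here is a minimal sketch in Python; the model, formula encoding and names are my own illustrative assumptions, not taken from any of the systems discussed here.

```python
# Minimal sketch: evaluating stratified knowledge formulas on a finite Kripke
# model, with an explicit bound on modal depth (the agent's reasoning depth).

from dataclasses import dataclass

@dataclass
class KripkeModel:
    worlds: set        # e.g. {"w1", "w2", "w3"}
    relations: dict    # agent -> set of (world, world) accessibility pairs
    valuation: dict    # world -> set of atomic propositions true there

# Formulas as nested tuples: ("atom", p), ("not", f), ("and", f, g), ("K", agent, f)

def modal_depth(formula):
    kind = formula[0]
    if kind == "atom":
        return 0
    if kind == "not":
        return modal_depth(formula[1])
    if kind == "and":
        return max(modal_depth(formula[1]), modal_depth(formula[2]))
    if kind == "K":
        return 1 + modal_depth(formula[2])
    raise ValueError(f"unknown formula: {formula}")

def holds(model, world, formula, depth_limit):
    """True iff formula holds at world; formulas deeper than depth_limit are rejected,
    mimicking an agent with a fixed limit on its depth of higher-order reasoning."""
    if modal_depth(formula) > depth_limit:
        raise ValueError("formula exceeds this agent's reasoning depth")
    kind = formula[0]
    if kind == "atom":
        return formula[1] in model.valuation[world]
    if kind == "not":
        return not holds(model, world, formula[1], depth_limit)
    if kind == "and":
        return (holds(model, world, formula[1], depth_limit)
                and holds(model, world, formula[2], depth_limit))
    if kind == "K":  # K_a phi: phi holds in every world agent a considers possible
        agent, phi = formula[1], formula[2]
        successors = [v for (u, v) in model.relations[agent] if u == world]
        return all(holds(model, v, phi, depth_limit) for v in successors)

# Agent a knows p (first-order knowledge), but does not know whether b knows p
# (second-order knowledge), because a cannot rule out world w2, from which b
# still considers a p-less world (w3) possible.
m = KripkeModel(
    worlds={"w1", "w2", "w3"},
    relations={"a": {("w1", "w1"), ("w1", "w2")},
               "b": {("w1", "w1"), ("w2", "w2"), ("w2", "w3")}},
    valuation={"w1": {"p"}, "w2": {"p"}, "w3": set()},
)
print(holds(m, "w1", ("K", "a", ("atom", "p")), depth_limit=2))               # True
print(holds(m, "w1", ("K", "a", ("K", "b", ("atom", "p"))), depth_limit=2))   # False
```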

So maybe the self-reference problems that are likely to follow from choosing a non-stratified approach are indeed more relevant in other areas, such as the foundations of mathematics (set theory, incompleteness theorems) and the foundations of semantics (theories of truth), than for computational agents. I still find it a bit surprising, though, that I am happier to accept an explicitly stratified approach to propositional attitudes of agents than as a solution to the self-reference problems in set theory: In set theory, I find the non-stratified approach of Zermelo-Fraenkel set theory much more reasonable than the stratified approach of type theory. But even if self-reference is not going to play a big practical role in the development of computational agents, it will probably always lurk in the background and affect people’s choices of formalisms – as well as play an important foundational role in the same way it has in mathematics.


Luke: What are some examples of work that addresses (roughly) the kind of agent introspection you discuss in your thesis, but uses the modal logic approach to the challenge, or some other stratified approach?


Thomas: Most of the examples I give in my thesis involve quantifying over knowledge or beliefs, like “I believe that some of my beliefs are false” or “I believe that Alice knows more about X than I do”. To quantify over knowledge or beliefs in the operator approach (modal logic approach), one needs at least first-order modal logic. When I left the predicate approach for the operator approach after my PhD, I essentially went all the way “down” to propositional modal logic, so I am not up to date with the uses of first- or higher-order modal logic for representations of introspective knowledge. I can give a couple of relevant references, but they are not very recent. One is the situation calculus extended with a modal knowledge operator, as presented in the book The Logic of Knowledge Bases by Levesque & Lakemeyer (MIT Press, 2001). Another is the book Reasoning about Rational Agents by Wooldridge (MIT Press, 2000). Both of these present logics that extend first-order predicate logic with, among other things, a knowledge operator, and they investigate the use of these logics as the basis for computational agents.

My choice of “downgrading” to propositional modal logic was not because I find quantification over beliefs uninteresting or unimportant, but simply because I started to become very interested in the dynamics of knowledge, i.e., how the knowledge of an agent changes when an action is executed or an exogenous event occurs. This was essential for my goal of being able to construct planning agents with higher-order reasoning capabilities. I found a logical framework that suited my needs perfectly, dynamic epistemic logic (DEL). Since DEL is mostly studied in the context of propositional modal logic, this is where I currently find myself working. To my knowledge there is only one paper on first-order dynamic epistemic logic, “Dynamic term-modal logic” by Barteld Kooi.
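
To give a flavour of what the “dynamics of knowledge” means in DEL, here is a minimal sketch of its simplest update, a truthful public announcement, implemented as model restriction; the encoding and names are my own illustrative assumptions, not code from any DEL system mentioned here.

```python
# Minimal sketch: a truthful public announcement in the style of dynamic
# epistemic logic, implemented by keeping only the worlds where the announced
# fact holds and restricting the accessibility relations accordingly.

def public_announcement(worlds, relations, fact):
    """worlds: world -> set of true atoms; relations: agent -> set of (u, v) pairs.
    Returns the updated model after truthfully announcing `fact` (an atom)."""
    surviving = {w: atoms for w, atoms in worlds.items() if fact in atoms}
    updated_relations = {
        agent: {(u, v) for (u, v) in pairs if u in surviving and v in surviving}
        for agent, pairs in relations.items()
    }
    return surviving, updated_relations

# Before the announcement, agent a considers both w1 (p true) and w2 (p false)
# possible, so a does not know p.
worlds = {"w1": {"p"}, "w2": set()}
relations = {"a": {("w1", "w1"), ("w1", "w2"), ("w2", "w1"), ("w2", "w2")}}

worlds2, relations2 = public_announcement(worlds, relations, "p")
# After publicly announcing p, w2 is eliminated; from w1 agent a now only
# considers w1 possible, so a knows p.
print(worlds2)      # {'w1': {'p'}}
print(relations2)   # {'a': {('w1', 'w1')}}
```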


Luke: Are there examples of self-reflective computational agents (of the sort we’ve been discussing) in current industrial or governmental use, or is this work still entirely occurring at a more theoretical stage?


Thomas: To the best of my knowledge, it is still at the theoretical stage. However, its importance is becoming increasingly appreciated also among more applied AI researchers. As mentioned in my previous answer, I am now looking into epistemic planning: planning agents with higher-order reasoning capabilities (agents reasoning about the knowledge of themselves and other agents). The automated planning community is starting to become very interested in this. Bernhard Nebel, who is a very prominent figure in automated planning and robotics, even called epistemic planning the “next big thing”. The reason is that planning formalisms and systems have to become general and expressive enough to deal with robots acting in uncertain environments, and environments containing other agents (multi-agent systems). This calls for epistemic reasoning in such systems.

There is also an increased awareness of the importance of social intelligence in AI systems, if these systems are to efficiently communicate and cooperate with humans. This could be in something like hospital robots (e.g. the TUG robot, which is very well-known for its current lack of social intelligence [Colin Barras, New Scientist, vol. 2738, 2009]), intelligent personal assistants like Siri on iPhone or Google Now on Android, and earthquake rescue robots that have to work in mixed teams with humans.

But integrating the existing formalisms for introspection and social intelligence into e.g. planning systems, and then embedding all of that into a robot, is something which is still in its infancy. The most successful attempt up to now is probably the social robot bartender by Ron Petrick and co-authors [Petrick & Foster, ICAPS, 2013]1. They won a best paper award at ICAPS, the major international planning conference, for this work last summer. Their underlying logical formalism is a restricted version of first-order epistemic logic (the modal operator approach). The logic is restricted in many important ways in order to quickly get something with reasonable computational performance. The full logics of introspection and higher-order reasoning are still too expressive to be computationally feasible, and currently more work is needed to find the best trade-off between expressivity and computational efficiency that will make higher-order reasoning agents more attractive to more applied researchers and to industry.


Luke: In general, what kinds of heuristics do you and your colleagues use to decide how to proceed with theoretical research in a broad problem category like “self-reflective reasoning in computational agents,” which may be 10-30 years from practical application? How do you decide which subproblems to work on, and when to pivot to a different sub-problem or a different approach? And how willing are grantmakers to fund that kind of work, in your experience?


Thomas: Since I’m using logical formalisms to try to capture “self-reflective reasoning”, any development of a new “theory” has a number of standard and clearly defined steps: 1) You define a suitable logical language and its semantics; 2) You define suitable calculi for reasoning within the logic (proof theory); 3) You give examples to show that your formalisms can capture important aspects of self-reflective reasoning; 4) You investigate the computational complexity and relative expressive strength of your logic; 5) You implement the calculi for reasoning, and optimise them if scalability is important for your proof of concept. Of course there are many choices to be made when you define your logical language, but this is usually driven by concrete examples: I want to be able to express these kinds of sentences or these lines of reasoning. Or, conversely: I don’t want my language to be able to express this, because it is known to lead to inconsistency or undecidability.

For instance, in my new work, a source of inspiration has been the false-belief tasks used in cognitive psychology to test the Theory of Mind (ToM) of humans. Theory of Mind is the ability to attribute mental states – beliefs, desires, intentions, etc. – to oneself and others. A goal for my logical formalism is that it can robustly deal with any of the existing false-belief tasks used on humans, and some suitably general closure of that set. This naturally drives research activities into finding logics that are suitable for representing that kind of reasoning in a natural way. It might be 20-50 years down the line before we see e.g. hospital robots with any kind of general Theory of Mind, but this is also exactly because our understanding of Theory of Mind reasoning, and of how to represent it formally and in a computer, is still rather limited. First we need to understand the concept, then we need to find out how to formalise it, then we need to tame the inherent computational complexity, and then finally we are hopefully able to implement it in a practical application. This is the process that e.g. description logics have been going through. Description logics are logics for representing and reasoning about terminological knowledge. Today reasoning calculi for description logics have been commercialised and are being used e.g. on large medical databases, but it was obviously a long process since the early days of description logics (description logics are in fact also a type of modal logic, more precisely a type of hybrid logic).
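
For readers unfamiliar with false-belief tasks, here is a hedged sketch of how the classic Sally-Anne task (a first-order false-belief task) can be encoded as a tiny doxastic model; the encoding is my own illustration, not the formalism used in the work described here.

```python
# Illustrative sketch of the Sally-Anne task: Sally's belief arrow points to
# the state she last observed, not to the actual state, so she holds a false
# belief about where the marble is.

worlds = {
    "actual":       {"marble_in_box"},     # Anne has moved the marble
    "sally_thinks": {"marble_in_basket"},  # the state Sally last observed
}
# Doxastic accessibility: from each world, the worlds the agent considers possible.
belief = {"sally": {("actual", "sally_thinks"), ("sally_thinks", "sally_thinks")}}

def believes(agent, atom, world):
    """B_agent atom: atom holds in every world the agent considers possible from `world`."""
    possible = [v for (u, v) in belief[agent] if u == world]
    return all(atom in worlds[v] for v in possible)

# Passing the task means predicting that Sally will look in the basket:
print(believes("sally", "marble_in_basket", "actual"))   # True: Sally's (false) belief
print("marble_in_basket" in worlds["actual"])             # False: reality differs
```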

There is of course also a risk that what we are doing theoretically about self-reflective reasoning in computational agents will never have any practical impact. This is probably why some people start at the other end, and insist that everything they produce should from the outset be implemented in, say, a working robot. However, if you do this, you face another set of problems. Working on practical implementations of robots before the underlying concepts and mechanisms have been fully understood can often lead to solutions that are somewhat ad hoc and far from general and robust. There are e.g. researchers who have built robots able to pass certain false-belief tasks, but they can only pass first-order false-belief tasks, and it is not clear how the architecture can be generalised.

In terms of grants, there is always more money in applied research than in theoretical/foundational research. I don’t think getting funding for theoretical work on self-reflective reasoning in computational agents is any easier or harder than for other foundational work.


Luke: You give the example of description logics, which I take it were developed several decades before they saw significant commercial application? What other examples of that kind are you aware of, where there was some early theoretical work inspired by fairly abstract motivations (“In the future, we’ll want systems that do X-ish type things, so let’s build some toy models of X and see where things go…”) that wasn’t commercialized until a decade or more later?


Thomas: Description logics go back to the beginning of the 80s. Efficient reasoners with good practical performance on large databases described in expressive description logics started appearing in the beginning of the 00s. Reasoners for these expressive description logics have very bad worst-case complexities (e.g. 2EXP), so it has been quite impressive, and surprising, to see how well they scale to large databases.

With respect to other examples, I think that most of the genuinely new theories in AI start out being applied only to toy examples. Also, almost all of mathematics falls into the category of “theoretical work inspired by fairly abstract motivations”. If you e.g. look at all the mathematics needed to construct a physics engine for a modern 3D computer game, none of it was originally made with computer games in mind, and most of it was developed without any applications in mind at all. In AI, of course, we tend to always have certain applications or sets of applications in mind, and we aim at shorter time intervals from theoretical developments to commercialisation. The example of the physics engine is about creating mathematical models of real-world physical phenomena and then implementing these models in a computer. AI is not that different from this. AI is also about creating mathematical models of some part of our reality, but in this case it is mathematical models of (human) reasoning. There is no doubt that reaching this goal involves, as in the case of physics engines, significant work at all levels, from the most theoretical foundations to the most applied, implementation-oriented stuff.

I think it is dangerous to dismiss any new ideas in AI that at first only apply to toy examples. That might be a necessary first step, as was e.g. the case with description logics. But it is equally dangerous to forget all about scalability, and to think that “this is not my business to worry about”. If you don’t worry at all about scalability and applicability, then you are certainly much more likely to come up with ideas that will never scale or be applied. The researchers I respect the most are those who help make the two ends meet: theoreticians with strong knowledge and intuition about applications, and applied researchers with strong theoretical knowledge and an interest in applying new theory.


Luke: Thanks, Thomas!


  1. Petrick, R. and Foster, M. Planning for Social Interaction in a Robot Bartender Domain. ICAPS 2013. Slides available.