Mixing Binary and Continuous Connection Schemes for Knowledge Access

N.E. Sharkey, R.F.E. Sutcliffe and W.R. Wobcke*, Centre for Cognitive Science, University of Essex, UNITED KINGDOM.

ABSTRACT

We present BACAS, a Binary and Continuous Activation System, which is a parallel-process content-addressable memory model. BACAS is designed for the representation and retrieval of 'knowledge of the world' for automatic natural language understanding. In its present form, BACAS is a two-layered system with 10 K-structures (like scripts) in the binary-output macro-layer, represented by 46 Threshold Knowledge Units, and 184 processing elements (like action events) in the continuous-activation micro-layer. We discuss the problems of combining two types of connection system and describe a simulation in which the system moves from one pattern to the next in response to external input. A new tool for connection systems, the pulse-out, is introduced. This is a device which replaces the Boltzmann Machine in creating energy leaps. The pulse-out also has the advantage, in the current system, of setting the state of the system a short Hamming distance from an appropriate pattern.

I INTRODUCTION

One of the central problems for any automatic natural language understanding device is that humans use language elliptically. People try to leave out much of the information they believe their readers/listeners share with them (Grice, 1975). Thus texts written by humans for humans are usually only a partial pattern of the information they are intended to convey. It must therefore be one of the first jobs of an automatic natural language understander to fill in the information missing from the input pattern. This technique has been employed in many schemata-driven story understanders (e.g. Cullingford's (1978) SAM). We present a content-addressable memory retrieval mechanism which is designed to act as the core of a natural language understander.
This differs from schema-appliers in that we use a distributed knowledge representation in a parallel process network. The model we present here is a considerably extended and modified version of the Knowledge Access Network (KAN) (Sharkey, in press; Sharkey & Sharkey, in press). KAN combines the script concept (Schank & Abelson, 1977) with a spreading-activation parallel associative network account of memory. The development of this model has represented an attempt to capture the notion of stereotypical information without relying on 'packages' of knowledge. KAN treats memory as continuous while behaving 'as if' it contained script-like entities. Furthermore, the KAN system differs from Script Applier Mechanisms (e.g. Cullingford, 1978) in the way it accesses knowledge. KAN does not need to search for a matching precondition to gain entry to its knowledge. Rather, the retrieval of KAN's knowledge is integrated with its understanding process. Also, unlike SAM, KAN can have more than one knowledge pattern active at the same time. The new model, BACAS (Binary And Continuous Activation System), is more distributed than KAN and utilizes both continuous and binary activation schemes. Like KAN, BACAS is architecturally very simple. It has two distinct domains of processing: a micro-layer, consisting of many processing elements which represent the system's conceptual knowledge, and a macro-layer, consisting of a number of Threshold Knowledge Units (TKUs) which act as the only connecting stations between the micro-elements (see Figure 1). (A TKU, as its name suggests, is a variant of the Threshold Logic Unit (Minsky and Papert, 1969) and follows in the McCulloch & Pitts, 1943, tradition.)

* We would like to thank the Economic and Social Research Council (Grant No. C 08 25 0015), the Science and Engineering Research Council and the Commonwealth Scholarship Commission in the United Kingdom for supporting this research.
A collection of micro-elements linked to a TKU is called a k-pattern. Each processing element may belong to several k-patterns. The contribution of an element to a given k-pattern is represented by the weight of its connection to the TKU. This weight also tells us about the system's 'belief' about the likely relevance of an element to a TKU. A TKU has both an activation level (continuous) and an output (binary). The continuous activation level is the communication from the macro-layer to the micro-layer. Evidence as to the relevance of a particular micro-element is collected by the TKUs and, once a TKU reaches threshold, its activation is broadcast to all the micro-elements connected to it, thus completing its k-pattern in the micro-layer. When a TKU is 'on' (defined later), its output is 1; otherwise it is 0. These outputs form the communication between TKUs. The macro-layer settles into a particular pattern, based on the evidence of the k-patterns which have been recognized. In KAN, each TKU in itself corresponded to a script (Schank and Abelson, 1977). We had therefore imposed the restriction that only one TKU could be on at a time. However, because of the psychologically observed 'handover period' (Sharkey & Mitchell, 1985), the system was designed to behave 'as if' two K-structures were simultaneously active. That is, in the transition period between successive K-structures, the micro-elements of more than one TKU could be active. For convenience, a simple conceptual device, a one-place F-register, was employed to shunt TKUs on and off. When a TKU was 'on' it was said to be in focus.

From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved.

Figure 1: The k-pattern corresponding to the 'watch-film' scene of the 'going-to-the-movies' K-structure, and a k-pattern corresponding to the 'boarding' scene of the 'catching-a-plane' K-structure. These TKUs are negatively connected by symmetrical weights. Two k-patterns and two TKUs are shown.

In BACAS, like KAN, a TKU in the 'on' state keeps the k-pattern of elements associated with it in an active state until it is switched off. In BACAS, each TKU and its associated k-pattern corresponds to a scene. The collection of TKUs corresponding approximately to a set of scenes occurring together in a script (Schank & Abelson, 1977) is called a K-structure. The macro-layer of BACAS is designed to represent one such K-structure at a time. A TKU may occur in many K-structures: thus K-structures are like MOPs (Schank, 1982). For example, the K-structure for 'going-to-the-movies' now has six k-patterns and corresponding TKUs (choose-film, stand-in-line, buy-ticket, enter-cinema, watch-film, exit-cinema). Our motivation for this change was to give BACAS more flexibility and to create the potential for dynamic modification of memory. In order for a BACAS K-structure to be in focus we now need several TKUs to be 'on' at once. This meant that we could no longer use the F-register. Instead, we have had to make the macro-layer strongly interconnected, that is, connect each TKU to every other TKU. In the most recent simulation we have used 10 K-structures and 46 TKUs. These are based on a set of empirically collected norms (Galambos, 1982). The problems that BACAS is designed to solve are: 1. How to ensure completion of K-structures in the macro-layer such that the appropriate k-patterns in the micro-layer would be completed. 2. How to move from one K-structure to another, that is, how to switch on the appropriate TKUs and switch off the inappropriate ones on the basis of partial information (e.g. cashing a check and then going to the movies). We turn now to examine each of the processing layers. We then give an example of the behavior of BACAS, showing how the above situations are handled, and finally discuss the choice of weights for the macro-layer.
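For concreteness, the two-layer architecture just described might be represented as follows. This is a hypothetical sketch; the class, scene names and weights are illustrative, not taken from the simulation.

```python
from dataclasses import dataclass, field

@dataclass
class TKU:
    """Threshold Knowledge Unit: one scene in the macro-layer.
    Its activation is continuous; its output is binary (1 only when 'on')."""
    name: str
    activation: float = 0.0                       # continuous level in [0, 1]
    state: str = "off"                            # 'off' | 'on' | 'intermediate'
    weights: dict = field(default_factory=dict)   # micro-element -> weight: the k-pattern

    @property
    def output(self) -> int:
        return 1 if self.state == "on" else 0

# A K-structure is the set of TKUs for scenes that occur together.
movies = {name: TKU(name) for name in
          ["choose-film", "stand-in-line", "buy-ticket",
           "enter-cinema", "watch-film", "exit-cinema"]}

# The k-pattern of one TKU: weighted links to micro-elements (action events).
movies["buy-ticket"].weights = {"hand-over-money": 0.8, "receive-ticket": 0.9}

print(movies["buy-ticket"].output)  # 0 (not yet thresholded)
```

Each weight stands for the system's 'belief' in the relevance of that micro-element to the scene, as described above.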
II THE MICRO-LAYER

Processing in the micro-layer of BACAS (and KAN) is driven by an activation mechanism which is a variant of that of McClelland and Rumelhart (1981). Their threshold rule holds invariably for the micro-elements, but a TKU may be in one of three states, and its behavior varies accordingly:

1. A TKU in the 'off' state accumulates activation from the micro-layer but it does not transmit activation to the micro-layer.

2. A TKU in the 'on' state transmits a constant source of activation to the micro-layer but it does not receive activation from the micro-layer. This constant activation is the TKU's saturation value.

3. A TKU is in the 'intermediate' state between the 'on' and 'off' states while it decays from saturation to resting. In the 'intermediate' state, a TKU continues to transmit its decaying activation until it reaches resting level, but it receives no activation from the micro-layer.

The dynamic behavior of a TKU is as follows. When a TKU in an 'off' state reaches threshold after accumulating activation from the micro-layer, it immediately assumes the 'on' state and its activation is set to saturation. It will eventually be set to the intermediate state from the macro-layer (see the next section). If the TKU decays to resting before being switched on from the macro-layer, it assumes the 'off' state again. It is only then that it will again receive activation from the micro-layer. The idea behind 3, the intermediate state, was directly imported from KAN and is an important property of the model. The decaying activation on a TKU allows the 'handover period' mentioned above (Sharkey & Mitchell, 1985) so that there is a smooth transition between K-structures.

We can formally describe the micro-layer as follows. The net input n_j(t) of activation to the j-th element at time t is the weighted sum of the activation passed to it by its neighbors, thus:

    n_j(t) = 0,                  for a TKU which is not 'off';
    n_j(t) = Σ_i a_i(t) w_ij,    otherwise,
where a_i(t) is the activation on the i-th TKU or micro-element and w_ij is the strength of association from i to j. Note that in BACAS, the neighbors of a micro-element are just the TKUs connected to it, and the neighbors of a TKU are just the elements of its k-pattern. It is assumed that each micro-element and TKU saturates at or below a point S (we set S = 1). The effect e_j(t) of activation input to a micro-element or TKU j at time t is defined by:

    e_j(t) = n_j(t)(S - a_j(t))

Activation on an element decays over time at a rate proportional to the amount of activation above resting level. The activation of the j-th element at time t+1 (i.e., one time cycle after t) is given by:

    a_j(t+1) = S,                                 for a TKU which is 'on';
    a_j(t+1) = a_j(t) - θ(a_j(t) - r_j) + e_j(t), otherwise,

where r_j is the resting or base level activation of element j (which is positive), and θ is the constant of decay.

III THE MACRO-LAYER

Communication in the macro-layer is by means of the binary outputs of the TKUs and a pulse-out mechanism. The pattern-completion method in BACAS is similar to that of the asynchronous, deterministic machine of Hopfield (1982). That is, from an 'initial' pattern, the machine will settle in a state which is a local energy minimum, where the energy of a state is defined by:

    E = -(1/2) Σ_{i,j} s_i w_ij s_j - Σ_i μ_i s_i

where w_ij is the weight from the i-th to the j-th TKU and μ_i is the activation on the i-th TKU. Thus the activation of a TKU acts like external input in the formula of Hinton & Sejnowski (1983).* To make a transition in the macro-layer, a unit i is chosen at random, and the change in energy ΔE_i is calculated:

    ΔE_i = Σ_j w_ij s_j + μ_i

The state of the unit, s_i, is then set as follows:

    s_i(t+1) = 'on',             if ΔE_i ≥ 0;
    s_i(t+1) = 'intermediate',   if ΔE_i < 0.

* Note that, because of the μ term, the energy landscape changes while the pattern completion mechanism is in progress, unlike Hinton & Sejnowski's method, where, because the external input is fixed during pattern completion, the energy landscape is also fixed.
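The two update rules above can be sketched directly. The following is our illustrative reading of the equations; the values of θ and r_j are assumptions for the sketch (the paper sets only S = 1).

```python
S = 1.0        # saturation ceiling (the paper's S)
THETA = 0.1    # decay constant theta (assumed value)
R = 0.05       # resting / base-level activation r_j (positive; assumed value)

def micro_step(a_j, n_j, on_tku=False):
    """a_j(t+1): an 'on' TKU is clamped at saturation; any other element
    decays toward rest and adds the input effect e_j = n_j * (S - a_j)."""
    if on_tku:
        return S
    e_j = n_j * (S - a_j)
    return a_j - THETA * (a_j - R) + e_j

def macro_step(i, s, w, mu):
    """One asynchronous macro-layer move for unit i:
    dE_i = sum_j w[i][j] * s[j] + mu[i]; 'on' if dE_i >= 0, else 'intermediate'."""
    dE = sum(w[i][j] * s[j] for j in range(len(s))) + mu[i]
    return "on" if dE >= 0 else "intermediate"

# A resting element receiving net input 0.3 moves toward saturation:
print(round(micro_step(0.05, 0.3), 3))  # 0.335

# Two TKUs linked with positive weight c = 0.2; with unit 0 'on' and a
# little activation mu on unit 1, unit 1 switches on:
print(macro_step(1, s=[1, 0], w=[[0.0, 0.2], [0.2, 0.0]], mu=[0.0, 0.1]))  # on
```

Note that, as the footnote observes, the μ term makes the macro-layer's energy landscape shift during settling, though each individual move is still deterministic.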
As with a TKU reaching threshold after accumulating activation from the micro-layer, a TKU switched on from the macro-layer has its activation set to its saturation value. One problem which confronted us was that a language understander may need to apply a succession of different K-structures as it reads through a text. How do we get the system to move from one K-structure to the next? As a first attempt, we tried using a Boltzmann Machine (Hinton & Sejnowski, 1983). We treated each K-structure as a global energy minimum to be found using an annealing schedule. Following Hinton & Sejnowski, the TKUs which had fired were treated as tied or locked units. However, this still left the problem of getting the system to move from one K-structure to the next. We viewed the target K-structure as a required global minimum and the current K-structure as an unwanted local minimum. The new K-structure could then have been found by initiating a new annealing schedule at the appropriate time. However, to make this work, we would have had to find a way of telling the system when to unlock the units of the current K-structure and when to begin a new annealing schedule. One possibility would have been to run the annealing schedule each time a new TKU fired. But it is logically unnecessary to do this, since not all of the TKUs which fire will signify a new K-structure. Instead, we solved the problem by using a pulse-out mechanism to set up a new 'initial' pattern, close to the desired local minimum. The local minimum can then be found deterministically. The pulse-out works as follows. When a new TKU reaches threshold after accumulating activation from the micro-layer, it transmits a pulse. Any negatively connected 'on' TKUs are pulsed out, so that they will assume the intermediate state.
As a result, when the new TKU belongs to a new K-structure, the bit string representing the state of the system is set a shorter Hamming distance away from the new K-structure than from any of the other K-structures. The pulse-out, in effect, creates a leap to a higher energy state (if the new TKU does not occur in any K-structures with currently active TKUs, the energy of the new state will be 0). By using the pulse-out, we have avoided using locked units and an annealing schedule in conjunction with a Boltzmann Machine and, as a result, our system is deterministic.

IV PROCESS AND COMPLETION

In the BACAS model, when script information is presented sequentially to the system (e.g. 'Sam went into a restaurant. He ordered a meal. He paid the bill.'), it activates a set of processing elements connected to a number of TKUs. For example, the micro-element corresponding to 'he paid the bill' would be linked to TKUs associated with knowledge about restaurants, shops, garages, etc. (see Figure 2). The active elements broadcast activation to their TKUs until the activation on a TKU passes a preset threshold. When this happens the thresholded TKU is switched to the 'on' state. The 'on' TKU then transmits a constant activation source to the active and inactive micro-elements of its associated k-pattern in the micro-layer. Thus the k-pattern most relevant to the current input is completed. This enables BACAS to draw inferences about the likelihood of occurrence of events other than those which occurred. When the activation in a k-pattern 'relaxes', the activations on the micro-elements of a k-pattern are said to represent their 'belief' as to their relevance to the input. While the units in the k-pattern of the TKU that reached threshold are being activated, the macro-layer of BACAS settles into a local energy minimum which is the K-structure that BACAS believes to be the most relevant to the input.
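The pulse-out described in the previous section might look like the following sketch. The class and the weight representation are our own illustrative choices, not the authors'.

```python
class TKU:
    def __init__(self, name, state="off", activation=0.0):
        self.name, self.state, self.activation = name, state, activation

def pulse_out(new_tku, tkus, w):
    """When new_tku reaches threshold it comes on at saturation and pulses
    every negatively connected 'on' TKU into the decaying 'intermediate'
    state, leaving the bit string close to the new K-structure."""
    new_tku.state, new_tku.activation = "on", 1.0
    for t in tkus:
        if t is not new_tku and t.state == "on" \
                and w.get((t.name, new_tku.name), 0) < 0:
            t.state = "intermediate"

# 'restaurant' scenes are on when 'choose-film' fires and pulses them out:
order, pay, choose = TKU("order", "on", 1.0), TKU("pay", "on", 1.0), TKU("choose-film")
w = {("order", "choose-film"): -1, ("pay", "choose-film"): -1}
pulse_out(choose, [order, pay, choose], w)
print(order.state, pay.state, choose.state)  # intermediate intermediate on
```

The pulsed-out TKUs then decay (still transmitting) while the deterministic pattern completion finds the new local minimum.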
The k-patterns of the TKUs in this K-structure are activated, thus completing a pattern of micro-elements which corresponds to a script (Schank & Abelson, 1977). In this example, the 'restaurant' K-structure is completed. When a TKU not in the current K-structure reaches threshold, it pulses out the TKUs that do not share a K-structure with it. Continuing with our example, should 'Sam went to the cinema and looked at the program' occur, the 'choose-film' TKU would fire. The pulse-out sets each of the TKUs in the 'restaurant' K-structure to the intermediate state. The TKUs and the corresponding micro-elements in their k-patterns then decay. Meanwhile, the macro-layer settles into the 'going-to-the-movies' K-structure, and activation begins spreading to the micro-elements of the other k-patterns in this K-structure, for example, 'buy-ticket' and 'enter-cinema'. Evidence for these TKUs is also collected from the micro-layer. So far we have implemented the macro- and micro-layers separately, and the macro/micro interface has also been simulated. It now remains for us to put the two layers together.

V CHOOSING WEIGHTS

In this section, we discuss our procedure for choosing the weights on connections between the TKUs in the macro-layer. For BACAS the weights should ensure, amongst other things, the following two important properties. First, TKUs that never occur together in any K-structure should be negatively connected to guarantee the effective operation of the pulse-out mechanism. These negatively connected TKUs were all connected with a weight of -1. Second, each of the K-structures should be a stable state of the system. To ensure this, we set the connection between each pair of distinct TKUs which occur together in some K-structure to a small positive weight, c, and the weight from a unit to itself is set to 0. In the remainder of this section, we discuss how to choose c so that the K-structures are local minima.
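The derivation that follows arrives at two simple bounds on c, (N-2)c < 1 and (N-1)c < 0.5, where N bounds the number of TKUs per K-structure. As a small sketch of our own, the bounds can be checked mechanically (N = 6 is the paper's maximum K-structure size and c = 0.2 its reported choice):

```python
def k_structures_are_local_minima(c, N):
    """First bound derived below: (N-2)c < 1, for N the largest
    number of TKUs in any K-structure."""
    return 0 < c and (N - 2) * c < 1

def unions_unreachable_from_below(c, N):
    """Second bound derived below: (N-1)c < 0.5, which rules out the
    union of two K-structures being reachable from below."""
    return 0 < c and (N - 1) * c < 0.5

N = 6                                          # the paper's maximum K-structure size
print(k_structures_are_local_minima(0.2, N))   # True: the paper's c = 0.2
print(unions_unreachable_from_below(0.09, N))  # True: a c under both bounds
```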
To ensure that a given K-structure is a local minimum, we simply make its energy value lower than that of all the patterns which are a Hamming distance of 1 away from it. These patterns are its neighbors. That is, two states are neighboring if the bit strings representing the states differ in only one bit, e.g. 1101010 is a neighbor of 0101010. Thus, we take a K-structure of n TKUs (i.e. the n 'on' TKUs that make up the K-structure), and derive constraints on c by considering the change in energy between that K-structure and its neighbors. The change in energy, ΔE_j, created by altering the j-th unit in a pattern, is given by:

    ΔE_j = Σ_i w_ij s_i + μ_j

where each w_ij is either c, -1 or 0, and s_i is the output from the i-th TKU. We first discuss the case where the j-th unit of a pattern changes from 1 to 0 (downward neighbor), and then examine its opposite, where the j-th unit changes from 0 to 1 (upward neighbor). First, when a TKU j changes from 1 to 0 (i.e. the state changes from a K-structure with TKU j 'on' to one of its neighboring patterns), the difference in energy is (n-1)c + μ_j. This is because there are n-1 units in the K-structure which are each connected to TKU j with positive weight c. This change in energy is always positive and so the K-structure has a lower energy than any of its downward neighbors. Second, to ensure that a K-structure has a lower energy than any of its upward neighbors, we examine the difference in energy resulting from changing the j-th TKU from 0 to 1. Since this new TKU is not in the K-structure, it will be negatively connected to at least one of the TKUs in the K-structure*. If the new TKU is negatively connected to i TKUs in the K-structure, the change in energy is (n-i)c - i + μ_j, since i TKUs have weight -1 to the new TKU while the other n-i TKUs in the K-structure have weight c. If we choose a c to ensure that this quantity is always negative, the K-structure will have a lower energy than all of its upward neighbors. We can only do this if i > 1.
If this holds, the maximum value that the change in energy can take is when i = 2 and the activation of the new TKU is at saturation (i.e. when each of the n-2 TKUs in the chosen K-structure is contained in K-structures which also contain the new TKU). This maximum value is (n-2)c - 1, since the saturation value of a TKU is 1. To make the maximum negative, we choose a c so that (N-2)c < 1, where N is an upper bound on the number of TKUs a K-structure may possess. Thus, with this choice of c, we have guaranteed that K-structures are local minima provided that, for any choice of K-structure and new TKU, i > 1.

A further desirable property of BACAS is that, at all times, the system should have an hypothesis as to which K-structure is relevant. Thus with an ambiguous input text, the system would fix on just one of the interpretations of the input. The K-structure selected by the system as its hypothesis might, however, be dependent on the order of firing of the TKUs. For example, in our system, on being presented with 'stand-in-line' and 'buy-ticket', we want the machine to settle into either the 'going-to-the-movies', 'catching-a-train' or 'taking-the-subway' K-structure, rather than some other pattern. The main obstacle to BACAS's achieving a single hypothesis with ambiguous input is that the union of two K-structures could be a local minimum. The system could settle into such a state when a TKU from a new K-structure comes on and the pulse-out leaves the machine in a state which is a shorter Hamming distance from the union of the new and the old K-structures than from either of the K-structures. We avoid this by choosing a c so that the union of any two K-structures has a higher energy than any of its downward neighbors. Since our pattern completion mechanism only makes moves to lower energy states, we can guarantee that the union of two K-structures can never be reached from below. The calculation is as follows.
Suppose the two patterns contain m and n TKUs. Let the union of the K-structures and its downward neighbor differ in TKU_j, which is a member of the first K-structure. If i is the number of TKUs in the second K-structure that do not share K-structures with TKU_j (i.e. those which are negatively connected to TKU_j), the change in energy from the union to the neighboring pattern is (m-1+n-i)c - i + μ_j. An upper bound on this quantity (where i > 1) is (m-1+n-2)c - 1. Thus, the change in energy is always negative if we choose c such that (N-1)c < 0.5, where N, as above, is an upper bound on the number of TKUs in a K-structure.

In consequence, when a shared TKU is the first to reach threshold, the macro-layer settles into one of the K-structures containing the shared TKU. So even if it turns out to be the wrong K-structure, a small amount of activation will have flowed to the elements in its k-patterns. If the very unusual situation occurred where two TKUs from different K-structures thresholded simultaneously, the system would settle into one of the K-structures, thus resolving this 'contradiction' by ignoring some of the evidence it collects. The 10 K-structures we have used to test BACAS are based on empirically collected norms (Galambos, 1982). For these K-structures, only two TKUs, 'stand-in-line' and 'buy-ticket', are shared. The maximum number, N, of TKUs per K-structure is 6. Thus choosing c to be 0.2, so that (N-2)c < 1, results in the K-structures being local minima with the desired behavior.

VI CONCLUSION

Our aim here has been to present a content-addressable memory retrieval mechanism for using 'knowledge of the world' in a natural language understander. Simplicity was our goal and we have left out a lot of what is important to language understanding. There is no representation of causality in BACAS, nor a mechanism for the temporal ordering of units, nor for role binding. Although these are obvious necessities,
we are trying to get the retrieval mechanism correct before tackling these problems. We have demonstrated the usefulness of combining two types of connection system and we have described a technique to enable a connection system to move from one pattern to the next in response to external input. Our pulse-out mechanism replaces the Boltzmann Machine in creating energy leaps without the need for locked units. Our current research is geared towards both developing a learning mechanism for BACAS and building the natural language front-end.

* A K-structure cannot be a subset of another K-structure.

REFERENCES

[1] Cullingford, R.E. (1978) 'Script Application: Computer Understanding of Newspaper Stories.' Technical Report 116, Department of Computer Science, Yale University. Ph.D. dissertation.

[2] Galambos, J.A. (1982) 'Normative studies of six characteristics of our knowledge of common activities.' Cognitive Science Technical Report 14, Yale University.

[3] Grice, H.P. (1975) 'Logic and conversation.' In P. Cole & J.L. Morgan (Eds) Syntax and Semantics (Vol. 3): Speech Acts. New York: Seminar Press.

[4] Hinton, G.E. & Sejnowski, T.J. (1983) 'Analyzing cooperative computation.' Proceedings of the Fifth Annual Conference of the Cognitive Science Society, Rochester, NY, May.

[5] Hopfield, J.J. (1982) 'Neural Networks and Physical Systems with Emergent Collective Computational Abilities.' Proceedings of the National Academy of Sciences, USA, 79, 2554-2558.

[6] McClelland, J.L. & Rumelhart, D.E. (1981) 'An interactive model of context effects in letter perception: Part 1. An account of basic findings.' Psychological Review, 88, 375-407.

[7] McClelland, J.L. & Rumelhart, D.E. (1985) 'Distributed memory and the representation of general and specific information.' Journal of Experimental Psychology: General, 114, 159-188.

[8] McCulloch, W.S. & Pitts, W.H.
(1943) 'A logical calculus of the ideas immanent in nervous activity.' Bulletin of Mathematical Biophysics, 5, 115-133.

[9] Minsky, M. & Papert, S. (1969) Perceptrons. Cambridge, Mass: MIT Press.

[10] Schank, R.C. & Abelson, R.P. (1977) Scripts, Plans, Goals and Understanding. New Jersey: Lawrence Erlbaum.

[11] Schank, R.C. (1982) Dynamic Memory. Cambridge: Cambridge University Press.

[12] Sharkey, N.E. & Mitchell, D.C. (1985) 'Word recognition in a Functional Context: The Use of Scripts in Reading.' Journal of Memory and Language, 24, 253-270.

[13] Sharkey, N.E. (1986) 'A Model of knowledge-based expectations in text comprehension.' In J.A. Galambos, J.B. Black and R.P. Abelson (Eds) Knowledge Structures. New Jersey: Lawrence Erlbaum.

[14] Sharkey, N.E. & Sharkey, A.J.C. (1986) 'KAN: A knowledge access network model.' In R. Reilly (Ed) Dialogue Failure. Amsterdam: Elsevier, North Holland.
CHEF: A Model of Case-based Planning*

Kristian J. Hammond, Department of Computer Science, Yale University

ABSTRACT

Case-based planning is based on the idea that a machine planner should make use of its own past experience in developing new plans, relying on its memories instead of a base of rules. Memories of past successes are accessed and modified to create new plans. Memories of past failures are used to warn the planner of impending problems, and memories of past repairs are called upon to tell the planner how to deal with them. Successful plans are stored in memory, indexed by the goals they satisfy and the problems they avoid. Failures are also stored, indexed by the features in the world that predict them. By storing failures as well as successes, the planner is able to anticipate and avoid future plan failures. These ideas of memory, learning and planning are implemented in the case-based planner CHEF, which creates new plans in the domain of Szechwan cooking.

I WHAT IS CASE-BASED PLANNING?

Case-based planning is planning from experience. A case-based planner differs sharply from planners that make use of libraries of goals and plans in that it relies on an episodic memory of past planning experiences. As a result, memory organization, indexing, plan modification, and learning are central issues in case-based planning. Because this sort of planner has to be able to reuse the plans that it builds, it must be able to understand and explain why the plans that it has built succeed or fail in order to properly store them for later use. This means that a case-based planner not only needs to have a powerful memory organization, it must also have a strong model of the causality of the domain in which it operates. The case-based approach to planning is to treat planning tasks as memory problems. Instead of building up new plans from scratch, a case-based planner recalls and modifies past plans.
Instead of seeing plan failures as planning problems alone, it treats them as expectation failures that indicate a need to modify its understanding of the world. And instead of treating plans as disposable items that are used once and then forgotten, a case-based planner treats them as valuable commodities that can be stored and recalled for later use. In general, a case-based planner treats the problem of building and maintaining plans as an interaction between its knowledge base and the world. Any problem that arises out of a disparity between what the planner knows and what actually is the case requires that it alter not only its plan but also the expectations that led to the creation of that plan.

II WHY CASE-BASED?

The argument for case-based planning is straightforward: we want a planner that can learn complex plans rather than replan every time it has to achieve an already-planned-for set of goals. In the case of a single plan from the CHEF domain, a plan for stir-fried chicken and peanuts for example, the plan itself has seventeen steps performed on a dozen ingredients. While such a plan can be built up from a set of rules or plan abstractions, it is more efficient to save the entire plan and recall it for reuse when the same situation recurs. Further, the case-based approach seems to much more closely reflect human planning behavior than do those approaches that suggest replanning for every new case. Even going back to the earliest rule-based planners such as STRIPS [1], there has always been the desire to save completed plans in a way that makes them accessible to the planner in later situations. This was especially the case in those situations where a past plan included information about how to avoid problems that the planner's base of rules tended to lead into. But the planning algorithm used by most planners does not allow for anything but the most obvious reuse of existing plans.
Most planners build plans for multiple-goal situations out of the individual plans for each of the goals that the planner is handed, and then deal with any interactions as they arise. Unfortunately, this algorithm has resulted in a set of planners that recreate and then debug past mistakes rather than using the plans that they have developed before that avoid those mistakes altogether [2,4,5]. The approach taken in CHEF is to anticipate and avoid problems due to plan interaction. To do this, CHEF keeps track of what features in its domain are predictive of particular problems so it can predict when it has to plan for them. It also saves plans in memory, indexed by the goals that they satisfy and the problems that they avoid. So the prediction of a problem allows CHEF to find the plans in memory that avoid it. CHEF's basic algorithm is to find a past plan that satisfies as many of the most important goals as possible and then modify that plan to satisfy the other goals as well.

III CHEF'S OVERALL STRUCTURE

CHEF is a case-based planner that builds new plans out of its memory of old ones. CHEF's domain is Szechwan cooking and its task is to build new recipes on the basis of a user's requests. CHEF's input is a set of goals for different tastes, textures, ingredients and types of dishes, and its output is a single plan, in the form of a recipe, that satisfies all of the user's goals. Before searching for a plan to modify, CHEF examines the goals in its input and tries to anticipate any problems that might arise while planning for them. If a failure is predicted, CHEF adds a goal to avoid the failure to its list of goals to satisfy, and this new goal is also used to search for a plan. Because plans are indexed in memory by the problems they avoid, this prediction can be used to find a plan that solves the predicted problem.
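The retrieval-by-avoidance idea can be sketched as follows. The plan memory, goal names and the scoring weight are our own illustrative assumptions, not CHEF's actual indexing mechanism.

```python
def retrieve(goals, avoid_goals, memory):
    """Prefer a plan that avoids a predicted failure over one that merely
    matches more surface goals (the 10x weighting is our illustrative
    choice, not CHEF's)."""
    def score(plan):
        return 10 * len(avoid_goals & plan["avoids"]) + len(goals & plan["satisfies"])
    return max(memory, key=score)

memory = [
    {"name": "beef-and-broccoli",
     "satisfies": {"beef", "broccoli", "stir-fry"},
     "avoids": {"soggy-vegetables"}},
    {"name": "chicken-and-peanuts",
     "satisfies": {"chicken", "peanuts", "stir-fry"},
     "avoids": set()},
]

# Chicken plus snow peas predicts the soggy-vegetable failure, so retrieval
# favours the plan already indexed as avoiding it; modification then adapts
# the ingredients to the remaining goals:
best = retrieve({"chicken", "snow-peas", "stir-fry"}, {"soggy-vegetables"}, memory)
print(best["name"])  # beef-and-broccoli
```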
Much of CHEF's planning power lies in this ability to predict, and thus avoid, failures it has encountered before.

*This report describes work done at the Department of Computer Science, Yale University. It was supported in part by ONR Grant #N00014-85-K-0108.

From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved. COGNITIVE MODELLING AND EDUCATION / 267

In the sections that follow, each of CHEF's processing modules will be discussed in turn. These sections discuss two examples: one in which CHEF creates and then repairs a faulty plan for a stir fry dish including beef and broccoli, and one in which CHEF uses the knowledge gained from the first example to create a plan for a stir fry dish including chicken and snow peas. Most of this paper attends to the second example, in which the knowledge learned by CHEF in dealing with the first example is actually used, so as to show the power of the processes of problem anticipation, plan retrieval and plan modification. In discussing the processes of failure repair, plan storage and credit assignment, however, the first example is discussed. This is because CHEF learns from the problems encountered in dealing with this example, and these three modules make up CHEF's repair and learning abilities.

CHEF consists of six processes:

• Problem anticipation: The planner anticipates planning problems by noticing features in the current input that have previously participated in past planning problems.

• Plan retrieval: The planner searches for a plan that satisfies as many of its current goals as possible while avoiding the problems that it has predicted.

• Plan modification: The planner alters the plans it has found to satisfy any goals from the input that are not already achieved.

• Plan repair: When a plan fails, the planner fixes the faulty plan by building up a causal explanation of why the failure has occurred and using it to find the different strategies for repairing it.
• Credit assignment: Along with repairing a failed plan, the planner wants to repair the characterization of the world that allowed it to create the failed plan in the first place. It does this by using the causal explanation of why the failure occurred to identify the features in the input that led to the problem, and then marking them as predictive of it.

• Plan storage: The planner places successful plans in memory, indexed by the goals that they satisfy and the problems that they avoid.

The flow of control through these processes is simple. Goals are handed to the problem anticipator, which tries to predict any problems that might occur as a result of planning for them. If a problem is predicted, a goal to avoid it is added to the set of goals to be planned for. Once this is done, the goals are handed to the plan retriever, which searches for a plan that satisfies as many of the planner's goals as possible, including any goals to avoid any predicted problems. In order to do this, the plan retriever makes use of a memory of plans indexed by the goals they satisfy and the problems they solve. Once a base-line plan is found, the plan modifier alters the plan to satisfy any goals that it does not already deal with. The alteration of the plan is done using a set of modification rules that are indexed by the goal to be added and the type of plan being altered. After the plan is built it is handed to a simulator, which runs the plan using a set of rules concerning the effects of each action in CHEF's domain under different circumstances. If the plan runs without fail, it is placed in memory, indexed by the goals that it satisfies. If there are failures, the plan is handed to the plan repair mechanism. This process builds a causal description of why a plan has failed and then uses that description to find the repair strategies that will alter the causal situation to fix the fault. After the plan has been repaired, it is placed in memory, indexed by the goals it satisfies and the problems it now avoids.

Because a problem has to be anticipated before it can be planned for, however, the planner has to do more than just store plans by the fact that they solve particular problems. It also has to build up a knowledge base that can be used to infer problems on the basis of the features in the world that predict them. The planner must decide which features in the input are responsible for the problem and mark them as such. This credit assignment is done whenever a failure occurs, and the features that are blamed for the failure are linked to the memory of the failure itself. These links are later used to predict failures so that plans can be found that avoid them.

These six processes make up the basic requirements for a case-based planner. Plan retrieval, modification and storage are essential to the basic planning loop that allows old plans to be modified in service of new goals. A plan repair process is needed for those situations in which plans fail. A process that assigns blame to the features responsible for failures is required for the planner to be able to later anticipate problems. And problem anticipation is needed in order for the planner to avoid making mistakes that it has already encountered.

IV PROBLEM ANTICIPATION

CHEF's initial input is a set of goals to include different tastes and ingredients in a type of dish. The first step that CHEF takes in dealing with them is to try to anticipate any problems that might arise in planning for them. CHEF wants to predict problems before they occur so it can find a plan in memory that already avoids them.

The planner anticipates problems on the basis of a memory of past failures that is linked to the features that participated in causing them. These links are used to pass markers from the features in an input to the memory of failures that those features predict. When the features that are related to a past problem are all present, the memory of the failure is activated and the planner is informed of the possibility of the failure recurring.

For example, one of the failures the planner has in memory relates to an attempt to make a stir-fry dish with beef and broccoli. In making the dish, the liquid produced when stir frying the beef ruined the texture of the broccoli while it was cooking. The failure is indexed in memory by the features of the goals that interacted to cause it: the goal to include meat, the goal to include a crisp vegetable and the goal to make a stir fry dish. When these features are present in an input, the planner can predict that the failure will occur again. Once a problem is predicted, the planner can add a goal to avoid the problem to the set of goals that will be used to find a plan.

In planning for the goals to include SNOW PEAS and CHICKEN in a STIR FRY dish, the planner is reminded of the past failure it encountered in building the beef and broccoli plan. This failure was due to the fact that CHEF tried to stir fry some beef and broccoli together in a past plan, allowing the liquid from the beef to make the broccoli soggy. The surface features of the goal to include meat and the goal to include a crisp vegetable are the same, so the planner predicts that it will make this mistake again if it does not attend to this problem.

    Searching for plan that satisfies input goals -
        Include chicken in the dish.
        Include snow pea in the dish.
        Make a stir-fry dish.
    Collecting and activating tests.
        Is the dish STYLE-STIR-FRY.
        Is the item a MEAT.
        Is the item a VEGETABLE.
        Is the TEXTURE of item CRISP.
    Chicken + Snow Pea + Stir frying = Failure
        "Meat sweats when it is stir-fried."
        "Stir-frying in too much liquid makes crisp vegetables soggy."
    Reminded of a failure in the BEEF-AND-BROCCOLI plan.
    Failure = 'The vegetable is now soggy'

Once a failure has been anticipated, CHEF builds a goal to avoid it and adds this goal to the set of goals which will be used to search for a plan. This set of goals is then handed to the plan retriever.

V PLAN RETRIEVAL

The function of the plan retriever is to find the best plan to use in satisfying a set of goals. It seeks what we call the "best match" in memory, the plan for a past situation that most closely resembles the current situation. In CHEF, this notion of "best match" is defined as finding a plan that satisfies or partially satisfies as many of the planner's most important goals as possible: finding a raspberry soufflé recipe that can be turned into a strawberry soufflé, or finding a beef stir fry dish that can be turned into one for pork.

The plan retriever uses three kinds of knowledge in finding the best match for a set of goals: a plan memory that is indexed by the goals that the plans satisfy and the problems that they avoid, a similarity metric that allows it to notice partial matches between goals, and a value hierarchy that allows it to judge the relative importance of the goals it is planning for. The planner's goals, including the goal to avoid the problem of the soggy vegetable, are all used to drive down in a discrimination network that organizes past plans.

    Driving down on: Make a stir-fry dish.
    Succeeded -
    Driving down on: Avoid failure exemplified by the state
        'The broccoli is now soggy' in recipe BEEF-AND-BROCCOLI.
    Succeeded -
    Driving down on: Include chicken in the dish.
    Failed - Trying more general goal.
    Driving down on: Include meat in the dish.

When CHEF's retriever is searching for a past case on which to base its planning, it is searching for a plan that was built for a situation similar to the one it is currently in. The idea behind this search is that the solution to a past problem similar to the current one will be useful in solving the problem at hand.
But this means that the vocabulary used to describe the similarity between the two situations has to capture the relevant aspects of the planning problems that the planner deals with. This vocabulary consists of two classes of objects: the goals in a situation (which in the case of stored plans are satisfied by those plans) and the problems that have been anticipated (which in the case of the stored plans are avoided by them). CHEF gets its goals directly from the user in the form of a set of constraints that have to be satisfied. It gets information about the problems that it thinks it has to avoid while planning for those goals from its problem anticipator, which examines the goals and is reminded of problems that have resulted from interactions between similar goals in the past.

CHEF's domain is Szechwan cooking, so the goals that it plans for are all related to the taste, texture and style of the dish it is creating. The basic vocabulary that CHEF uses to describe the effects of its plans, and the effects that it wants a plan to accomplish, includes descriptions of the foods that it can include (e.g., beef, chicken, snow peas and bean sprouts), the tastes that the user wants (e.g., hot, spicy, savory and fresh), the textures that the dish should include (e.g., crunchy, chewy and firm) and the type of dish that the user is looking for (e.g., STIR-FRY, SOUFFLE and PASTA).

In searching for a past situation which might be useful, the plan retriever uses the goals that it is handed to index to possible plans. The plan it finds, then, will satisfy at least some of the goals it is looking for.

    Succeeded -
    Driving down on: Include snow pea in the dish.
    Failed - Trying more general goal.
    Driving down on: Include vegetable in the dish.
    Succeeded -
    Found recipe -> BEEF-AND-BROCCOLI

Here CHEF finds a past plan that avoids the problem due to goal interactions that has been predicted, while still partially satisfying the other more surface-level goals.
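The "drive down, then try the more general goal" behavior shown in the trace can be sketched as follows. This is a hypothetical simplification: the generalization table and index contents are invented for illustration, not taken from CHEF.

```python
# Hypothetical sketch of discrimination-style retrieval: try each goal's
# specific index term first, fall back to its generalization, and keep
# only plans reachable under every goal (including avoidance goals).

GENERALIZES = {"chicken": "meat", "beef": "meat",
               "snow-peas": "vegetable", "broccoli": "vegetable"}

def retrieve(goals, index):
    """index maps a goal term to the set of plan names filed under it."""
    candidates = None
    for goal in goals:
        plans = index.get(goal) or index.get(GENERALIZES.get(goal), set())
        candidates = set(plans) if candidates is None else candidates & plans
    return candidates or set()
```

Because the avoidance goal participates in the intersection like any other index term, a plan stored under the problem it solves outcompetes plans that merely share surface features.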
VI PLAN MODIFICATION

CHEF's memory of plans is augmented by the ability to modify plans for situations that only partially match the planner's present problems. CHEF's plan modifier, the module that handles these changes, makes use of a modification library that is indexed by the changes that have to be made and the plan that they have to be made in. The modification rules it has are not designed to be complete plans on their own, but are instead descriptions of the steps that have to be added to and deleted from existing plans in order to make them satisfy new goals. Along with the modification rules is a set of object critics that look at a plan and, on the basis of the items or ingredients involved, correct difficulties that have been associated with those ingredients in the past.

The process used by CHEF's modifier is simple. For each goal that is not yet satisfied by the current plan, the modifier looks for the modification rule associated with the goal and the general plan type. If no modification rule exists for the particular goal, the modifier steps up in an abstraction hierarchy and finds the modification rule for the more general version of the goal. Once a rule is found, the steps it describes are added to the plan, merged with existing steps when possible.

The problems that CHEF tries to avoid, by finding plans that work around past instances of them, also relate to the planner's goals. In searching for a plan, CHEF uses the predictions of any problems that it has anticipated to find plans that avoid those problems. Because plans are indexed by the problems that they solve, the planner is able to use these predictions to find the plans that will avoid the problems they describe. In searching for a base-line plan for the chicken and snow peas situation, CHEF searches for a plan that, among other things, avoids the predicted problem of soggy vegetables.
Because the beef and broccoli plan is indexed by the fact that it solves the problem of soggy vegetables due to stir frying with meats, the plan retriever is able to find it on the basis of the prediction of this problem, even though the surface features of the two situations are dissimilar.

If the goal in question is already partially satisfied by the plan (this happens when the plan satisfies a goal that is similar to the current one), the planner does not have to go to its modification rules. It substitutes the new item for the old item, removing any steps that were added by the old item's ingredient critics and adding any steps required by the new item's ingredient critics. In altering the BEEF-AND-BROCCOLI plan to include chicken and snow peas, the planner has the information that both new ingredients are partial matches for existing ones and can be directly substituted for them. A critic under the concept CHICKEN then adds the step of boning the chicken before chopping.

In searching for a plan that includes chicken and snow peas, the planner is also trying to find a plan that avoids the problem it has predicted of the vegetables getting soggy as a result of being cooked with the meat. The fact that it has predicted this problem allows it to find a plan in memory that avoids it, even though the plan deals with surface features that are not in the current input. The planner is able to find this plan even though a less appropriate plan with more surface features in common with the current situation, a recipe for chicken and green beans, is also in memory.

    Modifying new plan to satisfy:
        Include chicken in the dish.
    Substituting chicken for beef in new plan.
    Modifying new plan to satisfy:
        Include snow pea in the dish.
    Substituting snow pea for broccoli in new plan.
    Considering critic:
        Before doing step: Chop the chicken
        do: Bone the chicken.
    - Critic applied.
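The substitute-then-criticize behavior in the trace can be sketched as follows. This is a hypothetical simplification: the critic table and step strings are invented for the example and stand in for CHEF's ingredient critics.

```python
# Hypothetical sketch of partial-match modification: substitute the new
# ingredient for the similar old one, then apply critics indexed under
# the new ingredient (here, bone the chicken before any chopping step).

CRITICS = {"chicken": [("before", "chop", "bone the chicken")]}

def substitute(plan, old, new):
    """Replace `old` with `new` in every step, then run `new`'s critics."""
    steps = [step.replace(old, new) for step in plan]
    for position, anchor, extra in CRITICS.get(new, []):
        # Find the step the critic is anchored to and insert around it.
        i = next(i for i, s in enumerate(steps) if anchor in s)
        steps.insert(i if position == "before" else i + 1, extra)
    return steps
```

A fuller version would also remove steps contributed by the old ingredient's critics before applying the new ones, as the text describes.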
VII PLAN REPAIR

Because CHEF cannot avoid making errors, it has to be able to repair faulty plans in response to failures. As it is, CHEF's plan repairer is one of the most complex parts of both the CHEF program and the case-based theory that it implements. It is complex because it makes the most use of the planner's specific knowledge about the physics of the CHEF domain and has to combine this knowledge with a more abstract understanding of how to react to planning problems in general.

Once it completes a plan, CHEF runs a simulation of it using a set of causal rules. This simulation is CHEF's equivalent of the real world. At the end of the simulation, it checks the final states that have been simulated against the goals of the plan it has run. If any goal is unsatisfied, or if any state has resulted that CHEF wants to avoid in general, an announcement of the failure is handed to the plan repairer.

CHEF deals with plan failure by building a causal explanation of why the failure has occurred. This explanation connects the surface features of the initial plan to the failure that has resulted. The planner's goals, the particular steps that it took and the changes that were made are all included in this explanation. This explanation is built by back chaining from the failure to the initial steps or states that caused it, using a set of causal rules that describe the results of actions in different circumstances.

The explanation of the failure is used to find a structure in memory that organizes a set of strategies for solving the problem described by the explanation. These structures, called Thematic Organization Packets or TOPs [3], are similar in function to the critics found in HACKER [4] and NOAH [2]. Each TOP is indexed by the description of a particular type of planning problem, and each organizes a set of strategies for dealing with that type of problem. These strategies take the form of general repair rules such as REORDER steps and RECOVER from side-effects. Each general strategy is filled in with the specifics of the particular problem to build a description of a change in the plan that would solve the current problem. This description is used as an index into a library of plan modifiers in the cooking domain. The modifications found are then tested against one another using rules concerning the efficacy of the different changes, and the one that is most likely to succeed is chosen.

The idea behind these structures is simple. There is a great deal of planning information that is related to the interactions between plans and goals. This information cannot be tied to any individual goal or plan but is instead tied to problems that rise out of their combination. In planning, one important aspect of this information concerns how to deal with problems due to the interactions between plan steps. Planning TOPs provide a means to store this information. Each TOP corresponds to a planning problem due to the causal interaction between the steps and the states of a plan. When a problem arises, a causal analysis of it provides the information needed to identify the TOP that actually describes the problem in abstract terms. Under each TOP is a set of strategies designed to deal with the problem the TOP describes. By finding the TOP that relates to a problem, then, a planner actually finds the strategies that will help to fix that problem.

CHEF does not run into any failures while planning for the stir fry dish with chicken and snow peas. This is because it is able to avoid the problem of the liquid from the meat making the vegetables soggy by anticipating the failure and finding a plan that avoids it. It can only do this, however, because it has handled this problem already, in that it built a similar plan that failed and was then repaired. This earlier plan is the same one that CHEF selected as the base-line plan for this situation, because it knows that this is a plan that it repaired in the past to avoid the same problem of liquid from the meat making the vegetables soggy that is now predicted as a problem in the current situation.

This past failure occurred when CHEF was planning for the goals of including beef and broccoli in a stir fry dish. CHEF originally built a simple plan in which the two main ingredients were stir fried together. Unfortunately, this plan resulted in the liquid produced by stir frying the beef making the vegetables soggy as they were cooking. The explanation of this failure indexes to the TOP SIDE-EFFECT:DISABLED-CONDITION:CONCURRENT, a memory structure related to the interaction between concurrent plans in which a side effect of one violates a precondition of the other. This is because the side effect of liquid coming from the stir-frying of the beef disables a precondition attached to the broccoli stir-fry plan: that the pan being used is dry.

The causal description of the failure is used to access this TOP out of the twenty that the program knows about. All of these TOPs are associated with causal configurations that lead to failures and store strategies for fixing the situations that they describe. For example, one TOP is DESIRED-EFFECT:DISABLED-CONDITION:SERIAL, a TOP that describes a situation in which the desired effect of a step interferes with the satisfaction conditions of a later step. The program was able to recognize that the current situation was a case of SIDE-EFFECT:DISABLED-CONDITION:CONCURRENT because it has determined that no goal is satisfied by the interfering condition (the liquid in the pan), that the condition disables a satisfaction requirement of a step (that the pan be dry) and that the two steps are one and the same (the stir fry step).
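The recognition test just described, whether the violating condition satisfies a goal and whether the interacting steps are one and the same, can be sketched as follows. This is a hypothetical simplification covering only four of the twenty TOPs; the function name is invented.

```python
# Hypothetical sketch of TOP identification from the causal analysis of
# a failure: two tests on the explanation select among four TOP names.

def identify_top(condition_satisfies_goal, steps_are_same):
    effect = "DESIRED-EFFECT" if condition_satisfies_goal else "SIDE-EFFECT"
    timing = "CONCURRENT" if steps_are_same else "SERIAL"
    return effect + ":DISABLED-CONDITION:" + timing
```

In the beef-and-broccoli failure, the liquid serves no goal and the interacting steps are the same stir fry step, so the first test is false and the second true.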
Had the liquid in the pan satisfied a goal, the situation would have been recognized as a case of DESIRED-EFFECT:DISABLED-CONDITION:CONCURRENT, because the violating condition would actually be a goal-satisfying state.

    Found TOP TOP1 -> SIDE-EFFECT:DISABLED-CONDITION:CONCURRENT
    TOP -> SIDE-EFFECT:DISABLED-CONDITION:CONCURRENT
        has 3 strategies associated with it:
        SPLIT-AND-REFORM
        ALTER-PLAN:SIDE-EFFECT
        ADJUNCT-PLAN

These three strategies reflect the different changes that can be made to repair the plan. They suggest:

• ALTER-PLAN:SIDE-EFFECT: Replace the step that causes the violating condition with one that does not have the same side-effect but achieves the same goal.

• SPLIT-AND-REFORM: Split the step into two separate steps and run them independently.

• ADJUNCT-PLAN:REMOVE: Add a new step, to be run along with the step that causes the side-effect, that removes the side-effect as it is created.

In this case, only SPLIT-AND-REFORM can be implemented for this particular problem, so the change it suggests is made. As a result, the single stir fry step in the original plan, in which the beef and broccoli were stir fried together, is changed into a series of steps in which they are stir fried apart and joined back together in the final step of the plan.

Once a plan is repaired, it can be described as a plan that now avoids the problem that has just been fixed. When it is stored in memory, then, it is stored as a plan that avoids this problem, so it can be found if a similar problem is predicted.

VIII PLAN STORAGE

Plan storage is done using the same vocabulary of goals satisfied and problems avoided that plan retrieval uses. Once a plan has been built and run, it is stored in memory, indexed by the goals it satisfies and the problems it avoids. The plans are indexed by the goals that they satisfy so the planner can find them later on when it is asked to find a plan for a set of goals.
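The dual indexing scheme can be sketched as follows. This is a toy stand-in, assuming a flat dictionary where CHEF actually uses a discrimination network; the names are invented for the example.

```python
# Hypothetical sketch of plan storage: a plan is filed under every goal
# it satisfies and every problem it avoids, so a later prediction of
# the problem can recall the plan directly.

def store(memory, plan_name, goals, avoided_problems):
    for key in list(goals) + [("avoids", p) for p in avoided_problems]:
        memory.setdefault(key, set()).add(plan_name)
    return memory
```

Either kind of index term, a goal or an avoided problem, is then enough to reach the plan at retrieval time.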
They are also stored by the problems that they avoid, so that CHEF, if it knows that a problem is going to result from some planning situation, can find a plan that avoids that problem.

The repaired BEEF-AND-BROCCOLI plan is indexed in memory under the goals that it satisfies as well as under the problems that it avoids. So it is indexed under the fact that it is a plan for stir frying, for including beef and so on. It is also indexed by the fact that it avoids the problem of soggy vegetables that rises out of the interaction between meat and crisp vegetables when stir fried. The fact that the plan is associated with the problem that it solves allows the plan retriever to later find the plan when confronted with the task of finding a plan that avoids the problem of soggy vegetables that results when meats and crisp vegetables are stir fried together.

    Indexing BEEF-AND-BROCCOLI under goals and problems:
    If this plan is successful, the following should be true:
        The beef is now tender.
        The broccoli is now crisp.
    Include beef in the dish.
    Include broccoli in the dish.
    Make a stir-fry dish.
    The plan avoids failure exemplified by the state
        'The broccoli is now soggy' in recipe BEEF-AND-BROCCOLI.

IX CREDIT ASSIGNMENT

CHEF's approach to failures is twofold. It repairs the plan to make it run, and it repairs itself to make sure that it will not make the same mistake again. Part of assuring that the same mistake will not be repeated is storing the repaired plan so that it can be used again. But the fact that the original plan failed and had to be repaired in the first place indicates that CHEF's initial understanding of the planning situation was faulty, in that it built a failure when it thought it was building a correct plan. When it encounters a failure, then, CHEF also has to find out why the failure occurred so that it can anticipate that failure when it encounters a similar situation in the future.
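The link between blamed features and a remembered failure can be sketched as a simple "demon". This is a hypothetical simplification: the closure below stands in for CHEF's marker-passing demons, and the feature names are invented for the example.

```python
# Hypothetical sketch of credit assignment: the goals blamed by the
# causal explanation are generalized into feature tests, and a demon
# built from those tests reactivates the failure memory on later inputs.

def build_demon(blamed_features, failure_memory):
    tests = frozenset(blamed_features)  # e.g. {"meat", "crisp-vegetable"}
    def demon(input_features):
        # Fire only when every predictive feature is present.
        return failure_memory if tests <= set(input_features) else None
    return demon
```

Generalizing the blamed goals (crisp vegetable rather than broccoli, meat rather than beef) is what lets the demon fire on the later chicken and snow peas input.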
CHEF makes use of the same causal explanation used to find its TOPs and repair strategies to figure out which features should be blamed for a failure. The purpose of this blame assignment is to track down the features in the current input that could be used to predict this failure in later inputs. This ability to predict planning failures before they occur allows the problem anticipator to warn the planner of a possible failure and allow it to search for a plan avoiding the predicted problem. The power of the problem anticipator, then, rests on the power of the process that figures out which features are to blame for a failure.

CHEF steps through the explanation built by the plan repairer to identify the goals that interacted to cause the failure. After being pushed to the most general level of description that the current explanation can account for, these goals are turned into tests on the input. This allows the planner to later predict failures on the basis of surface features that are similar to the ones that participated in causing the current problem.

As a result of the beef and broccoli failure, a test on the texture of vegetables is built and associated with the concept VEGETABLE, because a goal for a crisp vegetable predicts this failure while goals for other vegetables do not. It is associated with VEGETABLE rather than BROCCOLI because the rule explaining the failure is valid for all crisp vegetables, not just broccoli. Because any meat will put off liquid like that which participated in the failure, no test is needed, and a link is built directly from the presence of the goal to the memory of the failure.

    Building demon: DEMON0 to anticipate interaction between rules:
        "Meat sweats when it is stir-fried."
        "Stir-frying in too much liquid makes crisp vegetables soggy."
    Indexing marker passing demon under item: MEAT
        by test: Is the item a MEAT.
    Indexing marker passing demon under item: VEGETABLE
        by test: Is the item a VEGETABLE.
        and Is the TEXTURE of item CRISP.
    Goal to be activated =
        Avoid failure exemplified by the state
        'The broccoli is now soggy' in recipe BEEF-AND-BROCCOLI.

These links between surface features and memories of failures are used later to predict the same problem when CHEF is handed the goals to make a stir fry dish with chicken and snow peas. This prediction is then used to find the plan that avoids the problem.

X CHEF

The idea behind CHEF is to build a planner that learns from its own experiences. The approach taken in CHEF is to use a representation of those experiences in the planning process. To make this possible, CHEF requires what any case-based planner requires: a memory of past events and a means to retrieve old plans and store new ones, a method for modifying old plans to satisfy new goals, a way to repair plans that fail, and a way to turn those failures into knowledge of how to plan better.

By choosing plans on the basis of the problems that they solve as well as the goals they satisfy, CHEF is able to avoid any problem that it is able to predict. By also treating planning failures as understanding failures and repairing its knowledge base as well as its plans, CHEF is able to predict problems that it has encountered before. And by using an extensive causal analysis as a means of diagnosing problems, CHEF is able to apply a wide variety of repairs to a single failure.

REFERENCES

[1] Fikes, R., and Nilsson, N., STRIPS: A new approach to the application of theorem proving to problem solving, Artificial Intelligence, 2 (1971).

[2] Sacerdoti, E., A structure for plans and behavior, Technical Report 109, SRI Artificial Intelligence Center, 1975.

[3] Schank, R., Dynamic memory: A theory of learning in computers and people, Cambridge University Press, 1982.

[4] Sussman, G., Artificial Intelligence Series, Volume 1: A computer model of skill acquisition, American Elsevier, New York, 1975.
[5] Wilensky, R., META-PLANNING, Technical Report M80 33, UCB College of Engineering, August 1980.
The Structure-Mapping Engine*

Brian Falkenhainer, Kenneth D. Forbus (Qualitative Reasoning Group) and Dedre Gentner (Psychology Department), University of Illinois, 1304 W. Springfield Ave, Urbana, Illinois 61801

ABSTRACT

This paper describes the Structure-Mapping Engine (SME), a cognitive simulation program for studying human analogical processing. SME is based on Gentner's Structure-Mapping theory of analogy, and provides a "tool kit" for constructing matching algorithms consistent with this theory. This flexibility enhances cognitive simulation studies by simplifying experimentation. Furthermore, SME is very efficient, making it a candidate component for machine learning systems as well. We review the Structure-Mapping theory and describe the design of the engine. Next we demonstrate some examples of its operation. Finally, we discuss our plans for using SME in cognitive simulation studies.

1. INTRODUCTION

This paper describes the Structure-Mapping Engine (SME), a cognitive simulation program we have built to explore the computational aspects of Gentner's Structure-Mapping theory of analogical processing. SME is both flexible and efficient. It provides a "tool kit" for constructing matchers consistent with the kinds of comparisons sanctioned by Gentner's theory. A matcher is specified by a collection of rules. The rules can include strengths of evidence, and the program uses these weights and a novel procedure for combining the local matches constructed by the rules to efficiently produce and weigh all consistent global matches. The efficiency and flexibility of this matching algorithm suggests it would also be a viable component for machine-learning systems.

Cognitive simulation studies can offer important insights for understanding the human mind. Unfortunately, cognitive simulation programs tend to be complex and computationally expensive (c.f. [Anderson, 1983; Van Lehn, 1983]). Being complex makes the relationship between the program and the theory obscure.
In addition, it is harder to make computational experiments and account for new data if the only way to change the program's operation is surgery on the code. Being computationally expensive means performing fewer experiments, and thus exploring fewer possibilities. There have been several important AI programs that study the computational aspects of analogy, but they were not designed to satisfy the above criteria (e.g. Burstein, 1983; Winston, 1980, 1982).

The next section briefly reviews Gentner's Structure-Mapping theory. Section 3 describes SME's organization and its novel matching algorithm. Section 4 illustrates SME's operation on several examples, and Section 5 describes our plans for future development and for using it in psychological experimentation.

2. THE STRUCTURE-MAPPING THEORY

The theoretical framework for this research is the Structure-Mapping theory of analogy (Gentner, 1980, 1982, 1983; Gentner & Gentner, 1983). This theory describes the set of implicit rules by which people interpret analogy and similarity. The central intuition is that an analogy is a mapping of knowledge from one domain (the base) into another (the target) which conveys that a system of relations known to hold in the base also holds in the target. The target objects do not have to resemble their corresponding base objects. Objects are placed in correspondence by virtue of their like roles in the common relational structure.

* This research is supported by the Office of Naval Research, Contract No. N00014-85-K-0559.

Given collections of objects {b_i}, {t_j} in the base and target representations, respectively, the tacit rules for constructing the analogical mapping M can be formalized as follows:**

Objects in the base are placed in correspondence with objects in the target:

    M: b_i --> t_j

Predicates are mapped from the base to the target according to the following mapping rules:

(1) Attributes of objects are dropped: e.g.
RED(b_i) -/-> RED(t_j)

(2) Relations between objects in the base tend to be mapped across: e.g. COLLIDE(b_i, b_j) --> COLLIDE(t_i, t_j)

(3) The particular relations mapped are determined by systematicity, as defined by the existence of higher-order*** constraining relations which can themselves be mapped: e.g. CAUSE[PUSH(b_i, b_j), COLLIDE(b_i, b_j)] --> CAUSE[PUSH(t_i, t_j), COLLIDE(t_i, t_j)]

** Besides analogy, other kinds of similarity can be characterized by the distribution of relational and attributional predicates that are mapped. In analogy, only relational predicates are mapped. In literal similarity, both relational predicates and object-attributes are mapped. In mere-appearance matches, it is chiefly object-attributes that are mapped.

*** We define the order of an item in a representation as follows: Objects and constants are order 0. The order of a predicate is one plus the maximum of the order of its arguments. Thus GREATER-THAN(x, y) is first-order if x and y are objects, and CAUSE[GREATER-THAN(x, y), BREAK(x)] is second-order. Examples of higher-order relations include CAUSE and IMPLIES.

From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved.

For example, consider the analogy between heat-flow and water-flow. Figure 1 shows a water-flow situation and an analogous heat-flow situation (the heat-flow panel depicts an ice cube in warm coffee). Figure 2 shows the representation a learner might have of these situations (simplified for clarity).

[Figure 1. Two Physical Situations Involving Flow (adapted from Buckley, 1979, pp. 15-25).]

[Figure 2, water-flow portion: CAUSE[GREATER[PRESSURE(beaker), PRESSURE(vial)], FLOW(beaker, vial, water, pipe)]; GREATER[DIAMETER(beaker), DIAMETER(vial)]; LIQUID(water); FLAT-TOP(water); CLEAR(beaker).]

The Structure-Mapping theory has received a great deal of convergent theoretical support in artificial intelligence and psychology.
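The footnote's recursive definition of predicate order translates directly into code. The sketch below is ours, not the paper's (Python, with a tuple-based encoding of predicate applications that SME does not use); it is only meant to make the definition concrete.

```python
# Order of an item (per the footnote): objects and constants are order 0;
# a predicate's order is one plus the maximum order of its arguments.
# Encoding (illustrative, not SME's): a string is an object/constant,
# a tuple is a predicate application ("FUNCTOR", arg1, arg2, ...).

def order(item):
    if isinstance(item, str):              # object or constant
        return 0
    _functor, *args = item
    return 1 + max(order(a) for a in args)

gt = ("GREATER-THAN", "x", "y")            # first-order relation
cause = ("CAUSE", gt, ("BREAK", "x"))      # second-order (higher-order) relation

assert order("x") == 0
assert order(gt) == 1
assert order(cause) == 2
```

Higher-order relations such as CAUSE and IMPLIES are exactly those whose arguments are themselves relations, which is what the recursion captures.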
Although there are differences in emphasis, there is widespread agreement on the basic elements of one-to-one mappings of objects with carryover of predicates (Burstein, 1983; Carbonell, 1983; Hofstadter, 1984; Kedar-Cabelli, 1985; Reed, 1985; Rumelhart & Norman, 1981; Winston, 1982). Moreover, all these researchers have adopted something like the systematicity principle, or a special case of systematicity. For example, Carbonell focuses on plans and goals as the high-order relations that give constraint to a system, and Winston focuses on causality. Also, some models combine a structure-mapping component, which generates possible interpretations of a given analogy, with a pragmatic component which chooses the relevant interpretation (e.g., Burstein, 1983; Kedar-Cabelli, 1985).

Empirical psychological studies have borne out the prediction that systematicity is a key element of people's implicit rules for analogical mapping. Adults focus on shared systematic relational structure in interpreting analogy. They tend to include relations and omit attributes in their interpretations of analogy, and they judge analogies as more sound and more apt if base and target share systematic relational structure (Gentner, 1980; Gentner & Landers, 1985; Gentner & Stuart, 1983). Finally, in developmental work we have found that children are better at performing difficult mappings when the base structure is systematic (Gentner & Toupin, in press). Given the existing theoretical and empirical psychological support, we have decided that cognitive simulation is needed to allow us to explore the theory still more deeply.

[Figure 2, heat-flow portion: GREATER[TEMPERATURE(coffee), TEMPERATURE(ice cube)]; FLOW(ice cube, coffee, heat, bar); LIQUID(coffee); FLAT-TOP(coffee). Figure 2. Simplified Water Flow and Heat Flow Descriptions, the representations a learner might have of these situations, simplified for clarity.]

In order to comprehend the analogy "Heat is like water" a learner must: (1) Set up the object correspondences between the two domains:
water --> heat, tube --> metal bar, beaker --> coffee, vial --> ice cube. (2) Discard object attributes, such as CYLINDRICAL(beaker). (3) Map base relations such as GREATER-THAN[PRESSURE(water, beaker), PRESSURE(water, vial)] to the corresponding relations in the target domain. (4) Observe systematicity: i.e., keep relations belonging to a systematic relational structure in preference to isolated relationships. In this example, CAUSE[GREATER-THAN[PRESSURE(water, beaker), PRESSURE(water, vial)], FLOW(water, pipe, beaker, vial)] is mapped into CAUSE[GREATER-THAN[TEMPERATURE(heat, coffee), TEMPERATURE(heat, ice cube)], FLOW(heat, bar, coffee, ice cube)], while isolated relations, such as GREATER-THAN[DIAMETER(beaker), DIAMETER(vial)], are discarded.

3. THE STRUCTURE-MAPPING ENGINE: DESIGN

Given the descriptions of a base and a target, SME constructs all syntactically consistent analogical mappings between them. As noted above, the mappings consist of pairwise matches between predicates and objects in the base and target, plus a list of predicates which exist in the base but not the target. This list of predicates is the set of candidate inferences sanctioned by the analogy. SME also provides a syntactic evaluation of each mapping. In accordance with Structure-Mapping theory, no domain information beyond the representation of the target is used in SME to evaluate the candidate inferences; that is the job of other modules.

The base and target representations provided to SME are collections of facts called description groups. Domain objects and constants are collectively referred to as entities. The construction of the analogy is guided by match rules which specify which facts and entities in the base and target might match and estimate the believability of each possible component of a match. Importantly, to build a new match function one simply loads a new set of match rules. These rules are the key to SME's flexibility. An analogy is processed in three steps.
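Before walking through the three steps, the representation just described (description groups as collections of facts, with domain objects and constants as entities) can be sketched in code. This is our own minimal Python encoding for illustration, not SME's actual data structures.

```python
# A description group as a list of facts; each fact is a nested tuple
# ("FUNCTOR", arg, ...) and each leaf string is an entity. The facts
# below follow the simplified water-flow description of Figure 2.

water_flow = [
    ("CAUSE",
     ("GREATER", ("PRESSURE", "beaker"), ("PRESSURE", "vial")),
     ("FLOW", "beaker", "vial", "water", "pipe")),
    ("GREATER", ("DIAMETER", "beaker"), ("DIAMETER", "vial")),
    ("LIQUID", "water"),
    ("FLAT-TOP", "water"),
]

def entities(fact):
    """Collect the entities (leaf strings) mentioned in a fact."""
    if isinstance(fact, str):
        return {fact}
    return set().union(*(entities(a) for a in fact[1:]))

assert entities(water_flow[0]) == {"beaker", "vial", "water", "pipe"}
assert entities(("LIQUID", "water")) == {"water"}
```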
First, all potential pairings between items in the base and target are constructed and individually evaluated. Second, all sets of consistent combinations of these pairings are constructed to form the possible global matches and their corresponding candidate inference sets. Finally, the global matches are evaluated syntactically to provide a score. We now describe these computations in detail.

The systematicity principle is central to analogy. Analogy conveys a system of connected knowledge, not a mere assortment of independent facts. Preferring systems of predicates that contain higher-order relations with inferential import is a syntactic expression of this tacit preference for coherence and deductive power in analogy. It is the higher-order relational structure that determines which of two possible matches is made. For example, suppose in the previous example we were concerned with objects differing in specific heat, such as a metal ball-bearing and a marble of equal mass, rather than temperatures. Then DIAMETER becomes relevant, since (in a more complete model than we have space for) DIAMETER affects the capacity of a container, the analog to specific heat.

3.1. Step 1: Construct local match hypotheses

SME begins by finding for each entity and predicate in the base the set of entities or predicates in the target that could plausibly match that item. Plausibility is determined by match hypothesis constructor rules, which take the form (MHCrule <condition> <body>). The body of these rules is run on each pair of items (one from the base and one from the target) that satisfy the condition and installs a match hypothesis which represents the possibility of them matching. For example, we state that all predicates whose predicate name is identical could potentially match with the rule (MHCrule (equal-functors?
*base-fact* *target-fact*) (install-MH *base-fact* *target-fact*)). The likelihood of each match hypothesis is found by running match evidence rules and combining their results. The evidence rules provide support for a match hypothesis by examining the syntactic properties of the items matched. For example, the rule (MHErule (and (equal (mh-type *MH*) fact) (equal-functors? (mh-base-item *MH*) (mh-target-item *MH*))) (MHevidence *MH* 0.5 0.0)) states "If the two items are facts and their functors are the same, then supply 0.5 evidence in favor of the match hypothesis."* The rules may also examine match hypotheses associated with the arguments of these items to provide support based on systematicity. This causes evidence for a match hypothesis to increase with the amount of higher-order structure supporting it. We use the Dempster-Shafer formalism for probabilities (Shafer, 1976) and combine evidence with a simplified form of Dempster's rule of combination (Prade, 1983; Ginsberg, 1984). By using the simplified formula we are assuming independence among the match hypotheses, but this is not a problem because we are only using it to produce scores for ordering candidates rather than estimating probabilities. The state of the match between the water flow and heat flow descriptions of Figure 2 after running these first two sets of rules is shown in Figure 3. The weights shown in the figure are the support for each match hypothesis. Internally the program stores a Shafer interval, consisting of the support for the match and the maximum plausible support (i.e., one minus the support against it).
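A rough Python rendering of Step 1 may help: a constructor rule proposes candidate pairings, evidence rules attach support, and supports are combined under the independence assumption the text describes. The function and rule names are ours; the 0.5 "same functor" weight is the one quoted above, but the combination formula is only our reading of "a simplified form of Dempster's rule", so treat this as a sketch.

```python
# Step 1 sketch: facts are ("FUNCTOR", arg, ...) tuples; all names are
# illustrative, not SME's actual Lisp API.

def equal_functors(base_fact, target_fact):
    return base_fact[0] == target_fact[0]

def combine(s1, s2):
    # simplified combination of two independent supports in favor
    return 1 - (1 - s1) * (1 - s2)

def step1(base, target, constructor=equal_functors, evidence_rules=()):
    """Return match hypotheses as (base_fact, target_fact, support)."""
    hypotheses = []
    for b in base:
        for t in target:
            if constructor(b, t):
                support = 0.5 if equal_functors(b, t) else 0.0
                for rule in evidence_rules:   # e.g. systematicity support
                    support = combine(support, rule(b, t))
                hypotheses.append((b, t, support))
    return hypotheses

base = [("GREATER", "p1", "p2"), ("LIQUID", "water")]
target = [("GREATER", "t1", "t2"), ("FLAT-TOP", "coffee")]
mhs = step1(base, target)
assert len(mhs) == 1 and mhs[0][2] == 0.5
assert abs(combine(0.5, 0.3) - 0.65) < 1e-9
```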
The water flow - heat flow analogy is made possible by the program being able to match predicates with different names, such as matching PRESSURE and TEMPERATURE. This behavior is caused by the particular set of rules we are using. In these rules, relational predicates such as GREATER are limited to matching predicates having the same name, while functional predicates such as TEMPERATURE can match other functional predicates. Note that at this stage, SME is entertaining a number of matches that will later be discarded, such as LIQUID(water) <-> LIQUID(coffee) and DIAMETER(vial) <-> TEMPERATURE(ice cube).

[Figure 3. Water Flow - Heat Flow Match After Running Local Rules: the table of match hypotheses (predicate pairings such as GREATER <-> GREATER, PRESSURE <-> TEMPERATURE, DIAMETER <-> TEMPERATURE, FLOW <-> FLOW, FLAT-TOP <-> FLAT-TOP, LIQUID <-> LIQUID) and entity pairings (vial <-> ice cube, beaker <-> coffee, water <-> coffee, water <-> heat, pipe <-> bar), each with an evidence score between 0.632 and 0.932.]

* Evidence is attributed to a match hypothesis in the form of two numbers. The first number corresponds to evidence in favor of the match and the second number indicates evidence against the match. The sum of these numbers must be less than or equal to one.

3.2. Step 2: Global Match Construction

Once the individual match hypotheses have been constructed and analyzed, SME builds a set of analogical mappings between the base and target. Each mapping is a maximal set of consistent match hypotheses plus the candidate inferences supported by those hypotheses. Consistency is enforced by insisting that a match hypothesis MH is in the analogy only if the mapping includes other match hypotheses that pair up all the arguments of the base and target items of MH.
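The consistency requirement bottoms out in entity correspondences: no base entity may map to two target entities and vice versa. A small sketch of that check (our own Python encoding, not SME's):

```python
# Entity-correspondence consistency: pairs are (base_entity, target_entity).

def consistent(pairs):
    base_to, target_to = {}, {}
    for b, t in pairs:
        if base_to.setdefault(b, t) != t:      # base entity mapped twice
            return False
        if target_to.setdefault(t, b) != b:    # target entity mapped twice
            return False
    return True

good = [("beaker", "coffee"), ("vial", "ice cube"), ("water", "heat")]
bad = good + [("water", "coffee")]   # water cannot map to two targets

assert consistent(good)
assert not consistent(bad)
```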
The mappings are maximal in that adding another match hypothesis would lead to a contradiction, as indicated by a base item being matched to two target items or vice versa. The key to forming the mappings is constructing the sets of entity correspondences (called Emaps). Mappings are constructed in four steps. First, find all entity justifiers. An entity justifier is a match hypothesis that directly justifies one or more Emaps, in that some of its arguments are entities. Second, associate with each match hypothesis the set of Emaps that it implies. This step is accomplished by propagating Emaps upwards from entity justifiers. The set of Emaps that a match hypothesis supports is simply the union of all Emaps supported by its descendents. Third, create a collection of globally consistent matches, called Gmaps. Call a match hypothesis that is not the descendent of any other match hypothesis a root. Notice that if the Emaps supported by a root are consistent, then the entire structure under it is consistent. In the simplest case, the entire collection of descendents may be collected together to form a globally consistent match. However, if the root is not consistent, then the same procedure is applied recursively to each descendent. The result is a collection of sets of match hypotheses, within which all Emaps are consistent. The final step is to generate all consistent combinations of these sets, keeping those combinations that are maximal. This is done by first combining Gmaps which are part of

Gmap #1: { (GREATER <-> GREATER) (FLOW <-> FLOW) (PRESSURE_beaker <-> TEMPERATURE_coffee) (PRESSURE_vial <-> TEMPERATURE_ice-cube) }
Emaps: { (beaker <-> coffee) (vial <-> ice cube) (water <-> heat) (pipe <-> bar) }
Weight: 0.9800
Candidate Inferences: { (CAUSE GREATER FLOW) }

Gmap #2: { (GREATER <-> GREATER) (DIAMETER_beaker <-> TEMPERATURE_coffee) (DIAMETER_vial <-> TEMPERATURE_ice-cube) }
Emaps:
{ (beaker <-> coffee) (vial <-> ice cube) }
Weight: 0.0195
Candidate Inferences: { }

Gmap #3: { (LIQUID <-> LIQUID) (FLAT-TOP <-> FLAT-TOP) }
Emaps: { (water <-> coffee) }
Weight: 0.0004
Candidate Inferences: { }

(b) Figure 4. Gmap Construction

the same base structure (e.g. the Gmap for the pressure inequality would combine with the Gmap for the flow relation to form a single Gmap) and then making any further combinations which are consistent. Figure 4(a) shows how the initial set of Gmaps is formed, while Figure 4(b) shows the final Gmaps created for the water flow - heat flow analogy. Associated with each Gmap is a (possibly empty) set of candidate inferences. Candidate inferences are base predicates that would fill in structure which is not in the Gmap (and hence not already in the target). In Figure 4(b), for example, Gmap #1 has the top level CAUSE predicate as its sole candidate inference. If the FLOW predicate was not present in the target, then the candidate inferences for a Gmap corresponding to the pressure inequality would be both CAUSE and FLOW. All candidate inferences must be consistent with known target facts. In addition, they must be consistent with the Gmap's structure and supported by some member of it. For example, GREATER-THAN[DIAMETER(coffee), DIAMETER(ice cube)] is not a valid candidate inference for the first Gmap because it does not intersect the existing Gmap structure.

3.3. Step 3: Global Match Selection

Several factors must be taken into account when deciding which Gmap is the best analogy. We have identified three factors as particularly important: (1) The evidence for the individual match hypotheses in the Gmap. (2) The candidate inferences sanctioned by the Gmap. (3) The graph-theoretic structure of the Gmap, e.g., the number and relative size of connected components. Exploring the relative importance of these and other factors is part of the desiderata for SME, hence we have made the criteria programmable.
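The "must intersect the existing Gmap structure" test for candidate inferences can be sketched as follows. The nested-tuple encoding and function names are ours, not SME's; the sketch only illustrates why the DIAMETER inequality fails the test while the top-level CAUSE succeeds.

```python
# Candidate-inference check: a base fact can only be carried over if it
# shares some non-atomic subexpression with facts already mapped by the Gmap.

def subexpressions(fact):
    yield fact
    if not isinstance(fact, str):
        for arg in fact[1:]:
            yield from subexpressions(arg)

def intersects_gmap(base_fact, mapped_base_facts):
    mapped = {s for f in mapped_base_facts
              for s in subexpressions(f) if not isinstance(s, str)}
    return any(s in mapped
               for s in subexpressions(base_fact)
               if not isinstance(s, str) and s != base_fact)

greater = ("GREATER", ("PRESSURE", "beaker"), ("PRESSURE", "vial"))
flow = ("FLOW", "beaker", "vial", "water", "pipe")
cause = ("CAUSE", greater, flow)           # the candidate inference
diam = ("GREATER", ("DIAMETER", "beaker"), ("DIAMETER", "vial"))

gmap_base_side = [greater, flow]           # facts the Gmap already maps

assert intersects_gmap(cause, gmap_base_side)      # CAUSE connects to the Gmap
assert not intersects_gmap(diam, gmap_base_side)   # isolated, so rejected
```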
Gmap evidence rules, whose form is much the same as the other kinds of rules mentioned previously, can provide evidence for a Gmap based on whatever factors are deemed appropriate. To make an appropriate selection, evidence for Gmaps is combined under strict adherence to Dempster's rule for combining probabilities. Thus the set of Gmaps is treated as a set of mutually exclusive choices, and evidence in favor of one Gmap implicitly counts as evidence against the others. Dempster's rule automatically normalizes the weights so that the sum of the weights supporting each Gmap will always be less than or equal to one. In Figure 4(b), the Gmap which maps the PRESSURE relation is believed more than the Gmap which maps the DIAMETER relation. This conclusion is based on two rules. The first rule simply permits the evidence for a match hypothesis in a Gmap to count as evidence for that Gmap. The second rule gives evidence of 0.3 to a Gmap for each candidate inference it sanctions. We suspect that the ability to "tune" the criteria for choosing a Gmap will be important for modeling individual differences in analogical style and a subject's domain knowledge. For example, a conservative strategy might favor taking Gmaps with some candidate inferences but not too many, in order to maximize the probability of being right.

4. EXAMPLES

The Structure-Mapping Engine has been tested on a number of examples drawn from a variety of domains. We discuss a few examples to further demonstrate SME's flexibility and generality. Our first example is taken from Rutherford's analogy between the solar system and the hydrogen atom. The second example demonstrates how the program reasons about complicated descriptions of water flow and heat flow which were generated by a qualitative reasoning program before the inception of SME.

4.1. Solar System - Rutherford Atom Analogy

The Rutherford model of the hydrogen atom was based on the well-understood behavior of the solar system.
Given the descriptions shown in Figure 5, the Structure-Mapping Engine constructed three possible interpretations.

[Figure 5. Solar System - Rutherford Atom Analogy. Solar system: CAUSE[AND[GREATER[MASS(sun), MASS(planet)], ATTRACT(sun, planet)], REVOLVE-AROUND(planet, sun)]; GREATER[TEMPERATURE(sun), TEMPERATURE(planet)]; YELLOW(sun). Rutherford atom: GREATER[MASS(nucleus), MASS(electron)]; ATTRACT(nucleus, electron); REVOLVE-AROUND(electron, nucleus).]

The most preferred mapping (given a weight of 0.99) pairs up the nucleus with the sun and the planet with the electron. This mapping is based on the mass inequality in the solar system playing the same role as the mass inequality in the atom. It sanctions the inference that the inequality, together with the mutual attraction of the nucleus and the electron, causes the electron to revolve around the nucleus. The other major Gmap (given a weight of 0.01) has the same entity correspondences, but is based on the solar system's temperature inequality mapping to the atom's mass inequality. There is much less belief in this interpretation because the temperature and mass predicates are different and because this Gmap does not allow any candidate inferences. The third Gmap is a spurious collection of match hypotheses which imply that the mass of the sun and planet should correspond to the mass of the electron and nucleus, respectively. There is no higher-level structure to support this interpretation and so the final belief is 1x10^-5. This example demonstrates how SME is able to generate all syntactically plausible interpretations of a potentially analogous situation. It also shows that our rules have a preference for matching predicates of the same name (e.g. MASS with MASS), but are able to match predicates with different names (e.g. TEMPERATURE with MASS).

4.2. Water Flow - Heat Flow Analogy

The Structure-Mapping Engine has applications beyond cognitive simulation.
For example, we could use this program in conjunction with a qualitative reasoning program to model the way people use analogy to reason about the physical world. Figure 6(a) shows a domain description for water flow which was used in an actual qualitative reasoning program (Forbus 1984; Forbus & Gentner, 1983). Figure 6(b) shows a greatly reduced version of the same program's description of heat flow. As with the earlier, simplified descriptions of water flow and heat flow, SME was able to make the correct analogical correspondences, creating all of the possible candidate inferences in the process. Interestingly, only one consistent interpretation arose. All other match hypotheses were eliminated because they had no descendants to support their existence. The candidate inferences made were the correct ones, namely that a difference in temperature and an aligned heat path implies an instance of heat flow and that the rate of heat flow between two objects is proportional to the difference in their temperatures.

[Figure 6. Water Flow (a) and Heat Flow (b).]

4.3. Summary

Space limitations forbid a detailed account of our experiments to date; we summarize two here. First, we have analyzed short stories described in predicate calculus to compare mere-appearance (surface) matches with true analogy. Second, we have begun exploring a number of match algorithms. For example, one set of rules focuses on object attributes (mere-appearance matches), thus mimicking how children tend to treat potentially analogous situations (see below). These rules, when run on the water flow - heat flow descriptions of Figure 2, choose the water to coffee correspondence as the best interpretation due to their surface similarity and fail to notice the relational structure which implies that the role of water actually corresponds to the role of heat in the water flow and heat flow situations.
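The contrast between mere-appearance rules and relational rules comes down to swapping the constructor rule, the "load a new set of match rules" flexibility the paper emphasizes. A hypothetical Python sketch (our own fact encoding and rule names, not SME's):

```python
# Two constructor rules over ("FUNCTOR", arg, ...) facts.

def attributes_only(b, t):
    # mere-appearance style: match only one-place predicates (attributes)
    # with the same functor, ignoring relational structure
    return len(b) == 2 and len(t) == 2 and b[0] == t[0]

def relations_too(b, t):
    # relational style: any same-functor predicates may match
    return b[0] == t[0]

base = [("LIQUID", "water"), ("FLOW", "beaker", "vial", "water", "pipe")]
target = [("LIQUID", "coffee"), ("FLOW", "coffee", "ice-cube", "heat", "bar")]

def matches(rule):
    return [(b, t) for b in base for t in target if rule(b, t)]

assert len(matches(attributes_only)) == 1   # only LIQUID <-> LIQUID survives
assert len(matches(relations_too)) == 2     # FLOW <-> FLOW is also proposed
```

Under the attribute-only rules the surface pairing (water with coffee, both liquids) dominates, mirroring the children's-rules behavior the summary describes.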
CONCLUSIONS

SME has significant advantages over more traditional matching algorithms. Methodologically, the advantage of producing all possible mappings is that one can easily see syntactically consistent alternatives to the best match. Yet SME's matching algorithm is very efficient, avoiding the extensive backtracking normally associated with pattern-matching systems.* On our large water flow - heat flow example, the program took only 0.7 seconds to perform the entire match on a Symbolics 3640. This includes everything from the construction of local match hypotheses to the gathering of candidate inferences and Gmap construction. The smaller examples average 0.4 seconds. The current program needs to be expanded to properly handle predicates which are commutative (e.g. SUM) or take a variable number of arguments (e.g. AND). In addition, we would like to add the ability to introduce new entities when required by the analogical mapping through the use of Skolem functions.

The results of SME's operation on the examples above provide suggestive evidence concerning a currently debated issue in analogy. The question concerns how much a purely syntactic account of analogy can do. Although many researchers have adopted variants of the systematicity principle, often specific domain knowledge or pragmatic information is used as well. For example, Carbonell (1981, 1983) focuses on plans and goals as the relevant higher-order relations for analogical mapping. Winston's (1982) system uses causal relations in its importance-guided matching algorithm. Winston [personal communication, November 1985] has also investigated goal-driven importance algorithms. The extreme view is taken by Holyoak (1985), whose account of analogical mapping relies solely on the relevance of predicates to the current plan.
Among the claims of these researchers are (1) purely syntactic information is insufficient to guide analogical mapping and (2) even if it were sufficient, such a system would be inefficient (e.g. Burstein, 1986, p. 358). The evidence from SME so far suggests otherwise, since it generates intuitively plausible answers and does so rapidly. We intend to explore this issue more fully by using a variety of examples to see if and when the purely syntactic approach breaks down. Clearly content knowledge must be invoked at some point to evaluate whether the candidate inferences from a given analogy are appropriate. This suggests a model which uses a context-sensitive, expectation-driven system to evaluate the output of SME. This extension is compatible with the combination models proposed by Burstein (1983) and Kedar-Cabelli (1985).

In addition to tests of the basic algorithm, we plan several cognitive simulation studies of analogical reasoning and learning. We mention only one here. Psychological research shows a marked developmental shift in analogical processing. Young children rely on surface information in analogical mapping; at older ages, systematic mappings are preferred (Gentner & Stuart, 1983; Gentner & Toupin, in press; Holyoak, Juin & Billman, 1985; Vosniadou, 1985). Further, there is some evidence that a similar shift from surface to systematic mappings occurs in the novice-expert transition in adults (Chi, Glaser & Reese, 1982; Larkin, 1985; Novick, 1985; Reed, 1985; and Ross, 1984). In both cases there are two very different interpretations for this analogical shift: (1) acquisition of knowledge; or (2) a change in the analogy algorithm. The knowledge-based interpretation is that children and novices lack the necessary higher-order relational structures to guide their analogizing. The second explanation is that the algorithm for analogical mapping changes, either due to maturation or learning.
In human learning it is difficult to decide this issue, since exposure to domain knowledge and practice in analogy and reasoning tend to occur simultaneously. SME gives us a unique opportunity to vary independently the analogy algorithm and the amount and kind of domain knowledge. For example, we can compare identical evaluation algorithms operating on novice versus expert representations, or we can compare different analogy evaluation rules operating on the same representation (see summary above). The performance of SME under these conditions can be compared with novice versus expert human performance.

* While we have not yet explored this possibility, it appears that a variant of this matching algorithm could be very useful for connectionist architectures.

ACKNOWLEDGEMENTS

The authors wish to thank Janice Skorstad for invaluable assistance in encoding domain models.

REFERENCES

Anderson, J., The Architecture of Cognition, Harvard University Press, Cambridge, Mass., 1983.

Buckley, S., Sun Up to Sun Down, McGraw-Hill, New York, 1979.

Burstein, M. H., Concept formation by incremental analogical reasoning and debugging. Machine Learning: An Artificial Intelligence Approach Vol. II, R. S. Michalski, J. G. Carbonell, and T. M. Mitchell (editors), Morgan Kaufmann, 1986.

Carbonell, J. G., Learning by analogy: Formulating and generalizing plans from past experience. Machine Learning, Michalski, R. S., Carbonell, J., and Mitchell, T. (Eds.), Tioga Publishing Company, Palo Alto, California, 1983.

Carbonell, J. G., Derivational analogy in problem solving and knowledge acquisition. Proceedings of the Second International Machine Learning Workshop, University of Illinois, Monticello, Illinois, June, 1983.

Chi, M. T. H., Glaser, R., & Reese, E., Expertise in problem solving. In R. Sternberg (Ed.), Advances in the psychology of human intelligence (Vol. 1). Hillsdale, N.J., Erlbaum, 1982.

Forbus, K. D., Qualitative Process Theory.
MIT Artificial Intelligence Laboratory Technical Report No. 789, July, 1984.

Forbus, K. D., and D. Gentner, Learning Physical Domains: Towards a Theoretical Framework. Machine Learning: An Artificial Intelligence Approach Vol. II, R. S. Michalski, J. G. Carbonell, and T. M. Mitchell (editors), Morgan Kaufmann, 1986.

Gentner, D., The structure of analogical models in science. BBN Tech. Report No. 4451, Cambridge, MA, Bolt Beranek and Newman Inc., 1980.

Gentner, D., Are scientific analogies metaphors? In Miall, D., Metaphor: Problems and Perspectives, Harvester Press, Ltd., Brighton, England, 1982.

Gentner, D., Structure-mapping: A theoretical framework for analogy. Cognitive Science 7(2), 1983.

Gentner, D., & Gentner, D. R., Flowing waters or teeming crowds: Mental models of electricity. In D. Gentner & A. L. Stevens (Eds.), Mental Models, Erlbaum Associates, Hillsdale, N.J., 1983.

Gentner, D., & Landers, R., Analogical reminding: A good match is hard to find. In Proceedings of the International Conference on Systems, Man and Cybernetics, Tucson, Arizona, 1985.

Gentner, D., & Stuart, P., Metaphor as structure-mapping: What develops. Bolt Beranek and Newman Technical Report No. 5479, Cambridge, Massachusetts, 1983.

Gentner, D., & Toupin, C. (in press), Systematicity and surface similarity in the development of analogy. Cognitive Science.

Ginsberg, M. L., Non-Monotonic Reasoning Using Dempster's Rule. Proceedings AAAI, August, 1984.

Hofstadter, D. R., The Copycat project: An experiment in nondeterministic and creative analogies. M.I.T. A.I. Laboratory Memo 755, Cambridge, Mass.: M.I.T., 1984.

Holyoak, K. J., The pragmatics of analogical transfer. In G. H. Bower (Ed.), The Psychology of Learning and Motivation, Vol. 1, New York: Academic Press, 1984.

Holyoak, K. J., Juin, E. N. & Billman, D. O. (in press), Development of analogical problem-solving skill. Child Development.

Kedar-Cabelli, S. (1985), Purpose-directed analogy.
Proceedings of the Seventh Annual Conference of the Cognitive Science Society, Irvine, CA.

Larkin, J. H., Problem representations in physics. In D. Gentner & A. L. Stevens (Eds.), Mental Models, Hillsdale, N.J., Lawrence Erlbaum Associates, 1983.

Novick, L. R., Transfer and expertise in problem solving: A conceptual analysis. Stanford University: Unpublished manuscript, 1985.

Prade, H., A Synthetic View of Approximate Reasoning Techniques. Proceedings of the 8th International Joint Conference on Artificial Intelligence, 1983.

Reed, S. K., A structure-mapping model for word problems. Paper presented at the meeting of the Psychonomic Society, Boston, 1985.

Ross, B. H., Remindings and their effects in learning a cognitive skill. Cognitive Psychology, 16, 371-416, 1984.

Rumelhart, D. E., & Norman, D. A., Analogical processes in learning. In J. R. Anderson (Ed.), Cognitive Skills and Their Acquisition, Hillsdale, N.J.: Erlbaum, 1981.

Shafer, G., A Mathematical Theory of Evidence, Princeton University Press, Princeton, New Jersey, 1976.

Van Lehn, K. & Brown, J. S., Planning nets: A representation for formalizing analogies and semantic models of procedural skills. In R. E. Snow, P. A. Federico & W. E. Montague (Eds.), Aptitude, Learning and Instruction: Cognitive Process Analyses, Hillsdale, N.J.: Erlbaum, 1980.

Van Lehn, K., Felicity Conditions for Human Skill Acquisition: Validating an AI-based Theory. Xerox Palo Alto Research Center Technical Report CIS-21, 1983.

Vosniadou, S., On the development of metaphoric competence. University of Illinois: Manuscript submitted for publication, 1985.

Winston, P. H., Learning and reasoning by analogy. Communications of the ACM, 23(12), 1980.

Winston, P. H., Learning new principles from precedents and exercises. Artificial Intelligence, 19, 321-350, 1982.
SNePS CONSIDERED AS A FULLY INTENSIONAL PROPOSITIONAL SEMANTIC NETWORK

Stuart C. Shapiro and William J. Rapaport, Department of Computer Science, State University of New York at Buffalo, Buffalo, NY 14260, {rapaport | shapiro}%buffalo@csnet-relay

ABSTRACT

We present a formal syntax and semantics for SNePS considered as the (modeled) mind of a cognitive agent. The semantics is based on a Meinongian theory of the intensional objects of thought that is appropriate for AI considered as "computational philosophy" or "computational psychology".

1. INTRODUCTION.

We present a formal syntax and semantics for the SNePS Semantic Network Processing System (Shapiro 1979), based on a Meinongian theory of the intensional objects of thought (Rapaport 1985a). Such a theory avoids possible worlds and is appropriate for AI considered as "computational philosophy" (AI as the study of how intelligence is possible) or "computational psychology" (AI with the goal of writing programs as models of human cognitive behavior). Recently, SNePS has been used for a variety of AI research and applications projects. These are described in Shapiro & Rapaport 1985, of which the present paper is a much shortened version. Here, we use SNePS to model (or construct) the mind of a cognitive agent, referred to as CASSIE (the Cognitive Agent of the SNePS System, an Intelligent Entity).

2. INTENSIONAL KNOWLEDGE REPRESENTATION.

SNePS represents propositions about entities having properties and standing in relations. Nodes represent the propositions, entities, properties, and relations, while the arcs represent structural links between these. SNePS nodes might represent extensional entities, whose identity conditions do not depend on their manner of representation. Two extensional entities are equivalent (for some purpose) iff they are identical (i.e., iff "they" are really one entity, not two).
Although SNePS can be used to represent extensional entities in the world, we believe that it must represent intensional entities: entities whose identity conditions do depend on their manner of representation. Two intensional entities might be equivalent (for some purpose) without being identical (i.e., they might really be two, not one). Only if one wants to represent the relations between a mind and the world would it also have to represent extensional entities (Rapaport 1978, McCarthy 1979). If SNePS is used just to represent a mind, i.e., a mind's model of the world, then it does not need to represent any extensional objects. It can then be used either to model the mind of a particular cognitive agent or to build such a mind, i.e., to be a cognitive agent itself (Maida & Shapiro 1982).

There have been a number of arguments presented in both the AI and philosophical literature in the past few years for the need for intensional entities (Castañeda 1974, Woods 1975, Maida & Shapiro 1982, Rapaport 1985a, Brachman 1977, Routley 1979, Parsons 1980). Among them, the following considerations are especially significant:

Principle of Fine-Grained Representation: The objects of thought (i.e., intentional objects) are intensional: a mind can have two or more objects of thought that correspond to only one extensional object. To take the classic example, the Morning Star and the Evening Star might be distinct objects of thought, yet there is only one extensional object (a certain astronomical body) corresponding to them.

* This research was supported in part by SUNY Buffalo Research Development Fund grants #150-9216-F, 150-8537-G, and the National Science Foundation under Grant No. IST-8504713 (Rapaport) and in part by the Air Force Systems Command, Rome Air Development Center, Griffiss Air Force Base, New York 13441-5700, and the Air Force Office of Scientific Research, Bolling AFB DC 20332 under contract No. F30602-85-C-0008 (Shapiro).
Principle of Displacement: Cognitive agents can think and talk about non-existents: a mind can have an object of thought that corresponds to no extensional object. Again to take several classic examples, cognitive agents can think and talk about fictional objects such as Santa Claus, possible but non-existing objects such as a golden mountain, impossible objects such as a round square, and possible but not-yet-proven-to-exist objects such as theoretical entities (e.g., black holes).

If nodes only represent intensions (and extensional entities are not represented in the network), how do they link up to the external, extensional world? One answer is by means of a LEX arc (see (Syn.1) and (Sem.1), below): the nodes at the head of the LEX arc are our (the user's) interpretation of the node at its tail. The network without the LEX arcs and their head-nodes displays the structure of CASSIE's mind (Carnap 1928, Sect. 14; for other answers, see Maida & Shapiro 1982, Shapiro & Rapaport 1985).

3. DESCRIPTION OF SNePS.

SNePS satisfies the Uniqueness Principle: there is a one-to-one correspondence between nodes and represented concepts. This principle guarantees that nodes represent intensional objects and that nodes will be shared whenever possible. Nodes that only have arcs pointing to them are considered to be unstructured or atomic. They include: (1) sensory nodes, which, when SNePS is being used to model a mind, represent interfaces with the external world (in the examples that follow, they represent utterances); (2) base nodes, which represent individual concepts and properties; and (3) variable nodes, which represent arbitrary individuals (Fine 1983) or arbitrary propositions. Molecular nodes, which have arcs emanating from them, include: (1) structured individual nodes,
which represent structured individual concepts or properties (i.e., concepts and properties represented in such a way that their internal structure is exhibited); and (2) structured proposition nodes, which represent propositions; those with no incoming arcs represent beliefs of the system. (Note that structured proposition nodes can also be considered to be structured individuals.) Proposition nodes are either atomic (representing atomic propositions) or are rule nodes. Rule nodes represent deduction rules and are used for node-based deductive inference (Shapiro 1978; Shapiro & McKay 1980; McKay & Shapiro 1981; Shapiro, Martins, & McKay 1982). For each of the three categories of molecular nodes (structured individuals, atomic propositions, and rules), there are constant nodes of that category and pattern nodes of that category representing arbitrary entities of that category.

From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved.

There are a few built-in arc labels, used mostly for rule nodes. Paths of arcs can be defined, allowing for path-based inference, including property inheritance within generalization hierarchies (see below; Shapiro 1978, Srihari 1981). All other arc labels are defined by the user, typically at the beginning of an interaction with SNePS.

3.1. CASSIE: A Model of a Mind.

Since most arcs are user-defined, users are obligated to provide a formal syntax and semantics for their SNePS networks. We shall describe the way in which we have been using SNePS to build CASSIE. Using Brachman's (1979) terminology, insofar as SNePS is a semantic network system at the logical level and can thus be used to define one at the epistemological or conceptual level, CASSIE is SNePS being used at a conceptual level. The nodes represent the objects of CASSIE's thoughts: the things she thinks about, the properties and relations with which she characterizes them, her beliefs, etc.
(Maida & Shapiro 1982, Rapaport 1985a). The Principle of Displacement says that a cognitive agent is able to think about virtually anything, including fictional objects, possible but non-existing objects, and impossible objects. Any theory that would account for this requires a non-standard logic, and its semantics cannot be limited to merely possible worlds. Theories based on Alexius Meinong's Theory of Objects are of precisely this kind. Meinong held that psychological experiences consist in part of a psychological act (such as thinking, believing, wishing, etc.) and the object to which the act is directed (e.g., the object that is thought about or the proposition that is believed). Two kinds of Meinongian objects of thought are relevant for us:

(1) The objectum, or object of "simple" thoughts: Santa Claus is the objectum of John's act of thinking of Santa Claus. The meaning of a noun phrase is an objectum.

(2) The objective, or object of belief, knowledge, etc.: that Santa Claus is thin is the objective of John's act of believing that Santa Claus is thin. Objectives are like propositions in that they are the meanings of sentences and other sentential structures.

Note that objecta need not exist and that objectives need not be true. (Cf. Meinong 1904; Rapaport 1978, 1981; Castañeda 1974, 1975; Routley 1979; Parsons 1980; Lambert 1983; Zalta 1983.)

This is, perhaps, somewhat arcane terminology for what might seem like AI common sense. But without an underlying theory, such as Meinong's, there is no way to be sure if common sense can be trusted. It is important to note that not only are all represented things intensional, but that they are all objects of CASSIE's mental acts; i.e., they are all in CASSIE's mind (her "belief space"): they are all intentional. Thus, even if CASSIE represents the beliefs of someone else (e.g., John's belief that Lucy is rich, as in the conversation in Sect.
3.2), the objects that she represents as being in that person's mind (as being in his "belief space") are actually CASSIE's representations of those objects; i.e., they are in CASSIE's mind.

3.2. A Conversation with CASSIE.

Before giving the syntax and semantics of the case-frames employed in representing CASSIE's "mind", we present a conversation we had with her, showing the network structure as it is built, i.e., showing the structure of CASSIE's mind as she is given information and as she infers new information. An ATN parser/generator (Shapiro 1982) was used to parse the English input into SNePS and the SNePS structures into English. User input is on lines with the :-prompt; CASSIE's output is on the lines that follow. Comments are enclosed in brackets. A fragment of the full network showing CASSIE's state of mind at the end of the conversation is shown in Fig. 1.

: Young Lucy petted a yellow dog
I understand that young Lucy petted a yellow dog
[CASSIE is told something, which she now believes. At this point, her entire belief structure consists of nodes b1, m1-m13, and the corresponding sensory nodes. The node labeled "now" represents the current time, so the petting is clearly represented as being in the past. CASSIE's response is "I understand that" appended to her English description of the proposition just entered.]

: What is yellow
a dog is yellow
[This response shows that CASSIE actually has some beliefs; she did not just parrot back the above sentence.]

: Dogs are animals
I understand that dogs are animals
[CASSIE is told a small section of a class hierarchy.]

: Who petted an animal
young Lucy petted a yellow dog
[CASSIE can answer the question using the class hierarchy, because, prior to the conversation, an inheritance rule was given to SNePS.
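As a rough illustration of what such an inheritance rule buys (a plain-Python sketch of ours, not SNePS's actual path-based inference machinery; the dictionaries stand in for CLASS and SUBCLASS/SUPERCLASS arcs):

```python
# Hypothetical sketch: class membership is inherited along an explicit CLASS
# link followed by zero or more SUBCLASS -> SUPERCLASS steps up the hierarchy.

superclass_of = {"dog": "animal", "girl": "person"}   # SUBCLASS/SUPERCLASS arcs
class_of = {"Rover": "dog", "Lucy": "girl"}           # explicit CLASS arcs

def is_member(individual, cls):
    """Follow the CLASS arc, then climb superclass links zero or more times."""
    c = class_of.get(individual)
    while c is not None:
        if c == cls:
            return True
        c = superclass_of.get(c)
    return False

assert is_member("Rover", "animal")   # derived, not stored: no redundant CLASS arc
assert is_member("Lucy", "person")
```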
The rule says that the CLASS arc is implied by the path consisting of a CLASS arc followed by zero or more occurrences of the two-arc path consisting of the converse SUBCLASS arc followed by the SUPERCLASS arc (Shapiro 1978, Srihari 1981). The dog was called "a yellow dog" rather than "a yellow animal" because the redundant CLASS arc is not built. CASSIE's new beliefs about the dog's classification and color consist of nodes m9-m13, m16-m17, and the corresponding sensory nodes.]

: Lucy is sweet
I understand that young Lucy is sweet
[CASSIE's response shows that she identifies this Lucy with the previous Lucy.]

: Lucy is a girl
I understand that sweet young Lucy is a girl
[The beginning of a class hierarchy for Lucy. Notice that all the adjectival properties of Lucy are mentioned.]

: Girls are people
I understand that girls are people
[More of the class hierarchy is given.]

: What was petted by a person
a yellow dog was petted by sweet young Lucy
[The proposition is retrieved using the CLASS inheritance rule.]

: The dog is named Rover
I understand that Rover is a yellow dog
["The dog" refers to the only dog CASSIE knows about, who is now given a name.]

: What was petted by a person
yellow Rover was petted by sweet young Lucy
[This question was asked before but is answered differently this time, because the dog now has a name, and CASSIE prefers to describe an individual by its name when it has one.]

: John believes that Lucy is rich
I understand that John believes that Lucy is rich
[In our current development of CASSIE, she interprets 'believes that' contexts to be de dicto, so she assumes that the Lucy that John has in mind is a different one from the Lucy that she knows. CASSIE's beliefs about the two Lucies are added to the network (nodes m40, m44).]

COGNITIVE MODELLING AND EDUCATION / 279

: Who is rich
I don't know.
[CASSIE knows no one who is rich. She only believes that John believes that someone (whom he believes to be named 'Lucy') is rich. The answer is 'I don't know', rather than 'no one is rich', because CASSIE doesn't use the closed-world hypothesis.]

: Who is sweet
young Lucy is sweet
[This question demonstrates that CASSIE is able to answer a "who is <property>" question when she has relevant beliefs.]

: John believes that Lucy is old
I understand that John believes that rich Lucy is old
[CASSIE assumes that John knows a different Lucy than she knows, but she assumes that all John's beliefs about "Lucy" are about the same Lucy.]

: John is a boy
I understand that John is a boy
[This and the next two inputs are given to establish more of the class hierarchy and to make it clear that when CASSIE answers the last question of this session, she is doing both path-based reasoning and node-based reasoning at the same time.]

: Boys are people
I understand that boys are people

: Dogs are pets
I understand that dogs are pets

: For every p and d if p is a person and d is a pet then p loves d
I understand that for every d and p, if p is a person and d is a pet then p loves d
[This node-based rule fits into the class hierarchy as node m56. This is, we believe, equivalent to the integrated TBox/ABox mechanism proposed for KRYPTON (Brachman et al. 1983, Brachman et al. 1985).]

: Who loves a pet
sweet young Lucy loves yellow Rover and John loves yellow Rover
[The question was answered using path-based inferencing to deduce that Lucy and John are people and that Rover is a pet, and node-based inferencing to conclude that, therefore, Lucy and John love Rover.]

3.3. Syntax and Semantics of SNePS.

In this section, we give the syntax and semantics of the nodes and arcs used in the interaction. What we present here is our current model; we make no claims to completeness of the representational scheme. We begin with a few rough definitions. (Cf. Shapiro 1979, Sect. 2.1, for more precise ones.)

(Def. 1) A node dominates another node if there is a path of directed arcs from the first node to the second node.

(Def. 2) A pattern node is a node that dominates a variable node.

(Def. 3) An individual node is either a base node, a variable node, or a structured constant or pattern individual node.

(Def. 4) A proposition node is either a structured proposition node or an atomic variable node representing an arbitrary proposition.

(Syn.1) If "w" is a(n English) word and "i" is an identifier not previously used, then [network diagram omitted: i with a LEX arc to w] is a network, w is a sensory node, and i is a structured individual node.

(Sem.1) i is the objectum corresponding to the utterance of w.

(Syn.2) If either "t1" and "t2" are identifiers not previously used, or "t1" is an identifier not previously used and t2 is a temporal node, then [network diagram omitted: t1 with a BEFORE arc to t2] is a network and t1 and t2 are temporal nodes, i.e., individual nodes representing times.

(Sem.2) t1 and t2 are objecta corresponding to two times, the former occurring before the latter.

(Syn.3) If i and j are individual nodes, and "m" is an identifier not previously used, then [network diagram omitted: m with EQUIV arcs to i and to j] is a network and m is a structured proposition node.

(Sem.3) m is the objective corresponding to the proposition that objecta i and j (are believed by CASSIE to) correspond to the same actual object. (This is not used in the conversation, but is needed for fully intensional representational systems; cf. Rapaport 1978, 1984b; Castañeda 1974; Maida & Shapiro 1982.)

(Syn.4) If i and j are individual nodes and "m" is an identifier not previously used, then [network diagram omitted: m with an OBJECT arc to i and a PROPERTY arc to j] is a network and m is a structured proposition node.

(Sem.4) m is the objective corresponding to the proposition that i has the property j.

(Syn.5) If i and j are individual nodes and "m" is an identifier not previously used, then [network diagram omitted: m with an OBJECT arc to i and a PROPER-NAME arc to j] is a network and m is a structured proposition node.

(Sem.5) m is the objective corresponding to the proposition that objectum i's proper name is j. (j is the objectum that is i's proper name; its expression in English is represented by a node at the head of a LEX-arc emanating from j.)

(Syn.6) If i and j are individual nodes and "m" is an identifier not previously used, then [network diagram omitted: m with a MEMBER arc to i and a CLASS arc to j] is a network and m is a structured proposition node.

(Sem.6) m is the objective corresponding to the proposition that i is a (member of class) j.

(Syn.7) If i and j are individual nodes and "m" is an identifier not previously used, then [network diagram omitted: m with a SUBCLASS arc to i and a SUPERCLASS arc to j] is a network and m is a structured proposition node.

(Sem.7) m is the objective corresponding to the proposition that (the class of) i's are (a subclass of the class of) j's.

(Syn.8) If i1, i2, i3 are individual nodes, t1, t2 are temporal nodes, and "m" is an identifier not previously used, then [network diagram omitted: m with arcs to i1, i2, i3 and STIME and ETIME arcs to t1 and t2] is a network and m is a structured proposition node. (Nodes m40, m34 are examples of this for the mental act of believing; cf. Rapaport 1984b, Rapaport & Shapiro 1984. The ETIME and STIME arcs are optional and can be part of any proposition node; they are a provisional technique for handling temporal information; cf. Shapiro & Rapaport 1985.)

(Sem.8) m is the objective corresponding to the proposition that agent i1 performs act i2 with respect to i3 starting at time t1 and ending at time t2, where t1 is before t2.

Rule nodes have been described more fully in Shapiro 1979, and a full syntax and semantics for them is presented in Shapiro and Rapaport 1985. Here, we present the syntax and semantics only for the node-based inference rule used in the conversation with CASSIE (Fig. 2, node m56):

(Syn.9) If a1, ..., an, c1, ..., cj, and d1, ..., dk are proposition nodes (n, j, k ≥ 0), and "r" is an identifier not previously used, then [network diagram omitted: r linked to the ai, the ci, and the di] is a network, and r is a rule node.

(Sem.9) r is the objective corresponding to the proposition that the conjunction of the propositions a1, ..., an relevantly implies each cl (1 ≤ l ≤ j) and relevantly implies each dl (1 ≤ l ≤ k) for which there is not a better reason to believe it is false. (The dl are default consequences: each is implied only if it is neither the case that CASSIE already believes not-dl nor that not-dl follows from non-default rules.)

(Syn.10) If r is a rule node, and r dominates variable nodes v1, ..., vn, and, in addition, arcs labeled "AVB" go from r to each vi, then r is a quantified rule node.

(Sem.10) r is the objective corresponding to the proposition that the rule that would be expressed by r without the AVB arcs holds after replacing each vi by any object in its range.

4. SNePS AND CASSIE AS SEMANTIC NETWORKS.

We conclude by looking at SNePS from the perspective of Brachman's discussions of structured inheritance networks and hierarchies of semantic-network formalisms (Brachman 1977, 1979). Brachman offers six criteria for semantic networks:

A semantic network must have a uniform notation. SNePS provides some uniform notation with its built-in arc labels for rules, and it provides a uniform procedure for users to choose their own notation.

A semantic network must have an algorithm for encoding information. This is provided for by the interfaces to SNePS; e.g., the parser component of our ATN parser-generator inputs English sentences and outputs SNePS networks.

A semantic network must have an "assimilation" mechanism for building new information in terms of stored information.
SNePS provides for this by the Uniqueness Principle, which enforces node sharing during network building. The assimilation is demonstrated by the generator component of our ATN parser-generator, which takes SNePS nodes as input and produces English output expressing those nodes: in our conversation with CASSIE, the node built to represent the new fact, 'Lucy is sweet', was expressed in terms of the already existing node for Lucy (who had previously been described as young) by 'young Lucy is sweet'.

A semantic network should be neutral with respect to network formalisms at higher levels in the Brachman hierarchy. SNePS is a semantic network at the "logical" level, and CASSIE is (perhaps) at the "conceptual" level. SNePS is neutral in the relevant sense; it is not so clear whether CASSIE is. But a more important issue than neutrality is the reasons why one formalism should be chosen over another. Several possible criteria that a researcher might consider are: efficiency (including the ease of interfacing with other modules; e.g., our ATN parser-generator has been designed for direct interfacing with SNePS), psychological adequacy (irrelevant for SNePS, but precisely what CASSIE is being designed for), ontological adequacy (discussed below), logical adequacy (guaranteed for SNePS, because of its inference package), and natural-language adequacy (a feature of SNePS's interface with the ATN grammar).

A semantic network should be adequate for any higher-level network formalism. SNePS meets this nicely: KL-ONE can be implemented in SNePS (Tranchell 1982).

A semantic network should have a semantics. We presented that in Sect. 3.3. But there are at least two sorts of semantics. SNePS nodes have a meaning within the system in terms of their links to other nodes; they have a meaning for users as provided by nodes at the heads of LEX arcs.
Arcs, on the other hand, only have meaning within the system, provided by node- and path-based inference rules (which can be thought of as procedures that operate on the arcs). In both cases, there is an "internal", system's semantics that is holistic and structural: the meanings of the nodes and arcs are not given in isolation, but in terms of the entire network. This sort of "syntactic" semantics differs from a semantics that provides links to an external interpreting system, such as a user or the "world", i.e., links between the network's way of representing information and the user's way. It is the latter sort of semantics that we provided for CASSIE with respect to an ontology of Meinongian objects, which are not to be taken as representing things in the world. CASSIE's ontology is an epistemological ontology (Rapaport 1985/1986) of the purely intensional items that enable a cognitive agent to have beliefs about the world. It is a theory of what there must be in order for a cognitive agent to have beliefs about what there is.

REFERENCES

(1) Brachman, R. J. (1977), "What's in a Concept: Structural Foundations for Semantic Networks," Int. J. Man-Machine Studies 9: 127-52.
(2) Brachman, R. J. (1979), "On the Epistemological Status of Semantic Networks," in N. V. Findler (ed.), Associative Networks (New York: Academic Press): 3-50; reprinted in (3): 191-215.
(3) Brachman, R. J., & Levesque, H. J. (1985), Readings in Knowledge Representation (Los Altos, CA: Morgan Kaufmann).
(4) Brachman, R. J.; Fikes, R. E.; & Levesque, H. J. (1983), "KRYPTON: Integrating Terminology and Assertion," Proc. AAAI-83: 31-35.
(5) Brachman, R. J.; Gilbert, V. P.; & Levesque, H. J. (1985), "An Essential Hybrid Reasoning System: Knowledge and Symbol Level Accounts of KRYPTON," Proc. IJCAI-85: 532-39.
(6) Carnap, R. (1928), The Logical Structure of the World, R. A. George (trans.) (Berkeley: U. California Press, 1967).
(7) Castañeda, H.-N. (1974), "Thinking and the Structure of the World," Philosophia 4: 3-40.
(8) Castañeda, H.-N. (1975), Thinking and Doing (Dordrecht: D. Reidel).
(9) Fine, K. (1983), "A Defence of Arbitrary Objects," Proc. Aristotelian Soc., Supp. Vol. 58: 55-77.
(10) Lambert, K. (1983), Meinong and the Principle of Independence (Cambridge, Eng.: Cambridge U. Press).
(11) Maida, A. S., & Shapiro, S. C. (1982), "Intensional Concepts in Propositional Semantic Networks," Cognitive Science 6: 291-330; reprinted in (3): 169-89.
(12) McCarthy, J. (1979), "First Order Theories of Individual Concepts and Propositions," in J. E. Hayes, D. Michie, and L. Mikulich (eds.), Machine Intelligence 9 (Chichester, Eng.: Ellis Horwood): 129-47; reprinted in (3): 523-33.
(13) McKay, D. P., & Shapiro, S. C. (1981), "Using Active Connection Graphs for Reasoning with Recursive Rules," Proc. IJCAI-81: 368-74.
(14) Meinong, A. (1904), "Über Gegenstandstheorie," in R. Haller (ed.), Alexius Meinong Gesamtausgabe, Vol. II (Graz, Austria: Akademische Druck- u. Verlagsanstalt, 1971): 481-535; Eng. trans. ("The Theory of Objects") by I. Levi et al., in R. M. Chisholm (ed.), Realism and the Background of Phenomenology (New York: Free Press, 1960): 76-117.
(15) Parsons, T. (1980), Nonexistent Objects (New Haven: Yale U. Press).
(16) Rapaport, W. J. (1978), "Meinongian Theories and a Russellian Paradox," Noûs 12: 153-80.
(17) Rapaport, W. J. (1981), "How to Make the World Fit Our Language: An Essay in Meinongian Semantics," Grazer Phil. Studien 14: 1-21.
(18) Rapaport, W. J. (1984b), "Belief Representation and Quasi-Indicators," Tech. Rep. 215 (SUNY Buffalo Dept. of Computer Science).
(19) Rapaport, W. J. (1985a), "Meinongian Semantics for Propositional Semantic Networks," Proc. 23rd Annual Meeting Assoc. Computational Linguistics (U. Chicago): 43-48.
(20) Rapaport, W. J.
(1985/1986), "Non-Existent Objects and Epistemological Ontology," Grazer Phil. Studien, Vol. 25/26.
(21) Rapaport, W. J., & Shapiro, S. C. (1984), "Quasi-Indexical Reference in Propositional Semantic Networks," Proc. 10th Int. Conf. Comp. Ling. (COLING-84) (Stanford U.): 65-70.
(22) Routley, R. (1979), Exploring Meinong's Jungle and Beyond (Canberra: Australian Nat'l. U., Dept. of Philosophy).
(23) Shapiro, S. C. (1978), "Path-Based and Node-Based Inference in Semantic Networks," in D. Waltz (ed.), TINLAP-2: Theoretical Issues in Natural Language Processing (New York: ACM): 219-25.
(24) Shapiro, S. C. (1979), "The SNePS Semantic Network Processing System," in N. V. Findler (ed.), Associative Networks (New York: Academic Press): 179-203.
(25) Shapiro, S. C. (1982), "Generalized Augmented Transition Network Grammars for Generation from Semantic Networks," American J. Computational Linguistics 8: 12-25.
(26) Shapiro, S. C., & McKay, D. P. (1980), "Inference with Recursive Rules," Proc. AAAI-80: 151-53.
(27) Shapiro, S. C., & Rapaport, W. J. (1985), "SNePS Considered as a Fully Intensional Semantic Network," in G. McCalla & N. Cercone (eds.), Knowledge Representation (Berlin: Springer-Verlag, forthcoming); also, Tech. Rep. 85-15 (SUNY Buffalo Dept. of Computer Science, 1985).
(28) Shapiro, S. C.; Martins, J.; & McKay, D. P. (1982), "Bi-Directional Inference," Proc. 4th Annual Conf. Cognitive Science Soc. (U. Michigan): 90-93.
(29) Shapiro, S. C.; Srihari, S. N.; Geller, J.; & Taie, M.-R. (1986), "A Fault Diagnosis System Based on an Integrated Knowledge Base," IEEE Software 3.2 (March 1986): 48-49.
(30) Srihari, R. K. (1981), "Combining Path-Based and Node-Based Inference in SNePS," Tech. Rep. 183 (SUNY Buffalo Dept. of Computer Science).
(31) Tranchell, L. M. (1982), "A SNePS Implementation of KL-ONE," Tech. Rep. 198 (SUNY Buffalo Dept. of Computer Science).
(32) Woods, W. A. (1975), "What's in a Link: Foundations for Semantic Networks," in D.
G. Bobrow & A. M. Collins (eds.), Representation and Understanding (New York: Academic Press): 35-82; reprinted in (3): 217-41.
(33) Zalta, E. N. (1983), Abstract Objects (Dordrecht: D. Reidel).
A QUANTITATIVE ANALYSIS OF ANALOGY BY SIMILARITY

Stuart J. Russell
Department of Computer Science
Stanford University
Stanford, CA 94305

ABSTRACT

In the absence of specific relevance information, the traditional assumption in the study of analogy has been that the most similar analogue is most likely to provide the correct solutions; a justification for this assumption has been lacking, as has any relation between the similarity measure used and the probability of correctness of the analogy. We show how a statistical analysis can be performed to give the probability that a given source will provide a successful analogy, using only the assumption that there are some relevant features somewhere in the source and target descriptions. The predicted variation of the probability with source-target similarity corresponds closely to empirical analogy data obtained by Shepard for human and animal subjects for a wide variety of domains. The utility of analogy by similarity seems to rest on some very fundamental assumptions about the nature of our representations.*

I INTRODUCTION

Analogical reasoning is usually defined as the argument from known similarities between two things to the existence of further similarities. Formally, we can define it as any inference following the schema

    P(S, W) ∧ P(T, W) ∧ Q(S, Y) ⇒ Q(T, Y)

(see [Russell 86b]), where T is the target, about which we wish to know some fact Q (the query); S is the source, the analogue from which we will obtain the information to satisfy Q by analogy; P represents the known similarities given by the shared attribute values W. P and Q can be arbitrary predicate calculus formulae. An innumerable number of inferences have this form but are plainly silly; for example, both today and yesterday occurred in this week (the known similarity), yet we do not infer the further similarity that today, like yesterday, is a Friday.
The traditional approach to deciding if an analogy is reasonable, apparently starting in [Mill 73], has been to say that each similarity observed contributes some extra evidence to the conclusion; this leads naturally to the assumption that the most suitable source analogue is the one which has the greatest similarity to the target. Thus similarity becomes a measure on the descriptions of the source and target. However we define the similarity measure, it is trivially easy to produce counterexamples to this assumption. Moreover, Tversky's studies [Tversky 73] show that similarity does not seem to be the simple, two-argument function this naive theory assumes. One can convince oneself of this by trying to decide which day is most similar to today.

* This work was performed while the author was supported by a NATO studentship from the UK Science and Engineering Research Council, and by ONR contract N00014-81-K-0004. Computing support was provided by the SUMEX-AIM facility, under NIH grant RR-00785.

The theory of determinations ([Davies & Russell 86], [Russell 86b]) gives a first-order definition to the notion of the relevance of one fact to another: given that the known similarities are (partially) relevant to the inferred similarities, the analogical inference is guaranteed to be (partially) justified. The fact that P is relevant to Q is encoded as a determination, written as

    P(x, z) ≻ Q(x, y)

and defined as

    P(x, z) ∧ P(y, z) ∧ Q(x, w) ⇒ Q(y, w).

With this information, the overall similarity becomes irrelevant.

When the similarity is insufficient to determine the query at hand, i.e., we have no idea which of the known facts might be relevant, the theory does not apply. However, it still seems plausible that the most similar source is the best analogue. What has been lacking in previous theories of analogy by similarity is any attempt to justify this assumption; the analysis in this paper hopes to rectify this situation.
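A toy rendering of how a determination licenses an analogical inference regardless of overall similarity (our own illustration, using the classic nationality-determines-language example from the determinations literature; the function and data here are hypothetical):

```python
# Toy sketch of applying a determination P(x, z) >> Q(x, y): if P determines Q,
# then matching source and target on P licenses copying the source's Q value,
# no matter how dissimilar the two are otherwise.

nationality = {"Pedro": "Brazil", "Maria": "Brazil"}   # P(x, z)
language = {"Pedro": "Portuguese"}                      # known Q(source, w)

def infer_by_determination(target, source, P, Q):
    """P(target, z) and P(source, z) and Q(source, w)  =>  Q(target, w)."""
    if P.get(target) is not None and P.get(target) == P.get(source) and source in Q:
        return Q[source]
    return None   # determination does not apply; fall back on similarity

assert infer_by_determination("Maria", "Pedro", nationality, language) == "Portuguese"
```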
Since an inference by analogy is still an inference, the justification must take the form of an argument as to why a conclusion from similarity is any better than a random guess; better still, the theory should be able to assign a probability to the conclusion given the truth of the premises. The object of this paper is thus to compute (or at least sketch) the relationship between the measure of similarity between two objects, and the probability that they share a further, specified similarity. The principal problems which need to be solved before such a theory can be constructed are:

1) A reasonable way must be found to circumscribe the source and target descriptions. Without this, the sets of facts to be compared are essentially without limit.

2) A similarity measure must be defined in such a way as to be (as far as possible) independent of the way in which the source and target are represented.

3) We must identify the assumptions needed to relate the similarity measure to the desired probability.

The precise similarity measure itself is not important; in fact, it is essentially meaningless. If we have a different similarity measure, we simply need to relate it in a different way to the probability of correctness of the analogy. Thus we will not be attempting to define a similarity measure that is more plausible than those proposed previously.

The essence of our approach is to show that analogy to a maximally similar source can be justified in the absence of any applicable determination by showing that such a source is the most likely to match the target on the properties which are relevant to the query (even though the identity of these properties is unknown). If a source matches the target on all relevant features, an analogy from that source is assumed to be correct.
We first calculate the probability of such a match for the simple case of an attribute-value representation in which the relevance of any attribute is equally likely a priori; initially this is done assuming a fixed number of relevant features, and then we incorporate the assumption of a probability distribution for the number of relevant features. The result of the analysis is a quantitative prediction of the probability of correctness of an analogy to a given source as a function of the overall similarity of that source to the target. The prediction bears a very close resemblance to the empirical 'stimulus generalization probability' (the psychological term for the probability we are trying to calculate) measured in animal and human experiments.

In a subsequent section we attempt to relax the simple representational assumptions to allow the theory to apply to the general case. We conclude with a discussion of the difficulties inherent in such a task, and an indication of how similarity can be combined with determination-based reasoning to create a more general theory of analogy.

II THE SIMPLE MODEL

A simplified model for analogy in a database is this: we have a target T described by m attribute-value pairs, for which we wish to find the value of another attribute Q. We have a number of sources S1 ... Sn (analogues) which have values for the desired attribute Q as well as for the m attributes known for the target. Define the similarity s as the number of matching attribute values for a given target and source. The difference d = m - s.

[Fig. 1: p(d, r) for r = 1, 3, 5, 10, 20. The parameter attached to each curve is the number of relevant attributes, r; the horizontal axis is the number of non-matching attributes, d = m - s.]

Assume that there are r attributes relevant to ascertaining the value of Q, and that the relevant attributes are all included somewhere in the target descriptions.
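As a concrete sketch of this setup, similarity counting and selection of the maximally similar source can be written out directly (the attribute names and values below are invented for illustration):

```python
def similarity(target, source):
    """s: the number of attributes on which source matches target."""
    return sum(1 for attr, val in target.items() if source.get(attr) == val)

# Hypothetical target and candidate sources, each with m = 4 attributes.
target = {"colour": "red", "size": "large", "legs": 4, "wings": 0}
sources = {
    "S1": {"colour": "red", "size": "large", "legs": 4, "wings": 2},
    "S2": {"colour": "blue", "size": "small", "legs": 4, "wings": 0},
}

m = len(target)
scores = {name: similarity(target, src) for name, src in sources.items()}
differences = {name: m - s for name, s in scores.items()}  # d = m - s
best = max(scores, key=scores.get)  # maximally similar source
```

Here S1 matches on three attributes and S2 on two, so the maximal-similarity heuristic selects S1 as the analogue.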
This is equivalent to saying that the conjunction of all the attributes in the description is sufficient to determine the query (but since not all the attribute values match we cannot use this to conclude the desired similarity with certainty). Thus the solution to the problem of circumscribing the source and target descriptions is to limit them to the attributes contained in the left-hand side of the least specific determination available for the query at hand.

Define p(d, r) to be the probability that a source S, differing from the target on d attributes, matches it on the r relevant attributes. In the first instance, we assume that all attributes are equally likely to be relevant. We can thus calculate p(d, r) using a simple combinatoric argument: the number of choices of which attributes are relevant such that S matches T on those attributes is (m - d) choose r; the total number of choices of which attributes are relevant is m choose r; the value of p(d, r) is the ratio of these two numbers:

p(d, r) = (m - d choose r) / (m choose r)    (1)

For any r, this function drops off with d (= m - s), monotonically and concavely, from 1 (where d = 0) to 0 (where d > m - r). Thus the most similar analogue is guaranteed to be the most suitable for analogy. Figure 1 shows p(d, r) for values of r of 1, 3, 5, 10, 20 with the total number of attributes m = 30. As we would expect, the curve narrows as r increases, meaning that a higher number of relevant attributes necessitates a closer overall match to ensure that the relevant similarities are indeed present.

III ALLOWING r TO VARY

The assumption of a fixed value for the number of relevant features seems rather unrealistic. The most general assumption we can make is that r follows a probability distribution qQ(r) which depends on the type of the query Q.

[Fig. 2: p(d) given various assumptions about q(r): q(r) = constant, q(r) ∝ e^(-r), q(r) ∝ re^(-r), q(r) = Normal(4, 2).]
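Both the combinatoric formula for p(d, r) and the mixture over q(r) can be checked numerically; a minimal sketch in plain Python (not from the paper), with q normalized over r = 0..m:

```python
from math import comb, exp

M = 30  # total number of attributes, as in the paper's figures


def p_cond(d, r, m=M):
    """Probability that a source differing on d of m attributes
    matches the target on all r relevant attributes:
    p(d, r) = C(m - d, r) / C(m, r), and 0 when d > m - r."""
    return comb(m - d, r) / comb(m, r) if r <= m - d else 0.0


def p_marg(d, q, m=M):
    """p(d) = sum over r of q(r) * p(d, r), with q normalized on 0..m."""
    weights = [q(r) for r in range(m + 1)]
    total = sum(weights)
    return sum((w / total) * p_cond(d, r, m) for r, w in enumerate(weights))


uniform = lambda r: 1.0   # q(r) = constant
expo = lambda r: exp(-r)  # q(r) proportional to e^(-r)
```

As the text states, p(d, r) falls monotonically from 1 at d = 0 to 0 for d > m - r; with the uniform q, the hockey-stick identity for binomial sums makes p(d) work out to exactly 1/(d + 1).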
Thus, for example, we could assume that any number of relevant features is equally likely, or that three or four seems reasonable whilst 25 is unlikely. Although this introduces an extra degree of freedom into the theory, we find that the results are almost independent of what we assume about q. We calculate the probability of successful analogy now as a function of the source-target difference d only:

p(d) = Σ (r = 0 to m) q(r) p(d, r)

using the above formula for p(d, r). For any reasonable assumption about the shape of q(r), the variation of p(d) with d remains approximately the same shape:

For q(r) = constant, p(d) ~ 1/(d + 1)
For q(r) ∝ e^(-r), p(d) ~ e^(-d) for low d
For q(r) ∝ re^(-r), p(d) ~ e^(-d) except at large d
For q(r) = Normal(μ = 4, σ = 2), p(d) ~ e^(-d)

The first two assumptions are somewhat unrealistic in that they assign significant probability to there being no relevant features. When this possibility is discounted, the curves come much closer to being exponential. In figure 2 we show values of p(d) (plotted as dots) computed using these four assumptions about q(r), with a simple exponential decay (p(d) ∝ e^(-d), solid line) superimposed.

IV EMPIRICAL DATA ON STIMULUS GENERALIZATION

Psychological experiments on stimulus generalization are highly relevant to the study of analogy by similarity. In these experiments, a (human or animal) subject is given an initial stimulus, to which it makes a response. If necessary, the correct response is confirmed by reinforcement. This original stimulus-response pair is the source in our terms. Then a second stimulus is given, which differs from the original. This represents the target situation, for which the subject must decide if the original response is still appropriate. The empirical probability that the subject makes the same response (generalizes from the original stimulus) is measured as a function of the difference between the stimuli.
This probability is essentially what we are predicting on rational grounds in the above analysis. Early results in the field failed to reveal any regularity in the results obtained. One of Shepard's crucial contributions ([Shepard 58]) was to realize that the similarity (or difference) between the stimuli should be measured not in a physical space (such as wavelength of light or pitch of sound) but in the subject's own psychological space, which can be elicited using the techniques of multi-dimensional scaling ([Shepard 62]). Using these techniques, Shepard obtained an approximately exponential stimulus generalization gradient for a wide variety of stimuli using both human and animal subjects. Typical results, reproduced, with kind permission, from Shepard's APA presidential address ([Shepard 81]), are shown in figure 3. His own recent theory to explain these results appears in [Shepard 84], and has a somewhat similar flavour to that given here.

V GENERALIZING THE MODEL

In principle, we can make the simple model analyzed above applicable to any analogical task simply by allowing the 'attributes' and 'values' to be arbitrary predicate calculus formulae and terms. The assumption that each of these new 'attributes' is equally likely to be relevant is no longer tenable, however. In this section we will discuss some ways in which the similarity measure might be modified in order to allow this assumption to be relaxed. The idea is to reduce each attribute to a collection of uniform mini-attributes; if the original assumptions hold for the mini-attributes, our problem will be solved. Unfortunately, the task is non-trivial.

The first difficulty is that we can only assume equal relevance likelihood if the a priori probabilities of a match on each attribute value are equal; in general, this will not be the case. In the terms of [Carnap 71], the widths of the regions of possibility space represented by each attribute are no longer equal.
Accordingly, the simple notion of similarity as the number of matching attributes needs to be revised. If the cardinality of the range of possible values for the ith attribute is k_i, then the probability p_i of a match (assuming a uniform distribution) is 1/k_i. Although k_i will vary, we can overcome this by reducing each attribute to log2 k_i mini-attributes, for which the probability of a match will be uniformly 0.5. If the original distribution is not uniform (for example, a match on the NoOfLegs attribute with value 2 is much more likely than a match with value 1), a similar argument gives the appropriate contribution as -log2 p_i mini-attributes. This refinement may underlie the intuition that 'unusual' features are important in metaphorical transfer and analogical matching ([Winston 78], [Ortony 79]).

In [Russell 86b], the notion of one value 'almost matching' another is taken into account by supposing that determinations are expressed using the 'broadest' attributes possible, so that precise attributes are grouped into equivalence classes appropriate to the task for which we are using the similarity. In other words, similarities are re-expressed as commonalities. In the current situation, however, we will not know what the appropriate equivalence classes are, yet we still want to take into account inexact matches on attribute values; for example, in heart disease prognosis a previous case of a 310-lb man would be a highly pertinent analogue for a new case of a 312-lb man. If the weight attribute were given accurate to 4 lbs, these men would weigh the same; thus in general an inexact match on a scalar attribute corresponds to an exact match on a less fine-grained scale, and the significance of the 'match' is reduced according to the log of the accuracy reduction (2 bits in this case).
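The reduction to mini-attributes is just a count of information content; the following sketch (attribute examples invented, not from the paper) makes the bit arithmetic explicit:

```python
from math import log2


def significance_bits(p_match):
    """Number of uniform mini-attributes contributed by a match whose
    a priori probability is p_match: -log2(p_match)."""
    return -log2(p_match)


# An attribute with k = 8 equally likely values: a match carries 3 bits.
eight_valued = significance_bits(1 / 8)

# A common value (say, prior match probability 0.5): only 1 bit,
# reflecting the intuition that unusual features matter more.
common_value = significance_bits(0.5)

# Coarsening a scalar scale by a factor of 4 (e.g. from 1-lb to 4-lb
# weight accuracy) discards log2(4) = 2 bits of match significance.
accuracy_reduction = log2(4)
```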
A consequence of this view of the significance of an attribute leads to a constraint on the possible forms of q(r): if we assume that the relevant attributes must contain at least as much information as the attribute Q whose value they combine to predict, then we must have q(r) = 0 if r is less than the significance value of Q. Here r, as well as the total 'attribute count' m and the similarity s, are all measured on a scale where a one-bit attribute has a significance of 1.

At first sight, it seems that we have succeeded in breaking down our complex features into uniform elements, all of which are equally likely to be relevant, so all the earlier results should still apply. However plausible this may seem, it is simply false. The base of the logarithms chosen is of course totally arbitrary: we would still have uniform mini-attributes if we used log4. This would mean halving our values for m, r and s; but the formula for p(d, r) contains combinatoric functions, so it will not scale. Hence our predicted probability will depend on the base we choose for the logarithms! This is clearly unsatisfactory. What we have done is to neglect an important assumption made in using the combinatorial argument, namely that the relevant information consisted of a set of whole features. If we allow it to consist of a collection of sub-elements of various features, then clearly there are many more ways in which we can choose this set.

The plausibility of the simple model rests in our unstated assumption that the attributes we use carve up the world in such a way as to correctly segment the various causal aspects of a situation. For example, we could represent the fact that I own a clapped-out van by saying OwnsCar(SJR, 73DodgeSportsmanVanB318), using one feature with a richly-structured set of values; but for most purposes a reasonable breakdown would be that I
own a van (for other people's moving situations), that it's very old (for long-distance trip situations), that it can seat lots of people (for party situations), that it's a Dodge (for frequent repair situations) and that it's virtually worthless (for selling situations). Few situations would require further breakdown into still less specific features. In some sense, therefore, we will require a theory of natural kinds for features as well as for objects.

If it is the case that humans have succeeded in developing such well-tuned representations, then it is indeed reasonable for us to assume that the relevant information, which corresponds to the part of the real-world situation which is responsible for determining the queried aspect, will consist of a set of discrete features corresponding to the various possible causal factors present. This of course raises a vast throng of questions, not least of which is that of how an AI system is to ensure that its representation has the appropriate properties, or even whether it can know that it does or doesn't.

[Fig. 3: Plots of analogical response probability (S) against source-target difference (D), from [Shepard 81]. Panels show generalization gradients for stimuli including circles varying in size; squares varying in size and lightness; triangles and free-forms varying in shape; colors varying in lightness, saturation and hue (including pigeon data); positions varying in a linear slot; consonant and vowel phonemes; and Morse code signals.]
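The base-dependence problem raised above can be checked numerically: counting the same situation in two-bit rather than one-bit units halves m, r and d, yet the combinatoric probability does not survive the rescaling. The numbers below are an arbitrary illustration, not from the paper:

```python
from math import comb


def p(d, r, m):
    """p(d, r) = C(m - d, r) / C(m, r): probability of matching
    on all r relevant attributes when d of m attributes differ."""
    return comb(m - d, r) / comb(m, r) if r <= m - d else 0.0


# Measured in one-bit mini-attributes: m = 30, r = 4, d = 2.
one_bit_units = p(2, 4, 30)   # C(28, 4) / C(30, 4)

# The same situation measured in two-bit units: all values halved.
two_bit_units = p(1, 2, 15)   # C(14, 2) / C(15, 2)

# The two probabilities disagree, so the prediction would depend on
# the (arbitrary) base chosen for the logarithms.
```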
The subject of the semantic implications of using a particular representation is also touched upon in [Russell 86a], where we tie it in to the process of vocabulary acquisition; a real understanding is still far beyond our reach, but an appreciation of the problem, and the areas on which it impinges, is a first step.

VI CONCLUSIONS

The first steps toward a quantitative analysis of the probability of correctness of an analogy as a function of the source-target difference have been presented, giving the first justification for the maximal similarity heuristic. Although several difficult problems remain, it may be possible to define a representation-independent similarity measure on reliably circumscribed object descriptions. The empirical verification of the theory by Shepard's results is extremely good, in the sense that it shows that humans and animals possess a rational ability to judge similarity which has evolved, presumably, because of the optimal performance of its predictions given the available information. Shepard's explanation of the results and our own are somewhat complementary in that he deals with unanalyzed stimuli whereas our model assumes a breakdown into features. Given the usual nature of AI representations, this is well-suited for our purpose of constructing a computational theory of analogy and a generally useful analogy system for AI. We intend to further explore the implications and loose ends of the theory by performing large numbers of analogies in an AI database of general knowledge (Lenat's CYC system; see [Lenat et al 86]). A further goal is to integrate analogy by similarity with the determination-based analogical reasoning theory.
We anticipate three forms of integration:

- overconstrained determinations will circumscribe broad classes of potentially relevant features; we reason by similarity within these constraints if no exact match can be found;
- probabilistic determinations can add weights to the contributions of individual attributes to the overall similarity total;
- observation of an unexpectedly high similarity can initiate a search for a hitherto unknown regularity, to be encoded as a new determination.

When intelligent systems embodying full theories of limited rationality are built, an ability to perform analogical reasoning using both determinations and similarity will be essential in order to allow the system to use its experience profitably. Analogy by similarity also seems extremely well suited to the task of producing reliably fast, plausible answers to problems, particularly in a parallel environment. It is hoped that the ideas in this paper have gone some way towards realizing this possibility, although it is clear that more questions have been raised, some of them for the first time, than have been answered.

ACKNOWLEDGEMENTS

I would like to thank my advisors Doug Lenat and Mike Genesereth for fruitful discussions of these ideas, Benjamin Grosof and Devika Subramanian for helpful comments on an earlier draft, and Roger Shepard for making these ideas possible.

References

[Carnap 71] Carnap, Rudolf. A Basic System of Inductive Logic, Part I. In R. Carnap and R. C. Jeffrey (Eds.) Studies in Inductive Logic and Probability Vol I. Berkeley, CA: University of California Press; 1971.

[Davies & Russell 86] Davies, Todd & Stuart Russell. A Logical Approach to Reasoning by Analogy. Stanford CS Report (forthcoming) and Technical Note 385, AI Center, SRI International; June, 1986.

[Lenat et al 86] Lenat D., Prakash M. and Shepherd M. CYC: Using Common Sense Knowledge to Overcome Brittleness and Knowledge Acquisition Bottlenecks. AI Magazine Vol. 6 No. 4; Winter 1986.
[Mill 73] Mill, J. S. System of Logic, Book III, Ch. XX 'Of Analogy', in Vol. VIII of Collected Works of John Stuart Mill. University of Toronto Press; 1973.

[Ortony 79] Ortony A. Role of Similarity in Similes and Metaphors. In Ortony A. (Ed.) Metaphor and Thought. Cambridge University Press; 1979.

[Russell 86a] Russell, Stuart J. "Preliminary Steps toward the Automation of Induction". In Proceedings of the National Conference on Artificial Intelligence. Philadelphia: AAAI; 1986.

[Russell 86b] Russell, Stuart J. Analogical and Inductive Reasoning. Ph.D. thesis. Stanford University; 1986.

[Shepard 58] Shepard R. N. Stimulus and Response Generalization: Deduction of the Generalization Gradient from a Trace Model. Psychological Review Vol. 65; 1958.

[Shepard 62] Shepard R. N. The Analysis of Proximities: Multidimensional Scaling with an Unknown Distance Function (Parts I and II). Psychometrika Vol. 27; 1962.

[Shepard 81] Shepard, Roger. APA Division 3 Presidential Address, Los Angeles, August 25, 1981.

[Shepard 84] Shepard R. N. Similarity and a Law of Universal Generalization. Paper presented at the annual meeting of the Psychonomic Society, San Antonio, TX; November, 1984.

[Tversky 77] Tversky, Amos. Features of Similarity. Psychological Review Vol. 84, No. 4; 1977.

[Winston 78] Winston, Patrick H. Learning by Creating and Justifying Transfer Frames. Artificial Intelligence Vol. 10, No. 4; April, 1978.
SCAT, AN AUTOMATIC-PROGRAMMING TOOL FOR TELECOMMUNICATIONS SOFTWARE

S. Barra, O. Ghisio, F. Manucci
CSELT - via G. Reiss Romoli, 274 - 10148 Turin (Italy)

ABSTRACT

The size, complexity and long life-time of telecommunications software, e.g. the programs for stored program control (SPC) telephone exchanges, call for increased software productivity and maintainability as well as improved quality. The availability of programming support environments based on standardized specification and programming languages greatly improves the software development process. Artificial Intelligence techniques are very promising for further improvements and can provide a short-term payoff, especially within an evolutionary approach leading up to a hybrid programming environment, i.e. a software environment made of both conventional and intelligent tools. The paper describes an intelligent tool, dubbed SCAT, based on ideas exploited by various automatic programming systems, like CHI, Programmer's Apprentice and DEDALUS. SCAT is strictly related to the telecommunications domain, thus it differs from other systems in its domain specificity. SCAT partly automates the most crucial phase in the software development process, i.e. the transition from a project's detailed specification to the actual software implementation. SCAT has been tested in a few experimental software developments and in an actual application, i.e. the message handling system (MHS) to be made available in the Italian public packet switching network (ITAPAC).

1. INTRODUCTION

The most crucial phase of the software development process is the transition from the detailed specification of the project to the actual implementation. A first attempt at designing and implementing tools supporting this phase via conventional techniques proved not to be very promising.
Thus we addressed the problem of automating this phase according to the methods of knowledge engineering and the transformational approach [1] [2]. In the telecommunications field, the specification and implementation formalisms must be the Specification and Description Language (SDL) and the CCITT High Level programming Language (CHILL) recommended by CCITT, the international consultative committee representing all telecommunications administrations. Background information about the SDL and CHILL languages is given in section 2.

SCAT, SDL to CHILL Assisting Transformer, is based on ideas exploited by the PSI [3], Programmer's Apprentice [4] and DEDALUS [5] systems. The specification acquisition phase, synthesis phase and knowledge organization are similar to PSI's. In the specification acquisition phase, a program network is built. The program network is transformed into a complete and consistent description of the program (a program model represented via a hierarchical structure of frames) during the synthesis phase. Finally, the program model is translated into the target language. A detailed description of these two phases is given in section 3.3. The organization of the three kinds of knowledge (knowledge about SDL and CHILL, application domain knowledge and incremental knowledge) is presented in section 3.2.

SCAT provides users with assistance similar to that provided by the Programmer's Apprentice system. User interaction is needed, as SCAT may ask for missing information required to generate a working program corresponding to a particular specification (see section 3.1). As for the transformational approach adopted in SCAT, this system could be considered close to DEDALUS, in particular with regard to the rules describing the specification and implementation languages and the mapping between them.
Actually, transformations in SCAT are applied in a stiff and predetermined way, instead of depending at each step on the analysis of the results gained in previous transformations. In addition, as SCAT takes as input SDL, i.e. a formal language, the syntax and semantics of the specification are far removed from natural language or input/output predicates. Information on the experience gained in using SCAT, and a discussion of the SCAT implementation, are in section 4.

2. BACKGROUND

To overcome at least some of the difficulties deriving from the dimension and complexity of telecommunications systems, CCITT (International Telegraph and Telephone Consultative Committee) has promoted the definition of international standards providing formal languages to support the design and implementation of switching software. Two languages are recommended: SDL (Specification and Description Language) [6] and CHILL (CCITT High Level Language) [7]. The former is usable both by Administrations, to provide functional specifications of telecommunications systems, and by Manufacturers, to produce descriptions, i.e. standard documentation (Fig. 1). The latter is a high level programming language, fulfilling typical requirements of the specific telecommunication area.

The SDL and CHILL languages are rigorous means for specification and implementation activities. However, problems still remain in the transition from specification to implementation, owing on one hand to the different nature of the two languages, and on the other hand to the specific usage of SDL and CHILL. The degree of formality required at the SDL and CHILL levels is different: the first allows a certain degree of informality in the specification, while the second must obey the strictest formalization rules.
The SDL and CHILL definitions have been carried out by their own CCITT groups which, even while emphasizing compatibility between the languages, designed them from different points of view. SDL is expected to provide a means of specifying/describing the behavior of telecommunications systems: for this reason the people involved in the SDL definition focused on description problems specific to the telecommunications field. Conversely, CHILL is a general purpose programming language. Thus, CHILL provides syntactic constructs typical of a high level programming language, while most of them are missing from the SDL grammar.

As a matter of fact, some concepts are syntactically similar but semantically different, while others have the same semantics but are syntactically referred to in a different way. Problems therefore arise in modeling SDL semantics in CHILL. The best way of matching the semantics of the two languages cannot be immediately identified: a match can be performed using a set of CHILL constructs to reproduce a particular SDL concept. On the basis of the problems previously outlined, an intelligent tool has been developed at CSELT, in order to support and partially automate some phases of software development and maintenance (Fig. 2) [8].

3. THE SCAT SYSTEM

SCAT (SDL to CHILL Assisting Transformer) is an intelligent tool providing an assisted transformation in the production of a complete CHILL program corresponding to an SDL specification. This makes it possible to bridge the gap between the specification and implementation levels due to the informality allowed in the former with respect to the rigorous formalism of the latter. SCAT takes as input an SDL specification (see top of Fig. 3) in textual form, produced by a set of tools relevant to the specification (graphical editor, graphical-textual translator, syntax analyzer), and produces as output the corresponding CHILL code (see bottom of Fig. 3).
The specification is acquired and analyzed by the specification analyzer; then the specification solver performs the synthesis activity, cooperating with the user's assistant and consulting the knowledge base. The coder produces the CHILL output, which will be processed within a conventional CHILL programming support environment [9].

The application domain knowledge is represented by prototypical frames, that is frames not yet instantiated, which are used by suitably instantiating their attributes. The incremental knowledge is organized in a set of frames retaining the information which appears to be reusable during the transformation of the same and/or other specifications of a particular project. Such information is used in transforming SDL tasks into CHILL procedures and is made of CHILL code pieces and some further information characterizing them, that is parameter types, procedure attributes, and the name of the file containing the procedure code.

Figure 4 shows an example of a task frame instance for the SDL task MAKE ASSOCIATION(A,B). This task could be transformed into a CHILL procedure named MAKE_ASSOCIATION having A and B as present parameters. Since this task has been previously classified as "procedure with parameters", a procedural attachment handling present parameters is started. The procedure "manage-proc-with-param" recovers the CHILL name and the present parameters from the SDL name; retrieves the formal parameters of the procedure from the incremental frame corresponding to the MAKE ASSOCIATION task (Fig. 5), if it already exists; and assigns such types to the present parameters, suitably instantiating a declarative frame. Figure 5 shows an instance of an incremental knowledge task frame with some values useful for the CHILL definition of the MAKE_ASSOCIATION procedure. This kind of information is acquired through a user interaction phase and recorded by the system, which exploits it again when needed (e.g. for the present parameter definition of the task met).
The normative knowledge is made up of about 300 rules. In particular, they are concerned with:
- syntactic rules of the SDL and CHILL languages (decoding and coding rules);
- correspondence rules between the two languages (equivalence rules);
- rules performing slot instantiation and retrieval (property rules);
- rules checking some consistency constraints, to be used in the updating activity (consistency rules).

An example of SCAT rules for the task concept is shown in Fig. 6. The decoding task rule (1) expresses that the SDL primitive at hand is a task if this primitive consists of the keyword TASK followed by an SDL name. Two coding rules correspond to the task rule, depending on what the task stands for: abstraction or informal task (2), or assignment (3). The choice of the suitable coding rule depends on the evaluation of the equivalence rules (6) and (7). The instantiation and retrieval in the task frame of <SDL-NAME> and <CHILL-NAME> respectively are performed by the activation of the property rules (4) and (5).

3.3 Transformational Process

In the transformational process (Fig. 7), SCAT gradually acquires the specification through the application of the SDL rules, building at the same time the program model (specification analyzer), suitably instantiating its slots with those values available at that moment in the specification, then linking it to its parent frame. Looking at those empty slots in the program model intended for storing CHILL information, SCAT becomes aware of what it needs to issue the CHILL code. It then fills up such slots consulting the model itself, the incremental knowledge and, when needed, the user (complete model builder, user interface). Finally, it generates the CHILL code through the scanning of the model and the application of the programming rules (coder).

As far as the updating activity is concerned, SCAT allows the user to substitute a chunk of a previously acquired specification or to insert further pieces into it.
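The decode/instantiate/code cycle for the task concept might be caricatured as follows. SCAT itself is written in Prolog over frames and rules; this Python sketch, with invented helper names and a simplified task syntax, only illustrates the flow of rules (1), (4)/(5) and a coding rule:

```python
import re


def decode_task(sdl_text):
    """Decoding rule, sketched: a primitive is a task if it is the
    keyword TASK followed by an SDL name (here with parameters)."""
    m = re.match(r"TASK\s+([A-Z_ ]+?)\s*\(([^)]*)\)\s*$", sdl_text.strip())
    if m is None:
        return None
    params = [p.strip() for p in m.group(2).split(",") if p.strip()]
    # A frame with an empty slot for the CHILL name, to be filled later.
    return {"type": "task", "sdl_name": m.group(1), "params": params,
            "chill_name": None}


def property_rule(frame):
    """Property rule, sketched: instantiate the CHILL-NAME slot
    from the SDL-NAME slot."""
    frame["chill_name"] = frame["sdl_name"].replace(" ", "_")
    return frame


def coding_rule(frame):
    """Coding rule, sketched: emit a CHILL procedure call for a task
    classified as 'procedure with parameters'."""
    return f"{frame['chill_name']}({', '.join(frame['params'])});"


frame = property_rule(decode_task("TASK MAKE ASSOCIATION(A,B)"))
chill_line = coding_rule(frame)  # "MAKE_ASSOCIATION(A, B);"
```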
A submodel for the SDL piece to be added or substituted is created and filled in by the same components acting in the transformation process. The submodel is then inserted at the right place in the starting model; at the same time, checks are made throughout the whole model to identify and solve the possible inconsistencies caused by the update (model modifier). The code generation is finally performed in a way completely transparent with respect to the changes carried out.

4. EXPERIENCE OF USE AND IMPLEMENTATION

A few experimental applications of SCAT have been carried out, and the system is presently used in an actual application development [10]. A communication protocol [11], fully specified in SDL, is automatically implemented in CHILL using the SCAT system. The detailed specification of a communication protocol is not an easy task: its implementation based on this specification is even more complicated, requiring a good knowledge of the problem itself. Usually the implementation of a communication protocol consists of a considerable amount of code, therefore it represents a good test for an automatic translator. This experience made explicit that software productivity can be increased by a factor of from 5 to 10.

The SCAT system has been developed in the PROLOG language, in particular in CProlog (1.2 and 1.5 versions) and in Quintus (1.2 version), running on VAX-11 machines (UNIX and VMS operating systems) and SUN workstations. Some differences in behavior, depending on the PROLOG version and the computer used, have to be pointed out. A primary requirement for translating an average-to-large SDL specification (about 500 SDL lines) into the corresponding CHILL implementation is to change the memory dimensions of CProlog. In particular, the CProlog stacks have to be enlarged in order to avoid an abort of the transformation process. The transformation of the same specification in the VAX/VMS environment and in the VAX/UNIX environment using CProlog 1.5
produces two different behaviors. In the VAX/VMS environment the stack dimensions have to be fixed in the following manner:
- global stack: 4100K
- local stack: 4800K
- heap stack: 500K
- trail stack: 150K
In the VAX/UNIX environment the transformer cannot be run, because the greatest size of a UNIX task is 4 Mb; the same occurs on the SUN workstation. The CPU time required by the transformation of the above-mentioned specification is about 30.54 sec., and the generated program is about 2,000 CHILL lines long.

5. CONCLUSIONS

The SCAT system is based on some ideas exploited by various automatic programming systems: PSI's synthesis phase and knowledge organization, the assistance approach provided by the Programmer's Apprentice, and the transformational approach adopted in DEDALUS. Actually, SCAT does not provide new generally applicable ideas; rather, it shows how some ideas from the automatic programming field can be instantiated in the specific telecommunications domain. Such a domain is one of the largest software application areas, and thus it justifies large efforts in attempting to increase software productivity and maintainability as well as to improve software quality. Significant payoffs have been achieved in the SCAT development by exploiting AI techniques according to an evolutionary approach which aims at hybrid software environments. First of all, AI techniques make it possible to improve the cooperative assistance to the user and to efficiently organize the knowledge required in the transformation task. Moreover, the joint use of frames and rules made it possible to quickly write a first SCAT prototype, afterwards tailored to the particular application in a tuning activity. The use of the Prolog language has shown its suitability for representing and evaluating knowledge organized in production-rule form; furthermore, it made it possible to realize a first prototype of the system in a short time; finally, it assures an easy adaptability of SCAT to other programming and specification languages.
SCAT makes software development and maintenance easier, cheaper and more reliable, taking care of burdensome and error-prone tasks, such as consistency checking and the match between specification and implementation, and leaving to the user the responsibility for the most crucial decision making.

REFERENCES

[1] E. Lerner, "Automatic Programming", Computers & Software II, IEEE, 1982.
[2] J. Phillips, "Self-Described Programming Environments: An Application of a Theory of Design to Programming Systems", Technical Report STAN-CS-84-1008, Kestrel Institute, 1983.
[3] E. Kant, D. Barstow, "The refinement paradigm: the interaction of coding and efficiency knowledge in program synthesis", IEEE Transactions on Software Engineering, Vol. SE-7, n. 5, 1981.
[4] C. Rich, "Inspection Methods in Programming", Technical Report AI-TR-604, M.I.T. A.I. Labs, 1981.
[5] Z. Manna, R. Waldinger, "Synthesis: dreams -> programs", IEEE Transactions on Software Engineering, Vol. SE-5, n. 4, 1979.
[6] CCITT Recommendations Z.100 - Z.104, "Functional Specification and Description Language (SDL)", 8th Plenary Assembly, Malaga-Torremolinos, 1984.
[7] CCITT Recommendation Z.200, "CCITT High Level Language (CHILL)", 8th Plenary Assembly, Malaga-Torremolinos, 1984.
[8] Barra S., Ghisio O., Modesti M., "The use of artificial intelligence in the transformation from SDL to CHILL", II SDL Users and Implementors Forum, Helsinki, 1985.
[9] Bagnoli P. et al., "Towards a Software Engineering environment for telecommunication systems based on CCITT standards", XI International Switching Symposium, Florence, 1984.
[10] Barra S., Ghisio O., Modesti M., "Experience and problems of applications of automatic translation from SDL specifications into CHILL implementations", 6th International Conference on Software Engineering for Telecommunication Switching Systems, Eindhoven, 1986.
[11] CCITT Recommendation X.411, "Message Transfer Layer", 8th Plenary Assembly, Malaga-Torremolinos, 1984.

Fig.
1 - An example of SDL specification in graphical form

Fig. 2 - The SCAT window environment

Fig. 3 - The SCAT system

Fig. 4 - A frame example of the program model

Fig. 5 - An example of incremental knowledge

Fig. 6 - Example of rules:
(1) TASK <SDL-name> [ <comment> ] ; --> <task>
(2) <call-action> --> CALL <CHILL-name> ( <present-par> {, <present-par>}* ) [ <comment> ] ;
(3) <assign-action> --> <CHILL-name> [ <comment> ] ;
(4) <SDL-name> --> fill-slot(SDL-NAME)
(5) <CHILL-name> --> get-slot(CHILL-NAME)
(6) <task> --> <call-action> if <SDL-name> is "abstraction"
(7) <task> --> <assign-action> if <SDL-name> is "assignment"

Fig. 7 - The transformational process
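Read operationally, the Fig. 6 rules pair one decoding step with coding steps selected by equivalence rules, while the property rules move names in and out of the task frame. The following is a minimal sketch of that interplay, assuming a toy task frame modeled as a Python dict; SCAT itself is written in Prolog, and every function, slot, and example name below is invented for illustration.

```python
import re

def decode_task(sdl_line, frame):
    """Decoding rule (1): 'TASK <SDL-name> ;' makes the primitive a task;
    property rule (4) fills the SDL-NAME slot of the task frame."""
    m = re.match(r"TASK\s+(\w[\w-]*)\s*;", sdl_line.strip())
    if not m:
        return False
    frame["SDL-NAME"] = m.group(1)
    return True

def code_task(frame, kind):
    """Coding rules (2)/(3), chosen by the equivalence rules (6)/(7);
    property rule (5) retrieves the CHILL-NAME slot."""
    chill_name = frame["CHILL-NAME"]
    if kind == "abstraction":                      # equivalence rule (6)
        params = ", ".join(frame.get("PRESENT-PAR", []))
        return f"CALL {chill_name} ({params});"    # coding rule (2)
    if kind == "assignment":                       # equivalence rule (7)
        return f"{chill_name};"                    # coding rule (3)
    raise ValueError(f"unknown task kind: {kind}")

frame = {"CHILL-NAME": "manage_proc", "PRESENT-PAR": ["chill_name", "types"]}
decode_task("TASK manage-proc ;", frame)
print(code_task(frame, "abstraction"))   # prints: CALL manage_proc (chill_name, types);
```

The point of the sketch is only the division of labor: decoding populates the frame, equivalence selects a coding rule, and coding reads the frame back out.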
HYPOTHETICALS AS HEURISTIC DEVICE *

Edwina L. Rissland and Kevin D. Ashley
Department of Computer and Information Science
University of Massachusetts
Amherst, Massachusetts 01002
(413) 545-0332

Abstract

In this paper we examine the use of hypotheticals as a heuristic device to assist a case-based reasoner in testing the strengths, weaknesses, and ramifications of an analysis or argument by exploring and augmenting the space of known cases and, indirectly, the attendant spaces of doctrine and argument. Our program, HYPO, works in the task domain of the law, particularly the area of trade secret protection for software. We describe how HYPO generates a constellation of legally meaningful hypothetical fact situations ("hypos") which are "near" a given fact situation. This is done in two steps: analysis of the given situation and then generation of the hypos. We discuss the heuristics HYPO currently uses, which include: (1) make a case weaker or stronger; (2) generate an extreme case; (3) enable a near miss; (4) manipulate a near win; and (5) generate a case on a related "dimension".

1. Introduction

In contemplating a course of action, one needs to explore such things as its strengths and weaknesses and the ramifications of pursuing it. In particular, one needs to be able to spot fatal weaknesses (e.g., if one assumed condition does not obtain, does the entire plan fail?) and implications that set new precedents and/or overturn old ones (e.g., does it reverse a long line of policy?). Experts in many fields, such as the law, Talmudic scholarship, and medical therapy planning, often make use of hypothetical cases ("hypos"), and other case-based reasoning ("CBR") techniques, in such situations. In this paper, we concentrate on the use of hypotheticals in the legal domain.
A real case is a case that has been litigated and decided; a hypo has not (even though it might be a very slight variation of one that has, or may foretell cases in the process of coming to light or just "waiting to happen" [Rissland, 1985]). In particular, we focus on the use of a sampling of hypotheticals to explore possible situations and arguments arising from a current fact situation. It is a heuristic technique. Not only does the "constellation" of hypotheticals around a given case effect a heuristic search of the "space" of all possible cases, but the individual hypos in the constellation are themselves generated using heuristics. We have implemented these ideas in our case-based reasoning program, HYPO, which performs CBR tasks in the legal domain. HYPO is based on detailed analysis of case-based reasoning, particularly that involving hypotheticals, as performed by legal experts, particularly the Justices of the United States Supreme Court and professors of law at the Harvard Law School. In reasoning with cases and hypotheticals, HYPO uses a means of representing and indexing cases in a Case Knowledge Base ("CKB"), a computational definition of relevance in terms of "dimensions" which capture the utility of a case for making a particular kind of argument, a dimension-based method for comparing cases, and methods for generating hypotheticals to help an arguer formulate an argument, gather relevant facts, and explain his argument. HYPO's domain is legal argument where, as illustrated below with examples of oral arguments before the Supreme Court, cases and hypotheticals are primary tools.

* This work was supported (in part) by: Grant IST-8212238 of the National Science Foundation, the Advanced Research Projects Agency of the Department of Defense, monitored by the Office of Naval Research under contract no. N00014-84-K-0017, and an IBM Graduate Fellowship.
In this paper, we concentrate on HYPO's creation of hypothetical new cases to accomplish such tasks as: (1) test the sensitivity of one's argument to the absence or presence of certain facts; (2) locate and explore subspaces of relevant cases in the CKB; (3) augment and "flesh out" sparse areas of the CKB; (4) sample the space of implications of a given argument; (5) formulate refinements and refutations of an argument. HYPO generates these hypotheticals heuristically using certain well-known general heuristics (e.g., examine extreme cases) as well as HYPO-specific ones (e.g., examine weaker/stronger cases along a HYPO dimension).

COGNITIVE MODELLING AND EDUCATION / 289
From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved.

While HYPO is a program whose primary task domain is legal argument, the lessons learned from HYPO should prove useful for other CBR tasks like strategic planning and learning by experimentation. The posing and manipulating of hypotheticals is important in strategic planning, where one must examine a proposed plan in light of telling what ifs; all too often the advocate of a plan only tells of its good points, and a devil's advocate is needed to unmask its weaknesses. Also, one cannot afford to wait passively for the right test case to come along before grappling with a potential problem; one must create cases to reason in anticipation. In learning, the problem of how to intelligently select training instances is directly related to our concerns here. In fact, some of the heuristics we discuss here, like those using near misses, are the subject of another on-going project of ours on intelligent example selection [Rissland, Buchanan, Rosenbloom and Ng, 1986] for rule-learning systems like RL [Fu and Buchanan, 1985].

2. Examples of How Experts Reason with Hypotheticals

Legal experts reason with hypotheticals in two situations: (1) law school teaching; and (2) aspects of litigation.
In law school, hypos are used (sometimes unmercifully) to ferret out unspoken assumptions and prejudices of students, to focus attention on subtle or troublesome points, and to exercise the student's argumentative powers [Gewirtz, 1981; Rissland, 1984]. In litigation, hypos are used primarily at two points: (a) preparation and "debugging" of an argument, in the way a strategic planner "dry runs" his plan, and (b) in oral argument. In oral argument, the hypos usually come from the judges trying to probe an advocate's position and its ramifications. The following examples from oral arguments before the United States Supreme Court illustrate how judges use hypotheticals:

In Lynch v. Donnelly, 104 S. Ct. 1355 (1984), a case involving the constitutionality of a city's Christmas creche display on municipal land, the Justices posed the following hypotheticals:

Q: Do you think . . . that a city should display a nativity scene alone without other displays such as Santa Claus and Christmas trees...?
Q: [C]ould the city display a cross for the celebration of Easter, under your view?
Q: [S]upposing the creche were just one ornament on the Christmas tree and you could hardly see it unless you looked very closely, would that be illegal?
Q: What if they had three wise men and a star in one exhibit, say? Would that be enough? . . . What if you had an exhibit that had not the creche itself, but just three camels out in the desert and a star up in the sky?
Q: Well, the city could not display religious paintings or artifacts in its museum under your theory.
Q: There is nothing self-explanatory about a creche to somebody . . . who has never been exposed to the Christian religion.
Q: Would the display up on the frieze in this courtroom of the Ten Commandments be unconstitutional then, in your view?
Q: Several years ago . . . there was a ceremony held on the Mall, which is federal property of course. . . . [T]here were 200,000 or 300,000 people . . .
and the ceremony was presided over by Pope John Paul II. Would you say that was a step towards an establishment of religion violative of the religion clauses? . . . Then you think it would be alright to put a creche over on the Mall? . . . How do you distinguish a high mass from a creche? . . . [T]here was a considerable involvement of government in that ceremony, hundreds of extra policemen on duty, streets closed... That was a considerable governmental involvement, was it not?

SUP, Lynch v. Donnelly, Case No. 82-1256, Fiche No. 5.

In the above questions, one can see the Justices modifying the fact situation along various dimensions: location, size, and focus of the display, religious content of the display, nature of the viewer, and degree of government involvement. Such questions use hypotheticals:

- To present, support and attack positions (e.g., by testing the consequences of a tentative conclusion, pressing an assertion to its limits, and exploring the meaning of a concept);
- To relate a fact situation to significant past cases;
- To augment an existing case base with meaningful test or training cases;
- To factor a complex situation into component parts (e.g., by exaggerating strengths or weaknesses, or eliminating features);
- To control the course of argument (e.g., by focussing discussion on particular issues).

Sometimes the purpose of the modifications (and thus the derivative hypos) is to compare the fact situation to actual cases previously decided by the Court, to test whether the current case presents stronger or weaker facts.¹ Or a hypothetical case, like the Mall example, may be significant because it did not give rise to litigation.

Frequently, the Justices use hypotheticals to apply pressure to the rule proposed by an attorney for deciding the case. This is often done by means of a "slippery slope", that is, a sequence of hypotheticals, each a little more extreme than its predecessor, which ends in an extreme "reductio" case.
The changes can be symbolic or numerical. The following example uses a (numerical) slippery slope. In the argument of Sony Corp. v. Universal City Studios, 464 U.S. 417 (1984), while an attorney was advocating the position that if Sony sold video recorders while knowing that consumers would use them to copy copyrighted materials, then Sony should be legally responsible to the owners of the copyrights, the following interchange occurred:

Q: Suppose . . . that about 10 percent of all programming could be copied without interference by the producer or whoever owned the program...
A: I don't think that would make any difference. I think 10% is too small of an amount.
Q: Well, what about 50?

At this point the attorney posed his own hypothetical. Even if there were only one television program that was copyrighted, he asserted that if Sony knew the program would be copied, it should be legally responsible. Finally, the Justice asked:

Q: Under your test, supposing somebody tells the Xerox people that there are people who are making illegal copies with their machine and they know it. . . . Xerox is a contributory infringer?
A: To be consistent, Your Honor, I'd have to say yes.
Q: A rather extreme position.

SUP, Sony Corp. v. Universal City Studios, Case No. 81-1687, Fiche No. 2.

¹ See, e.g., Stone v. Graham, 449 U.S. 39 (1980): posting copies of the Ten Commandments in schools held unconstitutional; Gilfillan v. City of Philadelphia, 637 F.2d 924 (CA3, 1980): city-financed platform and cross used by Pope John Paul II to celebrate public mass held unconstitutional; McCreary v. Stone, 575 F.Supp. 1112 (SDNY 1983): not unconstitutional for village not to refuse permit to private group to erect creche in public park.
In these last two questions, although the altered fact situations posed by the Justice are still covered by the proposed rule, it is increasingly harder for the attorney to justify his position because the hypotheticals present progressively weaker facts; the Justice has "stacked" the hypothetical with extreme facts. The attorney, to keep his argument alive, must distinguish the current Sony situation from the hypos. Indeed, the attorney failed. The Court held for Sony on the ground that the Betamax was capable of substantial noninfringing use because so many programs were not subject to copyright restrictions, 464 U.S. 417, 456. Such observations are the basis of HYPO's approach to using and generating hypotheticals.

3. The Heuristic Constellation

The technique we describe in this paper is what we call the heuristic constellation. It is the generation and use of a cluster of hypothetical cases generated from a given initial "seed" case. The hypos in the constellation are generated by applying certain heuristics to the initial case and are "nearby" the seed in a legally meaningful way. As we said, the effect is a heuristic search of the space of all cases (which is far too large to be searched in a brute-force manner). The heuristic constellation can be used to give one a feel for the robustness or vulnerability of a case. For instance, if many of the hypos in the constellation look "bad" vis-a-vis a particular analysis or position, this provides a strong hint that the seed is vulnerable. By assessing the elements of the constellation one creates an evaluation of the initial case that takes into consideration the relation of the case to the case knowledge base. The constellation can also be used for case retrieval from the case knowledge base. It adds some "jitter" to the seed case and thus might cause some new matches to be made in the CKB.
This phenomenon can be seen in our creche example, where the Justices used the hypotheticals to get to other cases in their CKB, like those involving the Pope, which in a straight keyword approach would have been missed. Thus the heuristic constellation can enable one to find new areas of the CKB which a priori one might not have considered "close" or relevant to the seed. We will see another example of this in our discussion of HYPO's manipulations of a trade secret misappropriation case. At first blush, one might think that the only relevant cases will be about intellectual property law; however, by artful hypothesizing, one gets to certain classic cases in contract law which are also directly relevant. These cases would have been missed by a hierarchical retrieval system that used a conceptual hierarchy placing property and contracts on totally different major branches.

4. Background on HYPO: Some Definitions

In order to perform its work, HYPO represents various kinds of domain knowledge. Cases are disputes between parties tried by a court, whose decisions are reported in published opinions. The opinion sets forth the facts of the case, the claims made by one party against the other, and the court's holding. Facts are statements about events associated with the dispute that were proved at trial or which the court assumed to be true. A claim is a recognized kind of complaint for which the courts will grant relief (e.g., breach of contract, negligence, trade secrets misappropriation, copyright infringement). The holding is the decision of the court as to the legal effect on each claim of the facts of the case, either in favor of the plaintiff or the defendant. In HYPO, cases are represented by a hierarchical cluster of frames (flavor instances) with slots for relevant features (plaintiff, defendant, claim, facts, etc.).
Some features are in turn expanded and represented as frames (e.g., plaintiff) [Rissland, Valcarce, & Ashley, 1984]. The library of cases is called the Case Knowledge Base (CKB). HYPO's current CKB contains a dozen or so of the leading cases in trade secret law for software. See Table 1 in the Appendix for a partial list of cases and a very brief indication of their content. Besides the CKB and the understanding of the legal domain that this case representation implicitly contains, the other major source of domain-specific legal knowledge is in HYPO's dimensions. Dimensions capture the notion of legal relevance of a cluster of facts to the merits of a claim: that is, for a particular kind of case, what collections of facts represent strengths and weaknesses in a party's position. The short answer is that facts are relevant to a claim if there is a court that decided such a claim in a real case and expressly noted the presence or absence of such facts in its opinion. Examples of dimensions in HYPO's area of trade secret law are: Secrets-voluntarily-disclosed, Disclosure-subject-to-restriction, Competitive-advantage-gained, Vertical-knowledge. These dimensions are summarized in the Appendix. As the Appendix indicates, each dimension has several facets. For instance, the prerequisites of the Secrets-voluntarily-disclosed dimension are that two corporations, plaintiff and defendant, compete with respect to a product, plaintiff has confidential product information to which defendant has gained access, and plaintiff has made some disclosures of the information to outsiders. The prerequisites are stated in terms of factual predicates, which indicate the presence or absence of a legal fact or attribute (e.g., existence of a product, existence of a non-disclosure agreement). The focal slot of this dimension is the number of disclosees, and its range is a non-negative integer.
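The facets just enumerated (prerequisites, focal slot, range, and a direction to strengthen) can be captured in a small record. The following is a minimal sketch in Python; HYPO itself represents dimensions as frames (flavor instances), so the field names and the Python form are our own assumptions.

```python
from dataclasses import dataclass

@dataclass
class Dimension:
    """Hypothetical record for one HYPO-style dimension (names invented)."""
    name: str
    prerequisites: list      # factual predicates that must hold in the facts
    focal_slot: str          # the slot whose value is varied along the dimension
    value_range: tuple       # permissible extremes of the focal slot
    stronger_direction: int  # +1 or -1: how to move the focal value to
                             # strengthen the plaintiff's position

secrets_voluntarily_disclosed = Dimension(
    name="secrets-voluntarily-disclosed",
    prerequisites=["competing-product", "confidential-info",
                   "defendant-access", "some-disclosures"],
    focal_slot="number-of-disclosees",
    value_range=(0, 10_000_000),
    stronger_direction=-1,   # fewer disclosees is stronger for the plaintiff
)
```

The record makes the later heuristics easy to state: they read the focal slot, its range, and the direction-to-strengthen, and rewrite the focal value.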
To strengthen the plaintiff's position in a fact situation to which this dimension applies, decrease the number of disclosees, the best case being that with 0 disclosees. The significance of the dimension is that courts have found that the prerequisite facts are a reason for deciding a trade secrets misappropriation claim. This dimension covers at least two cases in the CKB: Midland-Ross, in which the court held for the defendant where the plaintiff disclosed the secret to 100 persons, and Data-General, in which the court held for the plaintiff where the plaintiff disclosed to 6000 persons. HYPO knows about 80 dimensions in all (some of the others are described in [Rissland, Valcarce & Ashley, 1984]). The dimensions were gleaned from law journal articles describing the state of the (case) law in this area [Gilburne & Johnston, 1982]. In order to reason with a case, HYPO first performs a legal analysis; this is done by a CASEANALYSIS module, which in essence works as a diagnostic engine to determine which dimensions apply to a fact situation. The prerequisites, in effect, define antecedent conditions, and a dimension (i.e., a possible reason for deciding a claim in a particular way) is the consequent. To make an analogy with the medical domain, the prerequisite facts are like symptomatic features and the dimensions are like intermediate disease classes. The overall structure of HYPO and its internal workings are described in [Ashley and Rissland, 1985; Ashley, 1986]. The output of the CASEANALYSIS module is the case-analysis-record, which contains: applicable factual predicates, applicable dimensions, near-miss dimensions, applicable claims, and relevant CKB cases. Near-miss dimensions are those for which some, but not all, of the prerequisites are satisfied.
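The diagnostic step reduces to an all-versus-some check of prerequisites against the facts. The sketch below is a hypothetical Python reconstruction, not HYPO's code; the dimension and predicate names abbreviate those used in the text, and the Widget-King facts anticipate the example introduced later.

```python
# Each dimension maps to the factual predicates it requires (illustrative).
DIMENSIONS = {
    "competitive-advantage-gained": ["competing-product", "confidential-info",
                                     "defendant-access", "expense-saved"],
    "secrets-voluntarily-disclosed": ["competing-product", "confidential-info",
                                      "defendant-access", "some-disclosures"],
}

def case_analysis(facts):
    """Sketch of CASEANALYSIS: a dimension is applicable when all of its
    prerequisites hold, and a near miss when some but not all hold."""
    record = {"applicable": [], "near-miss": []}
    for dim, prereqs in DIMENSIONS.items():
        satisfied = [p for p in prereqs if p in facts]
        if len(satisfied) == len(prereqs):
            record["applicable"].append(dim)
        elif satisfied:
            record["near-miss"].append(dim)
    return record

widget_king = {"competing-product", "confidential-info",
               "defendant-access", "expense-saved"}
record = case_analysis(widget_king)
# competitive-advantage-gained is applicable; secrets-voluntarily-disclosed
# is a near miss, since no disclosures have been made yet.
```

This mirrors the antecedent/consequent reading given above: the prerequisites are the antecedent conditions, and the dimension is the consequent.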
5. Heuristics for Generating Hypotheticals

Basically, what HYPO does is to start with a given fact situation, or seed case, and generate legally relevant or plausible derivative hypotheticals by modifying the seed case. Since one cannot explore all the "legally" possible hypos (in the sense of syntactic legal moves), one needs to explore the space heuristically. Dimensions provide a handle on how to do this exploration in a legally meaningful way. The process occurs in two steps: (1) analyze the seed case; (2) generate legally relevant derivative hypotheticals. Step one is accomplished by the CASEANALYSIS module and results in the case-analysis-record described in the previous section. To recall, this is like a "legal diagnosis". Step two is accomplished by the HYPO-GEN module which, given high-level argument goals (e.g., generate a slippery slope sequence to refute side 1's position), uses the case-analysis-record and heuristics like the following to generate hypotheticals derived from the seed case:

H1. Pick a near-miss dimension and modify the facts to make it applicable.
H2. Pick an applicable dimension and make the case weaker or stronger along that dimension.
H3. Pick a dimension related to one of the applicable dimensions and apply one of the other heuristics, particularly 1 or 2.
H4. Pick an applicable dimension and make the case extreme with respect to that dimension.
H5. Pick a target case that is a win and, using 1 and 2, move the seed case toward it to create a near win.

In order to illustrate these methods, we will use the following hypothetical case, Widget-King v. Cupcake, whose facts are as follows:

Plaintiff Widget-King and defendant Cupcake are corporations that make competing products. Widget-King has confidential information concerning its own product. Cupcake gained access to Widget-King's confidential information. Cupcake saved expense developing its competing product.
The parts of the case-analysis-record for Widget-King v. Cupcake that are relevant for the following discussion are:

applicable dimensions: competitive-advantage-gained
near-miss dimensions: secrets-voluntarily-disclosed; vertical-knowledge
relevant CKB cases: Telex v. IBM

H1. Enable a near-miss dimension: To make a hypothetical out of a fact situation according to this heuristic method, HYPO selects a near-miss dimension and "fills in" the missing prerequisites. HYPO instantiates objects and makes appropriate cross references among objects' slots so that the missing factual predicates are satisfied. For example, secrets-voluntarily-disclosed would apply to Widget-King but for the fact that the confidential information had not been disclosed to anyone. The program instantiates, let us say, five disclosures and sets the subject of the disclosures to be the confidential information. As discussed below, the number of disclosures, five, may be derived from an actual case that the program is considering in the context of making up the hypothetical, or it may be somewhat arbitrarily chosen by the program from within the range of the dimension.

H2. Make a case weaker or stronger: HYPO generates a derivative hypothetical weaker or stronger than the seed case by using the information it knows about dimensions. It can make a case weaker or stronger in two ways: (1) independently of the "case law" represented by the CKB; or (2) based on the CKB, using a weak form of analogy. To accomplish a CKB-independent strengthening/weakening, HYPO simply changes the values of a focal slot in the manner specified by the direction-to-strengthen slot; the amount of change is somewhat arbitrary. To accomplish a CKB-based modification, for instance to strengthen, HYPO first chooses a case that (a) shares the dimension being manipulated, and (b) is further along the dimension in the stronger direction.
HYPO then adjusts the values of the focal slots of the seed in the stronger direction so that the derivative case is stronger than the "precedent" chosen from the CKB. These changes can involve numerical, symbolic or Boolean values. For symbolic values, this means using a partial ordering on values. Modifications can involve more than one focal slot, for instance a ratio. For example, given our fact situation involving Widget-King and Cupcake, which involves some expenditure of money by Widget-King for product development, the Telex v. IBM case in the CKB is relevant. In Telex the ratio of plaintiff's to defendant's expenditures was 2:1 (and the plaintiff won). So to strengthen Widget-King's case, change the ratio of Widget-King's to Cupcake's expenses to be at least 2:1. An example of such ratio manipulation can also be found in [McCarty & Sridharan, 1981]. Even a simple change in a single numerical focal slot value can have serious legal implications. Again consider our Widget-King case, as modified by the introduction of 5 disclosees, and make it weaker along the secrets-voluntarily-disclosed dimension by using cases from the CKB. HYPO raises the number of Widget-King disclosees from 5 to 150 based on Midland-Ross, which was decided for the defendant because there were too many disclosees (100); now Widget-King has passed the 100-disclosee threshold. Note that Widget-King could still rely on Data-General and argue that since the plaintiff won in that case (with 6000 disclosees), it should still win with only 150. HYPO could make the case weaker still by increasing the number of disclosees near or above 6000, the highest value in the CKB, or even greater (in a CKB-independent way) to the highest value allowed by HYPO.

H3. Generate a hypo on a related dimension: There are certain relations among dimensions in the HYPO program. For example, two dimensions may conflict.
That means there is a particular case to which both dimensions apply which would have been decided for the opposing party had only one of the dimensions applied. The dimensions disclosures-subject-to-restriction and secrets-voluntarily-disclosed conflict with one another in the Data-General case. With its 6000 disclosees, the case would have been decided for the defendant but for the fact that the disclosures were subject to restriction. A hypothetical on a related dimension can be generated by taking the seed case and adding facts sufficient to make the related dimension apply to it, in a manner similar to that with heuristic H1. For example, the Widget-King case, as modified by H1 and H2 above, can be further modified so that disclosures-subject-to-restriction applies, by making all of the disclosures subject to nondisclosure agreements. In this example, the related dimension is also a near-miss dimension, but that need not always be true. A hypothetical generated on a conflicting dimension is interesting because it is an example of a case where, at least arguably, facts associated with one dimension can override the effects of the other dimension's facts.

H4. Examine an extreme case: To generate an extreme case, HYPO simply changes the value of a focal slot to be an extreme of its range of values. This can also be done in either a CKB-based or CKB-independent manner. The former method pushes the slot value to the extreme actually existing in a case in the CKB; the latter simply pushes the slot value to its permissible extreme. For instance, the extreme case on the strongest end of the secrets-voluntarily-disclosed dimension for Widget-King is the facts as stated above, with the exception that there are 0 disclosees. The other extreme is the maximum value for the number of disclosees, which in the CKB is 6000 and which in HYPO is 10,000,000.
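Heuristic H4's two variants reduce to picking either a CKB landmark or a range endpoint for the focal slot. A hedged sketch, using only the disclosee figures quoted in the text (0 and 10,000,000 as the dimension's range; 100 and 6000 as the CKB values); the function and variable names are our invention:

```python
# Focal-slot extremes for secrets-voluntarily-disclosed, per the text.
DIMENSION_RANGE = (0, 10_000_000)   # permissible extremes in HYPO
CKB_VALUES = [100, 6000]            # Midland-Ross and Data-General disclosees

def extreme_case(strongest=True, ckb_based=False):
    """H4 sketch: push the focal value to an extreme.
    strongest=True favors the plaintiff (fewer disclosees)."""
    if strongest:
        return min(CKB_VALUES) if ckb_based else DIMENSION_RANGE[0]
    return max(CKB_VALUES) if ckb_based else DIMENSION_RANGE[1]
```

So the CKB-independent strongest extreme is 0 disclosees, while the CKB-based weakest extreme stops at Data-General's 6000, and the CKB-independent weakest extreme goes all the way to 10,000,000.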
H5. Manipulating a near win: A near-win hypo is one in which a seed fact situation is weak on behalf of, let us say, the plaintiff. It can be "moved" in the direction of a real target case from the CKB that has been decided in favor of the plaintiff. Using methods H1 through H3, HYPO endows the seed situation with the facts to make the case strong for the plaintiff. As a result, the target case becomes relevant to the seed hypothetical, and an argument can be made, based on the pro-plaintiff target case, that the hypo should be decided in favor of the plaintiff. Correspondingly, a near-win hypo can start with a pro-plaintiff fact situation and be moved in the opposite direction, away from the pro-plaintiff seed case or toward a pro-defendant target case. For example, consider two cases: Telex, which we have already seen above, and Automated Systems, where the court held in favor of the defendant because the confidential information that the plaintiff wanted to protect was about a customer's business operations; that is, the knowledge was about a "vertical market". Using the Telex case as a seed, and Automated Systems as target, HYPO could make Telex a near win by making IBM's confidential information be vertical knowledge (i.e., be about a customer's business operations). As a result, an argument could be based on Automated Systems that, in the hypo, defendant Telex should win.

6. A Heuristic HYPO Exploration

HYPO's heuristically guided generation of hypotheticals makes it possible to explore a fact situation's legal significance in a manner not unlike the sequence of hypotheticals in the creche example from the Lynch case oral argument. To see how this works, let's start with a hypothetical fact situation (a) based on the original Widget-King case but modified so that the confidential information is about customer business operations. This modification makes the Automated Systems case apply in favor of the defendant.
The following hypothetical modifications bolster the plaintiff's position:

Q: Suppose (b) Widget-King's alleged trade secret information, even though it was vertical knowledge, helped it to produce its competing product in half the time, as in the Telex case?

Q: Suppose (c) the vertical knowledge allowed Widget-King to bring its product to market in one fourth the time and at one fourth the expense.

Q: Suppose (d) that Cupcake paid a large sum to a former employee of Widget-King to use the information to build a competing product, as Telex did. Wouldn't the information be protectible as a trade secret then?

294 / SCIENCE

In this example, heuristic methods 1, 2, 3 and 5 are at work. Near miss dimension vertical-knowledge is used with method 1 to create the initial hypo (a). The modification at (b) is produced by methods 5 and 2, using the Telex case as the target. Method 2 is used to make the stronger hypo at (c). Methods 5, 1 and 2 are used to create the hypo at (d), where the near miss dimension is common-employee-paid-to-change-employers.

Having reached step (d) in the above extended example, a hypothetical has been constructed that is fairly strong for the plaintiff. But plaintiff's position can be eroded by moves along other dimensions:

Q: Suppose (e) that Widget-King made disclosures to 100 outside persons, as in the Midland-Ross case.

Q: Suppose (f) all of the disclosees entered into nondisclosure agreements, as in Data-General. Under that case, Widget-King (g) could have made restricted disclosures to as many as 6000 people.

Q: What if (h) Widget-King made restricted disclosures to 10,000,000 people. Is it still a secret? (Not an idle hypothetical in this day of mass marketing of software.)

Q: Are the nondisclosure agreements enforceable? What did all of these people get in exchange for agreeing not to disclose the secret? Suppose (i) that the disclosees did not receive anything of value for entering into the nondisclosure agreements?
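A chain of hypo modifications like the one above can be represented programmatically. The sketch below is illustrative only (the function name, slot names, and values are assumptions, not HYPO's internals): each heuristic move carries the methods it uses and a set of slot changes, and moves are applied in sequence to accumulate the labelled hypotheticals.

```python
def apply_moves(seed, moves):
    """Each move is (methods_used, change), where change is a dict of
    slot updates; returns the labelled sequence of hypos (a), (b), ..."""
    hypos, facts = [], dict(seed)
    for label, (methods, change) in zip("abcdefghi", moves):
        facts = {**facts, **change}          # apply the move to a copy
        hypos.append((label, methods, dict(facts)))
    return hypos

# A rough rendering of the pro-plaintiff chain (a)-(d) above:
seed = {"info_kind": "technical", "dev_time_saved": 0.0}
moves = [
    ((1,), {"info_kind": "vertical"}),               # (a) near miss move
    ((5, 2), {"dev_time_saved": 0.5}),               # (b) toward Telex target
    ((2,), {"dev_time_saved": 0.75}),                # (c) stronger still
    ((5, 1, 2), {"employee_paid_to_change": True}),  # (d) new near miss
]
for label, methods, facts in apply_moves(seed, moves):
    print(label, methods, facts)
```

Because every hypo records which methods produced it, the sequence itself can later be replayed or analyzed, much as the next paragraph analyzes hypos (e) through (i).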
With secrets-voluntarily-disclosed as near miss dimension and the Midland-Ross case as target, the hypo at (e) can be generated from (d) using methods 5, 1 and 2. (f) represents a method 3 move to a conflict dimension, disclosures-subject-to-restriction. We assume that the Data-General case has been recognized as a conflict example; otherwise this could be regarded as a method 5 move with Data-General as a target. Using method 4, the hypo at (g) has been moved to the extreme value in Data-General, and at (h) to the extreme of the range of the dimension. The program does not know that a secret told to 10,000,000 people is not a secret, even if they promise not to tell anyone else, but the program does know that two dimensions conflict and that moving to an extreme on one dimension may cause the conflict to be moot. Having exhausted the possibilities for weakening the case along the secrets-voluntarily-disclosed dimension, the program moves, using methods 1 and 2, to a dimension that became a near miss as soon as nondisclosure agreements came into the hypo at (f): agreement-supported-by-consideration.

The above line of hypotheticals makes legal sense. One can imagine the scene at 11 p.m. in the oak-paneled library at 14 Wall Street as two first-year associate attorneys, assigned to preparing an initial memorandum on the strengths of Widget-King's claim against Cupcake, play devil's advocate with the facts by spinning off similar hypotheticals. A program that does strategic planning needs to pose and manipulate hypotheticals in just this way, and that is what HYPO does.

One can also analyze the sequence of hypotheticals about the civic creche display from the Lynch case oral argument in terms of the dimensional model and heuristics for building hypos.
The justices make the basic fact situation weaker and stronger along a dimension that might be called focus-of-attention: they remove all of the secular images, leaving only the religious one; they physically shrink the symbol to an extreme and relegate it to a corner; they remove the religious symbols and leave the secular ones. They weaken plaintiff's case along the dimension of civic-content-message by moving it to a municipal art museum or the frieze of a courtroom. They compare the case along the dimension of government-involvement to an extreme example, the Pope's mass on the Mall.

7. Conclusion

In this paper, we have discussed an aspect of case-based reasoning (CBR) involving the use of hypothetical cases. In particular, we have discussed how our CBR legal reasoning program HYPO currently uses case examples, "dimensions", and five or so heuristic methods to compare the legal consequences of facts and to generate hypothetical fact situations to augment and explore its case knowledge base (CKB). This is done by consideration of a sampling of hypotheticals, which we have called the "heuristic constellation". The hypos in the constellation help accomplish analysis tasks, such as testing the sensitivity of positions and relating a fact situation to significant past cases, and argument tasks, such as generating a slippery slope to refine or refute an argument and controlling the course of argument. HYPO generates the hypos in the constellation with heuristics involving (1) strengthening/weakening of a case; (2) taking the case to extremes; (3) enabling a near miss case; (4) manipulating a near win; and (5) examining a case along related (e.g., conflicting) dimensions.

COGNITIVE MODELLING AND EDUCATION / 295

APPENDIX

Telex Corp. v. IBM Corp., 510 F.2d 894 (5th Cir., 1975). Held for plaintiff IBM on trade secrets misappropriation claim where Telex gained access to IBM's confidential product development information by hiring an IBM employee, paying him a large bonus to develop a competing product. The employee used development notes he brought from IBM. Telex saved time and expense developing the competing product.

Data General Corp. v. Digital Computer Controls, Inc., 357 A.2d 105 (Del. Ch. 1975). Held for plaintiff Data General on trade secrets misappropriation claim where Data General disclosed its technical product development info to 6000 persons, all of whom were subject to nondisclosure agreements.

Midland-Ross Corp. v. Sunbeam Equipment Corp., 316 F.Supp. 171 (W.D. Pa., 1970). Held for defendant Sunbeam on trade secrets misappropriation claim where Midland-Ross disclosed its technical product development info to 100 persons.

Automated Systems, Inc. v. Service Bureau Corp., 401 F.2d 619 (10th Cir., 1968). Held for defendant SBC on trade secrets misappropriation claim where Automated-Systems' confidential info was about customer's business operations (i.e., vertical info).

Table 1: Sample Cases from Case Knowledge Base.

Secrets-voluntarily-disclosed: Significance: Plaintiff's (P's) position stronger the fewer persons to whom secrets disclosed. Prerequisites: P and Defendant (D) compete; D had access to P's product information and gained some competitive advantage; some disclosures. Focal slot: Number of disclosees. To Strengthen P: Decrease number of disclosees. Range: 0 to N. Cases indexed: Midland-Ross, Data-General.

Competitive-advantage-gained: Significance: P's position stronger the greater competitive advantage gained by D. Prerequisites: Competition; access to info; D saved some expense. Focal slot: Development expense saved. To Strengthen P: Increase expense saved by D. Range: 0 - 100%. Cases indexed: Telex v. IBM.

Disclosures-subject-to-restriction: Significance: P's position stronger the fewer disclosees not subject to nondisclosure agreements. Prerequisites: Competition; access to info; some disclosures and nondisclosure agreements. Focal slot: Number of disclosees subject to restriction. To Strengthen P: Increase percentage of disclosees subject to restriction. Range: 0 - 100%. Cases indexed: Data-General.

Vertical-knowledge: Significance: P's position stronger if information technical, not vertical. Prerequisites: P and D compete; D had access to P's product information; info about something. Focal slot: What information is about. To Strengthen P: Make information about technical development of product. Range: {technical, vertical}. Cases indexed: Automated Systems, et al.

Table 2: Sample Dimensions.

References

[1] Kevin D. Ashley. Modelling Legal Argument: Reasoning with Cases and Hypotheticals - A Thesis Proposal. Project Memo 10, The COUNSELOR Project, Department of Computer and Information Science, University of Massachusetts, 1986.

[2] Kevin D. Ashley and Edwina L. Rissland. Toward Modelling Legal Argument. In Antonio A. Martino and Fiorenza Socci Natali, editors, Atti preliminari del II Convegno internazionale di studi su Logica Informatica Diritto, pages 97-108, Consiglio Nazionale delle Ricerche, Istituto per la documentazione giuridica, Florence, Italy, September 1985.

[3] Li-Min Fu and Bruce G. Buchanan. Inductive Knowledge Acquisition for Rule-Based Expert Systems. Report KSL-85-42, Knowledge Systems Laboratory, Department of Computer Science, Stanford University, 1985. To appear in AI Journal.

[4] Paul Gewirtz. The Jurisprudence of Hypotheticals. American Bar Association Journal, 67:864-866, 1981.

[5] M. R. Gilburne and R. L. Johnston. Trade Secret Protection for Software Generally and in the Mass Market. Computer/Law Journal, III(3), 1982.

[6] L. Thorne McCarty and N. S. Sridharan. The Representation of an Evolving System of Legal Concepts: II. Prototypes and Deformations. In Proceedings of the Seventh International Joint Conference on Artificial Intelligence, International Joint Conferences on Artificial Intelligence, Inc., Vancouver, B.C., August 1981.

[7] Edwina L. Rissland. Argument Moves and Hypotheticals. In Charles Walter, editor, Computing Power and Legal Reasoning, West Publishing Co., St. Paul, MN, 1985.

[8] Edwina L. Rissland. Hypothetically Speaking: Experience and Reasoning in the Law. In Proceedings, First Annual Conference on Theoretical Issues in Conceptual Information Processing, Georgia Institute of Technology, Atlanta, GA, March 1984.

[9] Edwina L. Rissland, Bruce G. Buchanan, Paul Rosenbloom, and H. T. Ng. Intelligent Example Selection: Empirical Experiments with Near-Misses. 1986. Submitted for publication.

[10] Edwina L. Rissland, E. M. Valcarce, and Kevin D. Ashley. Explaining and Arguing with Examples. In Proceedings of the Fourth National Conference on Artificial Intelligence, American Association for Artificial Intelligence, Austin, TX, August 1984.

[11] The Complete Oral Arguments of the Supreme Court of the United States. University Publications of America, Frederick, MD.
CAN A SYSTEM BE INTELLIGENT IF IT NEVER GIVES A DAMN?

Thomas Edelson
Department of Computer Science
Georgetown University
Washington, DC 20057

ABSTRACT

I explore whether all types of cognitive faculties are possible in a system which does not also have "affective faculties" such as motives and emotions. Attention is focused on the human cognitive faculty of understanding the human affective faculties, and it is suggested that we do this in part by using ourselves as models of other people. Therefore any system which could perform as well as we do on this task would incorporate much knowledge about human affective faculties, and would probably have embedded within it a "model human" which would possess affective faculties. However, this would not necessarily manifest itself directly in the embedding system's behavior; thus the embedding system, unlike the embedded one, need not have affective faculties. That's a relief, because we might prefer that it didn't.

Keywords: emotions; feelings; metacognition; motives; common-sense psychology; folk psychology; naive psychology; Haugeland, John.

1. Introduction

Can there be thinking without caring, reason without emotion, intelligence without engagement? I will argue that:

(1) There are some types of thinking which probably can't be separated from caring, in the sense that we won't know how to build systems that can do those kinds of thinking until we know how to build systems that care.

(2) However, saying that we will need to know how to build caring systems does not imply that we must actually do so; it is possible to build thinking systems which do not care.

(3) If I'm right about (1), then people interested in practical uses of AI had better hope that I'm right about (2) as well. The alternative, that certain kinds of intelligence are possible only in caring systems, is unattractive, because we might not want applied AI systems to be caring systems.
Haugeland (1985) seems to agree with my claim (1); indeed, much of his last chapter can be read as an argument for it. However, he disagrees with claim (2), and does not discuss (3). In this section I will clarify these claims; then I will turn to arguing for them one by one.

I will drop the informal term "caring" and speak instead of "engaged systems" and systems which have "affective faculties". An engaged system is one for which it makes sense to talk about its motives. In explaining what it does, we find it natural to say things like "it did x because it wanted to accomplish y". Furthermore, not all these motives are simply subgoals on the way to accomplishing some one thing -- they are not all means to a single end. Rather, the system is capable of having multiple, independent motives, each of them an "end in itself" rather than merely a means. Also, it is capable of change over time in its set of motives: it can gain new motives and lose old ones. A corollary of this is that observers will need to make inferences from its behavior in order to figure out what its motives are at any particular time.

It should be clear that human beings are engaged systems. I hope it is also clear that almost no computer-based systems, so far at least, are engaged systems (nor are they intended to be). We may say of a question-answering system that it "wants" to figure out the correct answer, of a chess-playing program that it "wants" to win (Haugeland, 1985), of an operating system that it "wants" to carry out our commands. But these usages are metaphors, not literal statements; such systems do not have the complexity of motives required for an engaged system.[2] They acquire new motives only as intermediate goals, or because we assign them.

I have defined an "engaged system" as one with multiple, independent, changeable motives. What about "affective faculties"?
This is intended to be a catchall term for everything in the mental domain which is not strictly rational or cognitive. Thus belief and thinking are not affective, but motives and emotions are.

[2] A system with one motive doesn't really have any motives. Compare Haugeland on "semantic intrigue" (pp. 216-217).

From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved.

Sloman and Croucher (1981) argue convincingly that (what I call) an engaged system will also have other affective faculties, such as emotions. I accept this conclusion; furthermore, I note that their argument equally supports the converse claim, that a system with emotions must be an engaged system. Therefore I will use "engaged system" and "system with affective faculties" almost interchangeably.

2. Cognition About Affect

In this section I argue for the claim that any reasonably complete model of human cognitive faculties will include a model of human affective faculties: the two domains cannot be separated. In an excessively simple form, the argument is: any reasonably complete model of human cognitive faculties must include the ability to reason about human behavior and the faculties which produce it, including the affective faculties; for a computer system to do this, it will need to include a model of the affective faculties.

The problem with the argument as stated is that it applies to too many things. One could equally well say: "Any reasonably complete model of human cognitive faculties must include the ability to reason about the weather and the processes which produce it; for a computer system to do this, it will need to include a model of the global atmosphere. The cognitive domain cannot be separated from the meteorological domain." If the cognitive domain is inseparable from the affective domain in only the same way that it is inseparable from the meteorological domain, that's not such a big deal.
It's just another way of saying that understanding of human affect and motives -- more generally, understanding of "human nature" -- is part of the "common-sense knowledge" which AI systems of the future are going to need (Lenat et al., 1986). But I claim that our knowledge about "human nature", including the affective faculties, is significantly different from other kinds of common-sense knowledge.

I want to suggest that when humans perform cognitive tasks which happen to involve thinking about human affect, then something special is going on. In thinking about humans, we take special advantage of the fact that we are humans. Therefore, to reproduce this particular set of cognitive capacities in a system which is not itself human will present a special (though not necessarily insuperable) problem for AI.

What is this special process which (I claim) is involved in thinking about people? It consists of using oneself as a model of the person one is thinking about. I will now ask you, the reader, to provide your own example of this process, by acting as an experimental subject. All you have to do is to answer a question about how someone would feel in a fictional situation.

Imagine an engineer called in by her boss after a space shuttle accident. This engineer had argued that it was not safe to launch the shuttle, and it now appears that her doubts were well-founded. The manager asks the engineer not to reveal her pre-launch objections; if asked by the board of inquiry, to say that she had fully concurred with the recommendation to launch. The manager adds that the company will not forget who was, and who was not, a team player at this critical time. How does the engineer feel?

I hope you agree that most people in such a situation would feel dismay, anger, and fear. But how do you know that? Is it an inductive inference from similar situations which you have seen people in, or been in yourself? It could be that; it probably is partly that.
But I would suggest that it is also something else. In answering a question about hypothetical people in a hypothetical situation, we imagine ourselves into that situation. And "imagining" here is not a purely cognitive process. It doesn't just involve remembering how our (or others') affective faculties functioned in the past; it involves our affective faculties now. If you imagine yourself into a situation like that of the engineer, you may actually feel her reactions yourself.

I am proposing that the process of imagining a situation literally activates some of the same brain processes which occur in response to a real situation; somehow we are able to "fake" the neural messages which would have come from our sensory apparatus. As a result, at least part of our brain responds exactly as it would in the real situation (for indeed, since the messages it is receiving are the same, it can't tell the difference). Therefore the internal process which normally corresponds to getting angry (for example) may actually occur. We observe this going on in ourselves, conclude that it would happen in the imagined situation, and report accordingly.

What I am offering is a piece of speculative psychological theory. I have been unable to find any mention of this theory in the psychological literature. There are, however, interesting parallels between what I am suggesting and some work on mental images; for example (Finke, 1986).

3. A Computational Analogy

In this section I will try to make my psychological hypothesis more concrete, by suggesting an analogy between it and a computational process. I suggest that what goes on in a person who is imagining himself in a situation is something like what goes on in a "virtual machine" facility such as IBM's VM/370 (Buzen and Gagliardi, 1973; IBM, 1972).

A virtual machine is an imaginary computer being simulated by a real one; the VM/370 control program feeds inputs to the virtual machine in such a manner that these inputs appear to be coming directly from real input devices. Sometimes they are in fact coming from devices of the same general type, but sometimes not: when a virtual machine reads from its "card reader" it may be receiving data which in fact has never been on a card, but was instead generated by some other virtual machine.

Similarly, when a virtual machine is ready to produce output, it executes a Start I/O instruction, which would, in a real machine, cause an I/O channel to begin the output operation. Since this is a virtual machine, however, the output request is intercepted by the control program, which then simulates the operation. What is actually done with the data may be quite different from what the virtual machine "thinks" it is doing.

The analogy is imperfect: for one thing, an unprogrammed 370, real or virtual, has no affective faculties (and no human-like cognitive faculties, either). For another thing, the purpose of using a virtual machine is usually not to try to figure out what another machine would do. Nevertheless, the VM/370 analogy is particularly apt in at least one way: the program in the virtual machine is not in fact merely simulated, in the sense of being carried out by a software interpreter. It is executed directly by the hardware (or microcode) of the real 370. Only the inputs and outputs are simulated or redirected.

In a similar way, I propose, a human being may generate inputs (representing an imagined situation) which don't come directly from the outside world, but which are fed to affective faculties just as if they had; these faculties generate outputs which would normally lead to action, but which are trapped and redirected (fed as inputs back into the part of the brain which is managing the simulation experiment).

Since the virtual machine's program is executed directly by the real machine, the VM/370 software system need not, and does not, include a software interpreter for the 370 instruction set. Similarly, if human beings use the mechanism I am suggesting, then they can "model" the affective faculties of other people (or themselves), without having a set of mental representations of how those affective faculties work. I don't have to know what the causal chain is that leads from sensory inputs to motor outputs; I can invoke that causal chain by providing bogus sensory input, and trap and observe the resulting output signals, without being able to describe what goes on in between.

If I am right, there is indeed a significant difference between common-sense knowledge about people and common-sense knowledge about weather. The former, unlike the latter, may in part not be "knowledge" in the usual sense at all, that is, information encoded somehow in memory. And we didn't have to learn it.

We certainly don't rely only on this method. We also do have and use ordinary, explicit knowledge about people. The "virtual person" method can be used when, lacking any more specific information, we go on the assumption that others would act the same way we would. When we have information from experience about how a specific person acts in a given kind of situation, we are likely to get a better prediction by using that information than we could get by putting ourselves in her place.

4. How to Model Affect

If my speculation about how humans understand each other is correct, how then can a computer system be programmed to equal human performance at the task of understanding humans? There seem to be two possible approaches.

The first approach is to build an engaged system: one which has complex multiple motives, feelings, and the rest of the affective faculties. Then it can do the "virtual person" trick to understand people by analogy with itself.

The other approach is to build a system which does not share the affective faculties, but reasons about them in the same way that it reasons about any other topic. This would require giving it a knowledge base about human nature which is more complete than the one which actual humans have. (Humans do have explicit knowledge about human nature, but the machine would need more of it to make up for its lack of non-explicit "knowledge".)

It may seem that the two approaches would lead to quite different kinds of models of the affective faculties. However, I want to argue in this section that they probably would not; that if the two approaches are both successfully pursued, they will probably produce quite similar models, differing only in the way that the models are connected to the outside world.

Why would we initially expect the models to be different? Because designers following one approach would be most naturally led to look at the affective faculties "from the inside", while those following the other approach would look at them "from the outside". If you are trying to build a system which actually has human-like affective faculties, then you learn what you can about the way those faculties work in human beings, and build a system which works the same way. By contrast, if your objective is merely to give your system a theory about human nature which it can use to make predictions and answer questions, then it is most natural to think of the system as simply observing human beings from the outside, treating them as black boxes; either the system or the system builder would then need to come up with a theory which predicts the outputs of those black boxes, given their inputs.

However, the important question is what works; and it may turn out that the "black box" approach just doesn't work well enough. What this approach amounts to is trying to determine a function by looking at its behavior over some large but finite set of input values.
If the function is allowed to have arbitrary complexity, then any such set of data underdetermines what the function is. And therefore mistakes may be made in predicting its future behavior. The goal, of course, is not perfect prediction but merely prediction as good as that achieved by humans (which certainly isn't perfect). But if, as I hypothesize, humans use the "virtual person" approach, which gives them access to the inside of the box, it may turn out that this enables them to do better on some predictive tasks than the black box approach ever could. Also, even if the black box approach could work in theory, it may still be easier in practice to construct a working predictor by taking advantage of knowledge about the inner mechanisms of human affective faculties.

It is possible that the black box approach will, even in practice, produce acceptable performance. If so, it could still turn out that the model constructed by this approach is similar to the one created from knowledge of the inner mechanisms. Observation from the outside may lead to a computational model which is, in fact, similar in structure (perhaps even mathematically isomorphic) to one created the other way, so that the black box modeler will have modeled the inner mechanisms without knowing it. Admittedly, there is the residual possibility that the black box approach will work, and produce a model quite different from what you'd get if you worked with knowledge of the inner mechanisms. I can't rule that out; time will tell.

5. Cognition Without Affect? Yes

I have now completed the argument for my first claim. That argument may be summarized as follows: Any reasonably complete simulation of the human cognitive faculties must include the human ability to reason about humans, including their affective faculties; thus it will include a model of their affective faculties.
Humans probably do such reasoning, in part, by using themselves as models of the other (what I have called the "virtual person" technique), thus having and using access to the inner mechanisms of the affective faculties, rather than only observing them from the outside. It is probable, though not certain, that any system which matched our performance on these tasks would do it by having a model which also was based on, or at least unintentionally resembled, these inner mechanisms of the affective faculties. Therefore, in designing such a system, the designers would probably have learned so much about the affective faculties that they would be able to build a system which had affective faculties of its own -- an engaged system -- if they wished to do so. Now I will argue for the second claim: that it is nevertheless possible to build a system which can reason as well as we do about human affective faculties, but which does not itself share them. How do we tell whether a system has affective faculties? By observing its behavior. Here's a rather simplified example: Suppose we tell a system that we don't think its performance is as good as some other system's. Suppose it replies, "If you think that one is better, why don't you use it?" Suppose, more importantly, that it then refuses to answer any questions for us until we retract our statement. Then we would be inclined to say that this system has a self-image that it cares about, and that it is capable of anger. Given such criteria for what counts as an engaged system, does any system which contains a model of an engaged system necessarily have to be one itself? Clearly not. The question is not how lifelike the model is internally, but how it is connected to the system as a whole. For example, contrast the system described above, which is sensitive to criticism (call it Fred) with one that simply understands how humans can be sensitive to criticism (call it Flo). 
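The contrast just introduced can be sketched in code. This is a deliberately crude illustration (the class names, the one-motive "affect model", and the string-matching trigger are all my own assumptions, not anything from a real system): both systems contain the same internal affect model, but only Fred couples its output to his own behavior.

```python
class AffectModel:
    """A crude 'model human': criticism produces anger."""
    def react(self, remark: str) -> str:
        return "anger" if "criticism" in remark else "neutral"

class Fred:
    """Engaged system: the model's reaction drives Fred's own behavior."""
    def __init__(self):
        self.affect = AffectModel()
        self.sulking = False
    def tell(self, remark: str) -> None:
        if self.affect.react(remark) == "anger":
            self.sulking = True           # retaliates: refuses to cooperate
    def answer(self, question: str):
        return None if self.sulking else "an answer"

class Flo:
    """Understands affect without sharing it: the model's output is
    trapped internally and used only to draw conclusions about people."""
    def __init__(self):
        self.affect = AffectModel()
    def tell(self, remark: str) -> None:
        pass                              # criticism has no behavioral effect
    def explain(self, remark: str) -> str:
        return f"a person would feel {self.affect.react(remark)}"
    def answer(self, question: str) -> str:
        return "an answer"

fred, flo = Fred(), Flo()
for system in (fred, flo):
    system.tell("criticism of your performance")
print(fred.answer("q"))   # None: Fred sulks after being criticized
print(flo.answer("q"))    # "an answer": Flo is behaviorally unaffected
print(flo.explain("criticism of your performance"))
```

In the sketch, Flo's embedded `AffectModel` plays the role that the "Moe" model plays in the discussion that follows: identical machinery, decoupled from the embedding system's outward personality.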
If you tell Flo a story about a person who is criticized and then refuses to cooperate, Flo can explain why this occured. How? Very likely because Flo contains a model (call it Moe) which has, as humans have, the tendency to be sensitive to criticism. Flo finds out how a person would react to a critical remark by making that remark internally to Moe and seeing how Moe responds. But Moe's response is not passed directly to us; it is only used by Flo to draw a conclusion. If we make a critical remark to Flo itself, it doesn't respond as either Fred or Moe would. Perhaps it ignores the input entirely, judging it irrelevant to its programmed goals; perhaps it takes our word for the statement and stores it away in its knowledge base; but it does not retaliate or show any other general disturbance in its question-answering behavior. Therefore Flo is not an engaged system, though it contains a model of one.* Haugeland (1985), in a section titled "Wouldn't a Theory Suffice?", uses essentially the same example to argue for a conclusion opposite to the * To refer back to the analogy with virtual machines, Flo is not like a VM/370 system, because here the virtual machine (Moe) has a different architecture from that of the real machine in which it is embedded (Flo). COGNITIVE MODELLING AND EDUCATION / 30 1 one I have just argued for.;\::' Agreeing that Flo has no affective faculties, he wants to conclude that Flo doesn't really understand human affective faculties either -- and thus, more generally, a system which didn't have the faculties couldn't understand them. The crux of his argument is as follows: I, . . . Flo has to give Moe the whole story and then simply plagiarize his response. In other words, the idea of Flo understanding the stories cognitively, with .occasional reference to her theory of selves, is a total scam. Moe (in whom there is no such segregation [between cognition and affect]) is doing all the work, while Flo contributes nothing." But Moe is part of Flo! 
Haugeland's argument implicitly rejects behavioral criteria for determining whether Flo understands or not, though Haugeland accepts such criteria elsewhere in the book (see p. 215). If we ask Flo questions and get answers which seem to indicate understanding, then Flo understands, no matter if she consults a whole platoon of inner specialists in the process.

I conclude that cognition and affect can be separated -- or more precisely, decoupled. A system which can understand the affective faculties needs to contain a model of them; nevertheless, it is possible to arrange matters so that these affective faculties do not directly manifest themselves in the personality which the system presents to the outside world.

But when I say "it's possible", I don't mean to imply that it's just a matter of rolling up our sleeves and doing it. All I really mean is that it hasn't yet been convincingly shown not to be possible. There are serious unsolved problems which face any attempt to create a system which can really understand human affect -- whether or not you want it to share those qualities as well as understand them. The attempt may yet be abandoned as infeasible, or just not worth the enormous effort -- if not as impossible in principle.

6. What Do We Want?

So far as we can tell, then, building a system which understands human affective faculties without sharing them is just as feasible (or infeasible) as building one which does share them. Given the choice, which type of system would we rather have? Someone will want to try to build an engaged system, at least for research purposes: to test theories about how humans work -- and, of course, simply to show that it can be done. What about applied AI? There, I suggest, one would usually rather not have a system which displayed human-like affective faculties. Do you want to have to worry about the possibility that your expert system will refuse to help you, because you have insulted it? Which would you rather deal with, Fred or Flo? The choice is up to you.

** See pp. 242-243. I have borrowed the names.

To use Haugeland's own terminology, in this passage he abruptly and temporarily abandons his otherwise preferred stance as a "skeptic" about AI, and speaks instead as a "debunker" -- just like Searle (1980). In effect he is saying: "It may look like understanding from the outside, but when you see the way it works inside, you'll realize that it isn't really understanding after all."

REFERENCES

Buzen, J. P., and U. O. Gagliardi. "The Evolution of Virtual Machine Architecture". In Proc. National Computer Conference, New York, June, 1973, pp. 291-299.

Finke, R. "Mental Imagery and the Visual System". Scientific American 254:3 (March 1986) 88-95.

Haugeland, J. Artificial Intelligence: The Very Idea. Cambridge, Massachusetts: MIT Press, 1985.

IBM Corporation. IBM Virtual Machine Facility/370 -- Planning Guide. Pub. #GC20-1801-0, 1972.

Lenat, D., M. Prakash, and M. Shepherd. "CYC: Using Common Sense Knowledge to Overcome Brittleness and Knowledge Acquisition Bottlenecks". AI Magazine 6:4 (Winter 1986) 65-85.

Searle, J. "Minds, Brains, and Programs". Behavioral and Brain Sciences 3:3 (September 1980) 417-424.

Sloman, A., and M. Croucher. "Why Robots Will Have Emotions". In Proc. IJCAI-81, Vancouver, British Columbia, August, 1981, pp. 197-202.
Debugging User Conceptions of Interpretation Processes

M. J. Coombs*, R. T. Hartley* and J. G. Stell†
*Computing Research Laboratory, New Mexico State University
†Department of Computer Science, Manchester University, U.K.

ABSTRACT

The use of high level declarative languages has been advocated since they allow problems to be expressed in terms of their domain facts, leaving details of execution to the language interpreter. While this is a significant advantage, it is frequently difficult to learn the procedural constraints imposed by the interpreter. Thus, declarative failures may arise from misunderstanding the implicit procedural content of a program. This paper argues for a constructive approach to identifying poor understanding of procedural interpretation, and presents a prototype diagnostic system for Prolog.

1. Procedural Interference with Problem Specification

Specification ("declarative") languages have arisen out of different considerations from conventional "procedural" languages. Advocates of the former are concerned with the structure of problems, whereas those supporting the latter are concerned with the structure of solutions (Kowalski, 1979). Thus, declarative languages require constructs for problem decomposition and procedural ones for decomposing solutions.

While the declarative approach to programming is attractive, there are difficulties in keeping the concept pure. First, although applications which fit neatly into the control structure underlying a particular language will be easy to express, it will not be easy to write programs of any complexity outside its scope. To do this, a clear understanding of the constraints imposed by the inference engine on the expression of domain facts and rules will be required (see Sauers, 1985 concerning the effects of production system control schemes on knowledge representation).
Secondly, where ambiguity exists in the user's mind over the procedural semantics of the language, it will be difficult for him (or an automated debugging aid) to separate errors in specification from procedural errors. This will reduce the value of the language as a tool for programming with domain knowledge, in which all errors should be explained in terms of an omitted case or an incorrect representation.

The logic language Prolog, for example, solves problems by repeatedly decomposing them into simpler ones which must be finally solved by matching facts in a database. Decomposition is achieved by matching problem statements to rule conclusions, the conditions necessary for those conclusions representing sub-problems. If some sub-problem fails, Prolog backtracks to the last successful solution and seeks some alternative. Although this backtracking is conceptually simple, its effects can be complex and therefore difficult to anticipate. For example, a Prolog program may produce an unexpected result by successfully applying the first of two related rules on backtracking, rather than moving on after failure to apply the second rule. To understand this behaviour, the user must know that all possible instantiations of a given rule are tried before going on to an alternative. Lacking such knowledge, it is unlikely that the user will be able to induce the principle from the program text by symbolic execution because the poor procedural syntax provides few landmarks to help him mentally test hypotheses concerning backtracking. He may thus try to correct the program by re-writing the rules, so changing the specification, when all that is required is to re-order them to allow the desired rule to be taken first.

Since the text of a declarative language has to carry both specification and procedural functions for the user, errors in programs when viewed as specifications are difficult to isolate due to confusion with procedural errors.
This difficulty is not found in conventional languages where a separate specification can exist independent of the program. This paper thus advocates a programming aid to help isolate procedural difficulties with declarative languages (particularly those arising from misconceptions), allowing specification errors to remain for independent treatment.

From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved.

2. A Consultant for Debugging User Conceptions of Prolog Processes

The authors are developing a prototype consultancy system for debugging user conceptions of Prolog interpretation processes (Coombs and Alty, 1984). Prolog was selected because its predicate logic foundation promised to make it especially suitable for specification programming, yet considerable skill is actually required to master the procedural constraints imposed by the interpreter. Moreover, contrary to our expectations, experts continue to make similar errors to novices, although not with the same frequency.

Many of the major problems of understanding Prolog execution are related to backtracking. Even a simple program of two or three clauses may backtrack in complex ways which, if represented in full, would occupy many pages of trace. Such behaviour may be difficult to predict without a detailed mental model of processing. However, mental execution is difficult to perform accurately, given the lack of syntactic markers in Prolog text to serve as signposts and the need to relate information widely distributed in the sequence of execution events (Green et al., 1981).

During learning, users develop a variety of different conceptions concerning Prolog execution. These conceptions have a procedural component and a memory component. The latter forms a data structure, representing the current state of a problem solution, upon which the procedural component is formulated.
In conventional textbook descriptions, the memory component tends to be an OR or an AND/OR tree of goals. We have found, however, that novice Prolog programmers do not use these simple tree structures (Coombs, 1985; McAllester, 1985). Instead, novices build their procedural models based upon classification of program entities (e.g. goals and clauses) as succeeded, failed and under evaluation. This formally amounts to generating a sequence of mental AND trees, with subtrees added and deleted with satisfaction and failure of goals. This approach results in a different class of procedural problems from those following from textbook descriptions.

The two types of misconception described below - "try-once-&-pass" and "redo-body-from-left" - have the same underlying model, that of left-to-right execution of conjunctive goals corresponding to the sequential matching of nodes (goals) at a given level of an AND tree. "Try-once-&-pass" and "redo-body-from-left", which we will use as examples in the rest of the paper, are the two most common backtracking misconceptions. With the former, a rule is (incorrectly) failed completely after the failure of a single instantiation; with the latter, backtracking into a rule is (incorrectly) seen as proceeding left-to-right, taking the first subgoal to succeed rather than the last. These contrast with the correct procedure, in which all possible instantiations of a rule are tried before progressing to an alternative clause, with the rule subgoals being retried from right-to-left. For example, with the failure of the goal "c(1)" in figure 1, given the success of "h(1,2)" and "i(2)", "try-once-&-pass" would cause the second "b" rule to be tried, and "redo-body-from-left" would cause "d(X)" to be retried in the first "b" clause (followed by "e(2)"). Under correct execution, however, "d" and "e" would be retried from "e(1)"; only after all matches had been made would the second clause be taken.

1. a(X) :- b(X), c(X).
2. b(X) :- d(X), e(X).
3. b(X) :- f(X).
4. d(1).
5. d(2).
6. e(1).
7. e(2).
8. f(3).
9. c(3).

Figure 1. A simple Prolog program.

The consultant system is designed to be employed when the user has doubts about his view of program execution. The system's task is to explain such user misconceptions by building a modified interpreter (a "mal-interpreter") which reproduces the user's account of execution. This mal-interpreter is constructed by replacing the correct backtracking procedures with incorrect ones, using evidence obtained from comparing the system's account of execution with that of the user.

A typical interaction with the system would be as follows. The user runs his faulted program on the correct interpreter, which generates an internal system trace. Using the same "trace language", the user describes a symbolic execution of the program. The system then seeks discrepancies between the user's trace and the system trace and, requesting the user to give more detailed accounts where necessary, arrives at the identity of a misapplied or misunderstood interpreter concept.

3. The Symbolic Execution Language

The model is described in a conceptual representation language adapted from the conceptual graphs of Sowa (Sowa, 1984; Hartley, 1985). This is summarised in figure 2 as a procedural net (Sacerdoti 1977). However, in the complete representational scheme, the edges in the network become actors and the nodes become subtypes of the type GOAL. Each actor is triggered independently when all of its input concepts are instantiated. The concept graph provides a framework for organizing erroneous interpreter elements ("mal-rules"; figures 3 and 4) and for creating statements for the trace language.

Figure 2. The correct interpreter. (A procedural net over goal and clause nodes, with operations including CALL, FAIL-OUT, MARK-CLAUSE and REJECT; the diagram is not reproducible from this copy.)

The trace is based upon the familiar box model, with states CALL, EXIT, FAIL and REDO.
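The difference between the correct backtracking rule and "try-once-&-pass" can be made concrete with a small simulation. The following Python sketch is my own illustration, using only the facts of the figure 1 listing as printed (the paper's prose also refers to predicates "h" and "i" that are not visible in this extraction); it enumerates the candidate bindings each regime offers for b(X) when solving a(X) :- b(X), c(X):

```python
# Facts from the figure 1 listing: d(1). d(2). e(1). e(2). f(3). c(3).
FACTS = {"d": [1, 2], "e": [1, 2], "f": [3], "c": [3]}

def b_correct():
    # Correct rule: every instantiation of clause b(X) :- d(X), e(X).
    # is offered before the alternative clause b(X) :- f(X) is tried.
    for x in FACTS["d"]:
        if x in FACTS["e"]:
            yield x
    yield from FACTS["f"]

def b_try_once():
    # "Try-once-&-pass": the clause is failed completely after a single
    # instantiation, so backtracking jumps straight to b(X) :- f(X).
    for x in FACTS["d"]:
        if x in FACTS["e"]:
            yield x
            break
    yield from FACTS["f"]

def solve_a(b_candidates):
    # a(X) :- b(X), c(X): record each X that b proposes, and keep
    # those for which c(X) also holds.
    tried, answers = [], []
    for x in b_candidates:
        tried.append(x)
        if x in FACTS["c"]:
            answers.append(x)
    return tried, answers

print(solve_a(b_correct()))   # ([1, 2, 3], [3])
print(solve_a(b_try_once()))  # ([1, 3], [3])
```

Both regimes end at a(3), which mirrors the situation in section 5 where a misconceived execution can still predict the correct answer; only the sequence of candidates tried betrays the misconception.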
The model describes a range of operations, the most central of which are the "matching" of goals to clauses and the creation of new goals both forwards ("goal-gen") and on backtracking ("back-goal-gen").

4. Bug Diagnosis as Alternative Interpreter Generation

The diagnostic procedure is based on the assumption that discrepancies between the system trace and user trace are generated by discrepancies between the correct interpreter and a hypothetical interpreter. The task of the system is to identify these second discrepancies, which may be seen as user misconceptions concerning Prolog procedural semantics.

These hypothetical interpreters and the correct Prolog interpreter are seen as instantiations of a generic interpreter, organized hierarchically into modules. The generic slots in the interpreters are instantiated by particular procedures for Prolog semantic concepts. A Prolog semantic concept is, for example, "backtracking" or "unification"; these concepts are defined in different ways in actual interpreter modules. In order to simplify our task, we have only fully specified the backtracking module.

NAME: backtrack(ProofTree,X)
PARAM: <<clean(X,Y)> <back_goal_gen(ProofTree,X)>>
BODY: backtrack(ProofTree,X) :-
          back_goal_gen(ProofTree,Y),
          clean(Y,X),
          trace_rep(redo(X)).

Figure 3. The backtracking module.

Proof trees are of the form [Goal, Clause-no, List-of-Subgoals]; e.g., for the goal a in the program "1. a:-b,c. 2. b. 3. c:-d.", the tree pairs a with the clause used to solve it and the subtrees for its subgoals b and c. (The worked tree printed here is not legible in this copy.)

/* correct rule */
back_goal_gen([Goal,_,Sub],BkGoal) :-
    last(Sub,L),
    back_goal_gen(L,BkGoal).
back_goal_gen([Goal,N,[]],Goal).

/* mal-rule 1 - redo-body-from-left */
back_goal_gen([Goal,_,Sub],BkGoal) :-
    Sub = [L,_,[[BkGoal,X,Es]|T]],
    last([[BkGoal,X,Es]|T],[_,_,[]]),
    tree_unmark(T).
    /* tree_unmark(T) unmarks all of the */
    /* clauses in the list of sub-trees T */
back_goal_gen([Goal,_,Sub],BkGoal) :-
    last(Sub,L),
    back_goal_gen(L,BkGoal).
back_goal_gen([Goal,N,[]],Goal).

Figure 4. An example mal-rule.

The module selection process involves the following stages:

i) locate sequentially the first trace line discrepancy;
ii) generate a list of possible wrong atomic interpreter modules [using "trace pattern/module" pairs];
iii) run the possible interpreters - if no wrong module is specified in the above list, use the correct module;
iv) select the interpreter accounting for the largest section of incorrect trace;
v) communicate the misconception to the user, inviting him to explore it via the tutoring module and to make the appropriate corrections to his symbolic execution;
vi) repeat at i) - wrong modules may be plugged in where the correct module is used in the interpreter (we make the simplifying assumption in the prototype that misconceptions are consistent throughout a single symbolic execution).

5. An Example of a Debugging Interaction

The example illustrates the debugging of a symbolic execution which predicts the correct solution for a program run but which the user suspected was faulty. The program is the same as in figure 1. It implements a standard "generate-and-test" sequence, a state generated by the "h" predicate being tested by the "i" predicate, and the result of the "b" predicate being tested by the "c" predicate. A summary of the diagnostic process is given below. The system first runs the program and generates the trace labeled SY1 (figure 5). The user is then invited to give his account of execution in summary form (trace U1). Taking the two traces, the system first identifies the discrepancy noted by the arrow pointing to "REDO b(X)" in U1 and then seeks to explain it in terms of modifications to the correct interpreter.
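The module-selection stages can be caricatured as a loop over candidate interpreters. The sketch below is hypothetical code of my own, not the authors' system: a toy `run` function stands in for symbolic execution, two flags stand in for mal-modules, and "best match" is measured by the length of the agreeing trace prefix:

```python
def run(flags):
    # Toy stand-in for executing the program on a (mal-)interpreter:
    # each misconception flag changes one line of the resulting trace.
    return [
        "CALL a(X)",
        "CALL b(X)",
        "REDO d(X)" if flags.get("redo_left") else "REDO e(2)",
        "REDO b(X)" if flags.get("try_once") else "REDO e(1)",
        "EXIT a(3)",
    ]

def agreement(a, b):
    # Length of the common prefix of two traces (the stage iv measure).
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def diagnose(user_trace, bugs=("try_once", "redo_left")):
    flags, found = {}, []
    while run(flags) != user_trace:                    # stage i: discrepancy
        best = max(                                    # stages ii-iv: try
            bugs,                                      # candidate modules
            key=lambda b: agreement(run({**flags, b: True}), user_trace),
        )
        if flags.get(best):
            break                                      # nothing improves
        flags[best] = True                             # adopt the mal-module
        found.append(best)                             # stage v: report it
    return found                                       # stage vi: loop

print(diagnose(run({"try_once": True, "redo_left": True})))
# -> ['redo_left', 'try_once']
```

As in the paper's example, misconceptions are assumed consistent across the whole trace, so each adopted mal-module stays plugged in while the search for the next bug continues.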
Figure 5. System trace SY1 and user trace U1. Figure 6. Trace interpretation: traces SY2-SY7 and U2-U6. [The trace listings are not legible in this copy.]

Having identified a work area, taken as the discrepant item with a context of one item either side, the system attempts to match the trace pattern to the database of patterns which index the incorrect interpreter rules. Failing to find such a match, the system requests a more detailed account of the work area. Increasing detail is requested until a match is found to one or more mal-rules. The more detailed user trace for this example is given by U2.

Having identified possible mal-rules (the single rule "try-once-&-pass" in the present case), the Prolog interpreter is modified to include this rule and run on the user's program.
The trace generated from this (trace SY2) is compared with the user's trace (U2) and, in the present case, found to give an acceptable match. The nature of the bug "try-once-&-pass" is then communicated to the user, and he is invited to run and inspect traces from tutoring programs designed to highlight differences between the erroneous interpreter rule and the correct rule. Following the tutoring phase, the user is invited to correct the trace. The user's modified trace segment is given in U3, and it may be seen that his account still fails to correspond to the correct system trace (SY1).

The old error proved to hide a further misunderstanding, which must now be identified and corrected. The same procedure is adopted as before, it being found this time that additionally the user does not understand that subgoals are backtracked into from the right (the mal-rule "redo-body-from-left"). Tutoring is accordingly undertaken, and the misconceptions within the current work area are corrected. This is demonstrated by the user trace U4.

At this point, the system has identified two mal-rules present within the user's view of Prolog execution and has constructed a mal-interpreter to include these rules. On the assumption that these misconceptions will apply throughout the trace, the system proceeds to seek the next bug. This is achieved by first executing the user's program with the mal-interpreter to generate a further trace (SY5), and then comparing this trace with the user's original account of execution modified by the revised segment. In the example, this identifies a further work area (detailed in U5) which is finally explained by a further mal-rule ("fast-rule-cycle"), the interpreter for which produces SY6. This is similar to "try-once-&-pass", where a given set of rules are all tried on a single fact, only moving on to the next fact after failure.
Tutoring is undertaken for this misconception, which results in a correction to the trace - U6. A further attempt is made to find another work area. In the present case, the three misconceptions account for the user trace (compare U1 and SY7), so the consultation is terminated.

6. Conclusions

The present prototype system is able to diagnose 6 backtracking misconceptions. These are all "primitive" misconceptions in that they do not form a part of some error model and are assumed to be composed sequentially within the mal-interpreter. Some experimental work has indicated that this is not necessarily true of Prolog misunderstandings, some being clearly nested in others. Further, it is not clear that users are necessarily consistent in their errors of understanding, nor that the failure to generate a trace error implies that the user correctly understands an interpreter concept; the user may have a "loose" concept of the interpretation process which is nevertheless adequate for a particular problem. These limitations do affect the range of misconceptions capable of being diagnosed and corrected. However, the approach has proved robust on the typical problems presented in basic Prolog courses, which is when it is important that learners should develop a correct image of the interpretation process to employ for symbolic execution. Moreover, the modular approach to building our interpreter makes the progressive improvement of the system relatively easy.

The current system is a prototype and as such contains only a novice conception of Prolog based on the goal structure of successive AND trees (searches through program text). However, when a Prolog user gains experience, both his goal structures and their related procedures change. At an advanced stage of learning, for example, users employ an OR tree of goal stacks which greatly simplifies the backtracking rule.
A realistic consultant would have to be able to represent transitions between such successive conceptualizations. This would require considerable additional knowledge, including strategies for reducing the model's complexity. Although considerable further research is required into the origin of Prolog misconceptions, their diagnosis and correction, we are confident that the present method of analysing, representing and simulating symbolic execution will accommodate the new knowledge.

References

Coombs, M.J. (1985). Internal and external semantics for Prolog: debugging the user interpreter. Invited talk, CRL, NMSU, Las Cruces.

Coombs, M.J. and Alty, J.L. (1984). Expert systems: an alternative paradigm. International Journal of Man-Machine Studies, 20, 21-43.

Green, T.R.G., Sime, M.E. and Fitter, M.J. (1981). The art of notation. In M.J. Coombs and J.L. Alty (eds), Computing Skills and the User Interface. London: Academic Press.

Hartley, R.T. (1985). Representation of procedural knowledge for expert systems. 2nd IEEE Conference on AI Applications.

Kowalski, R. (1983). Logic for Problem Solving. New York: North-Holland.

McAllester, K. (1985). A debugging system for Prolog programs by user query. Technical Memorandum, Computer Science, University of Strathclyde.

Sacerdoti, E.D. (1977). A Structure for Plans and Behaviour. New York: Elsevier.

Sauers, R. (1986). Controlling expert systems. In L. Bolc and M.J. Coombs (Eds.), Computer Expert Systems. Springer-Verlag - in production.

Sowa, J.F. (1984). Conceptual Structures. Addison-Wesley.
IMPOSING STRUCTURE ON LINEAR PROGRAMMING PROBLEMS: AN EMPIRICAL ANALYSIS OF EXPERT AND NOVICE MODELS

Wanda Orlikowski and Vasant Dhar
Department of Computer Applications and Information Systems, New York University

ABSTRACT

Research on expert-novice differences falls into two complementary classes. The first assumes that novice skills are a subset of those of the expert, represented by the same vocabulary of concepts. The second approach emphasizes novices' misconceptions and the different meanings they tend to attribute to concepts. Our evidence, based on observations of problem solving behavior of experts and novices in the area of mathematical programming, reveals both types of differences: while novices are to some extent underdeveloped experts, they also attribute different meanings to concepts. The research suggests that experts' concepts can be characterized as being more differentiated than those of novices, where the differentiation enables experts to categorize problem descriptions accurately into standard archetypes and facilitates attribution of correct meanings to problem features. Our results are based on twenty-five protocols obtained from experts and novices attempting to structure problem descriptions into mathematical programming models. We have developed a model of knowledge in the LP domain that accommodates a continuum of expertise ranging from that of the expert who has a highly specialized vocabulary of LP concepts to that of a novice whose vocabulary might be limited to high school algebra. We discuss the normative implications of this model for pedagogical strategies employed by instructors, textbooks and intelligent tutoring systems.

1. Introduction

Analytical modeling techniques constitute an important component of the curriculum of Operations Research, Industrial Engineering and Management schools. In particular, mathematical programming models such as linear programming (LP) have proved useful in solving many real-world problems.
However, structuring open-ended problem descriptions into formal LP models is not a straightforward task. We have found that despite having taken courses in linear programming, students are often unable to frame even relatively simple problem descriptions into appropriate mathematical programming models. We conjecture that one important reason for this situation is an overly normative orientation in instruction which arises out of pragmatic considerations -- an instructor with limited contact hours may be unable to take cognizance of students’ “naive conceptions” of the material, and may focus only on presenting the “correct” modeling formalisms. On the other hand, a good tutor is sensitive to the student’s conceptualization of a problem, in detecting the lack of congruence between the student’s conceptual system and the “correct one” (assumed to be the tutor’s), and in eliminating the mismatch between the two. This requires knowledge about the domain (a model of expertise), knowledge about the novice (a novice model), and tutoring strategies that help “remodel” the novice. A pragmatic long term goal of this research is to develop an instructional system in the domain of Mathematical Programming that will effect the novice-expert transition. Achieving this overall objective requires developing the three knowledge components mentioned above. In this paper, we have attempted to understand the first two. We have developed an abstract theoretical model of knowledge that expresses “levels of expertise” in the domain of mathematical programming. It casts novice and expert models in terms of a common set of “concepts” where a more elaborate differentiation of a concept is associated with more expertise in that part of the problem domain covered by the concept. Differentiation is a construct that is used widely by developmental psychologists and historians of science for contrasting progressive “conceptual systems” (Wiser and Carey, 1983). 
For example, in the history of science, Kuhn (1977) provides, among other cases, the shift from Aristotle's conception of velocity to Newton's, which differentiated it into instantaneous and average velocities. This differentiation was a necessity in Newtonian physics, whereas the more diffused notion of velocity was adequate, albeit limiting, in Aristotelian physics. We have found that expressing knowledge in terms of two types of differentiation, namely structural and semantic differentiation, provides a good theoretical foundation for contrasting expert and novice behavior in the mathematical programming domain. Structural differentiation involves elaboration of LP concepts or generic terms into fine grained ones. Semantic differentiation involves attributing different meanings to features of a problem description during the process of formulation. We illustrate these two types of differentiation in section 3 following analysis of the data.

Our model is based on the results of an empirical investigation of problem solving processes of experts and novices solving linear programming problems. Before describing the model, we discuss prior research that has motivated it. Section 3 contains the results of the study and the differentiated model of knowledge that explains expert/novice differences. Pedagogical implications of this view of expert/novice differences are discussed in section 4.

2. Prior Research

Prior research into expert-novice differences has been characterized by two complementary approaches. The first focuses on the differences among experts and novices in terms of their relative abilities to categorize problem descriptions into standard abstractions (Chi et al. 1981; Larkin et al. 1980; Larkin 1983; Wiedenbeck 1985).
Within this view, experts are characterized as employing abstract schematic representations that enable mapping problem descriptions into a "deep structure", while novices, lacking such abstractions, typically fail to progress beyond the superficial problem features (Chi et al. 1981). Other research has found that experts "automate" the simple aspects of the problem solving procedures they employ, while novices have certain difficulties performing the simplest stereotypical procedures (Wiedenbeck 1985; Hinsley 1983).

308 / SCIENCE From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved.

The second approach focuses on the misconceptions or errors made by novices in attempting to solve some problems. The premise of this approach is that novice and expert models of a domain are fundamentally different in that they interpret the same terms in the domain very differently, not unlike pre- and post-paradigmatic situations in the history of science (Clement 1983; diSessa 1983; McCloskey 1983; Wiser & Carey 1983). McCloskey (1983; pp. 318-319) provides evidence that novices frequently misinterpret terms taught to them, distorting them to fit their naive theories about the domain. He suggests that these naive theories are strongly held and might not be easily remodeled by mere presentation of expert concepts and strategies.

Wiser and Carey (1983) suggest that the shift from novice to expert might be characterized along the lines of the scientific paradigm shift (Kuhn 1977). The dimensions they suggest for contrasting the conceptual differences between novice and expert are differentiation -- where a previously diffused understanding of a phenomenon is replaced by multiple finer-grained or more accurate concepts; coalescence -- where a prior understanding of a phenomenon is recognized as involving redundant categories which are collapsed into a single category in the revised theoretical framework; and the shift from property to relation -- where a phenomenon that was previously viewed in isolation becomes related to others in the domain.

While it is not clear whether the novice-expert shift can be characterized in the same way as a paradigmatic shift in the history of science, we have found the notion of differentiation of concepts among novices and experts to be a useful theoretical construct along which to express and contrast expertise. Structural and semantic differentiation allow us to interpret the behavioral differences between experts and novices observed in both types of the abovementioned studies. We have found the other concepts of coalescence and property-relation shift less useful in classifying mathematical programming knowledge, although they may be useful in highlighting novice-expert differences in other domains.

3. Experimental Results

Two experts and three novices were chosen to participate in the experiment. The experts were a professor and a graduate student, both with considerable knowledge and experience in management science. The novices were graduate students who had successfully completed one or more courses in introductory mathematical programming. Five simple¹ problem scenarios selected from a fundamental mathematical programming textbook (Wagner 1975) were chosen. These problems represent a reasonable spectrum of LP problem types: one fluid blending problem, one feed-mix problem, one dynamic programming problem and two transportation problems (the Appendix shows a blending and a transportation problem). A total of twenty-five observations were obtained.

The subjects were asked to formulate each of the problems into LP models and were specifically requested not to try to solve the problems, since formulations, expressed in some suitable form, can be solved using standard procedures such as the simplex algorithm. Fundamentally, formulation involves imposing a formal structure on a problem. This requires strategies for factoring out the complexity into manageable components. Our observations reveal three types of effective strategies used by experts for handling complexity, contrasted with the methods employed by novices that preclude a coherent structuring of a problem description. In this study, not a single one of the fifteen novice formulations was correct. The errors were not trivial, but revealed a lack of knowledge about certain important concepts, as well as a diffused understanding of certain specialized LP concepts.

The results can be summarized as follows. First, our findings support those of Chi et al. (1981), Larkin (1983) and others, where experts were found to employ categorization of problems into standard types based on underlying principles of the domain (1981: p.150). The complementary finding that novices tend to handle problems according to the entities present in the problem description (1981: p.150) was also confirmed in the novice protocols. Secondly, the choice of an appropriate decision variable by experts was found to be central in reducing problem complexity. When appropriately subscripted, a decision variable relates the various problem dimensions, thereby enabling a "holistic" approach toward structuring the problem. In contrast, novices employed several erroneous complexity-reduction strategies, partly in order to compensate for poor choices of decision variables. The third interesting finding was the detection among experts of a form of dimensional analysis which was not part of the novices' repertoire. It involves simple manipulations of the units associated with the variables (e.g. $, $/lb) in a way that influences the actual construction of algebraic expressions. In effect, the "semantic" information in the units is used as a means of shaping or validating expressions in the formulation. This aspect of expertise has not been stressed in the literature, but we find it particularly relevant to the formulation of models in this domain.

3.1. Imposing Structure Via Categorization

Chi et al. (1981; p.150) suggest that experts begin problem solving by first attempting to categorize the problem from a brief analysis of the problem statement. This problem analysis typically yields category names that serve as labels to access appropriate internal schemata. To illustrate categorization, consider Exhibit 1, line 18, where the expert begins the formulation by stating "It's an allocation problem", which he refines subsequently to "a blending problem", followed by a description of the "deep structure" of the blending problem (Exhibit 1, line 24): "You could liken these variables (amounts of exposure) to amounts of ingredients where each ingredient (each advertising medium) supplies a certain amount (media effectiveness) of a certain thing." The parenthesized phrases correspond to terms in the actual problem description. As we can see, the expert attempts to map the surface features of the problem description into the structure of the identified category. In effect, a recognition of the "blending problem" initiates a search for problem dimensions, namely, ingredients to be blended (in this case, media) into the product (in this case, exposures of audiences). These findings are similar to those from the experiments of Simon and Hayes (1976) on problem isomorphs.

In contrast, novice statements such as (Exhibit 1, line 1) "It's an advertising problem" suggest the lack of any such categorization, and hence the inability to impose structure on the problem description. This finding was common to all novice cases. The lack of a schema within which to interpret the problem also appears to increase the chance of the novice forming misconceptions about the problem features, that is, of attributing incorrect meanings to them. For example, in Exhibit 1, line 10, the novice actually interprets problem data that should be in the objective function (i.e. minimize cost) as belonging to the right hand side of a constraint.²

¹ We classify problems as simple if they correspond to standard types such as the transportation problem or the blending problem. Complex real-world problems typically involve combinations of these standard types.
² Actually, this is sometimes a legitimate strategy employed by experts in multiobjective programming problems, but it is highly unlikely that the novice was thinking in these terms.

3.2. Imposing Structure Via "Holistic Reduction"

In mapping surface problem descriptions into categorizations, experts invariably express the problem in terms of subscripted variables which capture the various dimensions of the problem (such as buyers and suppliers in problem 4) simultaneously. In this way, the problem is reduced "holistically", in that recognizing the right subscripts imposes significant constraints on the overall problem structure -- to the point that surface features become "irrelevant" and can be added or deleted without affecting the complexity of the problem description. For example, in problem 4, once the expert establishes the variables Xij as representing the amount of oil from vendor i to customer j, the actual number of vendors and customers in the problem becomes irrelevant to the abstract formulation, since the constraint Σj Xij ≥ Bi (for all i) captures the abstract relationship (the minimum amounts of oil to be supplied) between the problem dimensions (in this case, suppliers and buyers). Adding more suppliers or buyers does little to affect the complexity of the problem. In contrast, superficial features of the problem add considerable complexity for the novice.
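To make the contrast concrete, the expert's subscripted scheme can be sketched in a few lines of code. This is our illustrative sketch, not part of the study: the supplies, demands, costs, and shipment plan below are hypothetical stand-ins, and only the structure (capacity constraints over i, requirement constraints over j, a summed cost objective) mirrors the subscripted formulation discussed above.

```python
# Illustrative sketch of a transportation-problem structure with
# subscripted decision variables x[i][j] (amount shipped from
# vendor i to airport j). All numbers here are hypothetical.

def is_feasible(x, supply, demand):
    """Vendor capacity (sum_j x[i][j] <= supply[i]) and
    airport requirements (sum_i x[i][j] >= demand[j])."""
    vendors = range(len(supply))
    airports = range(len(demand))
    cap_ok = all(sum(x[i][j] for j in airports) <= supply[i] for i in vendors)
    dem_ok = all(sum(x[i][j] for i in vendors) >= demand[j] for j in airports)
    return cap_ok and dem_ok

def total_cost(x, cost):
    """Objective to be minimized: sum_ij cost[i][j] * x[i][j]."""
    return sum(cost[i][j] * x[i][j]
               for i in range(len(cost)) for j in range(len(cost[0])))

supply = [60, 80]          # vendor capacities (hypothetical)
demand = [30, 40, 30]      # airport requirements (hypothetical)
cost = [[10, 7, 8],
        [9, 12, 4]]        # cost[i][j]: vendor i -> airport j

plan = [[30, 0, 0],
        [0, 40, 30]]       # one candidate shipment plan

assert is_feasible(plan, supply, demand)
print(total_cost(plan, cost))  # 900
```

Note that adding a vendor or an airport only lengthens the lists; the constraint and objective code is unchanged, which is precisely the "holistic" payoff of subscripted variables.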
Consider the following segment: "Let's say X = teenagers, Y = married couples, Z = geriatric group", which illustrates that what is lacking is an appreciation that the three elements in the problem statement represent one dimension of the problem, for which a single dimensional variable is more appropriate. In this case, adding more suppliers and buyers leads to more variables and relationships among them, which tends to add complexity for the novice. A related observation was that novices attempted to remove the complexity they could not deal with by artificially "simplifying" the problem, reducing the number of elements in a dimension to one (this amounts to removing the need for a subscript; Exhibit 1, line 11): "If I had only one audience, say X, and I took care of that first..." Another example of this type of sequential simplification was a transformation of an inequality into an equation, thereby constraining the problem drastically and introducing a sub-optimal solution, as illustrated in the following excerpt from problem 4 (Exhibit 1, line 17): "If I say I want 110,000 gallons at airport 1, and I pick company B, then B can provide just fine." A third type of simplification leading to sub-optimal solutions was that of sequential decomposition, an attempt to break the problem down into isolated parts and then to handle each part separately (Exhibit 1, line 14): "You want to start with airport 4 and see how you can meet its requirements from company B, then see where the next lower price is, and so on." Finally, a fourth simplification strategy is not to introduce a decision variable at all, but to try to solve the problem arithmetically -- a phenomenon similar to one observed by Matz (1983). In line 29, for example, the novice correctly identifies the cost coefficient, but in line 30, the novice attempts to insert it into the formulation in place of the decision variable.
The upshot of this is that the novice can neither arrive at the formulation, because no variables are introduced, nor solve the problem, because the algorithm required to obtain an optimal solution is too complex procedurally.

3.3. Imposing Structure Via Dimensional Analysis

A final method in the expert's problem structuring repertoire is one of ensuring consistent dimensionality among the units in the algebraic expressions. Casting the problem features into appropriate units not only makes explicit what the decision variables and coefficients stand for, but also constrains how they combine (i.e., multiply, add) with other variables. For example, in problem 1, the expert deliberates on the units of "advertising effectiveness", pointing out that if this is specified in terms of $/person, then the units of the decision variable must be "# of people" (since these are to be multiplied, and the units to be maximized are $). On the other hand, an "advertising effectiveness" without units would require a decision variable to be expressed in terms of dollars. In effect, the choice of units -- an area in which there can be considerable discretion -- constrains what the units of other variables and coefficients can be, which also clarifies their problem-specific interpretations. In contrast, in all fifteen cases, novices were unable to arrive at a consistent interpretation of units. A typical example was as follows (Exhibit 1, line 10): "Somehow, the exposure levels...exposure level for X1, plus exposure level for X2, plus exposure level for X3 has to be equal or less than the total money." To summarize, ensuring consistent units, appropriate subscripting, and rapid categorization enable an expert to impose a correct structure on a problem description. In contrast, the lack of an appreciation of standard archetypes, simplistic complexity reduction strategies and a disregard for units make it virtually impossible for the novices to arrive at an appropriate formulation.
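The expert's unit bookkeeping can be mimicked mechanically. The sketch below is ours, not the authors': it represents a unit as a map from base unit to integer exponent, so that multiplying an effectiveness coefficient in $/person by a decision variable in persons leaves an objective term in plain dollars, exactly the consistency check the expert performs.

```python
# Illustrative sketch of dimensional bookkeeping: a unit is a dict
# mapping base-unit name to exponent, and multiplication of two
# quantities adds exponents. Unit names are hypothetical examples.

def mul_units(u, v):
    """Combine units under multiplication: exponents add, and
    bases that cancel to exponent 0 are dropped."""
    out = dict(u)
    for base, exp in v.items():
        out[base] = out.get(base, 0) + exp
        if out[base] == 0:
            del out[base]
    return out

dollars_per_person = {"dollar": 1, "person": -1}  # effectiveness coefficient
persons = {"person": 1}                           # decision variable's units

product = mul_units(dollars_per_person, persons)
print(product)  # {'dollar': 1} -- the objective term is in dollars
```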
In the following subsection, we summarize the findings in terms of a model of expertise that makes some of the distinctions explicit. 3.4. Summary: A Differentiated Model of Knowledge It appears that the observed differences among novices and experts can be explained in terms of the two types of differentiation mentioned at the outset of this paper. The first, structural differentiation refers to the elaboration or decomposition of LP concepts into finer grained ones. Based on our results, two levels of differentiation of terms are apparent among experts that are absent among novices. These are illustrated in Exhibit 2a and 2b. The first level of differentiation is at the level of the “optimization problem” itself. As we discussed in the previous subsection, experts try to characterize descriptions into one among several standard types. Novices, lacking these differentiations, must rely on a more bottom- up strategy for arriving at the objective function and constraints which, taken together, represent a formulation.3 Another level of differentiation occurs at the level of variables and constants. Experts have a precise conception of decision variables and constants. The decision variable is what must be computed to maximize or minimize some objective function. When appropriately subscripted, it relates the problem dimensions. Similarly, constants are of three types: the C matrix -- which is a vector of multipliers used in the objective function, the A matrix -- a vector multiplier on the left hand side of constraint inequalities, and the B matrix -- a vector reflecting certain constraint levels on the right hand side. Experts search for problem features to match these vectors. To a novice, however, a decision variable is not differentiated from a variable, and a coefficient is nothing more than some constant. Fundamentally, what is lacking is a clear concept of vectors and vector multiplication. 
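The C, A, and B vectors that the experts search for are the ingredients of the standard LP form; as a textbook reminder (our notation, not taken verbatim from the paper):

```latex
\min_{x \ge 0} \; c^{\mathsf{T}} x
\qquad \text{subject to} \qquad
A x \ge b ,
```

where $c$ holds the objective multipliers (the C matrix above), $A$ the left-hand-side constraint coefficients, and $b$ the right-hand-side constraint levels, matching the expert's "Minimize Cx such that Ax ≥ B" in Exhibit 1, line 23.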
As a consequence, the novice has little chance of synthesizing an appropriate formulation in terms of scalars unless enough of them are introduced in order to capture the dimensionality of the problem.

³ It should be noted that we distinguish between novice and expert concepts (i.e. decision-variable-E versus decision-variable-N) because the problem features that are associated with these may differ among novices and experts. Because of this, instantiations of the schematized concepts in Exhibit 2 for novices and experts solving the same problem can contain different problem features.

The second type of differentiation, which we term semantic differentiation, refers to the process of associating the problem features (some of which may be implicit in the problem description) with the appropriate semantic labels. For experts, categorization appears to serve as an anchor around which problem features can be interpreted, whereas novices are more susceptible to interpreting problem features incorrectly. For example, it is inappropriate for the "total cost" to appear as a constraint in the blending problem, because the objective for such a problem is cost minimization. However, lacking general knowledge about this class of problems, the novice interprets the objective as a constraint. In addition, inappropriate interpretations of problem data appear to result from an inadequate understanding of concepts such as decision variables and coefficients. In effect, although novices might refer to decision variables, coefficients, objective functions, etc., the meanings attached to these can be inappropriate (misconceptions), resulting in incorrect interpretations of problem features. In the following section, we examine the implications of these findings for instruction in LP.

4. Discussion

The underlying assumption of early normatively oriented CAI (Computer-Aided Instruction) systems was that the novice would assimilate the material presented by somehow eliminating previous conceptions or biases if these existed, and that the correct conceptual categories and meanings would be established. More recent tutoring systems, which maintain explicit student models, have typically adopted one of two approaches for representing knowledge about the student. In the first approach, a student model is synthesized by comparing the student's behavior to that of the expert model. This student modeling approach has been termed an overlay model (Goldstein 1983), as the student's knowledge is represented entirely in terms of the expert's. GUIDON (Clancey 1981) uses such an overlay model. Novices and experts are assumed to share the same underlying structural and semantic models, with the novice model assumed to be a subset of the expert model. A different approach to the modeling of student knowledge is to recognize that the student's knowledge is not a subset of the expert's, but a perturbation or deviation from the normalcy of the expert's knowledge, that is, in terms of bugs (Burton 1983; Sleeman 1983). Such systems attempt to identify the student's mistakes and classify them according to the assumed misconception underlying the mistakes.

The results of our study suggest that LP tutoring requires both approaches to modeling expert and novice knowledge. This is based on the observation that the LP domain models of novices and experts are distinct, and in particular can reflect significant conceptual differences in (a) the absence of certain concepts from novice models, and (b) the meanings attached to existing concepts and relationships. From a tutoring standpoint, the challenge is one of differentiating novices' notions of LP concepts from those of high school algebra, and establishing precise meanings of these concepts (i.e. variables, vectors and scalars). Current introductory textbooks and course instruction do not seem to guarantee the development of adequate differentiations (as our novices' protocols and a perusal of basic LP textbooks have revealed).

Based on our results we offer two recommendations for improving LP tutoring. First, we hypothesize that the differentiation of LP problems into categories helps substantially in reducing misconceptions by decreasing the novices' sensitivity to superficial problem features. For this reason, we suggest that tutoring media foster the development of explicit problem schemata. Second, we suggest that a sharper distinction be made between the concepts of algebra and LP. Much of the lack of semantic differentiation we detected could be attributable to an inadequate transition from algebraic to LP domains, where terms and operations assume specialized meanings. For example, strategies such as sequential substitution of variables for solving simultaneous equations cannot be used for solving LP problems. These simplification strategies are a consequence of novices' inability to deal with the problem dimensions simultaneously, due to their diffuse understanding of simultaneous optimization problems, and in particular, the concept of multi-dimensional decision variables and vector operations on them. This contributes to their being overwhelmed by the multi-dimensionality of even simple LP problems. Based on this evidence, it appears that textbooks that present LP without first establishing the foundations of the concepts and mechanics of linear algebra are unlikely to help novices in differentiating the meanings of the specialized LP concepts from those of high school algebra. As the next step in this research, we are about to conduct an in-depth empirical investigation of how exactly experts and novices conceptualize terms that appear in LP. Following this, we shall study the student/human-tutor interactions in the LP domain in order to understand how good tutors resolve novice misconceptions. Modeling this interaction will provide us with a sound base for constructing an intelligent tutoring system in the LP domain.

References

Burton, R.R., Diagnosing Bugs in Simple Procedural Skills, in Intelligent Tutoring Systems, Sleeman, D. & Brown, J.S. (eds.), Academic Press, London, 1983, pp. 157-183.
Chi, M.T.H., Feltovich, P.J. & Glaser, R., Categorization and Representation of Physics Problems by Experts and Novices, Cognitive Science, Vol. 5, June 1981, pp. 121-152.
Clancey, W.J., Methodology for Building an Intelligent Tutoring System, Report STAN-CS-81-894, Dept. of Computer Science, Stanford University, Stanford, CA, October 1981.
Clement, J., Students' Preconceptions in Introductory Mechanics, American Journal of Physics, 50, 1982.
diSessa, A., Phenomenology and the Evolution of Intuition, in Mental Models, Gentner, D.R. & Stevens, A.L. (eds.), LEA, 1983.
Goldstein, I.P., The Genetic Graph: A Representation for the Evolution of Procedural Knowledge, in Intelligent Tutoring Systems, Sleeman, D. & Brown, J.S. (eds.), Academic Press, 1983, pp. 51-77.
Hinsley, D.A., Hayes, J.R. & Simon, H.A., From Words to Equations: Meaning and Representation in Algebra Word Problems, in Cognitive Processes in Comprehension, Carpenter, P.A. & Just, M.A. (eds.), Erlbaum, 1978.
Kuhn, T., The Structure of Scientific Revolutions, University of Chicago Press, 1977.
Larkin, J., Problem Representation in Physics, in Mental Models, Gentner, D.R. & Stevens, A.L. (eds.), LEA, 1983.
Larkin, J.H., McDermott, J., Simon, D.P. & Simon, H.A., Expert and Novice Performance in Solving Physics Problems, Science, Vol. 208, 1980, pp. 1335-1342.
Matz, M., Towards a Generative Theory of High School Algebra Errors, in Intelligent Tutoring Systems, Sleeman, D. & Brown, J.S. (eds.), Academic Press, 1983, pp. 25-50.
McCloskey, M., Naive Theories of Motion, in Mental Models, Gentner, D.R. & Stevens, A.L. (eds.), LEA, 1983.
Simon, H.A. & Hayes, J.R., The Understanding Process: Problem Isomorphs, Cognitive Psychology, 8, 1976, pp. 165-190.
Sleeman, D.H., Assessing Aspects of Competence in Basic Algebra, in Intelligent Tutoring Systems, Sleeman, D. & Brown, J.S. (eds.), Academic Press, 1983, pp. 185-199.
Wagner, H.M., Fundamentals of Operations Research, Addison-Wesley, 1975.
Wiedenbeck, S., Novice/Expert Differences in Programming Skills, International Journal of Man-Machine Studies, 1985.
Wiser, M. & Carey, S., When Heat and Temperature Were One, in Mental Models, Gentner, D.R. & Stevens, A.L. (eds.), LEA, 1983.

APPENDIX

Problem 1. An account executive, Lotta Billings of the Flag-Pole Advertising Co., has announced that she can optimally allocate her clients' advertising dollars by means of linear programming. Her approach is to identify the various audiences the client wants addressed, such as teenagers, young married couples, the geriatric group, etc. The client specifies a desired level of exposure for each audience. There are various advertising vehicles (e.g. magazines, TV spot commercials, color ads in a Sunday newspaper, etc.). Each is scored for its effectiveness in each of the identified audience categories. Her clients' objective is to minimize the total advertising expenditure while still meeting the desired levels of product exposure.

Problem 4. The purchasing agent of the Fly-by-Night Airline must decide on the amounts of jet fuel to buy from three possible vendors. The airline refuels its aircraft regularly at the four airports it serves. The oil companies have said that they can furnish up to the following amounts of jet fuel during the coming month: 275,000 gallons from Oil Company A; 550,000 gallons from Oil Company B; and 660,000 gallons from Oil Company C. The required amount of jet fuel is 110,000 gallons at Airport 1; 220,000 gallons at Airport 2; 330,000 gallons at Airport 3; and 440,000 gallons at Airport 4. When transportation costs are added to the bid price per gallon supplied, the combined cost per gallon for jet fuel from each vendor furnishing a specific airport is shown in the following table:

           Company A   Company B   Company C
Airport 1      10           7           8
Airport 2      10          11          14
Airport 3       9          12           4
Airport 4      11          13           9

EXHIBIT 1: Excerpts of Expert and Novice Protocols for Problems 1 and 4.

Novice 1, Problem 1:
1. It's an advertising problem.
2. Identify the audiences. Three audiences X1, X2, X3.
3. OK, so we have our specification, which is the level of exposure per audience type.
4. Audiences and media are all I've got.
5. I'm calling audiences X1-X3.
6. I'm calling media Y1, Y2, Y3.
7. I've got some coefficient for each.
8. So, A1Y1, A2Y2, A3Y3. If I add all these together, I'll get some sort of ... for each of these groups. I'll call that an exposure level.
9. So we have exposure level 1, level 2, level 3 for X1, X2 and X3.
10. Somehow, the exposure levels... exposure level for X1 plus exposure level for X2 plus exposure level for X3 has to be equal or less than the total money...
11. Hmm, if I had only one audience, say X, and each had...

Novice 2, Problem 4:
12. Alright, so we've got a cost function.
13. We have to maximize amounts available from each company, Company A = 275,000.
14. So, we have to minimize amounts at each airport. You want to start with airport 4, and see how you can meet its requirements from company B, then see where the next lower price is, and so on...

Novice 3, Problem 4:
15. OK, company A gives 10, B gives 7, OK I have the cost data.
16. So, how much should I buy from... whether I should buy from company A, or B, or C at the different...
17. If I say I want 110,000 gallons at airport 1, and I pick company B, then B can provide just fine.

Expert 1, Problem 1:
18. It's an allocation problem.
... a row is an audience category in the tableau...
21. OK, so there's a coefficient here - the effectiveness for each of the variables on each of these audiences (i.e. Aij).
22. Coefficients could be $/number of people, OK, the product should be $.
23. OK, so here's the formulation (shows the formulation as "Minimize Cx such that Ax ≥ B").
24. It's a blending problem - you could liken those variables to amounts of ingredients where each ingredient supplies a certain amount of a certain thing to the product.

Expert 1, Problem 4:
25. OK, so some of the decision variables are the amount of oil bought from A, B, C. So, ΣXAj ≤ 275 etc. Upper bounds on these variables.
26. This is a transportation problem.
27. OK, so we've got to change XAj to Xij; index i is vendors, j is indexing airports. So now, constraints on availability from vendor i to be added to the general transportation problem.
28. So it is a minimize ΣCAj... oops ΣΣCijXij with these constraints - classic transportation problem.

Novice 2, Problem 1:
29. So, we'll start by creating a chart with the clients on one axis and the different kinds of exposure on the other. Then in the boxes I'd put the score that each vehicle has for each target group.
30. Now, let us see... you'll have to express the desired level of exposure for each audience. So, you'll have to express that somehow, probably in percentages, that is, the score is a percentage of the exposure you want for the group.

EXHIBIT 2a: Optimization schema (Objective Function-E; Constraints-E).
INTELLIGENT TUTORING SYSTEMS BASED UPON QUALITATIVE MODEL EVOLUTIONS

Barbara Y. White and John R. Frederiksen
BBN Laboratories, 10 Moulton Street, Cambridge, MA 02238

ABSTRACT

One promising educational application of computers derives from their ability to dynamically simulate physical phenomena. Such systems permit students to explore, for instance, electrical circuit behavior or particle dynamics. In the past, these simulations have been based upon quantitative models. However, recent work in artificial intelligence has created techniques for basing such simulations on qualitative reasoning. Qualitative models not only simulate the phenomena of the domain, but also permit instructional systems to generate explanations of the behavior under study. Sequences of such models, that attempt to capture the progression from novice to expert reasoning, permit instructional systems to select problems and generate explanations that increase in complexity at an appropriate rate for each student. Since the acquisition of a qualitative understanding of the laws of physics and their implications is an important component of understanding physical phenomena, it is argued that systems based upon qualitative model progressions can play a valuable role in science education.

I INTRODUCTION

Our research has focused on (1) modelling possible evolutions in students' reasoning about electrical circuits as they come to understand more and more about circuit behavior, and on (2) using these model progressions as the basis for an intelligent learning environment that helps students learn (i) to predict and explain circuit behavior, and (ii) to troubleshoot by locating opens and shorts to ground in series-parallel circuits. We have found that, even for the simplest circuit, there are different kinds of questions that you can ask about the behavior of the circuit that require different kinds of reasoning.
For example, consider the elementary circuit illustrated in Figure 1, containing a battery, a switch, a light bulb, and a variable resistor. One could start by asking, "If I close the switch, will the light in this circuit be on or off?" This type of question can be answered by a simple form of qualitative reasoning, which we call "zero order" (because it employs no derivatives). Zero order models reason (1) about whether or not devices have voltages applied to them, based upon the conductivity and resistance of other devices within the circuit, and (2) about how dramatic changes in conductivity, such as closing a switch, can affect the behavior of the circuit. One could go on to ask, "What happens to the light as I increase the resistance of the variable resistor? Does the light get brighter or dimmer?" Answering this type of question requires a more sophisticated form of qualitative reasoning, which we call "first order". First order models reason about how increasing the resistance in a branch of a circuit increases or decreases voltages within the circuit. The qualitative model is thus no longer simply reasoning about whether or not there is a voltage applied to a device; rather, it is determining whether the voltage is changing and is, therefore, utilizing qualitative derivatives. This type of analysis is crucial when analyzing, for instance, the occurrence of feedback within a circuit. Finally, one can ask still more precise questions about the behavior of the circuit shown in Figure 1. For example, one could ask, "When I close the switch, how bright will the light be?" To answer such a question requires a quantitative analysis of the circuit. Purely qualitative models are no longer sufficient to capture the reasoning necessary to answer this type of question.
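The distinction between the two qualitative levels can be caricatured in a few lines. The sketch below is ours, not the authors' implementation, and it assumes the Figure 1 devices form a single series loop: the zero order rule only asks whether the loop conducts, while the first order rule propagates the sign of a change in the variable resistance.

```python
# Illustrative sketch (not the authors' model) of the two levels of
# qualitative circuit reasoning described above, assuming a single
# series loop: battery, switch, bulb, variable resistor.

def light_on(switch_closed, bulb_ok=True, battery_present=True):
    """Zero-order rule: the bulb lights iff a source is present and
    every element of the series path conducts."""
    return battery_present and switch_closed and bulb_ok

def brightness_change(resistance_change):
    """First-order rule for this series loop: raising the variable
    resistance ('+') lowers the current through the bulb, so the
    light dims ('-'); lowering it brightens the light ('+')."""
    return {"+": "-", "-": "+", "0": "0"}[resistance_change]

print(light_on(True))          # True: closing the switch lights the bulb
print(brightness_change("+"))  # '-': more resistance, dimmer light
```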
We argue that in instruction, one should start by helping students to acquire a progression of increasingly sophisticated, zero order, qualitative models that enable students to reason about gross aspects of circuit behavior. This class of models can help students to develop basic circuit concepts such as resistance, conductivity, and voltage drop. It can also introduce students to fundamental circuit principles such as Kirchhoff's voltage law and help them to understand how changes in one part of the circuit can cause changes in other parts of the circuit. Once these fundamental aspects of circuit behavior have been mastered in qualitative terms, we argue that one should then introduce students to reasoning about more subtle aspects of circuit behavior by helping them to acquire first order, qualitative models of circuit behavior. Finally, only after students can reason about and understand circuit behavior in qualitative terms should quantitative reasoning be introduced. Further, the form of quantitative circuit analysis taught should be a logical extension of the qualitative reasoning that the students have already mastered. This approach represents a radical departure from how physical theories are typically taught. Traditionally, only quantitative analysis is taught, and students are left to develop their own qualitative methods, which they rarely do until long after they become experts at quantitative analysis (Larkin et al., 1980; Chi et al., 1981; Cohen et al., 1983). In this paper, we will argue for the instructional necessity of starting with zero order, qualitative models. We will then go on to describe an instructional environment that we have implemented and tried out with high school students. Viewing instruction as producing in the student a progression of models permits a tutoring system architecture with elegant properties.
Within our system, the student model, the tutor, and the domain simulation are incorporated within the single model that is active at any point in learning. This model is used to simulate the domain phenomena, is capable of generating explanations by articulating its behavior, and furnishes a desired model of the students' reasoning at that particular stage in learning. The progression of models also enables the system to select problems and generate explanations that are appropriate for the student at any point in the instructional sequence. In order to motivate students to transform their models into new models, they are given problems that the new model can handle but their present model cannot. This evolution of models also enables the system to focus its explanations on the difference between the present model and the new model. Such a system architecture also permits a variety of pedagogical strategies to be explored within a single instructional system. Since the system can turn a problem into an example by solving it for the student, the students' learning can be motivated by problems or by examples. That is, students can be presented with problems and only see examples if they run into difficulty; alternatively, they can see examples first and then be given problems to solve. Also, by working within the simulation environment, students can use a circuit editor to construct their own problems and thus explore the domain in a more open ended fashion. The system is capable of generating runnable qualitative models for any circuit that the student or instructional designer might create. Further, the learning process can be managed either by the system or by the student. For example, students can be given a map of the problem space and can decide for themselves what class of problems to pursue next or even what pedagogical strategy they want to employ.

III THE INSTRUCTIONAL NEED FOR ZERO ORDER MODELS
The pioneering work of deKleer (1979) and others (in Bobrow (Ed.), 1985) has shown how models can be developed that enable a computer to reason qualitatively about a physical domain. Further, these researchers have demonstrated that such models can be adequate to solve a large class of problems (e.g., deKleer in Bobrow (Ed.), 1985). Our work on the design of qualitative models for instructional purposes has focused on creating models that (1) enable decompositions of sophisticated models into simpler models that can, nonetheless, accurately simulate the behavior of some class of circuits, and (2) enable the causality of circuit behaviors for the simpler models to be clear and at the same time compatible with that for more sophisticated models. DeKleer (in Bobrow 1985, p. 208) argues that: "Most circuits are designed to deal with changing inputs or loads. For example, . . . digital circuits must switch their internal states as applied signals change . . . . The purpose of these kinds of circuits is best understood by examining how they respond to change." DeKleer's behavioral circuit model reasons in terms of qualitative derivatives obtained from qualitative versions of the constraint equations ("confluences") used in quantitative circuit analysis. These enable it to analyze the effects of changing inputs on circuit behavior. The difficulty with utilizing such a model, at least at the initial stage of instruction, is that novices typically do not have a concept of voltage or resistance, let alone a conception of changes in voltages or resistance (Cohen et al., 1983; Collins, 1985; Steinberg, 1983). For example, as part of a trial of our instructional system, we interviewed seven high school students who had studied physics as part of a middle school science course, but who had not taken a high school physics course. They all initially exhibited serious misconceptions about circuit behaviors.
For example, when asked to describe the behavior of the light in the circuit shown in Figure 2 as the switches are opened and closed, only one of the seven students had a concept of a circuit. The other students predicted that the bulb would light if only one of the switches were closed. A typical remark was the following: "If one of the switches on the left is closed, the light will light. It does not matter whether the switches on the right are open or closed." Further, they said, "if you close both switches on the left, the light will be twice as bright as if you close only one of them." In addition to this lack of a basic circuit concept, all seven of the students predicted that when you close the switch in Figure 3, the light would still light -- the statement that the switch was not resistive when closed did not matter. In fact, five of the students stated that they did not know what was meant by the term "not resistive". They thus had no conception of how a non-resistive path in a circuit could affect circuit behavior.

Figure 2.

Figure 3.

Novices such as these, who do not have accurate models of when a voltage is applied to a device in a circuit, could not possibly understand what is meant by a change in voltage across a device. Thus, we argue that students should initially be taught a progression of zero order, qualitative models that reason about gross aspects of circuit behavior. This type of model can accurately simulate the behavior of a large class of circuits, and can be utilized to introduce fundamental ideas about circuit behavior.

The knowledge embedded in the zero order models has been shown to be the type of knowledge that even college physics students lack (Cohen et al., 1983), and is also crucial knowledge for successful troubleshooting. For example, consider an elementary form of troubleshooting such as trying to locate an open in the circuit shown in Figure 4. Imagine that a test light is inserted into the middle of the circuit as shown in the figure. In order to make an inference about whether the open is in the part of the circuit in series with the test light or the part in parallel with it, one needs to know that if switch #1 were open, the light would not be on even if the circuit had no fault. Similarly, one needs to understand that if switch #2 were closed, the test light would not be on even if the circuit were unfaulted. Thus, even for performing the most elementary type of electrical troubleshooting, one needs a "zero order understanding" of circuit behavior.

Figure 4.

IV THE ZERO ORDER MODELS
The zero order models incorporate knowledge of the structure of the circuit, the behavior of the devices, and general circuit principles (White & Frederiksen, 1984; White & Frederiksen, 1985). Troubleshooting algorithms utilize the behavioral models as part of their problem solving process. Both the behavioral models and troubleshooting algorithms can articulate their thinking, both visually and verbally, when simulating the behavior of a given circuit or when troubleshooting.

A. Device Models
The behavioral models contain device models for devices typically found in circuits. The devices modelled are batteries, switches, resistors, bulbs, diodes, fuses, capacitors, transistors, test lights, and wires (wires are explicitly introduced as devices). Device models include rules for determining a device's state, based upon the circuit environment of the device. For example, if there is a voltage drop across the two ports of a light bulb, the light bulb will be in the "on" state; otherwise it is in the "off" state. When a device's state changes, the device model activates additional rules which reevaluate a set of variables associated with the device. These variables include (1) the conductivity of the device (is it purely conductive, conductive but resistive, or nonconductive), and (2) whether or not the device is a source of voltage. For example, when a capacitor is in the charged state, it is nonconductive and a source of voltage. Finally, the device models include fault states, which include rules for altering the device variables to make them consistent with a particular fault, and which override the normal states for the device. For example, when a light bulb has the fault "burned out," it becomes non-conductive.

When a particular device, such as a light bulb, is employed within a particular circuit, a data table is created for the specific instantiation of that device in that circuit. This table is used to record (1) the present state of the device, (2) whether it is presently a voltage source, (3) its internal conductivity (what possible internal conductive paths exist among its ports and whether they are presently purely conductive, resistive, or nonconductive), (4) the device polarity, as well as (5) its connections to other devices in the circuit, and (6) its fault status. When the student is performing a mental simulation of a particular circuit, the student must also keep track of this information.

A mental model for a device enables the student to determine the state of the device regardless of the circuit environment in which it is placed. Information related to the state of the device, such as its internal conductivity and whether or not it is a source of voltage, will in turn affect the behavior of other devices in the circuit. Such a device model will thus form the basis for understanding the causality of circuit behavior in terms of showing how a change in state of one device can produce a change in state of another device within the circuit.

B. Circuit Principles
When simulating a particular circuit, the only information that the qualitative simulation requires is information about the structure of the circuit, that is, the devices and their interconnections. All of the information about circuit behavior, as represented by a sequence of changes in device states, is inferred by the qualitative simulation as it reasons about the circuit. To reason about device polarity and state, the device models utilize general qualitative methods for circuit analysis. For instance, when attempting to evaluate their states, device models can call upon procedures to establish voltages within the circuit. In the case of the zero order models, these procedures determine, based upon the circuit topology and the states of devices, whether or not a device has a voltage applied to it.*

The most sophisticated zero order voltage rule is based on the concept that, for a device to have a voltage applied to it, it must occur in a circuit (loop) containing a voltage source and must not have any non-resistive paths in parallel with it within that circuit. More formally, the zero order voltage rule can be stated as: If there is at least one conductive path to the negative side of a voltage source from one port of the device (a return path), and if there is a conductive path from another port of the device to the positive side of that voltage source (a feed path), with no non-resistive path branching from any point on that "feed" path to any point on any "return" path, then the device has a voltage applied to that pair of ports.**

Changes in a circuit, such as closing a switch, can alter in a dramatic way the conductivity of the circuit and thereby produce changes in whether or not a device has a voltage applied to it. To illustrate, when the switch is open in the circuit shown in Figure 3(a), the device model for the light bulb calls upon procedures for evaluating voltages in order to determine whether the light's state is on or off. The procedure finds a good feed path and a good return path and thus the light bulb will be on. When the switch is closed, as shown in Figure 3(b), the procedure finds a short from the feed to the return path and thus the light bulb will be off.***

C. Causal Explanations
Simply having the model articulate that when the switch is closed, the light will be off because there is a non-resistive path across it, is not a sufficient causal explanation for students who have no understanding of (1) what is meant by non-resistive, or (2) what effect such a path can have on circuit behavior. First of all, students need definitions for concepts such as voltage, resistance, current, device state, internal conductivity, series circuit, and parallel circuit. Further, they need a "deeper" causal explanation of the circuit's behavior. For instance, there are two alternate perspectives on the causality of circuit behavior -- a current flow perspective and a voltage drop perspective. To illustrate, the following are explanations that (1) a current flow model, and (2) a voltage drop model could give as to why the light is off when the switch is closed for the circuit shown in Figure 3.

The current flow model could state: "In order for the bulb to light, current must flow through it. There is a device in parallel with the bulb, the switch. In parallel paths, the current is divided among the paths. More current flows through the path with the least resistance. If one of the paths has no resistance, all of the current will flow through it. Since the bulb has resistance and the switch does not, all of the current will flow through the switch. Since there is no current flow through the bulb, it will be off."

Whereas, the voltage drop model could state: "In order for the bulb to light, there must be a voltage drop across it. There is a device in parallel with the bulb, the switch. Two devices in parallel have the same voltage drop across them. Voltage drop is directly proportional to resistance: If there is no resistance, there can be no voltage drop. Since the switch has no resistance, there is no voltage drop across the switch. Thus, there is no voltage drop across the light, so the light will be off."

One could be given even "deeper" accounts of the physics underlying circuit causality. For instance, the system could present physical models that attempt to explain why current flow and voltage drop are affected by resistance in terms of electrical fields and their propagation. However, for our present purposes, the system presents a causal account to the depth illustrated by the preceding models.

In explaining the behavior of the light in the preceding example, one could utilize either the voltage drop explanation or the current flow explanation, or both. Our view is that giving students both types of explanations, at least in the initial stages of learning about circuits, would be unnecessary and confusing. It would require students to construct two models for circuit behavior, and this would create a potential for them to become confused about circuit causality. However, later on students may learn to reason in either way about circuit behavior. We therefore selected only one of the causal models. We chose the voltage drop explanation because current flows as a result of an electromotive force being applied to a circuit; because troubleshooting tasks typically are based upon reasoning about voltages and testing for them; and because research has shown that this is an important way of conceptualizing circuit behavior that even sophisticated students lack, as illustrated by the following quotation: "Current is the primary concept used by students, whereas potential difference is regarded as a consequence of current flow, and not as its cause. Consequently students often use V=IR incorrectly. A battery is regarded as a source of constant current. The concepts of emf and internal resistance are not well understood. Students have difficulties in analyzing the effect which a change in one component has on the rest of the circuit" (Cohen, Eylon, and Ganiel, 1983). In addition, reasoning about how circuits divide voltage is a major component of our first order models. These models reason about changes in resistances and voltages within a circuit, using a qualitative form of Kirchhoff's voltage law. Thus getting students to reason in terms of voltages is compatible with the type of reasoning that will be required later on in the evolution of the students' models.

*In the case of the first order models, these procedures reason about whether the voltage drop across a device is increasing or decreasing as a result of changes in its resistance and the resistance of other devices in the circuit.
**By "voltage applied to a device", we mean the qualitative version of the open circuit (or Thevenin) voltage, that is, the voltage the device sees as it looks into the circuit. In the case of the zero order voltage rule, this is simply the presence or absence of voltage.
***The voltage procedures utilize topological search processes that are needed, for example, to determine whether a device has a conductive path to a source of voltage. The search processes utilize the information maintained by the device data tables concerning their circuit connections, polarity, internal conductivity, and whether or not they serve as voltage sources. Polarities are assigned to the ports of each device in the circuit by a general, qualitative, circuit orientation algorithm.
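The zero order voltage rule and the topological search mentioned in the footnotes can be read as a small graph-search procedure. The sketch below is an illustrative reconstruction, not the authors' implementation: the circuit representation (devices as two-port edges labelled with a conductivity), all function names, and the short check (simplified here to non-resistive branches at the device's own feed port, rather than every point on the feed path) are assumptions.

```python
from collections import deque

# Each device connects two nodes and carries a conductivity label:
# "pure" (conductive, non-resistive), "resistive", or "non".

def reachable(circuit, start, allowed, skip=frozenset()):
    """Nodes reachable from `start` through devices whose conductivity
    is in `allowed`, ignoring the devices named in `skip`."""
    seen, frontier = {start}, deque([start])
    while frontier:
        node = frontier.popleft()
        for name, (a, b, cond) in circuit.items():
            if name in skip or cond not in allowed:
                continue
            for u, v in ((a, b), (b, a)):
                if u == node and v not in seen:
                    seen.add(v)
                    frontier.append(v)
    return seen

def has_voltage(circuit, device, source):
    """Zero order voltage rule (sketch): a device sees a voltage iff
    one port has a conductive feed path to the source's positive side,
    the other port has a conductive return path to its negative side,
    and no purely conductive (non-resistive) path shorts the feed side
    to the return side."""
    pos, neg, _ = circuit[source]
    a, b, _ = circuit[device]
    conductive = {"pure", "resistive"}
    skip = {device, source}
    for feed_port, return_port in ((a, b), (b, a)):
        feed = reachable(circuit, feed_port, conductive, skip)
        ret = reachable(circuit, return_port, conductive, skip)
        if pos not in feed or neg not in ret:
            continue
        # Simplified short check: a purely conductive path from the
        # feed port into the return region bypasses the device.
        if not (reachable(circuit, feed_port, {"pure"}, skip) & ret):
            return True
    return False

# Figure 3 analogue: bulb and switch in parallel across a battery.
figure3 = {
    "battery": ("N1", "N2", "pure"),
    "bulb":    ("N1", "N2", "resistive"),
    "switch":  ("N1", "N2", "non"),      # open
}
print(has_voltage(figure3, "bulb", "battery"))   # True
figure3["switch"] = ("N1", "N2", "pure")         # close the switch
print(has_voltage(figure3, "bulb", "battery"))   # False
```

For the Figure 3 configuration this reproduces the paper's analysis: the bulb sees a voltage while the switch is open, and is shorted out (no applied voltage) once the switch closes.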
D. Control Structure
The simulation of circuit operation is driven by changes in the states of the devices in the circuit. These changes are produced by (1) changes in states of other devices, such as a battery becoming discharged causing a light to go out; (2) external interventions, such as a person closing a switch, or a fault being introduced into the circuit; and (3) increments in time, such as a capacitor becoming discharged. Whenever a device changes state, its status as a voltage source is redetermined by the device model, along with its internal conductivity/resistance. Whenever any device's internal conductivity or status as a voltage source changes, then time stops incrementing within the simulation and all of the other devices in the circuit reevaluate their states. This allows any changes in conductivity or presence of voltage sources within the circuit to propagate their effects to the states of other devices. The circuit information used for this reevaluation is the set of device data tables existing at the initiation of the reevaluation (not those that are being created in the current reevaluation cycle). This is to avoid unwanted sequential dependencies in determining device states. If in the course of this reevaluation some additional devices change state, then the reevaluation process is repeated. This series of propagation cycles continues until the behavior of the circuit stabilizes and no further changes in device states have occurred. Time is then allowed to increment and the simulation continues. When any further changes in device internal conductivity or status as a voltage source occur, due either to the passage of time or to external intervention, time is again frozen and the propagation of state changes is allowed to commence once again.

E. A Sample Zero Order Circuit Simulation
As an illustration of how a zero order model reasons, consider a simulation of the behavior of the circuit illustrated in Figure 5. Initially suppose that both switches are open, the light bulb is off, and the capacitor is discharged. Then, suppose that someone closes switch #1. This change in the internal conductivity of a device causes the other devices in the circuit to reevaluate their states. The capacitor remains discharged because switch #2 being open prevents it from having a good return path. The light bulb has good feed and return paths, so its state becomes on. Since, in the course of this reevaluation, no device changed its conductivity, the reevaluation process terminates. Note that even though the light bulb changed state, its internal conductivity is always the same, so its change of state can have no effect on circuit behavior and thus does not trigger the reevaluation process.

Figure 5.

Now, imagine that someone closes switch #2. This change in state produces a change in the conductivity of the switch and triggers the reevaluation process. The light bulb attempts to reevaluate its state and finds that its feed path is shorted out by the capacitor (which is purely-conductive because it is in the discharged state) and switch #2 (which is also purely-conductive because its state is closed), so its state becomes off. The capacitor attempts to reevaluate its state and finds that it has a good feed and return path, so its state becomes charged. This change in state causes it to reevaluate its internal conductivity, and to reevaluate whether it is a source of voltage. As a result of the capacitor becoming charged, it becomes non-conductive, and a source of voltage. This change in the internal conductivity of the capacitor causes the reevaluation process to trigger again. The light bulb reevaluates its state and finds that it has a good feed and return path (it is no longer shorted out by the capacitor because the capacitor is now charged and therefore non-conductive) and its state becomes on. This change in the light bulb's state has no effect on the light bulb's internal conductivity, so the reevaluation process terminates.

Suppose that someone then opens switch #1. This changes the switch's internal conductivity and therefore causes all other devices to reevaluate their states. The light bulb no longer has a good return path with respect to the battery. However, it has a good feed and return path to another source of voltage within the circuit, the capacitor (which is charged and therefore a source of voltage). The state of the light bulb will thus be on. The capacitor no longer has a good return path to a source of voltage and it has a conductive path across it, so its state becomes discharged and it becomes purely-conductive and is not a source of voltage. This change in the capacitor's internal conductivity causes the light bulb to reevaluate its state. Since the capacitor is no longer a source of voltage, and since switch #1 is open thereby preventing a good return path to the battery, the light bulb concludes that its state is off. This change in state has no effect on the light bulb's internal conductivity so the reevaluation process terminates.

Notice that this relatively unsophisticated qualitative simulation has been able to simulate and explain some important aspects of this circuit's behavior. It demonstrates how when switch #2 is closed, it initially shorts out the bulb, and then, when the capacitor charges, it no longer shorts out the bulb. Further, it explains how when switch #1 is opened, the capacitor causes the light bulb to light initially, and then, when the capacitor becomes discharged, the light bulb goes out.
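The control structure driving these simulations — freeze time, have every device reevaluate its state against the data tables from the start of the cycle, and repeat until the circuit stabilizes — can be sketched as a synchronous fixed-point loop. This is a hedged sketch: `propagate`, the rule functions, and the dictionary-based device tables are illustrative assumptions, not the paper's code.

```python
def propagate(devices, rules):
    """Run reevaluation cycles until no device's table changes.

    `devices` maps device name -> its data table (a dict);
    `rules` maps device name -> a function from a snapshot of all
    tables to that device's new table. Each cycle reads only the
    snapshot taken at the start of the cycle, which avoids the
    unwanted sequential dependencies the paper warns about.
    """
    while True:
        snapshot = {name: dict(table) for name, table in devices.items()}
        changed = False
        for name, rule in rules.items():
            new_table = rule(snapshot)
            if new_table != devices[name]:
                devices[name] = new_table
                changed = True
        if not changed:  # circuit behavior has stabilized
            return devices

# Toy example: a bulb whose state depends on a switch.
devices = {"switch": {"state": "closed"}, "bulb": {"state": "off"}}
rules = {
    "switch": lambda s: dict(s["switch"]),  # changed only externally
    "bulb": lambda s: {"state": "on" if s["switch"]["state"] == "closed"
                       else "off"},
}
propagate(devices, rules)
print(devices["bulb"]["state"])  # on
```

The paper's switch-and-capacitor walkthrough corresponds to richer rule functions that also update internal conductivity and voltage-source status, which is what retriggers further propagation cycles.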
One of the most impressive features of the type of qualitative, causal model described in this paper is its utility in helping to solve a wide range of circuit problems. For example, the student can be asked to predict the state of a single device after a switch is closed, or to describe the behavior of the entire circuit as various switches are opened and closed, or to determine what faults are possible given the behavior of the circuit. Further, students can be asked to locate a faulty switch within a circuit, or to design a circuit such that when the switch is closed, the light in the circuit will be off. Performing this type of mental simulation of circuit behavior is instrumental in solving all of these types of problems.

V MODEL TRANSFORMATIONS
The learning environment is not based upon a single, zero order, qualitative model, but rather, it is based upon a progression of increasingly sophisticated models that correspond to a possible evolution of a learner's model. The system can help students to transform their model by presenting to them those problems that can be solved by the transformed model but not by the untransformed model. The students will thus be motivated to revise their existing qualitative model in an appropriate direction. For example, the learning environment can help students who have a rudimentary conception of voltage drop to refine their conception by learning about the effects of non-resistive paths. This particular model transformation can be motivated by giving students problems where they have to predict, for instance, the behavior of the light bulb in the circuit shown in Figure 3 as the switch is opened and closed. In order to facilitate such a transformation, the system can turn any problem into an example for the student by reasoning out loud while it solves the problem. As models become more sophisticated, they also become more verbose.
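The selection criterion for motivating a model transformation — offer problems that the transformed model can solve but the untransformed model cannot — is directly expressible. A minimal sketch; the predicate interface and names are assumptions:

```python
def motivating_problems(problems, solves_current, solves_next):
    """Problems that motivate a model transformation: solvable by the
    transformed (next) model but not by the student's current model."""
    return [p for p in problems
            if solves_next(p) and not solves_current(p)]

# Illustrative model "coverage" expressed as sets of problem tags.
current = {"series bulb prediction"}
transformed = current | {"short circuit prediction"}
print(motivating_problems(
    ["series bulb prediction", "short circuit prediction"],
    current.__contains__, transformed.__contains__))
# ['short circuit prediction']
```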
The mechanism for pruning explanations is to focus the explanations on the difference between the transformed and the untransformed model. Reasoning of the transformed model that was present in the untransformed model either does not articulate itself or, if it is necessary to support the model increment, is presented in summary fashion. Looking at the difference between the transformed model and the student's current model also helps to define what aspects of the problem solving process should be presented to the student. For instance, if students are learning about determining when there is or is not a voltage drop across a device, the system illustrates paths to voltage sources. However, later in the model progression, when it is assumed that students already know how to determine the presence of a voltage drop, the paths are no longer displayed.

VI LEARNING STRATEGIES
The learning environment thus consists of an interactive simulation driven by qualitative models. Further, the progression of models defines classes of problems and facilitates explanation generation. This architecture for an intelligent tutoring system permits great flexibility in the students' choice of an instructional strategy.

Open-ended exploration. Students can construct circuits, explore their behavior (by changing the states of devices, inserting faults, and adding or deleting components), and request explanations for the observed behaviors. Students can thus create their own problems and experiment with circuits. The system thereby permits an open-ended exploratory learning strategy.

Problem-driven learning. In addition, the progression of models enables the system to present students with a sequence of problem solving situations that motivate the need for developing particular transformations of their models of circuit behavior. In solving new problems, the students attempt to transform their models of circuit behavior in concordance with the evolution of the system's models.
The focus is on having students solve problems on their own, without providing them first with explanations for how to solve them. Only when they run into difficulty do they request explanations of circuit behavior.

Example-driven learning. Alternatively, students can be presented with tutorial demonstrations for solving example problems by simply asking the system to reason out loud about a given circuit using its present, causal model. Students can thus hear qualitative explanations of how to solve each type of problem in the series, followed by opportunities to solve similar problems. Since the focus is on presenting examples together with explanations prior to practice in problem solving, we term this learning strategy "example-driven".

Student directed learning. The classification of problems created by the progression of models provides facilities students can use in pursuing instructional goals of their own choosing. Problem sets are classified on the basis of the concepts and laws required for their solution, and on the instructional purpose served by the problem set. This enables students to pursue goals such as acquiring a new concept or generalizing a concept. The students can thus make their own decisions about what problems to solve and even about what learning strategy to employ.

The system has been tried out with seven high school students. Students were allowed to pursue their own learning strategies with the constraint that use of the circuit editor was restricted to the modification of circuits in the problem sets. Initially, all of the students exhibited serious misconceptions about circuit behavior, and lacked key electrical concepts. Further, none of them had any experience with troubleshooting. After five hours of working with the system on an individual basis, they were all able to make accurate zero order predictions about circuit behavior and could troubleshoot for opens and shorts to ground in series circuits.
We found that differences between the students' mental models and those that we were trying to teach were not due to the inevitability of misconceptions, but rather, were due to limitations of the learning environment -- a non-optimality in either the form of the knowledge we were trying to impart, or the progression of models, or the type of problem selected to induce a particular model transformation. Thus our future research will focus on developing further the theory underlying model forms, model transformations, and instructional strategies. Also, we intend to expand the set of instructional modes and problem types by, for example, allowing students to design and troubleshoot not only circuits, but also the qualitative models that perform the circuit simulations.

ACKNOWLEDGEMENTS
This research was supported by the Office of Naval Research and the Army Research Institute, under ONR contract N00014-82-C-0580 with the Personnel and Training Research Program. We would like to thank Wallace Feurzeig and Frank Ritter for their contributions to this work.

REFERENCES
[1.] Bobrow, D. G. (Ed.) Qualitative Reasoning about Physical Systems. Cambridge, MA: MIT Press, 1985.
[2.] Chi, M., Feltovich, P., & Glaser, R. "Categorization and Representation of Physics Problems by Experts and Novices." Cognitive Science, 5 (1981) 121-152.
[3.] Cohen, R., Eylon, B., & Ganiel, U. "Potential Difference and Current in Simple Electric Circuits: A Study of Students' Concepts." American Journal of Physics, 51:5 (1983) 407-412.
[4.] Collins, A. "Component Models of Physical Systems." Proceedings of the Seventh Annual Conference of the Cognitive Science Society, University of California, Irvine, 1985.
[5.] Davis, R. "Reasoning from First Principles in Electronic Troubleshooting." International Journal of Man-Machine Studies, 19 (1983) 403-423.
[6.] deKleer, J. "Causal and Teleological Reasoning in Circuit Recognition." TR-529, MIT Artificial Intelligence Laboratory, Cambridge, MA, 1979.
[7.] Gentner, D., & Stevens, A. L. Mental Models. Hillsdale, NJ: Lawrence Erlbaum Associates, 1983.
[8.] Larkin, J. H., McDermott, J., Simon, D. P., & Simon, H. A. "Expert and Novice Performance in Solving Physics Problems." Science, 208 (1980) 1335-1342.
[9.] Rouse, W. B., & Morris, N. M. "On Looking into the Black Box: Prospects and Limits in the Search for Mental Models." Center for Man-Machine Systems Research Report No. 85-2, Atlanta, GA: Georgia Institute of Technology, 1985.
[10.] Sleeman, D., & Brown, J. S. Intelligent Tutoring Systems. London: Academic Press, 1982.
[11.] Steinberg, M. S. "Reinventing Electricity." In Proceedings of the International Seminar, Misconceptions in Science and Mathematics, Ithaca, New York, 1983.
[12.] White, B. Y., & Frederiksen, J. R. "Modeling Expertise in Troubleshooting and Reasoning about Simple Electric Circuits." In the Proceedings of the Annual Meeting of the Cognitive Science Society, Boulder, Colorado, 1984.
[13.] White, B. Y., & Frederiksen, J. R. "QUEST: Qualitative Understanding of Electrical System Troubleshooting." ACM SIGART Newsletter, 93 (1985) 34-37.
An Analysis of Tutorial Reasoning About Programming Bugs

David C. Littman, Jeannine Pinto & Elliot Soloway
Cognition and Programming Project
Department of Computer Science
Yale University
New Haven, CT 06620

Abstract

A significant portion of tutorial interactions revolves around the bugs a student makes. When a tutor performs an intervention to help a student fix a programming bug, the problem of deciding which intervention to perform requires extensive reasoning. In this paper, we identify five tutorial considerations tutors appear to use when they reason about how to construct tutorial interventions for students' bugs. Using data collected from human tutors working in the domain of introductory computer programming, we identify the knowledge tutors use when they reason about the five considerations and show that tutors are consistent in the ways that they use the kinds of knowledge to reason about students' bugs. In this paper we illustrate our findings of tutorial consistency by showing that tutors are consistent in how they reason about bug criticality and bug categories. We suggest some implications of these empirical findings for the construction of intelligent tutoring systems.

The research reported in this paper was cosponsored by the Personnel and Training Division Research Groups, Psychological Sciences Division, Office of Naval Research and the Army Research Institute for the Behavioral and Social Sciences, under Contract No. N00014-82-K-0714, Contract Authority Identification Number 154-492.

1 Introduction: The Problem of Tutorial Consistency

A key issue for designers of Intelligent Tutoring Systems is how to treat students' bugs. Both the research of others (e.g., Collins and Stevens (1976)) and our own work (Littman, Pinto, and Soloway (1985)) suggest that bugs play a central role in tutoring. In a sense, tutors use bugs to drive the tutorial process: bugs help the tutor understand what the student does not understand and they provide a ready forum for communication with students since all students want to fix their bugs. Though most tutors try to help students fix bugs, the skill of expert tutors, and therefore effective Intelligent Tutoring Systems, lies in how they use bugs in their tutorial interventions.

A simple first-order model for using bugs in tutoring would have three steps:

- identify the bug
- look up an appropriate response to the bug in a database of tutorial responses
- deliver the appropriate response to the student.

This three step model of tutorial intervention, which is essentially the model used by CAI systems (Carbonell (1970)), does not require the tutoring system to reason either about what knowledge to teach a student who makes a bug nor about how to teach the knowledge. In a sense, all the tutorial knowledge possessed by such systems is "compiled". The three step model may be appropriate for tutoring students when their bugs do not reflect deep misunderstandings, or when one bug always should get the same intervention. However, it seems unlikely to be effective in domains such as computer programming where students' bugs are often related and may reflect deep misconceptions about how to solve problems or about the constructs of the programming language. In complex domains such as programming, tutors seem to engage in extensive reasoning about how to tutor students who make serious bugs. As an example of the kinds of issues a tutor reasons about when tutoring students in complex domains, consider the ostensibly simple problem of when to deliver tutorial interventions. Two opposite strategies have been proposed:

- The LISP tutor of John Anderson's group (cf. Anderson, Boyle, Farrell, and Reiser (1984)) provides immediate feedback to the student on all bugs the student makes.
- The WEST tutor of Brown and Burton (1982) plays a very conservative "coaching" role with the goal of minimizing interruptions of the student's problem solving. WEST takes a "wait-and-see" attitude to interrupting the student, trying to collect diagnostic information from patterns of bugs.

Since the three step model seems inappropriate for the Intelligent Tutoring Systems that will have to be built for complex domains, and since there appears to be considerable controversy about the generation of tutorial interventions, we decided that it would be useful to study human tutors in an effort to determine how they reason about tutorial interventions for students who make bugs. Our general approach to studying how tutors reason about bugs was to identify several issues that we believe tutors reason about to generate their interventions. From interviews with tutors, and videotapes of interactive tutoring sessions, we identified five main issues tutors reason about when they generate their tutorial interventions. Each of these issues, called a tutorial consideration, influences the tutor's decisions about which bugs to tutor, when to tutor them, and how to tutor them. The five tutorial considerations are:

1. How critical the bugs are
2. What category the bugs fall into
3. What caused the bugs
4. What tutorial goals are appropriate for tutoring the bugs
5. What tutorial interventions would achieve the tutorial goals

From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved.

With this very general set of tutorial considerations in mind we designed a variant of a protocol study (Newell and Simon (1972)) that was intended to present tutors with situations that would lead them to reason about the five tutorial considerations.
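The "compiled" three-step model discussed in the Introduction amounts to a pure table lookup. The sketch below is our own caricature of such a system; the bug identifiers and canned messages are hypothetical illustrations, not taken from any actual tutor.

```python
# Caricature of the "compiled" three-step CAI model: identify the bug,
# look up a canned response, deliver it.  Bug identifiers and messages
# below are hypothetical illustrations, not taken from any real system.
CANNED_RESPONSES = {
    "missing-readln-in-loop":
        "Your loop never reads a new value; add a Readln inside it.",
    "unguarded-average":
        "Guard the average calculation against the case of no valid input.",
}

def three_step_tutor(bug_id):
    """Steps 2 and 3 of the model: look up and deliver a fixed response.
    No reasoning about what to teach, or how to teach it, takes place."""
    return CANNED_RESPONSES.get(bug_id, "No canned response for this bug.")

print(three_step_tutor("unguarded-average"))
```

The point of the caricature is what is missing: nothing in the lookup depends on bug criticality, bug category, cause, tutorial goals, or intervention strategy, which are exactly the five considerations this study examines.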
Tutors were presented with buggy programs actually written by students in an introductory PASCAL programming class and asked to answer questions designed to elicit reasoning about the five tutorial considerations. For example, we asked tutors why they thought the student who wrote each program made the bugs, what goals they had for tutoring the student, and what they would actually do to tutor the student. During our initial analysis of the data set, we have had the goal of simply identifying and describing the kinds of knowledge tutors have and the factors that tutors take into account when they reason about the five tutorial considerations. By abstracting the responses of many tutors to the same questions, we have begun to identify different kinds of knowledge tutors use and the factors that they weigh when they make decisions about the tutorial considerations. Our initial description of the data, therefore, is in terms of:

- tutorial considerations
- kinds of knowledge tutors use in reasoning about tutorial considerations
- factors that comprise the knowledge tutors use

Though we do not yet have a computer program that implements our findings about human tutors, we definitely plan to use the information we acquire from this study to guide our development of Intelligent Tutoring Systems. Since at this point we are trying to develop a descriptive vocabulary that permits us to express tutorial knowledge and reasoning, and to describe such knowledge and reasoning, our current research is more appropriately viewed as theory building than as theory application. Hence, in this paper we present part of the vocabulary and use it to show that tutors are consistent when reasoning about tutoring students' bugs. One of the major concerns of our research has been the problem of consistency of tutorial reasoning.
Because tutors use so many kinds of information to decide how to tutor a student's bug, it seems plausible to hypothesize that different tutors would be inconsistent in the ways they reason about either identical bugs or different bugs. The problem of tutorial consistency is important to designers of Intelligent Tutoring Systems since, if human tutors were entirely inconsistent in their generation of tutorial interventions, using human tutors as models for machine tutors would not be useful. Absence of tutorial consistency would imply that there is no reason to prefer any one method of generating tutorial interventions over any other method on the grounds that human tutors find one method especially effective. Fortunately, there are at least two sources of evidence for tutorial consistency. First, Collins and Stevens (1976), in a study of "super-teachers", identified several Socratic tutorial strategies that their teachers used; many of the strategies identified by Collins and Stevens (1976) found their way into the Socratic WHY tutor (Stevens, Collins, and Goldin (1982)), Woolf's programming tutor for students in introductory programming courses (Woolf (1985)), and Clancey's GUIDON program for teaching the skill of medical diagnosis (Clancey (1983)). Second, our analyses of the data we gathered from human tutors suggest that tutors are consistent in the ways in which they reason about how to tutor students who make bugs. This paper is organized as follows:

- First, in Section 2, we describe the experiment we conducted to collect data about tutorial consistency.
- Second, in Section 3, we present an example which illustrates how two tutors reason in the same way about the same bug.
- Third, in Sections 4 and 5, we describe bug criticality and bug categories and present statistical evidence that tutors are consistent in reasoning about both.
- Finally, in Section 6, we draw some conclusions and implications of our study.
Though we do not present analyses of all of the five considerations tutors take into account in deciding how to tutor students' bugs, the analyses of bug criticality and bug categories illustrate our general findings which apply equally to bug categories, the causes of bugs, tutorial goals, and tutorial interventions. A complete analysis of the consistency of all five types of knowledge is presented in Littman, Pinto, and Soloway (1986).

2 Methods

2.1 Subjects

Eleven Yale University graduate and advanced undergraduate students participated in this study. Each had extensive tutoring experience. The range in tutorial experience was from 150 to over 2000 hours. Each subject could program competently in PASCAL as well as in a variety of other languages.

2.2 Task

Subjects received five buggy programs actually written by introductory programming students along with the same questionnaire about each program. The programs were written in response to the Rainfall Assignment, which was assigned during the fifth week of class. The assignment is shown in Figure 1 and a program that correctly solves the assignment is shown in Figure 2. To reproduce the typical situation a programming tutor faces in introductory PASCAL programming courses, the buggy programs contained an average of 6 bugs. For each student's program, tutors were asked to imagine themselves tutoring the student who wrote the program and to answer each of the questions in the questionnaire. The questionnaires were displayed side-by-side with the buggy programs on an Apollo DN300 multi-window workstation. Subjects typed their answers to each question, pressed a preassigned key to go to the next question, and continued until they were finished. Subjects were allowed to work at their own pace. Most subjects needed at least four hours to complete all the questionnaires. The questionnaire was designed to prompt the tutors for their thoughts as they considered how they would tutor the student who wrote the program.
For example, subjects decided whether a bug would be tutored alone or in a group with other bugs. They also indicated the order in which they would tutor the groups of bugs as well as the goals they had for tutoring each bug and the methods they would use to achieve the goals. While we realize that our experimental design presented subjects with a somewhat artificial situation, we were very encouraged by how engaging our subjects found the task. Subjects took the task seriously, spending as much as 15 hours to complete it. Informal debriefing interviews further convinced us that the tutors felt their responses were valid and would have been essentially the same in a real tutoring session.

The Noah Problem: Noah needs to keep track of the rainfall in the New Haven area to determine when to launch his ark. Write a program so he can do this. Your program should read the rainfall for each day, stopping when Noah types "99999", which is not a data value, but a sentinel indicating the end of input. If the user types in a negative value, the program should reject it, since negative rainfall is not possible. Your program should print out the number of valid days typed in, the number of rainy days, the average rainfall per day over the period, and the maximum amount of rainfall that fell on any one day.

Figure 1: The Rainfall Assignment

Program Rainfall (input, output);
Var DailyRainfall, TotalRainfall, MaxRainfall, Average : Real;
    RainyDays, TotalDays : Integer;
Begin
  RainyDays := 0;
  TotalDays := 0;
  MaxRainfall := 0;
  TotalRainfall := 0;
  Writeln ('Enter Amount of Rainfall');
  Readln (DailyRainfall);
  While (DailyRainfall <> 99999) Do
    Begin
      If DailyRainfall >= 0 Then
        Begin
          TotalRainfall := TotalRainfall + DailyRainfall;
          TotalDays := TotalDays + 1;
          If DailyRainfall > 0 Then
            RainyDays := RainyDays + 1;
          If DailyRainfall > MaxRainfall Then
            MaxRainfall := DailyRainfall
        End
      Else
        Writeln ('Rainfall Must Be Greater Than 0');
      Read (DailyRainfall)
    End;
  If TotalDays > 0 Then
    Begin
      Average := TotalRainfall / TotalDays;
      Writeln ('Average Rainfall is: ', Average:0:2);
      Writeln ('Maximum Rainfall is: ', MaxRainfall:0:2);
      Writeln ('Number of Days is: ', TotalDays);
      Writeln ('Number of Rainy Days is: ', RainyDays)
    End
  Else
    Writeln ('No Valid Days Entered.')
End.

Figure 2: Sample Correct Rainfall Program

2.3 Choice of Bugs for Analysis

For this paper, we analyzed 16 of the 36 bugs in the five programs. The 16 bugs represent the range of bugs in the experiment. Criteria for including bugs in the analyses were:

- Each bug represented a type of bug tutors frequently encounter.
- No more than one of each type of bug was included unless the same bug appeared in two very different contexts.
- Both mundane bugs and interesting bugs were chosen. An example of a mundane bug is failing to include an initialization of a counter variable. An example of an interesting bug is employing a complex IF-THEN construct for what should be a simple update of a counter variable.
- Bugs were included that produce both obvious effects on the behavior of the program (e.g., a missing READLN of the loop-control variable) and bugs that produce subtle effects on the behavior of the program (e.g., initialization of a counter variable to one more than its correct initial value).

2.4 Data Scoring and Reliability of Scoring

Each response of each tutor was evaluated to identify knowledge relevant to each of the five tutorial considerations. In this section we illustrate the scoring of protocols with an example of a tutor's criticality considerations; we also present the criteria for protocol scoring reliability.

Scoring the Data: We illustrate the scoring of the protocol data by showing 1) how Tutor 1's bug criticality rating is derived and 2) how we score the factors the tutor identified in reasoning about bug criticality. Figure 3 shows a bug made by a student who was attempting to solve the Rainfall Assignment. The student spuriously assigned 0 to the variable intended to contain the value of rainfall entered by the user, immediately after the user has entered a value for DailyRainfall.
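Each tutor response is encoded against a fixed set of criticality factors. As a concrete illustration, the sketch below represents one such encoding as a simple record; the factor labels follow the paper's factor list, but the data structure and function names are our own hypothetical sketch, not the authors' implementation.

```python
# Hypothetical encoding of one tutor's response on the scoring template:
# an overall criticality rating plus a checklist of identified factors.
# Factor labels follow the paper; the code itself is our own illustration.
CRITICALITY_SCALE = ("LOW", "MEDIUM", "HIGH")

FACTORS = (
    "Student's Understanding",
    "Impact on the Tutorial Plan",
    "Knowledge Preconditions",
    "Program Behavior Preconditions",
    "Bug Dependencies",
    "Student's Ability to Find and Fix Bug Alone",
    "Student's Motivation",
    "Diagnostic Opportunities",
)

def score_response(rating, identified_factors):
    """Build one tutor's criticality encoding for one bug."""
    assert rating in CRITICALITY_SCALE
    unknown = set(identified_factors) - set(FACTORS)
    assert not unknown, f"unrecognized factors: {unknown}"
    return {"rating": rating,
            "factors": {f: f in identified_factors for f in FACTORS}}

# Tutor 1's response to the spurious-initialization bug
# (matching the LOW CRITICAL coding discussed in the text):
tutor1 = score_response("LOW", ["Student's Understanding",
                                "Program Behavior Preconditions"])
```

Representing a response this way makes the consistency question concrete: two tutors agree on a bug exactly when their records carry the same rating and the same checked factors.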
Our analysis of each tutor's reasoning about bug criticality is in terms of two measures:

- the tutor's criticality rating assigned to the bug based on the tutor's statements and
- the bug criticality factors the tutor identified in reasoning about the bug.

Figure 4 shows the template used to score each tutor's reasoning about bug criticality. The template consists of two parts, a field for the Tutor's Overall Criticality Rating and a list of the factors associated with reasoning about bug criticality.(1) The following quotation shows the statements Tutor 1 made that are relevant to the bug criticality consideration.

Tutor 1: "(1.) [This is a] trivial error ... that must be fixed to get good output. (2.) Simple mistake. (3.) Forgetting that Rainfall was losing its value ..."

The first part of Tutor 1's first sentence and the entire second sentence show that he does not believe that the spurious initialization bug is very critical. As shown in Figure 4, the tutor's response to the bug was coded as LOW CRITICAL, the lowest value on the three point scale we used to score tutors' evaluations of bug criticality. Sentence three shows that Tutor 1 does not believe a deep problem of the Student's Understanding was responsible for the bug; the student simply forgot. Thus, the scoring template contains an "X" in the column for Student's Understanding to show that the tutor identified this factor. Finally, in the second half of the first sentence the tutor says that the bug must be fixed to get good output. This identifies the factor of Program Behavior Preconditions since the bug must be fixed for the program to output correct values.

(1) A description of the meanings of each of the factors is presented in Figure 6 in Section 4.

Reliability of Scoring: The data we analyze in this paper are based on subjective interpretation of tutors' responses. They are not, for example, reaction times or numbers of errors. Rather, the statements tutors made in response to the questionnaires were interpreted in order to produce the data. To assess whether the data derived from the protocol statements accurately reflect the cognitive processes which generated them, such data are normally subjected to reliability analysis. If the interpretations of the protocol responses are sufficiently reliable, then they are judged to reflect cognitive processes of the subjects who produced them. Reliability of encodings of the protocol responses was assessed by two rules:

- If the coder of a response had any question about the correct label for the response, the response was jointly encoded by more than one coder.
- A response was eliminated from the analysis if it could not be encoded, or two or more coders disagreed on the appropriate encoding.

A random sample of approximately 30% of encodings of each kind of knowledge was evaluated by more than one coder. The random sampling of mutually evaluated responses resulted in less than five percent of the data being shifted from one encoding to another.

3 Tutorial Consistency: An Illustration

In this section, we present an example of two tutors reasoning about the same bug. Our intent is to illustrate for the reader the kind of data tutors generated in our study and to provide some intuitions about how we analyzed our protocol data.

Program Rainfall (input, output);
. . .
TotalRainfall := 0;
Writeln ('ENTER AMOUNT OF RAINFALL');
Readln (DailyRainfall);
DailyRainfall := 0;   { BUG: Assignment of 0 to DailyRainfall Clobbers Initial Value }
While (DailyRainfall <> 99999) Do
  Begin
  . . .
  End;

Figure 3: Bug: Assignment of 0 to DailyRainfall Clobbers Initial Value

3.1 Two Tutors Reason About the Same Bug

Figure 3 shows the spurious initialization bug we considered in Section 2.
To illustrate similar reasoning of two tutors about the five tutorial considerations, we present and discuss quotations from their protocols as they reasoned about how to tutor the bug.

Tutorial Consideration 1: Bug Criticality

Neither Tutor 2 nor Tutor 3 felt that the bug shown in Figure 3 was very critical. The following quotations show why both tutors were coded as having the same bug criticality rating:

Tutor 2: "It's a small but annoying and pervasive problem ..."

Tutor 3: "... this does seem like a relatively trivial bug."

Even though the bug interferes seriously with the behavior of the student's program, neither tutor believed it is a "serious" bug; we will see why when we discuss the tutors' reasoning about the causes of the bug.

Tutorial Consideration 2: Bug Category

Both tutors believed that the student who made the bug failed to translate correctly the conceptual object for some variable into its correct name in the program. Instead of initializing the intended variable to 0, the failure to translate the conceptual object into its corresponding code caused the student to initialize the wrong variable, DailyRainfall. The following quotations were the basis of our encoding of the tutors' categorizations of the bug as a failure to translate correctly from conceptual objects to code:

Tutor 2: "Syntactic similarity of the two variable names ..."

Tutor 3: "Just mixed up variable names ..."

The reason the tutors believed the student made the bug identifies the category of bug: namely, those bugs that arise from failures to translate conceptual objects correctly to the code that instantiates the conceptual objects.

Tutorial Consideration 3: Bug Cause

Tutor 2 and Tutor 3 identified essentially the same cause for the bug.

Tutor 2: "... mixing up the purpose of the variables ..."

Tutor 3: "I think the student was confusing TotalRainfall with DailyRainfall ..."

The tutors attributed the cause of the bug to the student's confusing the variable DailyRainfall with another, similarly named, variable. Evidently they felt that the student had correctly identified the conceptual purpose of the two variables, had given them appropriate names, and then confused the two names because they were so similar. We will see evidence for this view in the next quotations, which illustrate the tutors' goals in tutoring the bug.

Tutor's Overall Criticality Rating: LOW CRITICAL

FACTORS IDENTIFIED BY TUTOR

Name of Factor                                  Factor Present
Student's Understanding                         X
Impact on the Tutorial Plan
    Knowledge Preconditions
    Program Behavior Preconditions              X
    Bug Dependencies
    Student's Ability to Find and Fix Bug Alone
    Student's Motivation
    Diagnostic Opportunities

Figure 4: Scoring Tutor 1's Bug Criticality Consideration

Tutorial Consideration 4: Tutorial Goals

Both tutors were interested in teaching the student to use variable names that prevent confusion when coding. The following quotations show that both tutors wanted to teach the student the variable-naming heuristic.

Tutor 2: "I would explain that there seems to be a name confusion ..."

Tutor 3: "Be careful that you name your variables distinctly enough so that you do not get confused about which role they are serving."

Notice that the tutorial goals identified by Tutor 2 and Tutor 3 are reasonable in light of their explanations of the cause of the bug.

Tutorial Consideration 5: Tutorial Interventions

The following quotations show that both tutors wanted to draw the student's attention to the mismatch between the goal the student had for the variable TotalRainfall and what actually happens to it.

Tutor 2: "One could ask a leading 'WHY' question ... asking him to justify his coding ..."

Tutor 3: "I could ask them if they meant to be initializing TotalRainfall instead of DailyRainfall."

Both tutors selected the strategy of juxtaposing for the student the student's intentions, or goals, with the actual code in the program. This general kind of tutorial intervention was extremely popular with our tutors and appears to serve the purpose of forcing the student to identify conflicts between intentions and actions.(2) The tutors' responses to the bug shown in Figure 3 illustrate how two tutors can have essentially the same "perspective" on the same bug. In the next section of the paper, we identify the factors that tutors take into account when they reason about bug criticality and show, statistically, that tutors are consistent in the ways they reason about bug criticality.

4 Bug Criticality

In planning tutorial sessions, tutors make decisions about which bugs to focus on explicitly and which bugs to tutor only as opportunities arise. When our tutors identified bugs that they intended to focus on in their tutorial sessions, they gave reasons that made it clear that they felt that those bugs were more critical than others. As we analyzed tutors' responses to the buggy-program scenario questionnaires, we identified several factors that seemed to play a role in their decisions about which bugs to focus on. For example, tutors focused on bugs that might have been caused by serious misconceptions, bugs that suggested the student lacked important knowledge or skills, and bugs that interfered with the behavior of the program so much that the student would be unable to debug it. In this section of the paper we describe the main factors that our tutors used to reason about bug criticality.
As examples of critical and noncritical bugs, suppose a student writes a solution to the Rainfall Assignment in which the update for the variable containing the total amount of rainfall, TotalRain, is like the fragment of code labelled as BUG 1 in Figure 5. Instead of simply updating the variable TotalRain by adding in the value of DailyRainfall, the student has written the update using a very strange, malformed, IF-THEN statement to "guard" the update. Virtually every one of the tutors in our study judged the malformed update bug to be very critical because the bug could be symptomatic of a deep misconception about how to update variables. On the other hand, most novice programmers leave output variables unguarded against the case of no valid input; BUG 2 in Figure 5 is an unguarded output bug. Our tutors uniformly considered BUG 2 to be uncritical because it does not suggest the student who wrote the program has any deep misunderstandings about programming. The student probably just forgot to test this case.

. . .
Writeln ('ENTER AMOUNT OF RAINFALL');
Read (DailyRainfall);
While (DailyRainfall <> Sentinel) Do
  Begin
    Writeln ('ENTER AMOUNT OF RAINFALL');
    . . .
    { BUG 1: Malformed Update of TotalRain }
    If TotalRain = Tot + DailyRainfall
      Then Tot := TotalRain;
    . . .
  End;
. . .
{ BUG 2: Output of TotalRain Unguarded on No Input }
Writeln ('The Total Rainfall is: ', TotalRain);

Figure 5: Critical Bug: Severely Malformed Update

(2) This strategy was identified by Collins and Stevens (1976) as a central technique of the "Socratic Method" and formed the basis of the tutorial strategies implemented in the WHY tutor.

Figure 6 shows the major factors and subfactors we used to score tutors' reasoning about bug criticality. Our analyses of the tutors' data revealed two major factors tutors take into account when reasoning about the bug criticality tutorial consideration:

- What the bug implies about the Student's Understanding
- The bug's Impact on the Tutorial Plan
The major factor of Student's Understanding includes knowledge the student should already have and knowledge the student should acquire by doing the current assignment. For example, one tutor was scored as using this factor when she said the following about a student who did not include a Readln statement in the loop to get the new value of DailyRainfall: "The student doesn't understand that the loop is driven by input and therefore must contain an instruction to get input."

The major factor of Impact on the Tutorial Plan, which is more complex than Student's Understanding, is comprised of six subfactors. We present quotations to illustrate two main subfactors.

- Knowledge Preconditions: used to justify tutoring one bug after another bug. The following quotation shows the tutor reasoning that tutoring one bug was a necessary precondition to tutoring some other bugs. "These [bugs] make sense to follow that [bug] ... we can presume that now the student has a full understanding of initialization [the problem tutored first]."

- Program Behavior Preconditions: used to justify tutoring a bug first. In this quotation the tutor says that he started with a particular bug because fixing that bug was necessary to get the program to run even reasonably well. "It's important in terms of getting the program to run in any form, thus gets precedence over later bugs."

Further discussion of tutors' reasoning about factors that have an impact on the tutorial plan can be found in Littman, Pinto, and Soloway (1985).

4.1 Tutors' Agreement on Criticality of Bugs

In this section, we identify two major findings that illustrate consistency of tutorial reasoning about the bug criticality tutorial consideration.

- First, tutors assign consistent criticality ratings to bugs.
- Second, tutors agree on the factors and subfactors for why bugs are critical.

Student's Understanding: What problem solving and programming concepts does the student know?

Impact on the Tutorial Plan: How should the tutorial be formulated?

- Knowledge Preconditions: Knowledge the student must have to learn key material that the tutor intends to teach during the tutorial session.
- Program Behavior Preconditions: Does the program's current behavior obstruct the tutor's plan for tutoring a bug the tutor wants to address?
- Bug Dependencies: Bugs that, together, interact to produce program behavior.
- Student's Ability to Find and Fix Bug Alone: The student's ability to handle a bug without the tutor's assistance.
- Student's Motivation: The tutor's assessment of whether the student needs to be handled with "kid gloves".
- Diagnostic Opportunities: Would addressing this bug provide the tutor with useful information about the student's programming knowledge and programming skills?

Figure 6: Factors Affecting Bug Criticality

If tutors did not agree on the criticality of bugs, then the search for consistency of reasoning about bug criticality would be compromised. Our analyses show that tutors agreed very strongly about which of the 16 bugs were high critical, which bugs were medium critical, and which bugs were low critical. A Chi-squared analysis showed statistical significance of the consistency of tutors' reasoning about bug criticality (χ² = 119.6, df = 30, p < .01). It is possible that tutors would agree about bug criticality, yet would not identify the same factors and subfactors in their reasoning. Our data, however, show that tutors agree on the factors and subfactors, as shown in Figure 6, for why a particular bug is of high, medium, or low criticality. Chi-squared analysis of tutors' consistency in identifying particular factors and subfactors associated with particular bugs was statistically significant (χ² = 209, df = 140, p < .01). In summary, we have found that tutors see some bugs as being more critical than others.
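For readers who want the mechanics behind test statistics like those just reported, the sketch below computes a Chi-squared statistic from a contingency table in plain Python. The counts are fabricated purely for illustration (rows: three bugs; columns: how many tutors rated each bug high, medium, or low critical); they are not the study's data.

```python
# Mechanics of a Chi-squared consistency test.  The table below is
# fabricated for illustration only; it is NOT the study's data.
observed = [
    [8, 2, 1],   # bug A: counts of high / medium / low criticality ratings
    [2, 7, 2],   # bug B
    [1, 2, 8],   # bug C
]

def chi_squared(table):
    """Return the Chi-squared statistic and degrees of freedom."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand_total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            # expected count under the hypothesis of no association
            expected = row_totals[i] * col_totals[j] / grand_total
            stat += (obs - expected) ** 2 / expected
    dof = (len(table) - 1) * (len(table[0]) - 1)
    return stat, dof

stat, dof = chi_squared(observed)
print(f"chi2 = {stat:.2f}, df = {dof}")  # chi2 = 20.18, df = 4
```

A statistic this far above the .05 critical value for 4 degrees of freedom (9.49) would, like the tests reported above, indicate agreement well beyond chance.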
In addition, statistical analyses of their reasoning about bug criticality show that tutors are consistent both with respect to the criticality of bugs and the factors and subfactors that are associated with the criticality of bugs. 5 Bug Categories When students attempt to solve the Rainfall Assignment, their first syntactically correct programs contain approximately six bugs each (Johnson, Soloway, Cutler, and Draper, 1983.) Instead of reasoning about each bug individually, tutors appear to use knowledge about kindo of bugs to help them determine both why the student made the bug and what to do to help the student. For example, if a student solving the Rain fail Assignment does not protect the calculation of the average against division by zero and also neglects to protect the output of the average against the case of no input data, a tutor might categorize both bugs as “missing boundary guards” . Tutors appear to categorize bugs according to a coarse model of the program generation process and make a gross distinction between bugs that arise during program generation and bugs that arise during program verification; furthermore, they break the program generation phase into three subphases. We now identify the three subphases of the generation category and present a quotation for each that shows the sort of statement that would be scored as referring to the subphase. l Decomposition: Figuring out what to do to solve problem. In the following quotation the tutor shows he t,hinks the student failed to decompose correctly the problem of getting values of the rainfall variable into the two components of getting an initial value and getting each new value in the loop. “I think the student knew they had read in Da i IyRa i nfa I I once and thought that would be enough.” l Mapping: Translating one level of problem analysis into another level (e.g., translating problem goals into plans to achieve the goals.) 
The next quotation illustrates a tutor responding to a student who failed to protect the accumulator for Tota I Ra i nf a I I against adding in the sentinel value, 99999. To compensate for adding in 99999, the student subtracted 99999 from Tota I Ra i nfa I I just before calculating Ave rageRa i nf a I I. “The student plans to add in the sentinel (MXZI) and then remove it later. I think this is very bad.” l Composition: Coordinating eolutiono for different goals. This quotation shows that the tutor believed the student failed to compose the main loop correctly with other actions the student wanted the 3While tutors did identify some subphases of the Verification category, subcategories of Verification were not stable and so we do not report them here. COGNITIVE MODELLING AND EDUCATION / 325 program to take. The student’s bug was to place below the loop the update of the variable Second, when we have identified tutorial patterns that are accumulating the total amount of rainfall. educationally effective, we can build ITS’s which incorporate them and avoid ineffective patterns. We will then be in a “Common problem - Things outside the loop which should be inside [the loop.)” position to provide the same high quality tutorial experiences to every student who has access to a computer. 5.1 Tutors’ Agreement on Bug Categorization In this section, we present two main findings for bug categorization: l First, tut’ors agree in categorizing bugs as Generation or Verification bugs. l Second, tutors agree in categorizing bugs as either Decomposition, Mapping, or Composition bugs. Tutors were consistent in their categorization of bugs as arising during program generation or program verification, which constitutes the coarsest distinction of the bug category system. The consistency of tutors’ categorizations of bugs as generation or verification bugs is demonstrated by- the statistically significant Chi-square value for the test (x2 = 25.9, df = 15, p < .05). 
The major category, program Generation, is composed of three subphases: Decomposition, the attempt to determine how to solve the problem; Mapping, translating one level of problem solution into a more concrete level; and Composition, recombining the parts of a problem solution. The statistically significant Chi-square value shows that tutors agreed in categorizing bugs in these three phases of program generation (χ² = 162.3, df = 30, p < .01). In summary, tutors appear to categorize bugs into groups according to a coarse model of program generation and verification. When tutors' statements about bugs are analyzed to see how they categorized the bugs, we find that tutors consistently describe bugs in terms of a coarse model of program generation.

Our plans for the immediate future focus on identifying the patterns of tutorial reasoning that are educationally effective and building an Intelligent Tutoring System for programming which makes use of them. Our long range plans are directed toward empirically evaluating the effectiveness of the ITS for programming and using the tutorial principles we discover from our studies of human tutors to build ITS's for other domains.

7 References

Anderson, J., Boyle, C., Farrell, R., and Reiser, B. Cognitive principles in the design of computer tutors. Technical Report, Advanced Computer Tutoring Project, Carnegie-Mellon University, 1984.

Burton, R. and Brown, J. S. An investigation of computer coaching for informal learning activities. In Intelligent tutoring systems, D. Sleeman and J. S. Brown (eds.), Academic Press, London, 1982.

Carbonell, J. AI in CAI: An artificial intelligence approach to computer assisted instruction. IEEE Transactions on Man-Machine Systems, MMS-11, 4, 1970.

Clancey, W. Guidon. Journal of computer-based instruction, Summer 1983, Vol. 10, Nos. 1 & 2, 8-15.

Collins, A. and Stevens, A. Goals and strategies of interactive teachers. Technical Report #3518, 1976, Bolt, Beranek, and Newman, Cambridge, MA.
Johnson, L., Soloway, E., Cutler, B., and Draper, S. Bug catalogue: I. Technical Report #286, 1983, Department of Computer Science, Yale University, New Haven, CT.

6 Conclusions, Implications and Future Directions

In this paper we have identified five tutorial considerations that tutors take into account when they reason about how to tutor students' bugs, and we have provided vocabulary that permits us to describe and analyze patterns in tutors' reasoning about bug criticality and bug categories. In addition, we have presented statistical analyses that show that tutors are consistent in how they reason about bug criticality and bug categories.

There are two main implications of our ability to identify and describe consistency of tutorial reasoning. First, our data showing consistency of reasoning about individual considerations, such as bug criticality, suggest that we will be able to identify and describe consistent patterns of tutorial reasoning that coordinate several tutorial considerations. If we can identify and describe such patterns then we can test their educational effectiveness by selectively including various combinations of patterns into the same basic ITS, providing tutorial intervention to students with the modified ITS's, and empirically evaluating the effectiveness of the modified ITS's. Since intervention would be given to all students by the same basic ITS, differences in performance would be attributable to the specific tutorial patterns included in the ITS.

Littman, D., Pinto, J., Soloway, E. Observations on tutorial expertise. Proceedings of IEEE Conference on Expert Systems in Government, Washington, D.C., 1985.

Littman, D., Pinto, J., Soloway, E. Consistency of tutorial reasoning. In preparation.

Newell, A. and Simon, H. Human problem solving. Prentice-Hall, Englewood Cliffs, NJ, 1972.

Spohrer, J. and Soloway, E. Analyzing the high-frequency bugs in novice programs. To appear in: Workshop on empirical studies of programmers, E. Soloway and S.
Iyengar (eds.), Ablex, Inc., 1986.

Stevens, A., Collins, A., and Goldin, S. Misconceptions in students' understanding. In Intelligent tutoring systems, D. Sleeman and J. S. Brown (eds.), Academic Press, London, 1982.

Woolf, B. Context dependent planning in a machine tutor. Doctoral Dissertation, University of Massachusetts, Amherst, MA, 1984.
Constraint Propagation Algorithms for Temporal Reasoning

Marc Vilain, BBN Laboratories, 10 Moulton St., Cambridge, MA 02238
Henry Kautz, University of Rochester, Computer Science Dept., Rochester, NY 14627

Abstract: This paper considers computational aspects of several temporal representation languages. It investigates an interval-based representation, and a point-based one. Computing the consequences of temporal assertions is shown to be computationally intractable in the interval-based representation, but not in the point-based one. However, a fragment of the interval language can be expressed using the point language and benefits from the tractability of the latter.¹

The representation of time has been a recurring concern of Artificial Intelligence researchers. Many representation schemes have been proposed for temporal reasoning; of these, one of the most attractive is James Allen's algebra of temporal intervals [Allen 83]. This representation scheme is particularly appealing for its simplicity and for its ease of implementation with constraint propagation algorithms. Reasoners based on this algebra have been put to use in several ways. For example, the planning system of Allen and Koomen [1983] relies heavily on the temporal algebra to perform reasoning about the ordering of actions. Elegant approaches such as this one may be compromised, however, by computational characteristics of the interval algebra. This paper concerns itself with these computational aspects of Allen's algebra, and of a simpler algebra of time points. Our perspective here is primarily computation-theoretic. We approach the problem of temporal representation by asking questions of complexity and tractability. In this light, this paper examines Allen's interval algebra, and the simpler algebra of time points. The bulk of the paper establishes some formal results about the temporal algebras.
In brief these results are:

• Determining consistency of statements in the interval algebra is NP-hard, as is determining all consequences of these statements. Allen's polynomial-time constraint propagation algorithm is sound but not complete for these tasks.

• In contrast, constraint propagation is sound and complete for computing consistency and consequences of assertions in the time point algebra. It operates in O(n³) time and O(n²) space.

• A restricted form of the interval algebra can be formulated in terms of the time point algebra. Constraint propagation is sound and complete for this fragment.

Throughout the paper, we consider how these formal results affect practical Artificial Intelligence programs.

¹This research was supported in part by the Defense Advanced Research Projects Agency, under contracts N00014-85-C-0079 and N00014-77-C-0378.

The Interval Algebra

Allen's interval algebra has been described in detail in [Allen 83]. In brief, the elements of the algebra are relations that may exist between intervals of time. Because the algebra allows for indefiniteness in temporal relations, it admits many possible relations between intervals (2¹³ in fact). But all of these relations can be expressed as vectors of definite simple relations, of which there are only thirteen.² The thirteen simple relations, whose definitions appear in Figure 1, precisely characterize the relative starting and ending points of two temporal intervals. If the relation between two intervals is completely defined, then it can be exactly described with a simple relation. Alternatively, vectors of simple relations introduce indefiniteness in the description of how two temporal intervals relate. Vectors are interpreted as the disjunction of their constituent simple relations.
Figure 1: Simple relations in the interval algebra — A BEFORE B (B AFTER A), A MEETS B (B MET-BY A), A OVERLAPS B (B OVERLAPPED-BY A), A STARTS B (B STARTED-BY A), A DURING B (B CONTAINS A), A ENDS B (B ENDED-BY A), A EQUALS B (B EQUALS A).

Two examples will serve to clarify these distinctions (please refer to Figure 2). Consider the simple relations BEFORE and AFTER: they hold between two intervals that strictly follow each other, without overlapping or meeting. The two differ by the order of their arguments: today John ate his breakfast BEFORE he ate his lunch, and he ate his lunch AFTER he ate his breakfast. To illustrate relation vectors, consider the vector (BEFORE MEETS OVERLAPS). It holds between two intervals whose starting points strictly precede each other, and whose ending points strictly precede each other. The relation between the ending point of the first interval and the starting point of the second is left ambiguous. For instance, say this morning John started reading the paper before starting breakfast, and he finished the paper before his last sip of coffee. If we didn't know whether he was done with the paper before starting his coffee, at the same time as he started it, or after, we would then have: PAPER (BEFORE MEETS OVERLAPS) COFFEE.

If V1 = (BEFORE MEETS OVERLAPS) and V2 = (BEFORE MEETS), then the product of V1 and V2 is V1 × V2 = (BEFORE). As with addition, the multiplication of two vectors is computed by inspecting their constituent simple relations. The constituents are pairwise multiplied by following a simplified multiplication table, and the results are combined to produce the product of the two vectors.

²In fact, these thirteen simple relations can in turn be expressed in terms of only one truly primitive relation, using universally and existentially quantified expressions. For details, see [Allen & Hayes 85].

From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved.
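These two operations are easy to sketch if a vector is modeled as a set of simple relations (our own illustration; the composition table below contains only the handful of entries needed for the running example, not Allen's full 13 × 13 table):

```python
# Vector addition and multiplication in the interval algebra, with vectors
# modeled as frozensets of simple-relation names.  COMP is a deliberately
# partial composition table: COMP[(r1, r2)] lists the simple relations that
# may hold between A and C when A r1 B and B r2 C.
COMP = {
    ("BEFORE",   "BEFORE"): {"BEFORE"},
    ("BEFORE",   "MEETS"):  {"BEFORE"},
    ("MEETS",    "BEFORE"): {"BEFORE"},
    ("MEETS",    "MEETS"):  {"BEFORE"},
    ("OVERLAPS", "BEFORE"): {"BEFORE"},
    ("OVERLAPS", "MEETS"):  {"BEFORE"},
}

def add(v1, v2):
    """Addition: intersect the constituent simple relations."""
    return v1 & v2

def multiply(v1, v2):
    """Multiplication: union of the pairwise simple-relation products."""
    out = set()
    for r1 in v1:
        for r2 in v2:
            out |= COMP[(r1, r2)]
    return frozenset(out)

v1 = frozenset({"BEFORE", "MEETS", "OVERLAPS"})
print(add(v1, frozenset({"OVERLAPS", "STARTS", "DURING"})))  # the vector (OVERLAPS)
print(multiply(v1, frozenset({"BEFORE", "MEETS"})))          # the vector (BEFORE)
```

The two calls reproduce the paper's examples: the sum collapses to (OVERLAPS), and the product collapses to (BEFORE).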
See [Allen 83] for details. Returning to our formal discussion, we note that the interval algebra is principally defined in terms of vectors. Although simple relations are an integral part of the formalism, they figure primarily as a convenient way of notating vector relations. The mathematical operations defined over the algebra are given in terms of vectors; in a reasoner built on the temporal algebra, all user assertions are made with vectors. Two operations, an addition and a multiplication, are defined over vectors in the interval algebra. Given two different vectors describing the relation between the same pair of intervals, the addition operation "intersects" these vectors to provide the least restrictive relation that the two vectors together admit. The need to add two vectors arises from situations where one has several independent measures of the relation of two intervals. These measures are combined by summing the relation vectors for the measures. For example, say the relation between intervals A and B has been derived by two valid measures as being both V1 = (BEFORE MEETS OVERLAPS) and V2 = (OVERLAPS STARTS DURING). To find the relation between A and B that is implied by V1 and V2, the two vectors are summed:

Figure 3: Intervals whose relations are to be multiplied — R<A,B> = (BEFORE MEETS OVERLAPS), R<B,C> = (BEFORE MEETS), yielding R<A,C> = (BEFORE).

Determining Closure in the Interval Algebra

In actual use, Allen's interval algebra is used to reason about temporal information in a specific application. The application program encodes temporal information in terms of the algebra, and asserts this information in the database of the temporal reasoner. This reasoner's job is then to compute those temporal relations which follow from the user's assertions. We refer to this process as completing the closure of the user's assertions.
Figure 2: Examples of simple relations and relation vectors — simple relations: Breakfast BEFORE Lunch, Lunch AFTER Breakfast; relation vector: PAPER (BEFORE MEETS OVERLAPS) COFFEE.

V1 + V2 = (OVERLAPS). Algorithmically, the sum of two vectors is computed by finding their common constituent simple relations. Multiplication is defined between pairs of vectors that relate three intervals A, B, and C. More precisely, if V1 relates intervals A and B, and V2 relates B and C, the product of V1 and V2 is the least restrictive relation between A and C that is permitted by V1 and V2. Consider, for example, the situation in Figure 3.

In Allen's model, closure is computed with a constraint propagation algorithm. The operation of this forward-chaining algorithm is driven by a queue. Every time the relation between two intervals A and B is changed, the pair <A,B> is placed on the queue. The algorithm, shown in Figure 4, operates by removing pairs from the queue. For every pair <A,B> that it removes, the algorithm determines whether the relation between A and B can be used to constrain the relation between A and other intervals in the database, or between B and these other intervals. If a new relation can be successfully constrained, then the pair of intervals that it relates is in turn placed on the queue. The process terminates when no more relations can be constrained. As Allen suggests [Allen 83], this constraint propagation algorithm runs to completion in time polynomial with the number of intervals in the temporal database. He provides an estimate of O(n²) calls to the Propagate procedure. A more fine-grained analysis reveals that when the algorithm runs to completion, it will have performed O(n³) multiplications and additions of temporal relation vectors.

Theorem 1: Let I be a set of n intervals about which m assertions have been added with the Add procedure. When invoked, the Close procedure will run to completion in O(n³) time.
Let Table be a two-dimensional array, indexed by intervals, in which Table[i,j] holds the relation between intervals i and j. Table[i,j] is initialized to (BEFORE MEETS ... AFTER), the additive identity vector consisting of all thirteen simple relations, except for Table[i,i], which is initialized to (EQUAL). Let Queue be a FIFO data structure that will keep track of those pairs of intervals whose relation has been changed. Let Intervals be a list of all intervals about which assertions have been made.

To Add(R<i,j>)   /* R<i,j> is a relation being asserted between i and j. */
begin
  Old ← Table[i,j];
  Table[i,j] ← Table[i,j] + R<i,j>;
  If Table[i,j] ≠ Old then Place <i,j> on Queue;
  Intervals ← Intervals ∪ {i, j};
end;

To Close   /* Computes the closure of assertions added to the database. */
While Queue is not empty do
begin
  Get next <i,j> from Queue;
  Propagate(i,j);
end;

To Propagate   /* Called to propagate the change to the relation between intervals I and J to all other intervals. */
For each interval K in Intervals do
begin
  Temp ← Table[I,K] + (Table[I,J] × Table[J,K]);
  If Temp = ∅ then {signal contradiction};
  If Table[I,K] ≠ Temp then Place <I,K> on Queue;
  Table[I,K] ← Temp;
  Temp ← Table[K,J] + (Table[K,I] × Table[I,J]);
  If Temp = ∅ then {signal contradiction};
  If Table[K,J] ≠ Temp then Place <K,J> on Queue;
  Table[K,J] ← Temp;
end;

Figure 4: The constraint propagation algorithm

³Most of the theorems in this paper have rather long proofs. For this reason, we have restricted ourselves here to providing only proof sketches.

Proof: (Sketch)³ A pair of intervals <i,j> is entered on Queue when its relation, stored in Table[i,j], is non-trivially updated. It is easy to show that no more than O(n²) pairs of intervals <i,j> are ever entered onto the queue. This is because there are only O(n²) relations possible between the n intervals, and because each relation can only be non-trivially updated a constant number of times.
Further, every time a pair <i,j> is removed from Queue, the algorithm performs O(n) vector additions and multiplications (in the body of the Propagate procedure). Hence the time complexity of the algorithm is O(n · n²) = O(n³) vector operations. The vector operations can be considered here to take constant time. By encoding vectors as bit strings, addition can be performed with a 13-bit integer AND operation. For multiplication, the complexity is actually O(|V1| · |V2|), where |V1| and |V2| are the "lengths" of the two vectors to be multiplied (i.e., the number of simple constituents in each vector). Since vectors contain at most 13 simple constituents, the complexity of multiplication is bounded, and the idealization of multiplication as operating in constant time is acceptable.

Note that the polynomial time characterization of the constraint propagation algorithm of Figure 4 is somewhat misleading. Indeed, Allen [1983] demonstrates that the algorithm is sound, in the sense that it never infers an invalid consequence of a set of assertions. However, Allen also shows that the algorithm is incomplete: he produces an example in which the algorithm does not make all the inferences that follow from a set of assertions. He suggests that computing the closure of a set of temporal assertions might only be possible in exponential time. Regrettably, this appears to be the case. As we demonstrate in the following paragraphs, computing closure in the interval algebra is an NP-hard problem.

Intractability of the Interval Algebra

To demonstrate that computing the closure of assertions is NP-hard, we first show that determining the consistency (or satisfiability) of a set of assertions is NP-hard. We then show that the consistency and closure problems are equivalent.

Theorem 2: Determining the satisfiability of a set of assertions in the interval algebra is NP-hard.

Proof: (Sketch) This theorem can be proven by reducing the 3-clause satisfiability problem (or 3SAT) to the problem of determining satisfiability of assertions in the interval algebra. To do this, we construct a (computationally trivial) mapping between a formula in 3-SAT form and an equivalent encoding of the formula in the interval algebra. Briefly, this is done by creating for each term P in the formula, and its negation -P, a pair of intervals, P and NOTP. These intervals are then related to a "truth determining" interval MIDDLE: intervals that fall before MIDDLE correspond to false terms, and those that fall after MIDDLE correspond to true terms. The original formula is then encoded into assertions in the algebra; this can be done (deterministically) in polynomial time. The encoding proceeds clause by clause. For each clause P v Q v R, special intervals are created. These intervals are related to the literals' intervals P, Q, and R in such a way that at most two of these intervals can be before MIDDLE (which makes them false). The other (or others) can fall after MIDDLE (which makes them true). It can then be shown that the original formula has a model just in case the interval encoding has one too. Satisfiability of a 3-SAT formula could thus be established by determining the satisfiability of the corresponding interval algebra assertions. Since the former problem is NP-complete, the latter one must be (at least) NP-hard.

The following theorem extends the NP-hard result for the problem of determining satisfiability of assertions in the interval algebra to the problem of determining closure of these assertions.

Theorem 3: The problems of determining the satisfiability of assertions in the interval algebra and determining their closure are equivalent, in that there are polynomial-time mappings between them.

Proof: (Sketch) First we show that determining closure follows readily from determining consistency.
To do so, assume the existence of an oracle for determining the consistency of a set of assertions in the interval algebra. To determine the closure of the assertions, we run the oracle thirteen times for each of the O(n²) pairs <i,j> of intervals mentioned in the assertions. Specifically, each time we run the oracle on a pair <i,j>, we provide the oracle with the original set of assertions and the additional assertion i (R) j, where R is one of the thirteen simple relations. The relation vector that holds between i and j is the one containing those simple relations that the oracle didn't reject.

To show that determining consistency follows from determining closure, assume the existence of a closure algorithm. To see if a set of assertions is consistent, run the algorithm, and inspect each of the O(n²) relations between the n intervals mentioned in the assertions. The database is inconsistent if any of these relations is the inconsistent vector: this is the vector composed of no constituent simple relations.

The two preceding theorems demonstrate that computing the closure of assertions in the interval algebra is NP-hard. This result casts great doubts on the computational tractability of the algebra, as no NP-hard problem is known to be solvable in less than exponential time.

Consequences of Intractability

Several authors have described exponential-time algorithms that compute the closure of assertions in the interval algebra, or some subset thereof. Valdes-Perez [1986] proposes a heuristically pruned algorithm which is sound and complete for the full algebra. The algorithm is based on analysis of set-theoretic constructions. Malik & Binford [1983] can determine closure for a fraction of the interval algebra with the exponential Simplex algorithm. As we shall show below, their method is actually more powerful than need be for the fragment that they consider.
Even though the interval algebra is intractable, it isn't necessarily useless. Indeed, it is almost a truism of Artificial Intelligence that all interesting problems are computationally at least NP-hard (or worse)! There are several strategies that can be adopted to put the algebra to work in practical systems.

The first is to limit oneself to small databases, containing on the order of a dozen intervals. With a small database, the asymptotically exponential performance of a complete temporal reasoner need not be noticeably poor. This is in fact the approach taken by Malik and Binford to manage the exponential performance of their Simplex-based system. Unfortunately, it can be very difficult to restrict oneself to small databases, since clustering information in this way necessarily prevents all but the simplest interrelations of intervals in separate databases.

Another strategy is to stick to the polynomial-time constraint propagation closure algorithm, and accept its incompleteness. This is acceptable for applications which use a temporal database to notate the relations between events, but don't particularly require much inference from the temporal reasoner. For applications which make heavy use of temporal reasoning, however, this may not be an option.

Finally, an alternative approach is to choose a temporal representation other than the full interval algebra. This can be either a fragment of the algebra, or another representation altogether. We pursue this option below.

A Point Temporal Algebra

An alternative to reasoning about intervals of time is to reason about points of time. Indeed, an algebra of time points can be defined in much the same way as was the algebra of time intervals. As with intervals, points are related to each other through relation vectors which are composed of simple point relations. These primitive relations are defined in Figure 5.
As with the interval algebra, the point temporal algebra possesses addition and multiplication operations. These operations, whose tables are given in the Appendix, mirror the operations in the interval algebra. Addition is used to combine two different measures of the relation of two points. Multiplication is used to determine the relation between two points A and B, given the relations between each of A and B and some intermediate point C.

Figure 5: Simple point relations — A PRECEDES B, A SAME B, A FOLLOWS B.

Computing Closure in the Point Algebra

As was the case with intervals, determining the closure of assertions in the point algebra is an important operation. Fortunately, the point algebra is sufficiently simple that closure can be computed in polynomial time. To do so, we can directly adapt the constraint propagation algorithm of Figure 4. Simply replace the interval vector addition and multiplication operations with point additions and multiplications, and run the algorithm with point assertions instead of interval assertions. As before, the algorithm runs to completion in O(n³) time, where n is the number of points about which assertions have been made. As with the interval algebra, the algorithm is sound: any relation that it infers between two points follows from the user's assertions. This time, however, the algorithm is complete. When it terminates, the closure of the point assertions will have been correctly computed.

We prove completeness by referring to the model theory of the time point algebra. In essence, we consider any database over which the algorithm has been run, and construct a model for any possible interpretation of the database. If the database is indefinite, a model must be constructed for each possible resolution of the indefiniteness.⁴ We choose the real numbers to model time points. A model of a database of time points is simply a mapping between those time points and some corresponding real numbers.
The relations between time points are mapped to relations between real numbers in the obvious way. For example, if time point A precedes time point B in the database, then A's corresponding number is less than B's.

Theorem 4: The constraint propagation algorithm is complete for the time point algebra. That is, a model can be constructed for any interpretation of the processed database.

Proof: (Sketch) We first note that the algorithm partitions the database into one or more partial order graphs. After the algorithm is run, each node in a graph corresponds to a cluster of points. These are all points related by the vector (SAME); note that the algorithm computes the transitive closure of (SAME) assertions. Arcs in the graph either indicate precedence (the vectors (PRECEDES) or (PRECEDES SAME), or their inverses) or disequality (the vector (PRECEDES FOLLOWS)). At the bottom of each graph is one or more "bottom" nodes: nodes which are preceded by no other node. Further, when the algorithm has run to completion, the graphs are all consistent, in the following two senses. First, all points are linearly ordered: there is no path from any point in a graph back to itself that solely traverses precedence arcs (time doesn't curve back on itself). Second, no two points that are in the same cluster were asserted to be disequal with the (PRECEDES FOLLOWS) vector. If the user had added any assertions that contradicted these consistency criteria, the algorithm would have signalled the contradiction. Note that all of the preceding properties can be shown with simple inductive proofs by considering the algorithm and the addition and multiplication tables.

⁴This demonstrates completeness in the following sense. If there were an interpretation of the processed database for which no model could be constructed, the algorithm would be incomplete: it would have failed to eliminate a possible interpretation prohibited by the original assertions.
The model construction proceeds by picking a cluster of points (i.e., a node) at the "bottom" of some graph and assigning all of its constituent points to some real number. The cluster is then removed from the graph, and the process proceeds on with another real number (greater than the first) and another cluster (either in the same graph or in another one). The process is complicated somewhat because some clusters may be "equal" to other clusters (their constituent points may be related by some vector containing the SAME relation). For these cases it is possible to "collapse" several (zero, one, or more) of these clusters together, and assign their constituent points to the same real number. Some other clusters may be "disequal". For these, we must just make sure never to "collapse" them together. Because the choice of which "bottom" node to remove and which clusters to collapse is non-deterministic, the model construction covers all possible interpretations of the database.

Relating the Interval and Point Algebras

The tractability of the point algebra makes it an appealing candidate for representing time. Indeed, many problems that involve temporal sequencing can be formulated in terms of simple points of time. This approach is taken by any of the planning programs that are based on the situation calculus, the patriarch of these being STRIPS [Fikes & Nilsson 71]. However, as many have pointed out, time points as such are inadequate for representing many real phenomena. Single time points by themselves aren't sufficient to express natural language semantics [Allen 84], and they are very inconvenient (if not useless) for modelling many natural events and actions [Schmolze 86]. For these tasks, an interval-based time representation is necessary. Fortunately, many interval relations can be encoded in the point algebra.
This is accomplished by considering intervals as defined by their endpoints, and by encoding the relation between two intervals as relations between their endpoints. For example, the interval relation A (DURING) B can be encoded as several point assertions: A- (FOLLOWS) B-, A+ (PRECEDES) B+, A- (PRECEDES) A+, B- (PRECEDES) B+, where A- denotes the starting endpoint of interval A, A+ denotes its finishing endpoint, and similarly for B. This scheme captures all unambiguous relations between intervals, that is, all relations that can be expressed using vectors that contain only one simple constituent. It can also capture many ambiguous relations, but not all. One can represent ambiguity as to the pairwise relation of endpoints, but one can not represent ambiguity as to the relation of whole intervals. The vector (BEFORE MEETS OVERLAPS) for example can be encoded as point assertions, but the vector (BEFORE AFTER) can not. See Figure 6.

Figure 6: Translation of interval algebra to point algebra — A (BEFORE MEETS OVERLAPS) B translates to A- (PRECEDES) B-, A- (PRECEDES) A+, A+ (PRECEDES) B+, B- (PRECEDES) B+; A (BEFORE AFTER) B has no equivalent point form.

The fragment of the interval algebra that can be translated to the point algebra benefits from all the computational advantages of the latter. In particular, the polynomial-time constraint propagation algorithm is sound and complete for the fragment. This is the interval representation method that Simmons uses in his geological reasoning program [Simmons 83, and personal communication]. This fragment of the interval algebra is also the one used by Malik and Binford [1983] in their spatio-temporal reasoning program. In their case, though, reasoning is performed with the exponential Simplex algorithm. This use of the general Simplex procedure is not strictly necessary, though, since the problem could be solved by the considerably cheaper constraint propagation algorithm.
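The endpoint translation and the point-algebra closure together can be sketched as follows (a minimal sketch of the Figure 4 algorithm adapted to points; the frozenset representation and function names are our own, and A (DURING) B is encoded by its endpoints exactly as in the text):

```python
# Point-algebra closure by constraint propagation, with relation vectors
# modeled as frozensets over the simple relations "<" (PRECEDES),
# "=" (SAME), and ">" (FOLLOWS).  Addition is set intersection;
# composition of simple relations follows the Appendix tables.
from itertools import product

FULL = frozenset({"<", "=", ">"})
COMP = {
    ("<", "<"): {"<"}, ("<", "="): {"<"}, ("<", ">"): FULL,
    ("=", "<"): {"<"}, ("=", "="): {"="}, ("=", ">"): {">"},
    (">", "<"): FULL,  (">", "="): {">"}, (">", ">"): {">"},
}
INVERSE = {"<": ">", "=": "=", ">": "<"}

def compose(v1, v2):
    out = set()
    for r1, r2 in product(v1, v2):
        out |= COMP[(r1, r2)]
    return frozenset(out)

def closure(points, assertions):
    """assertions maps a pair (i, j) to a vector; returns the closed table."""
    table = {(i, j): FULL for i in points for j in points}
    for p in points:
        table[(p, p)] = frozenset({"="})
    queue = []
    for (i, j), v in assertions.items():
        table[(i, j)] &= v
        table[(j, i)] &= frozenset(INVERSE[r] for r in v)
        queue += [(i, j), (j, i)]
    while queue:
        i, j = queue.pop()
        for k in points:
            for a, b, derived in ((i, k, compose(table[(i, j)], table[(j, k)])),
                                  (k, j, compose(table[(k, i)], table[(i, j)]))):
                new = table[(a, b)] & derived
                if not new:
                    raise ValueError("inconsistent assertions")
                if new != table[(a, b)]:
                    table[(a, b)] = new
                    queue.append((a, b))
    return table

# Encode A (DURING) B via its endpoints, as in the text.
pts = ["A-", "A+", "B-", "B+"]
table = closure(pts, {("B-", "A-"): frozenset("<"),   # i.e. A- FOLLOWS B-
                      ("A+", "B+"): frozenset("<"),   # A+ PRECEDES B+
                      ("A-", "A+"): frozenset("<"),
                      ("B-", "B+"): frozenset("<")})
print(sorted(table[("B-", "A+")]))  # propagation infers that B- precedes A+
```

The relation between B- and A+ was never asserted directly; propagation derives it by composing B- < A- with A- < A+, illustrating the soundness-and-completeness claim for the point fragment.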
Although many applications may be able to restrict their interval temporal reasoning to the tractable fragment of the interval algebra, some applications may not. One program that requires the full interval algebra is the planning system of Allen and Koomen [1983] that we referred to above. In this system, actions are modeled with intervals. For example, to declare that two actions are non-overlapping, one asserts ACT1 (BEFORE MEETS MET-BY AFTER) ACT2. As we just showed, this kind of assertion falls outside of the tractable fragment of the interval algebra. In a planner with this architecture, this representation problem can be dealt with either by invoking an exponential temporal reasoner, or by bringing to bear planning-specific knowledge about the ordering of actions.

Consequences of These Results

Increasingly, the tools of knowledge representation are being put to use in practical systems. For these systems, it is often crucial that the representation components be computationally efficient. This has prompted the Artificial Intelligence community to start taking seriously the performance of AI algorithms. The present paper, by considering critically the computational characteristics of several temporal representations, follows this recent trend. What lessons may we learn from analyses such as this? Of immediate benefit is an understanding of the computational advantages and disadvantages of different representation languages. This permits informed decisions as to how the representation components of application systems should be structured. We can better understand when to use the power of general representations, and when to set these general tools aside in favor of more application-specific reasoners. A close scrutiny of the ongoing achievements of Artificial Intelligence enables a better understanding of the nature of AI methods. This process is crucial for the maturation of our field.
KNOWLEDGE REPRESENTATION / 381

Appendix: Algebraic Operations in the Point Algebra

Addition and multiplication are defined in the point algebra by the two tables in Figure 7. These operations both have constant-time implementations if the relation vectors between time points are encoded as bit strings. With this encoding, both operations can be performed by simple lookups in two-dimensional (8 x 8) arrays. Alternatively, addition can be performed with an even simpler 3-bit logical AND operation.

 +  |  <  | <=  |  >  | >=  |  =  |  ≠  |  ?
----+-----+-----+-----+-----+-----+-----+-----
 <  |  <  |  <  |  0  |  0  |  0  |  <  |  <
 <= |  <  | <=  |  0  |  =  |  =  |  <  | <=
 >  |  0  |  0  |  >  |  >  |  0  |  >  |  >
 >= |  0  |  =  |  >  | >=  |  =  |  >  | >=
 =  |  0  |  =  |  0  |  =  |  =  |  0  |  =
 ≠  |  <  |  <  |  >  |  >  |  0  |  ≠  |  ≠
 ?  |  <  | <=  |  >  | >=  |  =  |  ≠  |  ?

 x  |  <  | <=  |  >  | >=  |  =  |  ≠  |  ?
----+-----+-----+-----+-----+-----+-----+-----
 <  |  <  |  <  |  ?  |  ?  |  <  |  ?  |  ?
 <= |  <  | <=  |  ?  |  ?  | <=  |  ?  |  ?
 >  |  ?  |  ?  |  >  |  >  |  >  |  ?  |  ?
 >= |  ?  |  ?  |  >  | >=  | >=  |  ?  |  ?
 =  |  <  | <=  |  >  | >=  |  =  |  ≠  |  ?
 ≠  |  ?  |  ?  |  ?  |  ?  |  ≠  |  ?  |  ?
 ?  |  ?  |  ?  |  ?  |  ?  |  ?  |  ?  |  ?

Key to symbols:
0  is (), the null vector
<  is (PRECEDES)
<= is (PRECEDES SAME)
>  is (FOLLOWS)
>= is (SAME FOLLOWS)
=  is (SAME)
≠  is (PRECEDES FOLLOWS)
?  is (PRECEDES SAME FOLLOWS)

Figure 7: Addition and multiplication in the time point algebra

References

[Allen 83] Allen, J. F. Maintaining Knowledge About Temporal Intervals. Communications of the ACM 26(11):832-843, November, 1983.

[Allen 84] Allen, J. F. Towards a General Theory of Action and Time. Artificial Intelligence 23(2):123-154, 1984.

[Allen & Hayes 85] Allen, J. F. and Hayes, P. J. A Common-Sense Theory of Time. In Proceedings of the Ninth International Joint Conference on Artificial Intelligence, pages 528-531. The International Joint Conference on Artificial Intelligence (IJCAI), Los Angeles, CA, August, 1985.

[Allen & Koomen 83] Allen, James F., and Koomen, Johannes A. Planning Using a Temporal World Model. In Proceedings of the Eighth International Joint Conference on Artificial Intelligence, pages 741-747. The International Joint Conference on Artificial Intelligence (IJCAI), Karlsruhe, W. Germany, August, 1983.

[Fikes & Nilsson 71] Fikes, R., and Nilsson, N. J. STRIPS: A new approach to the application of theorem proving to problem solving. Artificial Intelligence 2:189-208, 1971.

[Malik & Binford 83] Malik, J. and Binford, T. O. Reasoning in Time and Space. In Proceedings of the Eighth International Joint Conference on Artificial Intelligence, pages 343-345. The International Joint Conference on Artificial Intelligence (IJCAI), Karlsruhe, W. Germany, August, 1983.

[Schmolze 86] Schmolze, J. G. Physics for Robots: Representing Everyday Physics for Robot Planning. PhD thesis, The University of Massachusetts, Amherst, 1986.

[Simmons 83] Simmons, R. G. The Use of Qualitative and Quantitative Simulations. In Proceedings of the Third National Conference on Artificial Intelligence (AAAI-83). The American Association for Artificial Intelligence, Washington, D.C., August, 1983.

[Valdes-Perez 86] Valdes-Perez, R. E. Spatio-Temporal Reasoning and Linear Inequalities. Unpublished AI Memo, Massachusetts Institute of Technology Artificial Intelligence Laboratory, 1986.

382 / SCIENCE
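The constant-time encoding described in the Appendix can be sketched as follows (our own illustration; names are ours). A point-relation vector is a subset of the basic relations {<, =, >} packed into a 3-bit mask, so addition (intersecting two constraints) is a bitwise AND, and multiplication (composition) is the union of the compositions of the basic constituents.

```python
# Point-relation vectors as 3-bit masks: bit 0 = '<', bit 1 = '=', bit 2 = '>'.
LT, EQ, GT = 1, 2, 4
NAMES = {0: '()', LT: '<', LT | EQ: '<=', GT: '>', GT | EQ: '>=',
         EQ: '=', LT | GT: '!=', LT | EQ | GT: '?'}

def add(u, v):
    # "Addition" intersects two constraints on the same pair of points:
    # a single 3-bit logical AND, as noted in the Appendix.
    return u & v

# Composition of basic relations: if x r1 y and y r2 z, what constrains x vs z?
BASIC_COMPOSE = {
    (LT, LT): LT, (LT, EQ): LT, (LT, GT): LT | EQ | GT,
    (EQ, LT): LT, (EQ, EQ): EQ, (EQ, GT): GT,
    (GT, LT): LT | EQ | GT, (GT, EQ): GT, (GT, GT): GT,
}

def multiply(u, v):
    # "Multiplication" composes vectors: the union over all pairs of
    # basic constituents.  (A real implementation would precompute the
    # 8 x 8 lookup table from this definition.)
    out = 0
    for b1 in (LT, EQ, GT):
        for b2 in (LT, EQ, GT):
            if u & b1 and v & b2:
                out |= BASIC_COMPOSE[(b1, b2)]
    return out
```

For instance, `add(LT | EQ, GT | EQ)` yields `EQ` and `multiply(LT | GT, LT | GT)` yields the full vector `?`, agreeing with the corresponding entries of Figure 7.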
What Can Machines Know? On the Epistemic Properties of Machines

Ronald Fagin, Joseph Y. Halpern, Moshe Y. Vardi
IBM Almaden Research Center
650 Harry Road
San Jose, California 95120-6099

Abstract: It has been argued that knowledge is a useful tool for designing and analyzing complex systems in AI. The notion of knowledge that seems most relevant in this context is an external, information-based notion that can be shown to satisfy all the axioms of the modal logic S5. We carefully examine the properties of this notion of knowledge, and show that they depend crucially, and in subtle ways, on assumptions we make about the system. We present a formal model in which we can capture the types of assumptions frequently made about systems (such as whether they are deterministic or nondeterministic, whether knowledge is cumulative, and whether or not the environment affects the transitions of the system). We then show that under some assumptions certain states of knowledge are not attainable, and the axioms of S5 do not completely characterize the properties of knowledge; extra axioms are needed. We provide complete axiomatizations for knowledge in a number of cases of interest.

1. Introduction

A fundamental problem in many branches of AI and computer science (including planning, distributed computing systems, and robotics) is to design, analyze, and understand complex systems composed of interacting parts. An increasingly useful tool in this design and analysis process is the concept of knowledge. In AI, there have been two approaches to ascribing knowledge to machines or components of systems. The classical AI approach, which has been called the interpreted-symbolic-structures approach ([Ro]), ascribes knowledge on the basis of the information stored in certain data structures (such as semantic nets, frames, or data structures that encode formulas of predicate logic; cf. [BL]).
The second, called the situated-automata approach, can be viewed as ascribing knowledge on the basis of the information carried by the state of the machine ([Ro]). Since we concentrate on the second approach in this paper, we describe the intuition in more detail. Imagine a machine composed of various components, each of which may be in various states. (Although we talk here of a "machine composed of components", everything we say goes through perfectly well for a system of sensors taking readings, or a distributed system composed of robots, processes, or people, observing the environment.) We assume some sort of environment about which the system gains information. At any point in time, the system is in some global state, defined by the state of the environment and the local states of the components. We say a process or component p knows a fact φ in global state s if φ is true in all global states s′ of the system where p has the same local state as it does in s. This notion of knowledge is external. A process cannot answer questions based on its knowledge with respect to this notion of knowledge. Rather, this is a notion meant to be used by the system designer reasoning about the system. This approach to knowledge has attracted a great deal of interest recently among researchers in both AI ([Ro,RK]) and distributed systems ([HM1, PR, HF, CM, FI]) precisely because it does seem to capture the type of intuitive reasoning that goes on by system designers. (See [RK] for some detailed examples.) If we are to use this notion of knowledge to analyze systems, then it becomes important to understand its properties. It is easy to show that the external notion of knowledge satisfies all the axioms of the classical modal logic S5 (we discuss this in more detail in Section 2; an overview of S5 and related logics of knowledge and belief can be found in [HM2]).
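The external definition of knowledge just given can be written down directly. The following is a minimal sketch of our own (the one-sensor weather system is an illustrative assumption, not an example from the paper):

```python
# Global states are pairs (environment, locals), where locals[i] is
# process i's local state.  Process i knows phi at s iff phi holds at
# every global state of the system where i's local state is the same.
def knows(system, s, i, phi):
    return all(phi(t) for t in system if t[1][i] == s[1][i])

# Hypothetical one-sensor system: the sensor either reads the weather
# correctly, or gets no reading at all ('?').
system = {
    ('rain', ('rain',)), ('rain', ('?',)),
    ('dry',  ('dry',)),  ('dry',  ('?',)),
}
raining = lambda t: t[0] == 'rain'
```

When the sensor's local state is 'rain', the only compatible global state has a rainy environment, so `knows(system, ('rain', ('rain',)), 0, raining)` is True; with local state '?', both environments are compatible and the sensor does not know whether it is raining.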
Indeed, the abstract model most frequently used to capture this notion (for example, in [Ro]) has been the classical Kripke-style possible-worlds model for S5 ([Kr]). But, a priori, it is not the least bit clear that this is the appropriate abstraction for the notion of knowledge in which we are interested. Does each Kripke structure really correspond to some state of knowledge that the system can be in? As we shall show, the answer to this question depends crucially, and in surprising ways, on the assumptions that one makes about the system. In order to explain our results, it is helpful to briefly review some material from [FV] which directly inspired our work here. In [FV] a particular scenario is considered, which intuitively can be viewed as robots observing their environment and then communicating about their observations. These robots are assumed never to forget information they have learned. Moreover, the communication is assumed to proceed in rounds, and messages either arrive within one round or do not arrive at all. In addition, messages are assumed to be honest; for example, if Alice sends Bob a message φ, then it must be the case that Alice knows φ. Under these assumptions (and, as we shall see, under several more that are implicit in the model), it is shown that certain states of knowledge are not attainable. In particular, suppose we let p be a fact that characterizes the environment (for example, if all we care about is the weather in San Francisco, we could take p to be "It is raining in San Francisco"), and suppose we have a system with exactly two robots, say Alice and Bob. Consider a situation where Alice doesn't know whether p is true or false, and Alice knows that either p is true and Bob knows p, or p is false and Bob doesn't know that p is false.
Alice’s state of knowledge can be captured by the formula: (*I “KilrceP A +he -P A KAllce((PAKBobP)V(-PA~KBab-P))’ Although the state of knowledge described by this for- mula is perfectly consistent with the axioms of SS, it is not attainable under the assumptions of [FV]. To see that it is not attainable, suppose that it were attainable. Then we can reason as follows: 428 / SCIENCE From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved. Sup- p is false. Then Alice’s state of knowle&e implies that neither Alice nor Bob knows that p is faire. But if Bob se& Alice a message saying “I don’t know p’: then, since Alice knows that either p is true and Bob knows p or p is faire and Bob doesn ‘t know that p is faire, Ah’ce will know that p must be false. But it ti impossible for Alice and Bob to discover that p 13 false simply by communicating if neither of them had any knowledge about p beforehand. So p murt be tNe. And since this argument holds for aN states where Alice has the same information, Alice knows p. But thB contradicts the assumption that Alice a&n ‘t know p. In [FV] a formal proof of the unattainability of this state of knowledge is given for their model. In order to show the subtlety of the assumptions required to make the proof go through, we now give three situations where the state of knowledge ti attainable. For the first case, suppose p is now the statement “the communication line between us is up”. Suppose p is true and Alice sends Bob the message “Hello”, which Bob receives (since, after all, the communication line is up). At this point, Bob knows p (since he received the message) and Alice doesn’t know whether p is true or false (since she doesn’t know whether Bob received her message). 
But Alice does know that either p is true and Bob knows it, or p is false and Bob doesn't know p is false (since if p is false, Bob will have no way of knowing whether he didn't receive a message because the line was down or because Alice didn't send one in the first place). Thus, we exactly have the state of knowledge previously shown to be unattainable! Of course, there is nothing wrong with either proof. The first proof of impossibility breaks down in the latter scenario because of the seemingly innocuous assumption that Bob could send Alice (and she could receive) a message saying "I don't know p". If p is false in the latter scenario, then she could never receive such a message because the communication line would be down. Implicitly, there is an assumption in the model of [FV] that the primitive propositions talk about "nature" and have no impact on the transitions of the system. In particular, a proposition cannot say that a communication line is down or a message buffer is full. Now suppose we slightly modify the example so that there is a communication link from Alice to Bob and a separate one from Bob to Alice. Further suppose that the link from Bob to Alice is guaranteed to be reliable. Let p say that the communication link from Alice to Bob is up. Just as before, suppose Alice sends Bob a message saying "Hello", which Bob gets. The same reasoning as above shows that we have again attained the "unattainable" state of knowledge. But in this case, Bob can send Alice a message saying "I don't know p", and Alice would be guaranteed to receive it. So now where does the reasoning in the proof of [FV] break down?
This time it is in the assumption that Alice and Bob cannot gain knowledge of a fact that they did not even have implicit knowledge of beforehand (this is called the principle of "Conservation of Implicit Knowledge" in [FV]).[1] Although neither Alice nor Bob had any knowledge of p before the message was sent, when the message arrived Bob knew p was true, so implicit knowledge is gained in this situation. The point is that while implicit knowledge of the environment cannot be gained if the processes first observe the environment and then communicate about it (so that, intuitively, all transitions are independent of the environment), this may not be true in a more general setting. A third critical assumption being made in the argument from [FV] is that neither robot forgets; i.e., their knowledge is cumulative. We implicitly assume that if at some point neither Alice nor Bob knows that p is false, then they never had any knowledge of p beforehand. But if knowledge is not cumulative, then it may have been the case that Bob knew that p was false, imparted a piece of this knowledge to Alice, and then forgot about it. For example, suppose Alice knows that if Bob knows p, he will never forget it, while if Bob knows ¬p, then he may forget it. Suppose in fact that p is true and Bob knows it, and Bob sends Alice two messages. The first one says "either I know p or I know ¬p" (i.e., K_Bob p ∨ K_Bob ¬p), while the second says "I don't know that p is false" (i.e., ¬K_Bob ¬p). At this point, Alice knows that either p is true and Bob knows it, or that p is false and Bob doesn't know that p is false (he may have known this before and forgotten). Again, we have shown that the "unattainable" knowledge state is attainable! While this example may seem a little contrived, it is in fact easy to capture if we view Alice and Bob as finite-state machines.
Indeed, an agent that does not forget must in general have unbounded memory (in order to remember all the messages it has sent and received), so that, in a sense, a finite-state machine can be viewed as the canonical example of a machine that does forget. In order to examine the properties of knowledge carefully, we define an abstract model of knowledge for machines. Using this model, we are able to give precise formulations of a number of parameters of interest when analyzing systems. The exact setting of the parameters depends, of course, on the system being analyzed, although some choices of parameters seem more common in AI applications than in distributed systems applications, and vice versa. Typical parameters include:

- Is a process' knowledge cumulative? Most papers that consider reasoning about knowledge over time implicitly assume that knowledge is cumulative. Indeed, this is one of the major reasons that Moore ([Mo]) considers knowledge rather than belief. As Moore points out, "If we observe or perform a physical action, we generally know everything we knew before, plus whatever we have learned from the action." For example, when considering a robot learning a telephone number, we don't worry about the robot forgetting the number a few steps later. A similar assumption is often made in the distributed systems literature (cf. [HM1,HF,Le,PR,DM]). This assumption is, of course, an idealization, since cumulative knowledge in general requires unbounded memory. Bounded memory is a more realistic assumption (and has been used in papers such as [CM,FI,RK]). But note that for limited interactions, knowledge can be cumulative even with bounded memory; all that is required is enough memory to store the history.[2]

- Are transitions of the system independent of the environment?
In the case of processes or sensors observing the environment and then communicating about it, transitions would be independent if nothing about the state can affect the possibility of communication. But suppose we are observing the weather. One can well imagine that the presence of a heavy thunderstorm could affect communication, and so affect the transitions in the system.

- Is the system deterministic or nondeterministic? The answer to this question might depend partly on the granularity of our analysis. A system that seems deterministic at one level of analysis may seem nondeterministic if we get down to the level of electrons. Note that even if the individual processes or components of the system are deterministic, the system as a whole may be nondeterministic, since we may decide to ignore certain components of the system (such as a message buffer or a particular and-gate) in doing our analysis.

- Do we view the system as embedded in its environment, so that the initial state of the system is a (possibly nondeterministic) function of the environment, or do we take the system to be the total environment, so that the initial state of the system completely determines the state of the environment? The former is appropriate if we consider the system to consist of sensors observing nature, while the latter is more appropriate in certain distributed systems applications where we identify the "environment" with the initial setting of certain variables.

[1] We discuss implicit knowledge formally in the next section. Roughly speaking, a system has implicit knowledge of a fact if, by putting all their information together, the components of the system could deduce that fact.

[2] We remark that Halpern and Vardi have shown that the assumption that processes' knowledge is cumulative has a drastic effect on the complexity of the decision procedure for validity of formulas involving knowledge and time ([HV]).
Of course, there may well be applications for which some point between these poles might be more appropriate.

- Is the system synchronous or asynchronous?

The list of parameters mentioned above is not meant to be exhaustive. The interesting point for us is the subtle interplay between these parameters and the states of knowledge that are attainable. For example, if (1) processes'/components' knowledge is cumulative, (2) the system is embedded in the environment, and (3) transitions of the system are independent of the environment, then it turns out that the axiom system ML of [FV] gives a complete characterization of the knowledge states attainable (i.e., is sound and complete), independent of the choices of the other parameters. If we assume that the system is deterministic, we get yet another axiom. On the other hand, if we assume that knowledge is not cumulative or that the state of the environment can affect the transitions of the system, we find that S5 does provide a complete characterization of the states of knowledge attainable. To us, the moral of this story is that a reasonable analysis of a system in terms of knowledge must take into account the relationship between the properties of the system and the properties of knowledge. The rest of the paper is organized as follows. In Section 2 we describe our abstract model and show how all the various assumptions we would like to consider can be captured in the model. In Section 3 we briefly review the semantics of knowledge in systems. In Section 4 we characterize what states of knowledge are attainable under a number of different reasonable assumptions about systems. We conclude in Section 5 with some directions for further research.

2. The model

Consider a system with n processes (or components). A global state of the system is a tuple that describes the state of the environment and the local state of each process. We consider the system to be evolving over time.
A complete description of one particular way the system could have evolved is a run. We identify a system with a set of runs. More formally, a system M is a tuple (E, C, R, L, g), where E is a set of primitive environment states; C is a finite set of processes, which, for convenience, we shall usually take to be the set {1, ..., n} if n is the total number of processes; R is a set of runs; L is the set of local states that the processes can take on; and g associates with each run r ∈ R and each natural number m ∈ ℕ (which we are viewing as a time) a global state g(r,m), where a global state is a tuple <e, l_1, ..., l_n>, with e ∈ E and l_i ∈ L for i = 1, ..., n. Following [HM1], we may refer to the pair (r, m) as a point. A few comments about the model are now in order. We view a primitive environment state as being a complete description of "nature" (or whatever the domain of discourse is). We could instead have started with a set of primitive propositions, say p_1, ..., p_m. In this case a primitive environment state would just be one of the 2^m truth assignments to the primitive propositions. We prefer to start with these primitive environment states, since they seem to us more basic than primitive propositions (and, as we shall see, our axioms are more naturally expressed in terms of them), but everything we say can be easily reformulated in terms of primitive propositions. For the rest of this paper we assume that the primitive environment state does not change throughout the run. Formally, for all runs r ∈ R and all m, m′ ∈ ℕ, if g(r, m) = <e, l_1, ..., l_n> and g(r, m′) = <e′, l′_1, ..., l′_n>, then e = e′. One can certainly imagine applications where the environment does change over time (if we have sensors observing some terrain, we surely cannot assume that the terrain does not change over time!). But even in such applications the sensors usually communicate about a particular reading taken at a particular time.
In this case we can think of the primitive environment states as describing the possible states of the environment at that time. We have taken time here to be discrete (ranging over the natural numbers). This is mainly an assumption of convenience. We could have taken time to range, for example, over the non-negative reals, and defined a global state g(r,t) at all non-negative real times t; none of the essential ideas would change in this case. Of course, we do not assume that there is necessarily a source of time within the system. Time is external to the system, just like knowledge. Note that we have made no commitment here as to how the transitions between global states occur. There is no notion of messages or communication in our model, as there is, for example, in the model of [HF]. While it is easy to incorporate messages and have the transitions occurring as a result of certain messages occurring, transitions might also be a result of physical interactions or even random events internal to some component. As it stands, this model is still too general for many purposes. We now discuss how a number of reasonable restrictions can be captured in our model. There are two general types of restrictions we consider: restrictions on the possible initial global states and restrictions on the possible transitions. In terms of knowledge, these restrictions can be viewed as restrictions on the initial state of knowledge and restrictions on how knowledge can be acquired.

Definition 2.1. Fix a system M = (E, C, R, L, g). We say s = <e, l_1, ..., l_n> is a global state in M if s = g(r, m) for some run r ∈ R and time m; e is the environment component of s while <l_1, ..., l_n> is the process component. Let s = <e, l_1, ..., l_n> and s′ = <e′, l′_1, ..., l′_n> be two global states in M. We say s and s′ are indistinguishable to process i, written s ~_i s′, if l_i = l′_i. We say s and s′ are process equivalent if they are indistinguishable to all processes j = 1, ..., n, i.e., if s and s′ have the same process component. Process i's view of run r up to time m is the sequence l_0, ..., l_k of states that process i takes on in run r up to time m, with consecutive repetitions omitted. For example, if from time 0 through 4 in run r process i goes through the sequence l, l, l′, l, l of states, its view is just l, l′, l. Finally, s is an initial state if s = g(r, 0) for some run r.

We can now precisely state a few reasonable restrictions on systems.

1. Restrictions on possible initial states.

a. In many applications we view the system as embedded in an environment, where the processes' initial states are a function of observations made in that environment. Thus if process i is placed in environment e, then its initial state is some function of e. This function is in general not deterministic; i.e., for a given state of the environment, there may be a number of initial local states that a given process can be in. Formally, we say that the environment determines the initial states if for each process i and each primitive state e, there is a subset L(i,e) of local states such that the set of initial states is {<e, l_1, ..., l_n> | l_i ∈ L(i, e)}. Intuitively, L(i,e) is the set of states that i could be in initially if the environment is in state e. If we imagine that i is a sensor, then these states represent all the ways i could have partial information about e. For example, if facts p and q are true in an environment e, we can imagine a sensor that sometimes may observe both p and q, neither, or just one of the two; it would have a different local state in each case. Note we have also implicitly assumed that there is initially no interaction between the observations.
That is, if in a given primitive environment state e it is possible for process i to make an observation that puts it into state l_i, and for process j to make an observation that puts it into l_j, then it is possible for both of these observations to happen simultaneously. This assumption precludes a situation where, for example, exactly one of two processes can make a certain observation. An important special case occurs when the initial state of the processes is a deterministic function of the environment. We say the environment uniquely determines the initial state if the environment determines the initial states and, in addition, L(i,e) is a singleton set for all i and e.

b. At the other extreme, we have the view that the system determines the environment. For example, in some distributed systems applications we may want to take the "environment" to be just the initial setting of certain local variables in each process. We say the initial state uniquely determines the environment if, whenever two initial global states are process equivalent, they are in fact identical. Of course, many situations between these extremes are possible.

2. Restrictions on state transitions.

a. If a process' knowledge is cumulative, then the process can and does "remember" its view of a run. Thus, if two global states are indistinguishable to process i, then it must be the case that process i has the same view of the run in both. More formally, knowledge is cumulative if for all processes i, all runs r, r′, and all times m, m′, if g(r, m) ~_i g(r′, m′), then process i's view of run r up to time m is identical to its view of run r′ up to time m′. Note that cumulative knowledge requires an unbounded number of local states in general, since it must be possible to encode all possible views of a run in the state. In particular, the knowledge of finite-state machines will, in general, not be cumulative.
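The notions of a view and of cumulative knowledge can be sketched directly over runs represented as lists of global states. This is our own illustrative encoding, not the paper's:

```python
# A run is modeled as a list of global states (environment, locals),
# indexed by time.  locals[i] is process i's local state.
def view(run, i, m):
    """Process i's view of the run up to time m: its sequence of local
    states with consecutive repetitions omitted."""
    out = []
    for k in range(m + 1):
        l = run[k][1][i]
        if not out or out[-1] != l:
            out.append(l)
    return out

def cumulative(runs, i):
    """Check the cumulativity condition for process i: any two points
    that process i cannot distinguish (same local state) must yield
    identical views."""
    points = [(r, m) for r in runs for m in range(len(r))]
    return all(view(r, i, m) == view(r2, i, m2)
               for (r, m) in points for (r2, m2) in points
               if r[m][1][i] == r2[m2][1][i])
```

On the text's example (local states l, l, l′, l, l from time 0 through 4), `view` returns l, l′, l; and the same run fails the cumulativity check, since at times 0 and 3 the process has the same local state l but different views.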
In a synchronous system, every process has access to a global clock that ticks at every instant of time, and the clock reading is part of its state. Thus, in a synchronous system, each process always "knows" the time. Note that in particular, this means that in a synchronous system processes have unbounded memory. More formally, we say a system is synchronous if for all processes i and runs r, r′, if g(r, m) ~_i g(r′, m′), then m = m′. An easy proof (by induction on m) shows that in a synchronous system where knowledge is cumulative, if g(r, m) ~_i g(r′, m′), then m = m′ and, if m > 0, g(r, m−1) ~_i g(r′, m−1). A system that is not synchronous is called asynchronous.

We say that transitions are independent of the environment if, whenever we have two process-equivalent initial states, then the same sequence of transitions is possible from each of them; i.e., if s = <e, l_1, ..., l_n> and s′ = <e′, l_1, ..., l_n> are process-equivalent initial states and r is a run with initial state s (i.e., g(r, 0) = s), then there is a run r′ with initial state s′ such that g(r, m) is process equivalent to g(r′, m) for all times m.

We say a system is deterministic if the initial state completely determines the run; i.e., whenever r and r′ are runs with g(r, 0) = g(r′, 0), then r = r′.[3] Note that in both of the previous definitions we considered only initial states. Even if transitions are independent of the environment, it will not in general be the case that the same sequence of transitions is possible starting from two arbitrary process-equivalent global states, since the transitions may depend on the whole history of the run, including, for example, messages that were sent but did not yet arrive.
Similarly, even in a deterministic system there may be two different transitions possible from a given global state, depending on the previous history. The point is that there may be some information about the system not described in the global state (such as the fact that certain messages have not yet been delivered). Intuitively, this "incompleteness" in the global state arises because we choose not to describe certain features of the system. For example, we may choose the components of C to be only the processors in the system, ignoring the message buffers. We say there are no hidden components in a system if, whenever r, r′ ∈ R are two runs such that g(r, m) = g(r′, m′), then there is a run r″ ∈ R which has the same prefix as r up to time m and continues as r′ (i.e., g(r″, k) = g(r, k) if k < m, and g(r″, k) = g(r′, m′ + k − m) if k ≥ m). Intuitively, since the global state contained all the relevant information, starting with r it could have been the case that from time m on we could have continued as in run r′. Note that in a deterministic system with no hidden components, if g(r, m) = g(r′, m′), then g(r, m+1) = g(r′, m′+1). Similarly, in a system with no hidden components where transitions are independent of the environment, the same sequence of transitions is always possible from two process-equivalent global states. We have outlined a few reasonable restrictions on possible initial states and on state transitions. Certainly it is possible to come up with others. The main point we want to make here is that many reasonable restrictions on systems can be easily captured within our model.

[3] It is not hard to see that in a deterministic system where transitions are independent of the environment, the initial process component completely determines the run; i.e., if r and r′ are runs where g(r, 0) and g(r′, 0) are process-equivalent, then g(r, m) and g(r′, m) are process-equivalent for every m.

3. States of knowledge

Consider the language that results when we take primitive environment states e, e′, ..., and close off under negation, conjunction, and knowing, so that if φ and φ′ are formulas, so are ¬φ, φ ∧ φ′, and K_i φ, i = 1, ..., n. We also find it convenient at times to have implicit knowledge in the language. Intuitively, implicit knowledge (which is formally introduced in [HM2] and has also been used in [CM, DM, FV, RK]) is the knowledge that can be obtained when the members of a group pool their knowledge together. Put differently, it is what someone who had all the knowledge that each member in the group had could infer. We use Iφ to denote implicit knowledge of φ. We now define what it means for a formula φ in the language to be true at time m in run r of system M = (E, C, R, L, g), written M, r, m ⊨ φ:

M, r, m ⊨ e, where e is a primitive environment state, iff g(r, m) = <e, ...>
M, r, m ⊨ ¬φ iff M, r, m ⊭ φ
M, r, m ⊨ φ1 ∧ φ2 iff M, r, m ⊨ φ1 and M, r, m ⊨ φ2
M, r, m ⊨ K_i φ iff M, r′, m′ ⊨ φ for all r′, m′ such that g(r, m) ~_i g(r′, m′)
M, r, m ⊨ Iφ iff M, r′, m′ ⊨ φ for all r′, m′ such that g(r, m) is process equivalent to g(r′, m′).

It is helpful to comment on the last two clauses of the above definition, which describe when K_i φ and Iφ hold. Let S_i = {(r′, m′) | g(r, m) ~_i g(r′, m′)}. Intuitively, (r′, m′) ∈ S_i precisely if at time m in run r, it is possible, as far as process i is concerned, that it is time m′ in run r′. It is easy to verify that K_i φ holds at time m in run r precisely if φ holds at every point in S_i. Let S be the intersection of the S_i's. Intuitively, (r′, m′) ∈ S precisely if at time m in run r, if all of the processes were to combine their information then they would still consider it possible that it is time m′ in run r′. It is easy to verify that g(r, m) is process equivalent to g(r′, m′) precisely if (r′, m′) ∈ S. Thus, Iφ holds at time m in run r precisely if φ holds at every point in S.

Definition 3.1.
A formula φ is valid if M,r,m ⊨ φ for all systems M, runs r, and times m.

It is easy to see that the truth of a formula depends only on the global state; i.e., if g(r,m) = g(r',m'), then for all formulas φ, we have M,r,m ⊨ φ iff M,r',m' ⊨ φ. This way of assigning truth gives us a way of ascribing knowledge to components of a system in a given global state. But we still have not defined the notion of a state of knowledge. What is the state of knowledge of a system in a given global state? We could identify the state of knowledge with the set of formulas true in the global state, but this definition is too dependent on the particular language chosen. Instead, we give a semantic definition of a state of knowledge. We first need a preliminary definition.

Definition 3.2. A global state s' is reachable from s in S if there exist global states s_0, ..., s_k in S and (not necessarily distinct) processes i_1, ..., i_k such that s = s_0, s' = s_k, and s_{j-1} ~_{i_j} s_j for j = 1, ..., k.

Intuitively, the state of knowledge of the system in global state s depends only on the global states reachable from s. This is borne out in our formal semantics by the fact that the truth of a formula at time m in run r only depends on the global states reachable from g(r,m). Thus, we have the following definition.

Definition 3.3. A state of knowledge is a pair (S,s) where S is a set of global states, s ∈ S is a global state, and every member of S is reachable from s in S. A state (S,s) is attainable in a system M if there is a global state s in M such that S consists precisely of those states reachable from s.

In the full paper ([FHV]) we review the classical Kripke semantics for the modal logic S5 and define the analogue of the notion of state of knowledge for Kripke structures. It is fairly easy to show that there is an exact correspondence between states of knowledge in our model and states of knowledge in Kripke structures.
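The truth clauses for K_i and I, together with the reachability relation of Definition 3.2, can be prototyped over a finite set of global states. The following is an illustrative sketch, not the paper's formalism: a global state is encoded as a tuple (e, l_1, ..., l_n), s ~_i t holds iff the i-th local components agree, and formulas are nested tuples; all names are assumptions.

```python
from collections import deque

def equiv_i(s, t, i):
    # s ~_i t: process i has the same local state (component i; 0 is the environment)
    return s[i] == t[i]

def process_equivalent(s, t, n):
    return all(equiv_i(s, t, i) for i in range(1, n + 1))

def holds(S, n, s, phi):
    """Evaluate a formula at global state s against the finite state set S.
    Formulas: a primitive environment state (a plain value), or tuples
    ('not', p), ('and', p, q), ('K', i, p), ('I', p)."""
    if not isinstance(phi, tuple):
        return s[0] == phi
    op = phi[0]
    if op == 'not':
        return not holds(S, n, s, phi[1])
    if op == 'and':
        return holds(S, n, s, phi[1]) and holds(S, n, s, phi[2])
    if op == 'K':   # K_i p: p holds at every ~_i-equivalent state
        return all(holds(S, n, t, phi[2]) for t in S if equiv_i(s, t, phi[1]))
    if op == 'I':   # implicit knowledge: p holds at every process-equivalent state
        return all(holds(S, n, t, phi[1]) for t in S if process_equivalent(s, t, n))
    raise ValueError(op)

def reachable_from(s, S, n):
    """Definition 3.2 as a breadth-first closure under single ~_i steps."""
    seen, queue = {s}, deque([s])
    while queue:
        cur = queue.popleft()
        for t in S:
            if t not in seen and any(equiv_i(cur, t, i) for i in range(1, n + 1)):
                seen.add(t)
                queue.append(t)
    return seen
```

Note how implicit knowledge can exceed any individual's knowledge: in a three-state example, Iφ can hold at a state where no K_i φ does, since the intersection of the S_i's is smaller than each S_i.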
This perhaps justifies the choice of Kripke structures as an appropriate abstraction for the notion of knowledge in systems. However, as we show in the next section, under some of the restrictions on systems we have discussed, not all states of knowledge are attainable.

4. The properties of knowledge

We shall not try to give here a complete taxonomy of the properties of knowledge for each choice of parameters that we have discussed (although in the full paper, we do characterize the properties of knowledge for many cases of interest). Instead, we discuss a few illustrative cases, with a view towards showing the subtlety of the interaction between the properties of the system and the properties of knowledge. As we remarked above, if we put no restrictions on systems, then there is an exact correspondence between states of knowledge in our model and those in Kripke structures. It is well-known that the axiom system S5 captures the properties of knowledge in Kripke structures; i.e., it is sound (all the axioms are valid) and complete (all valid formulas are provable). S5_n (the extension of the classical axiom system S5 to a situation with n knowers) consists of the following axioms and rules of inference. The axioms are:

A1. All substitution instances of propositional tautologies.⁴
A2. K_i φ1 ∧ K_i(φ1 ⇒ φ2) ⇒ K_i φ2, i = 1, ..., n
A3. K_i φ ⇒ φ, i = 1, ..., n
A4. K_i φ ⇒ K_i K_i φ, i = 1, ..., n
A5. ¬K_i φ ⇒ K_i ¬K_i φ, i = 1, ..., n.

There are two rules of inference: modus ponens ("from φ1 and φ1 ⇒ φ2 infer φ2") and knowledge generalization ("from φ infer K_i φ"). If we extend the language to include implicit knowledge, then we obtain a complete axiomatization by adding axioms that say that I acts like a knowledge operator (i.e., all the axioms above hold with K_i replaced by I) and the additional axiom: K_i φ ⇒ Iφ, i = 1, ..., n. In [HM2] it is shown that this axiomatization, called S5I_n, is sound and complete in Kripke structures for the extended language with implicit knowledge.
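The soundness of these axioms in the systems model can be checked concretely on small examples: interpreting each K_i over the ~_i equivalence relation makes A3 through A5 hold at every world. A minimal brute-force sketch, with an assumed tuple encoding (e, l_1, ..., l_n) of global states; the worlds and the proposition p below are illustrative.

```python
def K(i, pred, worlds):
    """K_i pred: holds at w iff pred holds at every ~_i-equivalent world,
    where w ~_i v iff component i (process i's local state) agrees."""
    return lambda w: all(pred(v) for v in worlds if v[i] == w[i])

def valid(pred, worlds):
    return all(pred(w) for w in worlds)

worlds = {('p', 'a1', 'b1'), ('q', 'a1', 'b2'), ('q', 'a2', 'b1'), ('p', 'a3', 'b3')}
p = lambda w: w[0] == 'p'
Kp = K(1, p, worlds)

# A3: K_i phi => phi;  A4: K_i phi => K_i K_i phi;  A5: ~K_i phi => K_i ~K_i phi
a3 = lambda w: (not Kp(w)) or p(w)
a4 = lambda w: (not Kp(w)) or K(1, Kp, worlds)(w)
a5 = lambda w: Kp(w) or K(1, (lambda v: not Kp(v)), worlds)(w)
```

Since ~_i is reflexive, symmetric, and transitive, this is exactly the equivalence-relation Kripke semantics for which S5 is sound.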
For various reasons, philosophers have argued that S5 is an inappropriate axiom system for modelling human knowledge. For example, axiom A2 seems to assume perfect reasoners, that know all logical consequences of their knowledge, while A5 assumes that reasoners can do negative introspection, and know about their lack of knowledge. While these axioms may be controversial for some notions of knowledge, they are not controversial for the external, information-based notion that we are concerned with here. Moreover, it is easy to see that all of these axioms are still sound even under the restrictions on systems discussed in Section 2. But of course, they may no longer be complete.

⁴ Since our base language consists of primitive environment states rather than primitive propositions, we also have tautologies of the form e ⇒ ¬e' if e, e' are distinct primitive environment states. In addition, if we have only finitely many primitive environment states, say e_1, ..., e_k, then e_1 ∨ ... ∨ e_k is a tautology.

Recall the Alice and Bob story discussed in the introduction. What assumptions were really needed to show that the state of knowledge defined by formula (*) was not attainable? As the counterexamples given in the introduction suggest, we need to assume cumulative knowledge (i.e., no "forgetting") and that the environment does not affect the transitions. Also implicit in the story is that Alice and Bob initially study nature independently, so we also need the assumption that the environment determines the initial states. It turns out that these three conditions are sufficient to show that (*) is not attainable, as we shall see below. Our first step is to get a semantic characterization of the attainable states of knowledge under these assumptions.

Definition 4.1. A knowledge state (S,s) satisfies the pasting condition if, whenever s', s_1, ..., s_n are global states in S such that s' ~_i s_i and e is the environment component of s_i for i = 1, ..., n, then there exists a global state s'' in S such that s'' is process equivalent to s' and e is the environment component of s''.

Thus, a knowledge state (S,s) satisfies the pasting condition if, whenever s' = (·, l_1, ..., l_n) ∈ S, and also s_1 = (e, l_1, ·, ..., ·) ∈ S, s_2 = (e, ·, l_2, ·, ..., ·) ∈ S, ..., and s_n = (e, ·, ..., ·, l_n) ∈ S, then s'' = (e, l_1, ..., l_n) ∈ S. (Each · represents a value we do not care about.)

Proposition 4.2. If M is a system where (1) knowledge is cumulative, (2) the environment determines the initial states, and (3) transitions are independent of the environment, then all the states of knowledge attainable in M satisfy the pasting condition. Conversely, if a state of knowledge satisfies the pasting condition, then it is attainable in some system M satisfying these three assumptions.

Note that these assumptions are not unreasonable. They hold for "ideal" sensors or robots observing and communicating about an external environment. Not surprisingly, the fact that the pasting condition holds affects the properties of knowledge. Neither S5_n nor S5I_n is complete. Consider the following axiom in the extended language, where e is a primitive environment state and {1, ..., n} is the set of processes:

A6. I¬e ⇒ K_1¬e ∨ ... ∨ K_n¬e.

This says that if it is implicit knowledge that the primitive environment state is not e, then it must be the case that some process knows it. The soundness of this axiom, which is not a consequence of S5I_n, is easily seen to follow from the pasting condition. We remark that the formula (*) discussed in the introduction is a consequence of S5 together with A6 (provided we assume that the primitive proposition p in formula (*) is a primitive environment state; recall that we said it "completely characterizes the environment"). Even without implicit knowledge in the language, we can get an axiom that captures some of the intuition behind the pasting condition.
We define a pure knowledge formula to be a Boolean combination of formulas of the form K_i φ, where φ is arbitrary. Consider the following axiom, where e is a primitive environment state and ψ is a pure knowledge formula:

A6'. K_i(ψ ⇒ ¬e) ⇒ K_i(ψ ⇒ (K_1¬e ∨ ... ∨ K_n¬e)).

The intuition behind this rather mysterious formula is discussed in [FV]. Let ML_n (resp. ML'_n) be S5I_n (resp. S5_n) together with A6 (resp. A6').

Theorem 4.3. ML_n (resp. ML'_n) is a sound and complete axiomatization (for the extended language) for systems of n processes where (1) knowledge is cumulative, (2) the environment determines the initial states, and (3) transitions are independent of the environment.

Soundness and completeness theorems for ML'_n and ML_n are also proven by Fagin and Vardi in [FV], but in a rather different setting from ours. The model in [FV] is much more concrete than the model here; in particular there is in their model a particular notion of communication by which the system changes its state. Here we have an abstract model in which, by analyzing the implicit and explicit assumptions in [FV], we have captured the essential assumptions required for the pasting property and A6 to hold. While soundness in [FV] follows easily from soundness in the model here, they have to work much harder to prove completeness. Recall from our Alice and Bob story in the introduction that the assumptions we made all seemed to be necessary. The following theorem confirms this fact. It shows that if we drop any one of assumptions (1), (2), or (3), all states of knowledge are attainable, and S5_n (resp. S5I_n) becomes complete.

Theorem 4.4. All states of knowledge are attainable in systems that satisfy only two of the restrictions of Proposition 4.2 and Theorem 4.3. Thus S5_n (resp. S5I_n) is a sound and complete axiomatization (for the extended language) for such systems.
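The pasting condition of Definition 4.1 can be checked mechanically on a finite set of global states. A sketch under the assumed tuple encoding (e, l_1, ..., l_n); the concrete states in the usage below are illustrative, not taken from the paper.

```python
def satisfies_pasting(S, n):
    """Pasting condition on a finite set S of global states (e, l1, ..., ln):
    if some state with local states l1, ..., ln is in S, and for each i some
    state with environment e and process i's local state l_i is in S, then
    (e, l1, ..., ln) must itself be in S."""
    environments = {t[0] for t in S}
    for sp in S:                       # sp plays the role of s'
        for e in environments:
            # does each process i see its local state sp[i] together with e?
            if all(any(t[0] == e and t[i] == sp[i] for t in S)
                   for i in range(1, n + 1)):
                if (e,) + sp[1:] not in S:
                    return False
    return True
```

A failing instance mirrors the Alice-and-Bob situation: each local state is separately compatible with environment e, yet no single state in S combines them with e.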
We remark that in Proposition 4.2 and Theorem 4.3 we assumed that the environment determines the initial states. If we make the stronger assumption that the environment uniquely determines the initial state, then a smaller set of knowledge states is attainable, and again knowledge has extra properties. This is discussed in detail in the full paper. Finally, we turn our attention to systems where the initial state uniquely determines the environment. Recall that this assumption is appropriate for distributed systems applications where the environment is just the initial setting of certain local variables in each process. If knowledge is not cumulative, then it again can be shown that all states of knowledge can be attained. But if knowledge is cumulative, then not only is it initially the case that the processes' state uniquely determines the environment, but this is true at all times.

Definition 4.5. (S,s) is a state of knowledge where the processes' state uniquely determines the environment if, whenever two global states s and s' in S are process equivalent, then s and s' have the same environment component.

Proposition 4.6. If M is a system where knowledge is cumulative and the initial state uniquely determines the environment, then in every state of knowledge attainable in M, the processes' state uniquely determines the environment. Moreover, every state of knowledge where the processes' state uniquely determines the environment is attainable in a system where knowledge is cumulative and the initial state uniquely determines the environment.

We can show that if the processes' state always uniquely determines the environment, then S5I_n is not complete. The fact that the processes' state uniquely determines the environment can be characterized by the following axiom:

A7. φ ⇒ Iφ.

Note that this is an axiom in the extended language.
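Definition 4.5 likewise reduces to a finite check under the same assumed tuple encoding: two states are process equivalent exactly when all their local-state components agree, so the condition is that such pairs never differ in the environment component.

```python
def processes_state_determines_env(S):
    """Definition 4.5 on a finite set S of global states (e, l1, ..., ln):
    any two process-equivalent states (identical local-state components)
    must share the same environment component. Encoding is illustrative."""
    return all(s[0] == t[0] for s in S for t in S if s[1:] == t[1:])
```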
Somewhat surprisingly (and in contrast to the situation in Theorem 4.3), it turns out that if we restrict our attention to formulas involving only knowledge, then S5_n is a complete axiomatization. No new axioms are required! Thus we have:

Theorem 4.7. S5_n is a sound and complete axiomatization for systems of n processes whose knowledge is cumulative where the initial state uniquely determines the environment. In the extended language, S5I_n together with A7 forms a sound and complete axiomatization for such systems.

This theorem shows that there are cases where the language may not be sufficiently powerful to capture the fact that not all states of knowledge are attainable. Details of the proofs of theorems stated above and further results along these lines can be found in the full paper.

5. Conclusions

We have presented a general model for the knowledge of components of a system and shown how the properties of knowledge may depend on the subtle interaction of the parameters of the system. Although we have isolated a few parameters of interest here, we clearly have not made an exhaustive study of the possibilities. Rather, we see our contributions here as (1) showing that the standard S5 possible-worlds model for knowledge may not always be appropriate, even for the external notion of knowledge which does satisfy the S5 axioms, (2) providing a general model in which these issues may be examined, (3) isolating a few crucial parameters and formulating them precisely in our model, and (4) providing complete axiomatizations of knowledge for a number of cases of interest (complete axiomatizations are provided for many choices of parameters in the full paper). We intend to push this work further by seeing what happens when we add common knowledge and time to the language. By results of [HM1] (since reproved and generalized in [CM, FI]), we know that for many choices of parameters, common knowledge will not be attainable in a system.
Thus, we expect that even in cases where the axioms of S5 are complete, when we add common knowledge to the language we will need extra axioms beyond the standard S5 axioms for common knowledge (see [Le, HM2] for a discussion of the S5 axioms of common knowledge). We expect to find yet other complexities if we allow the language to talk explicitly about time by adding temporal modalities (as is done in [Le, RK, HV]). We can then explicitly axiomatize cumulative knowledge, although results of [HV] imply that it may often be impossible to get complete axiomatizations in some cases.

6. References

[BL] R. Brachman and H. Levesque, Readings in Knowledge Representation, Morgan Kaufmann, 1985.
[CM] M. Chandy and J. Misra, How processes learn, Proc. 4th ACM Symposium on Principles of Distributed Computing, 1985, pp. 204-214.
[DM] C. Dwork and Y. Moses, Knowledge and common knowledge in a Byzantine environment I: crash failures, Theoretical Aspects of Reasoning about Knowledge (ed. J.Y. Halpern), Morgan Kaufmann, 1986, pp. 149-170.
[FHV] R. Fagin, J.Y. Halpern, and M.Y. Vardi, What can machines know? On the properties of knowledge in distributed systems, to appear as an IBM Research Report, 1986.
[FV] R. Fagin and M.Y. Vardi, Knowledge and implicit knowledge in a distributed environment, Theoretical Aspects of Reasoning about Knowledge (ed. J.Y. Halpern), Morgan Kaufmann, 1986, pp. 187-206.
[FI] M.J. Fischer and N. Immerman, Foundations of knowledge for distributed systems, Theoretical Aspects of Reasoning about Knowledge (ed. J.Y. Halpern), Morgan Kaufmann, 1986, pp. 171-186.
[HF] J.Y. Halpern and R. Fagin, A formal model of knowledge, action, and communication in distributed systems: preliminary report, Proc. 4th ACM Symposium on Principles of Distributed Computing, 1985, pp. 224-236.
[HM1] J.Y. Halpern and Y.O. Moses, Knowledge and common knowledge in a distributed environment, Proc. 3rd ACM Symposium on Principles of Distributed Computing, 1984, pp. 50-61. Revised version appears as IBM Research Report RJ 4421, 1986.
[HM2] J.Y. Halpern and Y.O. Moses, A guide to the modal logics of knowledge and belief, Proc. 9th International Joint Conference on Artificial Intelligence (IJCAI-85), 1985, pp. 480-490.
[HV] J.Y. Halpern and M.Y. Vardi, The complexity of reasoning about knowledge and time: extended abstract, Proc. 18th ACM Symposium on the Theory of Computing, Berkeley, May 1986, pp. 304-315.
[Hi] J. Hintikka, Knowledge and Belief, Cornell University Press, 1962.
[Kr] S. Kripke, Semantical analysis of modal logic, Zeitschrift für Mathematische Logik und Grundlagen der Mathematik 9 (1963), pp. 67-96.
[Le] D. Lehmann, Knowledge, common knowledge, and related puzzles, Proc. 3rd ACM Symposium on Principles of Distributed Computing, 1984, pp. 467-480.
[Mo] R.C. Moore, Reasoning about knowledge and action, Technical Note 191, Artificial Intelligence Center, SRI International, 1980.
[PR] R. Parikh and R. Ramanujam, Distributed processing and the logic of knowledge, Proc. Workshop on Logics of Programs, Brooklyn, June 1985, Springer-Verlag, Lecture Notes in Computer Science Vol. 193, pp. 256-268.
[Ro] S.J. Rosenschein, Formal theories of knowledge in AI and robotics, New Generation Computing 3, 1985, pp. 345-357.
[RK] S.J. Rosenschein and L.P. Kaelbling, The synthesis of digital machines with provable epistemic properties, Theoretical Aspects of Reasoning about Knowledge (ed. J.Y. Halpern), Morgan Kaufmann, 1986, pp. 83-98.
IS BELIEF REVISION HARDER THAN YOU THOUGHT?

Marianne Winslett*
Stanford University, Computer Science Dept.
Stanford CA 94305

Abstract. Suppose one wishes to construct, use, and maintain a knowledge base (KB) of beliefs about the real world, even though the facts about that world are only partially known. In the AI domain, this problem arises when an agent has a base set of beliefs that reflect partial knowledge about the world, and then tries to incorporate new, possibly contradictory knowledge into this set of beliefs. We choose to represent such a KB as a logical theory, and view the models of the theory as representing possible states of the world that are consistent with the agent's beliefs. How can new information be incorporated into the KB? For example, given the new information that "b or c is true," how can one get rid of all outdated information about b and c, add the new information, and yet in the process not disturb any other information in the KB? The burden may be placed on the user or other omniscient authority to determine exactly what to add and remove from the KB. But what's really needed is a way to specify the desired change intensionally, by stating some well-formed formula that the state of the world is now known to satisfy and letting the KB algorithms automatically accomplish that change. This paper explores a technique for updating KBs containing incomplete extensional information. Our approach embeds the incomplete KB and the incoming information in the language of mathematical logic. We present semantics and algorithms for our operators, and discuss the computational complexity of the algorithms. We show that the incorporation of new information is difficult even without the problems associated with justification of prior conclusions and inferences and identification of outdated inference rules and axioms.

1.
Introduction

This section informally describes our view of the knowledge base component of an intelligent reasoning agent, and reviews the phases of belief revision required when the agent makes new observations about the world, concluding with an outline of the remainder of the paper.

* AT&T Bell Laboratories Scholar

We envision the KB for an agent as containing, among other items, a set of extensional beliefs whose contents are not generated via inference from other data but rather by direct observation or input, and whose justification is therefore simply the fact of direct input. One can imagine these primitive beliefs as stemming from processed sensory input, such as a scene analyzer that uses a line and shadow drawing as its primitive depiction. In addition, the agent's knowledge base may contain derived or intensional beliefs. Intensional beliefs are derived from extensional and intensional beliefs via data-independent logical inference rules such as modus ponens, and data-dependent axioms, such as functional dependencies. For example, in the blocks world, a typical axiom might say that any block is either on the table or on another block. An extensional belief may also have an intensional justification; for example, in the blocks world an agent may directly observe that block A is on the table and also be able to deduce that fact from the observation that block A is not on top of any other block. Belief revision comes in a number of guises. First, new extensional information may lead to a change in extensional beliefs. These new beliefs may in turn trigger changes in intensional and previously extensional beliefs, and in axioms.
For example, the placement of a new block so as to block vision of the rest of a scene may cause uncertainty about the placement of the objects hidden by the new block, or may cause the beliefs about hidden objects to become intensional (i.e., justified solely by frame axioms) rather than extensional, depending upon the inferences employed by the agent. This process of belief justification, or reconsideration of intensional beliefs, has been studied extensively (see e.g. [Doyle 79]). Another variety of belief revision occurs when new information implies that the current set of axioms is now incorrect (typically, when the new theory has no models). There is no established technique for revamping axiom sets, and this is an open area of research [Fagin 83, 84]. What is less well recognized is that the first stage of belief revision, incorporating new extensional beliefs into the pre-existing set of extensional beliefs, is itself quite difficult if either the new or old beliefs involve incomplete information. In this paper we focus entirely on this problem of adding new extensional knowledge to the set of extensional beliefs, and do not consider the activities of subsequent stages of belief revision such as making new inferences, checking belief justifications, or revoking old axioms. In addition, we do not consider here the problems associated with the presence of intensional beliefs and interaction of intensional and extensional beliefs when new extensional information is presented to the agent; the interested reader is referred to [Winslett 86b] for a discussion of these topics. Finally, we do not consider the problem of extracting information from the KB; for that purpose, we suggest operations such as Levesque's ASK [Levesque 84].

From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved.
We consider extensional beliefs that have incomplete information in the form of disjunctions or null values, which are attribute values that are known to lie in a certain domain but whose value is currently unknown; see e.g. [Imielinski 84], [Reiter 84]. Levesque [84] considered the problem of updating such KBs with his TELL operation; however, TELL could only eliminate models from the set of models for the theory, not change the internal contents of those models. In other words, one could only TELL the KB new information that was consistent with what was already known. This is an important and vital function, but an agent also needs to be able to make changes in the belief set that contradict current beliefs. For example, the agent should be able to change the belief that block A is on block B if, for example, the agent observes a robot arm removing A from B. In recent work on circuit diagnosis and updating propositional formulas, DeKleer [85] and Reiter [85] take a logic theory describing the correct behavior of a circuit, and consider the problem of making minimal changes in that theory in order to make it consistent with a formula describing the circuit's observed behavior. (Weber [86] does not focus on circuit diagnosis, but takes a similar approach with a slightly different choice of semantics.) One cannot expect to find a polynomial-time algorithm for diagnosis, as the changes needed in the theory (the diagnosis) are themselves the output of the diagnostic process, and the determination of whether any changes are needed at all in a propositional theory (i.e., whether the circuit is functioning correctly) cannot in general be done in polynomial time. However, in Section 3 we present a polynomial-time approach that may be used when only the new "diagnosed" theory is of interest, rather than the changes made to the old theory.
In the database realm, the problem of incorporating new information was considered by Abiteboul and Grahne [Abiteboul 85], who investigate the problem of simple updates on several varieties of relations containing null values and simple auxiliary constraints. They do not frame their investigation in the paradigm of mathematical logic, however, making their work less applicable to AI needs. In the remainder of this paper, we set forth a facility for incorporating new information into KBs of extensional information. Section 2 introduces extensional belief theories, a formalization of such KBs. Section 3 sets forth a language for adding new information to these theories, and gives syntax, semantics, and a polynomial-time algorithm for a method of adding new information. In [Winslett 86b], the algorithm is proven correct in the sense that the alternative worlds produced under this algorithm are the same as those produced by updating each alternative world individually.

2. Extensional Belief Theories

The language L for extensional belief theories, our formalization of extensional belief sets, contains the usual first-order symbols, including an infinite set of variables and constants, logical connectives, the equality predicate, and quantifiers. L also includes the truth values T and F, and an infinite set of Skolem constants ε, ε1, ε2, .... Skolem constants are the logical formulation of null values and represent existentially quantified variables; we assume that the reader is acquainted with their use. L does not contain any functions other than constants and Skolem constants, and all constants are standard names. L also contains a set of purely extensional predicates (e.g., OnTopOf(), Red()), that is, predicates for which the agent's belief justification is always the fact of direct observation, rather than inference from other beliefs. (In [Winslett 86b] we also consider predicates with intensional aspects.)
In addition, L includes one extra history predicate H_R for each extensional predicate R. History predicates are for internal use of the algorithm implementations only; the agent is unaware of their existence. Unlike the language used by Levesque [84], L does not include a K operator to refer explicitly to the KB's knowledge about the world. The main use of the K operator in the TELL operation is to add bits of a closed-world assumption to the belief set, and we have devised a separate technique for performing that function and for maintaining the closed-world assumption when new information arrives, as described in [Winslett 86].

Definition. A theory T over L is an extensional belief theory iff
(1) for each pair of constants c1, c2 in L, T contains the unique name axiom c1 ≠ c2;
(2) the remainder of T (its body) is any finite set of well-formed formulas of L that do not contain variables.

Though the models of T look like possible states of the world consistent with the agent's extensional observations, not everything in a model of T is an instantiated consequence of extensional beliefs of the agent, due to the presence of history predicates. For this reason we define an alternative world of T as a model of T minus the equality and history predicates.

3. An Operation For Specifying New Beliefs

We now present an operation for specifying new extensional beliefs or observations, based on the language L.

3.1. Observation Syntax

Let φ and ω be formulas over L without history predicates or variables. Then an observation takes the form IF φ THEN ω.

Examples. Suppose L contains two predicates, Red() and OnTopOf(), and B, C, and Table are constants in L, presumably denoting blocks and tables. Then the following are observations, with their approximate intended semantics (presented formally in the next section) offered in italics:

IF T THEN ¬Red(B) ∨ (OnTopOf(B, ε) ∧ (ε ≠ Table)).
Change alternative worlds so that either B isn't red or it's on top of something other than the table.

IF ¬Red(B) THEN F.

Eliminate all alternative worlds where B is not red.

IF OnTopOf(B, ε) ∨ OnTopOf(ε, B) THEN ¬Red(ε) ∧ (ε ≠ C).

In each alternative world where B is on top of or below something, change the alternative world so that that something isn't red and isn't C.

3.2. Semantics

We define the semantics of an observation applied to an extensional belief theory T by its desired effect on the alternative worlds of T. In particular, the alternative worlds of the new theory must be the same as those obtained by applying the observation separately to each original alternative world. This may be rephrased as follows: Extensional beliefs with incomplete information represent a possibly infinite set of alternative worlds, each different and each one possibly representing the real, unknown world. The correct result of incorporating new information into the KB is that obtained by storing a separate extensional belief theory with complete information for each alternative world and processing the observation in parallel on each separate KB. A necessary and sufficient guarantee of correctness for any more efficient and practical method of observation processing is that it produce the same results as the parallel computation method. Equivalently, we require that the diagram below be commutative: both paths from upper-left-hand corner to lower-right-hand corner must produce the same result.

    T  --has alternative world-->  A
    |                              |
    observation                    observation
    |                              |
    v                              v
    T' --has alternative world-->  A'

The general criteria guiding our choice of semantics are, first, that an observation cannot directly change the truth valuations of any atomic formulas (atoms†) except those that unify†† with atoms of ω. For example, the observation IF T THEN Red(A) cannot change the color of any block but A, and cannot change the truth valuation of formulas such as Green(A).
(Of course, after the first stage of belief revision has incorporated the extensional fact that A is red, in a second stage the axioms and rules of inference may be used to retract the belief that A is green and any other outmoded fancies. As noted earlier, in this paper we only consider the first stage of belief revision.) The second criterion is that the new information in ω is to represent the most exact and most recent state of extensional knowledge obtainable about the atoms in ω, and is to override all previous extensional information about the atoms of ω. These criteria have a syntactic component: one should not necessarily expect two observations with logically equivalent ωs to produce the same results.

† In this discussion, atoms may contain Skolem constants.
†† In this formulation, two atoms a and b unify if there exists a substitution of constants and/or Skolem constants for the Skolem constants of a and b under which a and b are syntactically identical.

For example, if the agent observes IF T
On the other hand, with IF T THEN if leprechauns exist, then there's one outside my door, the agent opens up the possibility of leprechauns even if the agent previously did not believe in them. The ability to make changes in the theory dependent upon an unknown bit of information, without affecting the truth or falsity of that information, is crucial.

An intuitive motivation is in order for our method of handling Skolem constants. The essential idea is that if the agent only had more information, the agent would not be making an observation containing Skolem constants, but rather an ordinary observation without Skolem constants. Under this assumption, the correct way to handle an observation O with Skolem constants is to consider all the possible Skolem-constant-free observations represented by O and execute each of those in parallel, collecting the models so produced in one large set. Then the result of the observation the agent would have had, had more information been available, is guaranteed to be in that set.

For a formal definition of semantics that meets the criteria outlined in this section, let O be an observation and let M be a model of an extensional belief theory T. Let σ be a substitution of constants for exactly the Skolem constants of φ, ω, and T, such that M is a model of (T)σ, that is, a model of the theory resulting from applying the substitution σ to each formula in T.†† Then for each pair M and σ, S is the set of models produced by applying O to M as follows: If (φ)σ is false in M, then S contains one model, M.

† Other possible semantics are considered in [Winslett 86b].
†† Since Skolem constants do not appear directly in models, the purpose of σ is to associate the Skolem constants in O with specific constants in M, so that the agent can directly refer to objects such as "that block that I earlier observed was red, though I wasn't able to tell exactly which block it was."

Otherwise, S contains exactly every model M* such that (1) M* agrees with M on the truth values of all Skolem-constant-free atoms except possibly those in (ω)σ; and (2) (ω)σ is true in M*, and its truth valuation does not violate the unique name axioms of L.

Example. If the agent observes IF T THEN Red(A) ∨ Green(A), then three models are created from each model M of T: one where Red(A) ∧ Green(A) is true, one where ¬Red(A) ∧ Green(A) is true, and one where Red(A) ∧ ¬Green(A) is true, regardless of whether A was red or green in M originally.

The remarks at the beginning of this section on correctness of observation processing may be summed up in the following definition:

Definition. The incorporation of an observation IF φ THEN ω into an extensional belief theory T, producing a new theory T', is correct and complete iff T' is an extensional belief theory and the alternative worlds of T' are exactly those alternative worlds represented by the union of the models in the S sets.

Please note that the observation IF φ THEN ω does not set up a new axiom regarding φ and ω; rather, the new information is subject to revision at any time, as is all extensional data.

3.3. An Algorithm for Incorporating Observations into the KB

The semantics presented in the previous section describe the effect of an observation on the models of a theory; the semantics give no hints whatsoever on how to translate that effect into changes in the extensional belief theory.
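The model-by-model semantics above can be sketched in a few lines of Python for the propositional, Skolem-free case. This is our own illustrative sketch, not the paper's formalism: the tuple encoding of formulas, the atom names, and the treatment of T as an always-true dummy atom are all assumptions, and Skolem constants and unique-name axioms are omitted.

```python
from itertools import product

# Models assign truth values to atoms; formulas are nested tuples:
# ('atom', name), ('not', f), ('and', f, g), ('or', f, g).
def holds(f, m):
    op = f[0]
    if op == 'atom': return m[f[1]]
    if op == 'not':  return not holds(f[1], m)
    if op == 'and':  return holds(f[1], m) and holds(f[2], m)
    if op == 'or':   return holds(f[1], m) or holds(f[2], m)

def atoms(f):
    # collect the atom names occurring in f
    if f[0] == 'atom':
        return {f[1]}
    return set().union(set(), *(atoms(g) for g in f[1:]))

def observe(phi, omega, m):
    """The S set for one model m: if phi fails in m, m survives unchanged;
    otherwise every variant of m over omega's atoms that satisfies omega."""
    if not holds(phi, m):
        return [m]
    free = sorted(atoms(omega))
    out = []
    for vals in product([True, False], repeat=len(free)):
        m2 = dict(m, **dict(zip(free, vals)))
        if holds(omega, m2):
            out.append(m2)
    return out

# IF T THEN Red(A) v Green(A): T is encoded as an always-true atom here.
phi = ('atom', 'T')
omega = ('or', ('atom', 'RedA'), ('atom', 'GreenA'))
m = {'T': True, 'RedA': True, 'GreenA': False}
print(len(observe(phi, omega, m)))   # 3 models, as in the example above
```

Running this on the Red(A) ∨ Green(A) example reproduces the three models of the text; a model in which φ is false is returned unchanged.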
An algorithm for incorporating observations into the KB cannot proceed by generating models from the theory and updating them directly, because it may require exponential time to generate even one model (since satisfiability testing is involved) and there may be an exponential number of non-isomorphic models. Any algorithm for observations must find a more efficient way of implementing these semantics.

The Observation Algorithm proposed in this section for incorporating observations into an extensional belief theory T may be summarized as follows: For each atom f appearing in T that unifies with an atom of ω, replace all occurrences of f in T by a history atom.† Then add a new formula to T that defines the correct truth valuation of f when φ is false, and another formula to give the correct valuation of f when φ is true.

Before a more formal presentation of the Observation Algorithm, let us motivate its workings in a series of examples that will illustrate the problems and principles underlying the algorithm. Let the body of T be ¬OnTopOf(A, B), and the new observation be IF T THEN OnTopOf(A, B). One's natural instinct is to add φ → ω to T, because the observation says that ω is to be true in all alternative worlds where φ is true now. Unfortunately, ω probably contradicts the rest of T. For example, adding T → OnTopOf(A, B) to T makes T inconsistent. Evidently ω may contradict parts of T, and those parts must be removed from T; in this case it would suffice to simply remove the formula ¬OnTopOf(A, B). But suppose that the body of T contains more complicated formulas, such as Red(A) → ¬OnTopOf(A, B).

† These history atoms are not visible externally, i.e., they may not appear in any information requested from or provided by the KB; they are for internal KB use only.
One cannot simply excise ¬OnTopOf(A, B) or replace it by a truth value without changing the models for the rest of the atoms of T; but by the semantics for observations, no truth valuation for extensional belief atoms except that of OnTopOf(A, B) can be affected by the requested observation. We conclude that contradictory wffs cannot simply be excised. They may be ferreted out and removed by a process such as that used in [Weber 86]; however, in the worst case such a process will multiply the space required to store the theory by a factor that is exponential in the number of atoms in the observation!

The solution to this problem is to replace all occurrences of OnTopOf(A, B) in T by another atom. However, the atom used must not be part of the alternative world of the agent, as otherwise the replacement might change that atom's truth valuation. This is where the special history predicates of L come into play; we can replace each atom of ω by a history atom throughout T, and make only minimal changes in the truth valuations in the alternative worlds of T. In the current case, OnTopOf(A, B) is replaced by H_OnTopOf(A, B, O), where O is simply a unique ID for the current observation.† For convenience, we will write H_OnTopOf(A, B, O) as H(OnTopOf(A, B), O), to avoid the subscript. The substitution that replaces every atom f of ω by its history atom H(f, O) is called the history substitution and is written σ_H.

Let's now look at a slightly more complicated observation. Suppose that the agent observes O: IF Red(B) THEN OnTopOf(A, B), when T contains ¬OnTopOf(A, B). As just explained, the first step is to replace this body by (¬OnTopOf(A, B))σ_H, i.e., ¬H(OnTopOf(A, B), O). Within a model M of T, this step interchanges the truth valuations of every atom f in ω and its history atom H(f, O); if φ was true in M initially, then (φ)σ_H is now true in M.
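As an illustration (ours, not the paper's), the history substitution σ_H is easy to realize over a simple nested-tuple encoding of formulas; the encoding and all names here are assumptions made for the sketch.

```python
# Formulas as nested tuples: atoms are ('atom', pred, arg1, ...);
# connectives are ('not', f), ('imp', f, g), ('iff', f, g), etc.;
# a history atom is ('H', atom, obs_id).
def history_subst(f, targets, obs_id):
    """Apply sigma_H: replace each atom in `targets` by its history atom."""
    if f[0] in ('atom', 'H'):
        return ('H', f, obs_id) if f in targets else f
    return (f[0],) + tuple(history_subst(g, targets, obs_id) for g in f[1:])

# Replacing OnTopOf(A,B) in the body ~OnTopOf(A,B) for observation O:
on = ('atom', 'OnTopOf', 'A', 'B')
body = ('not', on)
print(history_subst(body, {on}, 'O'))
# ('not', ('H', ('atom', 'OnTopOf', 'A', 'B'), 'O'))
```

Atoms outside the target set, such as Red(B), pass through unchanged, which is exactly the "minimal change" property the text requires of σ_H.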
It is now possible to act on the original algorithmic intuition and add (φ)σ_H → ω to the body of T, establishing correct truth valuations for the ground atomic formulas of ω in models where φ was true initially. In the blocks world example, the body of T now contains the two formulas ¬H(OnTopOf(A, B), O) and Red(B) → OnTopOf(A, B).

Unfortunately, the fact that if B is not red then A is not on top of B has been lost! The solution is to also add formulas governing truth valuations for atoms in ω when φ is false: Add ¬(φ)σ_H → (f ↔ H(f, O)) to T for each atom f in ω. Then T contains ¬H(OnTopOf(A, B), O), Red(B) → OnTopOf(A, B), and ¬Red(B) → (OnTopOf(A, B) ↔ H(OnTopOf(A, B), O)). T now has the desired alternative worlds.

The informal algorithm proposed so far does not work when Skolem constants are present in either the theory or the observation. The basic difficulty is that one must update every atom in the theory that unifies with something in ω, since truth valuations for that atom might possibly be changed by the new observation. For example, suppose the body of T contains the formula Red(ε), and the agent receives the new information IF T THEN ¬Red(B). In other words, the agent knew that some object was red, and has observed that block B is now not red, quite possibly because it has just been painted green.† A moment's thought shows that quite possibly no object is now red (e.g., if the agent has been painting them one by one), and so the formula Red(ε), which unifies with Red(B), must be changed in some way; (ε ≠ B) → Red(ε) is the obvious replacement. In the general case, it is necessary to replace all atoms in T that unify with atoms of ω by history atoms as part of the history substitution step.

Let's examine one final example. Suppose the agent's theory initially contains the wff Red(A) and the new observation takes the form IF T THEN Red(ε).
The suggested algorithm produces a new theory containing the three formulas H(Red(A), O), T → Red(ε), and F → (Red(ε) ↔ H(Red(ε), O)). Unfortunately, this theory has models where Red(A) is false! The problem is that the algorithm does not yet properly take care of the alternative worlds where ε is not bound to A; for in those worlds, Red(A) must still be true, regardless of what the new information in the observation may be. The solution is to add (ε ≠ A) → (Red(A) ↔ H(Red(A), O)) to T, and in fact this new theory has the desired alternative worlds.

† If the argument O were not present, then a similar substitution in a later observation involving OnTopOf(A, B) would make big changes in the alternative worlds of T at that time.

The lessons of the preceding examples may be summarized as an algorithm for executing an observation O given by IF φ THEN ω against an extensional belief theory T.

The Observation Algorithm

Step 1. Make history. For each atom f either appearing in ω or appearing in T and unifying with an atom of ω, replace all occurrences of f by its history atom H(f, O) in the body of T, i.e., replace the body N of T by (N)σ_H. Call the resulting theory T'.

Step 2. Define the observation. Add the wff (φ)σ_H → ω to T'.

Step 3. Restrict the observation. First a bit of terminology: if atoms f and g unify with one another under substitutions σ₁ and σ₂, then σ₁ is more general than σ₂ if it contains fewer individual substitutions for Skolem constants. For each atom f in T' such that f is not an equality atom and f unifies with some atom of ω, let Σ be the set of all most general substitutions σ under which f unifies with atoms of ω. Add the wff

    (f ↔ H(f, O)) ∨ ((φ)σ_H ∧ ⋁_{σ∈Σ} σ)

to T', writing the substitution σ = {ε₁ ← c₁, …, εₙ ← cₙ} as (ε₁ = c₁) ∧ ⋯ ∧ (εₙ = cₙ).
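In the Skolem-constant-free case, Step 3's restriction reduces to the ¬(φ)σ_H → (f ↔ H(f, O)) formulas of the informal development, and the whole algorithm can be sketched in Python. This is our sketch under that simplification: the tuple encoding of formulas is an assumption, not the paper's notation, and unification is degenerate (an atom unifies only with itself).

```python
def atoms(f):
    """Collect the atomic subformulas (including history atoms) of f."""
    if f[0] in ('atom', 'H'):
        return {f}
    return set().union(set(), *(atoms(g) for g in f[1:]))

def hsub(f, targets, oid):
    """Step 1's history substitution sigma_H."""
    if f[0] in ('atom', 'H'):
        return ('H', f, oid) if f in targets else f
    return (f[0],) + tuple(hsub(g, targets, oid) for g in f[1:])

def observation(body, phi, omega, oid):
    targets = atoms(omega)
    new_body = [hsub(f, targets, oid) for f in body]       # Step 1
    phi_h = hsub(phi, targets, oid)
    new_body.append(('imp', phi_h, omega))                 # Step 2
    for a in sorted(targets):                              # Step 3, degenerate:
        new_body.append(('imp', ('not', phi_h),            # when phi was false,
                        ('iff', a, ('H', a, oid))))        # f keeps its old value
    return new_body

# IF Red(B) THEN OnTopOf(A,B) against a body containing ~OnTopOf(A,B):
on, red = ('atom', 'OnTopOf', 'A', 'B'), ('atom', 'Red', 'B')
for wff in observation([('not', on)], red, on, 'O'):
    print(wff)
```

On the blocks example this produces exactly the three formulas derived informally above: ¬H(OnTopOf(A,B), O), Red(B) → OnTopOf(A,B), and ¬Red(B) → (OnTopOf(A,B) ↔ H(OnTopOf(A,B), O)).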
Intuitively, for f an atom that might possibly have its truth valuation changed by observation O, this formula says that the truth valuation of f can change only in a model where φ was true originally, and further that in any model so created, f must be unified with an atom of ω. □

The models of T' produced by the Observation Algorithm represent exactly the alternative worlds that O is defined to produce from T:

Theorem 1. Given an extensional belief theory T and an observation O, the Observation Algorithm correctly and completely performs O. In particular, (1) the Observation Algorithm produces a legal extensional belief theory T'; (2) the alternative worlds of T' are the same as the alternative worlds produced by directly updating the models of T. □

The interested reader is referred to [Winslett 86b] for the proof of Theorem 1.

3.4. Cost of the Observation Algorithm

Let k be the number of instances of atoms in the observation O, and let R be the maximum number of distinct atoms of T over the same extensional predicate. When T and O contain no Skolem constants, the Observation Algorithm will process O in time O(k log R) (the same asymptotic cost as for ordinary database updates) and increase the size of T by O(k) worst case. This is not to say that an O(k log R) implementation of observations is the best choice; rather, it is advisable to devote extra time to heuristics for minimizing the length of the formulas to be added to T. Nonetheless, a worst-case time estimate for the algorithm is informative, as it tells us how much time must be devoted to the algorithm proper. The data structures required for this running time are described elsewhere [Winslett 86].

When Skolem constants appear in T or in O, the controlling factor in costs is the number of atoms of T that unify with atoms of O. If n atoms of T each unify with one atom of O, then T will grow by O(n + k).
In the worst case, every atom of T may unify with every atom of O, in which case after a series of m observations, the number of occurrences of atoms in T may multiply by O(mk). To prevent excessive growth in T, we have devised a scheme of delayed evaluation and simplification of expensive observations, by bounding the permissible number of unifications for the atoms of an incoming observation.

4. Other Work

For a discussion of the interaction of extensional and intensional information in belief revision (e.g., how to enforce a class of axioms in the face of incoming information) and other possible semantics for belief revision, the interested reader is referred to [Winslett 86b].

5. Summary and Conclusion

In this paper we represent the set of extensional beliefs of an agent as a logical theory, and view the models of the theory as representing possible states of the world that are consistent with the agent's extensional beliefs. The extensional portion of an agent's knowledge base is envisioned as a set of formulas that are not generated via inference from other data but rather by direct observation; the remainder of the KB contains data-independent logical inference rules, data-dependent axioms, and intensional beliefs that are derived from extensional and other intensional beliefs using the agent's axioms and rules of inference. The agent's extensional beliefs may contain incomplete information in the form of disjunctions or null values (attribute values that are known to lie in a certain domain but whose value is currently unknown). We formalize the extensional belief set as an extensional belief theory; formulas in the body of an extensional belief theory may be any sentences without universal quantification.

From time to time an agent will make observations, i.e., produce new extensional beliefs.
This paper sets forth a language and semantics for observations, and an algorithm for incorporating observations into the agent's KB. This Observation Algorithm is proven correct in the sense that the alternative worlds produced under this algorithm are the same as those produced by processing the observation in each alternative world individually. For beliefs and observations without Skolem constants, the Observation Algorithm has the same asymptotic cost as an ordinary complete-information database update, but may increase the size of the KB. For observations involving Skolem constants, the increase in size will be severe if many atomic formulas in the KB unify with those in the observation; if desired, a lazy evaluation technique may be used to control expansion. A simulation program has been constructed for a closed-world version of the Observation Algorithm.

In sum, we have shown that the incorporation of new, possibly contradictory extensional information into a set of extensional beliefs is difficult in itself when either the old or new beliefs involve incomplete information, even when considered in isolation from the problems associated with justification of prior conclusions and inferences from extensional data and the identification of outdated and incorrect axioms. We have produced a polynomial-time algorithm for incorporating extensional observations; however, it is not in general possible to process observations efficiently in extensional belief theories if the observations reference intensional beliefs.

6. Acknowledgments

The results cited in this paper owe much to the encouragement of Christos Papadimitriou and the emphasis on rigor of Moshe Vardi. This paper would not have been possible without the AI perspective and suggestions of David C. Wilkins.

7. References

[Abiteboul 85] S. Abiteboul, G. Grahne, "Update Semantics for Incomplete Databases," Proc. of the VLDB Conf., Stockholm, August 1985.

[DeKleer 85] J.
DeKleer, B. C. Williams, "Diagnosing Multiple, Possibly Intermittent, Faults, Or, Troubleshooting Without Backtracking," Xerox PARC Tech. Report, 1985.

[Doyle 79] J. Doyle, "A Truth Maintenance System," Artificial Intelligence 12, July 1979.

[Fagin 84] R. Fagin, G. M. Kuper, J. D. Ullman, and M. Y. Vardi, "Updating Logical Databases," Proc. of the 3rd ACM PODS, April 1984.

[Fagin 83] R. Fagin, J. D. Ullman, and M. Y. Vardi, "On the Semantics of Updates in Databases," Proc. of the 2nd ACM PODS, April 1983.

[Imielinski 84] T. Imielinski and W. Lipski, "Incomplete Information in Relational Databases," Journal of the ACM 31:4, October 1984.

[Levesque 84] H. Levesque, "Foundations of a Functional Approach to Knowledge Representation," Artificial Intelligence 23, July 1984.

[Reiter 84] R. Reiter, "Towards a Logical Reconstruction of Relational Database Theory," in M. Brodie, J. Mylopoulos, and J. Schmidt (eds.) On Conceptual Modelling, Springer-Verlag, 1984.

[Reiter 85] R. Reiter, "A Theory of Diagnosis from First Principles," University of Toronto Computer Science Dept. Tech. Report No. 86/187, Nov. 1985.

[Weber 86] A. Weber, "Updating Propositional Formulas," Proc. of the Expert Database Systems Conference, Charleston SC, April 1986.

[Winslett 86] M. Winslett-Wilkins, "A Model-Theoretic Approach to Updating Logical Databases," Stanford Computer Science Dept. Tech. Report, Jan. 1986. A preliminary version appeared in Proc. of the 5th ACM PODS, March 1986.

[Winslett 86b] M. Winslett, "Is Belief Revision Harder Than You Thought?," Stanford University Computer Science Dept. Tech. Report, June 1986.
Self-Reference, Knowledge, Belief, and Modality

Donald Perlis
University of Maryland
Department of Computer Science
College Park, Maryland 20742

ABSTRACT

An apparently negative result of Montague has diverted research in formal modalities away from syntactic ("first-order") approaches, encouraging rather weak and semantically complex modal formalisms, especially in representing epistemic notions. We show that, Montague notwithstanding, consistent and straightforward first-order syntactic treatments of modality are possible, especially for belief and knowledge; that the usual modal treatments are on no firmer ground than first-order ones when endowed with self-reference; and that in the latter case there still are remedies.

I. INTRODUCTION

We are in this paper particularly concerned with the concepts of belief and knowledge, in their relation to (and in the avoidance of) self-referential paradox. Let us write Bel(x) and K(x) to indicate that x is believed, resp. known, by an implicit agent g. The syntactic status of x is one of the issues to be addressed. If Bel and K are predicate symbols, then x is an ordinary first-order term which in particular may be the name of a sentence, as in Bel("Snow is white"). On the other hand, if Bel and K are modal operators, then x will be a well-formed formula, as in Bel(Snow is white).

In [21] it was suggested that for an intelligent reasoner g, a self-referential language is desirable in order to represent such notions as that g has a false belief, this itself being a likely belief of g. We may write, for instance, (∃x)(Bel(x) ∧ ¬True(x)). But if this very wff is to be a belief of g, then it too can serve (either in quoted first-order form, or in formula -- modal -- form) as argument within another belief formula.
We contend that this is such a basic aspect of language and thought that any reasonable representational mechanism for commonsense reasoning must include facilities for expression of self-reference and syntactic substitutions. (Note that Rieger [25] calls essentially this same notion referenceability.) We will show that this has significant consequences regarding consistency and modal treatments, in that apparent advantages of the latter over non-modal ("syntactic") ones disappear in the presence of self-reference. A longer version of the present paper [22] contains proofs of theorems.

This research was supported in part by grants from the following institutions: U. S. Army Research Office (DAAG29-85-K-0177) and the Martin Marietta Corporation.

The example of an agent having a false belief is a key one. For it distinguishes between what we might call weak and strong languages. In particular, traditional modal languages with an operator Bel for belief do not ordinarily allow variable operands; this would amount to something like a second-order modal language. This means that having a false belief is not straightforwardly representable in such languages. Of course, the same can be said for traditional first-order languages. Thus traditional logics tend to be weak. However, the obvious remedy of introducing names for formulas, as in "Snow is white" above, leads to familiar problems of inconsistency. While this has been taken by some to mean that epistemic notions such as Bel should be left as modal operators rather than risk inconsistency in a first-order setting with names, the modal approach is on the other hand too weak to accommodate the needs of artificial intelligence (as in the false belief case). Here we will investigate the introduction of names into formal treatments of belief and knowledge, and ways to retain consistency while retaining as well the strong feature of referenceability.

II.
PRELIMINARY RESULTS

We shall call a theory T (over a language L) with mechanisms for expressing and asserting all such substitutions unqualifiedly substitutive. The hallmark of an unqualifiedly substitutive language is that it possesses an operator Sub(P,Q,a,n) directly asserting the result of substituting in an expression P the expression Q for the nth occurrence of the subexpression a. I.e., if P[Q/a,n] is the expression that results from the indicated substitution, then we are requiring Sub(P,Q,a,n) to be provably equivalent to P[Q/a,n]. Note that Sub here is to be an actual symbol (predicate or otherwise) of L, while P[Q/a,n] is a meta-notation denoting some actual expression of L, namely the one resulting from the actual performance of the substitution. Of course, for the above-mentioned equivalence to be meaningful, the substitution must result in a well-formed formula of L.

It turns out that for the applications to be pursued here, only a rather special use of an operator such as Sub is required, namely one in which the substitution of Q for a in P is performed for precisely all occurrences of a in P except the last. Therefore we will write simply Sub(P,Q,a). Contexts will vary slightly in that sometimes all occurrences of a will be identical, sometimes one occurrence will be quoted. We beg the reader's indulgence in sloppily using the same notation for both cases. We also write P[Q/a] for the result of substitution in either case.

From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved.

As will be seen, the asserting of the results of substitutions, i.e., relating the referenced syntactic elements to their intended meanings, runs into paradoxes of self-reference. Firstly, a means of unquoting quoted elements is needed; i.e., of saying formally that "A" carries the meaning A.
This is often represented as defining a truth predicate: True("A") is to tell us that the sentence "A" carries a true meaning, i.e., A. That is, Sub(P,Q,a) can be thought of as consisting of two conceptually distinct aspects: forming the new expression, and asserting it. These we can conveniently distinguish by writing, as a gloss for Sub(P,Q,a), the (perhaps pseudo-) formula True(sub(P,Q,a)), where sub is a function producing (a name for) the expression that the indicated substitution leads to, and True asserts this expression. Again of course this can be meaningful only if the substitution leads to a wff of L.

For precision's sake we offer the following definition: Let T be a first-order theory over a language L containing a 3-place predicate symbol Sub together with the axiom schema

    Sub("P","Q",a) ↔ P[Q/a]

where P[Q/a] is as previously described, for all wffs P and Q and terms a of the language L (which is assumed to contain a constant "A" for each wff A of L). Then T is said to be an unqualifiedly substitutive theory.

Theorem 1: Let T be an unqualifiedly substitutive first-order theory. Then T is inconsistent.

For the proof of this and subsequent results, see the longer version [22].

In [19] and [21] the difficulty of formalizing a truth predicate in first-order languages was circumvented, based on ideas in [6] and [14]. It turns out that this approach can be applied fairly directly as well to the Sub predicate, and leads us to the following result:

Theorem 2: A ("qualifiedly substitutive") first-order theory T formed from extending a consistent theory T' not involving the symbol Sub, by the addition of the (qualified) schema Sub("P","Q",a) ↔ P[Q/a]*, where α* is the result of replacing ¬Sub("P",...) by Sub("¬P",...) in α, is consistent.

What we wish to investigate eventually (section IV) is the extent to which the same result holds for modal theories.
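One illustrative route to the inconsistency of Theorem 1 (our sketch; the paper's proof is in the longer version) is that the "all but last occurrence" convention makes diagonalization immediate. At the raw string level, glossing over the quoting conventions the paper flags, substituting the name of P = ¬Sub(a,a,a) for a in P itself produces a sentence that the unqualified schema makes equivalent to its own negation:

```python
def subst_all_but_last(p, q, a):
    """P[Q/a]: replace every occurrence of a in p by q, except the last."""
    parts = p.split(a)
    if len(parts) <= 2:          # fewer than two occurrences: nothing to do
        return p
    return q.join(parts[:-1]) + a + parts[-1]

P = '~Sub(a,a,a)'                # read ~ as negation
name_of_P = '"~Sub(a,a,a)"'      # the quotation name of P
diag = subst_all_but_last(P, name_of_P, 'a')
print(diag)                      # ~Sub("~Sub(a,a,a)","~Sub(a,a,a)",a)
# By the unqualified schema, the atom Sub("~Sub(a,a,a)","~Sub(a,a,a)",a)
# is then provably equivalent to diag, i.e., to its own negation.
```

The string manipulation is a stand-in for the syntactic operation only; in the theory itself the equivalence is supplied by the Sub schema, and it is precisely this instance that the qualified schema of Theorem 2 blocks.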
First we turn to a question addressed by Montague [16] concerning first-order analogues of certain modal theories.

III. FIRST-ORDER ANALOGUES

There are some solid technical benefits that would accrue from a first-order approach to propositional attitudes; in particular, in the words of Montague [16], "if modal terms [i.e., modal operators] become predicates, they will no longer give rise to non-extensional contexts, and the customary laws of predicate calculus may be employed." Motivated by these concerns, Montague [16] applied this approach to a modality for necessity. That is, writing Nec("A") instead of Nec A he obtained a quotational first-order construction. Montague proposed axioms for such a formulation, in analogy with standard axioms in the corresponding modal treatments. Unfortunately he found these versions to be inconsistent, whereas each corresponding modal operator version is consistent. This seemed to be strong evidence in favor of the modal treatment. However, it appears that the inconsistency Montague uncovered hinges on certain fundamental expressive strengths of quotational first-order languages which are lacking in usual modal languages. That is: first-order logics have richer sets of formulas than have traditional modal logics. For variables allow the formation of (self-referential) wffs that otherwise would not appear in the language, and thus more is being asserted in first-order logic than in the corresponding modal logic. The question then arises: if a modal theory M is made self-referential [i.e., endowed with expression and assertion of substitutions], is it still consistent?

It is of separate interest whether a first-order logic version of a modal logic can be kept suitably "weak" so as not to intrude, via its variables, new kinds of wffs that destroy a faithful match with the modal logic. This has been explored by [des Rivieres & Levesque 26].
Our purposes here are somewhat different, namely, how to represent propositional attitudes in an explicitly self-referential context. Our contention is that apart from a desire to avoid inconsistency, there should be an underlying intuitive model justifying one's axioms, and then presumably whatever underlying intuitive model justifies the use of any particular modal formulation should apply as well to the full first-order formulation, unless that model itself indicates a principled argument to the contrary.

Montague studied several systems related to S5, with the particular aim of changing Nec into a predicate symbol applied to names of formulas. We need not present details of these modal variants in order to state the following variation on a result of his, where we freely adopt the symbol I (for information) in place of Nec. (Note that under an assumption of omniscience, S5 plausibly formalizes the "information" a reasoning agent may have. We for the moment avoid the terms "knowledge" and "belief" in favor of this more neutral expression.)

If T is a first-order theory with function symbols sub and quote of three and one arguments, respectively, and supplied with a term "α" for each wff α as well as axioms defining sub and quote appropriately, i.e., quote(e) = "e" for each constant symbol e, and sub(P,Q,a) = "<P[Q/a]>", i.e., the name of the result of the indicated substitution, then T is first-order self-referential.

Theorem 3: Let T be a first-order self-referential theory having a monadic predicate symbol I and axioms I("α") → α for each closed wff α, and satisfying the condition ⊢ I("α") whenever ⊢ α. Then T is inconsistent.

What does this result tell us? It appears that even a very weak subtheory of S5, when "translated" into a first-order context, goes awry, at least in the presence of substitutivity. But is this reason to think that the modal version is better off?
It is true that S5 (and therefore its subtheories) is consistent. But S5 is not in a substitutive context. So the question arises as to whether modal theories such as S5 remain consistent when augmented with substitution capabilities.

IV. SUBSTITUTIVE MODAL LOGIC

If we endow a modal logic M with the property of substitutivity in the form of an operator Sub(P,Q,a), with the intention that this thereby create suitable conditions for referenceability within such an extended version of M, we have at least two available approaches. We can let P and Q be quoted expressions and Sub a predicate symbol, or we can let P and Q be formulas and Sub another modality.

Let us begin by exploring the first alternative. Since we already know that a first-order unqualifiedly substitutive theory is inconsistent (Theorem 1), then so will be any modal theory M that extends such a first-order theory. Therefore, if we endow S5 with a predicate symbol Sub, we can't allow it the unqualified substitution axioms as well. What then if we use only qualified substitution axioms of the sort known to be consistent in the first-order case? That is, can we extend S5 to include Sub(x,y,z) ↔ True(sub(x,y,z)) together with the consistent treatment of True and sub mentioned earlier, and thereby retain consistency in the modal theory that results? Unfortunately, the following result shows that we cannot.

Theorem 4: If M consists of S5 extended by the qualified Sub predicate with axiom Sub(x,y,z) ↔ True(sub(x,y,z)) and associated axioms for True and sub, then M is inconsistent.

We then consider the second alternative mentioned above, namely, that Sub(P,Q,a) be a modality in which P and Q are formulas. It turns out that even without variable arguments to modalities, contradiction arises.
Specifically, we define T to be an unqualifiedly substitutive modal logic if T has a modality Sub(P,Q,a) and the by now familiar substitution axioms using P[Q/a], where P, Q and a are wffs. That is, Sub(P,Q,a) is equivalent to the result of substituting Q for all but the last occurrence of a in P. (We need not even use names at all, for instead of arbitrary expressions, it suffices to refer to whole formulas.)

Theorem 5: Any unqualifiedly substitutive modal theory is inconsistent.

So S5 itself is inconsistent with either form of self-reference that naturally arises. We now turn to remedies of this situation, hinging on separating the two troublesome features, namely the schema I("α") → α, and the rule for inferring I("α") from α. This will at last split I into the two cases of Bel and K.

V. CONSISTENT FORMALIZATIONS

We suggest (as is fairly common) that Kx means x is among those beliefs of g that are true. It is important to emphasize that K is to be a symbol in g's own language, so that Kx means to g that x is one of its true beliefs, even though in general g cannot identify which these are! That is, in general g can only refer in the abstract to its knowledge (true beliefs). Indeed, all g's beliefs are (by definition) believed by g; as soon as any one is suspected of being false, it is no longer believed. So g cannot isolate its true beliefs from the rest; it simply can refer to them in the abstract, just as it can refer to its entire belief set. In effect, g may believe that (the extension of) K is a proper subset of (the extension of) Bel, but can give no examples of the relative complement! Thus general assertions about K (such as that Kx → x) are part of g's external view of itself, so to speak, comparing its belief set to an unauthenticated outer world of truth, while assertions about particular elements are part of an internal view of Bel relevant to working directly with individual beliefs as things to use in planning and acting.
That is, we are suggesting two postulates, one for individual beliefs (from x infer Bel x), and one for the totality of beliefs and knowledge ((x)(Kx → x)). It is mixing the two that is problematic. A judicious approximation to a mix is however possible, as the following results indicate.

Theorem 6: Let T be any consistent qualifiedly substitutive first-order theory. Then there is a consistent first-order theory Int, which is an extension of T having predicate symbol Bel, and obeying the subsumed rules ⊢ Bel("α") iff ⊢ α.

A still stronger result would seem to arise if we simply formally identify Bel with the predicate Thm via the use of Gödel numbers. This of course requires incorporating a certain amount of number theory into the agent's reasoning, but given the rather powerful assumptions that go into most logics of knowledge (e.g., that all logical consequences of an agent's knowledge are also known to the agent), this seems easy to grant. Moreover, the use of substitution is virtually tantamount to the introduction of a certain amount of arithmetic in any case (see Quine [24]), and we have argued that substitution is an essential feature of commonsense reasoning.

We then are left with the suggestion that a theory along the lines of Int is appropriate for a formalization of belief. It allows for introspection, to the extent that if α is believed (affirmed) then that very fact is also believed, and conversely. But it makes no claim that the totality of its beliefs need be true, even though each separate belief is of course asserted, and hence taken to be true. The strong contrast with a logic of knowledge is shown in the following result, which is based on [6,14,19,21].

Theorem 7: Let T be any consistent qualifiedly substitutive first-order theory.
Then there is a consistent first-order theory Ext, which is an extension of T having predicate symbol K, and axioms K("α") → α for each wff α, and obeying the (subsumed) rule that ⊢ K("α") whenever ⊢ α*, where α* is the result of replacing ¬K("...") by K("¬...") in α.

From α can be inferred Bel("α"), but to infer K("α") more is needed, namely α*, the positive form of α. This can be interpreted in various ways depending on the underlying conceptualization of the formalism: either as the agent's-eye view of the world, or as our own god's-eye view. The longer paper describes more fully the significance of this and of the *; the latter is the critical distinction between K (Ext) and Bel (Int). Note that for most wffs α, α* is α.

As with Int and belief, we suggest Ext as a possibly appropriate formalization of the notion of an agent's knowledge. But contradiction will arise if the agent tries to combine Int and Ext into one theory (i.e., with Bel and K conflated). We hope to have suggested why such a combination is not appropriate. More motivational discussion is provided in the longer paper. Elsewhere [23] we have investigated further the ramifications of the idea that an agent's beliefs are not all true (known), and that a rational agent will believe that.

One more observation is in order. An agent g may reason about both its beliefs and its knowledge, simply by combining the theories Int and Ext, but keeping K and Bel as separate predicates. One can even relate them judiciously, such as by the axiom Kx → Bel x. This we state in the following theorem. The extent to which theories such as S5 can be viewed as "incorporated" within theories such as Omni below is discussed in the longer paper.

Theorem 8: Let T be any consistent qualifiedly substitutive first-order theory.
Then there is a consistent first-order theory Omni, which is an extension of T having predicate symbols Bel and K, the axioms of Int and Ext as in Theorems 6 and 7, and axiom Kx → Bel x.

Note that if we introduce Kx by definition to be Bel(x) & True(x), then we can simply use axioms and rules for True as in [21]. This then provides a slightly sharper version of Theorems 7 and 8, in which for instance K("¬K("α")") may be inferred from [¬Bel("α") ∨ True("¬α")].

VI. CONCLUSIONS

When a formal language is endowed with self-referential capabilities, especially in the presence of unqualifiedly substitutive mechanisms, difficulties of contradiction can easily arise. This holds for modal as well as (pure) first-order logics. However, the features of self-reference and substitutivity appear fundamental to any broad knowledge representation medium. Moreover, when remedies are taken, the modal treatments seem to offer no advantage over the first-order ones, and indeed the latter carry advantages of their own.

One can argue that although an agent g can't know his beliefs to be true, still they might be true by good luck (or by the clever design of the agent's reasoning devices by a godlike artificial intelligencer), and all g's inference rules might be sound as well. But then, if g is an ideal reasoner, wouldn't it be appropriate for g to believe Bel x → x? The odd answer (which we have seen in Theorem 3) is: not if g's beliefs are to be consistent, which of course they must be if they are to be true. This can be seen also as an illicit identification of Bel with K.

The proposed theories Int, Ext, and Omni appear relevant to the study of omniscient reasoning. For limited reasoning, alterations will be needed. Further related work, especially to the latter, includes [1,2,3,4,5,7,8,9,12,15,17,27].
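As a compact recap, restating Theorems 6-8 in the notation used above (nothing beyond what those theorems state):

```latex
\begin{align*}
\textbf{Int:}  \quad & \vdash Bel(\text{``}\alpha\text{''}) \ \text{iff}\ \vdash \alpha \\
\textbf{Ext:}  \quad & K(\text{``}\alpha\text{''}) \rightarrow \alpha, \quad
                       \text{and}\ \vdash K(\text{``}\alpha\text{''})\ \text{whenever}\ \vdash \alpha^{*} \\
\textbf{Omni:} \quad & \text{Int} + \text{Ext} + (Kx \rightarrow Bel\,x)
\end{align*}
```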
ACKNOWLEDGEMENTS

I would like to thank the following individuals for highly motivating discussions that prompted me to carry out the elaboration of the ideas presented herein: Ray Reiter, Nils Nilsson, Jack Minker, Jennifer Drapkin, Michael Miller, Brian Haugh, Kurt Konolige, Dana Nau, and Jim Reggia.

REFERENCES

(1) Drapkin, J. and Perlis, D. Step-logics: an alternative approach to limited reasoning. Proc. Eur. Conf. on Art. Intell., 1986.
(2) Drapkin, J. and Perlis, D. A preliminary excursion into step-logics. Proc. Intl. Symp. on Methodologies for Intell. Systems, 1986.
(3) Eberle, R. A logic of believing, knowing and inferring. Synthese 26 (1974), pp. 356-382.
(4) Fagin, R., Halpern, J., and Vardi, M. A model-theoretic analysis of knowledge. Proc. 25th IEEE Symp. on Foundations of Computer Science, 1984, pp. 268-278.
(5) Fagin, R. and Halpern, J. Belief, awareness, and limited reasoning: preliminary report. IJCAI 85, pp. 491-501.
(6) Gilmore, P. The consistency of partial set theory..., in: T. Jech (ed.) Axiomatic Set Theory. Amer. Math. Soc., 1974.
(7) Halpern, J. and Moses, Y. Towards a theory of knowledge and ignorance. AAAI Workshop on Nonmonotonic Reasoning, 1984.
(8) Halpern, J. and Moses, Y. A guide to the modal logics of knowledge and belief: preliminary draft. IJCAI 85, pp. 480-490.
(9) Hintikka, J. Knowledge and belief. Cornell University Press, 1962.
(10) Hughes, G. and Cresswell, M. An introduction to modal logic. Methuen, 1968.
(11) Israel, D. What's wrong with nonmonotonic logic? Proc. First Annual National Conference on Artificial Intelligence, 1980.
(12) Konolige, K. A computational theory of belief introspection. IJCAI 85, pp. 503-508.
(13) Kripke, S. Semantical analysis of modal logic. Zeitschrift für Mathematische Logik und Grundlagen der Mathematik, 9 (1963), pp. 67-96.
(14) Kripke, S. Outline of a theory of truth. J. Phil., 72 (1975), pp. 690-716.
(15) Levesque, H. A logic of implicit and explicit belief. Proc. 3rd National Conf. on Artificial Intelligence, 1984, pp. 198-202.
(16) Montague, R. Syntactical treatments of modality.... Acta Philos. Fenn. 16 (1963), pp. 153-167.
(17) Moore, R. Reasoning about knowledge and action. IJCAI 77, pp. 223-227.
(18) Partee, B. (ed.) Montague grammars. Academic Press, 1976.
(19) Perlis, D. Language, computation, and reality. Ph.D. thesis, U. of Rochester, 1981.
(20) Perlis, D. Nonmonotonicity and real-time reasoning. AAAI Workshop on Nonmonotonic Reasoning, 1984.
(21) Perlis, D. Languages with self-reference I: foundations. AIJ 25, 1985.
(22) Perlis, D. Languages with self-reference II. U. of Md Tech Report.
(23) Perlis, D. On the consistency of commonsense reasoning. U. of Md Tech Report.
(24) Quine, W. Concatenation as a basis for arithmetic. J. Symb. Logic, 11 (1946).
(25) Rieger, C. Conceptual memory... Ph.D. thesis, Stanford University, 1974.
(27) Vardi, M. A model-theoretic analysis of monotonic knowledge. IJCAI 85, pp. 509-512.
SYSTEM INTEGRATION OF KNOWLEDGE-BASED MAINTENANCE AIDS

Christopher A. Powell, Cynthia K. Pickering, & Keith T. Wescourt
FMC Central Engineering Laboratories
1185 Coleman Ave Box 580
Santa Clara, CA 95052

ABSTRACT

There are many examples of knowledge-based fault diagnosis advisors for corrective maintenance of complex equipment. However, such advisors are only part of an overall maintenance solution. To be used effectively, diagnostic advisors must be integrated with other existing and forthcoming systems, such as Automated Test Equipment and maintenance databases. Successful fielding of knowledge-based systems requires consideration of integration issues throughout the design process.

I INTRODUCTION

There are many examples of knowledge-based fault diagnosis advisors for corrective maintenance of complex equipment. However, such advisors are only part of an overall maintenance solution. To be used effectively, diagnostic advisors must be integrated with other existing and forthcoming systems: maintenance history databases, spare parts inventory databases, Built-In Test (BIT) and Automated Test Equipment (ATE) systems, and other knowledge-based advisors for non-diagnostic maintenance tasks requiring expert knowledge. Therefore, successfully deploying a knowledge-based maintenance advisor requires more than capturing expert diagnostic reasoning. It also involves substantial effort in interfacing the advisor with other physical and information systems to deliver diagnostic advice appropriately within the constraints of the encompassing maintenance support framework.

This paper describes the Mark 45 Fault Diagnosis Advisor (Mark 45 FDA), a prototype knowledge-based advisor for the diagnosis and repair of the Mark 45 Naval Gunmount*.
We will describe the system integration issues and how they are addressed by the Mark 45 FDA. The discussion will cover three topics addressed in the development of the system testbed:

• BIT/ATE integration - the integration of a knowledge-based diagnostic advisor with existing test equipment and the additional diagnostic knowledge it requires

• Interactive media - the use of interactive graphical and text media for delivery of procedural instructions to the user

• Procedure planning - the planning of context-dependent instructions for equipment tests and repairs during fault diagnosis.

We will also discuss other issues not specifically addressed by the current system testbed, but important to maintenance system integration.

* The Mark 45 is a 5-inch 54-caliber gun developed by the FMC Northern Ordnance Division for use on Navy destroyers, frigates, and escort ships.

II THE MARK 45 FAULT DIAGNOSIS ADVISOR

A. Testbed Overview

The Mark 45 FDA testbed hardware contains three major components: a special purpose symbolic computer, a videodisc player, and a desktop personal computer. The symbolic computer is the central computing facility in the group and controls all consultations. The videodisc is used to present supplementary material during consultations, under the control of the symbolic computer and FDA software. The videodisc player outputs an NTSC (standard TV broadcast quality) signal that is input to a low resolution color monitor. The personal computer emulates the abilities of the embedded Mark 45 microprocessor to access sensor data. The symbolic computer communicates with the peripheral components using RS-232 standard serial communications.

The Mark 45 FDA software system (Figure 1) contains the fault diagnosis advisor, a procedure planning system, and a text and video procedures database.

B.
Fault Diagnosis Software Design and Applicability

The Mark 45 FDA was developed as an initial application testbed to construct and refine a generic expert systems software architecture applicable to a family of equipment fault diagnosis problems. The architecture provides inference and control structures that exploit structural and functional features of the equipment family. Our intent was to implement a software framework to facilitate the efficient development of fault diagnosis advisors for members of the family. A more detailed description of the Mark 45 FDA software architecture may be found in (Wescourt, Powell, Pickering & Whitehead, 1986).

From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved.

The Mark 45 FDA framework is implemented using the S.1 expert systems programming language system.* It consists of a "package" or "library" of S.1 source code that includes definitions of object classes, associated attributes, and control blocks (i.e., procedures). The targeted electrical-hydraulic-mechanical (EHM) equipment family includes the Mark 45 and other weapons systems manufactured by FMC. More generally, we believe the software framework can be applied to other material handling/conveyance systems that share design and operating features with the Mark 45.

Members of this EHM equipment family are composed of subassemblies that perform functional subcycles. Each assembly may perform several subcycles. Conversely, a subcycle may involve more than one assembly. The relationships among assemblies and subcycles consist of electrical, hydraulic, and mechanical interlocks.

* Developed and distributed commercially by Teknowledge, Inc. S.1 is a second-generation knowledge-based systems tool, a descendant of EMYCIN and other early tools. The primary language features of S.1 are rules, procedural segments called control blocks, and data objects called class instances and attributes.
S.1 represents factual knowledge using an extension of "object-attribute-value" triples. For example, for the object called "Gunmount", the attribute "Breech-Position" may have the value "Open". In S.1, judgemental knowledge is expressed in "traditional" condition-action rules. Given some input facts, a rule asserts new facts as true. Control blocks are the language structure which allows the expression of control knowledge outside the built-in inference engine, a rule backchainer.

A typical electrical interlock consists of a switch mounted to detect the position of a moving mechanical part within an assembly. When the part moves to a critical position, the signal from the switch is analyzed by electrical/electronic logic. The logic output may terminate or initiate the activation of another assembly. Such interlocks ensure the coordination of the subcycles performed by the assemblies.

The inference structure of the generic architecture decomposes EHM equipment fault diagnosis and repair into four main subproblems. The first determines a fault.cycle whose value is the equipment subcycle directly affected by the fault. The second determines hypotheses that are known problem causes for the fault.cycle. The third determines which of the hypotheses is the cause.of.problem based on inferences from attributes describing equipment-specific tests. The last determines recommended.repair for the cause.of.problem, taking into account the urgency of the situation, the user's skill or certification, and information about the availability of tools and parts. This problem solving model integrates abstract diagnosis concepts (symptoms, hypotheses, and causes) with equipment-generic concepts (operating mode and fault cycle). It contrasts with one that iteratively refines a hypothesis about fault location within the specific physical structure of the equipment.
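The S.1 constructs described above can be mimicked in miniature. The sketch below is our own illustration, not S.1 code, and the attribute names and the rule are invented: it stores object-attribute-value triples and backchains on one condition-action rule.

```python
# Facts as object-attribute-value triples, e.g. ("Gunmount", "Breech-Position") -> "Open".
facts = {("Gunmount", "Breech-Position"): "Open"}

# A rule: if all premise triples hold, assert the conclusion triple.
rules = [
    {"premises": [("Gunmount", "Breech-Position", "Open")],
     "conclusion": ("Gunmount", "Fault-Cycle", "Load-Cycle")},
]

def backchain(obj, attr):
    """Return a value for (obj, attr), trying rules whose conclusion matches."""
    if (obj, attr) in facts:
        return facts[(obj, attr)]
    for rule in rules:
        c_obj, c_attr, c_val = rule["conclusion"]
        if (c_obj, c_attr) == (obj, attr):
            if all(backchain(o, a) == v for o, a, v in rule["premises"]):
                facts[(obj, attr)] = c_val  # cache the derived fact
                return c_val
    return None

print(backchain("Gunmount", "Fault-Cycle"))  # → Load-Cycle
```

A real S.1 knowledge base adds certainty handling and explicit control blocks around such a backchainer; the sketch shows only the core representation and inference shape.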
The problem-solving model for EHM equipment fault diagnosis incorporates an extensive control structure tailored to the inference structure described above. Besides providing explicit control over the sequencing of the four main subproblems in the inference structure, the control structure effects detailed control within the fault.cycle and cause.of.problem subproblems. It also provides a mechanism for handling problems with multiple concomitant failures by applying the inference structure iteratively for the hypotheses, cause.of.problem, and recommended.repair subproblems.

Figure 1. Architecture Diagram

The diagnostic knowledge of the Mark 45 FDA is coded within the generic architecture. Currently the fault diagnosis knowledge provides substantial coverage of 4 of the 14 Mark 45 assemblies, exhibiting expert-level performance in those areas. This part of the system has over 500 rules and identifies over 300 faults.

C. BIT/ATE Integration

During fault diagnosis, experts use BIT/ATE data to assist fault isolation. BIT/ATE is a tool for these experts, providing partial solutions but not complete diagnoses. Therefore, BIT/ATE data access and reasoning are necessary, but not sufficient, for a knowledge-based system to achieve expert diagnostic performance.

Integrated use of BIT/ATE data increases the power, but also the complexity, of a knowledge-based advisor. The amount of raw data available from BIT/ATE is large and using it effectively requires recognizing which data are important. A knowledge-based advisor can capture the experts' use of BIT/ATE data and recognize key data combinations. The knowledge base may also be designed to recognize BIT/ATE inconsistencies, an ability limited to only a few expert troubleshooters. When the BIT/ATE data are inconsistent, the advisor can focus within the BIT/ATE system during fault diagnosis.
The Mark 45 testbed diagnosis system is integrated with simulated test equipment for the Mark 45 control system, giving the diagnosis system access to more than 100 status points monitored by the Mark 45. A simulator was used to generate representative test data, allowing us to develop and fully demonstrate the FDA to BIT/ATE interface. The testbed knowledge base includes expert fault diagnosis and BIT/ATE consistency-checking knowledge.

For the Mark 45 and similar systems, the primary use of BIT/ATE data is isolating faults to equipment subcycle. The Mark 45 FDA rules for subcycle isolation represent the experts' ability to determine the state of the equipment from a small subset of the BIT/ATE data. In addition, the BIT/ATE data is used throughout the fault diagnosis, along with other test and observation data, in attempts to confirm specific possible fault hypotheses. Our experts used system functional design documents to derive this knowledge by tracing details of the system subcycle.

The BIT/ATE data is also tested for consistency, based on expert knowledge of the physical device. Some combinations of BIT/ATE data represent physically impossible configurations of the equipment and indicate failures in the BIT/ATE system. Tests for these physically impossible sensor value pairs were compiled by Mark 45 design engineers and are included in the Mark 45 FDA knowledge base. More complex combinations were discussed with the experts or derived by analysis of Mark 45 functioning.

During a consultation, when rule premises are tested that require BIT/ATE data, they trigger an I/O function that requests data point values from the test equipment. The test equipment responds by transmitting the data through a serial connection from the test equipment to the FDA host computer. The I/O interface is transparent to the rule processor. Thus, rule premises use BIT/ATE data as they would other test point data requested from and supplied by the user.
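The consistency tests described above, which flag physically impossible sensor value pairs, might be sketched as follows. This is our illustration only: the sensor names and the example pair are invented, not actual Mark 45 engineering data.

```python
# Pairs of sensor readings that cannot both hold on the physical device.
# (Invented example; the real pair tables came from the design engineers.)
IMPOSSIBLE_PAIRS = [
    ({"breech_open": True}, {"round_chambered": True}),
]

def bit_ate_consistent(readings: dict) -> bool:
    """Return False if any known-impossible combination appears in the readings."""
    for a, b in IMPOSSIBLE_PAIRS:
        if all(readings.get(k) == v for k, v in a.items()) and \
           all(readings.get(k) == v for k, v in b.items()):
            return False  # BIT/ATE itself is suspect for these points
    return True

print(bit_ate_consistent({"breech_open": True, "round_chambered": True}))  # → False
```

When such a check fails, the advisor can redirect diagnosis toward the test equipment itself, as the text describes.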
The direct interface allows the FDA to obtain and use large amounts of such data without effort by the user. The BIT/ATE reasoning portion of the prototype knowledge base contains over 300 rules and identifies nearly 100 faults. Calculations based on engineering data indicate that this portion of the knowledge base will eventually contain nearly 900 rules identifying over 650 faults.

D. Interactive Media Presentation

Traditional media for presenting maintenance information have inherent problems. Usually, maintenance manuals are large and complex, requiring strong reading and cognitive organizational skills. A single diagnosis problem may require information presented in various, unrelated forms and spanning several volumes. Continual updating of these manuals tends to increase demands on organizational skills. Within the domain of military equipment maintenance, low basic skill levels intensify these problems. In addition, high personnel turnover precludes the use of extensive training as the primary solution to the maintenance performance problem.

Computer systems with multiple interactive media can overcome the deficiencies of traditional media. Program control of the access and presentation of diagnostic information significantly reduces the organizational skills required. Updates to the information may be integrally incorporated with existing information so no additional burden is placed on the user. Computer-based user interaction may require only limited training. Yet, delivering procedural instructions interactively to less experienced technicians enables them to complete more complex and sophisticated procedures with fewer diagnosis and repair errors (Halff, 1984).

The systems architecture of the Mark 45 FDA testbed incorporates several interactive media to support fault diagnosis and repair. The media include videodisc stills and sequences, digitized drawings, and a text description hierarchy.
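One plausible way to organize such a media collection for program-controlled access is an index keyed by procedure and information type. The sketch below is ours; all keys, frame numbers, and detail levels are invented, not drawn from the Mark 45 database.

```python
# Index from (procedure, info_type) to presentation material.
# All entries are hypothetical placeholders.
MEDIA_INDEX = {
    ("check_valve_test", "video"): {"frames": (1040, 1112)},
    ("check_valve_test", "text"):  {"detail_levels": ["overview", "step-by-step"]},
}

def lookup_media(procedure: str, info_type: str):
    """Return the stored media record for this help request, or None."""
    return MEDIA_INDEX.get((procedure, info_type))

print(lookup_media("check_valve_test", "video"))  # → {'frames': (1040, 1112)}
```

A help facility built on such an index can fetch frames or text for the current consultation focus without the technician searching manuals by hand.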
Access to the various media is integrated into a single menu-based "help" facility, available to the user upon request during a diagnostic consultation. Coordination between the diagnostic advisor and media controller is achieved by generating side-effects of advisor actions that associate requested information types, current consultation focus, video sequences, text, and digitized images. Thus, the diagnostic system automatically accesses and displays relevant information on request, eliminating manual information search.

The video material includes schematics, film sequences and stills illustrating diagnostic procedures, settings and results, and repair procedures. Videodisc frame numbers are indexed by diagnostic repair or test description, and information type. Upon request, software functions retrieve the appropriate videodisc frame numbers from the database and operate the videodisc player via a serial interface.

Digitized drawings from existing reference material support detailed instructions, allowing the technicians to use familiar material. When the user requests access to reference material, the advisor locates and presents the information. Drawings are accessed and displayed during disassembly and repair sequences to illustrate the parts associated with the ongoing procedure. The drawings are labelled so that associated text descriptions can refer to individual parts by name.

Text help for tests and observations is available at different degrees of detail suited to users with differing levels of experience. The text help is available in detail levels from "overview" to "step-by-step" and the level of viewing is user-directed. Each troubleshooting test or observation, in all but the most detailed level, has associated text instructions stored in a database. The most detailed level of instructions cannot be represented simply as text in a database. Instead, it is generated by a procedure instructions planner.

E.
Planning Requirements for Procedural Information

A fault diagnosis advisor can advise a user of what tests/observations to perform to diagnose the cause of a fault condition. We have found that, in their present form, advisors do not provide a fully satisfactory capability to advise the user how to perform recommended tests/observations or the recommended repair for a fault. The use of interactive media, described in the previous section, by itself is an incomplete approach. The problem is that FDA designs assume that a specific test/observation, represented by an attribute, is equivalent across contexts. However, we have observed that in the Mark 45 FDA, a given test/observation is sometimes used to diagnose different faults, sometimes in different operating cycle contexts. The detailed procedures for performing many tests/observations are context-dependent: they involve some actions (equipment configuration or disassembly) that may have already been performed (and perhaps later undone) for prior tests/observations. In general then, correct instructions for how to perform a test/observation can vary for the different fault cases where it is required. Therefore, lengthy procedure instructions for recommended actions cannot simply be stored in an associated static definition. Anderson, et al (Anderson, Cramer, Lineberry, Lystad, & Stern, 1984) indicate that to address this problem, a successful intelligent advisor should be viewed as a cooperating ensemble of expert systems and interface mechanisms.

Our approach specifies procedure details as a hierarchical plan. In a dynamic environment, high-level plans may require significant modification during execution depending on situation context. Thus, the successful completion of high-level plans depends on dynamic planning of lower-level details. Our approach uses a dynamic planning subsystem which cooperates with the fault diagnosis advisor to provide procedure details.
We have implemented an interface for accessing supplementary instructions from the existing Mark 45 FDA. We have also developed a prototype subsystem for planning procedure instructions. The planner is a primarily goal-directed system that composes step-by-step procedures for maintenance tasks using a procedural network representation of the diagnosis history. The planner maintains a "record" of the state of the Mark 45 throughout the diagnostic consultation based on its initial state and the effects of test procedures conducted during the consultation. This record is maintained using a static hierarchical representation of Mark 45 components and their possible actions. When a plan is required, the instruction generator builds the procedure using the representation of the device's current state and the structural knowledge of the Mark 45. The planned procedure is composed of the links between the current device state and the device state that satisfies the preconditions of the desired diagnostic test. Figure 2 shows an example of detailed repairs being presented to the user. The repairs were planned by the prototype planning subsystem and are presented using text accompanied by a digitized drawing.

    REMOVE THE PLUG.27.
    IS THE PLUG.27 REMOVED ? (Y or N) No.
    IT IS NECESSARY FOR THE PLUG.27 TO BE REMOVED BEFORE WE CAN CONTINUE.
    DO YOU KNOW HOW TO REMOVE THE PLUG.27? (Y or N) No.
    LOCATE IPB FIGURE 35, SHEET 1, DETAIL Y-Y IN VOL. 7.
    USE A 1/4 INCH ALLEN TO UNSCREW THE PLUG 27.
    REMOVE THE PLUG.27.
    IS THE PLUG.27 REMOVED ? (Y or N) Yes.
    REMOVE THE PACKING.28.
    IS THE PACKING.28 REMOVED ? (Y or N) Yes.
    REMOVE THE CHECK.VALVE.SLEEVE.29.
    IS THE CHECK.VALVE.SLEEVE.29 REMOVED ? (Y or N)

Figure 2. Mark 45 FDA Detailed Repairs Output Screen
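The plug/packing/sleeve sequence of Figure 2 suggests how precondition chaining of this kind can work. Below is a minimal sketch of our own, not the actual planner: each removal step has one prerequisite, and a plan is the chain of unsatisfied prerequisites followed by the goal step.

```python
# Each part's removal requires the listed part to be removed first (None = no prerequisite).
# The chain mirrors the Figure 2 sequence; the structure itself is our simplification.
PREREQUISITE = {
    "PLUG.27": None,
    "PACKING.28": "PLUG.27",
    "CHECK.VALVE.SLEEVE.29": "PACKING.28",
}

def plan_removal(goal: str, already_removed: set) -> list:
    """Chain back through prerequisites, then emit steps in execution order."""
    steps = []
    part = goal
    while part is not None and part not in already_removed:
        steps.append(part)
        part = PREREQUISITE[part]
    return list(reversed(steps))

print(plan_removal("CHECK.VALVE.SLEEVE.29", set()))
# → ['PLUG.27', 'PACKING.28', 'CHECK.VALVE.SLEEVE.29']
```

Because the plan starts from the recorded device state, steps already performed during earlier tests are skipped, which is the context-dependence the paper emphasizes.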
III FURTHER ISSUES

For the Mark 45 FDA and diagnostic advisors for similar systems, there are a number of design issues that require future system expansion to field the maintenance aiding system successfully. Issues include hardware and software delivery environments, and integration with a variety of maintenance logistics systems.

A. Delivery Media

Delivering a diagnostic advisor into the field requires solutions to two fundamental problems. First, the field environment may be hostile toward the hardware required by the advisor. This may be reflected in specific customer requirements, e.g., requiring MILSPEC hardware. Second, environmental restrictions may also require special user interface hardware. For example, space restrictions in the users' environment may require portable, remote user interfaces to access the advisor, or embedding the maintenance aiding system within existing operating and maintenance equipment.

For U.S. military customers, acceptance of software products requires compliance with MIL-STD 2167 and implementation in Ada. However, it is not clear that knowledge-based software can be implemented in Ada so that it is easily maintainable: Ada does not have rule structures. Currently U.S. military customers will accept knowledge-based software products in languages other than Ada. However, it is anticipated that some form of compliance with the standard will be required in the future. We expect that inference control programs will be viewed as applications and will be implemented in Ada. Knowledge bases, however, will be viewed as data and will not be implemented in Ada. Thus, while we expect that advanced knowledge-based system tools will continue to be applicable, their implementation in Ada, and ability to interface to other Ada programs, is essential for military applications.

B.
Integration with Logistics Management Information Systems

Currently, the diagnosis advisor collects data directly from the faulty system and from the user. Yet, our domain experts indicate that in some cases they examine system history data to assist in forming their diagnostic hypotheses. Integration of the advisor with a maintenance history database could allow the automation of expert, history-based, diagnosis reasoning. For example, the frequency of particular faults occurring for a group of the devices could change the order in which hypothesized faults are tested. Similarly, integration with a current parts inventory database could allow the advisor to structure the investigation of faults based on the availability of spare parts. Finally, the diagnostic advisor could maintain records on maintenance actions automatically for use in higher level planning and logistics. For example, an integrated logistics system could anticipate spare-parts needs based on the maintenance history of a group of the devices.

The primary issues for integrating these additional systems are data access and interpretation. Each external data source must represent the data in a form which meets the needs of their primary users. In addition, the data must be represented so that it is accessible and interpretable by the fault diagnosis advisor. Thus, the knowledge-base representation chosen for the diagnostic system must allow access to external data and programs. Further, the representation of external data should be analogous to that of the internal data so reasoning methods are independent of data source.

IV CONCLUSIONS

One focus of our development of the Mark 45 FDA testbed has been the integration of multiple capabilities to produce a complete knowledge-based maintenance aiding system. We have successfully integrated multiple interactive presentation media, existing BIT equipment, and multiple knowledge-based subsystems in a prototype maintenance aid.
We have also considered future issues involving integration of additional subsystems into a comprehensive maintenance aiding and logistics support system to help insure the extensibility of the system architecture. We believe that effective development, fielding, and support of knowledge-based systems requires consideration of these issues throughout the design process.

ACKNOWLEDGEMENTS

We wish to acknowledge J. Dar&h, E. Goodstadt, R. Grommes, G. Harstad, G. G. Harstad, S. Kalpin, D. Whitehead, and C. J. Yi of FMC Northern Ordnance Division for their contributions to the development of the Mark 45 FDA testbed.

REFERENCES

Anderson, B. M., Cramer, N. L., Lineberry, M., Lystad, G. S., & Stern, R. C. Intelligent Automation of Emergency Procedures in Advanced Aircraft. In The 1st Conference on Artificial Intelligence Applications. Los Angeles, CA: IEEE Computer Society Press, 1984.

Halff, H. Overview of Training and Aiding. In Artificial Intelligence in Maintenance: Proceedings of the Joint Services Workshop. Brooks Air Force Base, TX: Air Force Systems Command, Air Force Human Resources Laboratory, 1984.

Wescourt, K., Powell, C., Pickering, C., & Whitehead, D. Generic Expert Systems for Equipment Fault Diagnosis. In Proceedings of the Nineteenth Annual Asilomar Conference on Circuits, Systems, and Computers. Pacific Grove, CA: IEEE Computer Society and the Naval Postgraduate School, In press, 1986.
A VIEWPOINT DISTINCTION IN THE REPRESENTATION OF PROPOSITIONAL ATTITUDES

John A. Barnden
Computer Science Department
Indiana University
Bloomington, Indiana 47405

ABSTRACT

A representation scheme can be used by a cognitive agent as a basis for its normal, inbuilt cognitive processes. Also, a representation scheme can serve as a means for describing cognitive agents, in particular their "mental" states. A scheme can serve this second function either when it is itself naturally used by a cognitive agent (that reasons about agents), or when it is merely an artificial, theoretical tool used by a researcher. In designing a representation scheme one must pay very careful attention to two related questions: the question of whether, for any given agent, the scheme is used by the agent or is used to describe the agent (or both); and the question of whether the scheme is being used as a theoretical tool (as well as, perhaps, being used by agents). I show by example that representational pitfalls can be encountered when these questions are not clearly addressed. The examples revolve around Creary's logic-based scheme and Maida and Shapiro's semantic network scheme, both of which were designed primarily to facilitate the representation of propositional attitudes (beliefs, hopes, desires, etc.). However, the general points have wider application to schemes for propositional attitude representation. By appeal mainly to the Maida and Shapiro case I demonstrate also that it is possible to be misled by the ambiguity of whether "to represent" means "to denote" or "to be an ambassador/representative/abstraction of".

I TWO REPRESENTATIONAL VIEWPOINTS

For the sake of discussion I will view a (declarative) representation scheme as consisting of (a) a specification of a set of possible expressions together with (b) a specification of how expressions either denote (refer to) entities in some world or make assertions about such entities, and (c) rules about how expressions can be manipulated. Item (b) is the "semantics" of the scheme.

It is typical for the semantics of an AI representation scheme to make denoting expressions denote "ordinary" entities such as people, blocks, places, telephone-numbers, etc., and to take the assertions made by assertional expressions to be about such things. The scheme is (typically) viewed as instantiated in one or more cognitive agents that use the scheme in order to achieve their goals. This means in part that the scheme's expressions are abstractions from the internal, "mental" nature of each such cognitive agent X. A given expression, which might denote the person John for example, is an abstraction from a something-or-other internal to X. Now let us assume, just for definiteness, that this something-or-other is X's concept of John, or at least part of it. Then this concept (or "intension") is deemed to have the person John as its "extension".

Notice the three things we have here: the representational expression, X's concept of John, and the person John. AI researchers are accustomed to saying that the expression "represents" John, where the notion of representing is that of "denoting" - in the sense in which a term in a logic scheme denotes an entity according to some interpretation. However, it is also possible to consistently maintain that the expression "represents" X's John-concept, provided we realize that the notion of representation here is akin to "being a representative or ambassador of", "being a theoretical handle on" or "being an abstraction of" rather than to the notion of denoting (or referring to).

Of course, it is possible for a representation scheme's expressions to denote concepts, or, more generally, entities within or aspects of an agent Y's mental make-up. (Concomitantly, some expressions would make assertions about such things.) Important examples of schemes with this property are the concept-denoting schemes of McCarthy (1979) and Creary (1979).
This sort of representation of concepts by expressions must be carefully distinguished from the ambassadorial/abstractional sort. Indeed, the scheme might be being used by an agent X, so that the scheme's concept-denoting expressions ambassadorially represent concepts (or other mental entities) in X but denote concepts in Y. (Extra difficulty arises when Y is actually X; but the two types of representation must still be distinguished.) Equally, the scheme could be in use by researchers as a theoretical tool for describing a domain that includes agent Y, and not used by any agent as part of its normal processing. Some expressions in the scheme then denote concepts, but none ambassadorially represent any.

The notion of ambassadorial representation applies also to assertional expressions in a scheme instantiated in an agent X. Such an expression abstracts from some assertional mental something-or-other in X. The expression does not of course usually assert anything about that something-or-other.

Part of the purpose of the paper is to demonstrate the sort of problems that can arise if the distinction between denotation and ambassadorial representation is not properly maintained (although of course this distinction is not new in itself). But another major, related goal is to show the importance of distinguishing between the following two possible views of a representation scheme:

(A) as something that is used by an agent as a basis for its normal cognitive processing;

(B) as a means for describing (making assertions about and/or denoting aspects of) agents, especially in the case when the scheme is used as an artificial, theoretical tool by a researcher.

On the way, I will demonstrate some representational infelicities that do not seem to have been noticed before.

The main discussion will centre on attempts to get representation schemes to embody information about "propositional attitudes" (beliefs, desires, hopes, etc.).
This should be no surprise, in view of our concern with description of agents. The study can be seen as a contribution to our appreciation of the complex subtleties inherent in the topic of propositional attitudes. The points I will make are closely connected with observations made in the past by other people, but there will not be space to trace the connections. See (Barnden, 1986) for further discussion.

From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved.

My main examples of difficulties arising in representation schemes come from the work of Maida & Shapiro (1982) and of Creary (1979). Before going on to this, it is worth pointing out one or two preliminary examples of how the ambiguity of "to represent" has appeared in the literature.

A first instance is Brachman's writings on KL-ONE (Brachman, 1979; Brachman & Schmolze, 1985). The KL-ONE scheme is not focused on propositional attitudes. However, in its sophisticated treatment of conceptual structures it shows promise of being a good candidate for application to the subtle and complex issues raised by propositional attitudes, and Brachman & Schmolze (1985) state that the context mechanism should prove useful for propositional attitude representation. It is therefore with some alarm that in a paper such as (Brachman 1979) we find, at best, severe unclarity of presentation. On pp. 34/5 of (Brachman 1979) we are told in one breath that "Concepts" (which, note, are formal objects in the representation scheme) represent intensional objects, not "extensional (world) objects", while in another we are told that Concepts represent objects, attributes and relationships of the domain being modelled; also, we are told, say, that the ARC-DE-TRIOMPHE Individual Concept "denotes" the real Arc de Triomphe.
Analysis of the text is difficult, because it may be that Brachman takes "denote" to mean something different from "represent", and may be using more than one meaning for "represent" alone. However, since KL-ONE seems to have been partly intended for use by AI programs, one is justified in suspecting that Brachman is (in the cited paper) unclear as to whether his representational expressions denote concepts or ambassadorially represent them. This feeling is reinforced by the use of the word "Concept" as a name for a representational object that is claimed to represent (denote?), rather than to be (an abstraction of), a concept or "intensional object". The recent description of KL-ONE in Brachman & Schmolze (1985) also uses the term "represent" ambiguously in places.

The noted ambiguity is analogous to one that creeps into Johnson-Laird's work on mental models (Johnson-Laird, 1983). His theory is based on the idea that our minds manipulate both "propositional representations" (natural-language-sentence-like structures, which denote and assert in familiar ways) and structures called "mental models". Johnson-Laird talks much of the time as if mental models are models of propositional representations - using the term "models" in something akin to the model-theoretic sense of logic. The components of models are thus internal objects that are, presumably, to be considered as denoted by terms in the propositional representations. At the same time, Johnson-Laird often talks of mental models as "representing" states of affairs in the outside world. [For just one example, see Johnson-Laird, 1983: p.419.] Now, it seems most consistent with his general viewpoint to take him to mean that mental models are abstractions from outside-world states of affairs, and thus to "ambassadorially" represent them in a sense; rather than that mental models denote, and make assertions about, outside-world entities.
However, this view is somewhat at odds with his (more tentative) suggestions that mental models could contain propositional-style elements, such as numerical tokens. The point is that such a suggestion seems to imply that a model can be partially denotative, as well as ambassadorial. This mixing may well be what is needed - but the point is that the issues should be made clear, and the various notions of representation in use should be properly distinguished.

II THE MAIDA AND SHAPIRO SYSTEM

Here I demonstrate some difficulties in the semantic net scheme of (Maida & Shapiro 1982). [See also Rapaport & Shapiro (1984).] Maida and Shapiro's proposal is in the tradition of explicitly bringing in concept-like intensions as a basis for the representation of propositional attitudes. Other proposals in this general line are McCarthy (1979), Creary (1979) and Barnden (1983). My objections to the Maida and Shapiro scheme are probably not fatal; but it is precisely because of the scheme's importance and promise that it is worthwhile to point out types of theoretical inelegance and pragmatic awkwardness that could have been avoided.

Maida and Shapiro place much emphasis on the idea that the nodes in their networks do not follow the usual line of representing "extensions", by which they mean "ordinary" sorts of entities like people, numbers, etc.; rather, nodes "represent" intensions. Intensions can be, for instance, concepts like that of Mike's telephone number as such (where the "as such" indicates that the concept in some sense includes the characterization of the number as the telephone number of Mike), or a propositional concept like that of Bill dialling Mike's telephone number. The question now is: which of the two types of representation are they appealing to? If their John node merely represents X's John-concept in the ambassadorial sense, then it denotes John.
But Maida & Shapiro (1982: p.296) say explicitly that they are departing from the idea of nodes denoting ordinary objects like John. So, we have to assume that they mean that the John node actually denotes the John concept via the scheme's semantic function, and does not denote John himself - more generally, that the universe of discourse of their scheme is just the world of their chosen agent X's concepts, so that the semantic function of the scheme maps nodes to such concepts.

On the other hand, Maida and Shapiro want their networks to be used by artificial cognitive agents, not just to serve as a theoretical tool, and there are strong signs that Maida and Shapiro really think to an important extent of their network nodes as ambassadorially representing intensions. On pp. 300/301 of (Maida & Shapiro 1982) we read about a hypothetical robot in which there are connections between network nodes and sensors and effectors. On p.319 of that paper, nodes are talked about as if they are in cognitive agents rather than just being items in a theory about the cognitive agents. To give the authors the benefit of the doubt we could say that they are, merely, loosely talking about nodes when what they really mean to talk about is the mental entities denoted by those nodes. If so, we might have expected an explicit caveat to this effect.

Maida and Shapiro's use of the verb "to model" is also suspect. They say in (1982: p.296) that they want their networks to model the belief structures of cognitive agents. It is this desire that is the motivation for their adopting the "intensional" view whereby nodes represent intensions. They failed to notice that a conventional network that ambassadorially represents agent X, but denotes ordinary things in the world, does model the belief structure of X. Such a network models the agent X ambassadorially while modelling the agent's world in the denotational sense.
We now go on to see what problems arise in the Maida & Shapiro scheme that are linked to the ambassadorial/denotational confusion and to the distinctions between views (A) and (B) in Section I. The treatment here is necessarily brief. Further details can be found in (Barnden, 1985).

The proposition that John dials Mary's telephone-number could appear in a Maida & Shapiro network in the way shown in Figure 1. The network is associated with a particular cognitive agent X ("the system"). (For convenience, I simplify the form of the networks in harmless ways, and adopt a diagrammatic notation slightly different from the one used by Maida and Shapiro.) The John, Mary, dial and tel-num nodes in Figure 1 denote X's concept of John, X's concept of Mary, X's concept of dialling, and X's concept of the telephone-number relation. The "head" node D denotes the proposition, which is itself a concept or intension, and has a truth value as its extension. The node MTN denotes the concept of Mike's telephone number as such.

A. Non-Uniform Dereferencing

Our first difficulty is to do with the semantics of the network fragment in Figure 2, which shows the way the proposition that Bill believes that John is taller than Mary could appear. Rather than trying to appeal to a precise semantics, I shall use an informal, simplified approach. The proposition, denoted by the node B in Figure 2, states a belief relationship between two entities. We ask: what sort of entities are they? One is a person and the other is a proposition. Notice here that the latter entity is just the concept denoted by the TJM node, whereas the former is the extension of the concept denoted by the Bill node.

[Figures 1-4: network diagrams, with node labels John, dial, MTN, tel-num, Mary (Figure 1); Bill, believe (Figure 2); Mike, fav-prop, MFP (Figures 3-4).]

Thus, in determining what a proposition denoted by a node states, we sometimes "dereference" the concepts denoted by argument nodes and sometimes we do not.
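The conditional dereferencing that this imposes on any process reading the network can be made concrete in a toy sketch. This is my own mini-model, not Maida and Shapiro's actual system: the `Concept` class and the `"deref"`/`"no-deref"` markers are illustrative assumptions.

```python
# A toy model of non-uniform dereferencing. Every node denotes one of
# agent X's concepts; a concept may have an "extension" (what it is a
# concept *of*).
from dataclasses import dataclass

@dataclass
class Concept:
    description: str          # how X characterizes the entity
    extension: object = None  # the entity the concept is a concept of

# X's concept of the person Bill, and X's propositional concept TJM.
bill = Concept("Bill", extension="the person Bill")
tjm = Concept("that John is taller than Mary")

# Argument positions marked "deref" must be replaced by their extensions
# when reading off what a proposition states; "no-deref" positions
# (e.g. the object of "believe") must not be.
BELIEVE_SIG = ("deref", "no-deref")

def state_of(relation, args, signature):
    """Spell out what a proposition node states, dereferencing only
    the argument positions the signature says to."""
    out = []
    for concept, mode in zip(args, signature):
        out.append(concept.extension if mode == "deref" else concept.description)
    return f"{relation}({', '.join(map(str, out))})"

print(state_of("believe", [bill, tjm], BELIEVE_SIG))
# believe(the person Bill, that John is taller than Mary)
```

The point of the sketch is that `state_of` cannot treat its arguments uniformly: it needs a per-position signature, which is exactly the complication discussed below.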
This non-uniform dereferencing counts as a theoretical and practical drawback. Theoretical, simply because it is a complication, and it has no analogue in some other powerful concept-based propositional attitude representation schemes, such as that of Creary (1979). Practical, because it forces processing mechanisms that act on the networks to be aware of the need to dereference in some cases but not in others. An example of such a mechanism might be a system that translates network fragments into natural-language statements. In the Figure 2 example, we do not want the language generator coming out with a statement to the effect that a concept of Bill believes something, or to the effect that Bill believes some truth-value (the extension of the TJM proposition).

Remember that, despite their view of their networks as denoting conceptual structures in agent X, Maida and Shapiro also seem to propose that an AI program could use their networks in dealing with the world. Such a program, then, might need the language-generating mechanism of the previous paragraph.

B. Descriptions of Propositions

The complication of having certain argument positions of certain relationships be of type "do-not-dereference" is unwelcome, but perhaps tolerable. However, consider now the proposition that Mike's favourite proposition is more complex than Kevin's favourite proposition. This sort of example, where there are definite descriptions of propositions rather than explicit displays of them, is not considered in (Maida & Shapiro 1982), nor indeed in most studies of propositional attitudes, whether in Philosophy or AI. I am merely using the "favourite proposition" function as a way of generating simple examples. Other sorts of mundane descriptions of propositions would be more important in practice (e.g. the description "the belief he expressed yesterday").

Figure 3 shows the network structure that would presumably be used.
It is essential to realize here that MFP does not denote Mike's favourite proposition, but rather the concept of Mike's favourite proposition as such. This is by analogy with node MTN of Figure 1. Thus, MFP denotes a concept whose extension is a (propositional) concept. Then, in saying what the proposition denoted by node MC in Figure 3 is about, we must dereference MFP and KFP. The relation "more-complex-than" is like the relations "taller-than" and "to dial" in that it does not have do-not-deref argument-positions.

The Figure 3 example does not in itself cause difficulty; but what are we to make of the task of representing the proposition that Bill believes Mike's favourite proposition? Suppose we were to use the structure shown in Figure 4. In saying what the proposition denoted by node B states, we do now have to dereference the concept denoted by the ARG2 node (i.e. MFP) in the belief structure, in contrast to the case of Figure 2. For, it isn't that Bill believes the concept of Mike's favourite proposition; rather, he believes that proposition itself. The simple suggestion of having do-not-deref argument positions is thus inadequate.

An alternative technique that would cope with the present problem, as well as with the Figure 2 difficulty, is to refrain from dereferencing just those concepts denoted by "proposition head nodes" - nodes that send out REL arcs. This rule specifies that the MFP and KFP concepts in Figs. 3 and 4 should be dereferenced, but not the TJM concept in Figure 2. But we now get into trouble with the proposition that Kevin's favourite proposition is that John is taller than Mary. The structure in Figure 5 is not satisfactory.
The problem is that TJM denotes the proposition that John is taller than Mary, but equally, by analogy with the Figure 3 example, it should denote a concept of some proposition, because it is the ARG2 node of a favourite-proposition-of predication.

[Figures 5-6: network diagrams, with node labels John, taller, Mary, fav-prop, Kevin.]

In Barnden (1985) I comment on the difficulties that would arise in an attempt to avoid the above problems by deploying an explicit dereferencing operator, or by having FPK's ARG2 node (see Figure 5) go to a node different from TJM but related to it by the Maida and Shapiro EQUIV facility.

C. The Source of the Difficulties

Although Maida and Shapiro tried to design a scheme in which the denoting expressions always denote concepts, they were strongly though unintentionally influenced by the more typical idea of a scheme that ambassadorially represents concepts. They correctly realized that a representation scheme able to cope with the representation of agents' propositional attitudes can be achieved by having representational items that denote intensions (concepts with the form, for instance, of propositions and definite descriptions). The same basic idea has appeared in other systems, such as the predicate-logic schemes of McCarthy (1979) and Creary (1979). However, each of these other schemes was explicitly proposed for use by a cognitive agent, with the denoting items in the scheme denoting people, numbers and so on as well as intensions.

Assume in fact that we were to design a semantic-network scheme along these more conventional lines. Suppose that the agent X using the scheme is entertaining the proposition that Bill believes that John is taller than Mary. Agent X could then contain a cognitive structure P whose semantic network formulation or abstraction is depicted in Figure 6.
Notice carefully that the node labelled N_Bill denotes Bill himself now, not an X-concept of him, so that it is unlike the Bill nodes in previous figures. Equally, node N_believe denotes the relation of believing itself, not an X-concept of it. On the other hand, the node N_T is akin to a Maida and Shapiro proposition node in that it denotes the proposition T that John is taller than Mary. I leave other details vague, in particular the means by which node N_T denotes T and the semantics of the topmost node in the figure.

Let us assume, then, that nodes N_Bill and N_T are the abstractions from some cognitive structures C_Bill and C_T in agent X, where we say that C_Bill and C_T are concepts that X has of the person Bill and the proposition T.

Now suppose that we choose to design a network scheme for describing agents such as X. Then we need a node N'_Bill to denote X's concept C_Bill. This node is therefore similar to the person nodes that we have seen in examples of the Maida and Shapiro system. But, similarly, we also need a node N'_T to denote X's concept C_T of the proposition T. This node is not like the Maida and Shapiro node TJM in Figure 5. Node TJM denotes the proposition T itself, whereas our N'_T denotes C_T.

If Maida and Shapiro wanted to build a scheme whose nodes denoted only intensions, they should perhaps have built a scheme containing nodes like N'_T. That they did not can be explained by supposing that what they really had at the back of their minds in considering propositions like "Bill believes that John is taller than Mary" was an ambassadorial representation structure such as the one in Figure 6. The trouble is that in adopting their intensional view they "lifted Bill up by one intensional level" [they introduced a node denoting an intension for Bill] but failed to lift proposition T up by an intensional level [introduce a node denoting an intension for that proposition].
It is the systematic confusion of U with C_U, for propositions U, that is at the root of the difficulties for Maida and Shapiro. We can see this by supposing that in Figure 2 node TJM is now taken to denote a concept of the proposition T that John is taller than Mary, rather than T itself. Then in determining what the structure in the figure is saying we have to dereference both argument nodes of node B, not just one. This dereferencing in belief subnets then removes the difficulties we encountered with the example of Figure 4.

III SOME SUBTLE IMPUTATIONS

Creary (1979) has proposed a neo-Fregean way of using logic to represent propositional attitudes. The proposal is a development of one by McCarthy (1979). In the intended interpretation of Creary's system, terms can denote propositional and descriptional concepts as well as "ordinary" things.

Assume that agent X is using Creary's scheme as a representation medium and translates inputted English sentences into expressions in the scheme. Then X will systematically impute probably-incorrect conceptual structures to other agents, in a subtle way. Suppose X receives the sentence

((S1)) Mike believes that Jim's wife is clever.

The simplest Creary rendering of this sentence (reading it in a "de-dicto" fashion) is

((C1)) believe(mike, Clever(Wife(Jim))).

The symbols jim and mike denote the people Jim and Mike. The symbol Jim denotes a particular (apparently standard) concept of the person Jim. The symbol Wife denotes a function that when applied to some concept c of some entity delivers the (descriptional) concept of "the wife of [that entity as characterized by c]" as such. Thus the term Wife(Jim) denotes a complex, descriptional concept. The symbol Clever denotes a function that when applied to some concept c of some entity delivers the propositional concept of "[that entity, as characterized by c] being clever", as such. The believe predicate is applied to a person and to a propositional concept.
Note that the intuition underlying the use of the Clever and Wife functions in the formula is that Mike's belief is in some sense couched in a direct way in terms of wife-ness and cleverness. Notice on the other hand that the formula embodies no claim that Mike conjures with entities representing the functions denoted by the Creary symbols Clever and Wife. The Clever and Wife functions are (so far at least) merely tools the agent X uses to "mentally discuss" Mike.

However, suppose now that X inputs the sentence

((S2)) George believes that Mike believes that Jim's wife is clever.

A major Creary interpretation of (S2) is

((C2)) believe(george, Bel(Mike, Clever$(Wife$(Jim$)))).

Intuitively, this says that George has in his belief space something that says what (C1) says. Our point hinges on the nature of the concepts denoted by the second-order concept terms in the new formula. The symbol Jim$ denotes a (standard) concept of the concept of Jim. The symbol Bel denotes the function that, when applied to a concept of a person and a concept of a propositional concept, delivers the propositional concept of that person (so characterized) believing that proposition (so characterized). The term Wife$(Jim$) denotes a concept of the concept denoted by Wife(Jim). In fact, the concept denoted by Wife$(Jim$) explicitly involves the idea behind the concept-construction function that is denoted by Wife, just as the concept denoted by Wife(Jim) explicitly involves the (wifeness) idea behind the wife-delivering function denoted by Wife. Similarly, the term Clever$(Wife$(Jim$)) denotes a concept C of the propositional concept denoted by Clever(Wife(Jim)), where C explicitly involves the idea behind the concept-construction function that is denoted by Clever.

The trouble then arising is that we must conclude that, intuitively, (C2) conveys that George's belief is couched in terms of the Clever and Wife concept-construction functions.
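The lifting that takes the argument of believe in (C1) to the argument of Bel in (C2) can be made mechanical. The following sketch is my own formalization, not Creary's implementation: the tuple encoding and the `lift` function are illustrative assumptions.

```python
# Creary-style concept terms encoded as nested tuples.  Wife and Clever
# map a concept term to a descriptional or propositional concept term;
# lift raises a whole term by one concept level, as in Creary's $-notation.

def Wife(c):
    return ("Wife", c)    # concept of "the wife of [c's entity]" as such

def Clever(c):
    return ("Clever", c)  # propositional concept "[c's entity] is clever"

def lift(term):
    """Lift a concept term one level: each constructor F becomes F$,
    each atomic concept symbol gets a '$' appended."""
    if isinstance(term, tuple):
        head, arg = term
        return (head + "$", lift(arg))
    return term + "$"

c1_arg = Clever(Wife("Jim"))  # the propositional-concept argument in (C1)
c2_arg = lift(c1_arg)         # the corresponding argument of Bel in (C2)
print(c2_arg)
# ('Clever$', ('Wife$', 'Jim$'))
```

The lifted term literally mentions Clever$ and Wife$: the textual counterpart of the observation that (C2) builds the construction functions themselves into the concept ascribed to George.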
This is analogous to (C1) conveying that Mike's belief is couched in terms of the wife-of function and cleverness property. We now see that (C2) is a deviant interpretation of (S2), whereas Creary makes it out to be a major plausible possibility. (C2) is deviant because it takes George to be conjuring with concept-construction functions that no-one except a theoretician (e.g. Creary) could normally be expected to conjure with.

It is convenient to sum up this phenomenon as being an imputation, to cognitive agents, of features of X's particular method of describing cognitive agents. X's method uses concept-construction functions like Clever and Wife; and X, probably incorrectly, imputes the use of these functions to other cognitive agents. This observation about Creary's concept-construction functions was implicit in a primitive form in (Barnden, 1983). The imputation problem in Creary's system could have practical importance, albeit at a sophisticated level, as I show in (Barnden, 1986). I demonstrate there that analogous imputation issues crop up in many types of scheme - for instance, quotational schemes somewhat on the lines of (Burdick 1982, Perlis 1985, Quine 1981), and the situation-semantics scheme of Barwise & Perry (1983), if this is naturally extended to deal with nested attitudes.

The above comments have had to be very brief. The matter is portrayed in much greater detail and in a wider context in (Barnden 1986). I hope the comments made are enough to suggest that the importance of the imputations varies greatly according to whether or not the representation scheme at hand is meant to be used as a theoretical tool for the description of propositional attitudes.
If this is not the intention - so that the scheme is instead just meant to be used by a cognitive agent as a base for its normal cognitive processing - then the imputations may be tolerable, since the inaccuracy of representation that they embody may not prevent the agent reacting appropriately to its environment in most cases. That is, the representation scheme could still be adequate to a heuristically acceptable degree. If, however, the representation scheme is meant to be used as a theoretical tool, then the theorist should be very aware that the scheme is introducing imputations, or, if you like, is slipping in important theoretical assumptions about the psychology of agents by a back door. These assumptions may be over and above the ones the theorist was aware of making.

IV CONCLUSION

To tie everything together: we saw that the importance of imputations inherent in a scheme depends greatly on whether the scheme is merely meant to be used by agents or is meant to be a theoretical tool for describing agents; and the critique of the Maida and Shapiro system shows that an intended view of a scheme as describing an agent (perhaps because the scheme is being used as a theoretical tool) can be unwittingly and deleteriously affected by a view of the scheme as being used by the agent. The considerations of this paper have led me to devise a propositional-attitude representation scheme that is relatively free of imputation difficulties. A preliminary sketch is to be found in (Barnden, in press).

REFERENCES

Barnden, J.A. "Intensions As Such: An Outline." In Procs. 8th Int. Joint Conf. on Artificial Intelligence. Karlsruhe, W. Germany, August 1983.

Barnden, J.A. "Representations of Intensions, Representations as Intensions, and Propositional Attitudes." Tech. Rep. 172, Computer Science Dept., Indiana University, Bloomington, Indiana, June 1985.
Barnden, J.A. "Imputations and Explications: Representational Problems in Treatments of Propositional Attitudes." To appear in Cognitive Science. Tech. Rep. 187 (revised), Computer Science Dept., Indiana University, Bloomington, Indiana, April 1986.

Barnden, J.A. "Interpreting Propositional Attitude Reports: Towards Greater Freedom and Control." To appear in Procs. 7th European Conf. on Art. Int., July 1986. In press.

Barwise, J., & Perry, J. Situations and Attitudes. Cambridge, Mass.: MIT Press, 1983.

Brachman, R.J. "On the Epistemological Status of Semantic Networks." In N.V. Findler (ed.), Associative Networks. New York: Academic Press, 1979.

Brachman, R.J. & J.G. Schmolze. "An Overview of the KL-ONE Knowledge Representation System." Cognitive Science 9:2 (1985) 171-216.

Burdick, H. "A Logical Form for the Propositional Attitudes." Synthese 52 (1982) 185-230.

Creary, L.G. "Propositional Attitudes: Fregean Representation and Simulative Reasoning." In Procs. 6th Int. Joint Conf. on Artificial Intelligence, Tokyo, Japan, August 1979.

Johnson-Laird, P.N. Mental Models. Cambridge, Mass.: Harvard Univ. Press, 1983.

McCarthy, J. "First Order Theories of Individual Concepts and Propositions." In J.E. Hayes, D. Michie & L.I. Mikulich (Eds.), Machine Intelligence 9. Chichester, England: Ellis Horwood, 1979.

Maida, A.S. & S.C. Shapiro. "Intensional Concepts in Propositional Semantic Networks." Cognitive Science 6:4 (1982) 291-330.

Perlis, D. "Languages with Self-Reference I: Foundations." Artificial Intelligence 25 (1985) 301-322.

Quine, W.V. "Intensions Revisited." In W.V. Quine, Theories and Things. Cambridge, Mass.: Harvard U. Press, 1981.

Rapaport, W.J., & S.C. Shapiro. "Quasi-Indexical Reference in Propositional Semantic Networks." In Procs. 10th Int. Conf. on Computational Linguistics, Stanford Univ., USA, 1984.
POINTWISE CIRCUMSCRIPTION: PRELIMINARY REPORT

Vladimir Lifschitz
Department of Computer Science
Stanford University
Stanford, CA 94305

Abstract

Circumscription is the minimization of predicates subject to restrictions expressed by predicate formulas. We propose a modified notion of circumscription so that, instead of being a single minimality condition, it becomes an "infinite conjunction" of "local" minimality conditions; each of these conditions expresses the impossibility of changing the value of a predicate from true to false at one point. We argue that this "pointwise" circumscription is conceptually simpler than the traditional "global" approach and, at the same time, leads to generalizations with the additional flexibility needed in applications to the theory of commonsense reasoning.

1. Introduction

Circumscription (McCarthy 1980, 1986) is logical minimization, that is, the minimization of predicates subject to restrictions expressed by predicate formulas.

The interpretation of a predicate symbol in a model can be described in two ways. One is to represent a k-ary predicate by a subset of U^k, where U is the universe of the model. This approach identifies a predicate with its extension. The other possibility is to represent a predicate by a Boolean-valued function on U^k. These two approaches are, of course, mathematically equivalent; but the intuitions behind them are somewhat different, and they suggest different views on what "minimizing a predicate" might mean.

If a predicate is a set then predicates are ordered by set inclusion, and it is natural to understand the minimality of a predicate as minimality relative to this order. A smaller predicate is a stronger predicate. A predicate satisfying a given condition is minimal if it cannot be made stronger without violating the condition. This understanding of minimality leads to the usual definition of circumscription.
This research was partially supported by DARPA under Contract N00039-82-C-0250.

Let us accept now the view of predicates as Boolean-valued functions, or, in other words, as families of truth values. Each predicate is a family of elements of the ordered set {false, true}. Understanding "smaller" as "stronger" still makes sense; but now we can also think of making a predicate smaller at a point ξ ∈ U^k as changing its value at that point from true to false. As far as the values at other points are concerned, we can require, in the simplest case, that they remain the same; or we can allow them to change in an arbitrary way; or some of them can be required to remain fixed, and the others allowed to vary.

The new definition of circumscription proposed in this paper expresses, intuitively, the minimality of a predicate "at every point". It can be interpreted as an "infinite conjunction" of "local" minimality conditions; each of these conditions expresses the impossibility of changing the value of a predicate from true to false at one point. (Formally, this "infinite conjunction" will be represented by a universal quantifier.)

We argue that this "pointwise" approach to circumscription is in some ways conceptually simpler than the traditional "global" approach and, at the same time, leads to generalizations with the additional flexibility needed in applications to the theory of commonsense reasoning. Proofs of the mathematical facts stated below will be published in the full paper.

2. The Basic Case of Pointwise Circumscription

Let us start with the simplest case of circumscribing one predicate with all other non-logical constants treated as parameters. Let A(P) be a sentence containing a predicate constant P. Recall that the (global) circumscription of P in A(P) is, by definition, the second-order formula

    A(P) ∧ ∀p¬(A(p) ∧ p < P).    (1)
Here p is a predicate variable of the same arity as P, and p < P stands for

    ∀x(px ⊃ Px) ∧ ¬∀x(Px ⊃ px)

(x is a tuple of object variables). We denote (1) by Circum(A(P); P).

From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved.

The pointwise circumscription of P in A(P) is

    A(P) ∧ ∀x¬[Px ∧ A(λy(Py ∧ x ≠ y))].    (2)

Notice that this is a first-order formula. We denote (2) by C_P(A(P)). A model of (2) is a model of A(P) which cannot be transformed into another model of A(P) by changing the value of P from true to false at one point. The quantifier ∀x represents the "infinite conjunction" mentioned above, and the formula following the quantifier can be viewed as a minimality condition: it asserts the minimality of the value of P at point x.

It is easy to check that Circum(A; P) implies C_P(A). If we assume that all occurrences of P in A(P) are positive then these two formulas are equivalent. This special case is important, because in standard applications of circumscription (McCarthy 1986) the minimized predicates usually have no negative occurrences in the axioms.

To illustrate the difference between (1) and (2), take A to be Pa ≡ Pb. Any model with P identically false satisfies both Circum(A; P) and C_P(A). In addition, any model of A in which P is true at exactly two points, a and b, is a model of the pointwise version, but not of the global one.

3. A Generalization

Even in simple applications to formalizing commonsense knowledge we usually need forms of circumscription slightly more general than those defined in the previous section.

Let us start with a formula A(P, Z), where Z is a tuple of predicate and/or function constants. The (global) circumscription of P in A(P, Z) with Z allowed to vary is

    A(P, Z) ∧ ∀pz¬(A(p, z) ∧ p < P),    (3)

where z is a tuple of predicate and/or function variables similar to Z.
This formula, denoted by Circum(A(P, Z); P; Z), asserts that the extension of P cannot be made smaller even at the price of changing the interpretations of the symbols included in Z. The corresponding form of pointwise circumscription is

    A(P, Z) ∧ ∀xz¬[Px ∧ A(λy(Py ∧ x ≠ y), z)].    (4)

It will be denoted by C_P(A(P, Z); Z). Because of the variables z, (4) is, generally, a second-order formula. (4) is equivalent to (3) if all occurrences of P in A(P, Z) are positive.

4. Minimizing Several Predicates

In a further generalization of global circumscription (3), P is a tuple of predicate constants P1, ..., Pn. The meaning of Circum(A; P; Z) is given again by (3), with p standing this time for a tuple of predicate variables p1, ..., pn, and p < P understood as

    ⋀(i=1..n) ∀x(pi x ⊃ Pi x) ∧ ¬⋀(i=1..n) ∀x(Pi x ⊃ pi x).

This form of joint minimization of several predicates is called parallel circumscription (to distinguish it from the case when different members of P are minimized with different priorities, which is discussed below).

What is the relationship between circumscribing P1, ..., Pn in parallel and circumscribing each Pi separately? It is easy to show that Circum(A; P; Z) implies ⋀i Circum(A; Pi; Z). In the important special case when all occurrences of P1, ..., Pn in A are positive, the converse also holds; hence, in this case, Circum(A; P; Z) is equivalent to ⋀i C_Pi(A; Z). This conjunction asserts that, whenever the value of one of the predicates P1, ..., Pn is changed from true to false, and the interpretation of Z is changed in an arbitrary way, the resulting structure cannot possibly be a model of A. This formula is the pointwise counterpart of parallel circumscription; there is no need to introduce a special definition.

Let us turn now to prioritized circumscription, and consider, for simplicity, the case of two predicates P1, P2.
The circumscription Circum(A; P1 > P2; Z), which assigns a higher priority to the task of minimizing P1, is defined by the same formula (3), but with p < P interpreted lexicographically; for details, see (Lifschitz 1985). This circumscription is equivalent to

    C_P1(A; P2, Z) ∧ C_P2(A; Z)

whenever P1, P2 have only positive occurrences in A. This conjunction is the pointwise counterpart of prioritized circumscription: no value of P1 can be changed from true to false even at the price of changing P2 arbitrarily.

5. A Further Generalization

Our next goal is to introduce some more general forms of pointwise circumscription. We start with a motivating example.

Consider a simple version of the blocks world, in which a block can be in only one of two places: either on the table or on the floor. We want to describe the effect of one particular action, putting block B on the table. This can be done using two unary predicate constants, ONTABLE0 and ONTABLE1, which represent the configurations of blocks before and after the action. There are two axioms:

    ¬AB x ⊃ (ONTABLE0 x ≡ ONTABLE1 x)    (5)

and

    ONTABLE1 B.    (6)

Here AB is the "abnormality" predicate which will be circumscribed in the conjunction of (5) and (6). The first axiom expresses what John McCarthy calls the "commonsense law of inertia": normally, objects remain where they are. This formula exemplifies the use of circumscription for solving the frame problem (McCarthy 1986). The second axiom expresses the basic property of the action under consideration: in the new configuration of blocks, B is on the table.

What should be varied in the process of minimizing AB? The purpose of our axiom set is to characterize the new configuration of blocks; hence it is natural to circumscribe AB with ONTABLE1 allowed to vary. Such a circumscription (global or pointwise) gives

    AB x ≡ (x = B ∧ ¬ONTABLE0 B).
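This result can be checked by brute force on a finite domain. The following Python sketch is an added illustration, not part of the paper; it assumes a hypothetical two-block universe {A, B} and represents each predicate by its extension, computing the minimal extensions of AB for every fixed interpretation of ONTABLE0 while ONTABLE1 is allowed to vary:

```python
from itertools import combinations

blocks = ['A', 'B']

def subsets(u):
    # all extensions of a unary predicate over the universe
    return [frozenset(s) for r in range(len(u) + 1) for s in combinations(u, r)]

def axioms(ab, on0, on1):
    # (5): a block not in AB keeps its location; (6): B ends up on the table
    inertia = all(x in ab or ((x in on0) == (x in on1)) for x in blocks)
    return inertia and 'B' in on1

minimal_ab = {}
for on0 in subsets(blocks):
    # extensions of AB achievable for some interpretation of ONTABLE1
    achievable = {ab for ab in subsets(blocks)
                  if any(axioms(ab, on0, on1) for on1 in subsets(blocks))}
    minimal_ab[on0] = {ab for ab in achievable
                       if not any(q < ab for q in achievable)}

# AB x <-> (x = B and not ONTABLE0 B):
# AB is empty iff B was already on the table, else AB = {B}
for on0, mins in minimal_ab.items():
    expected = frozenset() if 'B' in on0 else frozenset({'B'})
    assert mins == {expected}
```

In each case the unique minimal extension of AB agrees with the circumscription result stated above.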
This is exactly the result we would intuitively expect: the only block which changes its location is B, and this only happens if it was not on the table prior to the event. It can be shown that circumscription does not lead to the same result if ONTABLE0 and ONTABLE1 are both varied or both fixed; we must treat ONTABLE0 and ONTABLE1 in different ways.

Let us now change slightly the formal language used in this example and move closer to the formalism of the situation calculus of (McCarthy and Hayes 1969). In addition to variables for blocks, we introduce a second sort of variables, variables for situations. There are two situation constants, S0 and S1, which represent two situations separated by the action of placing B on the table. Instead of two unary predicates ONTABLE0 and ONTABLE1, we have now one binary predicate ONTABLE, which is supposed to have a situation term as its second argument. In the new notation, (5) and (6) become

    ¬AB x ⊃ (ONTABLE(x, S0) ≡ ONTABLE(x, S1))

and

    ONTABLE(B, S1).

What corresponds to the circumscription described above in this new notation? We would like to vary the values of ONTABLE(x, s) for s = S1, and have the values corresponding to s = S0 fixed. Definitions (3), (4) do not allow us to do that; for any predicate or function constant in the language, we have to either include it in the list Z, and then all values of that predicate or function may vary, or not include it in Z, and then all of its values must remain fixed in the process of minimization. We would like to be able to specify, for each function and predicate in Z, the part of its domain on which its values must remain fixed, and allow the other values to be varied.

The following notation will be useful. If p, q, r are predicate symbols of the same arity then we write EQ_r(p, q) for

    ∀x(¬rx ⊃ (px ≡ qx))

("p and q are equal outside r").
If f, g are function symbols of the same arity as r then EQ_r(f, g) stands for ∀x(¬rx ⊃ (fx = gx)).

Assume first that Z consists of only one symbol, a predicate constant or a function constant. Consider a λ-expression V of the same arity as Z, which has no parameters and contains neither P nor Z. Intuitively, it specifies the part of the domain of Z on which Z may vary. For global circumscription, we propose the following formula:

    A(P, Z) ∧ ∀pz¬[EQ_V(z, Z) ∧ A(p, z) ∧ p < P].    (7)

If V is identically true then (7) becomes (3). Making V identically false is equivalent to treating Z as a parameter rather than varying it; (7) becomes (1). In the example from the previous section, Z is ONTABLE, and we can get the desired effect, for instance, by taking V to be λys(s ≠ S0); ONTABLE is allowed to vary in situations other than S0.

The counterpart of (7) for pointwise circumscription is

    A(P, Z) ∧ ∀xz¬[Px ∧ EQ_V(z, Z) ∧ A(λy(Py ∧ x ≠ y), z)].

We can allow even more flexibility by making it possible for x to affect the choice of the part of the domain on which Z may vary when P is minimized at x. (This additional flexibility, not needed in this example, is essential for more complex applications.) Let V be a λ-expression λxuV(x, u) whose arity equals the sum of the arities of P and Z. Intuitively, V represents the function which maps every value of x into the set of all values of u satisfying V(x, u); accordingly, we will write Vx for λuV(x, u). The new form of circumscription is

    A(P, Z) ∧ ∀xz¬[Px ∧ EQ_{Vx}(z, Z) ∧ A(λy(Py ∧ x ≠ y), z)].

We will denote this formula by C_P(A(P, Z); Z/V). If V is identically true then C_P(A; Z/V) becomes C_P(A; Z). If V is identically false then C_P(A; Z/V) is equivalent to C_P(A).

6. More on Priorities

Now we know how to perform circumscription with some values of Z allowed to vary. There is another interesting possibility: we may vary some values of the minimized predicate P itself. We start with the case when Z is empty.
The new schema is

    A(P) ∧ ∀xp¬[Px ∧ EQ_{Vx}(p, P) ∧ A(λy(py ∧ x ≠ y))].    (8)

Here V is a λ-expression λxyV(x, y) which has no parameters and does not contain P. The second term of (8) expresses that it is impossible to change arbitrarily the values of P on {y : V(x, y)}, and then change its value at x from true to false, without losing the property A. We denote (8) by C_P(A; P/V). It is, generally, stronger than the basic form of pointwise circumscription C_P(A), and turns into it when V is identically false.

The following example shows how we can use the new form of circumscription to create the effect of assigning different priorities to the tasks of minimizing P at different points. Applying the basic forms of global circumscription (1) or pointwise circumscription (2) to Pa ∨ Pb gives

    ∀x(Px ≡ x = a) ∨ ∀x(Px ≡ x = b).

Using the form of pointwise circumscription introduced in this section, we can express the idea of assigning a higher priority to the task of minimizing P at b; this circumscription will lead to the stronger result

    ∀x(Px ≡ x = a).    (9)

To this end, introduce a binary predicate constant V, and let A(P) be the conjunction of Pa ∨ Pb and V(b, a). The second formula shows that P may be varied at point a when it is minimized at b. This condition expresses in the language of pointwise circumscription that minimizing P at b is given a higher priority. It is easy to see that, in this case, (8) implies (9).

An interesting feature of this example is that information on priorities is represented by the axiom V(b, a), which is included in the database along with Pa ∨ Pb. A circumscriptive theory is usually thought of as an axiom set along with a circumscription policy, a metamathematical statement describing which predicates are allowed to vary, and what the priorities are. The form of circumscription proposed here allows us to describe circumscription policies by axioms rather than metamathematical expressions.
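The effect of the policy axiom V(b, a) can be checked by finite enumeration. The following Python sketch is an added illustration, not part of the paper; it assumes the two-point universe {a, b}, represents P by its extension, and implements the minimality test of schema (8) directly: for each point x at which P is true, it tries to rewrite P arbitrarily on {y : V(x, y)}, flip P at x to false, and still satisfy A.

```python
from itertools import combinations

U = ['a', 'b']
V = {('b', 'a')}          # policy axiom V(b,a): P may vary at a while minimizing at b

def A(P):                 # the axiom Pa v Pb
    return 'a' in P or 'b' in P

def subsets(u):
    return [frozenset(s) for r in range(len(u) + 1) for s in combinations(u, r)]

def minimal_wrt_policy(P):
    # schema (8): for no x with Px can we rewrite P arbitrarily on
    # {y : V(x,y)}, then flip x to false, and still satisfy A
    for x in P:
        vary = {y for y in U if (x, y) in V}
        fixed = P - vary
        for extra in subsets(vary):
            candidate = (fixed | extra) - {x}
            if A(candidate):
                return False
    return True

result = [P for P in subsets(U) if A(P) and minimal_wrt_policy(P)]
assert result == [frozenset({'a'})]   # only the stronger result (9) survives
```

Without the policy (V empty), both {a} and {b} would pass the test, matching the weaker disjunction given by (1) and (2).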
As another example, consider the problem posed in (Hanks and McDermott 1985), Section 7.1. Let A(P) be the conjunction of these axioms:

    ai ≠ aj    (0 ≤ i < j ≤ 3),
    S(x, y) ≡ [(x = a1 ∧ y = a0) ∨ (x = a2 ∧ y = a1) ∨ (x = a3 ∧ y = a2)],
    ¬Px ∧ S(y, x) ⊃ Py,
    ¬Pa0.

We can think of a0, ..., a3 as instances of time, S as the successor relation, and P as an "abnormality" of some kind. Applying any of circumscriptions (1), (2) to A(P) gives

    ∀x[Px ≡ (x = a1 ∨ x = a2)] ∨ ∀x[Px ≡ (x = a1 ∨ x = a3)].    (10)

Hanks and McDermott ask what kind of formal non-monotonic reasoning can capture the idea of preferring "minimization at earlier instants of time", which would lead to selecting the second disjunctive term. Their analysis shows that temporal reasoning of this kind is important but apparently cannot be captured by the existing formalisms.

The problem is clearly similar to the one discussed above. Extend the theory by these "policy" axioms:

    V(ai, aj)    (0 ≤ i < j ≤ 3).

The additional axioms tell us that P may be varied at the points "later than x" when minimized at x; in this way, it "gives preference to the past". If a model with a2 ≠ a3 satisfies the first term of (10) then a "better" model can be constructed by making Pa2 false and Pa3 true.

This method can also be used to resolve a difficulty uncovered in recent attempts to formalize reasoning about the blocks world using circumscription (see (McCarthy 1986), Section 12). We would like to use the formalism of situation calculus to describe the effect of moving a block. One of the axioms is "the law of motion", expressing that, normally, moving a block x to a location l leads to a situation in which x is at l. Another axiom tells us that the case when x is not clear is an exception. Imagine now two blocks A and B side by side on the table. We attempt to place A on top of B and then to move B somewhere else.
Intuitively, the second action will be unsuccessful, because after the first action B is not clear. In other words, the first action will be "normal" relative to the law of motion, and the second will be "abnormal". Unfortunately, circumscribing abnormality does not allow us to prove this assertion. It does not eliminate the possibility that the first action leaves the positions of the blocks unchanged, so that B remains clear, and the second action leads to the normal result. In this alternative model, the second action is "normal", and the first is not. Each of the two models corresponds to a minimal value of AB; circumscription only gives a disjunction.

The "bad" model can be eliminated by "giving preference to the past", as in the previous example. Details will be given in the full paper. The solution also uses the ideas of the previous section, so that what we need is a definition of circumscription which covers all the forms introduced above. This most general definition of pointwise circumscription is given in the next section.

7. The General Case

Let A(S1, ..., Sn) be a sentence, where each Si is a predicate symbol or a function symbol (in particular, it can be a 0-ary function symbol, i.e., an object constant). We want to minimize one of the predicate symbols from this list, say, S1. (Thus S1 corresponds to P and S2, ..., Sn correspond to Z in the notation used before.) The pointwise circumscription of S1 in A with Si allowed to vary on Vi is, by definition,

    A(S) ∧ ∀xs¬[S1x ∧ ⋀(i=1..n) EQ_{Vi x}(si, Si) ∧ A(λy(s1y ∧ x ≠ y), s2, ..., sn)].    (11)

Here S stands for S1, ..., Sn; s is a list s1, ..., sn of predicate and function variables corresponding to the predicate and function constants Si; Vi (i = 1, ..., n) is a predicate without parameters which does not contain S1, ..., Sn, and whose arity is the arity of S1 plus the arity of Si. We denote (11) by C_S1(A; S1/V1, ..., Sn/Vn).
If Vi is identically true then we will drop /Vi in this notation. If Vi is identically false then we can drop the term Si/Vi altogether.

8. Conclusion

The pointwise approach to circumscription has the following advantages over the traditional global approach.

1. The basic case of pointwise circumscription is expressed by a first-order formula.

2. There is no need to define the circumscription of more than one predicate.

3. Circumscription policies become more "modular": a separate policy is defined for each of the minimized predicates.

4. Circumscription policies, including the selection of priorities, can be described by axioms, instead of metamathematical definitions.

5. Circumscription policies may vary from point to point, which provides additional flexibility useful in applications to formalizing reasoning about time and actions.

We hope that the form of circumscription proposed in this paper is sufficiently powerful for formalizing many relatively complex forms of commonsense reasoning. Future work on applications of pointwise circumscription should lead to the discovery of general principles regarding the choice of circumscription policies (such as, for instance, the principle of assigning a higher priority to minimization at earlier instants of time in temporal reasoning). The present situation, when the policy is selected in many cases by trial and error, is clearly unsatisfactory. It is also important to extend the existing methods for determining the result of circumscription to more general forms needed in applications.

Acknowledgements

I am grateful to Michael Gelfond, Benjamin Grosof, John McCarthy, Nils Nilsson, Raymond Reiter and Yoav Shoham for useful discussions.

References

Hanks, S. and McDermott, D., Temporal Reasoning and Default Logics, Technical Report YALEU/CSD/RR #430, Yale University (1985).

Lifschitz, V., Computing circumscription, Procs.
9th International Joint Conference on Artificial Intelligence 1, 1985, 121-127.

McCarthy, J., Circumscription - a form of non-monotonic reasoning, Artificial Intelligence 13 (1980), 27-39.

McCarthy, J., Applications of circumscription to formalizing commonsense knowledge, Artificial Intelligence 28 (1986), 89-118.

McCarthy, J. and Hayes, P., Some philosophical problems from the standpoint of artificial intelligence, in: Meltzer, B. and Michie, D. (eds.), Machine Intelligence 4 (Edinburgh University Press, Edinburgh, 1969), 463-502.
THE LOGIC OF PERSISTENCE

Henry A. Kautz
Department of Computer Science
University of Rochester
Rochester, New York 14627

ABSTRACT

A recent paper [Hanks 1985] examines temporal reasoning as an example of default reasoning. Its authors conclude that all current systems of default reasoning, including non-monotonic logic, default logic, and circumscription, are inadequate for reasoning about persistence. I present a way of representing persistence in a framework based on a generalization of circumscription, which captures Hanks and McDermott's procedural representation.

1. Persistence

The frame problem is that of representing a dynamic world so that one can formally infer the facts whose truth values are not changed by a given action. A temporal world model allows one to assert that various actions occur at various times, and to be silent about other times. When one reasons with such a model, the frame problem is generalized to the persistence problem: given that no relevant action, or perhaps no action at all, occurred over a stretch of time, one may need to infer that certain facts do not change their truth values over that time. In other words, one needs to represent the "inertia" of the world, the moment to moment persistence of many of its properties. Examples of persistence abound in everyday reasoning. Sitting in my office, I can infer that my car is in the parking lot, because that is where I left it this morning.

[Hanks 1985] examines the following example, here simplified. Assume a simple linear, discrete model of time, containing instants 1, 2, 3, etc. At time 1 John is alive, and a gun aimed at John is loaded. At time 3 the gun is fired. We know that if the gun is loaded when it is fired, John will die at the next moment of time. We would like to conclude that John is not alive at time 4. In order to do so, we must make the persistence inference that the gun stays loaded from times 1 to 3. (See figure 1.)
This report describes work done in the Department of Computer Science at the University of Rochester. It was supported in part by National Science Foundation grant DCR-8502481.

2. Problems with Default Reasoning

We would like to find some simple rule of default inference which captures persistence reasoning. Hanks and McDermott describe "obvious" solutions to the persistence problem using Reiter's default logic, McCarthy's circumscription operation, and McDermott and Doyle's non-monotonic logic. In default logic, for example, one would include a rule which stated that if a fact held at a time T1, and it was consistent that it held over an interval immediately following time T1, then infer that it does hold over that interval. In the circumscriptive approach, one could define a "clipping event" which occurs whenever a fact changes truth value. Persistence is indirectly asserted by circumscribing (minimizing) the predicate which holds of all clipping events.

While intuitively appealing, these approaches do not work. The basic problem, Hanks and McDermott point out, is that default inferences are not prioritized by each system. For example, applying default rules in different orders yields different extensions; in circumscription, many different models of the axioms may be minimal in the "clipping" predicate. Yet only some of these extensions or minimal models correspond to the intuitive understanding of persistence.

Consider the gun example. The axioms have a minimal model (or corresponding extension) in which the fact ALIVE persists, but the fact LOADED is (mysteriously) clipped between times 1 and 3. (See figure 2.) Therefore simply circumscribing clipping (or adding default rules) does not sanction inferences about persistence.

3. A Procedural Solution

Hanks provides a temporal-assertion management program which computes persistences. Hanks's program functions by computing persistences in temporal order, from the past to the future.
For example, the persistence of LOADED is computed before the persistence of ALIVE, and so the program concludes that John dies. The program reflects our intuitions in many cases because it captures the temporal order of causality: the gun being loaded can cause John to die, and so has precedence over it.

Hanks is not optimistic about the ability of any default logic to handle this reasoning properly:

"...If a significant part of defeasible reasoning can't be represented by default logics, and if in the cases where the logics fail we have no better way of describing the reasoning process than by a direct procedural characterization (like our program or its inductive definition), then logic as an AI representation language begins to look less and less attractive."

Such pessimism may be premature. It is possible to represent many kinds of ordered defaults in a declarative representation. We show how this can be done in a circumscriptive framework.

4. Model Theory

The semantics of circumscription are based on the idea of minimal entailment. One statement entails another if all models of the first are also models of the second. Suppose a partial order is defined over a class of models. The minimal models of a statement are those which have no strict predecessor in the partial order. Then one statement minimally entails another if all minimal models of the first are also models of the second.

McCarthy's original formulation of circumscription [McCarthy 1980] defined the partial order over models in terms of the extension of some predicate, say P. A model M1 would be less than a model M2 if the extension of P in M1 is a subset of its extension in M2, and M1 and M2 are otherwise the same. Newer work [McCarthy 1985] has refined this definition, largely concentrating on the role of the non-circumscribed predicates in the minimization.
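The definitions above can be made concrete on a finite universe. The following Python sketch is an added illustration, not part of the paper: it represents a model by the extension of a single predicate P over a hypothetical universe {a, b}, with all other symbols held fixed, and checks that Pa ∨ Pb minimally entails ¬(Pa ∧ Pb) even though it does not classically entail it.

```python
from itertools import combinations

U = ['a', 'b']

def models_of(phi):
    # every extension of P over U that satisfies the statement phi
    return [frozenset(s) for r in range(len(U) + 1)
            for s in combinations(U, r) if phi(frozenset(s))]

A = lambda P: 'a' in P or 'b' in P            # the statement Pa v Pb
B = lambda P: not ('a' in P and 'b' in P)     # the statement ~(Pa & Pb)

mA = models_of(A)
# minimal models: no strict predecessor under subset ordering of P's extension
minimal = [P for P in mA if not any(Q < P for Q in mA)]

entails = all(B(P) for P in mA)                 # classical entailment fails
minimally_entails = all(B(P) for P in minimal)  # minimal entailment holds
assert not entails and minimally_entails
assert set(minimal) == {frozenset({'a'}), frozenset({'b'})}
```

Note that there are two incomparable minimal models, {a} and {b}; minimal entailment quantifies over both, which is why circumscription of Pa ∨ Pb yields a disjunction rather than a unique answer.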
But many other variations on circumscription are possible. Let facts (such as LOADED) be represented by terms, and the atom Hold(t,f) be used to assert that fact f holds at time t. The predicate Clip holds of a time and a fact if the fact becomes false at that time; otherwise, the truth-value of the fact persists from the earlier instant. That is:

    Hold(t,f) ⊃ (Hold(t+1,f) ⊕ Clip(t+1,f))

(The symbol ⊕ represents exclusive or.) Suppose we are given some assertions about when various facts hold. We wish to define a partial order over models of these sentences which reflects our intuitions about persistence.

[Figure 1: the intended timeline over instants 1-4, in which LOADED and ALIVE persist from time 1 until the gun is fired at time 3. Figure 2: the anomalous timeline, in which LOADED is clipped before time 3 and ALIVE persists.]

"Good" models, Hanks and McDermott suggest, are ones in which earlier facts persist as long as possible, and so should fall at the beginning of the ordering. Where M1 and M2 are models, M1 is as good or better than M2 if every clipping in M1 is either matched by an identical clipping in M2, or by an earlier clipping in M2 (possibly of some different fact) which does not also appear in M1.

The less than or equal relation between models is formally defined as follows. Where M1 is a model and P is a predicate, the expression M1[P] yields the extension of P in M1. The extension of a binary predicate such as Clip is a set of pairs, where the pair of x and y is written <x, y>. Models can be compared only if they interpret constant, function, and predicate symbols other than Clip or Hold in the same way. In particular, this means that the models agree on the predicate "<", which is used to order time instants. Because models may be compared even if they do not agree on the predicate Hold, that predicate (as well as Clip) is said to vary during the minimization.

M1 ≤ M2 if and only if

(i) M1 and M2 have the same domain.

(ii) Every constant, function, and predicate symbol other than Clip and Hold receives the same interpretation in M1 and M2.
(iii) The following (meta-theoretic) statement is true:

    <f, t> ∈ M1[Clip] ⊃ <f, t> ∈ M2[Clip] ∨
        ∃t′,f′ . <f′, t′> ∈ M2[Clip] & <f′, t′> ∉ M1[Clip] & <t′, t> ∈ M1[<]

The final clause in this formula means that the time instant t′ is before the time instant t. A model M1 is strictly better than M2 (M1 < M2) just in case M1 ≤ M2 and it is not the case that M2 ≤ M1. From this definition one can prove that if M1 < M2, then (in terms of the Clip predicate) M1 and M2 are identical up to some time t′; at t′, the set of clippings in M2 properly includes the set of clippings in M1.

The minimal models are those M such that there is no M′ such that M′ < M. It is important to understand that the set of models is only partially ordered; there will be many minimal models. The set of minimal models may be empty if there is an infinite chain of models, M1 > M2 > M3 > .... This can occur if we minimize an existential statement of the form, "f will eventually be clipped", with no upper bound placed on the time of clipping. The preference order will attempt to postpone the clipping for an infinite period of time. This problem does not occur if such an unknown time is given a (skolem) constant name, however, due to the fact that constants and functions do not vary in the minimization.

5. Proof Theory

McCarthy's circumscription formula is a statement in 2nd-order logic which entails those statements true in all the minimal models of a predicate. The following persistence circumscription formula entails those statements true in models minimal in the partial order defined above. Let K(Clip,Hold) be our initial set of temporal assertions. We write an expression such as K(Foo,Bar) to stand for the set of sentences obtained by substituting the predicates Foo and Bar for every occurrence of Clip and Hold in K(Clip,Hold) respectively. The variables c and h range over predicates.

    ∀c,h . {K(c,h) &
        ∀t,f . c(t,f) ⊃ [Clip(t,f) ∨ ∃t2,f2 .
            t2<t & Clip(t2,f2) & ¬c(t2,f2)]}
        ⊃ ∀t,f . Clip(t,f) ≡ c(t,f)

The formula can be informally understood as follows. Suppose that c and h are arbitrary predicates which satisfy all the constraints placed by the knowledge base on the predicates Clip and Hold respectively. Furthermore, suppose that whenever c holds of a particular time and a particular fact, then either Clip also holds of that time and fact, or Clip holds of some earlier time and fact which are not in the extension of c. The conclusion is that c and Clip are identical; the second alternative is never the case. There cannot be a predicate which satisfies all the constraints on Clip, yet allows some fact to persist for a longer time, without having to clip some other fact at that time. The predicate-variable h was introduced in order to allow Hold to vary during the minimization of Clip.

In order to use this formula, we must select particular instantiations for the variables c and h, such that the initial set of assertions K(Clip,Hold) entails the main antecedent (in curly braces). Typically c is instantiated as a lambda expression which enumerates the desired set of clippings. The variable h is instantiated by a lambda expression which describes which facts hold, and when, in the corresponding minimal models. Modus ponens then allows us to conclude that the extension of Clip is precisely the desired set of clippings: Clip(t,f) ≡ c(t,f).

It is possible to show that this formula is valid in all models minimal in the above sense. As with standard circumscription, the formula is inconsistent if there are no minimal models. [Lifschitz 1985] develops a generic circumscription-like formula based on pre-orders. The formula above is easy to express in Lifschitz' compact and elegant notation.

6. Example

The gun example illustrates the use of persistence circumscription. K(Clip,Hold) is the following set of statements. Not shown are unique name axioms, such as LOADED ≠ ALIVE, etc.
Hold(t,f) ⊃ (Hold(t+1,f) ⊕ Clip(t+1,f))
Hold(t,FIRE) & Hold(t,LOADED) ⊃ ¬Hold(t+1,LOADED) & ¬Hold(t+1,ALIVE)
Hold(1,LOADED)
Hold(1,ALIVE)
Hold(3,FIRE)

The goal is to prove that ¬Hold(4,ALIVE). (A more complete set of axioms would also state that if something is a fact, and it does not hold at a time, then its negation holds at that time. This complication would not materially change our solution.) Our intuitions tell us that the only (required) clipping event occurs at time 4, when both LOADED and ALIVE become false (as in figure 1). The instantiation for c is therefore:

c = λ t,f . t=4 & (f=LOADED ∨ f=ALIVE)

When do various facts hold? Again referring to figure 1, we see that LOADED and ALIVE hold between times 1 and 3, and FIRE begins holding at time 3 (and persists thereafter). For h we can thus choose:

h = λ t,f . (f=LOADED ⊃ 1≤t≤3) &
            (f=ALIVE ⊃ 1≤t≤3) &
            (f=FIRE ⊃ t≥3) &
            (f=LOADED ∨ f=ALIVE ∨ f=FIRE)

These expressions are placed in the persistence circumscription formula, which is then simplified. This involves proving that the main antecedent of the formula:

(K(c,h) & ∀ t,f . c(t,f) ⊃ [Clip(t,f) ∨ ∃ t2,f2 . t2<t & Clip(t2,f2) & ¬c(t2,f2)]) ⊃ ∀ t,f . Clip(t,f) ≡ c(t,f)

must be true, where c and h are defined as above. This is done by showing (i) that K(Clip,Hold) entails K(c,h), and (ii) that K(Clip,Hold) entails:

(*) ∀ t,f . c(t,f) ⊃ [Clip(t,f) ∨ ∃ t2,f2 . t2<t & Clip(t2,f2) & ¬c(t2,f2)]

The first part of the proof involves substituting c and h for Clip and Hold in the initial knowledge base and simplifying, which is straightforward but tedious. For example, the formula Hold(1,LOADED) becomes h(1,LOADED), which is:

[λ t,f . (f=LOADED ⊃ 1≤t≤3) &
         (f=ALIVE ⊃ 1≤t≤3) &
         (f=FIRE ⊃ t≥3) &
         (f=LOADED ∨ f=ALIVE ∨ f=FIRE)] (1,LOADED)

This expression reduces to:

(LOADED=LOADED ⊃ 1≤1≤3) &
(LOADED=ALIVE ⊃ 1≤1≤3) &
(LOADED=FIRE ⊃ 1≥3) &
(LOADED=LOADED ∨ LOADED=ALIVE ∨ LOADED=FIRE)

which, given the unique name axioms mentioned above, is a tautology. The second step, as mentioned above, is to show that K(Clip,Hold) entails the statement marked with a (*). The antecedent of (*) is false, and the statement therefore true, except when t=4, and f=LOADED or f=ALIVE. Therefore we must show that:

Clip(4,LOADED) ∨ ∃ t2,f2 . t2<4 & Clip(t2,f2) & ¬c(t2,f2)

and

(†) Clip(4,ALIVE) ∨ ∃ t2,f2 . t2<4 & Clip(t2,f2) & ¬c(t2,f2)

Consider the sentence involving ALIVE, marked with a (†). We can show this statement is true by showing that if the second main disjunct is false, then the first disjunct must be true. So suppose that

∃ t2,f2 . t2<4 & Clip(t2,f2) & ¬c(t2,f2)

is false. This means that there is no clipping event before time 4. K(Clip,Hold) includes the statements Hold(1,ALIVE) and Hold(1,LOADED). The axiom

Hold(t,f) ⊃ (Hold(t+1,f) ⊕ Clip(t+1,f))

can therefore be applied for times t = 1 and t = 2, giving the conclusion

Hold(3,ALIVE) & Hold(3,LOADED)

Since Hold(3,FIRE), the axiom about firing loaded guns tells us that ¬Hold(4,ALIVE). Since Hold(3,ALIVE), we finally conclude that Clip(4,ALIVE), the first disjunct of (†), is true. Therefore (†) is true. The sentence (just before (†)) involving LOADED can be proven in a similar manner. Thus the statement (*) is true, the main antecedent of the instantiated persistence circumscription formula is true, and so

Clip(t,f) ≡ c(t,f)

Since c(4,ALIVE), it must be the case that Clip(4,ALIVE), and so ¬Hold(4,ALIVE).

Discussion

Several morals can be drawn from this exercise. One is that in reasoning about time, and probably most other applications, default inferences must be properly ordered.
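The "straightforward but tedious" substitution step lends itself to a mechanical spot-check. The sketch below uses our own finite encoding (the function names, string facts, and small time range are illustrative, not from the paper): it verifies that the instantiations c and h satisfy the gun axioms, and that ALIVE indeed fails to hold at time 4.

```python
# Finite spot-check of the gun example: the instantiated clipping
# predicate c and holding predicate h, checked against K(c, h).

FACTS = ('LOADED', 'ALIVE', 'FIRE')

def c(t, f):
    """The chosen clippings: LOADED and ALIVE are clipped at time 4."""
    return t == 4 and f in ('LOADED', 'ALIVE')

def h(t, f):
    """The instantiation of h: when each fact holds."""
    return {'LOADED': 1 <= t <= 3,
            'ALIVE':  1 <= t <= 3,
            'FIRE':   t >= 3}[f]

def satisfies_axioms(hold, clip, max_t=5):
    """Check the axioms of K(clip, hold) for times 1..max_t-1."""
    # the initial facts
    if not (hold(1, 'LOADED') and hold(1, 'ALIVE') and hold(3, 'FIRE')):
        return False
    for t in range(1, max_t):
        for f in FACTS:
            # persistence: Hold(t,f) implies Hold(t+1,f) xor Clip(t+1,f)
            if hold(t, f) and not (hold(t + 1, f) != clip(t + 1, f)):
                return False
        # firing a loaded gun unloads it and kills
        if hold(t, 'FIRE') and hold(t, 'LOADED'):
            if hold(t + 1, 'LOADED') or hold(t + 1, 'ALIVE'):
                return False
    return True

print(satisfies_axioms(h, c), h(4, 'ALIVE'))   # prints: True False
```

Note that an alternative model which clips LOADED at time 3 (so that ALIVE persists) also satisfies these axioms; it is ruled out not by K itself but by the preference for later clippings.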
Another is that we may need to step beyond the incremental progression of circumscriptive techniques, from predicate circumscription, to circumscription with variables, to formula circumscription, and view circumscription as a general framework for expressing inference in terms of various classes of minimal models. A final moral is that by thinking about default inference in terms of relationships between models, we may more readily see the inadequacies of our own purported solutions. The particular formulas just presented do not solve the persistence problem in general. Recall the example using persistence to infer that my car is in the parking lot. Suppose I learn at time 1000 that my car is gone. Using the techniques just described, I can infer that the car was in the parking lot up to the shortest possible time before I knew it was gone. This is clearly an unreasonable inference. Someone could have stolen it five minutes after I left it there; I have no reason to prefer an explanation in which it vanished five seconds before I glanced out my office window. The inadequacy is ontological: we can't handle persistence properly until we have a richer theory of causation. The purely temporal solution often works because the flow of time reflects the order of physical causation. When the full story of causation is told, we then require an efficient algorithm for performing the necessary deductions, such as Hanks's, and a clear model theory, such as that provided by generalized circumscription, to explain and justify the whole process.

References

Hanks 1985.
Steve Hanks and Drew McDermott, "Temporal Reasoning and Default Logics," YALEU/CSD/RR #430, Yale University, Department of Computer Science, Oct 1985.

Lifschitz 1985.
Vladimir Lifschitz, "Some Results on Circumscription," in Proceedings from the Non-Monotonic Reasoning Workshop, AAAI, Oct 1985.

McCarthy 1985.
John McCarthy, "Applications of Circumscription to Formalizing Common Sense Knowledge," in Proceedings from the Non-Monotonic Reasoning Workshop, AAAI, Oct 1985.

McCarthy 1980.
John McCarthy, "Circumscription -- A Form of Non-Monotonic Reasoning," Artificial Intelligence, vol. 13, no. 1, pp. 27-38, 1980.
A COMPARISON OF THE COMMONSENSE AND FIXED POINT THEORIES OF NONMONOTONICITY

Dr. Frank M. Brown
Department of Computer Science
University of Kansas
Lawrence, Kansas

ABSTRACT

The mathematical fixed point theories of nonmonotonic reasoning are examined and compared to a commonsense theory of nonmonotonic reasoning which models our intuitive ability to reason about defaults. It is shown that all of the known problems of the fixed point theories are solved by the commonsense theory. The concepts of this commonsense theory do not involve mathematical fixed points, but instead are explicitly defined in a monotonic modal quantificational logic which captures the modal notion of logical truth.

I INTRODUCTION

A number of recent papers [McDermott & Doyle, McDermott, Moore, and Reiter] have attempted to formalize the commonsense notion of something being possible with respect to what is assumed. All these papers have been based on the mathematical theory of fixed points. For example, [McDermott & Doyle] describes a rather baroque theory of nonmonotonicity in which sentences such as 'A are discovered to be theorems of a system by determining if 'A is in the intersection of possibly infinite numbers of sets which are the fixed points of the theorems generated by applying inference rules to axioms and possibility statements in all possible ways. Explicitly, if K is the set of axioms it must be determined whether:

('A is in the (intersection of all S such that (S is a fixed point of K)))

where:

(S is a fixed point of K) iff
S = (Theorems of (union K {'(P is possible with respect to what is assumed): P is not in S}))
The main problem with such "mathematical fixed point" theories of nonmonotonicity is that even if the theorems of these theories were in accord with our primitive intuitions (which they are not, as we shall see in section 3), and even if deductions could be carried out in such theories (and this is not likely since they inherently involve proofs by mathematical induction over both the classical theorem generation process and the process of generating sentences), by no stretch of the imagination would those deductions reflect our commonsense understanding of the concept of something being possible with respect to what is assumed. For what, after all, have intersections of infinite sets, mathematical fixed points, infinite sets of theorems generated by formalized deduction procedures, mathematical induction over formalized deduction procedures, or even formalized deduction procedures themselves to do with commonsense arguments about nonmonotonicity (such as for example the argument presented in section 2 below)? In our opinion, commonsense nonmonotonic arguments do not involve such concepts, at any conscious level of human reasoning, and therefore to try to explain such concepts in that terminology is an extraordinary perversion of language that is likely to lead only to unintuitive theories. The unintuitiveness of these fixed point theories is in fact recognized by some of the very proponents of these theories, although they tend to view said unintuitiveness as an intrinsic property of nonmonotonic reasoning rather than as a mere artifact of their particular theories. For example, [McDermott] states "As must be clear to everyone by now, using defaults in reasoning is not a simple matter of 'commonsense', but is computationally impossible to perform without error" and "we must attempt another wrenching of existing intuitions."
Generally, we suggest that the problems with these fixed point theories are a consequence of trying to model commonsense reasoning by semantic analysis rather than by developing a calculus which directly models that commonsense reasoning. We briefly describe our commonsense theory of nonmonotonicity in section 2 and then compare it to the fixed point theories in section 3.

II THE COMMONSENSE THEORY OF NONMONOTONICITY

The basic idea of our commonsense theory of nonmonotonicity is that nonmonotonicity is already encompassed in the normal intensional logic of everyday commonsense reasoning and can be explained precisely in that terminology. For example, a knowledge base consisting of a simple default axiom expressing that a particular bird flies whenever that bird flies is possible with respect to what is assumed is stated as:

(that which is assumed is (if (A is possible with respect to what is assumed) then A))

From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved.

where A stands for the proposition that that particular bird flies. Reflection on the meaning of this knowledgebase leads immediately to the conclusion that either A is logically possible and the knowledgebase is synonymous to A, or A is not logically possible and the knowledgebase is synonymous to logical truth. This conclusion is obtained by simple case analysis: for either A is possible with respect to what is assumed or it is not. If A is possible with respect to what is assumed then, since (if truth then A) is just A, that which is assumed is indeed A. Since that which is assumed is A, A is possible with respect to what is assumed only if A is logically possible. On the other hand, if A is not possible with respect to what is assumed then, since falsity implies A is just truth, that which is assumed is truth. Since that which is assumed is truth, A is not possible with respect to what is assumed only if A is not logically possible.
Thus if it is further assumed that A is logically possible, then it follows that the knowledgebase is synonymous to A itself. The nonmonotonic nature of these expressions becomes apparent if an additional proposition that that particular bird does not fly is added to the knowledgebase:

(that which is assumed is (and (not A) (if (A is possible with respect to what is assumed) then A)))

Reflection on this new knowledgebase leads immediately to the conclusion that it is synonymous to (not A). This conclusion is again obtained by simple case analysis: for if A is possible with respect to what is assumed then, since (if truth then A) is just A, that which is assumed is indeed ((not A) and A), which is falsity. Since that which is assumed is falsity, A is possible with respect to what is assumed only if (A and falsity) is logically possible, which it is not. Thus A is not possible with respect to what is assumed. On the other hand, if A is not possible with respect to what is assumed then, since falsity implies A is just truth, that which is assumed is just (not A). Since that which is assumed is (not A), A is not possible with respect to what is assumed only if (A and (not A)) is not logically possible, which is the case. Thus it follows that the knowledgebase is synonymous to (not A). Therefore, whereas the original knowledgebase was synonymous to A, the new knowledgebase, obtained by adding (not A), is synonymous, not to falsity, but to (not A) itself. These simple intuitive nonmonotonic arguments involve logical concepts such as not, implies, truth, falsity, logical possibility, possibility with respect to some assumed knowledgebase, and synonymity to a knowledgebase. The concepts not, implies, truth (i.e. T), and falsity (i.e. NIL) are all concepts of (extensional) quantificational logic and are well known.
The remaining concepts, namely logical possibility, possibility with respect to something, and synonymity of two things, can be defined in a very simple modal logic extension of quantificational logic, which we call Z [Brown 1,2,3,4]. The axiomatization of the modal logic Z is described in detail below. But briefly, it consists of (extensional) quantificational logic plus the intensional concept of something being logically true, written as the unary predicate (LT P). The concept of a proposition P being logically possible and the concept of two propositions being synonymous are then defined as:

(POS P) = (NOT(LT(NOT P)))    P is logically possible
(SYN P Q) = (LT(IFF P Q))     P is synonymous to Q

The above knowledgebases and arguments can be formalized in the modal logic Z quite simply by letting some letter such as K stand for the knowledgebase under discussion. The idiom "that which is assumed is X" can then be rendered to say that K is synonymous to X, and the idiom "X is possible with respect to what is assumed" can be rendered to say that (K and X) is possible:

(that which is assumed is X) = (SYN K X)
(X is possible with respect to what is assumed) = (POS(AND K X))

These two idioms are indexical symbols referring implicitly to some particular knowledgebase K under discussion. The knowledgebase referenced by the (X is possible with respect to what is assumed) idiom is always the meaning of the symbol generated by the enclosing (that which is assumed is X) idiom. Each occurrence of the (that which is assumed is X) idiom always generates a symbol (unique to the theory being discussed) to stand for the database under discussion. These knowledgebases have been expressed solely in terms of the modal quantificational logic Z. In particular, the nonmonotonic concepts were explicitly defined in this logic. The intuitive arguments about the meaning of these nonmonotonic knowledgebases can be carried out solely in the modal quantificational logic Z.
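The two case analyses above can also be reproduced by brute force under a toy possible-worlds reading: propositions are sets of worlds, POS is non-emptiness, and SYN is equality of sets. This encoding is our own simplification for illustration, not the logic Z itself; with one generator A there are just two worlds.

```python
from itertools import combinations

WORLDS = (0, 1)                # world 0: A false; world 1: A true
A = frozenset({1})
NOT_A = frozenset({0})
PROPS = [frozenset(s) for r in range(3) for s in combinations(WORLDS, r)]

def pos(p):
    """POS: the proposition holds in at least one world."""
    return bool(p)

def default(k):
    """(if (POS (and K A)) then A), as a set of worlds."""
    return A if pos(k & A) else frozenset(WORLDS)

# solutions of (SYN K (if (POS (and K A)) then A))
sols = [k for k in PROPS if k == default(k)]

# solutions once (not A) is added to the knowledgebase
sols_neg = [k for k in PROPS if k == (NOT_A & default(k))]

print(sols == [A], sols_neg == [NOT_A])   # prints: True True
```

The only solution of the first purported definition is A itself, and adding (not A) shifts the unique solution to (not A), matching the informal argument.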
Most importantly, our commonsense understanding and reasoning about nonmonotonicity is directly represented by the inference steps of this formal theory. Therefore, it is clear that nonmonotonic reasoning needs no special axioms or rules of inference because it is already inherent in the normal intensional logic of everyday commonsense reasoning as modeled by the modal quantificational logic Z. It remains only to axiomatize the logic Z.

The Modal Logic Z

Our theory Z of commonsense intensional reasoning is a simple modal logic [Lewis] that captures the notion of logical truth. The symbols of this modal logic consist of the symbols of (extensional) quantificational logic plus the primitive modal symbolism (LT p), which is truth whenever the proposition p is logically true. The axioms and inference rules of this modal logic include the axioms and inference rules of (extensional) quantificational logic similar to that used by Frege in Begriffsschrift [Frege], plus the following inference rule and axioms about the concept of logical truth.

R0: from p infer (LT p)
The inference rule RO and the axioms Al, A2 and A3 constitute an S5 modal logic. A good introduction to modal logic in general and in particular to the properties of the S5 modal logic is given in [Hughes and Cresswell]. Minor variations of the axioms Al, A2, and A3 were shown in [Carnapl to hold for the modal concept of logical truth. We believe that the additional axioms, namely A4 and A5, are needed in order to precisely capture the notion of logical truth. The axiom A4 states that a proposition is logically true if it is true in all worlds. We say that a proposition P is a world iff P is possible and P is complete, that P is complete iff for all Q, P determines Q, that P determines Q iff P entails Q or P entails not Q, that P entails Q iff it is logically true that P implies Q, and that P is possible iff it is not the case that not P is logically true. These definitions are given below: (WORLD P)=df (AND(POS P) (COMPLETE P)) P is a world (COMPLETE P) = df (ALL Q(DET P Q)) P is complete (DET P Q)= df (OR(ENTAIL P Q) (ENTAIL P(NOT Q))) P determines Q (ENTAIL P Q) = df (LT(IMPLY P Q)) P entails Q (POS P) = df (NOT(LT(NOT P))) P is possible Thus a world is a possible proposition which for every proposition entails it or its negation. The axiom A5 states that the meaning of every conjunction of the generated contingent propositions or their negations is possible. We call this axiom "The Axiom of the Possibility of Contingent facts" or simply the "Possibility Axiom". The need for this axiom follows from the fact that the other axioms of the modal logic do not imply certain elementary facts about the possibility of conjunctions of distinct possibly negated atomic expressions consisting of nonlogical symbols. For example, if we have a theory formulated in our modal logic which contains the nonlogical atomic expression (ON A B) then since (ON A B) is not logically true, it follows that (NOT(ON A B)) must be possible. 
Yet (POS(NOT(ON A B))) does not follow from these other axioms. Likewise, since (NOT(ON A B)) is not logically true, (ON A B) must be possible. Yet (POS(ON A B)) does not follow from the other axioms. Thus these contingent propositions (ON A B) and (NOT(ON A B)) need to be asserted to be possible. There are a number of ways in which this may be done, and these ways essentially correspond to different ways the idiom (P is a meaning combination of the generators) may be rendered. In this paper we have chosen a general method which is applicable to just about any contingent theory one wishes. This rendering is given below:

(meaning of the generator subset S) =df
  (ALL G(IMPLY(GENERATORS G) (IFF(S G) (GMEANING G))))

(GMEANING '(p ,X1 ... ,XN)) =df (p (GMEANING X1) ... (GMEANING XN))
  for every contingent symbol p of arity n.

(GENERATORS) =df (LAMBDA(A) (A is a contingent variable free simple sentence))

We say that the meaning of the generator subset S is the conjunction of the GMEANINGs of every generator in S and the negation of the GMEANINGs of all the generators not in S. The generator meaning of any expression beginning with a contingent symbol 'p is p of the GMEANINGs of its arguments. The generators are simply any contingent variable free atomic sentences we wish to use. One of the most striking features of nonmonotonic knowledgebases is that they are sometimes described in terms of themselves. Such knowledgebases are said to be reflexive [Hayes]. For example, the knowledgebase K purportedly defined by the axiom:

(SYN K (IMPLY(POS(AND K A))A))

is defined as being synonymous to the default:

(IMPLY(POS(AND K A))A)

which in turn is defined in terms of K. Thus this purported definition of K is not actually a definition at all but is merely an axiom describing the properties possessed by any knowledgebase K satisfying this axiom.
In general, a purported definition of a knowledgebase:

(SYN K(f K))

will be implied by zero or more explicit definitions of the form:

(SYN K g)

where K does not occur in g. The explicit definitions which imply a purported definition of a knowledgebase are called the solutions of that purported definition. In general a purported definition may have zero or more solutions. For example, (SYN K(NOT K)) is (LT(IFF K(NOT K))), which is (LT NIL), which is NIL, and therefore has no solutions; and (SYN K K) is (LT(IFF K K)), which is (LT T), which is T, and therefore has all solutions. Finally, (SYN K G) where K does not occur in G is an explicit definition of K and therefore has only one solution, namely itself. Because K is the knowledgebase under discussion, it is not itself a contingent proposition of that knowledgebase. Thus 'K is not a GENERATOR and the possibility axiom A5 will not apply to it.

As an example of how the modal logic Z is used, we carry out in Z a slightly more general argument similar to the unformalized nonmonotonic arguments described above. This argument is about a knowledgebase K consisting of (a conjunction of) axioms G not containing K plus one additional standard default axiom. A standard default axiom is an axiom of the form:

(IMPLY(POS(AND K A)) (IMPLY B A))

This structure contains as instances default axioms such as:

(IMPLY(POS(AND K(CAN-FLY ENTERPRISE)))
  (IMPLY(IS-SPACE-SHUTTLE ENTERPRISE) (CAN-FLY ENTERPRISE)))

T1: A knowledgebase containing exactly one variable free standard default has precisely one solution.
(IFF(SYN K(AND G(IMPLY(POS(AND K A))(IMPLY B A))))
     (SYN K(AND G(IMPLY(POS(AND G A))(IMPLY B A)))))

proof:

(SYN K(AND G(IMPLY(POS(AND K A))(IMPLY B A))))
(IF(POS(AND K A))
   (SYN K(AND G(IMPLY(AND B T)A)))
   (SYN K(AND G(IMPLY(AND B NIL)A))))
(IF(POS(AND K A)) (SYN K(AND G(IMPLY B A))) (SYN K G))
(OR(AND(POS(AND K A))(SYN K(AND G(IMPLY B A))))
   (AND(NOT(POS(AND K A)))(SYN K G)))
(OR(AND(POS(AND G(IMPLY B A)A))(SYN K(AND G(IMPLY B A))))
   (AND(NOT(POS(AND G A)))(SYN K G)))
(OR(AND(POS(AND G A))(SYN K(AND G(IMPLY B A))))
   (AND(NOT(POS(AND G A)))(SYN K G)))
(IF(POS(AND G A)) (SYN K(AND G(IMPLY B A))) (SYN K G))
(SYN K(IF(POS(AND G A))(AND G(IMPLY B A))G))
(SYN K(AND G(IF(POS(AND G A))(IMPLY B A)T)))
(SYN K(AND G(IMPLY(POS(AND G A))(IMPLY B A))))

The solutions to the two purported definitions of the informal arguments given at the start of this section are obtained from theorem T1 as corollaries, for if G is T, B is T, and A is possible, it follows that:

(IFF(SYN K(IMPLY(POS(AND K A))A)) (SYN K A))

and if G is (NOT A) and B is T, it follows that:

(IFF(SYN K(AND(NOT A)(IMPLY(POS(AND K A))A))) (SYN K(NOT A)))

We now compare our commonsense theory of nonmonotonicity to the fixed point theories. In this section we examine four fixed point theories: [McDermott & Doyle, McDermott, Moore, and Reiter] and comment on their modelling of our commonsense intuitions and on their computational tractability. [Reiter] presents a theory of nonmonotonicity called "A Logic for Default Reasoning" which is essentially a first order logic supplemented with additional inference rules of the form:

from (A X), (m(B1 X)), ..., (m(Bn X)) infer (C X)

where 'm' is not a symbol of the theory, but like "infer" is merely part of the structural syntax of the inference rule itself. This rule is intended to mean that if A holds and all Bs are possible then C may be inferred. The problem with this default theory is that even though it uses the concept of being possible with respect to what is assumed, it does not allow the inference of any laws at all about the concept of being possible with respect to what is assumed, because the possibility symbol "m" is not part of the formal language. Thus, although there is a certain pragmatic utility to this theory, it does not actually axiomatize the concept M of being possible with respect to what is assumed.
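For concreteness, the operational reading of a Reiter-style rule (from A, M B1, ..., M Bn infer C) can be sketched as below. The belief representation and the naive consistency test are our own illustrative simplifications; note that the justification test is ordinary program text here, mirroring the point that "m" is not a symbol of Reiter's object language.

```python
# Sketch of applying one Reiter-style default rule over a finite
# belief set. Sentences are plain strings; "consistency" is the toy
# test that the explicit negation is not already believed.

def consistent(beliefs, sentence):
    neg = sentence[4:] if sentence.startswith('not ') else 'not ' + sentence
    return neg not in beliefs

def apply_default(beliefs, prereq, justifications, conclusion):
    """A : M B1,...,M Bn / C -- fire the rule if it is applicable."""
    if prereq in beliefs and all(consistent(beliefs, b) for b in justifications):
        return beliefs | {conclusion}
    return beliefs

beliefs = apply_default({'bird(tweety)'},
                        'bird(tweety)', ['flies(tweety)'], 'flies(tweety)')
print('flies(tweety)' in beliefs)   # prints: True
```

With 'not flies(tweety)' already believed, the same call leaves the belief set unchanged, which is the intended nonmonotonic behaviour.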
The problem with this default theory is that even though it uses the concept of being possible with respect to what is assumed, it does not allow the inference of any laws at all about the concept of being possible with respect to what is assumed because the possibility symbol "m" is not part of the formal language. Thus, although there is a certain pragmatic utility to this theory, it does not actually axiomatize the concept M of being possible with respect to what is assumed. [McDermott & Doyle] describes a nonmonotonic logic which was intended to capture the notion of a sentence being consistent with the sentences in a given knowledgebase: "We first define a standard language of discourse including the nonmonotonic modality M ('consistent')." Since the intended meaning of their symbol M is essentially our idiom (that which is possible with respect to what is assumed) if the knowledgebase is K the intended meaning of the notion M could be defined in our logic as: (M X) = df (POS(AND K X)) There are two problems with this theory. First, as pointed out in [McDermott & Doyle] it is computationally intractible: "there seems to be no procedure which will tell you when something is a theorem" and in fact no proof procedure is given for even a first order quantificational nonmonotonic logic. Second, again as is pointed out in [McDermott t Doyle] this theory is too weak to actually capture the notion of consistency with a knowledgebase: *'Unfortunately, the weakness of the logic manifests itself in some disconcerting exceptional cases which indicate that the logic fails to capture a coherent notion of consistency". All these disconcerting cases are solved in our theory. The first such problem is that the knowledgebase K consisting of the expression: (AND(M A) (NOT A)) is not synonymous to falsity in their logic even though intuitively it should be since (NOT A) is in K and therefore (AND K A) is contradictory. 
This problem is solved in our theory of nonmonotonicity since:

(IFF(SYN K(AND G(POS(AND K A))(NOT A))) (SYN K NIL))

A second problem with their logic, as they point out, is that (M A) does not follow from (M(AND A B)), even though intuitively it should. This problem is solved in our theory since:

(IMPLY(POS(AND A B)) (POS A))
(IMPLY(NOT(LT(NOT(AND A B)))) (NOT(LT(NOT A))))
(IMPLY(LT(NOT A)) (LT(NOT(AND A B))))

which by A2 of the modal logic Z is implied by:

(LT(IMPLY(NOT A) (NOT(AND A B))))

McDermott and Doyle consider their logic to have a third problem, namely that a theory consisting of:

(AND(IMPLY(M A)B) (NOT B))

where 'A and 'B are simple sentences (i.e. GENERATORS in our terminology) is incoherent because it has no fixed point. However, intuitively, whether the knowledgebase consisting of this axiom has a solution or not depends precisely on whether (AND A(NOT B)) is logically possible or not; for if (AND A(NOT B)) is not logically possible, then it is not possible with respect to any K, and therefore K is synonymous to (NOT B); and if it is logically possible, then B is in K and therefore the false proposition (AND A(NOT B)B) would have to be logically possible (which it cannot be) for there to be a solution. Since 'A and 'B are assumed to be generators, it follows that (AND A(NOT B)) is possible. Therefore intuitively such a knowledgebase K should not have any solutions. We therefore do not consider this example to be a defect of their theory. This same point is made in [Moore2], where this example was analyzed from the perspective of Stalnaker's theory. This example does, however, illustrate that the theory in [McDermott & Doyle] only applies to generators, for if A were falsity or were synonymous to B then there would be a solution, namely that K is synonymous to (NOT B).
Therefore:

(IFF(SYN K(AND(IMPLY(POS(AND K A))B) (NOT B)))
     (AND(NOT(POS(AND(NOT B)A))) (SYN K(NOT B))))

Thus if 'A and 'B are assumed to be generators, it follows that:

(IFF(SYN K(AND(IMPLY(POS(AND K A))B)(NOT B))) NIL)

[McDermott] makes a second attempt to find a coherent theory of nonmonotonicity. This attempt is based essentially on the idea of supplementing the theorem generation process with the rules of inference and axioms of a modal logic. Because it is based on the same general set theoretic fixed point constructions as in [McDermott & Doyle], this new theory is just as computationally intractable. The "necessity operator" L of these nonmonotonic modal logics intuitively means that something is entailed by what is assumed (i.e. that the negation of that thing is not possible with respect to what is assumed). Thus the intuitive meaning of L could be captured in our modal logic Z by the definition:

(L A) =df (ENTAIL K A)    (i.e. (NOT(M(NOT A))))

Three modal logics, T, S4, and S5, are investigated because McDermott does not believe any one is superior to the others: "The reason why I study a variety of modal systems is that they are all closely related, and no one is obviously better than the others." This statement is entirely correct because none of these three modal logic extensions of the nonmonotonic theory captures the intuitive notion of being possible with respect to what is assumed. The problem with the first two logics, T and S4, is that they are too weak. For example, one problem with [McDermott]'s nonmonotonic S4, as is therein pointed out, is that a knowledgebase K consisting of the expression:

(IMPLY(L(M A)) (NOT A))

where 'A is a simple sentence (i.e. a GENERATOR in our terminology) is not contradictory, although intuitively it should be. For if (L(M A)) is the case then the knowledgebase is synonymous to (NOT A) and (M A) is contradictory, making (L(M A)) contradictory. And if (L(M A)) is not the case then the knowledgebase is synonymous to T, and since (L(M T)) is the case, a contradiction results.
And if (L(M A)) is not the case then the knowledgebase is synonymous to T and since (L(MT)) is the case a contradiction results. This problem is solved in our theory of nonmonotonicity since: (IFF(SYN K(IMPLY(LT(IMPLY K(POS(AND K A)))) (NOT A) 1) (OR(AND(SYN A T) (SYN K NIL)) (AND(SYN A NIL) (SYN K T))) ) Thus, when 'A is a generator there are no solutions: (IFF(SYN K(IMPLY(LT(IMPLY K(POS(AND K A)))) (NOT A) 1) NIL) Thus, even Nonmonotonic S4 (and since T is weaker than S4 it too) is too weak to capture the notion of being possible with respect to what is assumed. There remains only the question whether [McDermottl's nonmonotonic S5 captures the notion of being possible with respect to what is assumed. One problem with this nonmonotonic 55 logic, as is therein pointed out, is that a knowledgebase consisting of the simple default: (IMPLY(M A)A) has a fixed point containing (NOT A). This bizarre result follows from the fact that in McDermott's theory the additional default: '(IMPLY(M(NOT A)) (NOT A)) which is logically derivable in the knowledgebase from the first default is(in our terminology) incorrectly assumed to be part of what entails the knowledgebase. Thus, in McDermott's S5 logic a knowledgebase containing a default always (in our terminology) includes in its purported definition the opposite default thus giving the situation: (IFF(SYN K(AND(IMPLY(POS(AND K A))A) (IMPLY(POS(AND K(NOT A))) (NOT A)))) (OR(SYN K A) (SYN K(NOT A))) ) which states that a knowledgebase with two opposite defaults has two solutions A and (NOT A) . The unintuitiveness of having a default actually default to the opposite of what is specified is recognized by McDermott: "Surely the logic should draw some distinction between a default and its negation if it is to be a logic of defaults at all." (In fact [McDermottl's nonmonotonic S5 logic is so bizarre that as is pointed out therein it is not nonmonotonic after all as its theorems are just those of monotonic S5 modal logic.) 
This problem of defaults does not appear in our theory of nonmonotonicity because we do not make the erroneous assumption that the derived default is part of what entails the knowledgebase K:

(SYN K(IMPLY(POS(AND K A))A))

Thus, even though either default is equivalent in the knowledgebase K:

(IFF(ENTAIL K(IMPLY(POS(AND K A))A))
     (ENTAIL K(IMPLY(POS(AND K(NOT A))) (NOT A))))

and therefore the first default is equivalent to the conjunction of the two:

(IFF(ENTAIL K(IMPLY(POS(AND K A))A))
     (ENTAIL K(AND(IMPLY(POS(AND K A))A)
                  (IMPLY(POS(AND K(NOT A))) (NOT A)))))

and K entails the two defaults, it does not follow that K is synonymous to the two defaults:

(SYN K(AND(IMPLY(POS(AND K A))A)
          (IMPLY(POS(AND K(NOT A)))(NOT A))))

is false, because the two defaults do not entail K:

(ENTAIL(AND(IMPLY(POS(AND K A))A)
           (IMPLY(POS(AND K(NOT A))) (NOT A)))
       K)

is false. These facts are verified by theorem T1, which proves that a knowledgebase (SYN K(IMPLY(POS(AND K A))A)) consisting of one default (even though the opposite default is entailed by it) has only one solution, namely A. Another problem with [McDermott]'s nonmonotonic S5, as [Moore2] points out, is that for every A, the S5 axiom (IMPLY(L A)A) causes every knowledgebase to have (in the absence of information to the contrary) a fixed point which contains A. This is not a problem in our system because again quantified laws such as:

(ALL X(IMPLY(L(P X)) (P X)))
(ALL X(IMPLY(L(IMPLY(P X)(Q X))) (IMPLY(L(P X))(L(Q X)))))
(ALL X(OR(L(P X)) (L(NOT(L(P X))))))

are not theorems of autoepistemic logic. One might try to repair this problem of autoepistemic logic by adding the axioms of S5. However, this does not solve the problem, because when the axiom:

(IMPLY(L P)P)

is added to autoepistemic logic, just as in [McDermott]'s S5 nonmonotonic logic, the result is that there is a fixed point of every knowledgebase containing P.
For this reason [Moore] suggests that only the axioms of a modal logic weaker than S5, one which does not include
'(IMPLY (L P) P)
be added. The problem with this is that the excluded axiom '(IMPLY (L P) P), where 'P is a variable, is intuitively true of the concept of being possible with respect to what is assumed, and therefore should be deducible as a theorem. Moore tries to justify his system's failure to include this axiom by saying that his system tries to capture the notion M of something being possible with respect to what is "believed" by an ideally rational agent and the concept L of something being entailed by what is believed: "The problem is that all of these logics also contain the schema LP->P, which means that, if the agent believes P then P is true, but this is not generally true". Moore then essentially argues that since, as is well known, this law fails for the notion of belief when this sentence is asserted as being true in the real world, it must be incorrect to assert it generally. (The other S5 modal laws hold for the concept of belief, as can readily be proven in our modal logic Z when "believes" is defined to mean that which is entailed by one's explicit beliefs.) The problem with Moore's analysis is that it confuses the real world and the agent's belief world when it states that the second P in "LP->P" means P is true; for in autoepistemic logic the assertion of a sentence is a statement that that sentence is believed, not that it is true. Therefore, the correct rendering of this belief interpretation is:
(That which is believed is: (if (P is believed) then P))
which intuitively is true. These problems are solved in our theory of nonmonotonicity, because all the axioms and inference rules of the concept of being possible with respect to what is assumed are theorems of the modal logic Z. A number of these theorems are listed below. (LTK p) is interpreted to mean that p is entailed by what is assumed.
The '...' in the purported definition represents the conjunction of axioms asserted into the knowledgebase.

Interpretation in Z of the Modal Logic KZ

TKR0: (IMPLY (KTRUE P) (KTRUE (LTK P)))
TKA1: (KTRUE (IMPLY (LTK P) P))
TKA2: (KTRUE (IMPLY (LTK (IMPLY P Q)) (IMPLY (LTK P) (LTK Q))))
TKA3: (KTRUE (OR (LTK P) (LTK (NOT (LTK P)))))
TKA4: (KTRUE (IMPLY (ALL Q (IMPLY (WORLDK Q) (LTK (IMPLY Q P)))) (LTK P)))
TKA5: (ALL S (IMPLY (ENTAIL (meaning of the generator subset S) K)
                    (KTRUE (POSK (meaning of the generator subset S)))))
PURPORTED-DEFINITION: (SYNK ...)

Definitions:
(WORLDK W)    =df (AND (POSK W) (COMPLETEK W))
(COMPLETEK W) =df (ALL Q (DETK W Q))
(DETK P Q)    =df (OR (ENTAILK P Q) (ENTAILK P (NOT Q)))
(ENTAILK P Q) =df (LTK (IMPLY P Q))
(POSK P)      =df (NOT (LTK (NOT P)))
(LTK P)       =df (LT (IMPLY K P))
(SYNK P)      =df (SYN K P)
(KTRUE P)     =df (LT (IMPLY K P))

We now answer the general question which [McDermott&Doyle], [McDermott], and [Moore] attempted to answer, namely, from the viewpoint of asserting things into a knowledgebase, what precisely are the laws which capture the notion of something being possible with respect to a knowledgebase. Here they are:
KNOWLEDGE REPRESENTATION / 399

The Modal Logic KZ

KR0: from p infer (LTK p)
KA1: (IMPLY (LTK P) P)
KA2: (IMPLY (LTK (IMPLY P Q)) (IMPLY (LTK P) (LTK Q)))
KA3: (OR (LTK P) (LTK (NOT (LTK P))))
KA4: (IMPLY (ALL Q (IMPLY (WORLDK Q) (LTK (IMPLY Q P)))) (LTK P))
KA5: for the meaning of every generator subset S which entails K:
     (POSK (meaning of the generator subset S))
PURPORTED-DEFINITION: ...
Reflection: (ENTAIL ... K), where ... is the conjunction of axioms actually being asserted into the knowledgebase.

It should be noted that the notion of entailment is precisely defined in the modal logic Z and therefore KA5 does not involve a circular definition as do the fixed point theories.
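Transliterated into conventional modal notation (our rendering, writing the box for LTK and omitting the generator-subset schema KA5 and the reflection clause), the core of KZ reads:

```latex
\begin{align*}
\text{KR0:} &\quad \text{from } p \text{ infer } \Box_K\, p \\
\text{KA1:} &\quad \Box_K P \rightarrow P \\
\text{KA2:} &\quad \Box_K (P \rightarrow Q) \rightarrow (\Box_K P \rightarrow \Box_K Q) \\
\text{KA3:} &\quad \Box_K P \lor \Box_K \lnot \Box_K P \\
\text{KA4:} &\quad \forall Q\,\bigl(\mathrm{WORLD}_K(Q) \rightarrow \Box_K (Q \rightarrow P)\bigr) \rightarrow \Box_K P
\end{align*}
```

KR0 together with KA1-KA3 is essentially the familiar S5 apparatus relativized to the knowledgebase K; KA4 and KA5 are what go beyond it.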
An examination of these laws, ironically, shows that the problem with [McDermott&Doyle], [McDermott], and [Moore] lies not with the choice of modal laws such as KA1, KA2, KA3, and KA4, since all these laws are true, but rather with the basic fixed point construction itself, which is (incorrectly) far stronger than KA5 and the reflection portion of the purported definition.

IV CONCLUSION

Any scientific theory must be judged by its correctness (Does it predict all the phenomena so far examined, or are there counterexamples?), by its experimental feasibility (Is it possible to make predictions from the theory, or are the deductions so computationally intractable that it is practically impossible to determine the consequences of the theory?), and by its generality (Does it apply only to the current problem at hand, or does it also provide solutions to other radically different problems?). By these criteria, unlike the fixed point theories, our theory of nonmonotonicity based on the modal logic Z fares extremely well. First, we have not found any phenomenon predicted by our theory which clashes with our primitive intuitions; in fact, even after examining the example problems of four other theories of nonmonotonicity, we have not found any example therein described for which our theory does not give the intuitively correct result. Secondly, unlike the fixed point theories, our theory of nonmonotonicity is computationally tractable, in that deductions can be made from it merely by deducing theorems in the modal quantificational logic Z (which is monotonic) in the traditional manner, by applying inference rules to axioms and previously deduced theorems. Finally, unlike the fixed point theories, our theory of nonmonotonicity, which is essentially nothing more than the axioms and inference rules of the modal quantificational logic Z, is a quite general theory applicable to many problem areas.
For example, it has been used to define a wide range of intensional concepts [Brown4,5] such as those found in doxastic logic, epistemic logic, and deontic logic.

ACKNOWLEDGEMENTS

This research was supported by the Mathematics Division of the US Army Research Office under contract DAAG29-85-C-0022, by National Science Foundation grant DCR-8402412 to AIRIT Inc., and by a grant from the University of Kansas. I wish to thank the members of the Computer Science department and college administration at the University of Kansas for providing the research environment to carry out this research, and also Glenn Veach, who has collaborated with me on some of the research herein described.

REFERENCES

Brown1, F.M., "A Theory of Meaning," Department of Artificial Intelligence Working Paper 16, University of Edinburgh, November 1976.
Brown2, F.M., "An Automatic Proof of the Completeness of Quantificational Logic," Department of Artificial Intelligence Research Report 52, 1978.
Brown3, F.M., "A Theorem Prover for Metatheory," 4th Conference on Automatic Theorem Proving, Austin, TX, 1979.
Brown4, F.M., "A Sequent Calculus for Modal Quantificational Logic," Proc. 3rd AISB/GI Conference, Hamburg, July 1978.
Brown5, F.M., "Intensional Logic for a Robot, Part 1: Semantical Systems for Intensional Logics Based on the Modal Logic S5+Leib," invited paper for the Electrotechnical Laboratory Seminar, IJCAI 5, Tokyo, 1979.
Carnap, Rudolf, Meaning and Necessity: A Study in Semantics and Modal Logic, The University of Chicago Press, 1956.
Frege, G., "Begriffsschrift, a formula language, modeled upon that of arithmetic, for pure thought," 1879, in From Frege to Gödel, 1967.
Hayes, P.J., "The Logic of Frames," in Frame Conceptions and Text Understanding, Walter de Gruyter & Co., 1979.
Hughes, G.E., and Cresswell, M.J., An Introduction to Modal Logic, Methuen and Co. Ltd., London, 1968.
Lewis, C.I., A Survey of Symbolic Logic, University of California Press, 1918.
McDermott, D., "Nonmonotonic Logic II: Nonmonotonic Modal Theories," JACM, Vol. 29, No. 1, Jan. 1982.
McDermott, D., and Doyle, J.,
"Non-Monotonic Logic I," Artificial Intelligence, 13, 1980.
Moore, R.C., "Semantical Considerations on Nonmonotonic Logic," Artificial Intelligence, 25, 1985.
Reiter, R., "A Logic for Default Reasoning," Artificial Intelligence, 13, 1980.
Chronological Ignorance: Time, Nonmonotonicity, Necessity and Causal Theories

Yoav Shoham
Yale University, Computer Science Department

Abstract. Concerned with the problem of reasoning efficiently about change within a formal system, we identify the initiation problem. The solution to it which we offer, called the logic of chronological ignorance, combines temporal logic, non-monotonic logic, and the modal logic of necessity. We identify a class of theories, called causal theories, which have elegant model-theoretic and complexity properties in the new logic.

1 Introduction: the prediction task

The work overviewed here falls into the class of attempts to formalize aspects of commonsense reasoning. The particular task considered is the prediction task. When we see someone pulling the trigger of a gun we brace ourselves, predicting that a loud noise will follow. I would like to be able to emulate this process on a computer. In other words, I am interested in being able to reason efficiently, naturally, and rigorously about the behavior of a system, given a description of it and of the relevant rules of "lawful change." This is related to, but distinct from, the work done in qualitative physics [1], since I am interested in a precise logic. The stress is on maintaining formal semantics throughout the process, so that the denotation of our symbols always remains clear. This research was motivated by the need to reason about complex and continuous processes, such as billiard balls rolling and colliding with one another, or liquids heating until they boil. In [11] I indeed discuss such scenarios, but in this paper I will necessarily have to simplify the discussion. First, I will view time as being discrete and linear. Second, I will interpret propositions over time points rather than time intervals. Neither assumption is one I believe in (see, e.g., [12]).
However, the essential concepts - chronological ignorance and causal theories - can be explained already in this constrained framework. Consider the following very simple scenario, to which I will make reference throughout the paper. In it a gun is loaded at t = 1 and fired at t = 5. Furthermore, our knowledge of guns tells us that if a loaded gun is fired at t = i then there is a loud noise at t = i+1, provided no "weird" circumstances obtain: there is air to carry the sound, the gun has a firing pin, the bullets are made out of lead and not marshmallows, and this list can be continued arbitrarily. Are we justified in concluding that there will be a loud noise at time t = 6? The answer is of course no, and there are two reasons for that. First, there is the question of whether the gun is loaded at time t = 5. It was loaded at t = 1, but how long did that last? We would like to say that it lasted until the firing, or more generally, that it lasted for "as long as possible" (that is, the interval of "being loaded" cannot be extended without violating some physical law and other facts that happen to be true). How do we capture in our logic the property of persisting "for as long as possible"? Second, even if we managed to show that the gun was loaded at time t = 5, we would still not be able to show that none of the "weird" circumstances hold at t = 5; that is not entailed by our statements. I will term the first problem (that of assigning "inertia" to propositions) the persistence problem, and the second problem (that of excluding unusual circumstances) the initiation problem. These two problems are related to the infamous frame problem [5], but transcend the particular framework of the situation calculus [5]; they arise whenever one uses "local rules of change." I explore this issue further both in [10] and in the full version of this paper. Here I will make do with an informal description of the problem.
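The force of the initiation problem can be seen by trying a strictly monotonic reading of the rule: every mundane precondition would have to be explicitly provable before the prediction goes through. A toy illustration (the encoding of facts and preconditions is ours, not the paper's):

```python
# Strictly monotonic prediction: noise at j+1 only if 'loaded' at j,
# 'fire' at j, and every mundane "no weirdness" fact are all
# explicitly present. With only the two stated facts, nothing follows.

facts = {(1, "loaded"), (5, "fire")}

def strict_predict_noise(facts, j):
    needed = [(j, "loaded"), (j, "fire"),
              (j, "air-present"), (j, "has-firing-pin"),
              (j, "lead-bullets")]       # and so on, arbitrarily far
    return all(f in facts for f in needed)

print(strict_predict_noise(facts, 5))   # False: nothing entails 'loaded'
                                        # at 5, nor the absence of weirdness
```

Only after asserting every one of the mundane details does the strict prediction succeed - which is exactly the burden the nonmonotonic solution below removes.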
In this paper I outline a solution to the initiation problem. Intuitively, it is the problem of having to specify many mundane details - such as the gun having a firing pin, there being air, the bullets being made out of lead, and so on - in order to make a single prediction (in our case, that a noise will follow the shooting).

From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved.

The basic solution is to allow nonmonotonic inferences. We conclude that a noise will follow the shooting, but retract that if we learn that the scenario takes place in a vacuum. Several nonmonotonic logics have appeared in the literature. I will discuss them briefly in the last section, but at this point let me just say that none of them have the right properties for our purposes. The rest of the paper gives the details of the appropriate nonmonotonic logic, the logic of chronological ignorance. Beside a solution to the particular problem, the paper offers two more general contributions. First, it offers a uniform and flexible way of constructing nonmonotonic logics (either classical or modal). Second, it suggests a semantical account of causation; in the full version of the paper, and in [11], I make that account explicit.

2 Chronological Ignorance

There are in principle two ways to resolve the initiation problem. The first is a syntactic one: we treat the assertion that "the gun was loaded and then fired" merely as shorthand, as an abbreviation of a much richer set of assertions, including the fact that it was not fired in the interim, that there is air, that the gun has a firing pin, and so on. This way of explaining away the initiation problem requires that we actually provide a translation rule, which expands an arbitrary abbreviated theory into the full-blown one. I will take another route, the semantic one.
The semantic solution of the initiation problem is an instruction to interpret the theory differently than it usually is. In other words, it provides a new meaning to the assertions.

2.1 Definitions: TETL, c.m.i. models

In order to discuss semantics, we first have to fix a language for representing temporal information. As was mentioned in the introduction, I will use a toy language in this paper.

Definition 1. The Toy Temporal Logic (TTL) is defined as follows. Assume a set of primitive propositions Φ, and a set of time-point symbols Θ. Atomic formulas are those of the form TRUE(t,p), where p∈Φ and t∈Θ. The set of all formulas is the boolean closure of the atomic ones, that is, their closure under '¬' and '∧' (quantification over time points would be a simple addition, but I will not even allow that here). The semantics of formulas is obvious: the meaning of a primitive proposition is a set of time points, and for any interpretation V = (V1,V2), V ⊨ TRUE(t,p) iff V1(t) ∈ V2(p). The meaning of more complex formulas is defined inductively in the usual way. As was also mentioned in the introduction, we assume a fixed interpretation of time, namely that of the integers, and so we can use the syntactic t and the semantic V1(t) interchangeably. Given more space I would be able to motivate the following introduction of a modal operator. Since I don't, I'll have to refer the reader to the full paper, assuring him at this point that the transition is not thoughtless extravagance on my part, but rather a calculated and advantageous step.

Definition 2. Toy Epistemic Temporal Logic (TETL) is TTL augmented by the modal operator K. We intend the usual meaning for this operator, as it has been used in recent years: Kφ is read 'φ is known', and, assuming the now-standard Kripke semantics, we will say that Kφ is true in a world exactly if φ is true in all worlds accessible from that particular world.
We furthermore assume an S5 system, so that possible worlds form equivalence classes.¹ For more details on the modal logic of knowledge see, e.g., [2].

Definition 3. Atomic knowledge sentences are those of the form K(TRUE(t,p)) or of the form K(¬TRUE(t,p)). We use K(t,p) as an abbreviation for K(TRUE(t,p)), and K(t,¬p) as an abbreviation for K(¬TRUE(t,p)). We are now in a position to define the notion of chronological ignorance.

Definition 4. A (Kripke) structure M1 is chronologically more ignorant than a structure M2 if there exists a time t0 such that
1. the two structures agree on all atomic knowledge sentences K(t,φ) such that t<t0 (wrt the global interpretation of time; I'll omit this comment in the future),
2. for any atomic knowledge sentence χ = K(t0,φ), if M1 ⊨ χ then also M2 ⊨ χ,
3. there exists an atomic knowledge sentence χ = K(t0,φ) such that M2 ⊨ χ but M1 ⊭ χ.

Definition 5. A structure M is a chronologically maximally ignorant (c.m.i.) model of a formula Φ if M is a model of Φ and if there is no other model of Φ that is chronologically more ignorant than M. Notice that chronological maximal ignorance is nonmonotonic; a c.m.i. model of Φ1 ∧ Φ2 need not be, and usually is not, a c.m.i. model of Φ1.

¹In this paper I will interpret the modal operator epistemically. In fact, in this context I have an alternative and, I believe, better interpretation of the modality. Going into that, however, would be too lengthy, and I will reserve that discussion for the full paper.

2.2 The shooting scenario revisited

Armed with these definitions, let us reexamine the shooting scenario. First, we formulate the theory in TETL. There is more than one way this can be done; I choose the following axiom and axiom schemata for reasons that will become apparent soon.
1. K(1,loaded)
2. K(5,fire)
3. K(i,loaded) ∧ K(j,fire) ∧ ¬K(l,fire) ∧ ¬K(j,vacuum) ∧ ¬K(j,no-firing-pin) ∧ ¬K(j,marshmallow-bullets) ∧ ... ∧ ¬K(other "weird" conditions) ⊃ K(j+1,
noise), for all i<l<j.

Axioms 1 and 2 can be thought of as the boundary conditions of the scenario. Axiom schema 3 represents "physics," in this case consisting of a single causal rule. It says that firing after loading causes a noise, unless certain conditions obtain which "disable" this particular rule. What do c.m.i. models of this theory look like? There are many different such models, but they have one thing in common: they all satisfy the same atomic knowledge sentences. These are the sentences K(1,loaded), K(5,fire) and K(6,noise) - exactly the ones we would have liked. In fact, this is no coincidence. In the next subsection I will identify a class of theories all of which have this property. Notice a certain tradeoff that is taking place here. Consider, for example, the conjunct ¬K(j,vacuum) in the causal rule. We could replace this conjunct by a conjunct K(j,¬vacuum), but the result would be slightly different. In the theory as we have it above, we need not say anything about there being air in order to be able to infer that there will be a noise after the firing. On the other hand, if there is no air we had better state that fact explicitly in the initial conditions, otherwise we will erroneously conclude that there will be a loud noise. If we changed the formulation as we have just described, we would be in the exact opposite situation. The principle underlying our logic may be called the ostrich principle, or the what-you-don't-know-won't-hurt-you principle. If K(t,φ) appears on the l.h.s. of a causal rule then we have in effect set the default of φ to be false, since if we say nothing about φ then K(t,φ) is false. Notice, however, that we have set the default to false only as far as this particular rule is concerned. On the other hand, if ¬K(t,¬φ) appears on the l.h.s. of a causal rule then we have in effect set the default of φ to be true, as far as this particular causal rule is concerned.
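The way these c.m.i. sentences can actually be enumerated - sweeping forward in time and adding only forced knowledge, as Section 3.2 makes precise - can be sketched as follows. The rule encoding and the finite horizon are our own simplifications of axioms 1-3:

```python
# Forward sweep computing the atomic knowledge sentences of the
# c.m.i. models of the toy shooting theory. A rule fires only when
# its K-antecedents are already known and none of its "weird"
# disabling conditions are known -- the what-you-don't-know-
# won't-hurt-you principle.

boundary = {(1, "loaded"), (5, "fire")}
WEIRD = ("vacuum", "no-firing-pin", "marshmallow-bullets")

def rule_conclusions(known, j):
    """Axiom schema 3: loaded earlier + fired at j + no known
    weirdness at j  =>  noise at j+1."""
    loaded_earlier = any(p == "loaded" and i < j for i, p in known)
    weird_known = any(t == j and p in WEIRD for t, p in known)
    if loaded_earlier and (j, "fire") in known and not weird_known:
        return {(j + 1, "noise")}
    return set()

known = set()
for t in range(10):                 # sweep forward in time
    known |= {s for s in boundary if s[0] == t}
    known |= rule_conclusions(known, t)

print(sorted(known))   # [(1, 'loaded'), (5, 'fire'), (6, 'noise')]
```

Adding, say, (5, "vacuum") to the boundary conditions makes the disabling conjunct fire and the noise disappears - the nonmonotonic retraction described above.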
Which alternative is better depends on what happens more often. If shooting scenarios rarely take place in a vacuum, as indeed is the case in everyday life, then we are better off sticking with the original theory; we will not need to mention the atmospheric conditions, except on those unusual occasions when things indeed take place in a vacuum.

3 Causal theories

In general, a theory might have many different c.m.i. models, or none at all. However, it was demonstrated that in at least one case, the shooting example, all c.m.i. models are essentially the same, and furthermore that it is exactly the model we intend for our theory. In this section I pin down the discussion further by giving general conditions under which we can expect chronological ignorance to be useful. Intuitively speaking, the reason the concept was useful in the shooting example is that events in that domain only had influence on the future: loading and firing after t0 could not affect any noises before t0. This property is common to all theories which we intuitively think of as causal; causes must precede their effects.

3.1 Defining causal theories

Definition 6. A toy causal theory is a collection of sentences in TETL which can be divided into two subcollections (in the following, [¬] means that the negation sign may or may not appear, and p, with or without a subscript, is a primitive proposition):
1. "Boundary conditions:" a collection of sentences of the form K(t,[¬]p).
2. "Causal rules:" a collection of sentences of the form Φ ∧ Θ ⊃ K(ti,[¬]p), where Φ is a nonempty conjunction of sentences K(tj,[¬]pj) such that tj<ti, and Θ is a (possibly empty) conjunction of sentences ¬K(tj,[¬]pj) such that tj<ti.

These toy causal theories embody a few simplifications beyond the ones made in the underlying temporal logic.
First, since we are interpreting formulas over time points and not time intervals, "causes" (i.e., the conjuncts of Φ) cannot overlap in time with their "effect". Furthermore, we even prohibit simultaneity of cause and effect (since we demand tj<ti in the causal rules). This is clearly too limiting, and general causal theories do not have these limitations. They are discussed in [11].

3.2 The simplicity of causal theories

When discussing the simple model-theoretic properties of the shooting scenario, I claimed that those properties were no coincidence. I will now make the statement more concrete.

Theorem 1. The atomic knowledge sentences satisfied by a c.m.i. model of a toy causal theory consist of the boundary conditions and a subset of the r.h.s.'s of the causal rules.

Corollary 2. All c.m.i. models of a finite toy causal theory satisfy a finite number of atomic knowledge sentences.

Theorem 3. (The 'unique' c.m.i. model property.) All the c.m.i. models of a toy causal theory satisfy the same atomic knowledge sentences.

The simple c.m.i. model theory of causal theories makes them also very easy to reason about. The result given here refers to toy causal theories, but extends easily to general causal theories. The general argument is that in order to enumerate the atomic knowledge sentences, all you need to do is "sweep forward in time." Since you'd like to know as little as possible for as long as you can, as you move forward in time you add only that knowledge that is absolutely necessary in light of your knowledge and ignorance so far. The particular form of causal theories guarantees that future knowledge and ignorance will not affect past knowledge and ignorance. As a more specific example, we have the following

Theorem 4. The (unique and finite) set of basic knowledge sentences satisfied by any c.m.i. model of a finite toy causal theory can be computed in time O(n log n), where n is the size of the causal theory.

4 Related work

1.
There has been a considerable amount of work on nonmonotonic logics in AI. The best known systems are McCarthy's circumscription [6], Reiter's default logic [9], McDermott and Doyle's NML I [7] and McDermott's NML II [8]. The reader may ask why we cannot simply adopt one of those and be done with it. The short answer is that there has been much wishful thinking in this regard; in reality, almost none of those claiming that a particular nonmonotonic system captured the inferences they desire verified that it in fact did. This discrepancy between hopes and reality was recently made very clear when S. Hanks and D. McDermott tried to apply the first three systems to a simple problem in temporal reasoning, and none of them turned out to have the right properties [3] (see related paper in this volume). It is a direct corollary of the Hanks and McDermott experiment that none of the above systems can be used to achieve the effect of chronological minimality. The underlying problem is the crude criterion of what constitutes a "minimal model." Taking McCarthy's circumscription as an example, we have a "set inclusion" criterion: when you circumscribe a FO formula φ, you select models in which the extension of φ is not a superset of its extension in any other model. This turns out to be too crude a criterion of minimality for our purposes.

2. In all FO-based nonmonotonic logics one must specify explicitly what it is that is being minimized. For example, in circumscription one must supply the predicate to be circumscribed, and for the more exotic versions (e.g., parameterized) even more needs to be specified. Notice that in the logic of chronological ignorance, the object of minimization is defined once and for all: we (chronologically) minimize knowledge.

3. Recently, V. Lifschitz proposed a new form of circumscription called pointwise circumscription [4].
In that new formulation the minimality criterion is made much more flexible, and can be used to chronologically minimize the extension of a particular predicate (or set of predicates). It cannot, however, be used to emulate the notion of chronological ignorance, since one must still specify explicitly what it is that is being minimized. There is a way of combining pointwise circumscription with the "abnormality" predicate which bears an interesting relation to our logic, but I will refer the reader to the full paper for details.

There is much to say about the relation of this work to previous work in computer science and philosophy, but again, given the space limitations, I will only be able to briefly mention a few points.

4. The discussion in this paper has been entirely model theoretic. One of the elegant features of circumscription, in either McCarthy's original formulations or Lifschitz' recent ones, is that it comes along with a circumscription axiom, a second-order axiom that when added to the theory achieves the effect of limiting the models to the "minimal" ones (in the relevant sense of minimality). The question is, though, since we understand the model theory anyway, what do the various (extremely ingenious) circumscription axioms add to our understanding? It would seem that those would be worthwhile only if there were a way to use them to generate automatic inferences, or if they generalized any results on (say) chronological minimization to a larger class of nonmonotonic logics. I am skeptical of the first possibility: the only uses of circumscription to date have been manual and incredibly simplistic. It seems that at this point the burden of proof that the circumscription axiom is of any use is on its vendors. The second possibility, however, that the circumscription axiom (and I have in mind Lifschitz' new version) would suggest results that transcend the particular criterion of chronological minimality, looks more promising.

5.
The inadequacy for our purposes of the set-inclusion minimality criterion extends to recent logics of minimal knowledge, such as those discussed by Moore, Konolige, Halpern and Moses, and Vardi.

6. Causation has been the subject of much discussion in philosophy. I think it is fair to say that there has not yet been a satisfactory account of the concept, which plays such a prominent role in our everyday thinking. I am now in a position to give a precise semantic account of causation, which appears not to suffer from the shortcomings of previous accounts. Since I do not have the space to give the details, I will reserve those for a fuller version of this paper. Here I will only claim that the expressiveness of causal theories on the one hand, and their simplicity on the other, explain why causal reasoning is so pervasive in everyday life.

5 Summary

The main messages of this paper have been the following.
1. One problem that arises when one tries to reason about change both efficiently and rigorously is the initiation problem. The logic of chronological ignorance offers one solution.
2. Causal theories have nice model-theoretic and complexity properties, which is one explanation why the concept of causation plays such a prominent role in everyday thinking.
3. Nonmonotonic logics are constructed semantically, by deciding on the minimality criterion for models. Here one such criterion was discussed; in [11] I discuss another.

Acknowledgements. This work was supervised by Drew McDermott. Beside Drew, I have benefitted from discussions with more people at Yale and outside it than I could possibly list, such as Joe Halpern, Steve Hanks and Vladimir Lifschitz.

Bibliography

[1] D. G. Bobrow (Ed.), Special Volume on Qualitative Reasoning and Physical Systems, Artificial Intelligence, 24/1-3, December (1984).
[2] J. Y. Halpern and Y. Moses, A Guide to the Modal Logics of Knowledge and Belief: Preliminary Draft, Proc. IJCAI-85, IJCAI, 1985, pp. 480-490.
[3] S.
Hanks and D. V. McDermott, Temporal Reasoning and Default Logics, Technical Report YALEU/CSD #430, Yale University, October 1985.
[4] V. Lifschitz, Pointwise Circumscription, Manuscript, 1986.
[5] J. M. McCarthy and P. J. Hayes, Some Philosophical Problems From the Standpoint of Artificial Intelligence, Readings in Artificial Intelligence, Tioga Publishing Co., Palo Alto, CA, 1981, pages 431-450.
[6] J. M. McCarthy, Circumscription - A Form of Non-Monotonic Reasoning, Readings in Artificial Intelligence, Tioga Publishing Co., Palo Alto, CA, 1981, pages 466-472.
[7] D. V. McDermott and J. Doyle, Nonmonotonic Logic I, Artificial Intelligence, 13 (1980), pp. 41-72.
[8] D. V. McDermott, Nonmonotonic Logic II: Nonmonotonic Modal Theories, JACM, 29/1 (1982), pp. 33-57.
[9] R. Reiter, A Logic for Default Reasoning, Artificial Intelligence, 13 (1980), pp. 81-132.
[10] Y. Shoham, Ten Requirements for a Theory of Change, New Generation Computing, 3/4 (1985).
[11] Y. Shoham, Reasoning about Change: Time and Causation from the Standpoint of Artificial Intelligence, Ph.D. Thesis, Yale University, Computer Science Department, 1986.
[12] Y. Shoham, Reified Temporal Logics: Semantical and Ontological Considerations, Proc. 7th ECAI, Brighton, U.K., July 1986.
Propagating temporal constraints for scheduling

Jean-Francois Rit
LIFIA, BP 68, 38402 Saint-Martin-d'Heres, FRANCE
ITMI, Chemin des pres, ZIRST, 38240 Meylan, FRANCE

Abstract

We give in this article a general frame for propagating temporal constraints over events, these events being exclusively considered as sets of possible occurrences (SOPOs). These SOPOs are the numerical expression of an uncertainty about the exact occurrence of an event, while this exact occurrence is constrained by the possible occurrences of other events. This key problem of scheduling is an instance of the consistent labeling problem and is known to be NP-complete. We introduce a graphical representation of SOPOs which is a useful tool for understanding and helping to solve the problem. We give a constraint propagation algorithm which is a Waltz-type filtering algorithm. Theoretically, it does not discard all the inconsistent occurrences; however, under a number of relatively weak assumptions, the problem can be transformed into a solvable one.

1 Introduction

The basic objects of our model are events linked with symbolic temporal relations. Events are characterized by sets of possible occurrences, where an occurrence is defined as the interval during which an event happens. The function of a temporal module for scheduling is to modify these sets of possible occurrences so that they become compatible with the symbolic relations which act as constraints upon them. Suppose, for example, you plan to meet someone during your lunch-break. The two events lunch-break and meeting are linked by the relation during. Suppose your lunch-break lasts from half an hour to an hour, between 11:30 and 13:30, and your meeting lasts at least half an hour, ending before 12:30. This defines the initial sets of occurrences of lunch-break and meeting. In order to be consistent with the constraint during, the meeting must start after 11:30 and the lunch-break must start before 12:00. The sets of occurrences must be modified in order to build a coherent schedule. This operation consists in solving what we call a constrained occurrences problem. It must be done whenever some numerical scheduling is involved.
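The lunch-break/meeting derivation can be reproduced by simple bound propagation over start, end and duration intervals (times in minutes after midnight). This is only a numeric sketch of the idea; the representation is our simplification, not the paper's full SOPO machinery:

```python
# Numeric sketch of the lunch-break/meeting example. Each event is
# abbreviated to (lo, hi) bounds on its start, end and duration.

INF = 10**9

lunch   = {"start": (690, INF),  "end": (-INF, 810), "dur": (30, 60)}
meeting = {"start": (-INF, INF), "end": (-INF, 750), "dur": (30, INF)}

def tighten(e):
    """Make the start/end/duration bounds of one event mutually consistent."""
    s_lo, s_hi = e["start"]; e_lo, e_hi = e["end"]; d_lo, d_hi = e["dur"]
    s_lo = max(s_lo, e_lo - d_hi); s_hi = min(s_hi, e_hi - d_lo)
    e_lo = max(e_lo, s_lo + d_lo); e_hi = min(e_hi, s_hi + d_hi)
    e["start"], e["end"] = (s_lo, s_hi), (e_lo, e_hi)

def during(inner, outer):
    """inner DURING outer: outer starts no later than inner,
    and inner ends no later than outer."""
    inner["start"] = (max(inner["start"][0], outer["start"][0]), inner["start"][1])
    outer["start"] = (outer["start"][0], min(outer["start"][1], inner["start"][1]))
    inner["end"]   = (inner["end"][0],   min(inner["end"][1],   outer["end"][1]))

for _ in range(3):               # a few passes reach a fixpoint here
    tighten(lunch); tighten(meeting); during(meeting, lunch)

print(meeting["start"][0])       # 690 -> the meeting must start after 11:30
print(lunch["start"][1])         # 720 -> the lunch-break must start by 12:00
```

The two derived bounds are exactly the two conclusions drawn in the text; the paper's Waltz-type filtering algorithm generalizes this kind of tightening to full SOPOs and to networks of relations.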
Things may seem simple on this example, but the situation can become considerably more complex when more relations are considered. When you want to define how a task must be done, you have to solve two different sub-problems:

1. First you have to structure the task. To perform this, you define the various elementary subtasks which allow the performance of the main task, and what resources will be needed. Furthermore, the logical relations which must be kept between the subtasks are stated. Some of them are symbolic temporal relations such as precedence relations.

2. Then you have to detail the actual execution of the task, that is when the subtasks will take place and the resources will be used. The main output of this activity is a schedule which keeps some symbolic information from the first step and gives some numerical information. In particular, the duration of the tasks is a key data in schedules.

The aim of this article is to provide a theoretical model for the definition and propagation of numerical temporal constraints for the second subproblem. Its purpose is to be a general framework for building a "temporal module" in a scheduling system. What the scheduling system should otherwise be is not in the scope of this paper. A general architecture where the place of a temporal module is clearly defined can be found in [3] (Delesalle Descotte, 86) for example.

A formal definition of the problem is given in Section 2. Section 3 introduces a graphical point of view which helps understanding how a constraint propagation algorithm can work. In Section 4 we discuss the validity of this model and show that it is not possible to discard every inconsistent occurrence unless some restrictions on the sets of occurrences are made and disjunctive relations are "eliminated" in some way. We also evaluate the computational complexity of the propagation.
In Section 5 we relate our work to others', especially Allen's, who introduced a complete taxonomy of symbolic relations among intervals, and Vere's, who introduced the concept of window, which is a particular kind of set of occurrences. Finally we conclude with suggestions on how the model could be strengthened.

2 A Formalization of the Constrained Occurrences Problem

2.1 What Events Are

Events are the basic objects of our time module. An event can be described as the association of a logic predicate and an occurrence. The semantics of the predicate is outside the scope of this temporal module. It can denote a fact, the existence of a process, the execution of a task, whatever the abstraction level. On the contrary, the occurrence is the significant data at this relatively low-level stage of reasoning. An occurrence is a one-dimensional interval; its intuitive meaning is "the moments when the predicate holds".

From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved.

The case of events happening over multiple intervals, like having-lunch (which happens every day), is not considered here. At a higher level, such events can be easily decomposed into elementary subevents happening over single intervals. Handling events with a uniquely defined occurrence is not very interesting. In most cases, the occurrence of an event is numerically constrained: it belongs to a domain where the duration, beginning and end of the occurrence are constrained. This domain is called the set of possible occurrences (SOPO).
It should now be obvious that the meaning of the predicate is not relevant from the temporal module point of view.

2.2 Linking Events with Temporal Relations

The second type of input data are relational data. They are temporal links between events and are mainly derived from higher level descriptions of the world. For example, suppose you want to buy something; then your action of paying happens during buying, because paying is hierarchically a subtask of buying. Moreover, paying happens before getting the object, because paying is a condition of getting the object.

For the temporal module, all the relations are translated into a low-level set of primitives which describe the relationship between two intervals. These primitives are before, during, overlap... A complete enumeration was given by Allen [1]. However, these primitives are used to link two intervals, and we want to link SOPOs. This requires some formalization.

A temporal relation is a boolean function defined over the SOPOs of two events (we will also say that it is defined between these events). The arguments of such a relation are therefore two intervals:

    R12 : O1 × O2 → {0, 1}
    (o1, o2) ↦ R12(o1, o2)

where R12 is a temporal relation, O1 and O2 are SOPOs, and o1, o2 are occurrences. This relation expresses a constraint over the parameters of o1 and o2 (beginning, end and duration). For example, if R12 is the relation before, b(o) the beginning of o and e(o) the end of o:

    R12(o1, o2) = 1  ⟺  b(o2) > e(o1)

Let O1, ..., On be n SOPOs and o1, ..., on be n occurrences such that ∀i ∈ [1, n], oi ∈ Oi. The set {oi, i ∈ [1, n]} of occurrences is a solution of a set of relations {Rjk, j, k ∈ [1, n]} iff:

    ∀j, k ∈ [1, n], Rjk(oj, ok) = 1

Semantically, a solution of a set of temporal relations is a description of a world where each event has one occurrence and where all the temporal relations are satisfied.
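The definitions above translate directly into code. A small illustrative sketch (our own helper names, not the paper's), using the buying/paying example: relations are boolean functions over occurrence pairs, and a candidate assignment is a solution iff every listed relation holds.

```python
# Occurrences as (begin, end) pairs; b(o) and e(o) read off the bounds.
def b(o): return o[0]
def e(o): return o[1]

# A temporal relation is a boolean function over two occurrences, e.g.:
def before(o1, o2): return b(o2) > e(o1)
def during(o1, o2): return b(o1) > b(o2) and e(o1) < e(o2)

# {o_i} is a solution of {R_jk} iff every listed R_jk(o_j, o_k) holds.
def is_solution(occs, relations):
    return all(R(occs[j], occs[k]) for (j, k), R in relations.items())

paying, getting, buying = (10, 15), (20, 25), (5, 40)
rels = {(0, 1): before,   # paying before getting the object
        (0, 2): during,   # paying during buying
        (1, 2): during}   # getting the object during buying
print(is_solution([paying, getting, buying], rels))  # True
print(is_solution([getting, paying, buying], rels))  # False: paying after getting
```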
2.3 The Constrained Occurrences Problem

Let e1, ..., en be events, O1, ..., On their SOPOs and R a set of relations between the events. The constrained occurrences problem is: discarding from O1, ..., On the occurrences which do not belong to any solution.

Of course, finding all the solutions of R would be more satisfying, but it is impossible to explicitly represent them since the SOPOs contain an infinite number of elements. It is then necessary to keep the solutions implicitly in the SOPO-based representation. In order to "extract" a solution, you must choose one occurrence o in a SOPO. If the constrained occurrences problem has been solved, o belongs to at least one solution. Choosing o is equivalent to defining a new SOPO, thus a new constrained occurrences problem. Solving this new problem will give all the solutions containing o. Naturally, you can more generally choose a subset of a SOPO. As long as R is not changed, you know that the new constrained occurrences problem has some solutions.

3 A Graphical Representation of Constrained SOPOs

3.1 A 2-D representation of occurrences

[Figure 1: one-dimensional representation of occurrences]

An occurrence being a numerical interval, a one-dimensional line is an obvious graphical aid for representing occurrences and relations between them (fig 1). However, a 1-D representation is ambiguous, because SOPOs have to be represented with intervals whereas they are sets of intervals. For example, the interval on fig 1 could represent the set of occurrences beginning after 2 and ending before 6, or the set of such occurrences with a duration equal to 3, or the set of occurrences whose beginning cannot belong to [4,5]. This ambiguity comes from the 2-dimensional nature of an occurrence: it is completely determined by two parameters, beginning and end. This is why we use, as a graphical aid, the plan of occurrences with beginning and end axes (fig 2). As occurrences begin before they end, all possible ones belong to the dotted region of fig 2.
The main diagonal has a particular meaning: it is the locus of occurrences whose beginning equals the end, thus the locus of dates. This links the 2-D representation with the 1-D one in the following manner: the beginning and the end of any occurrence o are dates and can then be "projected" onto the diagonal (fig 2). The segment defined by these two dates is the set of all the "instants" forming o; in other words, this segment is the 1-D representation of o. It is thus possible to switch easily from one point of view to the other: 2-D for sets of occurrences, 1-D for occurrences. Fig 3 is an unambiguous representation of the SOPOs that fig 1 cannot represent.

[Figure 2: two-dimensional representation of occurrences]
[Figure 3: Fig 1 revisited with a 2-D representation]
[Figure 4: A mapping of the plan using Allen's primitives]
[Figure 5: An example of a disjunctive constraint]
[Figure 6: An example of a conjunctive relation]

3.2 2-D Representation of Relations and SOPOs

Let R be a temporal relation and o an occurrence; the set of all the occurrences verifying the relation R with o, {x | R(x, o) = 1}, is a generally easily representable region in the plan. We will call it the region allowed by o and R. Fig 4 is a "complete" mapping of the plan with the set of temporal relations given by Allen [1]. Any occurrence on it can be temporally related to o. Of course, there are obviously other ways of mapping, with more or fewer primitives, depending on how complex the temporal module should be. We will keep this one in the following. Disjunctive relations can be represented as the union of the regions allowed by the parts of the disjunction (fig 5). In the same way, conjunctions are represented through the graphical intersection of regions (fig 6). A SOPO is also a region of the plan.
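The mapping of fig 4 can be mimicked in code: the position of an occurrence x in the (begin, end) plan relative to a reference occurrence o determines which of Allen's primitives relates them. A hypothetical sketch (names ours), comparing endpoints exactly as the regions of the plan do, over integer endpoints:

```python
def allen(x, o):
    """Classify occurrence x = (begin, end) against reference occurrence o,
    mirroring the regions of Figure 4 (helper names are ours)."""
    bx, ex = x
    bo, eo = o
    if ex < bo:
        return "before"
    if ex == bo:
        return "meets"
    if bx == eo:
        return "met-by"
    if bx > eo:
        return "after"
    if bx == bo and ex == eo:
        return "equals"
    if bx == bo:
        return "starts" if ex < eo else "started-by"
    if ex == eo:
        return "finishes" if bx > bo else "finished-by"
    if bo < bx and ex < eo:
        return "during"
    if bx < bo and eo < ex:
        return "contains"
    return "overlaps" if bx < bo else "overlapped-by"

print(allen((1, 4), (3, 5)))   # overlaps
print(allen((4, 6), (3, 5)))   # overlapped-by
```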
The simplest SOPO (except the trivial occurrence set) is the window, using Vere's (one-dimensional) [7] terminology. Occurrences in a window must begin after an earliest start time and end before a latest finish time. If their duration is not constrained, the window has a triangular shape; if their duration is given, the window is a segment (fig 7). The SOPO of a given event can theoretically have any shape. However, in the following we will speak only of one sort of SOPO, unless otherwise stated, namely the generalized window, where the beginning, end and duration of an occurrence are independently bounded. This simple representation covers a fair number of cases. The shape of such a SOPO is on fig 8.

[Figure 7: windows]
[Figure 8: a generalized window]
[Figure 9: Propagation of a constraint on a SOPO]

3.3 Propagation of Temporal Constraints over SOPOs

Returning to the problem of temporal relations, a SOPO can be considered as a disjunctive clause where each part of the clause is a single occurrence. In this respect, given a relation R and a SOPO O1, we can define a region allowed by O1 and R as the union of the regions allowed by each occurrence in O1 and R (formally, this allowed region is {x | ∃o ∈ O1, R(x, o) = 1}). If R is defined between O1 and a SOPO O2, this means that any occurrence of O2 that does not belong to this region does not belong to any solution, whereas the other occurrences may do so. From a dual point of view, the intersection of all such regions, {x | ∀o ∈ O1, R(x, o) = 1}, defines a "certain" region: all the elements of O2 in this region belong to a solution, whatever the choice of an element in O1, whereas the elements which do not belong to this certain region might not belong to any solution (fig 9).
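The existential "allowed" region and the universal "certain" region can be contrasted on a toy example. The discretization below is our own device (the paper's SOPOs are continuous regions, not finite sets), but the two set comprehensions track the two formulas directly:

```python
from itertools import product

def after_rel(x, o):
    """R(x, o) = 1 iff x begins after o ends (a sample primitive)."""
    return x[0] > o[1]

def window(begins, ends, dmin, dmax):
    """Finite stand-in for a generalized window SOPO."""
    return {(b, e) for b, e in product(begins, ends)
            if b < e and dmin <= e - b <= dmax}

O1 = window(range(0, 4), range(1, 6), 1, 3)
candidates = window(range(0, 10), range(1, 11), 1, 3)

# Region allowed by O1 and R: compatible with SOME occurrence of O1.
allowed = {x for x in candidates if any(after_rel(x, o) for o in O1)}
# "Certain" region: compatible with EVERY occurrence of O1.
certain = {x for x in candidates if all(after_rel(x, o) for o in O1)}

print(certain <= allowed)   # True: the certain region lies inside the allowed one
print(certain == allowed)   # False: some allowed occurrences are not certain
```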
This is the basis of a temporal constraint propagation. Each time a SOPO Oi is constricted, the regions allowed by Oi and any relation Rij involving Oi are constricted. If the SOPOs Oj such that Oi Rij Oj are therefore constricted, the propagation must be furthered. The general structure of the algorithm is thus:

begin
  while S ≠ ∅
    for i = 1, n
      if Oi ∈ S
        for j = 1, n
          if Oj ∩ A(Oi, Rij) ≠ Oj
            Oj ← Oj ∩ A(Oi, Rij)
            S ← S ∪ {Oj}
          endif
        endfor
        S ← S − {Oi}
      endif
    endfor
  endwhile
end

where S is the set of modified SOPOs to be checked and A(Oi, Rij) is the region allowed by Oi and Rij.

An interesting feature of this method is that disjunctive constraints can be propagated. Disjunctive constraints can create "holes" in a SOPO. In fig 10, the SOPO O1 creates a hole in the window O2, which therefore loses its connectedness. However, constraints involving O2 can still be propagated, furthering a kind of common information about the two parts of O2.

[Figure 10: Making a hole in a SOPO — the occurrences discarded by propagating "before(O1, O2) or after(O1, O2)" create a hole in the window]

4 Evaluating the Algorithm

4.1 Consistency

The constrained occurrences problem belongs to a class of problems which received much attention in Artificial Intelligence and Operational Research, namely the consistent labeling problem. In our case, the domain of the variables is not finite, because it is continuous. The algorithm we gave for temporal constraint propagation in section 3.3 is the same as Waltz's algorithm [8] used for scene analysis problems. According to Mackworth's terminology [4], it achieves only arc-consistency, because it only checks consistency between pairs of SOPOs. As defined in section 2.2, belonging to a solution is a global property of an occurrence. A local check between pairs of SOPOs is necessary but not sufficient. This is why such an algorithm is only a filtering algorithm; it "lets go by" inconsistent occurrences.
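The pseudocode above can be sketched in Python over finite candidate sets (a discretized stand-in for the continuous SOPOs; the data layout and relation encoding are our own assumptions):

```python
def before(o1, o2):
    return o2[0] > o1[1]   # o2 begins after o1 ends

def propagate(sopos, relations):
    """Arc-consistency filtering in the style of the paper's pseudocode.
    `sopos` maps event index -> finite set of candidate occurrences;
    `relations` maps ordered pairs (i, j) to boolean functions R(oi, oj).
    Both directions of each constraint should be listed."""
    S = set(sopos)               # SOPOs whose constriction must be propagated
    while S:
        i = S.pop()
        for (a, j), R in relations.items():
            if a != i:
                continue
            # Keep occurrences of Oj compatible with at least one choice in Oi.
            allowed = {oj for oj in sopos[j]
                       if any(R(oi, oj) for oi in sopos[i])}
            if allowed != sopos[j]:
                sopos[j] = allowed
                S.add(j)
    return sopos

# Two events, candidate occurrences (begin, end), constrained by before(e0, e1).
sopos = {0: {(0, 2), (1, 3), (4, 6)}, 1: {(1, 2), (3, 5), (7, 8)}}
relations = {(0, 1): before,
             (1, 0): lambda o1, o0: before(o0, o1)}
propagate(sopos, relations)
print(sopos[1])   # {(3, 5), (7, 8)}: (1, 2) cannot come after any occurrence of e0
```

As in the paper's version, a SOPO is re-queued only when it actually shrinks, so the loop terminates once a fixpoint is reached.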
In fact the consistent labeling problem is NP-complete [4]. However, if the SOPOs are generalized windows, and if the disjunctive relations are eliminated by enumerating their different terms, the constrained occurrences problem can be solved with an arc-consistency algorithm (see [5] for hypotheses a bit weaker than generalized windows). Such an elimination should come after the arc-consistency elimination; otherwise the interest of propagating disjunctions in our formalism would be greatly spoiled.

4.2 Complexity

We will consider here only the complexity of the arc-consistency algorithm, since it is the heart of the constraint propagation algorithm. We saw in section 3.3 that each time a SOPO is modified, a consistency check must be made. The complexity of the algorithm is then:

  number of constrictions × cost of a consistency check
  = (number of constricted SOPOs × number of constrictions per SOPO)
    × (number of relations per SOPO × cost of a consistency check between two SOPOs)
  = n × e × r × c

The evaluation of this expression is in fact quite difficult:

n, the number of events or SOPOs, is precisely defined and cannot be influenced.

e measures the efficiency of the constrictions; e = 1 means that each SOPO is modified once with a "one-shot", definitive constriction. Unfortunately, things can go very bad in the case of contradictory relations. For example, on fig 11, e = 4 for A and B: you must apply the constraints before(A,B) and before(B,A) four times in order to remove this inconsistent conjunction.

[Figure 11: A bad case for constraint propagation]

r is the average number of relations involving a given event. If r is big, the algorithm will be more costly, having more constraints to propagate. But the "information" of a constriction will be propagated more efficiently, thus lowering e.

c depends on the "shape" of the SOPOs, and not on their number. If the shape is very complicated (for example with a lot of holes), the global efficiency will be lowered.
In fact, c is directly bound to the number of disjunctions, since they make holes in SOPOs, thus complicating their shape. As an example, if r is biggest and e lowest, the complexity of the algorithm is cn². It is analogous to the complexity of arc-consistency algorithms, Nn², where N would be the number of elements in a SOPO. But we must repeat here that the true complexity can be highly variable.

5 Related Works

Many systems were designed to be temporal modules. They all suppose that occurrences are intervals (or their degenerate form: points). But they differ in the objects they describe best and thus belong to two families: numerical and symbolic systems.

Symbolic systems put the emphasis on the description of temporal relations. In [1] (Allen 83) a relation is a disjunction of primitives, which are very cleanly built in [2] (Allen Hayes 85) from the "meet" relation between two intervals. This representation allows a very easy computation of conjunctive relations (A r1 B ∧ A r2 B → A r3 B), which is in some way an intersection of combinations. Moreover, Allen gives a table for computing the composition of relations (A r1 B ∧ B r2 C → A r3 C). This allows the computation of path-consistency (according to Mackworth's terminology [4]), which is more complete than arc-consistency, hence an O(n³) complexity, where n is the number of events. However, the system only deals with symbols, so numerical data are not efficiently handled, especially duration.

On the other hand, numerical systems put the emphasis on the description of occurrences. The beginning and the end of an occurrence are explicitly represented with two numbers. DEVISER [7] (Vere 83) is a planning system where the windows were first introduced. They are compressed in the same way that we constrict our SOPOs. The hypotheses of DEVISER ensure that arc-consistency is sufficient to solve the constrained occurrences problem.
However, not all of Allen's primitives are representable (some are, with an astute use of the formalism) and disjunctions cannot be propagated. Moreover, the temporal relations are inside the system, since they are implicitly deduced from the plan structure, so the user has no access to them. Smith [6] proposes a temporal module for the ISIS job-shop scheduling system. ISIS builds a hierarchical task net with resources, and Smith's temporal module propagates the reservations of these resources through the net, also using window compressing. Here the plan structure is also the only way of specifying temporal relations, and disjunctions are not propagated.

6 Conclusion

We gave in this article a general frame for propagating temporal constraints over events, these events being exclusively considered as sets of possible occurrences (SOPOs). These SOPOs are the numerical expression of an uncertainty about the exact occurrence of an event, while this exact occurrence is constrained by the possible occurrence of other events. Making the SOPOs compatible with the temporal constraints is what we call a constrained occurrences problem. This key problem for scheduling is an instance of the consistent labeling problem and is known to be NP-complete. We introduced a graphical representation of SOPOs which enables the visualization of a constraint propagation algorithm. This algorithm is a Waltz-type filtering algorithm [8]; it does not remove every inconsistency, unless the SOPOs are of the "generalized window" type and all the disjunctions are removed.

We are developing our current reflections along two axes:

1. Making the algorithm more efficient. The complexity can be variable, depending on how effectively the constraints are propagated. In an ideal network of constraints, every SOPO is linked to the others so that only one constriction is sufficient (i.e. the whole information is propagated during the first shot along the links).
If there is a number of constrictions over one SOPO, this means that the temporal relations involving the SOPO are too weak in comparison with the implicit constraint of the net. For greater efficiency, the topology of the net should be changed, making explicit the actual constraints. Such a symbolic inference can be done using Allen's [1] symbolic propagation.

2. Using the temporal module for higher level scheduling concepts. Not all the solutions of a constrained occurrences problem are good ones. Qualifying solutions involves high level concepts such as robustness. A robust schedule should require minor adjustments when a perturbation occurs. In this respect, a graphical point of view might give visual "patterns" useful for detecting the presence of good solutions.

Acknowledgements

This paper benefited greatly from discussions with Y. Descotte and from the useful comments of J-C Latombe and J. Crowley.

References

[1] J. F. Allen, Maintaining knowledge about temporal intervals, Communications of the ACM, November 1983, Volume 26, Number 11, pp 832-843.
[2] J. F. Allen and P. J. Hayes, A common sense theory of time, in proc. IJCAI 85, Los Angeles, USA, August 1985, pp 528-531.
[3] H. Delesalle and Y. Descotte, Une architecture de systeme expert pour la planification d'activite (in French), in proc. 6th International Workshop on Expert Systems and their Applications, Avignon, France, April 1986, pp 903-916.
[4] A. K. Mackworth, Consistency in networks of relations, Artificial Intelligence 8 (1977), pp 99-118.
[5] J-F Rit, Vers une representation du temps pour la planification (in French), memoire de DEA, Institut National Polytechnique de Grenoble, June 1985.
[6] S. Smith, Exploiting temporal knowledge to organize constraints, Technical report CMU-RI-TR-83-12, Carnegie Mellon University, July 1983.
[7] S. Vere, Planning in time: Windows and durations for activities and goals, IEEE Trans. on PAMI, Vol. PAMI-5:3, May 1983, pp 246-267.
[8] D. L. Waltz, Generating semantic descriptions from drawings of scenes with shadows, MAC AI-TR-271, MIT, 1972.
A REPRESENTATION FOR TEMPORAL SEQUENCE AND DURATION IN MASSIVELY PARALLEL NETWORKS: Exploiting Link Interactions

Hon Wai Chun
Computer Science Department
Brandeis University
Waltham, MA 02254 U.S.A.

ABSTRACT

One of the major representational problems in massively parallel or connectionist models is the difficulty of representing temporal constraints. Temporal constraints are important and crucial sources of information for event perception in general. This paper describes a novel scheme which provides massively parallel models with the ability to represent and recognize temporal constraints such as sequence and duration by exploiting link-to-link interactions. This relatively unexplored yet powerful mechanism is used to represent rule-like constraints and behaviors. The temporal sequence of a set of nodes is defined as the constraints, or the temporal context, in which these nodes should be activated. This representation is quite robust in the sense that it captures subtleties in both the strength and scope (order) of temporal constraints. Duration is also represented using a similar mechanism. The duration of a concept is represented as a memory trace of the activation of this concept. The state of this trace can be used to generate a fuzzy-set-like classification of the duration.

I. INTRODUCTION

Massively parallel models of computation [1, 2] (also known as connectionist, parallel distributed processing, or interactive activation models) consist of large networks of simple processing elements with emergent collective abilities. The behavior of such networks has been shown to closely match human cognition in many tasks, such as natural language understanding and parsing [3], speech perception and recognition [4, 5, 6], speech generation, physical skill modeling, vision and many others. The use of such models provides cognitive scientists with experiment and simulation results in a finer level of detail than was previously possible.
In addition, various learning algorithms [7, 8, 9] have been developed to enable these networks to acquire knowledge through gradual adaptation. The distributed nature of some of these models enables network structures to be less sensitive to structural damage.

One of the major representational problems in massively parallel models is the difficulty in representing temporal constraints. These are constraints which control network activation based on temporal knowledge. Temporal constraints are important and crucial sources of information for event perception in general. This is especially true for tasks such as speech perception and schema selection. This paper describes a novel scheme which provides connectionist models with the ability to represent and recognize temporal constraints such as sequence and duration by exploiting link-to-link interactions.

II. OVERVIEW

Sequential constraints on a set of events are one form of temporal constraints. There are basically two types of knowledge about temporal sequences. One is how to generate a sequence of activations. The second is how to recognize a given sequence of events. Figure 1 shows a simple network structure [1] which can activate a predetermined sequence of events (E1 → E2 → ...). This type of network structure is useful in modeling schema execution such as physical motor control.

[Figure 1: Activating a sequence of events]

The second type of temporal sequence knowledge (recognizing a sequence of events) is somewhat more difficult to represent. The main problem is constructing a network structure which can represent temporal contexts in which nodes should be activated. A mechanism is needed which could recognize particular sequences of activation patterns over time. This problem arises most notably in modeling speech perception. For example, a word should be recognized only if its constituent acoustic segments are heard in the correct sequence.
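Figure 1's generator network can be approximated in a few lines. This is a toy rendering (our own gain and threshold values, not the paper's simulator): each active unit feeds its successor, so the units cross threshold, and hence "fire", in a fixed order.

```python
# Toy version of Figure 1: a chain of units where each unit's activation
# feeds the next, so events fire in the order E1, E2, E3, ...
def run_chain(n_units, steps, gain=0.5, threshold=0.9):
    act = [0.0] * n_units
    act[0] = 1.0                      # external input starts the sequence
    fired, order = [False] * n_units, []
    for _ in range(steps):
        new = act[:]
        for i in range(1, n_units):   # each unit accumulates its predecessor's output
            new[i] = min(1.0, act[i] + gain * act[i - 1])
        act = new
        for i in range(n_units):      # record the order of threshold crossings
            if act[i] >= threshold and not fired[i]:
                fired[i] = True
                order.append(i)
    return order

print(run_chain(3, steps=10))  # [0, 1, 2]: units cross threshold in sequence
```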
One approach to this problem is to construct time-buffered network structures in which each time slot contains an identical substructure [1, 6]. Conceptually, this type of network contains a copy of the event to be recognized at each particular slot in time in which this event might occur. There are many problems associated with this type of duplicated structure (either pre-wired or dynamic). Various techniques of dynamic structures and connections [10, 11] have been suggested to partly handle these problems. However, there is still an overhead in both computation and memory when using this approach.

Another approach is to approximate sequence constraints through lateral priming [5]: priming events that follow those that have already been activated. Using the example of word perception, this approach is only useful if there are several words with similar acoustic segments. In this case, the priming will help activate the most plausible word. However, in general, this approach will not inhibit the perception of a word when segments are given out of sequence.

This paper presents an alternative network representation which can recognize particular temporal sequences without using duplicated or dynamic structures. This paradigm is further extended to form a representation of temporal duration.

III. METHODOLOGY

The crux of this paper centers around the notion of representing interactions among the links to define temporal constraints. This idea is remotely analogous to neural networks, where synapses may be made between different parts of the neuron; for example between axon and dendrite, axon and axon, dendrite and dendrite, and axon and cell body [12]. In the model presented here, links may be made between node and node, as well as between node and link. The approach uses two types of link interaction.
Figure 2(a) shows one type, in which the activation flow from one node to another (node A to B) is preconditioned by a third node (node C). The association between nodes A and B can be formed only if node C is also activated. Figure 2(b) shows how this interaction is implemented. The intersection of the two links is represented by a CONSTRAINT node. This node implements the computation (described in detail in the next section) required to represent the link interactions. The letter "P", in figure 2(b), indicates the precondition input link.

[Figure 2: Precondition link]

The second type of interaction (figure 3) has just the reverse effect. Activation flows from node A to B, unless node C is already activated. In other words, node C inhibits the association between nodes A and B. The inhibiting link is called the exception input and is labeled with an "E" at the CONSTRAINT node.

[Figure 3: Exception link]

These two relatively unexplored yet powerful mechanisms can be used to represent rule-like constraints and behaviors. Link interactions represented by the CONSTRAINT node are by no means a simple binary enable/disable type mechanism. The CONSTRAINT node is robust enough to capture a continuum of interactions. For instance, the CONSTRAINT node scales the activation flow between nodes A and B according to the activation level of node C. In the precondition case, to account for noisy environments, a CONSTRAINT node can be programmed (by setting a positive bias level) to allow energy to trickle through, from node A to B, even when node C is not activated. This bias level can be thought of as defining the fuzziness or looseness of the precondition. The CONSTRAINT node reduces to a BINDER node [2] when the bias level is zero and there are no exception inputs.

A.
The Computation of the CONSTRAINT Node

Although the CONSTRAINT node was originally designed to represent link interactions, the computation is general enough to be used as a normal node in the network, simply by not having precondition or exception inputs. This allows a uniform node unit to be used throughout the network to represent nodes as well as link interactions. The following describes the computation of the CONSTRAINT node used in our simulations.

The main energy parameters of this node are p, the potential, and v, the output value. These depend on the following internal parameters:

  t - the threshold
  b - the bias level
  d - the decay
  I - the vector of normal inputs i1, ..., in
  P - the vector of precondition inputs p1, ..., pn
  E - the vector of exception inputs e1, ..., en

The functions to compute the new values are:

  p ← f(I, P, E, p, b, d)
  v ← g(p, t)

In the case where the CONSTRAINT node is used to represent a normal node in the network (i.e. when P and E are absent), the computation involved is defined as:

  p ← p(1 − d) + Σk (wk × ik),   [0 < wk ≤ 1]
  v ← if p > t then p else 0

where wk is the link weight on the link ik, and p and v are continuous values in [0, 1]. The potential is simply the previous potential scaled by a decay factor, plus the summation of weighted inputs. The output is equal to p with a threshold.

When the CONSTRAINT node is used to represent link interactions (i.e. with P or E), p becomes:

  p ← [Σj (wj × pj) − Σj (wj × ej) + b] × p′

where p′ is the potential without P and E. The new potential is equal to p′ scaled by the difference in the weighted inputs of P and E, plus the bias level.

B. Characteristics of the CONSTRAINT Node

To give a flavor of the type of constraints that can be represented using the CONSTRAINT node, a simple example from the monkey-and-banana problem is shown. In this problem, one of the rules might be expressed as:

  if [goal = possess-banana] then [action = grasp-banana]
  precondition: [location = at-banana]
  exception: [state = grasped-banana]

Figure 4 shows how this can be represented in network form. The CONSTRAINT node permits the association between the goal to possess the banana and the action to grasp the banana only if the monkey is at the banana location and has not already grasped the banana.

[Figure 4: Monkey and banana]

Imagine that, initially, the monkey wants to possess the banana but is not at the banana location. Although the POSS-B node is activated, there is no energy flow to the GRASP-B node (since the precondition is not present). The bias level of the CONSTRAINT node in this case is zero, since we want a strict precondition (i.e. the monkey cannot start to grasp the banana unless it is positively at the banana location). As the monkey moves towards the banana, the AT-B node gets activated (perhaps through visual perception) and energy gradually flows to the GRASP-B node, which triggers the grasping action. As the monkey performs this action, the GRASPED node is activated and inhibits the grasping action from continuing. The GRASP-B node is activated only after the precondition AT-B is activated, and is deactivated after the exception GRASPED. The CONSTRAINT node in this example behaves like a feedback mechanism to control physical motion.

IV. TEMPORAL SEQUENCE

The network structure used to represent/recognize a temporal sequence is conceptually an extension of the structure used in the monkey-and-banana problem. Figure 5 shows how a schema (S1), which consists of a sequence of three events (E1 → E2 → E3), can be recognized using CONSTRAINT nodes to represent link interactions.

[Figure 5: Representation for temporal sequence — layers, top to bottom: schema, event tokens, temporal constraints, event types]

The lowest layer of this network structure consists of the type nodes for the various events. The type is accessible by other schema structures as well.
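The CONSTRAINT-node update rules from section III.A, applied to the monkey-and-banana gating above, can be sketched in running code. This is only one interpretation: the clamping to [0, 1] and the exact gating arrangement are our own assumptions, since the text gives the equations but not these details.

```python
class ConstraintNode:
    """One interpretation of the CONSTRAINT node update rules in the text
    (names follow the paper: potential p, output v, threshold t, bias b,
    decay d; weighted normal, precondition and exception inputs)."""
    def __init__(self, t=0.1, b=0.0, d=0.2):
        self.t, self.b, self.d = t, b, d
        self.p = 0.0

    def step(self, normal, precond=(), exception=()):
        # Inputs are (weight, value) pairs, all values in [0, 1].
        p_prime = self.p * (1 - self.d) + sum(w * i for w, i in normal)
        if precond or exception:
            gate = (sum(w * i for w, i in precond)
                    - sum(w * i for w, i in exception) + self.b)
            p_prime *= max(0.0, min(1.0, gate))   # assumed clamping
        self.p = max(0.0, min(1.0, p_prime))      # assumed clamping
        return self.p if self.p > self.t else 0.0

grasp = ConstraintNode(t=0.1, b=0.0, d=0.2)
# Goal active but monkey not at banana: the strict precondition blocks flow.
v1 = grasp.step(normal=[(1.0, 1.0)], precond=[(1.0, 0.0)], exception=[(1.0, 0.0)])
# Monkey reaches the banana: energy flows to the grasping action.
v2 = grasp.step(normal=[(1.0, 1.0)], precond=[(1.0, 1.0)], exception=[(1.0, 0.0)])
print(v1, v2)  # 0.0 then 1.0
```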
Above this is the temporal constraint layer, which requires a particular temporal ordering of the types to occur before the event tokens can be activated. The schema structure is represented as a set of event tokens. The schema node remains activated, with a varying potential level, as long as the input events follow the sequence defined in the temporal constraint layer. For example, after E2 is activated, the token for E2 will be activated only if the token of the preceding event (E1) is already activated and the token of the next event (E3) has not yet been activated. In other words, the token for E2 will only be activated after E1 and before E3. Not only can this schema structure recognize a strict sequence of events, it is also flexible enough to accommodate variations such as missing events or events which are only weakly present. For example, in the case of word recognition, a particular instance of a phoneme might be present only in a very weak form or even absent, since speech input is often incomplete or only partially specified. There are three parameters which can adjust the overall behavior of the temporal constraints to accommodate these variations. First, there is the bias level in the CONSTRAINT node. A non-zero bias level allows energy to trickle through even in the absence of the precondition input. This is useful for cases where the precondition events may not always be present. Second, there is the strength of the precondition inputs (i.e., the link weight on the precondition input links). This weight reflects the probability that the precondition events would lead to the current event. The third parameter is the scope of the temporal constraints. It can be adjusted by having higher-order dependencies for precondition and exception inputs (i.e., not only dependencies on the preceding and next node but also on a larger temporal scope). In our experiments, these parameters were adjusted manually.
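The token-gating rule just described (a token fires only after its predecessor's token and before its successor's) can be illustrated with a deliberately simplified boolean sketch; the paper's nodes carry continuous activations, and the names below are ours:

```python
# Boolean simplification (ours) of the event-token gating in Figure 5:
# a token is activated only when its event type is active, the preceding
# token is already active, and the following token is not yet active.

ORDER = ["E1", "E2", "E3"]

def update_tokens(tokens, types):
    """tokens/types map event name -> bool; returns the new token states."""
    new = dict(tokens)
    for i, e in enumerate(ORDER):
        pre_ok = (i == 0) or tokens[ORDER[i - 1]]                    # precondition
        exc_ok = (i == len(ORDER) - 1) or not tokens[ORDER[i + 1]]   # exception
        if types[e] and pre_ok and exc_ok:
            new[e] = True
    return new

# Events arriving in the defined order activate all three tokens:
tokens = {e: False for e in ORDER}
for seen in (["E1"], ["E1", "E2"], ["E1", "E2", "E3"]):
    tokens = update_tokens(tokens, {e: (e in seen) for e in ORDER})
assert all(tokens.values())

# An out-of-order E2 (arriving before E1) is blocked by its precondition:
blocked = update_tokens({e: False for e in ORDER},
                        {"E1": False, "E2": True, "E3": False})
assert blocked["E2"] is False
```

In the continuous version, replacing the booleans with graded activations and a non-zero bias is what tolerates weak or missing events.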
However, it is conceivable that a learning algorithm could be used to fine-tune the parameters based on experience. This model suits word recognition quite well. In this case, the schema is a word in the lexicon and the events are phonemes. The model has been used successfully in constructing a word recognition system which recognizes letters and digits spoken by a single speaker [13]. The massively parallel computation was simulated on a Symbolics Lisp machine using the AINET-2 simulator [14]. This system is one of the first to successfully use real speech data in a speech system based on a massively parallel model. One of the main reasons for its success is the robustness of the temporal representation which this model can capture. The behavior of this model is similar, in certain respects, to a discrete Markov chain model. Both models traverse a discrete number of states over a discrete number of time intervals. The link weights on the precondition inputs in the temporal sequence model reflect the state transition probabilities in the Markov chain model. However, the state transition in the temporal sequence model is gradual over time, unlike the Markov chain. This permits smooth, graceful transitions between states based on the gradual accumulation of evidence to support the transition. The temporal sequence model also allows for higher-order dependencies other than just the previous state.

V. TEMPORAL DURATION

The network structure which represents/recognizes temporal duration uses a similar temporal constraint mechanism based on link interactions. The duration of a concept is represented as a memory trace of the activation of this concept.

Figure 6. Representation for temporal duration. (Layers: Fuzzy Set Classification, Duration Network, Input.)

Figure 6 shows the network structure which can measure the duration of the activation of the node "x".
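The behavior of such a duration network can be abstracted into a toy sketch (ours, not the paper's code): the memory trace advances one chain node per cycle while "x" stays active, and the trace depth is then read through fuzzy membership functions for short, medium and long; all thresholds below are illustrative assumptions.

```python
# Toy abstraction (ours) of the duration network: a trace depth in a
# 10-node chain stands in for the spreading activation, and is classified
# by fuzzy-set-like membership functions.

def trace_depth(x_signal, n=10):
    """How far along the 1..n chain the memory trace has spread."""
    depth = 0
    for x in x_signal:
        if x:
            depth = min(depth + 1, n)
    return depth

def fuzzy_duration(depth, n=10):
    """Fuzzy classification of the trace depth (illustrative thresholds)."""
    short = max(0.0, 1.0 - depth / 4.0)
    medium = max(0.0, 1.0 - abs(depth - 5.0) / 3.0)
    long_ = min(1.0, max(0.0, (depth - 6.0) / (n - 6.0)))
    return {"short": short, "medium": medium, "long": long_}

assert fuzzy_duration(trace_depth([1, 1]))["short"] == 0.5      # brief input
assert fuzzy_duration(trace_depth([1] * 5))["medium"] == 1.0    # medium input
assert fuzzy_duration(trace_depth([1] * 15))["long"] == 1.0     # long input
```

The graded overlap between the membership functions is what gives the classification its fuzzy-set character, rather than a hard cut-off on duration.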
Activation energy in this example network gradually spreads from the node labeled "1" (short duration) to the node labeled "10" (long duration) as node "x" remains activated. This is similar to a memory trace of how long "x" was active. The state of this trace can be used to generate a fuzzy-set-like classification of the duration. Figure 7 shows the activation plots for different durations of "x". Although knowledge of temporal duration is extremely important in event perception, this source of information has been seriously lacking in massively parallel models. One example, in speech recognition, in which duration information is useful is the distinction between the letters "b" and "v". If a labial stop is long, it is more likely to be a labial fricative that has been misidentified as a labial stop. The example network structure shown above only measures absolute duration according to a fixed scale. For relative duration, one needs an indication of the context to shift the attention of the fuzzy classification network.

Figure 7. Activation plots showing the fuzzy classification generated by the duration network for the node "x": (a) a short "x" input, (b) a medium "x" input, (c) a long "x" input. (Each panel plots the activation levels of the short, medium and long classification nodes over 20 cycles.)

VI. SUMMARY

This paper presented a novel approach to representing temporal sequence and duration based on capturing interactions among links. This model fills a significant gap in present massively parallel models.
In addition, the mechanism seems capable of representing a wide variety of constraints other than temporal constraints. Future research may include developing learning algorithms which could gradually improve or learn the link weights in highly structured networks such as those described in this paper. At present, the lack of suitable learning algorithms is the main limitation of this model. Another interesting topic is to explore how temporal constraints might fall out from learning in random or semi-random distributed networks. One recent approach uses the "error propagation" learning algorithm [8] on a recurrent distributed network to learn to complete sequences, i.e., given the initial part of a sequence, the network generates the rest of the sequence. However, the network structures that result from this process seem to be less flexible in accepting distorted inputs or inputs with varying durations.

ACKNOWLEDGEMENTS

I would like to sincerely thank Dave Waltz and the members of the Brandeis AI Group for their comments on this research, Tangqiu Li for his refreshing Eastern point of view, and Maurice Wong for cooperating with me in using real speech data to test the concepts described in this paper.

REFERENCES

[1] Feldman, J.A. and D.H. Ballard, "Connectionist Models and Their Properties," Cognitive Science, 6, 1982.
[2] Shastri, L. and J.A. Feldman, "Semantic Networks and Neural Nets," CS Dept., The University of Rochester, TR131, 1984.
[3] Waltz, D.L. and J.B. Pollack, "Massively Parallel Parsing," Cognitive Science, Vol. 9, No. 1, 1985.
[4] Chun, H.W., T. Li, J. Peng and X. Zhang, "Massively Parallel Approach to Chinese Speech Processing Problems," IEEE - Academia Sinica Workshop on Acoustics, Speech, and Signal Processing, Beijing, China, April 1986.
[5] Elman, J.L. and J.L. McClelland, "Speech Perception as a Cognitive Process," in N. Lass (ed.), Speech and Language: Vol. X, Orlando, Florida, Academic, 1984.
[6] McClelland, J.L. and J.L.
Elman, "The TRACE Model of Speech Perception," Cognitive Psychology, 18, 1986.
[7] Hinton, G.E., T.J. Sejnowski and D.H. Ackley, "Boltzmann Machines: Constraint Satisfaction Networks that Learn," Technical Report CMU-CS-84-119, Department of Computer Science, Carnegie-Mellon University, May 1984.
[8] Rumelhart, D.E., G.E. Hinton and R.J. Williams, "Learning Internal Representations by Error Propagation," in D.E. Rumelhart and J.L. McClelland (eds.), Parallel Distributed Processing, Vol. 1, MIT Press, 1986.
[9] Kirkpatrick, S., C.D. Gelatt, Jr., and M.P. Vecchi, "Optimization by Simulated Annealing," Science, Vol. 220, No. 4598, May 1983.
[10] Feldman, J.A., "Dynamic Connections in Neural Networks," Biological Cybernetics, 46, 1982.
[11] McClelland, J.L., "Putting Knowledge in Its Place," Cognitive Science, Volume 9, Number 1, January-March 1985.
[12] Stevens, C.F., "The Neuron," Scientific American, Volume 241, Number 3, September 1979.
[13] Wong, K.M. and H.W. Chun, "Toward a Massively Parallel System for Word Recognition," 1986 IEEE International Conference on Acoustics, Speech, and Signal Processing, 1986.
[14] Chun, H.W., "AINET-2 User's Manual," CS Department, Brandeis University, Technical Report CS-86126, 1986.
A REPRESENTATION FOR COLLECTIONS OF TEMPORAL INTERVALS*

Bruce Leban, David D. McDonald and David R. Forster
Department of Computer and Information Science
University of Massachusetts
Amherst, MA 01003

ABSTRACT

Temporal representation and reasoning are necessary components of systems that consider events that occur in the real world. This work explores ways of considering collections of intervals of time. This line of research is motivated by related work being done by our research group on appointment scheduling and time management. Natural language expressions that refer to collections of intervals are used naturally and routinely in these contexts, and an effective means of representing them is essential. Previous studies, which considered intervals primarily in isolation, have difficulties in representing some classes of expressions. This occurs not only with expressions that explicitly refer to collections of intervals, such as "the first of every month," but also with expressions that do so only implicitly, such as the U.S. Election Day: "the first Tuesday after the first Monday in November." The traditional solution to this problem has been to provide special means of specifying those forms that are judged to be the most useful (to the exclusion of all other forms). The "collection representation" builds on previous work in temporal representation by introducing operators that allow the representation of collections of intervals, whether they occur explicitly or implicitly in the expression. The operators introduced are natural extensions of the relations and operations on intervals. The representation has potential use in scheduling in three areas: graphical display, natural language translation, and reasoning.

I PRIOR WORK

Much of the work on time has focused on temporal reasoning (as opposed to temporal representation).
For example, Rescher and Urquhart (1971) and van Benthem (1983) describe temporal logics for reasoning mathematically about time. The logics are based on the concept that instead of a predicate calculus statement being universally true or false, it may be true or false at different moments of time. Temporal quantifiers (much like the universal and existential quantifiers) are used to augment the calculus. Allen (1983) describes a computational approach to maintaining knowledge about events in time, for use in AI systems that reason about temporal knowledge. Allen's representation takes the concept of a temporal interval as a primitive and explicitly allows representations of indefinite and relative temporal knowledge. A temporal interval is used as the primitive unit because reasoning about points in time frequently yields counter-intuitive or paradoxical results. Ladkin (1985, 1986a) makes an argument for the use of non-convex intervals for reasoning. A convex interval is an interval in the usual sense: a contiguous period of time. A non-convex interval is an arbitrary union of convex intervals. In this paper, it is assumed that a temporal structure based on convex intervals has been defined that has a useful set of operations and relations (see appendix). We believe that the work could be extended to temporal structures based on time-points or non-convex intervals.

* This work was supported in part by the Air Force Systems Control, Rome Air Force Development Center, Griffiss AFB, New York, 13441 and the Air Force Office of Scientific Research, Bolling AFB, DC 20332 under Contract No. F30602-85-C-0008 and by the National Science Foundation under Support and Maintenance Grant DCR-8318776.
II COLLECTIONS OF INTERVALS

An interval t is denoted by (tα, tβ) or (tα; tδ), where tα, tβ and tα + tδ are real numbers denoting moments in time; the interval starts at time tα and extends through time tβ or tα + tδ.**

A collection of intervals is a structured set of intervals. The order of a collection is a measure of the depth of the structure. An order 1 collection is an ordered list of intervals. This is somewhat similar to a non-convex interval, except that the maximal convex subintervals of a non-convex interval are disjoint and the order they are given in is immaterial. An order n collection (n > 1) is an ordered list of order n−1 collections. The notation used for collections is essentially set notation, except for the understanding that the order of elements is maintained. For example, {{x1, x2}, {x3, x4}, {x5}} is an order 2 collection. The collection of Thursdays (which contains all the Thursdays in order) is an example of an order 1 collection. The collection of months, where each month is represented by a collection of the days in that month (in order), is an order 2 collection.

** We ignore the sticky questions of whether the intervals are open or closed and whether time is represented in a continuous or discrete fashion, as these issues are largely irrelevant to the work discussed here. We assume that if t and u are intervals and tβ = uα, then t ∪ u = (tα, uβ).

From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved.

A. A Formula Approach

Many useful collections can be described by arithmetical formulae, but there are subtle difficulties with this. We reject this approach for the reasons outlined in this section. Given an appropriate definition for day representing the length of one day and, for convenience, assuming that time t0 is Saturday, December 31, 1904, midnight, the collection of Thursdays can be described by the formula:

Thursdays = {(α; 1 day) | α
≡ 5 days + t0 (mod 7 days)}

We can generalize Thursdays by replacing the 5 with any other value. In other words, it can be understood that Tuesdays is an essentially similar collection to Thursdays. The same approach applied to constructing the collection of all Januarys is less successful. Since every fourth year is a different length, one possible formula is:

Januarys = {(α; 31 days) | (α + t0 mod 1461 days) ∈ {0, 365, 730, 1095}}

This formula is considerably more complicated than the one given for Thursdays.* More importantly, it fails to provide a means of conveniently recognizing Augusts as a generalization of Januarys. To generalize from Januarys we would need to replace each of the values (except 1461) with appropriate new values: the chance of an arbitrary substitution producing a reasonable generalization is quite small. Essentially, the formula is in a "compiled" form that is quite distant from how the concept would naturally be expressed.

The formulae become even more complicated when new collections must be built from existing collections. For example, consider "the first Thursday of every January." This requires combining the collection of Thursdays and the collection of Januarys to produce a new collection. Furthermore, the system must allow for collections to be combined in fairly arbitrary ways, since it will not be possible to predict all useful specifications.

III THE COLLECTION REPRESENTATION

The foundation of the collection representation is a set of primitive collections called calendars. New collections can be built by combining other collections using the operators defined below; the calendars serve as a basis for this construction. Since the calendars are not sufficient for reasoning about statements that reference collections that might not yet have been defined or might include unknown intervals in the future (e.g., "when Diana is at work"), collections can also be built by predicate reference.
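The Thursdays formula from the formula approach above can be checked numerically (a sketch; t0 = Saturday, December 31, 1904, midnight is the paper's assumption, and we measure α in whole days since t0):

```python
# Membership test for the formula
#   Thursdays = {(alpha; 1 day) | alpha ≡ 5 days + t0 (mod 7 days)}
from datetime import date, timedelta

t0 = date(1904, 12, 31)          # a Saturday, as assumed in the text

def in_thursdays(alpha_days):
    """alpha_days: start of the candidate 1-day interval, in days since t0."""
    return alpha_days % 7 == 5

# Five days after a Saturday is indeed a Thursday (Python: Monday = 0):
assert (t0 + timedelta(days=5)).weekday() == 3
assert in_thursdays(5) and in_thursdays(12)
assert not in_thursdays(6)
```

The Januarys formula resists this kind of compact membership test, which is exactly the "compiled form" objection raised above.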
A calendar is a collection consisting of an infinite sequence of intervals that span the timeline, i.e., ti meets ti+1 for any two consecutive intervals. A calendar may have a first interval (the first moment in time the system is prepared to consider), but does not have a last interval. Days, Months and Chinese-Calendar-Years are instances of calendars. Two new classes of operators, slicing and dicing, are defined to operate on collections of intervals. The dicing operators provide means of generating collections from intervals, for example, to break a collection of intervals into smaller intervals. In Figure 1, a dicing operation is illustrated between the first two steps. This operation replaces each interval on the left (a week) with a collection of subintervals (the days in that week). The slicing operators provide means of selecting intervals from collections of intervals, for example, to select the first interval of a collection. In Figure 1, a slicing operation is illustrated between the second two steps. This operation replaces each order 1 collection (a collection of the days in each week) with a single interval (the fifth day of each week).

Figure 1. Slicing and Dicing: Weeks → Days :during: Weeks → 5/Days :during: Weeks

The terms "slicing" and "dicing" are chosen for both their euphonic and metaphoric appeal.** The operators have a right-to-left precedence. Each operator corresponds roughly to a preposition, so these expressions can be read naturally by someone who speaks a prepositional language (e.g., English).

* It would be even more complicated if it were correct: the Gregorian calendar specifies that only 97 out of every 400 years are leap years.
** If these terms seem to have conflicting meanings, "Slicing" can be thought of as corresponding to "Selection" and "Dicing" to "Dividing up".

A. Primitive Collections

A calendar is defined by specifying the intervals of which it is composed. The notation ((α; δ1; δ2; ...; δn)) denotes the calendar

{(α; δ1), (α + δ1; δ2), ..., (α + Σi≤n−1 δi; δn), (α + Σi≤n δi; δ1), ...}.

The list of δ-values is treated as if it were a circular list. A calendar can also be defined by specifying how it is to be constructed from another calendar. This is denoted by ((C; s1; s2; ...; sn)) to indicate that the first interval of this calendar is the union of the first s1 intervals of C; the second interval is the union of the next s2 intervals of C, etc. As above, the list of s-values is treated as a circular list. If we assume that the unit of measure is 1 second, we might have the following definitions:

Days ≡ ((t0; 86400))
Months ≡ ((Days; 31;28;31;30;31;30;31;31;30;31;30;31;
           31;28;31;30;31;30;31;31;30;31;30;31;
           31;28;31;30;31;30;31;31;30;31;30;31;
           31;29;31;30;31;30;31;31;30;31;30;31))

These definitions are intensional rather than extensional. That is, while a calendar defines an infinite data structure, it does not require that an implementation actually build the complete structure, but only that it build those portions of the structure it needs. Collections can also be constructed from a predicate. The collection ((Condition)) is the minimal collection of intervals C that satisfies the property that there does not exist an interval t disjoint from C such that Condition is true during t. This definition is carefully constructed to avoid the question of whether the predicate operates on intervals or points.

B. The Dicing Operations

a. Weeks :overlaps: (January-1986)
b. Weeks .overlaps. (January-1986)
c. Weeks :during: (January-1986)
d. Weeks :≤: (January-1986)

Figure 2. Dicing Operators

The dicing operators are extensions of the relations on intervals (listed in the appendix). A dicing operator takes an order 1 collection as its left argument, an interval as its right argument and produces an order 1 collection as a result.
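The strict and relaxed dicing definitions can be sketched in Python (our rendering; intervals are (start, end) pairs of day numbers, the empty interval ε is None, and the Weeks stand-in below is ours):

```python
# C :R: t = {c ∩ t | c ∈ C, c R t} \ {ε}      (strict)
# C .R. t = {c     | c ∈ C, c R t} \ {ε}      (relaxed)
EMPTY = None

def intersect(c, t):
    lo, hi = max(c[0], t[0]), min(c[1], t[1])
    return (lo, hi) if lo < hi else EMPTY

def overlaps(c, t):
    return intersect(c, t) is not EMPTY

def strict_dice(C, t, R):
    return [iv for c in C if R(c, t) and (iv := intersect(c, t)) is not EMPTY]

def relaxed_dice(C, t, R):
    return [c for c in C if R(c, t)]

# A stand-in calendar: the first ten 7-day weeks starting at day 0,
# and a 31-day "month" that begins mid-week.
weeks = [(7 * k, 7 * (k + 1)) for k in range(10)]
month = (3, 34)

assert strict_dice(weeks, month, overlaps)[0] == (3, 7)   # partial first week
assert relaxed_dice(weeks, month, overlaps)[0] == (0, 7)  # the whole week
```

The contrast between the two asserts is exactly the Figure 2a/2b distinction: strict dicing clips the boundary weeks to the month, relaxed dicing returns them whole.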
A dicing operator can also take a collection as the right argument, in which case it operates on each interval in that collection. For each relational operator (R) there are two dicing operators: strict (:R:) and relaxed (.R.). If C is an order 1 collection and t is an interval, the dicing operators are defined by:

C :R: t ≡ {c ∩ t | c ∈ C ∧ c R t} \ {ε}
C .R. t ≡ {c | c ∈ C ∧ c R t} \ {ε}

The effect of a strict dicing operator is to break up t into pieces according to C. An illustrative example occurs when C is a calendar. The expression Weeks :overlaps: (January-1986) will break up the month on the boundaries of the weeks, i.e., it will give those weeks or parts of weeks that overlap the month. (See Figure 2a.) The effect of a relaxed dicing operator is to select intervals from C that have the appropriate relation with t. Thus Weeks .overlaps. (January-1986) will break up the month in the same way as above, but for the weeks at the beginning and end of the month it will give the entire week (including that part not overlapping the month). (See Figure 2b.) In contrast, Weeks :during: (January-1986) will give only the weeks that are completely contained in the month. (See Figure 2c.) Finally, Weeks :≤: (January-1986) will give only the partial week at the beginning of the month. (See Figure 2d.)

C. The Slicing Operations

The slicing operators, denoted f/C and [f]/C, operate on any collection, replacing each of the contained order 1 collections with the result of the application of the slicing operator. Operating on an order 1 collection yields either a single interval or an order 1 collection (usually a subcollection of the original order 1 collection). The expression f/C applies the selection function f to the collection and returns a single interval, while [f]/C returns a collection. f may be a predicate, in which case it constructs a collection containing the intervals which satisfy the predicate. The expression [f1, f2, . . .
, fn]/C is the collection consisting of the individual applications of f1, f2, ..., fn to C in order. In some cases, a selection function may not have a result (e.g., the 29ths of Februarys), in which case the result is defined to be the empty interval ε. Note that since the dicing operators will never produce a collection that contains ε, any result that includes ε is a sign of a failed selection operation. The integers are defined as selection functions so that n/C selects the nth interval in C and −n/C selects the nth interval from the end. The function the is defined so that the/C selects the single interval of C, and produces ε if C contains other than a single interval. The function any is used to select intervals nondeterministically. any/C selects a single interval of C. [any n]/C selects n intervals of C. [any −n]/C selects all but n intervals of C. The any slicing operator has a subtly different operation when used in a declarative statement: in that case, it refers to an interval without specifying which one. This usage of any has a close relationship to the existential quantifier of the predicate calculus.

D. Examples of Collections

Table 1 gives a list of English phrases and their corresponding expressions in the collection representation.

IV APPLICATIONS

The reason for constructing this representation is to provide a framework for a scheduling system. The previous sections have shown how terms commonly used in scheduling can be easily expressed. The representation was designed to address three areas of our group's research on scheduling: graphical display, natural language translation (primarily generation), and reasoning (about schedules). The illustrations in Figures 1 and 2 indicate the type of graphical display that would be generated by the system.
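The integer and the slicing operators of the preceding section admit an equally small sketch (ours; a failed selection returns None for the empty interval ε):

```python
EMPTY = None

def slice_n(n, C):
    """n/C selects the nth interval (1-based); -n/C the nth from the end."""
    try:
        return C[n - 1] if n > 0 else C[n]
    except IndexError:
        return EMPTY

def the(C):
    """the/C: the single interval of C, or EMPTY if C is not a singleton."""
    return C[0] if len(C) == 1 else EMPTY

days = [(d, d + 1) for d in range(7)]      # one week's days
assert slice_n(1, days) == (0, 1)          # 1/Days
assert slice_n(-1, days) == (6, 7)         # -1/Days
assert the(days) is EMPTY                  # 'the' fails on a 7-element collection
assert the([(3, 4)]) == (3, 4)
```

Propagating EMPTY out of a composite expression is what signals a failed selection (as with the 29ths of Februarys).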
The definition of each calendar can be made to contain simple graphical display information, such as the shape and orientation of any "boxes" in which they and their contents are shown. The boxes in Figures 1 and 2 are unlabelled. An interval of a calendar could also carry tags that could be used to label the boxes or to organize the data in a tabular form. The bus schedule of Figure 3 provides a good illustration of this. The schedule is constructed as an order 2 collection, where each interval has been tagged. The collection prefers to display intervals with the same tag in the same column. The intervals in turn prefer to display only their start times. Notice that in several places a table entry is blank. Despite this, displaying the table presents no problem.

Figure 3. A Bus Schedule. (A timetable with columns Hampshire, Amherst, UMass, Smith and Mt. Holyoke, listing hourly departure times; several entries are blank.)

The appointment calendar display of Figure 4 would be treated in a similar fashion. In this case, the collection of appointments is superimposed on another collection to provide the time grid, with the roles of tags and starting times reversed in the displayed table. The English text in Table 1 indicates the type of natural language that could be produced or processed by the system. Expressions in the collection representation can be almost literally translated into natural language with comprehensible results.

Table 1. English phrases and their collection representations

English | Collection Representation
Mondays | 2/Days :during: Weeks
Januarys | 1/Months :during: Years
First Monday in January 1986 | 1/Mondays :during: Januarys :during: 1986/Years, or equivalently: 1/(2/Days :during: Weeks) :during: 1/Months :during: 1986/Years
First of every month | 1/Days :during: Months
First Monday of every month | 1/Mondays :during: Months
Last two Mondays of every month | [−1, −2]/Mondays :during: Months
Week of the 15th of each month | the/Weeks .overlaps. 15/Days :during: Months
First full week of each month | 1/Weeks :during: Months
Week of the first of the month | 1/Weeks .overlaps. 1/Days :during: Months
First week of the month | 1/Weeks .overlaps. Months
U.S. Election Day | 1/Tuesdays .>. 1/Mondays :during: November
The first (or only) day of t | 1/Days .overlaps. t
The day after t | 1/Days .>meets. −1/Days .overlaps. t
Any day of the week | any/Days :during: Weeks
Any day this week | any/Days :during: Weeks .overlaps. (Today)

Figure 4. An Appointment Calendar. (A grid of 20-minute slots from 9:00 to 12:00 with appointments superimposed.)

Similarly, statements can be easily translated, since the temporal components of the statement are not distributed across a number of quantifiers and predicates. For example, the statement

((Roy-worked)) contains Weekend-Days :during: (January)

can be glossed as "The time that Roy worked included the weekend days in January." Since the expressions are stored symbolically, the system need only generate the actual intervals that it needs. For example, for the expression

23/Seconds :during: 4570/Minutes :during: 1986/Years

the system naturally would not generate a data structure containing the 31536000 seconds in 1986 before selecting the one desired. If the system was asked whether two expressions conflicted and could not determine this by purely symbolic means, it still would not need to generate all the intervals in each collection.
Only those subcollections and intervals that have been determined to be possible candidates for conflicts need to be generated (and this process can be done recursively). If scheduling conflicts occur, the system can replace specific slicing operators with the any operator. For example, the system could make the following successive generalizations in searching for a non-conflicting schedule:

the/Mondays :during: 1/Weeks :during: Months
the/Anyday :during: 1/Weeks :during: Months
the/Mondays :during: any/Weeks :during: Months
the/Anyday :during: any/Weeks :during: Months

Our motivation for this work has been to provide a framework for the scheduling system. We are in the process of building a scheduling system around the representation. We believe that the consideration of collections of intervals is essential to the scheduling domain and that the notation and accompanying semantics introduced in this paper provide a natural medium for that consideration.

ACKNOWLEDGMENTS

We would like to thank Scott Anderson, Carol Broverman, John Brolio, David Lewis, James Pustejovsky, Penelope Sibun, Philip Werner, Mary-Anne Wolf, and Bev Woolf for their assistance in this research and/or the preparation of this paper. We would also like to thank Peter Ladkin for sending us advance copies of his papers presented at this conference.

APPENDIX

The intersection of two intervals is defined by:

t ∩ u ≡ (max(tα, uα), min(tβ, uβ))

The cover of two intervals is defined by:

t ⊔ u ≡ (min(tα, uα), max(tβ, uβ))

The union of two intervals (t ∪ u) is defined only if the intervals overlap or meet, and is equal to the cover of the two intervals. The empty interval ε = (∞, −∞), and any interval that has α ≥ β is automatically replaced by ε. This definition is motivated by the desire to have t ∩ ε = ε and t ⊔ ε = t, for any t.
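The appendix operations can be transcribed directly (a sketch in Python; intervals are (α, β) pairs of reals and EMPTY stands for ε):

```python
EMPTY = None

def norm(iv):
    """Any interval with alpha >= beta is automatically replaced by ε."""
    return iv if iv is not None and iv[0] < iv[1] else EMPTY

def intersect(t, u):
    """t ∩ u = (max(tα, uα), min(tβ, uβ))."""
    if t is EMPTY or u is EMPTY:
        return EMPTY
    return norm((max(t[0], u[0]), min(t[1], u[1])))

def cover(t, u):
    """Cover of t and u = (min(tα, uα), max(tβ, uβ)); cover with ε is identity."""
    if t is EMPTY:
        return u
    if u is EMPTY:
        return t
    return (min(t[0], u[0]), max(t[1], u[1]))

t = (1.0, 4.0)
assert intersect(t, EMPTY) is EMPTY     # t ∩ ε = ε
assert cover(t, EMPTY) == t             # cover of t and ε is t
assert intersect((1, 4), (3, 6)) == (3, 4)
assert intersect((1, 2), (3, 4)) is EMPTY
```

Representing ε as the "inverted" interval (∞, −∞) makes both identities fall out of the same max/min arithmetic; the None encoding above just makes the replacement rule explicit.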
We use the following binary relations on intervals:

t overlaps u ≡ t ∩ u ≠ ε
t during u ≡ (tα ≥ uα) ∧ (tβ ≤ uβ)
t contains u ≡ u during t
t < u ≡ tβ ≤ uα
t > u ≡ tα ≥ uβ
t ≤ u ≡ (tα ≤ uα) ∧ (tβ ≤ uβ)
t ≥ u ≡ (tα ≥ uα) ∧ (tβ ≥ uβ)
t meets u ≡ (tβ = uα)

The during, ≤ and ≥ relations form partial orders. Note that t ≤ u is not equivalent to (t < u) ∨ (t = u); however, t < u is equivalent to (t ≤ u) ∧ ¬(t overlaps u).

REFERENCES

Allen, James F., 1985, "Maintaining Knowledge about Temporal Intervals" in Brachman and Levesque, Readings in Knowledge Representation. Morgan Kaufmann, pp. 509-521.

van Benthem, J.F.A.K., 1983, The Logic of Time. D. Reidel, Boston.

Ladkin, Peter, 1985, "Comments on the Representation of Time" in Proceedings of the 1985 Distributed Artificial Intelligence Workshop, Sea Ranch, California, pp. 137-156.

Ladkin, Peter, 1986a, "Primitives and Units for Time Specification" in the Proceedings of the National Conference on Artificial Intelligence, Philadelphia, Pennsylvania.

Ladkin, Peter, 1986b, "Time Representation: A Taxonomy of Interval Relations" in the Proceedings of the National Conference on Artificial Intelligence, Philadelphia, Pennsylvania.

Rescher, Nicholas, and Urquhart, Alasdair, 1971, Temporal Logic. Springer-Verlag, New York.
Default Reasoning, Nonmonotonic Logics, and the Frame Problem

Steve Hanks and Drew McDermott*
Department of Computer Science, Yale University
Box 2158 Yale Station
New Haven, CT 06520

Abstract

Nonmonotonic formal systems have been proposed as an extension to classical first-order logic that will capture the process of human "default reasoning" or "plausible inference" through their inference mechanisms, just as modus ponens provides a model for deductive reasoning. But although the technical properties of these logics have been studied in detail and many examples of human default reasoning have been identified, for the most part these logics have not actually been applied to practical problems to see whether they produce the expected results.

We provide axioms for a simple problem in temporal reasoning which has long been identified as a case of default reasoning, thus presumably amenable to representation in nonmonotonic logic. Upon examining the resulting nonmonotonic theories, however, we find that the inferences permitted by the logics are not those we had intended when we wrote the axioms, and in fact are much weaker. This problem is shown to be independent of the logic used; nor does it depend on any particular temporal representation. Upon analyzing the failure we find that the nonmonotonic logics we considered are inherently incapable of representing this kind of default reasoning. Finally we discuss two recent proposals for solving this problem.

1 Introduction

Logic as a representation language for AI theories has always held a particular appeal in the research community (or in some parts of it, anyway): its rigid syntax forces one to be precise about what one is saying, and its semantics provide an agreed-upon and well-understood way of assigning meaning to the symbols.
But if logic is to be more than just a concise and convenient notation that helps us in the task of writing programs, we somehow have to validate the axioms we write: are the conclusions we can draw from our representation (i.e., the inferences the logic allows) the same as the ones characteristic of the reasoning process we are trying to model? If so, we've gone a long way toward validating our theory.

The limitation of classical logic as a representation for human knowledge and reasoning is that its inference rule, modus ponens, is the analogue to human deductive reasoning, but for the most part everyday human reasoning seems to have significant non-deductive components. But while certain aspects of human reasoning (e.g. inductive generalization and abductive explanation) seem to be substantially different from deduction, a certain class of reasoning, dubbed "default reasoning," resembles deduction more closely. Thus it was thought that extensions to first-order logic might result in formal systems capable of representing the process of default reasoning.

While it is still not clear exactly what constitutes default reasoning, the phenomenon commonly manifests itself when we know what conclusions should be drawn about typical situations or objects, but we must jump to the conclusion that an observed situation or object is typical. For example, I may know that I typically meet with my advisor on Thursday afternoons, but I can't deduce that I will actually have a meeting next Thursday because I don't know whether next Thursday is typical. While certain facts may allow me to deduce that next Thursday is not typical (e.g. if I learn he will be out of town all next week), in general there will be no way for me to deduce that it is.

¹This work was supported in part by ONR grant N00014-85-K-0301. Many thanks to Alex Kass and Yoav Shoham for discussing this work with us, and for reading drafts of the paper.
What we want to do in cases like this is to jump to the conclusion that next Thursday is typical based on two pieces of information: first that most Thursdays are typical, and second that we have no reason to believe that this one is not. Another way to express the same notion is to say that I know that I have meetings on typical Thursdays, and that the only atypical Thursdays are the ones that I know (can deduce) are atypical.

Research on nonmonotonic logics², most notably by McCarthy (in [8] and [9]), McDermott and Doyle (in [12]) and Reiter (in [14]), attacked the problem of extending first-order logic in a way that captured the intuitive meaning of statements of the form "lacking evidence to the contrary, infer α" or more generally "infer β from the inability to infer α."³ But since that first flurry of research the area has developed in a strange way. On one hand the logics have been subjected to intense technical scrutiny (in the papers cited above, and also, for example, in Davis [2]) and have been shown to produce counterintuitive results under certain circumstances. At the same time we see in the literature practical representation problems such as story understanding (Charniak [1]), social convention in conversation (Joshi, Webber, and Weischedel [6]), and temporal reasoning (McDermott [11] and McCarthy [9]), in which default rules would seem to be of use, but in these cases technical details of the formal systems are for the most part ignored.

The middle ground (whether the technical workings of the logics correctly bear out one's intentions in representing practical default-reasoning problems) is for the most part empty, though the work of Reiter, Etherington, and Criscuolo, in [3], [4], and elsewhere, is a notable exception.
Logicians have for the most part ignored practical problems to focus on technical details, and "practitioners" have used the default rules intuitively, with the hope (most often unstated) that the proof theory or semantics of the logics can eventually be shown to support those intuitions. We explore that middle ground by presenting a problem in temporal reasoning that involves default inference, writing axioms in nonmonotonic logics intended to represent that reasoning process, then analyzing the resulting theory.

Reasoning about time is an interesting application for a couple of reasons. First of all, the problem of representing the tendency of facts to endure over time (the "frame problem" of McCarthy and Hayes [10] or the notion of "persistence" in McDermott [11]) has long been assumed to be one of those practical reasoning problems that nonmonotonic logics would solve. Second, one has strong intuitions about how the problem should be formalized in the three logics, and even stronger intuitions about what inferences should then follow, so it will be clear whether the logics have succeeded or failed to represent the domain correctly.

²So called because of the property that inferences allowed by the logic may be disallowed as axioms are added. For example I may jump to the conclusion that next Thursday is typical, thus deduce that I will have a meeting. If I later come to find out that it is atypical, I will have to retract that conclusion. In first-order logic the addition of new knowledge (axioms) to a theory can never diminish the deductions one can make from that theory, thus it is never necessary to retract conclusions.

³This sounds much more straightforward than it is: consider that the theorems of a logic are defined in terms of its inference rules, yet here we are trying to define an inference rule in terms of what is or is not a theorem.

From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved.
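The nonmonotonicity described in footnote 2, where adding an axiom forces a previously drawn conclusion to be retracted, can be made concrete with a toy sketch. This is our own illustration, not a procedure from any of the cited papers; `conclusions` applies a single default ("typical Thursdays have meetings") guarded by a naive consistency check:

```python
# Toy nonmonotonic inference: conclude ("meeting", d) for each Thursday d
# unless ("atypical", d) is already known -- a crude "consistent to believe" test.
def conclusions(facts):
    derived = set(facts)
    for d in [f[1] for f in facts if f[0] == "thursday"]:
        if ("atypical", d) not in derived:   # lacking evidence to the contrary...
            derived.add(("meeting", d))      # ...jump to the default conclusion
    return derived

kb = {("thursday", "may-8")}
assert ("meeting", "may-8") in conclusions(kb)

# Adding knowledge RETRACTS a conclusion, which is impossible in first-order logic:
kb.add(("atypical", "may-8"))
assert ("meeting", "may-8") not in conclusions(kb)
print("default conclusion retracted after new axiom added")
```

The second assertion is the whole point: enlarging the set of facts shrank the set of conclusions, which no monotonic deductive system can do.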
In the rest of the paper we discuss briefly some technical aspects of nonmonotonic logics, then go on to pose formally the problem of temporal projection. We then analyze the inferences allowed by the resulting theory and show that they do not correspond to what we intended when we wrote the axioms. Finally we point out the (unexpected) characteristics of the domain that the logics were unable to capture, and discuss proposed solutions to the problem.

2 Nonmonotonic inference

Since we are considering the question of what inferences can be drawn from a nonmonotonic theory, we should look briefly at how inference is defined in these logics. We will concentrate on Reiter's default logic and on circumscription, but the discussion and subsequent results hold for McDermott's nonmonotonic logic as well.

2.1 Inference in default logic

Reiter in [14] defines a default theory as two sets of rules. The first consists of sentences in first-order logic (and is usually referred to as W), and the second is a set of default rules (referred to as D). Default rules are supposed to indicate what conclusions to jump to, and are of the form

    (α : M β) / γ

where α, β, and γ are first-order sentences. The intended interpretation of this rule is "if you believe α, and it's consistent to believe β, then believe γ," or, to phrase the idea more like an inference rule, "from α and the inability to prove ¬β, infer γ." (But recall our note above about the futility of trying to define inference in terms of inference.)

In order to discuss default inference we must introduce the concept of an extension: a set of sentences that "extend" the sentences in W according to the dictates of the default rules. A default theory defines zero or more extensions, each of which has the following properties: (1) any extension E contains W, (2) E is closed under (monotonic) deduction, and (3) E is faithful to the default rules. By the last we mean that if there's a default rule in the theory of the form (α : M β) / γ, and if α ∈ E and (¬β) ∉ E, then γ ∈ E. The extensions of a default theory are all the minimal sets E that satisfy these three properties. Extensions can be looked upon as internally consistent and coherent states of the world, though the union of two extensions may be inconsistent.

Finding a satisfying definition of default inference (what sentences can be said to follow from a default theory) is tricky. Reiter avoids the problem altogether, focusing on the task of defining extensions and exploring their properties. He expresses the view that default reasoning is really a process of selecting one extension of a theory, then reasoning "within" this extension until new information forces a revision of one's beliefs and hence the selection of a new extension. This view of default reasoning, while intuitively appealing, is infeasible from a practical standpoint: there is no way of "isolating" a single extension of a theory, thus no procedure for enumerating or testing theoremhood within an extension. So any definition of default reasoning based on discriminating among extensions is actually beyond the expressive power of default logic. Reiter does provide a proof procedure for asking whether a sentence is a member of any extension, but, as he points out, this is not a satisfying definition of inference since both a sentence and its negation may appear in different extensions.

Our view in this paper is that some notion of inference is necessary to judge the representational power of the logic. A logic that generates one intuitive extension and one unintuitive extension does not provide an adequate representation of the problem, since there is no way to distinguish between the two interpretations. For that reason we will define inference in the manner of McDermott's logic: a sentence σ can be inferred from a default theory just in case σ is in every extension of that theory. (This definition is also consistent with circumscriptive inference as described in the next section.)

While there is no general procedure for determining how many extensions a given theory has, as a practical matter it has been noted that theories with "conflicting" default rules tend to generate multiple extensions. For example, the following default theory

    W = {Q(N), R(N)},
    D = { (Q(x) : M P(x)) / P(x),  (R(x) : M ¬P(x)) / ¬P(x) }

has two rules that would have us jump to contradictory conclusions. But note that applying one of the rules means that the other cannot be applied, since its precondition is not met. This default theory has two extensions:

    E1 = {Q(N), R(N), P(N)}  and  E2 = {Q(N), R(N), ¬P(N)}

that correspond to the two choices one has in applying the default rules. (One interpretation of this theory reads Q as "Quaker," R as "Republican," P as "Pacifist," and N as "Nixon.") Thus the above theory entails only the sentences in W, plus tautologies (for example P(N) ∨ ¬P(N)). We are not claiming that this admission of multiple extensions is a fault or deficiency of the logic; in this particular example it's hard to imagine how the logic could license any other conclusions. Our point is that when a theory generates multiple extensions it's generally going to be the case that only weak inferences can be drawn. Further, if one extension captures the intended interpretation but there are other different extensions, it will not be possible to make only the intended inferences.

2.2 Inference in circumscription

To describe inference in circumscribed theories we will have to be rather vague: there are several versions of the logic, defined in [8], [9] and elsewhere, and we will not spend time discussing these differences. We will speak generally of predicate circumscription, in which the intent is to minimize the extension of a predicate (say P) in a set of first-order axioms. Using terms like those we used in describing default logic, we might say that when we circumscribe axioms over P we intend that "the only individuals for which P holds are those individuals for which P must hold," or alternatively we might phrase it as "believe 'not P' by default." To circumscribe a set of axioms A over a predicate P one adds to A an axiom (the exact form of which is not important for our discussion) that says something like this: "any predicate P' that satisfies the axioms A, and is at least as strong as P, is exactly as strong as P." The intended effect is (roughly) that for any individual x,

    if A ⊬ P(x) then Circum(A, P) ⊢ ¬P(x)

where Circum(A, P) refers to the axioms A augmented by the circumscription axiom for P.

To talk about circumscriptive inference, we should first note that since Circum(A, P) is a first-order theory we want to know what deductively follows, but we are interested in characterizing these deductions in terms of the original axioms A. In brief, the results are these: if a formula φ is a theorem of Circum(A, P) then φ is true in all models of A minimal in P (this property is called soundness), and if a formula φ is true in all models of A minimal in P then φ is a theorem of Circum(A, P) (this property is called completeness). Completeness does not hold for all circumscribed theories, but it does hold in certain special cases; see Minker and Perlis [13]. Minimal models, the model-theoretic analogue to default-logic extensions, are defined as follows: a model M is minimal in P just in case there is no model M' that agrees with M on all predicates except for P, but whose extension of P is a proper subset of M's extension of P.

As with default logic, there is no effective procedure for determining how many minimal models a theory has. And note that the converse of the soundness property says that if φ is not true in all models minimal in P it does not follow from Circum(A, P). So once again, if we have multiple minimal models, what we can deduce are only those formulas true in all of them. Because of the obvious parallels between extensions and minimal models (and "NM fixed points" in McDermott's logic, which we will not discuss here), we will use the terms interchangeably when the exact logic or object doesn't matter.

3 The temporal projection problem

The problem we want to represent is this: given an initial description of the world (some facts that are true), the occurrence of some events, and some notion of causality (that an event occurring can cause a fact to become true), what facts are true once all the events have occurred? We obviously need some temporal representation to express these concepts, and we will use the situation calculus [10]. We will thus speak about facts holding true in situations. A fact syntactically has the form of a first-order sentence, and is intended to be an assertion about the world, such as SUNNY, LOADED(GUN-35), or ∀x. HAPPY(x). Situations are individuals denoting intervals of time over which facts hold or do not hold, but over which no fact changes its truth value. This latter property allows us to speak unambiguously about what facts are true or false in a situation. To say that a fact f is true in a situation s we assert T(f, s), where T is a predicate and f and s are terms.

Events are things that happen in the world, and the occurrence of an event may have the effect of changing the truth value of a fact. So we think of an event occurring in a situation and causing a transition to another situation, one in which the event's effects on the world are reflected. The function RESULT maps a situation and an event into another situation, so if S0 is a situation and WAKEUP(JOHN) is an event, then RESULT(WAKEUP(JOHN), S0) is also a situation, presumably the one resulting from JOHN waking up in situation S0. We might then want to state that JOHN is awake in this situation:

    T(AWAKE(JOHN), RESULT(WAKEUP(JOHN), S0))

or more generally we might state that

    ∀p, s. T(AWAKE(p), RESULT(WAKEUP(p), s)).

A problem arises when we try to express the notion that facts tend to stay true from situation to situation as irrelevant events occur. For example, is JOHN still awake in the state S2, where

    S2 = RESULT(EAT-BREAKFAST(JOHN), RESULT(WAKEUP(JOHN), S0))?

Intuitively we would like to assume so, because it's typically the case that eating breakfast does not cause one to fall asleep. But given the above axioms there is no way to deduce T(AWAKE(JOHN), S2). We could add an axiom to the effect that if one is awake in a situation then one is still awake after eating breakfast, but this seems somewhat arbitrary (and will occasionally be false). And in any reasonable description of what one might do in the course of a morning there would have to be a staggering number of axioms expressing something like "if fact f is true in a situation s, and e is an event, then f is still true in the situation RESULT(e, s)." McCarthy and Hayes (in [10]) call axioms of this kind "frame axioms," and identified the "frame problem" as that of having to explicitly state many such axioms. Deductive logic forces us into the position of assuming that an event occurrence may potentially change the truth value of all facts, thus if it does not change the value of a particular fact in a particular situation we must explicitly say so.

What we would like to do is assume just the opposite: that most events do not affect the truth of most facts under most circumstances. Intuitively we want to solve the frame problem by assuming that in general an event happening in a situation is irrelevant to a fact's truth value in the resulting situation. Or, to make the notion a little more precise, we want to assume "by default" that T(f, s) ⊃ T(f, RESULT(e, s)) for all facts, situations, and events. But note the quotes: the point of this paper is that formalizing this assumption is not as straightforward as the phrase might lead one to believe.

McCarthy's proposed solution to the frame problem (described in [9]) involves extending the situation calculus a little, to make it what he calls a "simple abnormality theory." We state that all "normal" facts persist across occurrences of "normal" events:

    ∀f, e, s. T(f, s) ∧ ¬AB(f, e, s) ⊃ T(f, RESULT(e, s))

where AB(f, e, s) is taken to mean "fact f is abnormal with respect to event e occurring in state s," or, "there's something about event e occurring in state s that causes fact f to stop being true in RESULT(e, s)." We would expect, for example, that it would be true that

    ∀p, s. AB(AWAKE(p), GOTOSLEEP(p), s)

and we would have to add a specific axiom to that effect. Of course we still haven't solved the frame problem, since we haven't provided any way to deduce ¬AB(f, e, s) for most facts, events, and situations. As an alternative to providing myriad frame axioms of this form, McCarthy proposes that we circumscribe over the predicate AB, thus "minimizing" the abnormal temporal individuals. The question is whether this indeed represents what we intuitively mean by saying that we should "assume the persistence of facts by default," or "once true, facts tend to stay true over time."

As an illustration of what we can infer from a very simple situation-calculus abnormality theory, consider the axioms of Figure 1. For simplicity we have restricted the syntactic form of facts and events to be propositional symbols, so the axioms can be interpreted as referring to a single individual who at any point in time (situation) can be either ALIVE or DEAD and a gun that can be either LOADED or UNLOADED. At some known situation S0 the person is alive (Axiom 1), and the gun becomes loaded any time a LOAD event happens (Axiom 2).
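The situation-calculus vocabulary above can be given a direct executable reading. The following is a minimal Python sketch (our own encoding, not from the paper: situations as tuples of the events that produced them, and a T that returns only what Axioms 1 and 2 license); it makes the frame problem visible, since nothing entitles us to the persistence of ALIVE:

```python
# Situations as nested RESULT terms: S0 = (), RESULT(e, s) appends event e.
S0 = ()
def RESULT(e, s):
    return s + (e,)

def T(f, s):
    """True iff T(f, s) is derivable from Axiom 1 (T(ALIVE, S0)) and
    Axiom 2 (forall s. T(LOADED, RESULT(LOAD, s)))."""
    if f == "ALIVE" and s == S0:
        return True                      # Axiom 1
    if f == "LOADED" and s and s[-1] == "LOAD":
        return True                      # Axiom 2
    return False                         # not derivable (weaker than provably false)

S1 = RESULT("LOAD", S0)
assert T("LOADED", S1)
# Without a frame axiom, nothing licenses ALIVE persisting past even one event:
assert not T("ALIVE", S1)
```

Here "not derivable" is all the function can report; whether the missing conclusions should be added by default is exactly the question the abnormality axioms below are meant to answer.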
Axiom 3 says that any time the person is shot with a loaded gun he becomes dead, and furthermore that being shot with a loaded gun is abnormal with respect to staying alive. Or to use our definition from above: there is something about a SHOOT event occuring in a situation in which the gun is loaded that causes the fact ALIVE to stop being true in the situation resulting from the shot. Axiom 4 is just the assertion we made above that %ormal” facts persist across the occurence of “normaln events. 330 / SCIENCE (1) ~(ALIVE. 54-J (2) V s. T(LOADED. RESULT(LOAD. s)) (3) V s. T(LOADED. s) > AB(ALIVE. SHOOT, s) A compels us to believe AB(ALlVE, LOAD, So), so we assume its nega- tion. From this assumption and from Axiom 4 we deduce T(ALIVE, Sr). Reasoning along the same lines, we can deduce T(LOADED, Sr) and we are free to make the assumptions 1 AB(ALlVE, WAIT, (4) T(DEAD. RESULT(SHOOT. s)) V 6 e, s. T(t s) A 7 AB(t e. S)I T(t RESULT(e. s)) SJ and 7 AB(LOADED. WAIT. S,) so we do so and go on to de- duce T(ALIVE. S2) and T(LOADED, $j. Again moving forward in time, we can deduce from Axiom 3 that AB(ALIVE, SHOOT, Sz) so Figure 1: Simple situation-calculus axioms we can’t assume its negation, but we can assume 1 AB(LOADED, SHOOT, $1. At that point we have T(DEAD, Ss) and T(LOADED. S3). This line of reasoning leads us to the interpretation of Figure 2b. We can easily verify that this interpretation is indeed a model of A (that Axioms 1 through 4 are satisfied). Furthermore, this model is minimal in AB: any submodel would have to have an empty extension for AB, which cannot be the ca.se.5 model of first-order axioms -AB(ALIVE, LOAD, SO) -AB(ALIVE, WAIT, Sl) -C\B(LOADED, WAIT, $1) AB(ALIVE, SHOOT, S2) -AB(LOADED, SHXIT, 52) The interesting question now is whether the model of Figure 2b- our intended model of A-is the only minimal model, or, more to the point, whether T(DEAD. Ss) and T(LOADED. $1 are true in all (b) a model minimal in AB minimal models. 
Because if they’re not true they can’t be deduced from Circum(A, AB). in all minimal models -AB(ALNE LOAD, So) -.AB(ALIVE WAIT, $1) ABJLOADED, WATT, $1) -AB(ALIVE SHOOT, 9) (c) another minimal model Figure 2: Three models of the Figure 1 axioms Consider the situation in Figure 2c. The picture describes a state of affairs in which the gun ceases to be loaded “as a result of” waiting. Then the individual does not die as a result of the shot, since the gun is not loaded. Of course this state of affairs directly contradicts our stated intention that since nothing is explicitly “AB” with respect to waiting everything should be “not AB” with respect to waiting. Does this interpretation describe a minimal model? First recall that there can be no models having a null extension for AB, so if this interpretation is a model at all it must be minimal in AB. Then we circumscribe Axioms l-4 over AB, recalling that the deducible formulas will be those that are true in all models mini- ma1 in AB. As above we will refer to Axioms l-4 as A, and to t,he circumscribed axioms as Circum(A, AB). of what facts will be true One can “build” this model in much the same way we constructed the model of Figure 2b, except this time instead of starting at SO and working forward in time, we will start at Ss and work backward. In other words, we will start by noting that T(ALIVE&) must be I true, then consider what must have been true at SZ and in earlier situations for that to have been the case. in Now consider the problem the following situations: projecting The first abnormality decision to make is whether AB(ALIVE, SHOOT. S2) is true. Since we haven’t made a decision to the con- trary, we will assume its negation. But then from the contrapositive of Axiom 3, we we can deduce 7 T(LOADED. Sz). But if that is the case, and since it must also be the case that T(LOADED. Sl), we can deduce from Axiom 4 that AB(LOADED, WAIT. 5~). 
The rest of Figure 2c follows directly, since we can assume that ALIVEis %ot AB” with respect to LOAD and WAIT, thus deduce that it is true in both Sr and Sz. := RESULT(LOAD. So), $= RESULT(WAIT. S1), and S3= RESULT(SHOOT. &) = RESULT(SHOOT. RESULT(WAIT. RESULT(LOAD. So))). In other words, our individual is initially known to be alive, then the gun is loaded, then he waits for a while, then he is shot with the gun. The projection problem is to determine what facts are true at the situations S;. The event WAIT is supposed to signify a period of time when nothing of interest happens. Since according to the axioms no fact is abnormal with respect to a WAIT event occuring, we intend that every fact true before the event happens should also be true ajter it happens. One interpretation of the Figure 1 axioms is shown in Figure 2a. This first picture represents facts true in all models of A (thus what we can deduce if we don’t circumscribe). can make the following deductions: From Axioms i and 2 we What, then, can be deduced from the (circumscribed) abnormal- ity theory? It’s fairly easy to verify that the two models we have presented are the only two minimal in AB, so the theorems of Circum(A, AB) are those common to those two models. So we can deduce that ALIVE and LOADED are true in Sr, that ALIVE is true in $,, but we ca,n say nothing about what is true in &except for statements like T(ALIVE, Ss) v T(DEAD. 5). What we can deduce from Circum(A, AB) is therefore considerably weaker than what we had intended. T(ALIVE. St,), T(L OADED. S,), but we can deduce nothing about what is true in S, or in S3. We also cannot deduce any Uabnormalitiesn nor their negations. But this is pretty much as expected: the ALIVE fact did not persist because we could not deduce that it was “not AB” with respect to loading the gun, and the gun being loaded did not persist through the WAIT event because we could not deduce that it was ‘not AB” with respect to waiting. 
Intuitively we would like to reason about “minimizing abnormal- ities” like this: we know ALIVE must be true in So, and nothing 4 How general is this problem? The question now arises: how dependent is this result on the specific problem and formulation we just presented? Does the same prob- lem arise if we use a different default logic or a different temporal formalism? sTo see why this is true, consider that in any model of A either T(ALIVE 5%~ is true, or it’s false. If it’s true we can immediately deduce an abnormality from Axiom 3. But if it’s false then either AB(ALIVE. LOAD. So) or AB(ALIVE. WAIT. S1) would have to be true. In either case we must have at least one abnormality. KNOWLEDGE REPRESENTATION / 33 1 We can easily express the theory above in Reiter’s logic: we use the same first-order axioms from Figure 1, but instead of circum- scribing over AB we represent “minimizing abnormal individuals” with a class of (normal) default rules of the form D={ : M yAB(f, e, s) > lAB(I, e,s) only a single default rule? It turns out that conflict between rulG arises in our domain in a different, more subtle, manner. To see how, recall how we built the first minimal model (that of Figure 2b). The idea was that we assumed one ‘normality,” then went on to make all possible deductions, then assumed another “normality,” and so on. The picture looks something like this: (where any individual may be substituted for < e, and s). Recall that extensions are defined proof-theoretically instead of in terms of models, so we must translate the minimal models shown in Figure 2 (b and c) into sets of sentences; the question becomes whether the following sets are default-logic extensions: where the conflict to notice is that as a result of assuming a ‘nor- mality” we could deduce an abnormality. The same thing happened when we build the model in Figure 2c, except the picture looks like A B(LOA DED, WAIT, SI) a T(LOADED, Sz) j . . . 
=s- AB(ALIVE, SHOOT, S2) E, this instead (reading from right to left): T(ALIVE. So) 7 AB(ALIVE. LOAD, So) T(A Ll VE. SJ T(LOA DED. SI) - AB(ALIVE. WAIT, S1) 7 AB(LOADED. WAIT. SI) T(ALIVE, S2) T(LOA DED, SJ AB(ALIVE. SHOOT, S2) 7 AB(LOADED. SHOOT, S2) T(DEA D, S3) T(LOA DED. S3) T(A LIVE. So) 7 AB(ALIVE, LOAD, So) T(ALIVE. S1) T(LOADED. SJ 7 AB(ALIVE. WAIT, S1) AB(LOADED. WAIT. S1) T(ALIVE, &) 7 AB(ALIVE, SHOOT, Sz) T(ALIVE, S3) AB(LOADED. WAIT, SI) s= . . . -e --, T(LOADED. s2) -+ 7 AB(ALIVE. SHOOT. 52). The only difference between the two models is that in the first case we started at the (temporally) earliest situation and worked our way forward in time, and in the second case we started at the latest point and worked our way backward in time. Another way to express the idea is that in the first model we always picked the Uearliest possible” ([ e, S) triple to assume “normal” and in the second model we always picked the latest. So the class of models we want our logic to select is not the “min- imal models” in the set-inclusion sense of circumscription, but the “chronologically minimal” models (a term due to Yoav Shoham): those in which normality assumptions are made in chronological or- der, from earliest to latest, or, equivalently, those in which abnor- mality occurs as late as possible. (In a richer temporal formalism the criterion chronological min- Of course these are partial descriptions of extensions. Each set also contains A and all tautologies, and in E,, for example, we also in- elude all sentences of the form 1 AB(f. e, s) for all individuals (t e, s) except (ALIVE, SHOOT. &). We will omit the proof that both E, and Er, are extensions, imality might not be the right one. If several years had lapsed be- though in our longer paper, [5), we carry it out in some detail. It tween the WAIT and the SHOT, for example, it would be reasonable should be easy to convince oneself that both sets satisfy the three to assume that the gun was no longer loaded. 
But chronological conditions we set down in Section 2: they contain A, they are closed minimality does correctly represent our simple notion of persistence: under deduction, and they are faithful to the default rule in the sense that facts tend to stay true (forever) unless they are “clipped” by a defined previously. To verify that they are both minimal, note that contradictory fact.) in both cases all the sentences except the default-rule assumptions indeed follow from the default assumptions and the axioms in A. So circumscription is not the culprit here-Reiter’s proof-theoretic default logic has the same problem. We can also express the same problem in McDermott’s nonmonotonic logic and show that the the- ory has the same two fixed points. Nor is the situation calculus to blame: in a previous paper [5] we use a simplified version of McDermott’s temporal logic and show that the same problem arises, again for all three default logics. In the next section we will show what characteristics of temporal projection lead to the multiple-extension problem, and why it appears that the three default logics are inherently unable to represent the domain There appears to be no way represent this criterion, either in published versions of circumscription6 or in the logics of Reiter or McDermott. The concept of minimality in circumscription is inti- mately bound up with the notion of set inclusion, and chronological minimality cannot be expressed in those terms. As far as Reiter and McDermott’s logics go, what we need is some way to mediate appli- cation of default rules in building extensions or fixed points, which is beyond the expressive power of (Reiter’s) default rules or of NML sentences involving the M operator. 6 Potential solutions correctly. 5 A minimality criterion for temporal reasoning We noted above that default-logic theories often generate multiple extensions. 
But characteristic of all the usual examples, like the one we used in Section 2, is the fact that the default rules of these theories were mutually exclusive: the application of one rule rendered other rules inapplicable by blocking their preconditions. Thus it comes as somewhat of a surprise that the temporal pro- jection problem should exhibit several extensions. How can there be conflicting rules in the same way we saw above when our theory has Two lines of work have been proposed as solutions to this prob- lem. Yoav Shoham in 1151 presents a logic that directly addresses the problem of representing causation in terms of “time flowing for- ward.? Rather than trying to extend existing nonmonotonic logics so that they capture this new minimality criterion’he instead starts with a precise description of the chronologically minimal models. He then demonstrates that when a certain restricted class of first-order theories are minimized with respect to how much is known about each situation (instead of minimizing what is true in each situation) the resulting theory has a unique chronologically minimal model. While Shoham’s logic handles the specific case of causal or temporal 6These include predicate circumscription and joint circumscription [8], for- mula circumscription and prioritized circumscription [9]. But see the note on pointwise circumscription below. 332 / SCIENCE reasoning, his solution is obviously not an answer to the question we pose about the general relationship between default reasoning and nonmonotonic logics. A second proposal, due to Vladimir Lifschitz in [7], involves a reformulation of and extension to predicate circumscription called pointwise circumscription, in which one minimizes a predicate one point at a time (in our example a point would be a (fact, event, situation) triple). 
The order in which points are minimized is specified by an object-language formula that can express the concepts of "temporally earlier" and "temporally later." Thus one is able to say something to the effect of "minimize abnormalities, but favoring chronologically earlier ones." Pointwise circumscription contains predicate circumscription as a special case, and has been shown to solve a simple example of interacting defaults that we presented in [5].

But what benefits do we realize from these new, more expressive, more complex versions of circumscription? The problem is that the original idea behind circumscription, that a simple, problem-independent extension to a first-order theory would "minimize" predicates in just the right way, has been lost along the way. Instead, a complex, problem-specific axiom must be found to rationalize a set of inferences which must themselves be justified on completely separate grounds. The real theory of reasoning is the minimality criterion. In this example it was Shoham's chronological minimality; for other cases of default reasoning there will be other criteria for adding deductively unwarranted conclusions to a theory. It contributes little to our understanding of the problem that these criteria can be expressed as a second-order circumscription axiom; the criteria are justifying the axiom rather than the other way around. The situation might be different if the second-order axiom were "productive," that is, if further, perhaps unforeseen conclusions could be drawn from it, mechanically or otherwise. But it can be very hard to characterize the consequences of the circumscription axioms for a reasonably large and complex theory, and when the consequences are understood, they may not be at all what we intended. The upshot is that no one really wants to know what follows from circumscription axioms; they usually wind up as hopefully harmless decorations to the actual theory.
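To make the chronological-minimality criterion concrete, here is a toy sketch (ours, not the paper's formalism) comparing the two extensions of the shooting scenario by when their abnormalities occur. Chronologically minimal models delay abnormalities as long as possible, so we prefer the extension whose abnormalities come later; the fluent names and time indices below are illustrative assumptions, not taken from the paper.

```python
# Each candidate extension is summarized by its set of "abnormalities":
# (fluent, time) pairs at which persistence fails.

# Extension 1: the gun stays loaded; ALIVE is clipped when SHOOT occurs (t = 2).
ext_shoot = {("alive", 2)}
# Extension 2: LOADED mysteriously ceases during WAIT (t = 1); the victim survives.
ext_unload = {("loaded", 1)}

def chronologically_preferred(a, b):
    """Prefer the extension whose abnormalities occur later: compare the
    sorted sequences of abnormality times lexicographically."""
    times_a = sorted(t for (_, t) in a)
    times_b = sorted(t for (_, t) in b)
    return a if times_a > times_b else b

preferred = chronologically_preferred(ext_shoot, ext_unload)
print(preferred)  # {('alive', 2)}: the intended extension, where the gun stays loaded
```

The point of the sketch is that the preference is an ordering over models keyed to time, which is exactly what set-inclusion-based minimization cannot express.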
7 Conclusion

We have presented a problem in temporal reasoning, causal or temporal projection, that involves defeasible inference of the sort normally associated with nonmonotonic logics. But upon writing axioms that describe temporal projection in an intuitive way, we found that the inferences licensed by the logics did not correspond to our intentions in writing the axioms. There seem to be two reasons for this: that conflicting default rule instances lead to unexpected multiple fixed points (minimal models), and that our preference of one extension over another (our criterion for minimality) depends on an ordering of individuals that cannot be expressed by circumscribing over any predicate or set of predicates, or by the default rules in the other nonmonotonic logics.

At this point we need to re-evaluate the relationship between nonmonotonic logics and human default reasoning. We can no longer engage in the logical "wishful thinking" that led us to claim that circumscription solves the frame problem [9], or that "consistent" is "to be understood in the normal way it is construed in nonmonotonic logic [1]." From a technical standpoint, there is no "normal way" to understand the M operator, or the Reiter default rules, or a theory circumscribed over some predicate, apart from the proof- or model-theory of the chosen logic. The term "consistent" has too often been used informally by researchers (e.g. in [6]) as if it had an intuitive and domain-independent meaning. We have shown that in at least one case a precise definition of the term is much more complex than intuition would have us believe, and that the definition is tightly bound up with the problem domain. As such, the claim implicit in the development of nonmonotonic logics, that a simple extension to classical logic would result in the power to express an important class of human nondeductive reasoning, is certainly called into question by our result.
References

[1] Charniak, Eugene, "Motivation Analysis, Abductive Unification, and Non-Monotonic Equality", Cognitive Science, to appear.
[2] Davis, Martin, "The Mathematics of Non-Monotonic Reasoning", Artificial Intelligence, vol. 13 (1980), pp. 73-80.
[3] Etherington, David W., "Formalizing Non-Monotonic Reasoning Systems", Computer Science Technical Report No. 83-1, University of British Columbia.
[4] Etherington, David W. and Raymond Reiter, "On Inheritance Hierarchies with Exceptions", Proceedings AAAI-83, pp. 104-108.
[5] Hanks, Steven and Drew McDermott, "Temporal Reasoning and Default Logics", Computer Science Research Report No. 430, Yale University, October 1985.
[6] Joshi, Aravind, Bonnie Webber and Ralph Weischedel, "Default Reasoning in Interaction", Proceedings of the Non-Monotonic Reasoning Workshop, AAAI, October 1984, pp. 151-164.
[7] Lifschitz, Vladimir, "Pointwise Circumscription", unpublished draft of March 11, 1986.
[8] McCarthy, John, "Circumscription - A Form of Non-Monotonic Reasoning", Artificial Intelligence, vol. 13 (1980), pp. 27-39.
[9] McCarthy, John, "Applications of Circumscription to Formalizing Common Sense Knowledge", Proceedings of the Non-Monotonic Reasoning Workshop, AAAI, October 1984, pp. 295-324.
[10] McCarthy, John, and P. J. Hayes, "Some Philosophical Problems from the Standpoint of Artificial Intelligence", in: B. Meltzer and D. Michie (eds.), Machine Intelligence 4, Edinburgh University Press, 1969, pp. 463-502.
[11] McDermott, Drew V., "A Temporal Logic for Reasoning About Processes and Plans", Cognitive Science, vol. 6 (1982), pp. 101-155.
[12] McDermott, Drew V. and Jon Doyle, "Non-Monotonic Logic I", Artificial Intelligence, vol. 13 (1980), pp. 41-72.
[13] Perlis, Donald, and Jack Minker, "Completeness Results for Circumscription", Computer Science Technical Report TR-1517, University of Maryland.
[14] Reiter, Raymond, "A Logic for Default Reasoning", Artificial Intelligence, vol. 13 (1980), pp. 81-132.
[15] Shoham, Yoav, "Time and Causation from the Standpoint of Artificial Intelligence", Computer Science Research Report, Yale University, forthcoming (1986).
Time Representation: A Taxonomy of Interval Relations *

Peter Ladkin
Kestrel Institute
1801 Page Mill Road
Palo Alto, CA 94304-1216

Abstract

James Allen in [All2] formulated a calculus of convex time intervals, which is being applied to commonsense reasoning by Allen, Pat Hayes, Henry Kautz and others [AllKau, AllHay]. For many purposes in AI, we need more general time intervals. We present a taxonomy of important binary relations between intervals which are unions of convex intervals, and we provide examples of these relations applied to the description of tasks and events. These relations appear to be necessary for such description. Finally, we provide logical definitions of a taxonomy of general binary relations between non-convex intervals.

Introduction

James Allen in [All2] formulated a calculus of convex time intervals, which is being applied to commonsense reasoning by Allen, Pat Hayes, Henry Kautz and others [AllKau, AllHay]. Convex intervals are intuitively those which have no gaps; the term convex comes from topology. Allen's calculus is a finite relation algebra in the sense of Tarski [JoTa1, JoTa2, Mad1]. It has 13 atoms, which Allen enumerates, and hence the algebra has 2^13 elements. We refer to the elements of this algebra as convex relations. There are close relations between algorithms used by Allen [Freu] and work in representations of relation algebras [Mad1, Com1]. We present some mathematical results on Allen's algebra in [LadMad]. Other ways of representing time in AI have been argued for in [McDer1, McDer2].

Here, we investigate the binary relations that can hold between intervals which are unions of convex intervals. We call such relations non-convex relations. These intervals consist intuitively of some (maximal) convex subintervals with convex gaps in between them. We start by discussing points-based and intervals-based representations of time.
We then present a taxonomy of important binary relations between intervals which are unions of convex intervals, and we provide examples of these relations applied to the description of tasks and events. These relations appear to be necessary for such description. Finally, we provide logical definitions of a taxonomy of general binary relations between non-convex intervals.

The combinatorial explosion of possible binary relations between unions-of-convex intervals is dampened by considering only a subset of all possible relations. However, results of the author and Roger Maddux show that there are infinitely many relations definable in the algebra generated by these intervals [LadMad]. The notion of convex interval is definable in the algebra, as are the notions of having exactly (greater than, less than) n maximal convex subintervals, for each n [LadMad].

* This work was partially supported by RADC contract F30@X-84-C-0109 and DARPA contract N00014-81-C-0582.

Instants, Intervals and the Representation of Periods

In [Lad1], we discussed points-based and intervals-based ways of representing time. Project management systems, amongst others, need a way of representing periods of time over which tasks happen, are scheduled, etc. There is a choice to be made between instant-based and interval-based representations of periods.

Instants are atomic, indivisible entities which do not overlap, and are usually partially or linearly ordered. The order is usually called later than. Instants have no duration. This notion is used in the semantics of serial or concurrent programming languages with atomic instructions. Instants of time are identified with states of the system, and attached to these instants are propositions which describe the internal, non-temporal structure of the states. Use of instants in this way can be referred to as taking snapshots, and this approach is often taken when the system to be modelled is clocked.
All snapshots are then synchronised with the clock, and the problem of determining durations of periods is reduced to a count of clock interrupts.

To build periods from instants, we have to specify a range of instants, e.g. period(t1, t2) ≡ {t : t1 < t ≤ t2}. There is a question about whether to include endpoints, which we shall refer to again. A more complex, but useful, kind of period can be specified by taking (finite or arbitrary) unions of these basic convex periods. We then have periods which can represent, say, the time during which a given process has control of the processor in a time-shared environment.

Intervals represent time periods directly. Intervals have duration, and are not necessarily indivisible. They are thus an abstraction from the properties of sets of time instants with measure. Thus, there are 12 ways that intervals may be related, excluding equality, e.g. precedes, overlaps, contained in [All2]. By contrast, instants can be related in only two ways, earlier than and later than. To determine what structure we need in this context, we believe it is best to work with the abstraction directly. This position is argued in [Lad1, All2, AllKau, AllHay]. We consider the sets-of-points notion as one possible interpretation of intervals.

The use of intervals is not restricted to AI. [Lam1] defines an ordering on intervals (there referred to as sets of events), in order to prove the correctness of certain concurrency algorithms. Interval representations are also considered in [vBen, Hum, Dow]. There are mathematical constructions that convert point structures to interval structures and vice versa, e.g. the sets-of-points with measure construction [vBen]. See also [LadMad]. In particular, [Hum] resolves some supposed difficulties in the definition of truth of propositions over intervals.

(From: AAAI-86 Proceedings. Copyright ©1986, AAAI, www.aaai.org. All rights reserved.)
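The 13 atomic relations (12 plus equality) mentioned above are determined entirely by comparisons between the intervals' endpoints. The following sketch uses our own encoding (intervals as pairs (s, e) with s < e; the relation names follow Allen's calculus) to compute which atom holds:

```python
def allen_relation(i, j):
    """Name of the Allen relation between convex intervals i = (a, b) and
    j = (c, d), assuming a < b and c < d."""
    (a, b), (c, d) = i, j
    if b < c:               return "precedes"
    if b == c:              return "meets"
    if a < c < b < d:       return "overlaps"
    if a == c and b < d:    return "starts"
    if c < a and b < d:     return "during"
    if c < a and b == d:    return "finishes"
    if a == c and b == d:   return "equals"
    # otherwise one of the six converses holds; recurse with arguments swapped
    converse = {"precedes": "preceded by", "meets": "met by",
                "overlaps": "overlapped by", "starts": "started by",
                "during": "contains", "finishes": "finished by"}
    return converse[allen_relation(j, i)]

print(allen_relation((1, 3), (2, 5)))   # overlaps
print(allen_relation((1, 10), (2, 3)))  # contains
print(allen_relation((1, 2), (2, 4)))   # meets
```

For any pair of proper intervals, exactly one direction of the pair matches one of the seven direct cases, so the recursion terminates after at most one swap.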
Additional reasons we prefer to work with intervals are:

- intervals provide a natural way of talking about duration, the length of time over which something happens, because they are an abstraction of the properties of periods of time
- interval notation is extensible: complexifying time structure doesn't lead to changes in syntax; whereas point notation isn't extensible, in that the number of points needed to specify a time structure varies with the complexity of the structure, e.g. we need 2 × n points to specify the union of n convex periods
- there are unresolved difficulties with the endpoints of time periods, whether specified in point structures or interval structures. These need to be resolved before any implementation of time is attempted, but the difficulties are treated in a more ad hoc manner by points-based models of time. Notions such as temporal conjunction of propositions may be expressed in interval formulations [Hum], but not easily (so far) in standard points-based temporal logic. The interval approach allows all possible relations between endpoints, whereas such points-based approaches as [McDer1, McDer2] have to choose a convention which is then hard-wired into the semantics. In terms of [All2], the points-based approach has to choose between precedence (don't include the endpoint) or overlapping (include the endpoint), and usually rules out meeting for any intervals. See [Hum] for another example involving temporal conjunction.

The Choice of Relation Primitives

There are too many discrete ways that unions of convex intervals may be related to each other. An exhaustive enumeration is infeasible, because:

Theorem 1. The number of relations between unions of convex intervals is at least exponential in the number of maxconsubints.

In fact, a much sharper result is true (see [LadMad]), but we intend only to establish infeasibility here.

Proof: We prove the theorem by enumerating the relations between two intervals with n maxconsubints and using an inductive argument. Consider two intervals which are the unions of 2 convex subintervals each. Suppose the first subinterval of each is entirely disjoint from the second subinterval of the other. Then each first maxconsubint precedes the second maxconsubint of the other. The intervals are related in 13^2 ways, including equality, since the first maxconsubints can be related in 13 ways, including equality, and so can the second maxconsubints. When we consider that the first maxconsubint of each may be related by other than precedence to the second maxconsubint of the other, e.g. they may overlap, or meet, we see there are more than 13^2 relations overall. Now consider two intervals with n + 1 maxconsubints, such that the first n maxconsubints of each interval all precede the final maxconsubint of the other. By the inductive hypothesis, there are more than 13^n ways the subintervals consisting of the first n maxconsubints may be related. The final two maxconsubints may be related in 13 ways, and therefore the total number of possible relations is more than 13^(n+1). Again, when we consider that the final two maxconsubints may be related by overlaps or meets to the penultimate maxconsubint of the other, we notice there are many more relations than just those we enumerated in the proof. Hence we have established the base and the inductive steps, and we draw the conclusion of the induction. End of Proof.

Relation Primitives for Unions of Convex Intervals

Intervals which are unions of convex intervals occur naturally. For example, any recurring time period can be represented: we can regard the period MONDAYS as being composed of each individual Monday; LABOR-DAYS is likewise the union of convex intervals, consisting of each individual Labor Day; the period of the regular weekly meeting with the boss is also a union of convex intervals, each of them the period of a single meeting. These kinds of intervals seem to be among the most useful of the non-convex cases, and since we have reason to hope that knowledge gleaned from considering the convex case will transfer in part, we consider unions of convex intervals in detail. We develop further the definition of time units, which include examples such as the above, in [Lad2].

An interval which is a union of convex intervals looks like this:

(figure: an interval i drawn as three disjoint segments on a time line)

This interval i has three "parts", i.e. maximal convex subintervals, which we call maxconsubints.

To avoid the combinatorial explosion implied by the theorem, our basic relations don't depend on the number of maxconsubints. It is intuitively plausible that we don't need relations that depend on the number of maxconsubints for expressing properties of time periods associated with actions, tasks, events or propositions. However, the relation algebra generated by the relations we consider is still infinite, and still enables us to define the class of intervals with exactly n subintervals, for each n [LadMad]. The approach we take will generalise the convex relations, by introducing functors that generate non-convex relations from convex relations, by enumerating new subclassifications of relations that weren't there in the convex case, and by enumerating the various relations that arise from considering just the first and last maxconsubints. Additionally there is one relation, bars, which is not obtained by generalising the convex case in some way.
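The lower bound of Theorem 1 can be spot-checked by brute force. The sketch below uses our own encoding (not from the paper): it fingerprints the relation between two 2-component intervals by the endpoint order-types of their component pairs, which determine the component Allen relations, and counts the distinct fingerprints over a small integer domain. The count exceeds 13^2 = 169, in line with the base case of the proof.

```python
from itertools import combinations

def endpoint_order(i, j):
    """Fingerprint of the Allen relation between convex intervals i and j:
    the order-type of the four endpoint comparisons determines which of
    the 13 atomic relations holds (for proper intervals with s < e)."""
    (a, b), (c, d) = i, j
    cmp = lambda x, y: (x > y) - (x < y)
    return (cmp(a, c), cmp(a, d), cmp(b, c), cmp(b, d))

def signature(I, J):
    """Componentwise fingerprints between two 2-component intervals."""
    return tuple(endpoint_order(x, y) for x in I for y in J)

points = range(8)
convex = list(combinations(points, 2))  # all proper intervals over the domain
# 2-component non-convex intervals: pairs of convex intervals with a gap
nonconvex = [(p, q) for p, q in combinations(convex, 2) if p[1] < q[0]]

sigs = {signature(I, J) for I in nonconvex for J in nonconvex}
print(len(sigs) > 13 ** 2)  # True: more patterns than the 169 of the base case
```

Restricting both intervals' first components to {0..3} and second components to {4..7} already realises all 13 × 13 componentwise combinations; configurations where a first component overlaps the other interval's second component add further patterns, which is exactly the observation the proof makes.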
We obtain the following relations between unions of convex intervals:

- those generated by the functors mostly, always, partially, sometimes, and disjunction from convex relations (always may be defined in terms of mostly)
- contains
- disjoint from, which splits into:
  - precedes and follows
  - meets and is met by
  - intermingles with, which splits further into:
    - disjointly-contains and disjointly-contained by
    - disjointly-overlaps and disjointly-overlapped by
    - begins preceding and begins following
    - ends preceding and ends following
    - surrounds and is surrounded by
- strictly intersects, splitting into:
  - begins after, which splits into:
    - begins in
    - begins with
    - begins following
  - ends before, which splits into:
    - ends in
    - ends with
    - ends preceding
  - begins before and ends after, with the corresponding case splits
  - begins at and ends at
- bars, which is a new relation not generalised from convex relations

We give below the definitions of the relations, and follow with examples to show they are naturally occurring. We conclude that these relations are necessary for expressiveness in a calculus of intervals which are unions of convex intervals. These relations are not atomic (not disjoint as sets of interval pairs), and some of them are definable from others. For example, surrounds is the conjunction of the relations begins before and ends after, i.e. begins before ∧ ends after, where

(i (R ∧ S) j) ≡ ((i R j) ∧ (i S j))

The Definition of the Relation Primitives

The Intended Calculus

We intend that these relations will be manipulated algebraically, that is, by considering only derived relations defined in the relation algebra generated by these relations [JoTa2, Mad1, LadMad]. We are not concerned with first-order definability, since we don't intend to use a first-order theorem-proving approach, and thus we have many more relations than we would need if we were using a first-order language approach.
We believe the payoff is in the simpler structure of an algebraic theory.

Informal Definitions of the Primitives

Define a component of a non-convex interval to be a maxconsubint. We shall speak of the n'th component of i and the n'th component of j, where i and j have finitely many components, as a matched pair of subintervals. In case of i and j having infinitely many components, we assume without defining it a one-to-one function that matches the "closest" components. This function may, in fact, be rigorously defined [LadMad]. R˘ is the converse relation to R; i.e. (i R˘ j) iff (j R i). We draw example intervals i and j for each relation; we represent them on different lines, but they are intended to be intervals on the same time line. (It would be better to use two colors.)

The relation functors are:

- mostly: i mostly R j, where R is a convex relation, iff for every component of j, there is a component of i that is R to it. This allows the possibility that there are other components of i, but not of j. E.g. i mostly meets j.
- always: i always R j, where R is a convex relation, iff matched pairs of components of i and j are R to each other. Alternatively, a form of the definition that will work for both finite and infinite unions of convex intervals: every component of i is R to some component of j, and every component of j is R˘ to some component of i. Always is definable from mostly: i always R j iff i mostly R j and j mostly R˘ i, where R˘ is the converse relation to R. E.g. i always meets j.
- partially: i partially R j, where R is a convex relation, iff some pairs of components of i and j are R to each other, and all others are disjoint. This allows the possibility that the disjoint intervals may meet. E.g. i partially meets j.
- sometimes: i sometimes R j, where R is a convex relation, iff some pairs (at least one pair) of components of i and j are R to each other. E.g. i sometimes meets j.
- disjunction: i (R ∨ ... ∨ Q) j iff every pair of components is related by R or ... or Q or precedes or follows. E.g. i (meets ∨ contains) j.
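Read operationally, the functor definitions are quantifier nests over components. A minimal sketch, assuming intervals are given as lists of maxconsubints (endpoint pairs) and R is any predicate on convex intervals; the representation is ours, not the paper's:

```python
def mostly(R, i, j):
    """i mostly R j: every component of j has some component of i R to it."""
    return all(any(R(x, y) for x in i) for y in j)

def always(R, i, j):
    """i always R j: i mostly R j, and j mostly R-converse i."""
    return mostly(R, i, j) and mostly(lambda y, x: R(x, y), j, i)

def sometimes(R, i, j):
    """i sometimes R j: at least one pair of components is R-related."""
    return any(R(x, y) for x in i for y in j)

# the convex relation "meets" on endpoint pairs
meets = lambda x, y: x[1] == y[0]

i = [(0, 2), (5, 7)]
j = [(2, 4), (7, 9)]
print(always(meets, i, j))                      # True
print(sometimes(meets, i, [(2, 4), (10, 12)]))  # True
print(mostly(meets, [(0, 2)], j))               # False: nothing meets (7, 9)
```

Note how always falls out of mostly and the converse, exactly as the text defines it; partially would additionally require the non-R-related pairs to be disjoint, and is omitted here for brevity.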
..V Q j iff every pair of components is related by R or . . . . or Q or precedes or follows E.g. i meets V contains j 362 / SCIENCE i - - j Many of the convex classifications generalise directly to the non-convex case. However, some of the convex classifications get new subclassifications in the non-convex case, notably dis- joint from, which obtains a new category of intermingles, which itself has subclassifications, and strictly intersects, which ob- tams many new subcategories. Some of the new subcategories are valid for both intermingles, which is a category of disjoint, and strictly intersects, which is a different category. l contains; unchanged from the convex case; i contains j iff every component of j is contained by some component of i i -- j - - l disjoint from, which is a symmetric relation, and can be classified into: - precedes, as in the convex case; precedes is anti- symmetric, and i precedes j iff all subintervals of i precede all subintervals of j i --- j -- - follows, the inverse of precedes - meets, antisymmetric; i meets j iff the final compo- nent of i meets the first component of j i -- j - is met by, the inverse of meets - intermingles with; new in the non-convex case, symmetric, and itself has subclassifications enumer- ated below l strictly intersects, which has new subclassifications gen- erated by the relation functors, as well as the new sub- categories enumerated below We now enumerate the subclassifications of strictly intersects. l i begins after j; which is split into the mutually exclusive cases: - i begins in j; the leftmost component of i is over- lapped by a component of j. 
i j -- - i begins with j; the leftmost component of i is overlapped by the leftmost component of j i - - j - - i begins following j; some component of j precedes all components of i 1 j - l i ends before j; split into the mutually exclusive cases: - i ends in j; the rightmost component of i overlaps a component of j i - j -- - i ends with j; the rightmost component of i over- laps the rightmost component of j i - - - j -- - i ends preceding j; some component of j succeeds all components of i. j -- l the converses begins before and ends after, which are also split into cases, giving begins preceding and ends following, and the converses of the other four relations. We can’t find any useful names for these other four at present l i begins at j; the leftmost component of i starts the leftmost component of j i -~ j - -- o i ends at j; the rightmost component of i finishes the rightmost component of j i - J -- We enumerate the subclassifications of intermingles with: l i disjointly-contains j; equivalent to i is disjoint from and surrounds j i - j l i is disjointly-contained in j; the converse of disjointly contains KNOWLEDGE REPRESENTATION / 363 l i disjointly-overlaps j; equivalent to di+int and begins preceding and ends preceding tion task l always meets: i - j l i is disjointly-overlapped by j; the converse of disjointly- overlaps Finally, we note there are certain classifications that are valid in both the intermingling and the intersecting cases: l begins preceding, begins following, ends preceding and ends following, with the definitions given in the intersectin.q case valid also for the interminglinq case l i surrounds j; equivalent to i begins before and ends after j in the intersecting case, and disjointly-contains in the disjoint case. We illustrate the intersecting case: i - j l i is surrounded by j; the converse of surrounds Additionally, there is a polyadic relation that is of some importance. 
We illustrate it for two intervals, and it should be clear how to generalise it to many. In our calculus, we only consider the binary case [LadMadj. l i bars j; the union of i and j is convex. Bars is a sym- metric relation, and commutative in the general case i - j - - Examples of Relations Between Non-Convex In- t ervals We illustrate the relations by examples drawn from general pro- cesses, procedures, tasks and occurrences. We conjecture that the most useful applications of the calculus will be in the areas of task description and management, action theory and process theory. The reader may observe that many of the relations above are converses of relations already included in the enumeration. We include enough examples to show that the relations are useful and natural for descriptive purposes. Each example has the form of expressing a relation between two tasks, events or actions. Suppose P is such a creature. Then we attach to P the interval int(P) [LadS]. Th en, for two tasks, P and Q, we can consider the interval relation R(int(P), int(Q)) between the associated intervals. This relation R appears above each example. l mostly meets: - after the committee has reviewed the market on Mon- day, the brokers may act to buy the desired stock. - when designs are finalised, the programmers assigned may be immediately available for the implementa- - after the committee has reviewed the market on Mon- day, the brokers always act to buy the desired stock. - when designs are finalised, the programmers assigned should be immediately available for the implementa- tion task 0 always overlaps: - investigation of the system crash starts before system service is restored, and continues afterwards. - preparations for performing the task should be initi- ated while the design team is finishing the detailed description 0 (overlaps V contains): - investigation of the system crash starts before sys- tem service is restored, and sometimes continues af- terwards. 
  - these tasks are always concurrent: work on the processor configuration and the distributed system design task
- partially contains:
  - if you need to cross the road, you do so while there's no traffic.
  - task 34 should only be worked on when Fred and Mary are available for it
- partially meets:
  - when system service degrades, sometimes we need to reboot
  - the distributed system design task may need to be followed by a feasibility study
- begins in:
  - the emergency procedures were introduced and used during the company reorganisation
  - the implementation should be commenced while the system is being configured
- ends in:
  - the final reorganisation before dissolution occurred during last year's first financial crisis
  - the implementation should be completed while the system is in the test stage
- begins with: regular system backups were started during the first time the system lost a drive because of a head crash
- ends with: communication with other machines ceased with the last failure of the main processor of the gateway
- begins at: tasks A and B are independent starting tasks for the project
- ends at: finishing a project at the deadline date

Relation Primitives for General Non-Convex Intervals

General non-convex intervals correspond to arbitrary sets of time-points, in a points-based model of time. To define the relations logically, we can either assume that certain relations are primitive, that a notion of subobject is primitive, or that part of is primitive (subobjects are parts of their superobjects, and contains and subobject are interdefinable also, as indicated below). The notion of subobject may be introduced by definition also from interval operators. For example, if we have a notion of intersection of intervals, which is natural in certain representations such as that of a set of points, we may stipulate that the result of any intersection of intervals is a subobject of all those intervals.
For another example, the argument intervals of a union operation on two intervals are subobjects of the result of the union. We do not consider operators on intervals in this paper, and prefer to avoid them where possible, since the addition of operators vastly complicates the algebra, which would no longer be simply a relation algebra in the sense of Tarski.

We have the following classification of relations between general non-convex intervals:

- contains
- disjoint from, which splits into:
  - precedes and follows
  - meets and is met by
  - intermingles with, which splits further into:
    - disjointly-contains and disjointly-contained by
    - disjointly-overlaps and disjointly-overlapped by
    - begins preceding and begins following
    - ends preceding and ends following
- strictly intersects, splitting into:
  - begins after
  - ends before
  - begins before and ends after, with the corresponding case splits
  - begins at and ends at
- bars

We shall refer to the basic object of time (point, interval or whatever) as a time object. For example, if you prefer points-based time notions, then you will represent intervals as sets of points, and your basic time object will be a set of points. We assume at present that there is no null object.

Allen and Hayes [personal communication, AllHay] are able to define the convex relations from one primitive, in a first-order manner. Van Benthem [vBen] uses two. We have more, to enable us to keep the logical form of the definitions to a statement with at most two bounded quantifiers. We give English descriptions of the definitions, but it should be obvious how to translate them into a first-order logical language with the declared primitives.

Given a basic time object, we define its subobjects as those objects which are contained in it. So, for example, for the set-of-points notion, the subobjects of i are precisely the subsets of i.
For the convex interval notion, subobjects are convex subintervals, and for the unions-of-convex-intervals notion, subobjects are unions of convex subintervals. We assume that in a given ontology of intervals, either the subobjects are precisely defined, or the notion of containment is primitive. They are interdefinable, as indicated below.

If subobject is a primitive notion, or, alternatively, containment is a primitive relation, and precedes and meets are also primitive relations, we can give the following definitions of the relations. We assume that the conditions in the subclassifications are conjoined with the conditions in the appropriate superclassification, so that we may avoid repeating parts of definitions; e.g. i disjointly contains j is to be read as (i is disjoint from j) ∧ some subobject of i is ...

- i contains j: j is a subobject of i
- i precedes j: a primitive relation
- i meets j: also primitive
- i is disjoint from j: i and j have no common subobjects. We note that this definition is adequate only because there is no null object, which would have to be a subobject of every object.
- i disjointly contains j: some subobject of i precedes all subobjects of j, and some subobject of i follows all subobjects of j
- i disjointly overlaps j: some subobject of i precedes all subobjects of j, and some subobject of j follows all subobjects of i
- i begins preceding j: some subobject of i precedes all subobjects of j
- i begins following j: some subobject of j precedes all subobjects of i
- i ends preceding j: some subobject of j follows all subobjects of i
- i ends following j: some subobject of i follows all subobjects of j
- i strictly intersects j (also known as i overlaps j): i and j have a common subobject, and neither i contains j nor conversely. Notice this relation is symmetric.
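Under the set-of-points interpretation, which the text allows, these definitions reduce to endpoint comparisons, since singleton subobjects witness the existential quantifiers. A sketch with finite sets of integers standing in for arbitrary point sets (the representation is ours):

```python
def contains(i, j):          return j <= i              # j is a subobject of i
def disjoint(i, j):          return not (i & j)         # no common subobject
def begins_preceding(i, j):  return min(i) < min(j)     # some part of i precedes all of j
def ends_following(i, j):    return max(i) > max(j)     # some part of i follows all of j
def begins_at(i, j):         return min(i) == min(j)    # neither begins preceding the other

def strictly_intersects(i, j):
    # common subobject, but neither contains the other; symmetric
    return bool(i & j) and not contains(i, j) and not contains(j, i)

i = {1, 2, 5, 6}   # a non-convex "interval": two clumps of points
j = {2, 3, 4}
print(strictly_intersects(i, j), begins_preceding(i, j), ends_following(i, j))
# True True True
```

For instance, begins preceding holds exactly when min(i) < min(j): the singleton {min(i)} then precedes every subobject of j, and conversely any subobject of i that precedes all of j must lie entirely below min(j).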
We can turn this relation into an antisymmetric relation, as overlaps is for Allen, with our primitives, by asserting that the subclassifications below are mutually exclusive and exhaustive of the strictly intersects relation. However, we would consider this move to be a claim about the structure of a particular interval model. For example, if one were to consider axiomatising the structure of closed sets of real numbers, one might want to assert that meets is the empty relation (since two closed sets can only meet by intersecting at a point, which is a closed set); that precedes is a dense (partial) order (any two non-intersecting closed sets may be separated by a closed set); etc. We prefer to leave the assertion of exhaustiveness as a structure axiom if it is needed. Thus we must allow strictly intersects the luxury of symmetry.

KNOWLEDGE REPRESENTATION / 365

i begins after j: some subobject of j precedes all subobjects of i

i ends before j: some subobject of j follows all subobjects of i

i begins before j: some subobject of i precedes all subobjects of j

i ends after j: some subobject of i follows all subobjects of j

i begins at j: there is no subobject of i that precedes all subobjects of j, and symmetrically for j and i

i ends at j: there is no subobject of i that follows all subobjects of j, and symmetrically for j and i

For the case of bars, we did not include a monadic predicate in our language for selecting convex intervals. It is obvious that we would need such a predicate, whether primitive or defined, in order to define bars; however, this doesn't solve all the problems. For example, if we were to provide a predicate for convexity, we might try to define:

i bars j: there is a convex object k such that the subobjects of k are exactly the subobjects of i and j

Such an attempt doesn't work: consider arbitrary sets of real numbers, with subobjects being subsets, and convex objects being convex sets.
Let A = [a, b) and B = [b, c], where a < b < c. Then intuitively A bars B, since A ∪ B is [a, c], which is convex. However, let x, y be such that a < x < b < y < c. Then [x, y] is a subobject of A ∪ B, but isn't a subobject either of A or of B. It's probable that bars has to be a primitive relation.

We note that primitives for interval-based notions of time are discussed extensively in [vBen] and [Hum]. We note further that our proposed classification is much finer-grained than the primitives in these works, but that all are first-order definable from the primitives used therein. We have mentioned before that our motives for such profusion are algebraic. We note that [vBen] also provides an extensive discussion of the types of interval structure that may be obtained in different domains of application.

Acknowledgements

We thank Tom Brown, Allen Goldberg, Pat Hayes, Richard Jullig, Wolf Polak, Bob Riemenschneider, Richard Waldinger, and the referees for discussion and comments.

Bibliography

All1 : Allen, J.F., Towards a General Theory of Action and Time, Artificial Intelligence 23 (2), July 1984, 123-154.
All2 : Allen, J.F., Maintaining Knowledge about Temporal Intervals, Comm. A.C.M. 26 (11), November 1983, 832-843.
AllKau : Allen, J.F. and Kautz, H., A Model of Naive Temporal Reasoning, in Hobbs, J.R. and Moore, R.C., editors, Formal Theories of the Commonsense World, Ablex, 1985.
AllHay : Allen, J.F. and Hayes, P.J., A Commonsense Theory of Time, in Proceedings IJCAI 1985, 528-531.
Com1 : Comer, S.D., Combinatorial Aspects of Relations, Algebra Universalis 18, 1984, 77-94.
Dow : Dowty, D.R., Word Meaning and Montague Grammar, Reidel, 1979.
Freu : Freuder, E.C., Synthesizing Constraint Expressions, Comm. A.C.M. 21 (11), November 1978, 958-965.
Hum : Humberstone, I.L., Interval Semantics for Tense Logic: Some Remarks, J. Philosophical Logic 8, 1979, 171-196.
JoTa1 : Jonsson, B. and Tarski, A., Boolean Algebras with Operators I, American J. Mathematics (73), 1951.
JoTa2 : Jonsson, B. and Tarski, A., Boolean Algebras with Operators II, American J. Mathematics (74), 1952, 127-162.
Lad1 : Ladkin, P.B., Comments on the Representation of Time, Proceedings of the 1985 Distributed Artificial Intelligence Workshop, Sea Ranch, California.
Lad2 : a self-reference.
Lad3 : Ladkin, P.B., Primitives and Units for Time Specification, Proceedings of AAAI-86 (this volume).
LadMad1 : Ladkin, P.B. and Maddux, R.D., The Algebra of Time Intervals, in preparation.
Lam : Lamport, L., On Interprocess Communication Part I: Basic Formalism, Distributed Computing, to appear.
Mad1 : Maddux, R.D., Topics in Relation Algebras, Ph.D. Thesis, University of California at Berkeley, 1978.
McDer : McDermott, D., A Temporal Logic for Reasoning about Actions and Plans, Cognitive Science 6 (2), April-June 1982, 71-109.
McDer2 : McDermott, D., Reasoning about Plans, in Hobbs, J.R. and Moore, R.C., editors, Formal Theories of the Commonsense World, Ablex, 1985.
vBen : van Benthem, J.F.A.K., The Logic of Time, Reidel, 1983.
StarPlan II: Evolution of an Expert System

Ronald W. Siemens, Marilyn Golden, and Jay C. Ferguson
Ford Aerospace & Communications Corporation
Sunnyvale Operation, 1260 Crossman Avenue
Sunnyvale, California 94089-1198, (408)743-3206

ABSTRACT

An expert system for satellite anomaly resolution must perform monitoring, situation assessment, diagnosis, goal determination and planning functions in real time. StarPlan is such a system being developed at the Ford Aerospace Sunnyvale Operation. This paper details the evolution of the StarPlan architecture from a rule-based system in which multiple "experts" classified and resolved anomalies to a more generic architecture that utilizes an object model of the domain to perform fault diagnosis using causal reasoning. The StarPlan I architecture is described; the lessons learned in StarPlan I implementation are discussed; and the architecture of StarPlan II is presented.

1. INTRODUCTION

This is the second paper in a series on the evolution of the StarPlan architecture and knowledge representation [1]. StarPlan is an expert system that performs a fault diagnosis and resolution function for satellites [2]. The system monitors incoming telemetry from a satellite, alerts the satellite control operator to anomalous conditions and suggests corrective actions. The architecture of StarPlan I, the first generation of the system, is described in this paper, and the lessons learned during implementation are detailed. Our experience with StarPlan I led to a significant architectural restructuring around a knowledge representation scheme that captures a model of the domain and uses that model to perform fault diagnosis by utilizing the relational links between the objects of the domain model and the declarative description of the object behaviors. Production rules are used only when the information being captured is not defined well enough to be modeled.

2. STARPLAN I ARCHITECTURE

The StarPlan I system architecture is based on Minsky's Society of Experts approach [3]. There are multiple knowledge sources [4] that are customized to specific problems; knowledge sources exist at different levels of abstraction; both goal driven and opportunistic control strategies are used [5]; and the communication between knowledge sources occurs through higher level meta-monitors. The architecture of the system consists of five major components, shown in Figure 1.

[Figure 1: StarPlan I Architecture]

844 / ENGINEERING From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved.

o Guardians classify incoming data, filter relevant data, and translate the data through methods from numeric to symbolic ranges to derive a set of hypotheses.

o Monitors (anomaly resolvers) reason from the set of hypotheses established by the guardians to resolve specific classes of anomalies.

o Meta-Monitors are responsible for the control, interaction and data fusion of the individual monitors.

o Data Bases are used by the other components of the system to obtain relationships, facts, and other relevant information.

o The Simulator models the satellite systems.

The system is implemented in the Knowledge Engineering Environment (KEE) from Intellicorp. KEE allows object oriented programming within class/subclass/member hierarchies, message passing, inheritance, and active values along with an integrated graphics interface [6].

2.1 Problem Detection

Each guardian, upon initialization, attaches alarms (active demons) to the necessary data values in the telemetry database. These alarms initialize their range sensitivity parameters from the data range limits database. Upon telemetry receipt the attached alarms check the sensitivity range.
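The alarm-demon flow just described — alarms initialized from a range-limits database, firing a message to their guardian only when telemetry exceeds a sensitivity range — might be sketched as follows. All class names, parameter names and limit values here are invented for illustration, not StarPlan's actual KEE implementation:

```python
# Illustrative sketch of StarPlan I-style alarm demons: an alarm watches
# one telemetry datum, checks it against its sensitivity range, and
# notifies its guardian only when the range is exceeded. Names and
# limits are hypothetical.

RANGE_LIMITS = {  # limits database: symbolic range -> (low, high)
    "battery1.temperature": {"normal": (0.0, 40.0),
                             "emergency_overtemp": (40.0, 120.0)},
}

class Guardian:
    def __init__(self):
        self.messages = []

    def notify(self, parameter, symbolic_range, value):
        # In the real system the guardian would forward-chain through
        # its partitioned rule set here.
        self.messages.append((parameter, symbolic_range, value))

class Alarm:
    """Active demon attached to one telemetry value."""
    def __init__(self, parameter, guardian):
        self.parameter = parameter
        self.limits = RANGE_LIMITS[parameter]   # initialized from limits database
        self.guardian = guardian

    def on_telemetry(self, value):
        low, high = self.limits["normal"]
        if not (low <= value <= high):          # filter: only anomalies pass
            self.guardian.notify(self.parameter, "emergency_overtemp", value)

g = Guardian()
alarm = Alarm("battery1.temperature", g)
alarm.on_telemetry(25.0)   # in range: filtered out, no message sent
alarm.on_telemetry(55.0)   # out of range: guardian is notified
assert g.messages == [("battery1.temperature", "emergency_overtemp", 55.0)]
```

The alarm thus acts both as a filter (in-range values never reach the guardian) and as an abstractor (the guardian sees a symbolic range, not a raw number), matching the two roles the text assigns to it.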
When a range has been exceeded the alarm sends the attached guardian a message containing the violated range parameter and the relevant satellite object involved (i.e., a battery). Based on the data range limits database the guardian incorporates new range parameters. The alarms act both as a filter to minimize processor utilization and as an abstractor to reclassify specific data to a symbolic representation. This simplistic classification uses data ranges rather than specific data values.

2.2 Problem Determination

The guardians contain contextually partitioned subsets of rules [7] that watch for specific anomaly classes, which are usually grouped according to objects. A typical guardian may, for example, watch the batteries for associated anomalies (over or under temperature, current, or voltage problems). The attempt is to provide a cover set of rules to perform anomaly detection for a specific anomaly class that is small enough to be readily managed and verified by an expert. Once an anomaly is detected by the guardian, a monitor from a set of prototype monitors for that class of monitors (i.e., battery emergency overtemperature) is instantiated for the specific object (i.e., Battery 1) to resolve the anomalous condition. The guardian maintains the current status of the alarm demon messages as well as the monitor status (active/inactive). Further alarm messages regarding an anomaly with an active monitor are maintained but not acted on until the active monitor notifies the guardian that it has completed resolution of the problem and removes itself from the system. The guardians forward chain through the covering rule sets to try to match the incoming symbolic telemetry status patterns against known or expected anomaly patterns.

2.3 Problem Resolution

The monitors (anomaly resolvers) are goal-driven and contain rule-sets contextually partitioned to the specific anomaly class.
Each hypothesis contains a rule set to guide the strategy of diagnostic procedure, problem resolution methods, command sequences and operational considerations. Diagnostic strategy may require command recommendations to change satellite configuration (for safety or to eliminate specific hypotheses) which conform to an allowable set of command sequences designed to preserve spacecraft integrity. The monitor may require additional, or more detailed, telemetry data on spacecraft status, so a monitor may set its own alarm demons to watch for rapid or unexpected changes, and then reschedule itself to rerun the rule set at a future time, allowing the satellite system time to respond. Alarms may tag data to allow explicit temporal reasoning [8,9] if necessary. Upon anomaly resolution the status database is updated to reflect the state change. When the unexpected telemetry values that triggered the monitor have been controlled or corrected to the monitor's satisfaction and the system status has been updated, the monitor will delete its alarm demons and itself from the system, thereby allowing the guardian to once again set a monitor on that specific object and anomaly class if necessary. If the diagnostic procedure proves how an object failed, it will be marked as to the specific failure so that future diagnostic strategy and problem resolution reasoning can take the failed condition into account. For reasons of time constraints or satellite safety an object may be marked as having an unknown status, which may affect future actions in an entirely different manner.

2.4 Management of Multiple Experts

Since the guardians are looking for contextually specific patterns [10], even a single fault anomaly can activate multiple monitors that will be working on independent hypotheses.
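The guardian/monitor lifecycle described in Sections 2.2-2.3 — a guardian instantiating one monitor per (anomaly class, object), holding further alarm messages while that monitor is active, and the monitor removing itself after resolution — might be pictured by the following sketch (all names are ours, not the actual system's):

```python
# Sketch of monitor instantiation and self-removal. One active monitor
# per (anomaly class, object); further alarm messages for that pair are
# maintained but not acted on while the monitor is active. Invented names.

class Monitor:
    def __init__(self, anomaly_class, obj, guardian):
        self.anomaly_class, self.obj, self.guardian = anomaly_class, obj, guardian

    def run(self):
        # ...goal-driven diagnosis and resolution would happen here...
        # On completion the monitor notifies the guardian and removes itself.
        self.guardian.monitor_done(self.anomaly_class, self.obj)

class Guardian:
    def __init__(self):
        self.active = {}    # (anomaly_class, obj) -> active Monitor
        self.pending = []   # messages held while a monitor is active

    def anomaly_detected(self, anomaly_class, obj):
        key = (anomaly_class, obj)
        if key in self.active:
            self.pending.append(key)    # maintained but not acted on
        else:
            self.active[key] = Monitor(anomaly_class, obj, self)

    def monitor_done(self, anomaly_class, obj):
        # Guardian may set a new monitor on this object/class later.
        del self.active[(anomaly_class, obj)]

g = Guardian()
g.anomaly_detected("battery_overtemp", "battery1")
g.anomaly_detected("battery_overtemp", "battery1")   # held: monitor already active
assert len(g.active) == 1 and len(g.pending) == 1
g.active[("battery_overtemp", "battery1")].run()
assert not g.active                                   # monitor removed itself
```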
The conflicting, and sometimes contradictory, diagnostic procedures generated by the various monitors must be resolved by subsystem meta-monitors [11], which can give diagnostic control to the most urgent hypothesis within a group of monitors, usually centered around a satellite subsystem. The subsystem recommendations are coordinated in a top-level meta-monitor that decides overall strategy (i.e., if there are payload problems, status telemetry losses, and a power subsystem problem, then shut down non-essential payloads and allow the power subsystem monitor to make recommendations).

2.5 The Knowledge Base

The knowledge base consists of various databases that are used by the system to obtain required information. There are four databases to support the guardian and monitor functions. The limits database contains the various ranges that a telemetry value can take (i.e., emergency over temperature) with the upper and lower values for each range specified. All alarms reference these limits. These range limits may be modified by historical analysis, configuration status changes, or short-term expectations derived from the diagnostic procedure. The status database contains information reflecting status data received, inferred from commands sent, and discovered during the diagnostic process. This database represents the dynamic status of the satellite and is used from pass to pass for continuity and planning. The telemetry database contains the latest telemetry received from the satellite. The alarms are attached to this database. The commands are grouped into their own database to facilitate capture of expertise concerning order and allowable combinations in command strings.

2.6 The Simulator

The simulator is used to generate telemetry data for testing purposes. Mathematical models of the various satellite systems have been developed to allow the simulation of the satellite in various states. The simulator allows real time testing for completeness of classification rules.

3. LESSONS LEARNED IN STARPLAN I

As StarPlan I was extended by the Knowledge Engineers to cover more complex portions of the domain, several structure, system test and knowledge engineering issues surfaced that pointed to weaknesses in the architecture.

3.1 Structure

Although one of the key features of the architecture is the contextual partitioning of the rules, the control of distributed rule sets can be complex and costly in terms of overhead. This is especially true when the data driven system must deal with multiple manifestations of a single fault or multiple faults [12]. In addition, the structure (or lack thereof) of rules themselves hampered the use of generic mechanisms for knowledge manipulation (one goal of our design was to have a common core of generic mechanisms that would act on satellite-specific data so that expert systems for different satellites could be easily produced by swapping the satellite-specific data).

3.1.1 Multiple Manifestations of a Single Fault. When a fault occurs that causes telemetry associated with different classes of satellite objects to be out of limits, the guardians assigned to each class will instantiate monitors to handle the perceived problems. A meta-guardian is then required to detect that possibly a single fault, not multiple faults, is responsible for the problem, and an appropriate meta-monitor that could reason across both classes would be required to work on the problem solution in conjunction with the two class monitors. No meta-guardians were designed into StarPlan I, which caused a problem when dealing with anomalies that were manifested in several classes of objects. A method for focusing on the most likely cause(s) of the problem is required; the combinatorial explosion involved with meta-guardians makes that an unlikely choice.

3.1.2 Multiple Faults. In the rare event that multiple faults occur simultaneously onboard the satellite, multiple monitors are instantiated. A meta-monitor regulates and controls the monitors' processing and command sequencing. The problem is that the meta-monitor may subsume the lower-level monitors' strategies, which defeats the envisioned goals of partitioning. This seems inevitable as long as the diagnostic, problem resolution, and command sequence planning rules are interwoven.

3.1.3 Rule Structure. Rule-based systems offer enormous advantages over traditional software systems when it comes to separation of inference from control mechanisms. However, the lack of consistency and structure inherent in a rule-based system limit the use of generic processing mechanisms to pattern-matchers that operate only on patterns that exist in the data. If the data were structured in a manner that had semantic as well as syntactic significance, more generic problem-solving algorithms could be employed in the system.

3.2 System Test

Because of the possible side-effects introduced with any change to any rule in the system, testing of the system proved to be a difficult task [13]. The only really effective test approach for system validation is exhaustive testing, and re-testing after modification of the system.

3.3 Knowledge Engineering

One of the most difficult problems in building expert systems is obtaining the domain information from the expert and transferring it to an appropriate representation for use by the expert system. There were several problems encountered in collecting the knowledge for StarPlan I.

3.3.1 Mismatch Between Object Classes and Anomaly Classes. Although partitioned rule sets facilitated the knowledge acquisition process by constraining the expert to describe a small portion of the knowledge base at a time, the structure of the partitions did not always correspond well to the natural thinking processes of the domain experts.

3.3.2 Rules for Non-heuristics. When the original OPS-5 implementation of StarPlan was moved to the frame-based, semantic network provided by KEE, the factual data moved from rules to frames. However, the procedural and behavioral information was still embedded in rules, sustaining complex rule sets.

3.4 Recommendations

The two major recommendations arising from an analysis of StarPlan I were: (1) Separate the functions of classification, diagnosis, goal determination, and planning and command to provide more modularity and less overlap of functions performed at various levels of processing, and (2) Define knowledge representation techniques in place of rules that provide semantic knowledge that can be addressed by generic problem-solving mechanisms.

4. STARPLAN II ARCHITECTURE

The StarPlan II architecture separates the monitoring, situation assessment, diagnostic, goal determination and planning that was inherent in the monitors and meta-monitors of StarPlan I. These correspond to monitoring, problem identification, diagnosis, goal identification and plan and modify in Clancey's classification hierarchy [14]. The five major components of the new system are: the Active DataBase, Situation Assessment, Causal Diagnosis, Goal Determination, and Planning & Command, as shown in Figure 2.

[Figure 2: Generic Problem-Solving Architecture]

These modules all operate on the same underlying knowledge representation, which is generalized and constrained by the knowledge acquisition tools so that the domain experts can represent their environment in a consistent manner. The knowledge base contains a description of each object that can be reasoned about, given the telemetry available from the satellite. Each object defined has three basic parts: the attributes of the object, the relationship of the object to other objects in the satellite, and a behavioral description of the object. The behavioral description is captured declaratively in a process description language [15]. This language allows the expert to define the object's behavioral states, events and processes so that the information can be reasoned about.

4.1 Problem Detection

The general mechanism of the StarPlan I alarm demons was extended to create the Active DataBase [16]. The function of the Active DataBase is to monitor the incoming telemetry data to detect and notify the system when telemetry fails to meet expectations. All incoming data values are translated into symbolic point or interval values [17] that relate to the expectations that are in place at the time of data receipt. The expectations are symbolically expressed, and when those expectations are violated a la Schank [18], the Active DataBase sends a notification message to the Situation Assessment module that an event of interest has occurred, for example "Battery 1 temperature is critically high." Knowledge acquisition tools [19] were developed that allow the knowledge engineer (KE) to identify each incoming telemetry datum from the satellite and specify both the symbolic translation and event notification mechanisms to be applied. The KE can graphically enter ranges, trends or specific conditions to be applied to the data each time it is received from the satellite (or simulator), the events of which the Situation Assessment module should be notified and their relative importance. Much of the factual information that was represented as production rules in StarPlan I can be entered into the computer using the knowledge acquisition tools in a manner that is natural and logical to the KE, is consistent to allow generic control mechanisms and is self-documenting. During system operation, the telemetry data are acquired from the satellite in bursts called frames. A symbolic translation mechanism [20] assesses each datum of telemetry and sets its point and interval symbolic value (normal, high, low, increasing, not changing, unstable, etc.). Then the symbolic value is evaluated to see if the Situation Assessment mechanism is to be invoked (e.g., make the notification when the value is unstable and not increasing). After the Active DataBase has processed an incoming frame of data, it notifies the Situation Assessment module that a data cycle is complete.

4.2 Situation Assessment

As each data expectation failure notification is received from the Active DataBase, the Situation Assessment module identifies the objects that appear to be involved and the event in which they are involved. After all notifications are generated from an incoming telemetry frame, the Situation Assessment module sweeps the active objects with a focus mechanism [21] and extracts the list of objects of maximum interest. This object list is passed to a ranking mechanism which ranks the list to form a situation assessment agenda to be passed to the Causal Diagnosis module. The entire situation assessment mechanism operates on the structure of the knowledge base and on any knowledge entered by the KE/expert. There are several ranking mechanisms which can be used either singly or in groups.

4.3 Causal Diagnosis

The function of the Causal Diagnosis module is to perform a causal analysis to explain expectation failures [22]. Using the Situation Assessment agenda as a guide, the causal diagnostic mechanism can directly reference an object in the knowledge base and use its local attributes, relationship data, and behavioral description to determine what has failed [23]. For example, if the battery 1 temperature is too high, the diagnostic module can get battery 1's internal variables for the temperature, and look at the behavior associated with temperature.
This definition shows that the temperature is calculated from the internal variable current times the internal variable resistance plus the external variable heat from heater A. If the battery itself is not causing the excess temperature, the stored relationship data will provide indexes to the external objects that can contribute to the problem, and further analysis continues until the cause of the problem is detected. The output of the Causal Diagnosis module is a list of what objects are broken and the state in which they failed (e.g., Heater A on Battery 1 is failed open). During causal diagnosis, the diagnostic mechanism may initiate tests in the form of satellite configuration changes to prove/disprove a specific hypothesis under consideration. The diagnostic goal is passed to the Planning and Command module for planning and for mission operational constraint checking. The Planning and Command module may reformulate the plan on satellite safety or operational priority grounds.

4.4 Goal Determination

With the list of failed objects and their failure status, the Goal Determiner can then identify the configuration goals needed to resolve the anomaly [24]. In some cases the diagnostic procedure may have left the satellite in an unbroken condition (e.g., broken heater A is now isolated from power) and only the status of the object of concern must be marked with the failure so it will not be activated in the future. In other cases the satellite may be placed in a "safe" mode with all unnecessary functions shut down. The Goal Determination module must then provide a goal list for powering the systems back up, bypassing failed components. This goal list is passed on to Planning and Command for operation constraint checking.

4.5 Planning and Command

This module receives a set of goals and creates a plan for transitioning from the current state to the goal state, and then determines the command sequence necessary to accomplish the plan [25].
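A toy rendering of that plan-then-command step — diff the current configuration against the goal configuration, then map each required transition to a command — might look like the following. The device states and command mnemonics are hypothetical, not from StarPlan:

```python
# Toy sketch of the Planning & Command step: compute the transitions
# needed to move from the current state to the goal state, then emit
# one command per transition. States and command names are invented.

COMMANDS = {  # (device, target_state) -> command mnemonic
    ("heater_a", "off"): "HTR_A_OFF",
    ("heater_b", "on"):  "HTR_B_ON",
}

def plan(current, goal):
    """Return the (device, target_state) transitions still required."""
    return [(dev, state) for dev, state in goal.items()
            if current.get(dev) != state]

def command_sequence(current, goal):
    """Map each planned transition to its command."""
    return [COMMANDS[t] for t in plan(current, goal)]

current = {"heater_a": "on",  "heater_b": "off"}
goal    = {"heater_a": "off", "heater_b": "on"}   # bypass failed heater A
assert command_sequence(current, goal) == ["HTR_A_OFF", "HTR_B_ON"]
assert command_sequence(goal, goal) == []          # already at goal: no commands
```

The real module additionally checks the resulting sequence against operational constraints and can simulate its effects via the objects' behavioral descriptions before execution, as described next.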
Next, operational constraints are taken into consideration to ascertain the correctness of the plan. The behavioral descriptions of the objects in the domain can be consulted to look at the effects of the plan prior to execution.

4.6 Simulator

In StarPlan I the simulator used a mathematical modeling mechanism. The experts had to have an object simulation working before being able to test any of the rules for the object monitors. In StarPlan II the behavior of the object is declaratively defined [26] and is used as the basis of the simulator. Since the states, events to induce state changes, and processes that occur in each state are defined, it is possible to compile this declarative representation into LISP (or any other language) and execute it directly. Because the objects are constrained to using only internal variables or variables that can be accessed through relational links, the system is defined in a controlled manner. The simulator can be used to simulate all or part of the satellite telemetry; the objects that are to be simulated plus their external inputs can be selected to be run in simulation mode.

4.7 Knowledge Representation

It is the underlying structure provided by the knowledge representation methodology, PARAGON [27], supporting the functional modules (i.e., classification, diagnostics, etc.) that gives this system its powerful, generic application capability. This representation allows the expert to create concepts (a noun that describes a group of things or the actual instances of things) and the relationships that exist between concepts in the domain of interest, i.e., a model of the domain with which the system can reason. The selection of the knowledge representation method for StarPlan II followed an analysis of the most common techniques for knowledge representation, a summary of which is detailed in Table 1.
A hybrid knowledge representation scheme was designed that incorporated the strong points of each of the techniques and eliminated the weaknesses through an overriding requirement for consistent definition.

Table 1: Knowledge Representation Technique Tradeoffs

Rules (contextually dependent facts)
  Strengths: flexible; stand alone; represent poorly structured and/or limited information; available development tools.
  Weaknesses: lack of structure; no methodology of development; semantics difficult; problem-solving techniques poorly understood; difficult to manage; difficult to maintain; hinder generic development; difficult to represent control and/or temporal knowledge.

Object Oriented (frames of related facts and behavior using message passing for control)
  Strengths: flexible; data and behavior packed together; maintainable; available development tools.
  Weaknesses: no underlying principles or constraints; lack of development methodology; no associated problem-solving techniques.

System (ordered frames of related facts and behaviors)
  Strengths: maintainable.
  Weaknesses: lack of development methodology; one dimensional (limits number of relationships represented).

Semantic Network (graph of nodes, representing concepts, and links, representing relationships)
  Strengths: wide variety of relationships represented; some methodology; natural representation.
  Weaknesses: ambiguous definition of relationships; lack of defined problem-solving methods.

Blackboard (structure of domain and how levels of domain communicate or interact)
  Strengths: multiple levels; ability to define interaction between levels; independent knowledge sources contribute.
  Weaknesses: complexity of definition; lack of explicit control methodology.

4.8 Knowledge Acquisition Tools

The best way to achieve a consistent underlying knowledge representation structure throughout the PARAGON knowledge acquisition process [28] is to provide knowledge acquisition tools [29] which translate the experts' input into that structure.
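The three-part object description underlying this scheme (Section 4.7) — local attributes, relational links to other objects, and declarative behavior — might be sketched as below. This is our simplification for illustration, not the PARAGON language itself; the battery/heater numbers echo the worked example in Section 4.3:

```python
# Simplified sketch of a three-part object description: attributes,
# relational links to other concepts, and declarative behavior.
# An illustration of the idea only, not PARAGON itself.

from dataclasses import dataclass, field

@dataclass
class Concept:
    name: str
    attributes: dict = field(default_factory=dict)   # internal variables
    relations: dict = field(default_factory=dict)    # links to other concepts
    behavior: dict = field(default_factory=dict)     # variable -> formula

battery1 = Concept(
    name="battery1",
    attributes={"current": 2.0, "resistance": 5.0},
    relations={"heated_by": "heater_a"},
    behavior={
        # temperature = current * resistance + external heat from heater A
        "temperature": lambda obj, external:
            obj.attributes["current"] * obj.attributes["resistance"]
            + external["heater_a"],
    },
)

# A causal-diagnosis mechanism can evaluate the behavioral definition,
# and follow the relational link (here, to heater_a) when the local
# variables alone do not explain an observed value.
expected = battery1.behavior["temperature"](battery1, {"heater_a": 12.0})
assert expected == 22.0
assert battery1.relations["heated_by"] == "heater_a"
```

Because every behavioral formula may reference only internal variables or variables reachable through relational links, the same structure can serve both diagnosis (Section 4.3) and simulation (Section 4.6).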
A large effort has gone into producing tools that assist the expert in defining telemetry data, the expectations associated with the data, the concepts that comprise the domain, their interrelationships and their behavior.

5. SUMMARY

The StarPlan II architecture, its underlying knowledge representation scheme, and the automated knowledge acquisition tools are a vast improvement over the StarPlan I system. The consistent definitions applied throughout the knowledge acquisition process have allowed the development of generic control and problem-solving mechanisms. Perhaps the greatest benefit derived from StarPlan II is that not only will it facilitate building anomaly resolution systems for a wide variety of satellites, it is generic enough to be the basis for any problem-solving system in which the domain is understood well enough to be declaratively modeled.

REFERENCES

[1] Ferguson, J.C., R.W. Siemens, R.E. Wagner, STAR-PLAN: A Satellite Anomaly Resolution and Planning System, Proceedings of AAAI Workshop on Coupling Symbolic and Numerical Computing in Expert Systems, August 27-29, 1985.
[2] Golden, M., R.W. Siemens, An Expert System for Automated Satellite Anomaly Resolution, Proceedings AIAA/ACM/NASA/IEEE Computers in Aerospace V Conference, October 21-23, 1985.
[3] Minsky, M., Matter, Mind & Models, in Semantic Information Processing, edited by Marvin Minsky, MIT Press, Cambridge, Mass., 1968.
[4] Hayes-Roth, B., The Blackboard Architecture: A General Framework for Problem Solving, Stanford University Heuristic Programming Project Report #HPP-83-30, May 1983.
[5] Buchanan, B.G., E.H. Shortliffe, Rule-Based Expert Systems, Addison-Wesley, 1984.
[6] Fikes, R., T. Kehler, The Role of Frame-Based Representation in Reasoning, Communications of the ACM, Vol. 28, Number 9, September 1985, pp. 904-920.
[7] Cohen, P.R., E.A. Feigenbaum, Rule Space, The Handbook of Artificial Intelligence, Vol. III, Heuristech Press, W. Kaufmann Inc.
Primitives and Units for Time Specification * Peter Ladkin Kestrel Institute 1801 Page Mill Road Palo Alto, Ca 94304-1216. Abstract We work in a calculus of intervals, formulated by James Allen for convex intervals, and by ourselves for unions of convex intervals [All2, Lad2]. We investigate the primitive relations and operations needed for implementing such calculi in a system which includes some set theory, and which allows the assertional definition of operators in Horn clause fashion. We indicate how standard temporal logic may be rephrased in the interval calculus, and present a formalisation of a system of time units in the interval framework. We are implementing the primitives in the REFINE™ system. Introduction James Allen introduced an interval calculus for reasoning about time, and we have proposed an extension of this calculus to enable the representation of time by non-convex intervals. Recent work on the convex interval calculus is by Allen, Pat Hayes, and Henry Kautz [All2, All1, AllHay, AllKau]. Recent work on the non-convex calculus is by ourselves and Roger Maddux [Lad2, Lad3, LadMad]. Convex intervals are those intervals considered by Allen and Humberstone [All2, Hum], which span a period of time, without gaps of any sort. In a formalism based on points, these are 1-dimensional convex sets of points. We are concerned with intervals that are arbitrary unions of these, which we need for expressing temporal properties of intermittent events [Lad1, Lad2]. We consider the interval formulation to be an abstraction of time periods, and in this view, sets of time points would be just one way of modelling intervals. Mathematically, Allen's calculus of convex relations is a particular relation algebra in the sense of Tarski [JoTa1, JoTa2]. Allen's algebra has thirteen atoms, and thus generates a relation algebra of size 2^13. By contrast, Ladkin's algebra is infinite, as well as having only infinite representations.
We argue in [Lad2] that the relations, which are a strict subset of all relations between unions of convex intervals, are not only convenient but necessary for expressive power. Mathematical results concerning Allen's and our own calculi are contained in [LadMad]. We present a specification of primitives which can be used to implement time intervals represented as unions of convex intervals. It is important to us to allow only relation structure in the interval calculus, so that we are able to maintain the structure of a relation algebra, and to restrict the proliferation of intervals denoted by basic terms in our model [LadMad, Lad2]. *This work was partially supported by RADC contract F30602-84-C-0109 and by DARPA contract N00014-81-C-0582. REFINE™ is a trademark of Reasoning Systems Inc. We obtain the effect of operators on intervals by using a correspondence between an interval and a certain set of convex subintervals of that interval. Sets of convex intervals are needed in any case for our model of time units. We show how to express standard temporal logic primitives in the interval calculus, and finally we develop a general model of time units in the interval framework. We assume throughout that time is linearly ordered, with respect to the relation precedes, although this work is equally applicable to branching time models. Some modification would be needed to the measure functions, and other modifications would be of a minor nature only. Other references to time representation by intervals are [vBen, Dow]. Another representation of time for AI purposes, using a points-based model rather than an interval model, is described in [McDer1, McDer2]. Notation We assume the reader is familiar with standard logical notation, in particular the connectives and quantifiers ∧, ∨, ¬, ⟹, ∀, ∃. We use certain terminology from [All2, Lad2], in particular: We refer to the relations in [All2] as convex relations.
All convex relations are irreflexive and antisymmetric, except for equality. Non-convex intervals or relations are those for unions of convex intervals in [Lad2]. A picture of such a beast will look like a sequence of lines with gaps, when drawn in one dimension. The lines are the maximal convex subintervals, or maxconsubints.
|| is the convex relation meets: (i || j) iff i is before j with no interval occurring between
≪ is the convex relation contained-in: (i ≪ j) iff i is a strict subinterval of j, i.e. i starts j ∨ i during j ∨ i ends j in the terminology of [All2]
≺ is the convex relation precedes: (i ≺ j) iff i is before j, with some other interval occurring between
⊙ is the convex relation overlaps: (i ⊙ j) iff, intuitively, i starts before j, and finishes before j
From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved.
(R1 ∨ R2 ∨ …) is the non-convex relation of disjunction, where the Ri are convex relations. (i (R1 ∨ R2 ∨ …) j) iff corresponding maxconsubints of i and j are related by one of the Ri (different subintervals may be related by different Ri). We assume a 1-1 correspondence between the maximal convex subintervals is available.
(i always-R j), where R is a convex relation, iff maxconsubints of i are in each case R to the corresponding maxconsubints of j
(i sometimes-R j), where R is a convex relation, iff some maxconsubint of i is R to the corresponding maxconsubint of j
Operators for Intervals We attach intervals to actions, tasks, events and assertions, representing the periods of time over which an action takes place, a task is performed, an event happens, or an assertion is true. We note here that certain supposed problems with the definition of truth-on-an-interval have been adequately answered in [Hum].
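As a concrete reading of these definitions, here is a small sketch (mine, not the paper's) that classifies the relation holding between two convex intervals, modelled as half-open (start, end) pairs. The relation names follow the terminology above; the inverse names such as "started-by" are my own labels for the converses.

```python
# Classify the unique Allen relation between two convex intervals,
# each represented as a half-open pair (a, b) with a < b.
def allen_relation(i, j):
    a, b = i
    c, d = j
    if b < c:  return "precedes"         # i before j, with a gap between
    if b == c: return "meets"            # i || j: no interval fits between
    if d < a:  return "preceded-by"
    if d == a: return "met-by"
    if a == c and b == d: return "equals"
    if a == c: return "starts" if b < d else "started-by"
    if b == d: return "ends" if a > c else "ended-by"
    if c < a and b < d: return "during"  # i is a strict inner subinterval of j
    if a < c and d < b: return "contains"
    if a < c: return "overlaps"          # i starts first and finishes first
    return "overlapped-by"

assert allen_relation((0, 2), (2, 5)) == "meets"
assert allen_relation((1, 3), (0, 5)) == "during"
assert allen_relation((0, 4), (2, 6)) == "overlaps"
```

Note that starts, during and ends are exactly the three cases that the text collects into contained-in (≪).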
We also provide the correspondence between an interval and a set of convex intervals, a means of measuring the duration of an interval, and the length of time over which it happens, here called the diameter. Duration and diameter are, of course, the same for convex intervals. In mathematics, duration is usually called measure, and diameter is terminology from topology. The operators We use the word type to indicate a domain of objects of the same sort. There is no implied type theory in the use of this terminology.
interval-of(P): P is of type task / action / event / assertion; returns i of type interval such that occupies(P,i) (see below). Since we reason assertionally about time intervals, this gives us a way of passing between the task domain and the time domain.
dissect(i): the set of maximal convex subintervals of i. Dissect is somewhat like a selector for the data domain of non-convex intervals, even though it returns a set, not an interval.
combine(S): S is a set of intervals; makes an interval out of S, rather like a union operator. In general, this interval will be non-convex. All the intervals in S are subintervals of combine(S). Combine is the constructor for the data domain of non-convex intervals.
duration(i): type real, the measure of i
convexify(startint, endint): type interval, the smallest convex interval containing startint and endint. Note that it follows that convexify is commutative and associative.
alltime: type interval, the global time interval that includes all time.
Note that we have omitted from the list of primitives the function
diameter(i): type real, the largest distance between two subintervals of i (including the duration of the subintervals)
Diameter may be defined in terms of duration by the equation
diameter(i) = duration(convexify(i,i))
We do not include diameter as a primitive, since it is definable, but it is a basic part of the constraint-expression language.
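A minimal sketch of this operator vocabulary (my own, not the paper's REFINE implementation), with a non-convex interval represented directly by its dissection: a sorted tuple of disjoint, non-meeting half-open convex pieces.

```python
# Non-convex interval = tuple of disjoint, non-meeting (start, end) pieces.
def dissect(i):
    """Selector: the maximal convex subintervals of i."""
    return list(i)

def duration(i):
    """Fully additive measure: the sum over dissect(i)."""
    return sum(b - a for a, b in dissect(i))

def convexify(i, j):
    """Smallest convex interval containing i and j (as a one-piece interval)."""
    pieces = dissect(i) + dissect(j)
    return ((min(a for a, b in pieces), max(b for a, b in pieces)),)

def diameter(i):
    """Defined, as in the text, by diameter(i) = duration(convexify(i, i))."""
    return duration(convexify(i, i))

week_work = ((0, 5), (7, 12))        # two convex pieces with a gap
assert duration(week_work) == 10     # measure ignores the gap
assert diameter(week_work) == 12     # span of the convex hull includes it
assert convexify(week_work, week_work) == ((0, 12),)
```

The last assertion illustrates why duration and diameter coincide exactly on convex intervals: convexify(i, i) = i only when i has a single piece.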
Similarly, duration need only be defined for convex intervals, since its extension to non-convex intervals follows from the additive property below. Axioms In this section, we give the axioms that specify the operators described above. We note that the only unbounded quantifiers appearing in the axioms are universal, and that the bounded existential quantifiers that appear are restricted to range over sets or intervals that are parameters in the formula. We envisage that these objects will be finitely bounded in most applications, in such a way that the quantifiers may be realised by an enumeration. This is indeed the case for REFINE™. Note that (i convex) and (convex i) are both shorthand for i bars i [Lad2].
(∀i)(i ≪ alltime ∨ i = alltime)
characterises alltime as the global time interval. All intervals are contained-in or equal-to alltime. The next three axioms characterise dissect(i).
(∀ convex j ≪ i)(∃k ∈ dissect(i))(j ≪ k ∨ j = k)
(∀j ∈ dissect(i))(j convex ∧ (j ≪ i ∨ j = i))
(∀j)(∀k ∈ dissect(i))(j ≪ k ⟹ ¬(j ∈ dissect(i)))
The first axiom states that all convex subintervals of i are contained in some interval k in dissect(i), or are equal to such an interval. The second ensures that dissect(i) contains only convex subintervals of i, which in the presence of the first ensures that dissect(i) contains at least the maxconsubints of i. The third axiom is not in positive Horn clause form, since the consequent is negated. It may easily be turned into the right form by observing that the negation of the antecedent is equivalent to a positive disjunction of the other twelve interval relations enumerated by Allen, who gave the exhaustive list of possible relations between convex intervals. This observation allows us to take the contrapositive statement for our axiom. This has the correct form, even though its intent is more obscure.
Our relations include those that are the disjunction of convex interval relations, so this disjunction reduces to a single predication in the consequent.
(∀j)(∀k ∈ dissect(i))((j ∈ dissect(i)) ⟹ (j (|| ∨ ⊙ ∨ …) k))
In the presence of the first two axioms for dissect, the third dissect axiom ensures that dissect(i) contains only maxconsubints of i. The next three axioms characterise combine. The first says that if you combine a dissected interval, you get back the original interval under all circumstances. The second asserts that if the set S consists of disjoint, non-meeting convex intervals, then combine is indeed the full inverse of dissect. Note that dissect(i) consists only of disjoint, non-meeting intervals, i.e. this is derivable from the dissect axioms. The third combine axiom ensures that combine-ing a set of intervals which overlap or meet does not give more than you want in the resulting interval. If there were more, then the extra would form an interval that was a subinterval of combine(S), but which was disjoint from anything in S. The axiom rules this out. An alternative way of axiomatising this property is
(∀i ∈ dissect(combine(S)))(((∃j ∈ S)(j starts i) ∧ (∃k ∈ S)(k ends i)) ∨ (∃j ∈ S)(j = i))
This property ensures that there are no "little bits hanging off the end" of an interval in combine(S) that aren't there in some interval in S. Notice that every convex interval in dissect(combine(S)) is at least as big as any convex interval in S, thus reducing the number of cases we have to worry about stating in the axiom. The cleanest way to implement the correct combine function is probably to iteratively convexify intervals in S which overlap or meet. Call the resulting set S1. All the intervals in S1 are disjoint, and non-meeting, and hence dissect(combine(S1)) = S1. Furthermore, combine(S1) = combine(S) by the axioms.
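The implementation strategy just described can be sketched in a few lines (the sketch and names are mine): sort the convex pieces and iteratively convexify any that overlap or meet, so the result's dissection consists of disjoint, non-meeting pieces.

```python
# combine: build a non-convex interval from a set of convex half-open
# (start, end) pieces by merging any pieces that overlap or meet.
def combine(S):
    pieces = sorted(S)                       # order pieces by start point
    merged = []
    for a, b in pieces:
        if merged and a <= merged[-1][1]:    # overlaps or meets the last piece
            merged[-1] = (merged[-1][0], max(merged[-1][1], b))
        else:
            merged.append((a, b))            # a genuine gap: start a new piece
    return tuple(merged)

# Overlapping and meeting pieces collapse; only real gaps survive:
assert combine([(0, 2), (1, 4), (4, 5), (7, 9)]) == ((0, 5), (7, 9))
```

On a set that is already disjoint and non-meeting, the loop changes nothing, which is the "full inverse of dissect" case of the second combine axiom.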
However, in this paper, we are only concerned with a correct assertional specification of the functions in a limited logical language, and we have given this for combine. The assertions may be compiled in any appropriate way.
combine(dissect(i)) = i
(∀i ∈ S)(∀j ∈ S)((i ≺ j ∨ j ≺ i ∨ i = j) ∧ (i convex ∧ j convex)) ⟹ dissect(combine(S)) = S
(∀j)(((j ⊙ combine(S)) ⟹ (∃i ∈ S)(j ⊙ i)) ∧ ((combine(S) ⊙ j) ⟹ (∃i ∈ S)(i ⊙ j)))
Next, we have the axioms for duration, which specify only that duration is a fully additive function. Giving values of duration on the convex intervals will then specify duration completely. The purpose of the special axiom for the case of meeting is to be able to derive different time units. For example, one can specify that there are seven days in a week, by adding a condition that seven day-type intervals which meet consecutively form a week-type interval. One can then count in week units or day units merely by adding axioms of the form
(i ∈ DAYS ⟹ duration(i) = 1)
(i ∈ WEEKS ⟹ duration(i) = 1)
as appropriate. Given the days definition, the specification of week, and the axiom, will then guarantee that the duration of a week is seven day units.
duration(i) = Σ duration(dissect(i))
The sum is taken over all members of dissect(i).
(i || j) ⟹ duration(combine(dissect(i) ∪ dissect(j))) = duration(i) + duration(j)
Finally, the axioms for convexify specify that convexify(i,j) is the smallest convex interval containing i and j. Convexify is a total, and thus a commutative, operation. It is also associative, and we prefer to include both these properties explicitly, even though they follow from the minimal property.
i (begins-at ∨ ends-at) convexify(i,j) ∧ j (begins-at ∨ ends-at) convexify(i,j)
convexify(i,j) convex
(∀k convex)(∀i)(∀j)((i ≪ k ∧ j ≪ k) ⟹ (convexify(i,j) ≪ k ∨ convexify(i,j) = k))
convexify(i,j) = convexify(j,i)
convexify(i, convexify(j,k)) = convexify(convexify(i,j), k)
Note that the associative and commutative properties of convexify actually follow from its minimal property. However, any reasonable theorem prover would probably prefer to know this explicitly (as ours does). Additional Noteworthy Properties The following properties are all consequences of the axioms:
i convex ⟹ duration(i) = diameter(i)
this follows from the definition of diameter and the property:
i convex ⟹ convexify(i,i) = i
i ≪ j ⟹ duration(i) < duration(j)
which follows from the additive property of duration, given enough subintervals in the universe. Predicates The predicates we wish to use on intervals are specified in [Lad2, Lad3]. They form a relation algebra in the sense of Tarski [JoTa1, JoTa2, Mad1]. We mention them here only for completeness, since it is not the purpose of this paper to explain the interval calculus. interval relations: We include all the relations between intervals defined in [Lad2]. Axioms We include the relation product table, and the other axioms needed for specifying the algebra of relations in [Lad3]. See [LadMad, Mad1, JoTa2]. Additional Defined Entities We need the basic (but not primitive) function diameter to adequately express properties of, and constraints on, non-convex intervals.
We repeat here the definition and properties of diameter:
diameter(i): type real, the largest distance between two subintervals of i (including the duration of the subintervals)
diameter(i) = duration(convexify(i,i))
We also need, for purposes of specification, the predicates
past || now || future
occupies(P,i): P is type task / action / event / assertion; i is type interval; i is the exact interval over which P takes place / holds / occurs / is true
occurs-in(P,i): true of all i such that interval-of(P) ≪ i
which have the properties
occupies(P, interval-of(P))
occupies(P,i) ⟹ occurs-in(P,i)
occurs-in(P,i) ⟹ (∃j ≪ i)(occupies(P,j))
These predicates are provided by, and their properties follow from, the definitions
occupies(P,i) ⟺ i = interval-of(P)
occurs-in(P,i) ⟺ interval-of(P) ≪ i
Temporal Logic Interval Constants We introduce three interval constants, which correspond to McTaggart's A-series notion of time [McT, Lad1], and the standard syntax of temporal logic. The A-series notion conceives of time as consisting of the moving, changing present, the past and the future. This corresponds to the interpretation of the temporal operators in classical tense logic, except that 'present' is implicit. The semantics of tense logic, however, is similar to McTaggart's B-series, which consists of immutable points of time, like timestamps, at which there is no change. Change is represented in the B-series by moving from one point to another. The evaluation of a tense-logical formula relative to a point, which is the standard semantical definition, is similar to connecting the A-series and the B-series notions. We show how to capture the A-series notion in interval calculus. The standard time-of-day clock functions as an A-series to B-series converter. We refer the reader to [All2] for the terminology and calculus of interval relations. The constants are: now: intuitively, an interval of smallest granularity.
In any practical domain of application, intervals will not be infinitely divisible. If they are, there is still no logical contradiction in the axioms presented, as can be shown by a compactness argument from model theory. In this case, now would function like an interval of measure 0.
future: intuitively, for those who like their intervals to contain points, the interval (now, ∞); all future time
past: intuitively, the interval (−∞, now); all past time
Axioms
now convex
future convex
(∀ convex i)(¬(now ≪ i) ⟹ ((i ≪ past) ∨ (i ≪ future)))
(∀i)(i ≺ now ⟹ i ≪ past)
(∀i)(now ≺ i ⟹ i ≪ future)
(∀ convex i)((now ≪ i) ∨ (now ≺ i) ∨ (i ≺ now) ∨ (i || now) ∨ (now || i) ∨ (i = now))
These axioms characterise the constants. They state that the three convex intervals meet in the right ways, that they include all time, and that time is linearly ordered with respect to the now interval. The Temporal Operators
□P ≡ (int(P) = future ∨ future ≪ int(P))
◇P ≡ (int(P) sometimes-(overlaps ∨ contained-in) future)
Standard temporal logic has a syntax that corresponds to McTaggart's A-series time, and a semantics that corresponds to his B-series notion of time (roughly, timestamps). We indicate that conversion by noting that it is provided already in the standard facilities available on most AI workstations, as a real-time clock, which converts now into a timestamp. We also need to construct past and future from the timestamp. We assume that the clock runs to a certain granularity, say microseconds, and point out that the clock does, in fact, specify a time interval, whose duration is one microsecond in this case. Essentially, the clock tells you which interval the query interrupt is contained in. In this context, there are next and previous operators, which return the next and the previous timestamps. They may be implemented by increment and decrement respectively.
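The two operator definitions above can be read off directly in a toy model (encoding and names mine): take time to be the real line, now a point t, future the ray (t, ∞), and int(P) a tuple of disjoint half-open convex pieces.

```python
# Toy readings of []P and <>P over interval-valued int(P).
import math

def box(int_P, t):
    """[]P: int(P) equals, or contains, the whole of the future (t, inf)."""
    return any(a <= t and b == math.inf for a, b in int_P)

def diamond(int_P, t):
    """<>P: some maxconsubint of int(P) overlaps, or is contained in, the future."""
    return any(b > t for a, b in int_P)

assert box(((0, math.inf),), t=5)        # P holds throughout the future
assert diamond(((0, 1), (7, 9)), t=5)    # P holds again at some future time
assert not diamond(((0, 1),), t=5)       # P lies wholly in the past
```

The sometimes- reading in ◇P is what makes it sensitive to individual pieces of a non-convex int(P), exactly as in the disjunction relations defined earlier.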
The representation of now is therefore just the call timenow(); the call to the clock implements the conversion of now to its B-series timestamp. If there are infinity intervals at both ends of the time structure (∞ and −∞, for want of better names), then past and future may both be represented by
past: convexify(−∞, previous(now))
future: convexify(next(now), ∞)
past convex
It is consistent to add such infinity intervals. If you don't want them, the properties of the past and future B-series intervals may then be inferred from the axioms alone. Defining Time Units We need to reason about years, months, days, minutes and microseconds. We introduce a standard form for an interval which represents an instance of one of these units. All the units will be convex intervals, and we then show how to develop the types of units from these standard forms. Standard Time Units We use sequences of integers to represent our standard units. We use a linear hierarchy of standard units, year, month, day, hour, minute, second, arranged as a sequence. We illustrate its use down to seconds, hence our sequences will have lengths of up to six elements. It should be clear that the hierarchy is easily extendable to smaller units such as microseconds. We illustrate the meanings of sequences of integers, rather than giving an obvious definition.
[1986] represents the year 1986
[1986,3] represents the month of March, 1986
[1986,3,21] represents the day of 21st March, 1986
[1986,3,21,7] represents the hour starting at 7am on 21st March, 1986
[1986,3,21,7,30] represents the minute starting at 7:30am on 21st March, 1986
[1986,3,21,7,30,32] represents the 33rd second of 7:30am on 21st March, 1986 (the first second starts at 0)
We conceive of these intervals as being closed at the left end and open at the right, since this corresponds with normal usage.
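The integer-sequence units have an obvious concrete denotation; as a quick check of the intended meanings, here is a sketch (the representation and the helper name start_of are mine) mapping a sequence to the left endpoint of the half-open interval it names.

```python
# Map a unit sequence like [1986, 3, 21] to its interval's left endpoint.
from datetime import datetime

def start_of(seq):
    """Left endpoint of the half-open interval named by an integer sequence."""
    pads = [1, 1, 0, 0, 0]   # defaults: month=1, day=1, hour=min=sec=0
    return datetime(*(list(seq) + pads[len(seq) - 1:]))

# The axiom ([x,1] starts [x]): January 1986 shares the year's left endpoint.
assert start_of([1986, 1]) == start_of([1986])
# Consecutive day units meet, and day units have duration one day.
assert (start_of([1986, 3, 22]) - start_of([1986, 3, 21])).days == 1
```

Because the intervals are closed on the left and open on the right, a unit's right endpoint is exactly its successor's left endpoint, which is the meets axiom (α-[x]) || (α-[x+1]) in concrete form.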
Notice, as we mentioned, that the standard clock time returned by a time-of-day clock in fact names the interval in which the interrupt occurred. (It's not really possible to determine from this which end of an interval should be open and which closed, since the interrupts are serialised.) Axioms for Units Certain relations hold between these intervals. All of these intervals are convex, and hence the vocabulary of relations is Allen's [All2]. We give examples only of the axioms, since the nature of the rest may easily be inferred from the examples. It is obvious that not all integer sequences of appropriate lengths are going to name units in our formalism. We shall not bother with checking bounds on elements of a sequence, since this is a detail of no theoretical interest. We shall assume bounds are checked somehow. We shall use x, y, z, … for integer variables, and α, β, … for sequence variables. Concatenation of sequences is denoted by -. All the axioms are quantifier-free statements.
([x,1] starts [x]) ∧ ([x,12] ends [x])
January is the first month and December is the last month of the year
([x,y,1] starts [x,y])
All months begin with day 1
([x,1,31] ends [x,1])
January ends on the 31st
x divisible by 4 ⟹ ([x,2,29] ends [x,2]), else ([x,2,28] ends [x,2])
February has 28, or sometimes 29, days
appropriate cases for the other months
([x,y,z,0] begins [x,y,z]) ∧ ([x,y,z,23] ends [x,y,z])
constraints for hours in a day
appropriate cases for minutes and seconds
for length(α) = 0 to 5, and x less than the ending numbers for the appropriate length, (α-[x]) || (α-[x+1])
the xth year/month/… meets the x+1st year/month/…
for length(α) ≤ 5, (α-[x]) ≪ α
We also need to be able to coalesce representations, to gather months into years, and seconds into minutes. The axiom for this is
(α-β starts α) ∧ (α-γ ends α) ⟹ convexify(α-β, α-γ) = α
Interval Types Definable From Unit Types We can now define classes of intervals, based upon the units.
YEARS = { α | length(α) = 1 }
MONTHS = { α | length(α) = 2 }
DAYS = { α | length(α) = 3 }
Similarly for HOURS, MINUTES, SECONDS, …
We may also define units which are not in the list of basic units. Firstly, let us assume that all variables and sequences range over the set DAYS. This will simplify notation for our examples. We define
≺0(α,β) ≡ (α || β), and ≺i+1(α,β) ≡ (∃γ)(≺i(α,γ) ∧ (γ || β)) for 0 ≤ i
≺* is the symmetric, transitive closure of ≺, for any binary relation ≺
The ≺i are the iterated meet relations for DAYS. Note that, as we have defined them, a given α in DAYS meets exactly one β in DAYS, and is-met-by exactly one γ in DAYS.
WEEKS = { convexify(α, β) | ≺5(α, β) }
defines all 7-day intervals as weeks
MONDAYS = { α | (≺6)*([1986,3,31], α) }
Needless to say, [1986,3,31] is, in fact, a Monday. We may define other days of the week in a similar way to MONDAYS, or we may choose to use an implicit definition, such as
(SUNDAYS ⊆ DAYS) ∧ (combine(SUNDAYS) always-meets combine(MONDAYS))
Our definitions of the interval types show that we need to maintain the distinction between a non-convex interval I and the set of its maximal convex subintervals dissect(I). All of the unit classes YEARS, MONTHS, …, as well as some of the defined classes such as WEEKS, satisfy the condition
combine(S) = alltime
and cannot thus be distinguished purely as interval objects. One of the major reasons for introducing sets into the time structure must be to distinguish between the different classes of time units. Since the set theory is needed, we see no reason not to make cautious use of it, and we may then avoid the need for a proliferation of interval operations, since we may use dissect on an interval, perform set theoretic operations, and then use combine to create the new interval.
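Under this reading of the iterated meets over DAYS (≺0 as meets between consecutive days, so ≺k relates days k+1 apart), the WEEKS and MONDAYS constructions can be checked concretely. The sketch below and its names are mine, with days identified by calendar dates so that meets is simply succession.

```python
# Iterated meet relations over DAYS, with days as datetime.date values.
from datetime import date, timedelta

def meets_k(d1, d2, k):
    """The relation meets_k: k+1 consecutive meets-steps lead from d1 to d2,
    so k = 0 is plain meets (the next day) and k = 6 relates days a week apart."""
    return d2 - d1 == timedelta(days=k + 1)

anchor = date(1986, 3, 31)                   # the paper's anchor day
assert anchor.weekday() == 0                 # it is, in fact, a Monday
assert meets_k(anchor, date(1986, 4, 1), 0)  # consecutive days meet
# Chaining the k = 6 relation from the anchor enumerates MONDAYS:
mondays = [anchor + timedelta(days=7 * n) for n in range(4)]
assert all(m.weekday() == 0 for m in mondays)
```

Similarly, meets_k with k = 5 relates the first and last day of a 7-day span, so convexifying such a pair yields a member of WEEKS.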
Acknowledgements We thank Tom Brown, Doug Edwards, Allen Goldberg, Pat Hayes, Bob Riemenschneider, and the referees for discussion and comments, and especially Richard Waldinger for comments, encouragement and his inimitable coffee.
Bibliography
All1: Allen, J.F., Towards a General Theory of Action and Time, Artificial Intelligence 23 (2), July 1984, 123-154.
All2: Allen, J.F., Maintaining Knowledge about Temporal Intervals, Comm. A.C.M. 26 (11), November 1983, 832-843.
AllKau: Allen, J.F. and Kautz, H., A Model of Naive Temporal Reasoning, in Hobbs, J.R. and Moore, R.C., editors, Formal Theories of the Commonsense World, Ablex 1985.
AllHay: Allen, J.F. and Hayes, P.J., A Commonsense Theory of Time, in Proceedings IJCAI 1985, 528-531.
Dow: Dowty, D.R., Word Meaning and Montague Grammar, Reidel, 1979.
Hum: Humberstone, I.L., Interval Semantics for Tense Logic: Some Remarks, J. Philosophical Logic 8, 1979, 171-196.
JoTa1: Jonsson, B. and Tarski, A., Boolean Algebras with Operators I, American J. Mathematics (73), 1951.
JoTa2: Jonsson, B. and Tarski, A., Boolean Algebras with Operators II, American J. Mathematics (74), 1952, 127-162.
Lad1: Ladkin, P.B., Comments on the Representation of Time, Proceedings of the 1985 Distributed Artificial Intelligence Workshop, Sea Ranch, California.
Lad2: Ladkin, P.B., Time Representation: A Taxonomy of Interval Relations, Proceedings of AAAI-86 (this volume).
LadMad: Ladkin, P.B. and Maddux, R.D., The Algebra of Time Intervals, in preparation.
Mad1: Maddux, R.D., Topics in Relation Algebras, Ph.D. Thesis, University of California at Berkeley, 1978.
McDer1: McDermott, D., A Temporal Logic for Reasoning about Actions and Plans, Cognitive Science 6 (2), April-June 1982, 71-109.
McDer2: McDermott, D., Reasoning about Plans, in Hobbs, J.R. and Moore, R.C., editors, Formal Theories of the Commonsense World, Ablex 1985.
McT: McTaggart, J.M.E., The Unreality of Time, Mind, 1908, 457-474.
vBen: van Benthem, J.F.A.K., The Logic of Time, Reidel 1983.
On the Parallel Complexity of Some Constraint Satisfaction Problems Simon Kasif Department of Electrical Engineering and Computer Science The Johns Hopkins University ABSTRACT Constraint satisfaction networks have been shown to be a very useful tool for knowledge representation in Artificial Intelligence applications. These networks often utilize local constraint propagation techniques to achieve global consistency (consistent labelling in vision). Such methods have been used extensively in the context of image understanding and interpretation, as well as planning, natural language analysis and commonsense reasoning. In this paper we study the parallel complexity of discrete relaxation, one of the most commonly used constraint satisfaction techniques. Since constraint propagation procedures such as discrete relaxation appear to operate locally, it has previously been believed that the relaxation approach for achieving global consistency has a natural parallel solution. Our analysis suggests that a parallel solution is unlikely to improve much on the known sequential solutions. Specifically, we prove that the problem solved by discrete relaxation is log-space complete for P (the class of polynomial time deterministic sequential algorithms). Intuitively, this implies that discrete relaxation is inherently sequential and it is unlikely that we can solve the polynomial time version of the consistent labelling problem in logarithmic time by using only a polynomial number of processors. Some practical implications of our result are discussed. 1. Introduction Constraint satisfaction networks have been shown to be a very useful tool for knowledge representation in Artificial Intelligence applications [Winston 84]. These networks often utilize local constraint propagation techniques to achieve global consistency. Such methods have been used extensively in the context of image understanding and interpretation [Rosenfeld et al.
76], [Haralick & Shapiro 79], [Mackworth 77], as well as planning, natural language analysis and commonsense reasoning [Winston 84]. In particular, this paradigm has been applied to solve the consistent labelling problem (CLP), which is a key problem in many computer vision applications. The consistent labelling problem can be informally defined as follows. Let S be a set of objects. Each object has a set of possible labels associated with it. Additionally, we are given a set of constraints that, for each object s and label x, describe the compatibility of assigning the label x to object s with the assignment of any other label x′ to any other object s′. Since CLP is known to be NP-complete, the discrete relaxation method has been proposed to reduce the initial ambiguity. The Relaxed Consistent Labelling Problem (RCLP) allows an assignment of a label x to an object s iff for any other object s′ in the domain there exists a valid assignment of some label which does not violate the constraints (a formal definition is given in the next section). This formalization allows us to achieve global consistency by local propagation of constraints. Specifically, a discrete relaxation algorithm can discard a label from an object if it is incompatible with all other possible assignments of labels to the remaining objects. The discrete relaxation approach has been successfully applied to numerous computer vision applications [Waltz 75], [Kitchen 1980], [Barrow & Tenenbaum 76], [Brooks 81]. The sequential time complexity of RCLP is discussed in [Mackworth & Freuder 85]. In this paper we study the parallel complexity of RCLP. Since constraint propagation procedures such as discrete relaxation appear to operate locally, it has previously been believed that the relaxation approach for CLP has a natural parallel solution [Rosenfeld et al. 76], [Ballard & Brown 82], [Winston 84].
Our analysis suggests that a parallel solution is unlikely to improve much on the known sequential solutions. Specifically, we prove that the relaxed consistent labelling problem belongs to the class of inherently sequential problems called log-space complete for P. Intuitively, a problem is log-space complete for P iff a logarithmic time parallel solution for the problem would produce a logarithmic time parallel solution for any polynomial time deterministic sequential algorithm. This implies that unless P = NC (the class of problems solvable in logarithmic parallel time with a polynomial number of processors) we cannot solve the problem in logarithmic time using a polynomial number of processors. This result is based on the "parallel computation thesis" proved in [Goldschlager 78], which establishes that parallel time computation is polynomially related to sequential space. Specifically, the class of problems that

KNOWLEDGE REPRESENTATION / 349
From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved.

can be solved in logarithmic parallel time with a polynomial number of processors is equivalent to the class of problems that can be solved in polynomial time using logarithmic space on a sequential machine. For length considerations, we assume that the reader is familiar with elementary complexity theory and log-space reducibility techniques [Garey & Johnson 79] and with the literature on discrete relaxation (network consistency algorithms). For completeness we provide the necessary definitions in the next two sections.

2. Consistent Labelling Problems and Discrete Relaxation

The consistent labelling problem (CLP) and its less restrictive (relaxed) version are formally defined in [Mackworth 77] and [Rosenfeld et al. 76]. For completeness we give a semiformal definition here. Let V = {v1, ..., vn} be a set of variables. With each variable vi we associate a set of labels Li.
Now let Pij be a binary predicate that defines the compatibility of assigning labels to objects. Specifically, Pij(x, y) = 1 iff the assignment of label x to vi is compatible with the assignment of label y to vj. The Consistent Labelling Problem (CLP) is defined as the problem of finding an assignment of labels to the variables that does not violate the constraints given by Pij. More formally, a solution to CLP is a vector (x1, ..., xn) such that xi is in Li and, for each i and j, Pij(xi, xj) = 1.

For example, the 4-queens problem can be seen as an instance of CLP. To confirm this, associate a variable with each column in the board and let Li = {1, 2, 3, 4} for 1 ≤ i ≤ 4. Let Pij(x, y) = 1 iff positioning a queen in row x at column i is "safe" when there is a queen in column j and row y.

As mentioned in the introduction, CLP is known to be NP-complete. Therefore several polynomial approximation algorithms have been proposed and have been shown to perform quite well in practical applications. The most significant class of algorithms are variations on discrete relaxation [Rosenfeld et al. 76], also known as network consistency algorithms [Mackworth 77]. Formally, a solution to the relaxed version of CLP (RCLP) is a set of sets M1, ..., Mn such that Mi is a subset of Li and a label x is in Mi iff for every Mj, i ≠ j, there is a y in Mj such that Pij(x, y) = 1. Intuitively, a label x is assigned to a variable iff for every other variable there is at least one valid assignment of a label to that other variable that supports the assignment of label x to the first variable. Clearly, any solution to CLP is also a solution to RCLP, but not vice versa. In this sense discrete relaxation is a form of incomplete, limited reasoning. We call a set M1, ..., Mn a maximal solution for a RCLP iff there does not exist any other solution S1, ..., Sn such that Mi ⊆ Si for all 1 ≤ i ≤ n.
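The relaxation step described above (discard a label that lacks support at some other variable) can be sketched directly; the code below is an illustrative sketch, not the paper's, applied to the 4-queens encoding just given.

```python
# Discrete relaxation (sketch): repeatedly discard a label that lacks
# support at some other variable, yielding the maximal RCLP solution.

def safe(x, i, y, j):
    """Pij(x, y): queens at (row x, col i) and (row y, col j) do not attack."""
    return x != y and abs(x - y) != abs(i - j)

def discrete_relaxation(domains, compatible):
    """domains: {var: set(labels)}; compatible(x, i, y, j) -> bool."""
    changed = True
    while changed:
        changed = False
        for i, Li in domains.items():
            for x in list(Li):
                # x survives only if every other variable offers some support
                if any(not any(compatible(x, i, y, j) for y in Lj)
                       for j, Lj in domains.items() if j != i):
                    Li.discard(x)
                    changed = True
    return domains

queens = {i: {1, 2, 3, 4} for i in range(1, 5)}
result = discrete_relaxation(queens, safe)
```

On this instance relaxation discards nothing: every row in every column has pairwise support at every other column, so all four domains stay {1, 2, 3, 4} even though only two full solutions exist. This illustrates the point made above that a maximal RCLP solution is weaker than a CLP solution.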
We are only interested in maximal solutions for a RCLP. This restriction is necessary since any RCLP has a trivial solution: the set of empty sets. Additionally, recall that any solution for a RCLP represents a set of candidate solutions for the original CLP, which will eventually be verified by a final exhaustive check. Thus, by insisting on maximality we guarantee that we are not losing any possible solutions for the original CLP. Therefore, in the remainder of this paper a solution for a RCLP is identified with a maximal solution.

3. The Complexity of Searching AND/OR Graphs

In this section we state several preliminary definitions and results that will be used in the following section to analyze the complexity of RCLP. We begin by defining AND/OR graphs [Nilsson 71]. An AND/OR graph is a 6-tuple (A, O, E, s, S, F) where A is a set of AND-nodes, O is a set of OR-nodes, E is a set of directed edges connecting nodes in A ∪ O ∪ S ∪ F, s is a unique start node in A, S is a set of success nodes and F is a set of failure nodes. The solvability of a node in an AND/OR graph is defined recursively:

- If x is an S-node, it is solved.
- If x is an AND-node, then it is solved iff all its successors (defined by the direction of the edges of E) are solved.
- If x is an OR-node, then it is solved iff one of its successors is solved.

An AND/OR graph has a solution iff s is solved.

Proposition 3.1 (Jones & Laaser): Finding a solution for an AND/OR graph is log-space complete.

Proof: This result can be obtained by observing that GAME, studied in [Jones & Laaser 77], is an instance of the problem of AND/OR solvability.

We now define the class of propositional Horn clauses. A propositional formula H is said to be a propositional Horn clause iff one of the following holds:

- H is a propositional atom of the form Q, called an assertion.
- H is a propositional formula of the form P ← P1 & ... & Pn, denoted by P ← P1, ..., Pn and called an implication.
- H is a propositional negative atom (literal) of the form ¬P, denoted by ← P and called a goal.

We note that our slightly restrictive definition of Horn clauses (not allowing multiple literals in the goal) does not restrict the expressiveness of the language.

A Propositional Logic Program is a set of propositional Horn clauses with a single goal. We define the unsatisfiability of a set of propositional Horn clauses S via a solvability relation on the set of propositional names in S. The definition is recursive:

- If P is an assertion in S then it is solvable.
- If P appears on the left hand side in a set of implications of the form P ← P1, ..., Pn then P is solvable iff each one of the Pi's is solvable, in at least one of the implications.

A propositional logic program is unsatisfiable iff the propositional name that appears in the single negative atom is solvable. The problem of testing whether a propositional logic program is unsatisfiable will be referred to as the Propositional Horn Satisfiability Problem (PHSP).

Example: The following set of Horn clauses is unsatisfiable since P is solvable:

← P.
P ← Q, R.
P ← S, T.
R ← S.
T ← P.
Q.
S.

The next theorem, though not explicitly stated previously in a published form, is part of the common folklore among theoreticians [Ullman 85].

Theorem 3.1 (folklore): The problem of testing the satisfiability of propositional Horn clauses is log-space complete.

Proof: The proof is by reduction from solvability of AND/OR graphs (GAME of [Jones & Laaser 77]) and will not be presented here in full detail. Generally, "reduction" is the most common technique used to show a problem X is log-space complete. Specifically, it is adequate to show the problem is in P (the class of problems solvable by polynomial time algorithms), and then reduce a known log-space complete problem to X using a function computable in logarithmic space (log-space) by a deterministic Turing machine.
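The solvability relation defined above is a least fixed point, so it can be computed by simple forward chaining. A minimal sketch (ours, not the paper's), run on the example program:

```python
# Forward chaining for PHSP: an atom is solvable iff it is an assertion
# or the head of an implication whose body atoms are all already solvable.
def solvable_atoms(assertions, implications):
    """implications: list of (head, [body atoms]); returns set of solvable atoms."""
    solved = set(assertions)
    changed = True
    while changed:
        changed = False
        for head, body in implications:
            if head not in solved and all(b in solved for b in body):
                solved.add(head)
                changed = True
    return solved

# The paper's example program, with goal <- P and assertions Q and S.
rules = [("P", ["Q", "R"]), ("P", ["S", "T"]), ("R", ["S"]), ("T", ["P"])]
solved = solvable_atoms({"Q", "S"}, rules)
```

Here Q and S are given, S yields R, then P follows via P ← Q, R, and finally T via T ← P; since P is solvable, the program is unsatisfiable.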
Since log-space reducibility is a transitive relation, we can then deduce that if we had a logarithmic time parallel algorithm to solve X, we could also perform every other sequential polynomial time computation in logarithmic time. In our case the reduction of AND/OR graph solvability to propositional Horn satisfiability is immediate (in some sense it is the same problem). We label all the nodes in the AND/OR graph with distinct propositional atoms. Then for each AND-node P connected to P1, ..., Pn we create a formula P ← P1, ..., Pn. For each OR-node P connected to P1, ..., Pn we create the formulae P ← P1, ..., P ← Pn. For each terminal success node Q we create the assertion Q. Finally, if the start node of the graph is labelled by P we add the goal ← P to the set. It is easy to see that the original graph has a solution iff the set of formulae created in this fashion is unsatisfiable, and the transformation can indeed be done in log-space.

It is not difficult to verify that the following is also true [Reif 85]:

Theorem 3.2: Theorem 3.1 holds for propositional logic programs restricted to implications that have at most two atoms on the right hand side of the implication.

4. The Complexity of RCLP

In this section we prove our main result, namely that the problem of finding a solution to Relaxed CLP (RCLP) is log-space complete. To accomplish this we show that RCLP is in P and subsequently prove that satisfiability of propositional Horn clauses (PHSP) is reducible to RCLP. The first part of the proof is straightforward since most existing algorithms for RCLP are of polynomial sequential complexity (see [Mackworth & Freuder 85]). In fact, the edge consistency algorithm as given in [Mackworth 77] is linear in the number of edges in the constraint graph [Mackworth & Freuder 85].

Theorem 4.1: The Propositional Horn clause satisfiability problem is log-space reducible to the Relaxed Consistent Labelling Problem.
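The node-by-node translation just described can be written down directly. The graph encoding below (kind/successor dictionaries) is our own assumption for the sketch; the clause construction follows the text.

```python
# Reduction sketch: AND/OR graph solvability -> Horn clauses.
# Encoding (an assumption of this sketch): kind[n] in {"AND", "OR", "S"},
# succ[n] lists successors of internal nodes, "S" marks success nodes.
def to_horn(kind, succ, start):
    assertions, implications = set(), []
    for n, k in kind.items():
        if k == "S":
            assertions.add(n)                 # success node -> assertion
        elif k == "AND":
            implications.append((n, succ[n])) # one rule, whole successor list as body
        elif k == "OR":
            for m in succ[n]:
                implications.append((n, [m])) # one rule per successor
    goal = start                              # goal clause: <- start
    return assertions, implications, goal

# Tiny example: start AND-node s with successors a (OR over b) and c.
kind = {"s": "AND", "a": "OR", "b": "S", "c": "S"}
succ = {"s": ["a", "c"], "a": ["b"]}
A, I, g = to_horn(kind, succ, "s")
```

The graph has a solution (b and c are success nodes, so a and then s are solved), and correspondingly the generated program with goal ← s is unsatisfiable.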
Proof: Let Pr be a propositional logic program such that no implication has more than two atoms on its right hand side. We will also assume that all the atoms in Pr are uniquely labelled with integer values. We shall construct an RCLP G from Pr such that Pr is unsatisfiable iff a unique variable <P0> in G that corresponds to the unique goal ← P0 does not have a valid assignment of the label f. The RCLP is constructed in the following way.

1. For each atom A in Pr we create a unique variable <A>.
2. For each assertion Q in Pr we create a unique variable <SOLVED_Q>.
3. Create a unique variable <P0> that corresponds to the goal ← P0 of Pr.
4. For each implication of the form P ← Q, R we add the variable <Q,R> to G.

This construction defines all the variables of G. The initial label sets are created as follows:

- Each variable, with the exception of the <SOLVED> variables, is assigned the label ⊥.
- For each assertion Q in Pr we add the label q to the initial label set of <SOLVED_Q>.
- For each variable of the form <R> we add the label f to its initial set.
- For each variable of the form <S,T> we add the labels f_S and f_T to its initial set.

We are now ready to define the constraints of the problem G. We define the constraints using a compatibility matrix COM, whose entries are of the form COM[variable, variable, label, label]. COM[vi, vj, x, y] = 1 iff the assignment of label y to variable vj is compatible with the assignment of label x to variable vi. An alternative natural representation is to use a directed multigraph where the nodes correspond to the variables of the problem and the edges are labelled with the constraints of the problem. It is important to observe that in order to preserve log-space reducibility we do not need to create the entire compatibility matrix COM. For a full description of the RCLP we only need to create a list of the constraints of the form COM[var, var, label, label] = 0.
That is, we describe only the incompatible assignments. The remaining entries in the matrix can be filled with 1s. For each implication of the form P ← R we add the constraints

COM[<P>,<R>,f,f] = 1
COM[<P>,<R>,f,⊥] = 0

For each implication of the form P ← Q, R we add the constraints

COM[<P>,<Q,R>,f,f_Q] = 1
COM[<P>,<Q,R>,f,f_R] = 1
COM[<P>,<Q,R>,f,⊥] = 0
COM[<Q,R>,<R>,f_R,f] = 1
COM[<Q,R>,<R>,f_R,⊥] = 0
COM[<Q,R>,<Q>,f_Q,f] = 1
COM[<Q,R>,<Q>,f_Q,⊥] = 0

Finally, for every variable of the form <SOLVED_Q> we add the constraint

COM[<Q>,<SOLVED_Q>,f,q] = 0

This completes the definition of all the "necessary" constraints of the RCLP. The rest of the matrix COM can be filled with 1s. Note that the label f must be removed from all the variables that correspond to the assertions of the logic program. Using induction on the length of the satisfiability proof it is fairly easy to show that the label f will be removed from the variable <P0> that corresponds to the goal ← P0 iff P0 is solvable. The formal proof is omitted.

Now we have to verify that the above construction can be done using only logarithmic space on the work tape of the Turing machine. We shall sketch the main ideas of the proof method. If the construction were carried out in the order given above it would take linear space (linear in the total number of occurrences of all the atoms in Pr). Fortunately, since we assumed the atoms were initially numbered by integers, we can follow the above construction in a demand-driven fashion, as explained below. To start off we can create all the variables of the form <Q> and their respective label sets. This can be done with logarithmic space consumption since processing each one of the N variables requires lg N bits. For each assertion we can add the respective label to <SOLVED>. For implications of the form P ← Q, R we generate a new variable and its respective initial label set.
This step requires a counter that can be implemented in logarithmic space. Finally, for each implication encountered we can generate the constraints (again in logarithmic space). This completes the generation of all the necessary (see the above discussion) information that completely describes the RCLP G.

Example: Consider the following PHSP:

← P.
P ← Q, R.
P ← S, T.
R ← S.
T ← P.
Q.
S.

We construct the following RCLP. The variables of the problem are:

<P>, <Q>, <R>, <S>, <T>, <Q,R>, <S,T>, <SOLVED_Q>, <SOLVED_S>.

The initial assignments of labels are as follows:

<P>: {f, ⊥}
<Q>: {f, ⊥}
<R>: {f, ⊥}
<S>: {f, ⊥}
<T>: {f, ⊥}
<Q,R>: {f_Q, f_R, ⊥}
<S,T>: {f_S, f_T, ⊥}
<SOLVED_Q>: {q}
<SOLVED_S>: {s}

Finally, the constraints are given in Figure 1.

COM[<P>,<Q,R>,f,⊥] = 0
COM[<P>,<S,T>,f,⊥] = 0
COM[<Q,R>,<R>,f_R,⊥] = 0
COM[<Q,R>,<Q>,f_Q,⊥] = 0
COM[<S,T>,<S>,f_S,⊥] = 0
COM[<S,T>,<T>,f_T,⊥] = 0
COM[<R>,<S>,f,⊥] = 0
COM[<T>,<P>,f,⊥] = 0
COM[<Q>,<SOLVED_Q>,f,q] = 0
COM[<S>,<SOLVED_S>,f,s] = 0

Figure 1.

5. Conclusion

In this paper we have shown that a very important class of algorithms which were previously believed to be highly parallelizable are in fact inherently sequential. This negative result needs to be qualified. Essentially, it suggests that the application of massive parallelism will not significantly change the worst case complexity of discrete relaxation (unless one has an exponential number of processors). However, this result does not preclude research in the direction of applying parallelism in a more controlled fashion. Specifically, speedups are possible in the case where the number of processors is significantly smaller than the size of the constraint graph (a very likely case). In this case, it may be possible to obtain a full P-processor speedup. We are currently actively investigating this interesting case.
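The example reduction can be exercised end to end: encode the Figure 1 instance and run discrete relaxation over it. Since P is solvable in the Horn program, the label f should disappear from <P>. A sketch (ours, not the paper's code; "bot" stands for the label ⊥):

```python
# Relaxation on the RCLP built from the example Horn program (Figure 1).
# All COM entries not listed among the zeros default to compatible (1).
domains = {
    "P": {"f", "bot"}, "Q": {"f", "bot"}, "R": {"f", "bot"},
    "S": {"f", "bot"}, "T": {"f", "bot"},
    "QR": {"fQ", "fR", "bot"}, "ST": {"fS", "fT", "bot"},
    "SOLVED_Q": {"q"}, "SOLVED_S": {"s"},
}
zeros = {  # the COM[...] = 0 entries of Figure 1
    ("P", "QR", "f", "bot"), ("P", "ST", "f", "bot"),
    ("QR", "R", "fR", "bot"), ("QR", "Q", "fQ", "bot"),
    ("ST", "S", "fS", "bot"), ("ST", "T", "fT", "bot"),
    ("R", "S", "f", "bot"), ("T", "P", "f", "bot"),
    ("Q", "SOLVED_Q", "f", "q"), ("S", "SOLVED_S", "f", "s"),
}

def com(i, j, x, y):
    # compatibility, read symmetrically; unlisted entries default to 1
    return (i, j, x, y) not in zeros and (j, i, y, x) not in zeros

changed = True
while changed:  # discard any label lacking support at some other variable
    changed = False
    for i in domains:
        for x in list(domains[i]):
            if any(not any(com(i, j, x, y) for y in domains[j])
                   for j in domains if j != i):
                domains[i].discard(x)
                changed = True
```

Relaxation first strips f from the asserted atoms <Q> and <S> (their only support at <SOLVED_Q>/<SOLVED_S> is forbidden), then the removals propagate through <Q,R> and <S,T> until f is gone from <P>, mirroring the fact that P is solvable.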
Acknowledgements

Thanks are due to Azriel Rosenfeld, Dave Mount and Deepak Sherlekar for their constructive comments that contributed greatly to the final form of this paper. This work was supported by NSF under grant DCR-18408 while the author was a visiting scientist at the Center for Automation Research, University of Maryland.

REFERENCES

[1] Ballard, D. H. and C. M. Brown, Computer Vision, Prentice Hall, 1982.
[2] Barrow, H. G. and J. M. Tenenbaum, MSYS: A system for reasoning about scenes, Technical Note 121, SRI AI Center, Menlo Park, CA, April 1976.
[3] Brooks, R. A., Symbolic reasoning among 3-D models and 2-D images, Artificial Intelligence 17, pp. 285-348, 1981.
[4] Garey, M. R. and D. S. Johnson, Computers and Intractability: A Guide to NP-Completeness, Freeman, San Francisco, 1979.
[5] Goldschlager, L. M., A unified approach to models of synchronous parallel machines, Proc. of the 10th Symposium on Theory of Computing, pp. 89-94, May 1978.
[6] Haralick, R. M. and L. G. Shapiro, The consistent labeling problem: Part I, IEEE Trans. Pattern Anal. Mach. Intell. PAMI-1, pp. 173-184, 1979.
[7] Jones, N. and T. Laaser, Complete problems for deterministic polynomial time, Theoretical Computer Science 3, pp. 105-117, 1977.
[8] Kitchen, L. J., Relaxation applied to matching quantitative relational structures, IEEE Trans. Syst. Man Cybern. SMC-10, pp. 96-101, 1980.
[9] Mackworth, A. and E. Freuder, The complexity of some polynomial network consistency algorithms for constraint satisfaction, Artificial Intelligence 25, pp. 65-74, 1985.
[10] Mackworth, A. K., Consistency in networks of relations, Artificial Intelligence 8, pp. 99-118, 1977.
[11] Nilsson, N. J., Problem-Solving Methods in Artificial Intelligence, McGraw-Hill, New York, 1971.
[12] Reif, J., Depth-first search is inherently sequential, Info. Proc. Letters 20, pp. 229-234, 1985.
[13] Rosenfeld, A., R. Hummel, and S. Zucker, Scene labeling by relaxation operations, IEEE Trans. Syst.
Man Cybern. SMC-6, pp. 420-433, 1976.
[14] Ullman, J., Personal communication, 1985.
[15] Waltz, D., Understanding line drawings of scenes with shadows, pp. 19-92 in The Psychology of Computer Vision, ed. P. H. Winston, McGraw-Hill, New York, 1975.
[16] Winston, P. H., Artificial Intelligence, Addison-Wesley, 1984.
A Four-Valued Semantics for Frame-Based Description Languages

Peter F. Patel-Schneider
Schlumberger Palo Alto Research
3340 Hillview Avenue
Palo Alto, CA 94304

ABSTRACT

One severe problem in frame-based description languages is that computing subsumption is computationally intractable for languages of reasonable expressive power. Several partial solutions to this problem are used in knowledge representation systems that incorporate such languages, but none of these solutions is satisfactory if the system is to be of general use in representing knowledge. A new solution to this problem is to use a weaker, four-valued semantics for frame-based description languages, thus legitimizing a smaller set of subsumption relationships. In this way a computationally tractable but expressively powerful knowledge representation system incorporating a frame-based description language can be built.

I Introduction

There is a trade-off between expressive power and computational tractability in knowledge representation formalisms [Levesque and Brachman, 1985]. If the formalism is expressively powerful, such as standard first-order logic, then reasoning in the formalism is time-consuming, perhaps even undecidable. This may make the formalism unsuitable as the basis of a knowledge representation system. Formalisms that are computationally tractable, such as standard databases, are much less expressive. Even many expressively limited formalisms are computationally intractable, as is standard propositional logic, which has NP-complete reasoning.

This trade-off is present in frame-based description languages [Brachman and Levesque, 1984]. These languages formalize the notion of frames, a notion present in many current knowledge representation systems, as structured types, often called concepts. The languages include a set of syntactic operations that are used to form concepts, and other, related, notions such as slots.
They also include a formal model-theoretic semantics for these syntactic expressions. Thus frame-based description languages are a sort of logic, one which can be used to represent a useful kind of knowledge.

The concept-forming operators vary between different frame-based description languages but generally allow the creation of a concept as the conjunction of a set of more general concepts and a set of restrictions on the attributes of instances of the concept. Such concepts can be loosely rendered as noun phrases such as

a student and a female whose department is computer science, and who has at least 3 enrolled-courses, each of which is a graduate-course whose department is an engineering-department.

Frame-based description languages are part of KL-ONE [Brachman and Schmolze, 1985], NIKL [Schmolze, 1985], KRYPTON [Brachman et al., 1983; Brachman et al., 1985], and KANDOR [Patel-Schneider, 1984].

The most important operation in frame-based description languages is determining if one concept subsumes another. Informally, one concept subsumes another if all instances of the second must be instances of the first, that is, if the first is more general than the second. For example, the concept person each of whose male friends is a doctor subsumes the concept person each of whose friends is a doctor who has some speciality, in standard frame-based description languages. This is so because, in the standard semantics for frame-based description languages, all instances of the second concept must also be instances of the first.

Unfortunately, subsumption is a complicated relationship and can be difficult to compute. This problem first came to light during the formalization of part of KL-ONE, where it was discovered that the subsumption algorithm in KL-ONE was incomplete [Schmolze and Israel, 1983].
The complexity of computing subsumption in KL-ONE and NIKL, which has a similar frame-based description language, is still unknown, and the problem may even be undecidable. More recently, Brachman and Levesque [Brachman and Levesque, 1984] showed that computing subsumption in a very simple frame-based description language is NP-hard, indicating that computing subsumption in more expressive frame-based description languages is very difficult, at least in the worst case. Since computing subsumption is the most important operation in frame-based description languages and will be performed often¹, this is a serious problem in these languages.

There are several ways to partially solve this problem. The first partial solution is to simply ignore the problem. The examples used by Brachman and Levesque to show that computing subsumption in their frame-based description language is NP-hard are not likely to occur in actual knowledge bases. Perhaps computing subsumption will be reasonably fast in actual knowledge bases. This sort of solution occasionally works well, but it will fail for more expressive frame-based description languages, such as NIKL's, which have no known total algorithm for computing subsumption, and whose best known solution is to translate the problem into a theorem-proving problem.

The second partial solution is to limit the expressive power of the frame-based description language. The problem with this solution is that the expressive power must be very severely limited

¹In the current design of KRYPTON, new concepts are created as part of resolution steps, leading to a great number of new subsumption questions being asked during deductions.

to achieve computational tractability, as discovered by Brachman and Levesque.
Nevertheless, this was the solution used in the version of KRYPTON actually implemented [Brachman et al., 1985], which has a very limited frame-based description language, in which subsumption is easy to compute.

A combination of these two solutions was used in KANDOR. In this system the frame-based description language is more powerful than that of KRYPTON, but still limited expressively. Computing subsumption in KANDOR is co-NP-complete in the size of numbers appearing in concepts but is otherwise tractable. Moreover, this computation is quite fast in normal circumstances. Also, KANDOR, like many other similar systems, keeps track of subsumption relationships in a concept taxonomy so that each subsumption question need only be asked once. In this way, the worst case behavior of the subsumption problem in KANDOR is rendered less harmful.²

A third solution is to provide only a partial subsumption algorithm, one which does not discover all subsumption relationships, only an easy-to-calculate subset of them. This is the solution used in KL-ONE and NIKL, where only simple subsumption relationships are discovered. In this solution the algorithm for computing subsumption is no longer fully defined by the semantics of the frame-based description language. There is little basis for deciding exactly which subsumption relationships to discover, except reasons of expediency and tractability. The danger is that the discovered subsumption relationships will be simply an ad hoc set, with little relationship to the semantics of the frame-based description language, and can be neither characterized nor used effectively.

Given that none of these solutions is satisfactory for frame-based description languages, where reasonable expressive power implies very difficult subsumption and where the semantics must be followed because the system is supposed to be representational, is there a solution to the problem?
Unfortunately there is no solution if the standard semantics for frame-based description languages is to be strictly followed. However, there is a way to legitimize the third solution: by using a weaker semantics for frame-based description languages, one that supports fewer subsumption relationships and which has tractable subsumption.³ This fourth solution is the one that will be explored in this paper.

Weaker semantics have also been proposed for assertional knowledge representation systems. (An assertional knowledge representation system is concerned with assertions or facts instead of frames.) Levesque [Levesque, 1984] suggested that propositional tautological entailment, a weak version of propositional relevance logic, could be used as the basis of a simple knowledge representation system. The advantage of using propositional tautological entailment is that computing inference in it is computationally tractable if formulae are kept in conjunctive normal form. This work was later extended [Patel-Schneider, 1985] to produce a decidable variant of first-order tautological entailment that could be used as the basis of a knowledge representation system.⁴ Both of these efforts were based on a four-valued model-theoretic semantics where propositions can be assigned not only true or false, but also neither

²However, ARGON [Patel-Schneider et al., 1984], a query language using KANDOR to represent its knowledge, builds concepts for each query, thus leading to a large number of subsumption questions being asked. Therefore, the performance of subsumption in KANDOR is of vital importance to ARGON.

³It is important that the new semantics be weaker than the standard semantics so that all reasoning in it is sound with respect to the standard semantics.
⁴These developments are along the general line of Frisch, who has argued that any Artificial Intelligence program, and knowledge representation systems in particular, should be the complete implementation of some formalism with model-theoretic semantics [Frisch, 1985].

true nor false, and also both true and false. The semantics used in this paper are very similar to these four-valued semantics.

Of course, there are problems with using different semantics. The standard two-valued Tarskian semantics for logic, which serves as the basis for the standard semantics of frame-based description languages, has been in existence for quite some time now. It is generally agreed that this semantics captures our intuitions of how the world actually is and that the inferences sanctioned by it are a reasonable set of inferences. Any other semantics is liable to be less intuitive than this standard semantics and perhaps may be so counter-intuitive that it is useless for knowledge representation purposes. The goal of this endeavor is to produce a semantics for frame-based description languages that is still intuitive but which also has tractable subsumption.

II A Frame-Based Description Language

The benefits and problems of using four-valued semantics for this purpose will be illustrated using a particular frame-based description language. The language given here is considerably more general than the language FL presented in [Brachman and Levesque, 1984], for which subsumption in the standard semantics is intractable. This language, called FP, is meant to be similar to the frame-based description languages in KL-ONE and NIKL and the initial specifications of KRYPTON [Brachman et al., 1983], except for the lack of number restrictions.⁵

FP has two major syntactic types: concepts and roles, corresponding to the frames and slots of most frame-based knowledge representation systems.
As in these other systems, concepts represent descriptions of related individuals and roles describe relations between these individuals. The intuitive meanings of the various constructs in the language are simple and are based on the intuitive meanings of the basic constructs in typical frame-based knowledge representation systems. Constructs in this language which have counterparts in FL, NIKL, or KRYPTON have analogous intuitive meanings. The grammar of FP is as follows: (A linear syntax is used here for purposes of clarity and brevity. The keywords in this grammar are derived from the other frame-based description languages.)

<concept> ::= <atom> | (and <concept>+) |
              (some <role>) | (all <role> <concept>) |
              (rvm <role> <role>) |
              (sd <concept> <binding>+)
<binding> ::= (⊆ <role> <role>) | (⊇ <role> <role>)
<role>    ::= <atom> | (and-role <role>+) |
              (restr <role> <concept>)

Here atoms are the names of primitive concepts or roles. The and construct for concepts and the and-role construct for roles allow the creation of conjunctions. For example, (and adult male person) would represent the concept of something that is an adult, a male, and a person, i.e., a man.

The some construct guarantees that there will be at least one filler of the role which is its argument. The all construct restricts the fillers of a role to belong to a certain concept, and the rvm (for "role-value-map") construct similarly restricts the fillers of a role to stand in some other relationship to the individual. The restr (for "restriction") construct allows for the creation of roles constrained by the types of their fillers. In this way the concept of a person with at least one child, each of whose sons is a lawyer, and each of whose siblings is also a friend is rendered as

(and person
     (some child)
     (all (restr child male) lawyer)
     (rvm sibling friend)).

⁵The reason for not including number restrictions will be seen later.
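The grammar can be read as a recursive validity check over nested expressions. Below is a sketch using nested Python tuples as the representation (the representation, and the names "sub"/"sup" standing in for ⊆/⊇, are our assumptions, not the paper's):

```python
# Validity check for the concept/role grammar over nested tuples,
# e.g. ("and", "person", ("some", "child")); atoms are strings.
def is_concept(e):
    if isinstance(e, str):                      # <atom>
        return True
    op, args = e[0], e[1:]
    if op == "and":
        return len(args) >= 1 and all(is_concept(a) for a in args)
    if op == "some":
        return len(args) == 1 and is_role(args[0])
    if op == "all":
        return len(args) == 2 and is_role(args[0]) and is_concept(args[1])
    if op == "rvm":
        return len(args) == 2 and all(is_role(a) for a in args)
    if op == "sd":
        return len(args) >= 2 and is_concept(args[0]) and \
               all(is_binding(b) for b in args[1:])
    return False

def is_binding(e):                              # (sub r s) / (sup r s) for ⊆ / ⊇
    return len(e) == 3 and e[0] in ("sub", "sup") and \
           is_role(e[1]) and is_role(e[2])

def is_role(e):
    if isinstance(e, str):                      # <atom>
        return True
    op, args = e[0], e[1:]
    if op == "and-role":
        return len(args) >= 1 and all(is_role(a) for a in args)
    if op == "restr":
        return len(args) == 2 and is_role(args[0]) and is_concept(args[1])
    return False

# The person-with-a-child example from the text:
person = ("and", "person", ("some", "child"),
          ("all", ("restr", "child", "male"), "lawyer"),
          ("rvm", "sibling", "friend"))
```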
The sd (for "structural description") construct permits the tying together of various fillers by means of some other object, a feature borrowed from KL-ONE. This construct allows the concept of a project-broadcast message, or a message for which some project exists such that each sender of the message is a project-member of the project, and each project-member of the project is a recipient of the message, to be rendered as

(and message
     (sd project
         (⊆ sender project-member)
         (⊇ recipient project-member))).

Here the senders and the recipients are tied together by means of some project, which has a certain relationship to both the senders and the recipients.

III Formal Semantics

The above discussion defines the syntax of FP and indicates intuitively what the constructs are supposed to model. However, there is no formal definition of the exact meaning of each construct. This meaning is defined in terms of the following extensional semantics.

The basic ideas behind the semantics are similar to the ideas behind other denotational semantics. There is a set of possible worlds or models and a mapping which maps syntactic objects into semantic entities in each of these possible worlds. However, since the semantics is four-valued, the mapping is more complicated. This is because it is not sufficient to simply state conditions specifying where something is true and rely on the fact that where something is not true, it must be false. Instead, separate conditions for truth and falsity must be given, as in three-valued logics.

In a particular world, each concept is mapped into two sets of individuals. The first set is the set of individuals that belong to the concept, called its positive extension. The second set is the set of individuals that definitely do not belong to the concept, called its negative extension. Unlike in two-valued semantics, these two sets need not be complements of each other.
There may be individuals that are members of neither of these sets, and also individuals that are members of both of these sets. Individuals which are in both the positive and negative extension of a concept are hard to characterize using the above description. A description of the semantics in terms of knowing better characterizes such individuals. Under this reading, the first set is the set of individuals known to belong to the concept and the second set is the set of individuals known not to belong to the concept. Individuals that are members of neither set are then not known to belong to the concept and not known not to belong to the concept. This is a perfectly reasonable state for a system that is not a perfect reasoner. Individuals that are members of both sets are, inconsistently, both known to belong to the concept and known not to belong to the concept. This is a slightly harder state to rationalize but can be considered a possibility in the light of inconsistent information. Similarly, roles are mapped into two sets of ordered pairs of individuals.

There are restrictions on this mapping, corresponding to the intuitive meaning of each of the syntactic constructs of the language. For example, the positive extension of (and c1 c2) must be the intersection of the positive extensions of c1 and c2, and its negative extension must be the union of their negative extensions. In this way the intuitive notion of conjunction is made formal. Subsumption is then defined as follows: one concept subsumes another if the positive extension of the first is always a superset of the positive extension of the second and the negative extension of the first is always a subset of the negative extension of the second. This is the obvious way of defining subsumption in a four-valued semantics.

The semantics is strictly defined in terms of situations. A situation is a triple (D, Et, Ef). D is a set of individuals.
Et is a function from concepts to subsets of D and from roles to subsets of D × D, mapping concepts and roles into their positive extensions. Ef is a function from concepts to subsets of D and from roles to subsets of D × D, mapping concepts and roles into their negative extensions. Et and Ef also map bindings into subsets of D × D. Et and Ef must satisfy the following constraints:

d ∈ Et[(and c1 ... cn)] iff for each i, d ∈ Et[ci]
d ∈ Ef[(and c1 ... cn)] iff for some i, d ∈ Ef[ci]
d ∈ Et[(some r)] iff ∃e (d,e) ∈ Et[r]
d ∈ Ef[(some r)] iff ∀e (d,e) ∈ Ef[r]
d ∈ Et[(all r c)] iff ∀e (d,e) ∈ Ef[r] or e ∈ Et[c]
d ∈ Ef[(all r c)] iff ∃e (d,e) ∈ Et[r] and e ∈ Ef[c]
d ∈ Et[(rvm r s)] iff ∀e (d,e) ∈ Ef[r] or (d,e) ∈ Et[s]
d ∈ Ef[(rvm r s)] iff ∃e (d,e) ∈ Et[r] and (d,e) ∈ Ef[s]
d ∈ Et[(sd c b1 ... bn)] iff ∃e e ∈ Et[c] and, for each i, (d,e) ∈ Et[bi]
d ∈ Ef[(sd c b1 ... bn)] iff ∀e e ∈ Ef[c] or, for some i, (d,e) ∈ Ef[bi]
(d,e) ∈ Et[(⊆ r s)] iff ∀z (d,z) ∈ Ef[r] or (e,z) ∈ Et[s]
(d,e) ∈ Ef[(⊆ r s)] iff ∃z (d,z) ∈ Et[r] and (e,z) ∈ Ef[s]
(d,e) ∈ Et[(⊇ r s)] iff ∀z (d,z) ∈ Et[r] or (e,z) ∈ Ef[s]
(d,e) ∈ Ef[(⊇ r s)] iff ∃z (d,z) ∈ Ef[r] and (e,z) ∈ Et[s]
(d,e) ∈ Et[(and-role r1 ... rn)] iff for each i, (d,e) ∈ Et[ri]
(d,e) ∈ Ef[(and-role r1 ... rn)] iff for some i, (d,e) ∈ Ef[ri]
(d,e) ∈ Et[(restr r c)] iff (d,e) ∈ Et[r] and e ∈ Et[c]
(d,e) ∈ Ef[(restr r c)] iff (d,e) ∈ Ef[r] or e ∈ Ef[c]

For any two concepts c' and c, c' subsumes c if, for every situation (D, Et, Ef), Et[c'] ⊇ Et[c] and Ef[c'] ⊆ Ef[c].

How well does this semantics reflect intuitions about the meaning of concepts and roles, and how well does it model subsumption? On the first point, the semantics does rather well.
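The truth and falsity conditions can be evaluated mechanically over a finite situation. The following is a small sketch in our own encoding, not the paper's; only the and, some, and all constructs are covered, and all names are our assumptions:

```python
# A small sketch of the constraints above (our own encoding, not the paper's):
# a finite situation is (D, Et, Ef); Et/Ef give the positive/negative
# extensions of atomic concepts and roles, and the two need not be
# complements (gaps) or disjoint (gluts).  Only and, some, all are covered.

def holds(d, c, sit, sign):
    """Is d in the positive ('t') or negative ('f') extension of concept c?"""
    D, Et, Ef = sit
    E = Et if sign == "t" else Ef
    if isinstance(c, str):                        # atomic concept or role
        return d in E.get(c, set())
    op = c[0]
    if op == "and":
        pick = all if sign == "t" else any        # every / some conjunct
        return pick(holds(d, a, sit, sign) for a in c[1:])
    if op == "some":
        r = c[1]
        if sign == "t":                           # some known r-filler
            return any(holds((d, e), r, sit, "t") for e in D)
        return all(holds((d, e), r, sit, "f") for e in D)
    if op == "all":
        r, cc = c[1], c[2]
        if sign == "t":
            # every e is either known not an r-filler or known to be in cc
            return all(holds((d, e), r, sit, "f") or holds(e, cc, sit, "t")
                       for e in D)
        # some e is a known r-filler known not to be in cc
        return any(holds((d, e), r, sit, "t") and holds(e, cc, sit, "f")
                   for e in D)
    raise ValueError("unknown construct: %r" % (op,))

# Example: 2 is a known doctor, (1, 2) a known friend pair, (1, 1) known not one.
D = {1, 2}
sit = (D, {"doctor": {2}, "friend": {(1, 2)}},
          {"doctor": set(), "friend": {(1, 1)}})
assert holds(1, ("all", "friend", "doctor"), sit, "t")
```

When the negative extension of friend is emptied, the same query falls into a truth-value gap: the all-concept is neither known true nor known false of individual 1.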
Define a model as a situation where, for every concept c, the positive and negative extensions of c are disjoint and together exhaust the domain of the model, and, similarly, the positive and negative extensions of roles and bindings are disjoint and exhaustive. In models the above semantics reduces to the standard two-valued semantics for frame-based description languages. The semantics given here is a minimal mutilation required to go from a two-valued semantics to a four-valued semantics. There is nothing added besides what is needed to get from two truth values to four truth values, in this particular way.

Semantics based on four truth values, such as this one, the propositional semantics of [Levesque, 1984], and the first-order semantics of [Patel-Schneider, 1985], are reasonable for systems with limited reasoning power. Such systems do not have total information, thus the presence of truth-value gaps, and also cannot resolve inconsistencies, thus allowing for inconsistent situations.

On the second point, modeling subsumption, the semantics also does fairly well. First, since the set of models is a subset of the set of situations, and since the requirements for subsumption reduce to the standard ones on models, reasoning in this semantics is sound with respect to the standard semantics. This soundness is an important requirement if the semantics is to capture some of the intuitive ideas behind frame-based description languages.

Second, the actual subsumption relationships in this semantics form an interesting set (as will be shown more fully later). The sort of subsumption relationships that are valid are the simple ones, such as (and person (all (restr friend male) doctor)) subsuming (and person (all friend (and doctor (some speciality)))). Subsumption relationships that are valid in the standard two-valued semantics but not here involve reasoning using the law of the excluded middle or modus ponens. For example, (and person (all friend doctor) (all (restr friend doctor) (some speciality))), i.e., a person whose friends are all doctors and whose friends who are doctors all have some speciality, is not subsumed by (and person (all friend (some speciality))), i.e., a person whose friends all have some speciality. This is because, in four-valued situations, it is possible that some friend might both be a doctor and not be a doctor, as well as not specializing, thus falsifying (all friend (some speciality)) but falsifying neither (all friend doctor) nor (all (restr friend doctor) (some speciality)).⁶

IV Computing Subsumption

Showing that subsumption in this semantics is less powerful than subsumption in the standard semantics does not show that it is any easier to compute. To do this requires defining an algorithm, showing that it is an algorithm for determining subsumption here, and calculating how fast it runs.

The subsumption algorithm works as follows. First, use the following equivalences to transform roles and concepts into canonical form:

1. commutativity and associativity of and and and-role
2. (all r (and c1 c2)) ≡ (and (all r c1) (all r c2))
3. (rvm r (and-role s1 s2)) ≡ (and (rvm r s1) (rvm r s2))
4. (rvm r (restr s c)) ≡ (and (rvm r s) (all r c))
5. (restr (restr r c1) c2) ≡ (restr r (and c1 c2))
6. (and-role (restr r1 c) r2) ≡ (restr (and-role r1 r2) c)

In canonical form, conjuncts of a concept are not themselves conjuncts, the second argument of alls is not a conjunct, the second argument of rvms is an atomic role, and all other roles are of the form (and-role s1 ... sn) or (restr (and-role s1 ... sn) c), where each si is an atomic role. Then (and c1 ... cn) is subsumed by (and c1' ... cm'), where both are in canonical form, iff for each i from 1 to m there is a j in the range from 1 to n such that one of the following cases holds:

1. ci' is an atomic concept and cj = ci',
2. ci' = (some r') and cj = (some r) with r' subsuming r,
3. ci' = (all r' d') and cj = (all r d) with d' subsuming d and r' subsumed by r (if d' is of the form (all s1' d1'), then use (restr r' (some s1')) instead of r', and if d1' is of the form (all s2' d2'), use (restr s1' (some s2')) instead of s1', etc.),
4. ci' = (rvm r' s') and cj = (rvm r s') with r' subsumed by r (recall that s' must be an atomic role),
5. ci' = (sd d' b1' ... bp') and there is some j such that cj = (sd d b1 ... bq) and d is subsumed by d' and, for each i, if bi' is of the form (⊆ r' s'), then (and-role s1 ... sm) is subsumed by s', where, for each k, there is some j such that bj = (⊆ r sk) and r subsumes r', and, if bi' is of the form (⊇ r' s'), then (and-role r1 ... rm) is subsumed by r', where, for each k, there is some j such that bj = (⊇ rk s) and s subsumes s'.

Also (restr (and-role s1 ... sn) c) is subsumed by (restr (and-role s1' ... sm') c') iff for each i there exists j such that sj = si' and c is subsumed by c'. The other cases for roles are the obvious modifications to this rule.

There are two very important properties of these algorithms.

Theorem 1 The algorithms correctly determine subsumption in this semantics, i.e., they are both sound and complete.⁷

Theorem 2 The algorithms run in time proportional to the square of the sum of the sizes of the two expressions.

Therefore, subsumption in this semantics is easy to compute, as opposed to subsumption for FL in the two-valued semantics.

This computational gain would not be very interesting if the subsumption relationships in the semantics were totally uninteresting. Of course, something is lost, but the remaining subsumption relationships must, at least, form an interesting subset of the subsumption relationships of the standard two-valued semantics. Fortunately, this is the case. An examination of the subsumption algorithms given above shows that subsumption in this semantics is very closely related to the subsumptions computed by the subsumption algorithm of NIKL. (The only important difference between the two is the caveat attached to case 3 above.) Both are examples of "structural subsumption", where each piece of structure in the subsuming concept or role must be mirrored by an appropriate piece of structure in the subsumed concept or role. As such, they capture an interesting subset of the subsumption relationships in the standard semantics for frame-based description languages, one that contains the simple subsumption relationships and leaves the complex and hard-to-compute ones out.

⁶Note that in a three-valued semantics, subsumptions like this one are still valid. The existence of a friend that is neither a doctor nor not a doctor prevents both the clauses from being true and does not force either to be false, thus doing nothing to make the subsumption invalid.

⁷The proofs of these theorems are too long to fit in this paper but will be included in a longer paper on four-valued semantics for frame-based description languages.

V Summary

What has been gained from this new semantics for frame-based description languages? First of all, the semantics is a reasonable semantics, especially when considering systems with limited reasoning capabilities. Second, subsumption in this semantics is easy to compute, at least for the language given here. Also, the valid subsumption relationships form an interesting set, one that includes the easy subsumptions and leaves out the less obvious ones. This set corresponds closely to the set of subsumption relationships computed in NIKL, lending a degree of credence to that set. Third, certain extensions to the language, such as adding negation and disjunction or adding compositions of roles or structural descriptions as in NIKL, cause no problems.

However, there are some problems with the semantics.
First, the semantics is not as intuitive as the two-valued semantics. This is a problem with all alternative semantics, but the four-valued semantics given here is still a reasonable semantics, especially for limited reasoners. Second, subsumption in this semantics gets only the very easy cases, leaving many that might be important, such as those involving a single application of modus ponens. It seems that, in order to get a uniform, simple semantics with a fast subsumption algorithm, there is no way around this extreme weakness.

The worst problem with this semantics is its inability to solve computational problems involving number restrictions (generalizing some to at-least and adding at-most). Although it is easy to define these concepts in this semantics, by using the size of sets in the situations, reasoning with numbers is hard, just as it is in the standard semantics. This is because identity is a two-valued notion, which legitimizes more deductions than are usual in a four-valued semantics. The most promising way of getting around this problem is to go to a four-valued notion of equality, which, of course, further changes the semantics from the standard one and introduces several complications to the analysis of subsumption.

The most important point about this new semantics is that it forms a principled way to defuse the tradeoff between expressive power and computational complexity. It justifies a limited set of subsumption relationships that are easy to compute and, moreover, captures an interesting subset of the standard subsumption relationships. This is not a total solution, because no total solutions are possible (unless P = NP), and is not even a finalized solution, because it does not yet handle number restrictions.
However, this semantics does form an important step towards a principled, computationally tractable yet expressively powerful knowledge representation system, and thus serves to alleviate the computational problems of frame-based description languages reported by Brachman and Levesque.

Acknowledgments

Ron Brachman and Pat Hayes, as the previous and current leaders of the knowledge (representation) group at SPAR, have been instrumental in maintaining a comfortable research environment for me. Hector Levesque and Ron Brachman, through their investigation of the complexity of computing subsumption in standard semantics for frame-based description languages, provided the impetus for this research. Dave McAllester and Dan Carnese provided useful comments on earlier drafts of the paper.

References

[Brachman and Levesque, 1984] Ronald J. Brachman and Hector J. Levesque. The tractability of subsumption in frame-based description languages. In Proceedings AAAI-84, pages 34-37, August 1984.
[Brachman and Schmolze, 1985] Ronald J. Brachman and James G. Schmolze. An overview of the KL-ONE knowledge representation system. Cognitive Science, 9(2):171-216, April-June 1985.
[Brachman et al., 1983] Ronald J. Brachman, Richard E. Fikes, and Hector J. Levesque. KRYPTON: a functional approach to knowledge representation. IEEE Computer, 16(10, Special Issue on Knowledge Representation):67-73, October 1983.
[Brachman et al., 1985] Ronald J. Brachman, Victoria Pigman Gilbert, and Hector J. Levesque. An essential hybrid reasoning system: knowledge and symbol level accounts of KRYPTON. In Proceedings IJCAI-85, pages 532-539, August 1985.
[Frisch, 1985] Alan M. Frisch. Using model theory to specify AI programs. In Proceedings IJCAI-85, pages 148-154, August 1985.
[Levesque, 1984] Hector J. Levesque. A logic of implicit and explicit belief. In Proceedings AAAI-84, pages 198-202, August 1984.
[Levesque and Brachman, 1985] Hector J. Levesque and Ronald J. Brachman. A fundamental tradeoff in knowledge representation and reasoning (revised version). In Ronald J. Brachman and Hector J. Levesque, editors, Readings in Knowledge Representation, pages 42-70, Morgan Kaufmann Publishers, Los Altos, California, 1985.
[Patel-Schneider, 1984] Peter F. Patel-Schneider. Small can be beautiful in knowledge representation. In Proceedings IEEE Workshop on Principles of Knowledge-Based Systems, pages 11-16, December 1984.
[Patel-Schneider, 1985] Peter F. Patel-Schneider. A decidable first-order logic for knowledge representation. In Proceedings IJCAI-85, pages 455-458, August 1985.
[Patel-Schneider et al., 1984] Peter F. Patel-Schneider, Ronald J. Brachman, and Hector J. Levesque. ARGON: knowledge representation meets information retrieval. In Proceedings of The First Conference on Artificial Intelligence Applications, pages 280-286, December 1984.
[Schmolze, 1985] James G. Schmolze. The language and semantics of NIKL. Draft, BBN Laboratories, April 1985.
[Schmolze and Israel, 1983] James G. Schmolze and David J. Israel. KL-ONE: semantics and classification. In Research in Knowledge Representation for Natural Language Understanding - Annual Report, 1 September 1982 - 31 August 1983, pages 27-39, Technical Report 5421, BBN Laboratories, 1983.
ON THE LOGIC OF PROBABILISTIC DEPENDENCIES

Judea Pearl
Cognitive Systems Laboratory, Computer Science Dept., UCLA, Los Angeles, CA 90024

ABSTRACT

This paper uncovers the axiomatic basis for the probabilistic relation "x is independent of y, given z" and offers it as a formal definition of informational dependency. Given an initial set of such independence relationships, the axioms established permit us to infer new independencies by non-numeric, logical manipulations. Additionally, the paper legitimizes the use of inference networks to represent probabilistic dependencies by establishing a clear correspondence between the two relational structures. Given an arbitrary probabilistic model, P, we demonstrate a construction of a unique edge-minimum graph G such that each time we observe a vertex x separated from y by a subset S of vertices, we can be guaranteed that variables x and y are independent in P, given the values of the variables in S.

1. INTRODUCTION

Any system that reasons about knowledge and beliefs must make use of information about dependencies and relevancies. If we have acquired a body of knowledge z and now wish to assess the truth of proposition x, it is important to know whether it would be worthwhile to consult another proposition y, which is not in z. In other words, before we examine y, we need to know if its truth value can potentially generate new information relative to x, information not available from z. For example, in trying to predict whether I am going to be late for a meeting, it is normally a good idea to ask somebody on the street for the time. However, once I establish the precise time by listening to the radio, asking people for the time becomes superfluous, and their responses would be irrelevant. Similarly, knowing the color of X's car normally tells me nothing about the color of Y's.
However, if X were to tell me that he almost mistook Y's car for his own, the two pieces of information become relevant to each other: whatever I learn about the color of X's car will have bearing on what I believe the color of Y's car to be. What logic would facilitate this type of reasoning?

In probability theory, the notion of relevance is given precise quantitative underpinning using the device of conditional independence. A variable x is said to be independent of y given the information z if

P(x, y | z) = P(x | z) P(y | z)

However, it is rather unreasonable to expect people or machines to resort to numerical verification of equalities in order to extract relevance information. The ease and conviction with which people detect relevance relationships strongly suggest that such information is readily available from the organizational structure of human memory, not from numerical values assigned to its components. Accordingly, it would be interesting to explore how assertions about relevance can be inferred qualitatively from various models of memory and, in particular, whether the logic of such assertions coincides with that of probabilistic dependencies.

Since models of human memory are normally portrayed in terms of semantic networks of concepts and relations [Woods 1975], a natural question to ask is whether the notion of probabilistic dependency can be captured by a network representation, in the sense that all dependencies and independencies in a given probabilistic model could be detected from the topological properties of some network. For a given probability distribution P and any three variables x, y and z, while it is fairly easy to verify whether knowing z renders x independent of y, P does not dictate which variables should be regarded as direct neighbors.

*This work was supported in part by the National Science Foundation, Grant #DSR 83-13875.
Thus, the topology of networks which display the underlying dependencies is not explicitly given by the numeric representation of probabilities.

This paper accomplishes two tasks. First, it uncovers the axiomatic basis for the probabilistic relation "x is independent of y, given z" and offers it as a formal definition for the qualitative notion of informational dependency. Given an initial set of such independence relationships, the axioms established permit us to infer new independencies by non-numeric, logical manipulations. Second, the paper legitimizes the use of networks to represent probabilistic dependencies by establishing a clear correspondence between the two relational structures. Given an arbitrary probabilistic model, P, we demonstrate a construction of a unique edge-minimum graph G such that each time we observe a vertex x separated from y by a subset S of vertices, we can be guaranteed that variables x and y are independent in P, given the values of the variables in S. This correspondence provides a semantics for the topology of propositional inference networks like those used in expert systems [Duda, Hart and Nilsson 1976].

From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved.

2. AN AXIOMATIC BASIS FOR PROBABILISTIC DEPENDENCIES

Let U = {α, β, ...} be a finite set of discrete-valued variables (i.e., partitions) characterized by a joint probability function P(u), and let x, y and z stand for any three subsets of variables in U. We say that x and y are conditionally independent given z if

P(x, y | z) = P(x | z) P(y | z)  when P(z) > 0    (1)

Eq. (1) is a terse notation for the assertion that for any instantiation z_k of the variables in z and for any instantiations x_i and y_j of x and y, we have

P(x = x_i and y = y_j | z = z_k) = P(x = x_i | z = z_k) P(y = y_j | z = z_k)    (2)

The requirement P(z) > 0 guarantees that all the conditional probabilities are well defined, and we shall henceforth assume that P > 0 for any instantiation of the variables in U. This rules out logical and functional dependencies among the variables, a case which would require special treatment. We use the notation I(x, z, y)_P, or simply I(x, z, y), to denote the independence of x and y given z; thus,

I(x, z, y)_P iff P(x, y | z) = P(x | z) P(y | z)    (3)

Note that I(x, z, y) implies the conditional independence of all pairs of variables α ∈ x and β ∈ y, but the converse is not necessarily true. The conditional independence relation I(x, z, y) satisfies the following set of properties [Lauritzen 1982]:

I(x, z, y) <=> P(x | y, z) = P(x | z)    (4.a)
I(x, z, y) <=> P(x, z | y) = P(x | z) P(z | y)    (4.b)
I(x, z, y) <=> ∃ f, g: P(x, y, z) = f(x, z) g(y, z)    (4.c)
I(x, z, y) <=> P(x, y, z) = P(x | z) P(y, z)    (4.d)
I(x, z, y) => I(f(x), z, y)    (5.a)
I(x, z, y) => I(x, z ∪ f(x), y)    (5.b)

where f(x) stands for any function of the variables in x. The proof of these properties can be derived by elementary means from the definition (1). These properties are based on the numeric representation of P and, therefore, would not be adequate as an axiomatic system.
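The kind of numerical verification that these properties rest on, and that the axiomatization to follow is meant to avoid, is easy to spell out. A small sketch, our own illustration rather than anything from the paper, checking the defining equality over a joint table of three binary variables:

```python
# Our own illustration (not from the paper): checking the defining equality
# P(x,y|z) = P(x|z)P(y|z) numerically for three binary variables, with the
# joint given as a table P[(x, y, z)] -> probability.  The comparison is done
# in the cross-multiplied form P(x,y,z)P(z) = P(x,z)P(y,z).

from itertools import product

def independent(P, tol=1e-9):
    """Test I(x, z, y) for binary x, y, z."""
    for v in product((0, 1), repeat=3):
        x, y, z = v
        pz  = sum(P[(a, b, z)] for a, b in product((0, 1), repeat=2))
        pxz = sum(P[(x, b, z)] for b in (0, 1))
        pyz = sum(P[(a, y, z)] for a in (0, 1))
        if abs(P[v] * pz - pxz * pyz) > tol:
            return False
    return True

# x and y are independent fair coins; z is a noisy reading of x.
P = {(x, y, z): 0.25 * (0.9 if z == x else 0.1)
     for x, y, z in product((0, 1), repeat=3)}
assert independent(P)       # knowing z leaves x and y independent
```

The axioms developed next allow such conclusions to be drawn by purely symbolic manipulation, without ever consulting the numbers.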
We now ask what logical conditions, void of any reference to numerical forms, should constrain the relationship I(x, z, y) if it stands for the statement "x is independent of y, given that we know z." The next set of properties constitutes such a logical basis:

Theorem 1: Let x, y and z be three disjoint subsets of variables from U, and let I(x, z, y) stand for the relation "x is independent of y, given z" in some probabilistic model P. Then I must satisfy the following set of five independent conditions:

I(x, z, y) <=> I(y, z, x)    Symmetry    (6.a)
I(x, z, y ∪ w) => I(x, z, y) & I(x, z, w)    Decomposition    (6.b)
I(x, z ∪ w, y) & I(x, z ∪ y, w) => I(x, z, y ∪ w)    Exchange    (6.c)
I(x, z, y ∪ w) => I(x, z ∪ w, y)    Expansion    (6.d)
I(x, z ∪ y, w) & I(x, z, y) => I(x, z, y ∪ w)    Contraction    (6.e)

For technical convenience we shall adopt the convention that every variable is independent of the null set, i.e., I(x, z, ∅).

The intuitive interpretation of Eqs. (6.c) through (6.e) follows. (6.c) states that if y does not affect x when w is held constant and if, simultaneously, w does not affect x when y is held constant, then neither w nor y can affect x. (6.d) states that learning an irrelevant fact (w) cannot help another irrelevant fact (y) become relevant. (6.e) can be interpreted to state that if we judge w to be irrelevant (to x) after learning some irrelevant facts y, then w must have been irrelevant before learning y. Together, the expansion and contraction properties mean that learning irrelevant facts should not alter the relevance status of other propositions in the system; whatever was relevant remains relevant, and what was irrelevant remains irrelevant.

The proof of Theorem 1 can be derived by elementary means from the definition (3) and from the basic axioms of probability theory. The exchange property is the only one which requires the assumption P(x) > 0 and will not hold when the variables in U are constrained by logical dependencies. In such a case, Theorem 1 will still retain its validity if we regard each logical constraint as having some small probability ε of being violated, and let ε → 0.

The proof that Eqs. (6.a) through (6.e) are logically independent can be derived by letting U contain four elements and showing that it is always possible to contrive a subset I of triplets (from the subsets of U) which violates one property and satisfies the other four.

A graphical interpretation for properties (6.a) through (6.e) can be obtained by envisioning a graph with a set of vertices U and associating the relationship I(A, B, C) with the statement "B intervenes between A and C" or, in other words, "the removal of a set B of nodes would render the nodes in A disconnected from those in C." The validity of (6.c) through (6.e) is clearly depicted by the chain x - z - y - w.

Completeness Conjecture: The set of axioms (6.a) through (6.e) is complete when I is interpreted as a conditional-independence relation. In other words, for every 3-place relation I satisfying (6.a) through (6.e), there exists a probability model P such that P(x | y, z) = P(x | z) iff I(x, z, y).

Although we have not been able to establish a general proof of completeness, we were not able to find any violating example, i.e., we could not find another general property of conditional independence which is not implied by (6.a) through (6.e).

3. A GRAPHICAL REPRESENTATION FOR PROBABILISTIC DEPENDENCIES

Let G be an undirected graph and let <x | S | y>_G stand for the assertion that removing a subset S of nodes from G would render nodes x and y disconnected. Ideally, we would like to display independence between variables by the lack of connectivity between their corresponding nodes in some graph G.
Likewise, we would like to require that finding <x | S | y>_G should correspond to conditional independence between x and y given S, namely, <x | S | y>_G => I(x, S, y)_P, and, conversely, I(x, S, y)_P => <x | S | y>_G. This would provide a clear graphical representation for the notion that x does not affect y directly, that its influence is mediated by the variables in S. Unfortunately, we shall next see that these two requirements might be incompatible; there might exist no way to display all the dependencies and independencies embodied in P by vertex separation in a graph.

Definition: An undirected graph G is a dependency map (D-map) of P if there is a one-to-one correspondence between the variables in P and the nodes of G, such that for all disjoint subsets x, y, S of variables we have:

I(x, S, y)_P => <x | S | y>_G    (7)

Similarly, G is an independency map (I-map) of P if:

I(x, S, y)_P <= <x | S | y>_G    (8)

A D-map guarantees that vertices found to be connected are, indeed, dependent; however, it may occasionally display dependent variables as separated vertices. An I-map works the opposite way: it guarantees that vertices found to be separated always correspond to genuinely independent variables but does not guarantee that all those shown to be connected are, in fact, dependent. Empty graphs are trivial D-maps, while complete graphs are trivial I-maps.

Given an arbitrary graph G, the theory of Markov Fields [Lauritzen 1982] tells us how to construct a probabilistic model P for which G is both a D-map and an I-map. We now ask whether the converse construction is possible.

Lemma: There are probability distributions for which no graph can be both a D-map and an I-map.

Proof: Graph separation always satisfies <x | S1 | y>_G => <x | S1 ∪ S2 | y>_G for any two subsets S1 and S2 of vertices. Some P's, however, may induce both I(x, S1, y)_P and NOT I(x, S1 ∪ S2, y)_P.
Such P's cannot have a graph representation which is both an I-map and a D-map, because D-mapness forces G to display S1 as a cutset separating x and y, while I-mapness prevents S1 ∪ S2 from separating x and y. No graph can satisfy these two requirements simultaneously. Q.E.D.

A simple example illustrating the conditions of the proof is an experiment with two coins and a bell that rings whenever the outcomes of the two coins are the same. If we ignore the bell, the coin outcomes are mutually independent, i.e., S1 = ∅. However, if we notice the bell (S2), then learning the outcome of one coin should change our opinion about the other coin. The only I-map for this example is a complete graph on the three variables involved. It is obviously not a D-map because it fails to display the basic independence of the coin outcomes.

Being unable to provide a graphical description for all independencies, we settle for the following compromise: we will consider only I-maps but will insist that the graphs in those maps capture as many of P's independencies as possible, i.e., they should contain no superfluous edges.

Definition: A graph G is a minimal I-map of P if no edge of G can be deleted without destroying its I-mapness. We call such a graph a Markov net of P.

Theorem 2: Every P has a (unique) minimal I-map G0 = (U, E0), produced by connecting only pairs (α, β) for which I(α, U - α - β, β)_P is FALSE,    (8)
i.e.,
(α, β) ∈ E0 iff NOT I(α, U - α - β, β)_P    (9)

The proof is given in [Pearl and Paz 1985] and uses only the symmetry and exchange properties of I.

Definition: A relevance sphere R_I(α) of a variable α ∈ U is any subset S of variables for which

I(α, S, U - S - α) and α ∉ S    (10)

Let R*_I(α) stand for the set of all relevance spheres of α. A set is called a relevance boundary of α, denoted B_I(α), if it is in R*_I(α) and if, in addition, none of its proper subsets is in R*_I(α). B_I(α) is to be interpreted as the smallest set of variables that "shields" α from the influence of all other variables.
Note that R*_I(α) is non-empty because I(x, z, ∅) guarantees that the set S = U - α satisfies (10).

Theorem 3: [Pearl and Paz 1985] Every variable α ∈ U has a unique relevance boundary B_I(α). B_I(α) coincides with the set of vertices B_G0(α) adjacent to α in the Markov net G0.

The proof of Theorem 3 makes use of the expansion property (6.d).

Corollary 1: The set of relevance boundaries B_I(α) forms a neighbor system, i.e., a collection B*_I = {B_I(α) : α ∈ U} of subsets of U such that

(i) α ∉ B_I(α), and
(ii) α ∈ B_I(β) iff β ∈ B_I(α), for all α, β ∈ U.

Corollary 2: The Markov net G0 can be constructed by connecting each α to all members of its relevance boundary B_I(α).

The usefulness of this corollary lies in the fact that, in many cases, it is the Markov boundaries B_I(α) that define the organizational structure of human memory. People find it natural to identify the immediate consequences and/or justifications of each action or event [Doyle 1979], and these relationships constitute the neighborhood semantics for inference nets used in expert systems [Duda et al. 1976]. The fact that B_I(α) coincides with B_G0(α) guarantees that many independence relationships can be validated by tests for graph separation at the knowledge level itself [Pearl 1985].

Thus we see that the major graphical properties of probabilistic independencies are consequences of the exchange and expansion axioms (6.c) and (6.d). Axioms (6.a) through (6.d) were chosen, therefore, as the definition of a general class of dependency models called graphoids [Pearl and Paz 1985], which possess graphical representations similar to those of Markov nets.
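Theorem 2's edge criterion can be applied by brute force to any small joint distribution. The sketch below is our own code with assumed encodings, not the paper's; it uses the coins-and-bell experiment from the Lemma's discussion, whose only I-map, the text notes, is the complete graph:

```python
# A brute-force sketch of Theorem 2's edge criterion (our own code): connect
# a and b exactly when I(a, U-a-b, b) fails in P.  P maps a tuple of n binary
# values to its probability; the cross-multiplied test avoids dividing by
# zero-probability assignments.

from itertools import combinations, product

def cond_independent(P, n, a, b, tol=1e-9):
    """Test I(a, U-a-b, b): P(v)P(z) = P(a,z)P(b,z) for every assignment v."""
    rest = [i for i in range(n) if i not in (a, b)]
    proj = lambda w: tuple(w[i] for i in rest)
    for v in P:
        z = proj(v)
        pz  = sum(p for w, p in P.items() if proj(w) == z)
        paz = sum(p for w, p in P.items() if proj(w) == z and w[a] == v[a])
        pbz = sum(p for w, p in P.items() if proj(w) == z and w[b] == v[b])
        if abs(P[v] * pz - paz * pbz) > tol:
            return False
    return True

def markov_net(P, n):
    return {(a, b) for a, b in combinations(range(n), 2)
            if not cond_independent(P, n, a, b)}

# Two fair coins (variables 0, 1) and a bell (2) ringing iff they agree:
bell = {v: (0.25 if (v[0] == v[1]) == bool(v[2]) else 0.0)
        for v in product((0, 1), repeat=3)}
assert markov_net(bell, 3) == {(0, 1), (0, 2), (1, 2)}   # complete graph
```

On a fully independent joint the same construction returns an empty edge set, the other trivial extreme mentioned in the text.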
Illustration 1 (abstract)

To illustrate the role of these axioms, consider a set of four integers U = {1, 2, 3, 4}, and let I be the set of twelve triplets listed below:

I = {(1, 2, 3), (1, 3, 4), (2, 3, 4), ({1, 2}, 3, 4), (1, {2, 3}, 4), (2, {1, 3}, 4)} + symmetrical images

It is easy to see that I satisfies (6.a)-(6.d) and thus it has a unique minimal I-map G₀, shown in Figure 1.

Figure 1: The Minimal I-Map, G₀, of I

This graph can be constructed either by deleting the edges (1, 4) and (2, 4) from the complete graph, or by computing from I the relevance boundary of each element (e.g., B_I(2) = {1, 3}).

Suppose that I contained only the last two triplets (and their symmetrical images):

I′ = {(1, {2, 3}, 4), (2, {1, 3}, 4)} + symmetrical images

I′ is clearly not a probabilistic independence relation, because the absence of the triplets (1, 3, 4) and (2, 3, 4) violates the exchange axiom (6.c). Indeed, if we try to construct G₀ by the usual criterion of edge deletion, the graph in Figure 1 ensues, but it is no longer an I-map of I′; it shows 3 separating 1 from 4, while (1, 3, 4) is not in I′. In fact, the only I-maps of I′ are the three graphs in Figure 2, and the edge-minimum graph is clearly not unique.

Figure 2: The Three I-Maps of I′

Now consider the list

I″ = {(1, 2, 3), (1, 3, 4), (2, 3, 4), ({1, 2}, 3, 4)} + symmetrical images

I″ satisfies the first three axioms, (6.a) through (6.c), but not the expansion axiom (6.d). Since no triplet of the form (α, U−α−β, β) appears in I″, the only I-map for this list is the complete graph. However, the relevance boundaries of I″ do not form a neighbor set; e.g., B_{I″}(4) = {3} and B_{I″}(2) = {1, 3, 4}, so 2 ∉ B_{I″}(4) while 4 ∈ B_{I″}(2). Note that I″ does not possess the contraction property (6.e) of probabilistic dependencies. Therefore, there is no probabilistic model capable of inducing this set of independence relationships unless we also add the triplet (1, 2, 3) to I″.
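The edge-deletion construction of Theorem 2 can be checked mechanically. The sketch below (not from the paper) encodes the relation I of Illustration 1 and recovers G₀ by connecting exactly those pairs (α, β) for which I(α, U−α−β, β) fails:

```python
from itertools import combinations

U = {1, 2, 3, 4}

# Triplets (x, Z, y) of the relation I from Illustration 1; x and y may
# themselves be sets, so every element is stored as a frozenset.
I = {
    (frozenset({1}), frozenset({2}), frozenset({3})),
    (frozenset({1}), frozenset({3}), frozenset({4})),
    (frozenset({2}), frozenset({3}), frozenset({4})),
    (frozenset({1, 2}), frozenset({3}), frozenset({4})),
    (frozenset({1}), frozenset({2, 3}), frozenset({4})),
    (frozenset({2}), frozenset({1, 3}), frozenset({4})),
}
# Symmetrical images (axiom 6.a).
I |= {(y, z, x) for (x, z, y) in I}

def independent(a, b):
    """I(a, U - {a, b}, b): the edge-deletion criterion of eq. (9)."""
    rest = frozenset(U - {a, b})
    return (frozenset({a}), rest, frozenset({b})) in I

# Connect (a, b) exactly when the criterion is FALSE (Theorem 2).
edges = {(a, b) for a, b in combinations(sorted(U), 2)
         if not independent(a, b)}
print(sorted(edges))  # [(1, 2), (1, 3), (2, 3), (3, 4)] -- Figure 1
```

As expected, the non-edges are precisely (1, 4) and (2, 4), matching the complete graph with those two edges deleted.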
Illustration 2 (application)

Consider the task of constructing a Markov net to represent the belief whether or not an agent A is about to be late to a meeting (see Introduction, 1st paragraph). Assume that the agent identifies the following variables as having influence on the main question of being late to a meeting:

1) the time shown on Person-1's watch;
2) the time shown on Person-2's watch;
3) the correct current time;
4) the time A will show up at the meeting place;
5) the agreed time for starting the meeting;
6) the time A's partner will actually show up;
7) whether A will be late for the meeting (i.e., will arrive after his partner).

The construction of G₀ can proceed by two methods: 1) the complementary-set method; and 2) the relevance-boundary method. The first method requires that, for every pair of variables (α, β), we determine whether fixing the values of all other variables in the system will render our belief in α sensitive to the value of β. We know, for example, that 7 will depend on 4, no matter what values are assumed by all the other variables and, on that basis, we may connect node 7 to node 4. Proceeding in that fashion through all pairs of variables, the graph of Figure 3 may be constructed.

Figure 3

The relevance-boundary method is more direct: for every variable α in the system we identify the minimal set of variables sufficient to render the belief in α insensitive to all other variables in the system. It is a common-sense task, for instance, to decide that, once we know the current time (3), no other variable may affect what we expect to read on Person-1's watch (1). Similarly, once we know the current time (3) and that we are not about to be late (7), we still must know when our partner will actually show up (6) before we can estimate our arrival time (4) independent of the agreed time (5). On the basis of these considerations, we may connect 1 to 3; 4 to 6, 7 and 3; and so on.
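A sketch of the separation test such a net supports. Only some of the edges below are stated in the text ("connect 1 to 3; 4 to 6, 7 and 3"); the remaining edges are assumptions made here to complete the graph, so the edge set is illustrative rather than the paper's:

```python
from collections import deque

# Hypothetical completion of the meeting-example net (nodes 1-7).
edges = {(1, 3), (2, 3), (3, 4), (4, 6), (4, 7), (6, 7), (5, 6), (5, 7)}

def separated(x, y, cutset):
    """True iff every path from x to y passes through the cutset,
    i.e., removing the cutset disconnects x from y."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    seen, queue = {x}, deque([x])
    while queue:               # breadth-first search avoiding the cutset
        v = queue.popleft()
        for w in adj.get(v, ()):
            if w not in cutset and w not in seen:
                seen.add(w)
                queue.append(w)
    return y not in seen

print(separated(2, 5, {4}))    # True: node 4 cuts 2 off from 5
print(separated(2, 5, set()))  # False: without the cutset they connect
```

With this (assumed) edge set, knowing variable 4 renders 2 and 5 graph-separated, which is the kind of independence query the net is meant to answer.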
After finding the immediate neighbors of any six variables in the system, the graph G₀ will emerge, identical to that of Figure 3.

Once established, G₀ can be used as an inference instrument. For example, the fact that knowing 4 renders 2 independent of 5 (i.e., I(2, 4, 5)) can be inferred from the fact that 4 is a cutset in G₀, separating 2 from 5. Deriving this conclusion by syntactic manipulations of axioms (6.a) through (6.e) would probably be more complicated. Additionally, the graphical representation can be used to help maintain consistency and completeness during the knowledge-building phase. One need only ascertain that the relevance boundaries identified by the knowledge provider (e.g., the expert) form a neighbor system.

4. CONCLUSIONS

We have shown that the essential qualities characterizing the probabilistic notion of conditional independence are captured by five logical axioms: symmetry (6.a), decomposition (6.b), exchange (6.c), expansion (6.d) and contraction (6.e). The first three axioms enable us to construct an edge-minimum graph in which every cutset corresponds to a genuine independence condition. The fourth axiom is needed to guarantee that the set of neighbors which G₀ assigns to each variable α is actually the smallest set required to shield α from the effects of all other variables.

The graphical representation associated with conditional independence offers an effective inference mechanism for deducing, in any given state of knowledge, which propositional variables are relevant to each other. If we identify the relevance boundaries associated with each proposition in the system and treat them as neighborhood relations defining a graph G₀, then we can correctly deduce independence relationships by testing whether the set of currently known propositions constitutes a cutset in G₀.
The probabilistic relation of conditional independence is shown to possess a rather plausible set of qualitative properties, consistent with our intuitive notion of "x being irrelevant to y, once we learn z." Reducing these properties to a set of logical axioms permits us to test whether other calculi of uncertainty also yield facilities for connecting relevance to knowledge. Moreover, the axioms established can be viewed as inference rules for deriving new independencies from some initial set.

Not all properties of probabilistic dependence can be captured by undirected graphs. For example, the former is non-monotonic and non-transitive (see the 'coins and bell' example after the proof of the lemma), while graph separation is both monotonic and transitive. It is for these reasons that directed graphs such as inference nets (Duda et al., 1976) and belief nets (Pearl, 1985) are finding a wider application in reasoning systems. A systematic axiomatization of these graphical representations is currently under way.

REFERENCES

Doyle, J. (1979), "A Truth Maintenance System," Artificial Intelligence, vol. 12, no. 3.

Duda, R. O., Hart, P. E., and Nilsson, N. J. (1976), "Subjective Bayesian Methods for Rule-Based Inference Systems," in Proceedings, 1976 National Computer Conference (AFIPS Conference Proceedings), 45, 1075-1082.

Lauritzen, S. L. (1982), Lectures on Contingency Tables, 2nd Ed., University of Aalborg Press, Aalborg, Denmark.

Pearl, J. (1985), "Fusion, Propagation and Structuring in Belief Networks," UCLA CSD Technical Report #850022, Los Angeles, CA, June 1985; to be published in Artificial Intelligence, fall 1986.

Pearl, J., and Paz, A. (1985), "Graphoids: A Graph-based Logic for Reasoning about Relevance Relations," UCLA CSD Technical Report #850038, Los Angeles, CA, December 1985.

Woods, W. A. (1975), "What's in a Link: Foundations for Semantic Networks," in Bobrow and Collins (eds.), Representation and Understanding, Academic Press, Inc.
INFERENCE IN A TOPICALLY ORGANIZED SEMANTIC NET

Johannes de Haan and Lenhart K. Schubert
Department of Computing Science, University of Alberta
Edmonton, Alberta, Canada T6G 2H1

ABSTRACT

A semantic net system in which knowledge is topically organized around concepts has been under development at the University of Alberta for some time. The system is capable of automatic topical classification of modal logic input sentences, concept- and topic-oriented retrieval, and property inheritance of a general sort. This paper presents an inference method which efficiently determines yes or no answers to relatively simple questions about knowledge in the net. It is a deductive, resolution-based method, enhanced by a set of special inference methods, and relies on the classification and retrieval mechanisms of the net to maintain its effectiveness, unencumbered by the volume or diversity of knowledge in the net.

I INTRODUCTION

In 1975, Scott Fahlman and Drew McDermott discussed the so-called "symbol-mapping problem" (Fahlman 1975, McDermott 1975). In essence, this is the problem of making simple inferences quickly in a system with a potentially very large, varied knowledge base. Among the examples they discussed were the inference that Clyde is grey, given that Clyde is an elephant and elephants are grey, and the inference that Clyde does not live in a teacup or play the piano, given standard knowledge about elephants, teacups, and pianos. What makes the problem hard is not the complexity of the requisite knowledge or reasoning, which are quite modest, but the fact that the right knowledge may be very hard to find: we have a "needle-in-a-haystack" problem.

More than a decade later, the problem cannot be considered satisfactorily solved. Progress has been made in "customized" understanding and reasoning systems - systems that can make a wide range of inferences in a circumscribed domain (e.g., the divorce story domain of BORIS, described in Lehnert et al.
(1983), or the various domains of expertise of expert systems in medicine, computer configuration, prospecting, and so on). However, the problem of scaling up to systems with a large amount of knowledge about a wide variety of subjects is still very much with us.

The work reported here is part of a continuing effort to develop a question-answering and English conversational system (ECOSYSTEM) which is unencumbered by the volume or diversity of its knowledge. Our approach to efficient question-answering for large knowledge bases was first sketched in Schubert et al. (1979). This sketch motivated the design of a 3-level semantic net organization: at the highest level, knowledge resides in a main net (for real world knowledge) and in arbitrarily nested subnets (primarily for "mental worlds" and "narrative worlds"); the next level is the level of concepts, which are used in each subnet as access points for knowledge directly involving them; and the third level is the level of topics, which partition the knowledge stored at a given concept into topically related subsets of facts (Covington & Schubert, 1980). This organization allows highly selective retrieval of knowledge relevant to a query. For example, the question "Is the wolf in the story of Little Red Riding Hood grey?" (posed in logical form) would prompt access of the Little Red Riding Hood subnet, followed by access within that subnet of the node for the wolf, followed by access of 'colouring' information about the wolf and its superordinate concepts. (Additional information may be accessed during the inference attempt - see Section III.) The implementation provides for input of modal logic sentences which are automatically converted to clause form, topically classified, and inserted at appropriate concept nodes.
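The three-level access path can be illustrated with a minimal sketch; all names and the dictionary layout here are hypothetical stand-ins, not the actual implementation, and plain strings stand in for the system's modal-logic clauses:

```python
# Hypothetical three-level organization: subnet -> concept -> topic -> clauses.
net = {
    "LRRH-story": {                        # subnet for a narrative world
        "WOLF1": {                         # concept node (access point)
            "colouring": ["[WOLF1 GREY]"],     # topic -> indexed clauses
            "feeding":   ["[WOLF1 EAT GRANDMA]"],
        },
    },
}

def retrieve(subnet, concept, topic):
    """Selective access: only clauses indexed under the given subnet,
    concept, and topic are touched; everything else is ignored."""
    return net.get(subnet, {}).get(concept, {}).get(topic, [])

print(retrieve("LRRH-story", "WOLF1", "colouring"))  # ['[WOLF1 GREY]']
```

The point of the nesting is that the cost of a lookup depends on the path, not on the total volume of knowledge stored in the net.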
In addition, special inference methods have been developed to short-cut taxonomic reasoning and reasoning about colours and time. The new deductive algorithm builds on this work; it efficiently determines yes or no answers to the sorts of questions discussed by Fahlman and McDermott, relying upon the classification and retrieval mechanisms of the knowledge base.

In some respects, the method has goals that are similar to those of automatic theorem provers. However, the domains of natural language understanding and theorem-proving are different, and two fundamental differences distinguish this method from a theorem-prover:

Size of the knowledge base. Theorem provers work on problems in well-defined logical or mathematical domains. These systems are artificial and are usually axiomatized by some small set of statements. This is not true of the semantic net, which will incorporate a very large body of knowledge, sufficient at least to carry on an intelligent conversation.

Deductive ability. Theorem provers are judged mainly by their deductive ability - better provers solve logically more complex problems. People require minutes or even hours to solve these problems, and yet they are able to perform the inference needed for natural language understanding almost immediately. It is not unreasonable to suppose that natural language inference is shallower (i.e., requires fewer steps) than that required for mathematical theorem proving.

II THE SEMANTIC NET

To understand how the inference method works, it is important to understand how the semantic net represents and organizes knowledge. The representational scheme is essentially that described in Schubert (1976), incorporating changes described in Schubert et al. (1979) and Covington (1980).
The syntax of the net provides for the representation of formulae in higher-order modal logic, with constants, functions, existentially and universally quantified variables, and the usual truth-functional connectives (negation, implication, disjunction and conjunction).* Although the net is able to represent and organize modal propositions, the current inference method is restricted to the first-order predicate calculus.

From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved.

Within the main net and each subnet, a dictionary provides direct (hashed) access to named concepts. Facts about these concepts are then organized using a topical hierarchy (TH). Using the TH, it becomes possible to directly access clauses which topically pertain to a concept and to ignore all the rest (which can potentially be a large number of clauses). The structure of the TH also defines relationships among topics, so it is possible to broaden an access to sub-topics or super-topics of a given topic. Figure 1 illustrates a simplified topical hierarchy for concepts which are physical objects.

Figure 1. Topic hierarchy (TH) - topics include specialization, generalization, part, form (colouring, appearance, translucency, texture, odour), tactile-quality (texture, hardness, resilience), mental-quality (emotional-disposition, intellectual-disposition), behavior (communication), function (use), kinship, control, membership, and ownership.

It is not likely that we know something about all topics for all concepts, and to duplicate the entire TH for every concept would waste storage space and increase traversal time across empty topics. To solve this problem, topic access skeletons (TAS's) are used. Each TAS is a minimal hierarchy based on the complete TH, and only includes topics about which there is some knowledge, or topics which are needed to preserve the structure of the hierarchy. Figure 2 illustrates simple TAS's for the specific concept WOLF1 and for the generic concept WOLF.

Figure 2. Topic access skeletons (TAS's). WOLF1 - generalization: [WOLF1 WOLF]; behavior / feeding: [WOLF1 EAT GRANDMA], [WOLF1 EAT LRRH]; behavior / communication: [WOLF1 TALK-WITH LRRH]. WOLF - generalization: ¬[x WOLF] ∨ [x MAMMAL]; specialization: [WOLF1 WOLF]; appearance / colouring: ¬[x WOLF] ∨ [x GREY]; appearance / shape: ...

Using predefined topical indicators for predicates, a classification algorithm automatically assigns to each asserted clause particular (concept, topic) pairs. For example, the predicate EAT indicates the topic 'feeding' with respect to its first argument and the topic 'consumption' with respect to its second argument. Hence, the clause [WOLF1 EAT GRANDMA]* is assigned the pairs (WOLF1, feeding) and (GRANDMA, consumption), and is indexed accordingly in the TAS's of the concepts WOLF1 and GRANDMA. Similarly, the predicate GREY indicates the topic 'colouring', so the clause ¬[x WOLF] ∨ [x GREY] is assigned the pair (WOLF, colouring), and the clause is indexed under the colouring topic in the TAS for the concept WOLF. Subsequent queries about the colouring of wolves would be able to directly access this clause. Queries about the appearance of wolves would also be able to get to the clause quickly, because colouring is a sub-topic of appearance.

* To improve readability, the syntax places the first argument of a literal before the predicate.

The special topic 'major implication' is used to classify fundamental properties of predicates that characterize their meaning.** For example, a major implication of the predicate IN might be that if A is in B, then A is smaller than B, and that B is a container or enclosure of some sort. Similarly, an 'exclusion' topic is used to classify clauses which explicitly define such a relationship (e.g., ¬[x CREATURE] ∨ ¬[x PLANT]).

** Major implications were linguistically motivated in Schubert et al. (1979). They are related to Schank's ACT-based inferences, as well as to "terminological facts" in systems like KRYPTON and KL-ONE.

For the main net and all of its subnets, there is also a hierarchical organization of the concepts within each net. Concepts are organized using a structure called a concept hierarchy (CH), which is essentially a type hierarchy for physical objects. It is used in two ways: (1) for quick associative access to groups of concepts with the same type (just as the TH provides quick access to clauses about the same topic); and (2) to guide a property inheritance mechanism in its search for generalizations or specializations of a given concept. Figure 3 presents a simplified CH for physical objects.

Figure 3. Concept hierarchy (CH): thing - living-thing (person: woman, man; creature: microbe, animal: bug, larger-animal) - inanimate object (natural: rock, mountain, lake; artifact: clothing, building).

To repeat the CH for every subnet is also a waste of resources, so a concept access skeleton (CAS) is maintained for each subnet. Just as a TAS is a minimal hierarchy based on the TH, a CAS is a minimal hierarchy based on the CH. Figure 4 presents a CAS which associatively organizes some of the constant concepts in the story of Little Red Riding Hood* (LRRH).

Figure 4. Concept access skeleton (CAS): thing - girl: [LRRH GIRL]; wolf: [WOLF1 WOLF]; cottage: [c COTTAGE]; hood: [d HOOD].

The classification algorithm automatically places clauses which are assigned a (concept, generalization) pair into a CAS whenever the concept is an instance.

III THE INFERENCE METHOD

As mentioned above, we were led to adopt clause form as the most convenient logical form for the purposes of automatic topical classification. It was therefore natural to choose resolution as our basic inference method.
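The topical classification described in Section II can be sketched in a few lines; the indicator table below is a hypothetical fragment, and real clauses are modal-logic formulae rather than the strings used here:

```python
# Hypothetical fragment of the predefined topical indicators: one topic
# per argument position of each predicate.
TOPIC_INDICATORS = {
    "EAT":  ["feeding", "consumption"],  # topic w.r.t. 1st, 2nd argument
    "GREY": ["colouring"],
}

def classify(predicate, args):
    """Return the (concept, topic) pairs under which a ground literal
    [arg1 PRED arg2 ...] would be indexed in the concepts' TAS's."""
    topics = TOPIC_INDICATORS.get(predicate, [])
    return [(arg, topic) for arg, topic in zip(args, topics)]

print(classify("EAT", ["WOLF1", "GRANDMA"]))
# [('WOLF1', 'feeding'), ('GRANDMA', 'consumption')]
print(classify("GREY", ["WOLF1"]))
# [('WOLF1', 'colouring')]
```

Indexing each asserted clause under the pairs returned this way is what lets later queries about, say, the colouring of wolves touch only the relevant clauses.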
Our deductive algorithm employs certain familiar strategies, such as set-of-support and preference for simple resolvents. What makes the algorithm unique, however, are the following three features: (1) the set of potential inference steps considered at any one time is severely limited by use of both the concept hierarchy and the topic hierarchy (as reflected in each concept's TAS); (2) there is an ongoing decision-making process which trades off inference steps against retrieval steps; and (3) resolution is "generalized", permitting use of special inference methods for specific domains.

* Most examples in this paper are loosely based on this story.

We will elaborate on the last point first, and then discuss (1) and (2) under the heading "Resolution control".

A. Generalized resolution and evaluation

Resolving two clauses is usually done by resolving on literals from each clause which have the same predicate but opposing signs. For example, [LRRH GIRL] resolves against ¬[LRRH GIRL]. Given the existence of a special inference method which quickly infers relationships among 'type' predicates, it is also possible to reduce long inference chains to a single resolution step. For example, [LRRH GIRL] directly resolves against ¬[LRRH CREATURE], without using the intermediary clauses ¬[x GIRL] ∨ [x CHILD], ..., ¬[x PERSON] ∨ [x CREATURE]. Or, given a special inference method for colour, it becomes possible to directly resolve [WOLF1 BROWN] against [WOLF1 GREY] (Papalaskaris & Schubert 1982, Schubert et al. 1983, Brachman et al. 1983, Stickel 1983, Vilain 1985). This method of "generalized" resolving can similarly be used for factoring and subsumption (Stickel 1985, Schubert et al. 1986).
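A minimal sketch of such a type-chain shortcut, assuming a hypothetical hierarchy fragment (the paper's special inference method is more general than this):

```python
# Hypothetical type chain: GIRL -> CHILD -> PERSON -> CREATURE.
SUPERTYPE = {"GIRL": "CHILD", "CHILD": "PERSON", "PERSON": "CREATURE"}

def is_subtype(t, super_t):
    """Walk the supertype chain; collapses a whole chain of clauses
    like ~[x GIRL] v [x CHILD] into a single test."""
    while t is not None:
        if t == super_t:
            return True
        t = SUPERTYPE.get(t)
    return False

def generalized_resolve(pos, neg):
    """pos is a positive type literal (term, Type); neg is a negated one.
    They clash directly -- yielding the empty clause -- when the terms
    match and the positive type is a subtype of the negated type."""
    return pos[0] == neg[0] and is_subtype(pos[1], neg[1])

print(generalized_resolve(("LRRH", "GIRL"), ("LRRH", "CREATURE")))  # True
print(generalized_resolve(("LRRH", "GIRL"), ("LRRH", "WOLF")))      # False
```

The single subtype test stands in for the chain of intermediary resolution steps that an ordinary prover would have to perform.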
A crucial advantage of our topical retrieval mechanism is that it allows candidates for generalized resolving to be found as efficiently as candidates for ordinary resolving, on the basis of their classification under the same topic (e.g., [WOLF1 GREY] and [WOLF1 BROWN] are both classified as colouring propositions for WOLF1).

Evaluation is another means by which the resolution process can be considerably shortened. Every clause from the original question, or generated during the proof, goes through an evaluation attempt, to try to achieve an immediate proof (if the clause is false), or to remove the clause from consideration (if it is true). If the clause cannot be evaluated, each of its literals is tried, to try to prove the whole clause true (if a literal is true), or to remove the literal from the clause (if it is false). The simplest evaluative method is to match a clause or literal against previously asserted clauses (the normal form ensures that the input form of a formula has no effect on the matching process).

Generalized resolution and evaluation can be used for any class of predicates for which there exists a special inference method. Currently, the inference algorithm uses special inference methods for types and colours; however, methods for time and part-of relationships have been developed as well (Schubert 1979, Papalaskaris & Schubert 1982, Schubert et al. 1983, Schubert et al. 1986, Taugher 1983).

B. Resolution control

A great many resolution control strategies have appeared in the literature, but none of them has been completely successful in containing the usual combinatorial explosion of generated clauses and the ensuing difficulty in finding the 'right' ones to resolve with.
Nevertheless, resolution proved to be well-suited to our purposes, for two reasons: (1) proofs required for natural language understanding and ordinary question-answering are generally short (at least when special 'shortcut' methods are available); and (2) the "needle-in-a-haystack" problem is solved in our system by the access organization we have described.

The examination of most resolution proofs that have gone astray soon reveals a large number of resolutions where the unification process made some substitution that did not seem to be semantically valid. For example, it is syntactically possible to resolve ¬[x WOLF] ∨ [x GREY] against ¬[LRRH GREY]. However, we intuitively realize that the first clause (knowledge about wolves) cannot really be applied to LRRH, and that this resolution is fruitless. To express it another way: the universal variable x is typed to represent WOLF, and should not be substituted by a concept which is of type GIRL.

Briefly, our algorithm avoids fruitless inferences by restricting its search for potential resolving candidates against a given clause to clauses connected to it by a path (of length 0 or more) in the concept hierarchy, and classified under the same topic. The confinement of resolution to paths in the concept hierarchy is comparable to approaches based on sortal logics (e.g., McSkimin & Minker 1979, Walther 1983). However, our method does not require explicit typing of predicates and 'sort' checks during unification. The topical confinement of resolution readily picks out clause pairs containing resolvable literals, either in the ordinary sense or in the generalized sense.

The inference algorithm maintains an agenda of potential actions. Each action is relevant to a single clause, and is either a possible resolution that can be performed with that clause, or a retrieval action which might lead to possible resolution actions.
Retrieval actions are based directly on the classification of a clause, and are specific to the same kind of (concept, topic) pairs that the classification procedure derives for asserted clauses. Six kinds of retrieval actions can appear on the agenda:

(1) clause ↔ (conc, topc, super) is the notation for the action which would retrieve all clauses stored at concept conc and its superconcepts, under topic topc, and form all potential resolutions between clause and the retrieved clauses, placing them on the agenda.
(2) clause ↔ (conc, topc, sub) is similar to (1), but uses the subconcepts of conc only, excluding instances.
(3) clause ↔ (conc, topc, inst) is also similar to (1), but uses instances of conc only.
(4) clause ↔ (conc, major-imp) denotes the action which would retrieve all major implications of concept conc, and form all potential resolutions between clause and the retrieved clauses.
(5) clause ↔ (conc, excl) is similar to (4), but uses exclusion propositions of concept conc.
(6) clause ↔ (conc, inst) denotes the action which would retrieve all clauses specifying instances of conc and form potential resolutions between clause and these clauses.

The CH is used to quickly determine the super- or subconcepts of a given concept; the CAS is used to quickly find instances of a given concept.

The agenda is ordered by the estimated cost of the actions, and the inference method always chooses to do the action with the least cost first. The cost of a possible resolution is high to the extent that the resolvent is expected to be complex (effectively implementing a 'least complex resolvent' preference strategy). The cost of a possible retrieval, clause ↔ (conc, ...), is high to the extent that clause is complex, and the expected number of clauses to be retrieved is high.

Each clause c which is to be considered for possible refutation is "loaded" into the network, as follows:

1. Simplify (evaluate) c, if possible. If c is true, discard it; if c is false, report a "disproof".
2. Classify c for insertion and, if not yet present, insert it (i.e., as if it were an asserted clause).
3. If c was classified twice w.r.t. the same (conc, topc) pair, try to factor it; if successful, load the factor(s).
4. Generate the following possible retrievals relevant to c, placing them on the agenda:
(a) if c was classified under (conc, topc), generate the retrieval c ↔ (conc, topc, super) and, if conc is not a constant, the additional retrievals c ↔ (conc, topc, sub) and c ↔ (conc, topc, inst);
(b) if c contains a positive predicate P, then generate the retrievals c ↔ (P, major-imp) and c ↔ (P, excl);
(c) if c contains a type predicate P with a variable argument and c was not classified under any (conc, topc) pair, generate c ↔ (P, inst).

The complete inference algorithm can now be described:

1. Load the clauses to be refuted. This might yield an immediate disproof, but more likely it will put a set of potential retrievals on the agenda.
2. Carry out the potential action with the least cost. If it was a retrieval, this results in a set of potential resolutions being placed on the agenda. If the action was a resolution with resolvent c, then load c. If c evaluates to false when it is loaded, or if c is the null clause, then report a "disproof".
3. If some predefined resource limit has been exceeded, then return "unknown", else repeat from Step 2.

The method also concurrently searches for a proof, using the dual of the original question. Note that a set-of-support strategy is used, as only clauses from the original set to be refuted, or one of their descendants, are ever considered for a resolution action.

IV EXAMPLES

1. To answer the question "Is there a creature (in the story of LRRH)?", the clause to be refuted for a "yes" answer is ¬[x CREATURE] (called 'c' for brevity). Loading this clause generates the retrieval c ↔ (CREATURE, inst). Using the CAS, the clauses [LRRH GIRL], [WOLF1 WOLF], ...
are retrieved, any of which gives an immediate null resolvent by generalized resolution.

2. To answer the question "Is the Wolf grey?", the clause to be refuted for a "yes" answer is ¬[WOLF1 GREY] (= c). Loading this clause generates the retrieval c ↔ (WOLF1, colouring, super). Using the CH to get from WOLF1 to the generic WOLF concept, the clause ¬[x WOLF] ∨ [x GREY] is retrieved. Resolving against the original clause yields ¬[WOLF1 WOLF], which immediately evaluates to false when it is loaded.

3. To answer the question "Are all creatures pink?", the clause to be refuted for a "no" answer is ¬[x CREATURE] ∨ [x PINK] (= c). Loading this clause generates the retrievals c ↔ (CREATURE, colouring, super) and c ↔ (CREATURE, colouring, sub). The sub retrieval yields ¬[x WOLF] ∨ [x GREY] (= c′). Using generalized resolution (for colours) yields ¬[x CREATURE] ∨ ¬[x WOLF]. Generalized factoring on this clause yields ¬[x WOLF] (= c″). A retrieval for this clause is c″ ↔ (WOLF, inst). The CAS is used to find the clause [WOLF1 WOLF], which resolves against c″ to complete the disproof.

4. To answer the question "Does the Wolf live in LRRH's basket?" (cf. Fahlman's Clyde-in-the-teacup problem), the clause to be refuted is [WOLF1 LIVE-IN BASKET1] (= c). One of the retrievals generated for this clause is c ↔ (LIVE-IN, major-imp). A major implication of LIVE-IN indicates that WOLF1 would have to be smaller than BASKET1 to live in it.* Further inference is then needed to establish that WOLF1 is not smaller than the basket, and therefore cannot live in it.

* Alternatively, a major implication of LIVE-IN could merely indicate that if x lives in y, then x is in y (at some time). The knowledge that x is smaller than y would then be retrieved from a major implication for IN.
The best way of establishing the relative sizes of WOLF1 and BASKET1 would be via a special inference method for relationships among physical objects, but the current implementation does not include such a method, and instead resorts to explicitly asserting these relationships in the knowledge base.

The above questions were chosen to illustrate technical points. More natural questions are also easily handled, such as "Did an animal eat someone?" and "Is there anything in LRRH's basket that she likes to eat?".

V DISCUSSION AND FUTURE WORK

An implementation of the inference method, written in Berkeley PASCAL, was able to answer a test set of 40 questions in about 15 seconds CPU time on a VAX 11/780, using a knowledge base of over 200 clauses (general and specific knowledge about the story of LRRH). Doubling the size of the knowledge base had no effect on the question-answering time. The successful implementation of the method vindicates the net organization developed earlier, showing that it provides quick selective access to the knowledge needed for simple question-answering. Furthermore, the organizational structure proved useful in guiding and constraining deduction steps. Proofs are confined to vertical paths through the concept hierarchy, and are topically focused, and as a result are very direct, avoiding "meaningless" deductions, regardless of the amount of knowledge stored. Thus, we have made significant progress towards solving the "symbol-mapping" problem.

Recent knowledge representation systems somewhat similar in aim to ours include KRYPTON (Brachman et al., 1983), KL-TWO (Vilain, 1985) and HORNE (Allen et al., 1984). Like ECOSYSTEM, these systems are intended to provide a domain-independent logical representation, and general and special inference methods (such as taxonomic methods) applicable to a variety of domains.
However, ECOSYSTEM's concept-centred, topically focused retrieval mechanism, and its use in guiding deduction, appear to be unique. Further, rather than providing alternative inference "tools", such as forward and backward chaining, we have tried to provide a single, efficient algorithm for deductive question-answering. Also, our overall philosophy has been to provide a perfectly general representation and inference mechanism, which we then seek to accelerate by special methods, as opposed to providing an initially restrictive representation and inference mechanism, to be subsequently extended by special inference methods.

Numerous extensions to ECOSYSTEM are planned. These include extensions to handle temporal information (the temporal system is nearly operational), equality, arithmetic, sets, modalities (including causation), generics (using the approach of Pelletier & Schubert, 1984), and special methods for "naive physics". Work on wh-question-answering and on the natural language front end is also under way (Schubert 1984, Schubert & Watanabe 1986).

REFERENCES

Allen, J. F., Giuliano, M., and Frisch, A. M. (1984). The HORNE Reasoning System, TR 126, Computer Science Department, University of Rochester, Rochester, NY.

Brachman, R. J., Fikes, R. E., and Levesque, H. J. (1983). Krypton: a functional approach to knowledge representation, Computer 16, 67-73.

Covington, A. R. (1980). Organization and Representation of Knowledge, M.Sc. Thesis, University of Alberta.

Covington, A. R., and Schubert, L. K. (1980). Organization of modally embedded propositions and of dependent concepts, Proc. of the 3rd National Conference of the CSCSI/SCEIO, Victoria, BC, May, 87-94.

Fahlman, S. (1975). A System for Representing and Using Real World Knowledge, AI Lab Memo 331, MIT, Cambridge, Massachusetts.

Lehnert, W. G., Dyer, M. G., Johnson, P. N., Yang, C. J., and Harley, S. (1983).
BORIS - an experiment in in-depth understanding of narratives, Artificial Intelligence 20, 15-62.

McDermott, D. (1975). Symbol-mapping: a technical problem in PLANNER-like systems, SIGART Newsletter 51, (April), p. 4.

McSkimin, J. R., and Minker, J. (1979). A predicate calculus based semantic network for deductive searching, in Associative Networks, N. V. Findler (ed.), Academic Press, 205-238.

Papalaskaris, M. A. (1982). Special Purpose Inference Methods, M.Sc. Thesis, University of Alberta.

Papalaskaris, M. A., and Schubert, L. K. (1982). Inference, incompatible predicates and colours, Proc. of the 4th National Conference of the CSCSI/SCEIO, Saskatoon, Sask., 97-102.

Pelletier, F. J., and Schubert, L. K. (1984). Two theories for computing the logical form of mass expressions, Proc. COLING-84, July, Stanford, California, 108-111.

Schubert, L. K. (1976). Extending the expressive power of semantic networks, Artificial Intelligence 7, 163-198.

Schubert, L. K. (1979). Problems with parts, IJCAI-79, Tokyo, Japan, August, 778-784.

Schubert, L. K., Goebel, R. G., and Cercone, N. J. (1979). The structure and organization of a semantic net for comprehension and inference, in Associative Networks, N. V. Findler (ed.), Academic Press, 179-203.

Schubert, L. K., Papalaskaris, M. A., and Taugher, J. (1983). Determining type, part, color, and time relationships, Computer (USA) 16, 10, 53-60.

Schubert, L. K. (1984). On parsing preferences, Proc. COLING-84, July, Stanford, California, 247-250.

Schubert, L. K., and Watanabe, L. (1986). What's in an answer: a theoretical perspective on deductive question-answering, Proc. of the 6th Canadian Conf. on AI (AI-86), May, Montreal, Canada, 71-77.

Schubert, L. K., Papalaskaris, M. A., and Taugher, J. (1986 - to appear). Accelerating deductive inference: special methods for taxonomies, colours and times, in Knowledge Representation, Cercone, N., and McCalla, G. (eds.), Springer-Verlag, New York.

Stickel, M. E. (1983).
Theory resolution: building in nonequational theories, Proc. AAAI-83, Washington, D.C., August, 391-397.

Stickel, M. E. (1985). Automated deduction by theory resolution, Proc. IJCAI-85, Los Angeles, California, August, 1181-1186.

Taugher, J. E. (1983). An Efficient Representation for Time Information, M.Sc. Thesis, University of Alberta.

Vilain, M. (1985). The restricted language architecture of a hybrid representation system, Proc. IJCAI-85, Los Angeles, California, August, 547-551.

Walther, C. (1983). A many-sorted calculus based on resolution and paramodulation, Proc. IJCAI-83, Karlsruhe, West Germany, August, 882-891.
ARE THERE PREFERENCE TRADE-OFFS IN ATTACHMENT DECISIONS?

Lenhart K. Schubert
Department of Computing Science
University of Alberta, Edmonton

Abstract. The paper argues for an affirmative answer to the question, against the view that correct attachment decisions can be made by a serial process that considers alternatives in some order and accepts the first "satisfactory" alternative. The pitfall in serial strategies is that they are apt to finalize their choice while "the best is yet to come".

1. Background

Given the increasingly comprehensive competence frameworks for grammar developed within linguistics in recent years, computational linguists have been able to formulate increasingly specific performance theories for human (and machine) parsing. In particular, there has been a growing interest in the theory of lexical disambiguation and phrase attachment (e.g., see Frazier & Fodor 1978, Wanner 1980, Marcus 1980, Ford et al. 1981, Shieber 1983, Hirst 1984, Schubert 1984, Wilks et al. 1985). These studies are motivated in part by an interest in psycholinguistics, and in part by a desire to construct practical parsers which emulate human choice behaviour, producing only "preferred" analyses of sentences rather than all possible analyses. As well, such studies feed back into the grammatical frameworks within which they are conceived, confirming or disconfirming those frameworks to the extent that they make it easy or hard to embed convincing attachment theories within them.

The point of departure for most of the studies has been Kimball's principles, especially Right Association (RA) and Minimal Attachment (MA) (Kimball 1973, Frazier & Fodor 1978). Both principles are purely syntactic. RA states that a newly postulated phrase is attached as low in the tree structure to its left as possible.
This explains, for example, why the prepositional phrase (PP) for Mary in

(1) John bought the book which I had selected for Mary

is understood as modifying selected, rather than bought. MA (as strengthened by Frazier & Fodor, 1978) states that a phrase is to be attached into the tree structure to its left using the smallest possible number of additional nonterminal nodes. Thus in

(2) John carried the groceries for Mary

the PP for Mary is attached to the VP headed by carried rather than to the NP the groceries, since (it is claimed) VP's can accommodate a PP directly via rule VP -> V NP PP, while NP's can accommodate a postmodifying PP only via a rule that creates an additional NP node, NP -> NP PP. This accounts for the fact that most readers interpret the PP in (2) as modifying carried, even though RA would appear to favour attachment to groceries.

Another principle which has played an important role in recent discussions of attachment priorities is Lexical Preference (LP) (Ford, Bresnan & Kaplan 1981). In essence, LP says that lexical verbs and other lexical items may prefer one pattern of complementation to another. For example, the verb want is said to prefer the pattern of complementation V NP to the longer pattern V NP PP, while the verb position has the opposite preference, and this accounts for the contrast between

(3) Mary wants the dress on that rack

and

(4) Mary positioned the dress on that rack

(Note that in (3) LP must be assumed to override MA to account for preferred attachment of the PP to the dress.)

There is an older, more "semantic" version of LP due to Wilks (1975a). According to this version, particular senses of lexical verbs (or other items) prefer certain complements to others, not because of the syntactic features of those complements, but because of their semantic categories; i.e., the preferences correspond to selectional restrictions.
(Another class of semantic preferences is associated with certain words, chiefly prepositions - see below.) This notion was at the heart of Wilks' theory of Preference Semantics, according to which sentences are interpreted in such a way as to maximize the density of preferences satisfied (Wilks 1975a). Wilks' ideas did not find their way into the above theories of attachment, since those theories were concerned for the most part with attachment preferences in "semantically neutral" contexts.

2. Preference Trade-offs

In Schubert (1984) (henceforth Sch84) syntactic theories of attachment were criticised on several grounds: (i) They often depend on ill-specified or implausible principles of parser operation. (ii) They often depend on questionable assumptions about syntax. (iii) They lack provision for integration with semantic/pragmatic preference principles. (iv) They admit counterexamples even when (i)-(iii) are discounted. An alternative approach was sketched, involving numerically weighted preferences and allowing trade-offs among syntactic and semantic/pragmatic preferences. Syntactic preferences were to be captured by the following two principles.

(a) A graded distance effect: immediate constituents of a phrase prefer to be close to the head lexeme of the phrase. The effect is mediated by an "expectation potential" which decreases with distance from the head lexeme and increases with constituent size; as a result, larger constituents admit larger displacements from the head lexeme.[1]

[1] The exact form of the distance effect is still somewhat uncertain.

From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved.

(b) A rule habituation effect: there is an inhibitory potential or "cost" associated with each phrase structure rule (including lexical rules), leading to a preference for low-cost rules over high-cost rules (e.g., ADJ -> fat is preferred to N -> fat).

An additional effect suggested in the verbal presentation of the paper was the following.

(c) Inhibition by errors: "mild errors" such as concord errors contribute inhibitory potentials to the phrases in which they occur.

Note that (a), although nominally a distance effect, also accounts for RA. Note further that (b), though primarily intended to account for (syntactic) lexical preferences, can also lead to MA-tendencies, if all rules have non-zero cost. (However, I regard it as an open question whether there are any MA-tendencies to be accounted for.)

Like syntactic effects, semantic and pragmatic effects are assumed to influence attachment choices through potentials contributed to phrases, called semantic potentials. The following are some possible principles governing semantic potentials (somewhat elaborating the very sketchy proposals in Sch84).

(d) Salience in context: the potential of a word sense or phrase is high to the extent that the denotation of that word sense or phrase is salient in the current context.

(e) Familiarity of logical-form pattern: the potential of a phrase is high to the extent that its logical translation instantiates a familiar pattern of function-argument combination.

(f) Conformity with scripts/frames: the potential of a phrase is high to the extent that it describes a familiar kind of object or situation (such as might be specified in a script or frame).

Principle (d) is intended to allow for "semantic priming by spreading activation", which is the postulated basis for the contrast between the following sentences:

(5) The Hollywood producer married the star

(6) The astronomer married the star

In addition, the principle permits implementation of the idea that the parser prefers phrases interpretable as references to previously introduced entities to phrases that introduce new entities into the discourse context (cf. Crain & Steedman 1981).
Examples of the sorts of patterns subsumed under (e) might be

KICK(THE-BUCKET)
predicate-of-locomotion(temporal-term)

The first pattern is assumed to match such predicate-argument combinations as

kick'(<the' bucket'>)
kick'(<the' (old'(bucket'))>),

while the second pattern is assumed to match such predicate-argument combinations as

flies'(KIND(time'))
creep by'(<the' minutes'>).

The latter two formulas indicate in an approximate way the logical translations of the sentences "Time flies" and "The minutes creep by", respectively (see Schubert & Pelletier 1982, Pelletier & Schubert 1984). The claim implicit in (e) is that such idiomatic and quasi-idiomatic patterns induce preferences for the phrases whose translations they match. (This, I would claim, is part of the reason why "Time flies like an arrow" is not normally perceived as ambiguous.)

Principle (f) accounts for the contrasting PP-attachment preferences in

(7) John saw the bird with the yellow feathers

and

(8) John saw the bird with the binoculars

The idea is that "feathers" matches a slot in a "bird" frame, so that "bird with feathers" is recognized as a familiar combination. Similarly "binoculars" matches a viewing-instrument slot in a "viewing" frame activated by "see", so that "seeing with binoculars" is recognized as a familiar combination. More subtle frames, capturing stereotyped social situations, would be needed to account for the PP-attachment preferences in

(9) John met the girl he married at a dance

and

(10) John married the girl he met at a dance.

The "potentials" contributed by (a)-(f) are cumulative, i.e., they are transmitted upward in phrase structure trees, adding to (or, in the case of inhibitory potentials, subtracting from) the overall potential of superordinate phrasal nodes. The parser is assumed to operate left-to-right, maintaining a set of (overlapping and in general incomplete) phrase structure trees, each of which completely covers the material seen so far.
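As a toy illustration of how potentials from principles (a)-(f) might combine, the sketch below scores two attachment sites for the PP in sentence (8). The functional form of the expectation potential and all numeric weights are invented for this sketch; they are not taken from the paper.

```python
# Hedged sketch of the trade-off idea: each candidate attachment site gets
# a cumulative potential, and the highest total wins. Weights are invented.

def expectation_potential(distance, size=1):
    # Principle (a): decreases with distance from the head lexeme,
    # increases with constituent size (form chosen for illustration only).
    return size / (1.0 + distance)

def total_potential(distance, semantic_bonus):
    return expectation_potential(distance) + semantic_bonus

# Sentence (8): "John saw the bird with the binoculars".
# "bird" is the nearer head, but "seeing with binoculars" matches a slot in
# a "viewing" frame (principle (f)), modelled here as a bonus of 1.0.
sites = {
    "saw":  total_potential(distance=3, semantic_bonus=1.0),
    "bird": total_potential(distance=1, semantic_bonus=0.0),
}
print(max(sites, key=sites.get))  # "saw": the frame match outweighs distance
```

With these invented numbers the frame-match bonus for "saw" (0.25 + 1.0) exceeds the proximity advantage of "bird" (0.5), which is the qualitative behaviour the trade-off account requires for (8).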
Whenever there are three complete, ambiguous phrases within these trees, the tree with the lowest global potential is discarded. (A phrase is ambiguous if it has more than one parent within the set of overlapping trees.) The three-phrase limit, as in Marcus' PARSIFAL (Marcus 1980), accounts for garden-path phenomena.

How do Wilks' semantically based "lexical preferences" fit into this scheme? I think they correspond roughly to preferences of category (e), i.e., to preferred patterns of function-argument combination. However, as was pointed out in Sch84, preferred function-argument patterns may violate selectional restrictions. For example, the predication flies'(KIND(time')) generates a positive type-(e) potential even though "flies" (and similarly "creeps by", etc.) presumably selects for a moveable physical object as argument. Apart from the resemblance between type-(e) potentials and Wilks' semantic preferences, the preference for structures with high global potential is not unlike the preference for "semantically dense" structures in Preference Semantics. The more recent theory of Wilks et al. (1985) that I will focus on, however, uses a serial decision making process rather than parallel evaluation of alternatives.

3. The Puzzling Case of the Partisan Informants

Wilks et al. (1985) (henceforth WHF85) concur with the criticisms of the syntactic preference principles in Sch84, but go on to reject both the preference trade-off approach and a rather different synthesis by Hirst (1984). They describe what they believe to be a simpler and more powerful "semantics-based" approach. Their proposal merits close examination, in view of the claims made for it. I will first discuss their criticisms of the preference trade-off approach and then apply the critical perspective of Sch84 to their own proposal (in Section 4). Among the sentences in Sch84 were the following. (The fractions shown after them will be explained shortly.)

(11) Mary saw the man who had lived with her while on maternity leave 10/28

(12) John met the tall, slim, auburn-haired girl that he married at a dance 17/29

(13) John was named after his twin sister 20/29

These sentences were intended to illustrate that syntactic preferences can cause a semantically less coherent alternative to prevail over a more coherent one. My informants did indeed find the sentences confusing - at least momentarily. The authors of WHF85, however, insist that their informants were not confused by (11) or (12), and that the ambiguity they perceived in (13) has a non-syntactic explanation. This, they believe, clears the way for their putatively non-syntactic theory.

These opposing claims are puzzling. Whose informants are to be trusted? The puzzle is not hard to unravel, however. By "prevail over a more coherent alternative", I did not necessarily mean "prevail irrevocably", but only long enough to cause a momentary sense of anomaly or confusion. Often, the informants experienced a double-take, something that might be termed a "huh? oh!" experience. This suggests that for these sentences normal parser operation leads to an anomalous interpretation, prompting re-analysis. To be sure of the data, I have re-tested (11)-(13) more formally and extensively, also including the sentences

(14) Mary moved in with her uncle in New York who had fallen ill while on maternity leave 25/29

(15) John said that he will definitely leave yesterday 20/29

The fractions alongside (11)-(15) show the proportion of subjects who reported initially arriving at an anomalous reading. In most cases re-analysis led them to a sensible analysis, but some subjects did not recover; e.g., 6 in the case of (12), and 8 in the case of (15). (A sample test form is reproduced in the Appendix.) The results attest to the reality of the phenomenon at issue.[2]

The informants used in WHF85 presumably were asked only about their eventual interpretation of (11)-(13).
That they were able to recover from any initial confusion they may have experienced is entirely consistent with my data, at least if their number was small. One point in the discussion of (11)-(13) in WHF85 calls for comment, however, namely the alleged role of "information vacuity" in (12) and (13). The authors say that their informants resolutely attach the at-PP in (12) to met; they still do so when dance is replaced by wedding, even though, the authors assert, this requires them to discount the information vacuity of married at a wedding (or Grice's maxim of quantity). But an informant who rejects a vacuous combination is not discounting Grice's maxim; on the contrary, he is assuming that the speaker conformed with it! The authors' assertion that in (13), information vacuity tells against the interpretation of named after ... as named later than ... is equally groundless. The proposition that John was given his name later than his twin sister was given hers is informative, and that is precisely why I chose (13) instead of John was named after his father (from Wilks 1973). The point is that the perfectly sensible named later than ... reading is blocked (at least temporarily) by a powerful lexical preference (which I am inclined to regard as syntactic, though this is perhaps a matter of terminology).

Wilks et al. may wish to argue that the "huh? oh!" experience provides no evidence for separate phases of parser operation, one normal and the other involving re-analysis.[3] It may simply indicate that in the normal course of parser operation, rejection of a semantically anomalous combination registers consciously. However, one would then expect at least one of (7), (8) to elicit the "huh? oh!" experience.

[2] Similar examples have often been discussed in the literature. For example, the reality of the distance effect illustrated by (12) is already rather well documented (see Frazier & Fodor 1979, Ford et al. 1981).
Experimentally, it turned out that 1/37 subjects (3%) reported such an experience for (7) and 6/39 (15%) for (8). These fractions are rather low compared with those for (11)-(15), tending to disconfirm such an explanation.

The evidence, therefore, seems to favour my point about the potency of syntactic effects in certain instances. However, I would emphasize that the case for preference trade-offs certainly does not hinge on this point. I will now show that the serial, semantics-based strategies in WHF85 (which actually contain more syntax than the authors imply) suffer from some of the shortcomings that the trade-off strategy was designed to remedy.

4. Rules A and B

The strategies in WHF85 are claimed to achieve wide coverage without "syntactic rules or complex syntactico-semantic weighting". The first strategy is the following.

Rule A: Moving leftwards from the right hand end of a sentence, attach (word or phrase) X to the first entity to the left of X that has a preference that X satisfies. Assume also a pushdown stack for inserting such entities as X into until they satisfy some preference. Assume also some distance limit (to be empirically determined) and a default rule such that, if any X satisfies no preferences, it is attached locally, i.e., immediately to its left.

"Preferences" here mean verb and noun complement preferences, such as that want prefers a physical object as its object case and a human recipient as its recipient case, and that ticket prefers a place as its direction case (as in ticket to London). Rule A is claimed to be intentionally naive, being stated only to demonstrate "the wide coverage of the data by a single semantics-based rule".
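Read literally, rule A's core loop can be rendered as below. This is my own sketch, not code from WHF85: the pushdown stack is omitted, and the phrase inventory, categories, and preference sets are invented for illustration.

```python
# Literal-minded sketch of rule A for a single unattached phrase X.

def attach(phrases, x_index, distance_limit=4):
    """Scan leftwards from X = phrases[x_index] for the first head whose
    preferences X's category satisfies; past the distance limit, fall back
    on the default rule and attach locally (immediately to the left)."""
    x_cat = phrases[x_index]["cat"]
    for dist, i in enumerate(range(x_index - 1, -1, -1), start=1):
        if dist > distance_limit:
            break
        if x_cat in phrases[i]["prefers"]:
            return phrases[i]["name"]
    return phrases[x_index - 1]["name"]  # default rule: attach locally

# "Joe lost the ticket to Paris": ticket prefers a place (direction case).
phrases = [
    {"name": "lost",     "cat": "verb",  "prefers": {"physobj"}},
    {"name": "ticket",   "cat": "np",    "prefers": {"place"}},
    {"name": "to Paris", "cat": "place", "prefers": set()},
]
print(attach(phrases, 2))  # "ticket"
```

Scanning right to left, the first head with a satisfied preference is ticket, so the PP attaches there; the same fixed scan is what later makes the rule misfire on the b-variant of this pair.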
The first and most obvious observation about rule A is that while it is a single rule in the sense that any algorithm is a single rule, it makes tacit use of three separate principles: (i) RA, other things being equal; (ii) N/V selectional preferences; (iii) a distance limit for attachment. I can see no justification for the claim that this method dispenses with syntactic preferences - (i) and (iii) are patently syntactic - or that it achieves greater coverage with simpler means than, say, (a) + (e) as stated earlier (or for that matter, RA + MA + LP, despite the difficulties enumerated in Sch84). My (a) covers (i) and (iii) (and in addition allows for the size of the constituent being attached), and (e) is comparable to (ii). Note that (b), (c), (d), (f) are concerned with phenomena (such as lexical disambiguation) not covered by rule A.

Moreover, the method is subject to some of the same criticisms as the methods discussed in Sch84. It depends on ill-specified principles of parser operation, lacks provision for integration with preferences determined by context and world knowledge, and is susceptible to classes of counterexamples even within the limits of its intended coverage.

Concerning the first point, rule A seemingly relies on a preprocessor that performs lexical disambiguation and packages a sentence into a sequence of disjoint phrases, leaving only certain phrase attachment decisions to be made. But how plausible is it that lexical disambiguation can be decoupled from phrasal disambiguation in general? One would expect phrasal combinations which are made possible by particular lexical choices to influence those very choices.

[3] That there is a separate recovery mechanism is widely conjectured; e.g., Wilks (1975b: 132), Milne (1982).
One could conceivably feed all possible phrase sequences, generated by the various combinations of lexical choices, into rule A and select the best result; but it is an open question whether the resulting lexical and phrasal choices will match those of people, and whether the number of alternative phrase sequences that need to be parsed by rule A will be moderate (keeping in mind that before phrases are combined, one cannot in general tell whether a phrase sequence is grammatically possible).

Concerning the second point, it is not hard to make up contexts in which the PP associates with the noun in (2) or even in (8). Rule A is simply wrong in such cases, as is any rule that describes particular parser choices without regard for context. The advantage of the trade-off theory is that it lets us have our say about syntactic, lexical, or semantic preferences once and for all; allowing for context is a matter of adding something to the theory, not revising it. Much the same can be said about the role of world knowledge. The following are some sentence pairs in which shifts in attachment preferences are induced by subtle kinds of world knowledge.

(16) a. The women discussed the children in the kitchen 23/29
b. The women discussed the dogs on the beach W29

(17) a. John broke the vase in the kitchen 15/29
b. John broke the vase in the corner 7/29

(18) a. Mary talked with a man on her front porch 26/29
b. Mary talked with a man on a park bench 13/29

As indicated by the fractions, the PP is more likely to be attached to the verb in the a-sentences than in the b-sentences. It is hard to see how a serial algorithm like rule A could be modified to allow for such effects, especially without recourse to weights of some kind.
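The weighted mechanism gestured at here can be made concrete in a few lines. The split into a "proximity" component and a "world knowledge" component, and all of the numbers, are invented for this sketch; only the qualitative outcome for pair (16) is taken from the text.

```python
# Sketch of how weighted competition could accommodate pairs like (16)-(18):
# world knowledge contributes a potential to one attachment site or another,
# added to a proximity preference, and the highest total wins.

def winner(proximity, world_knowledge):
    sites = set(proximity) | set(world_knowledge)
    scores = {s: proximity.get(s, 0.0) + world_knowledge.get(s, 0.0)
              for s in sites}
    return max(scores, key=scores.get)

# (16)a: "discussing children in the kitchen" is a familiar domestic scene,
# boosting verb attachment past the noun's proximity advantage.
print(winner({"children": 0.5}, {"discussed": 0.8}))  # discussed
# (16)b: "dogs on the beach" is itself the familiar combination.
print(winner({"dogs": 0.5}, {"dogs": 0.4}))           # dogs
```

Accommodating a new piece of world knowledge is then just a change of weights, not a revision of the attachment rule itself, which is the point being made against serial algorithms.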
Third, concerning the existence of classes of counterexamples, rule A is acknowledged in WHF85 to be ill-equipped for recognizing standard V/N + PP patterns, because selectional preferences do not in themselves indicate what prepositions may be used to introduce complements. This is remedied in rule B, but at a cost in complexity. More seriously, rule A fails for a-b pairs of the following sort:

(19) a. Joe lost the ticket to Paris
b. Joe mailed the tickets to London

(20) a. Mary hires men who have worked on farms as cowboys
b. Mary describes men who have worked on farms as cowboys

In order to account for (19)a, WHF85 posit a locative case for ticket as already mentioned. But then, given the right-to-left strategy of rule A, the PP will be attached to tickets in (19)b as well. However, experiments show that readers of (19)b almost always attach the PP to the verb (28/29), in most cases without awareness of the alternative (24/28). Similarly in (20)a, to explain the tendency to attach the final PP to worked (22/29), we need to assume a corresponding preference; but this assumption leads rule A to predict attachment to worked in (20)b as well, whereas experimentally most readers attach to describes (18/29). Thus pairs like (19) and (20) show that rule A is apt to make premature attachment decisions while "the best is yet to come". Such pairs are unproblematic for the trade-off theory, since in that theory different verbs or nouns are allowed to "compete" in parallel for a postmodifier.

Let us now turn to the more subtle rule B. One refinement is that lexical entries for verbs and nouns list the prepositions which may introduce their complements. As well, lexical entries for prepositions list the patterns of verb or noun postmodification that these prepositions prefer to participate in. For example, one of the patterns (called preplates) for on is

[*do-dynamic, loc-static, point, on4]

which is satisfied by a phrase like position on the rack.
Rule B is intended only for attachment of PPs immediately following an object NP, and works roughly as follows.

1. Try to attach the PP as in rule A, minus the default rule and distance effect. If this fails move upward to the next sentential level and restart.

2. (Still no attachment at top sentential level) Attempt PP attachment using preferences of the preposition (preplates), starting at the main verb of the sentence and working rightward.

3. (Default) Try to attach the PP to the verb using "relaxed" versions of the preplates. Similarly try to attach to the object NP.

For example, step 2 is decisive in the sentences

(21) a. John stabbed the girl in the park
b. John loved the girl in the park

where, according to WHF85, stabbed in the park satisfies a preplate of in, allowing attachment to the verb in (21)a. In b, loved in the park fails to satisfy any preplate of in, so that attachment to girl is tried next, and this does satisfy a preplate of in.

Does rule B escape the criticisms of rule A? Evidently not. The claim that it dispenses with syntactic information is still unwarranted; like A it has proceduralized the syntactic preferences, not eliminated them (any more than Shieber's Shifting Preference has done so, for example). Essentially the implicit principles are

(i) "Strong" (N/V-based) and "weak" (P-based) semantic preferences

(ii) RA (or low attachment) of the unattached PP to constituents with a "strong" preference. (This takes precedence over (iii) when both apply.)

(iii) High attachment to constituents with a "weak" preference for the unattached PP (cf. Ford et al.'s Invoked Attachment)

Step 1 of rule B corresponds to (ii), and steps 2 and 3 to (iii). (i) is a semantic principle but (ii) and (iii) are syntactic (once semantic preferences have been assigned in accordance with (i)). As with rule A, the exact role of rule B within the parser remains unclear, particularly the interaction with lexical disambiguation.
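Steps 1-3 admit a rough procedural rendering. This is a sketch of my reading of rule B, not WHF85's code: the sentential-level climbing and the "relaxed" preplates of step 3 are collapsed, and the categories and preplate entries are invented stand-ins.

```python
# Rough sketch of rule B for a verb + object-NP + PP sentence like (21).

def rule_b(verb, obj, prep, pp_cat, strong_prefs, preplates):
    # Step 1: rule-A-style "strong" N/V preferences, rightmost head first.
    for head in (obj, verb):
        if pp_cat in strong_prefs.get(head["name"], set()):
            return head["name"]
    # Step 2: the preposition's own preplates, starting at the main verb
    # and working rightward.
    for head in (verb, obj):
        if (head["cat"], pp_cat) in preplates.get(prep, set()):
            return head["name"]
    # Step 3 (default): "relaxed" preplates, collapsed here to verb attachment.
    return verb["name"]

girl = {"name": "girl", "cat": "person"}
# Assumed preplates of "in": a dynamic action at a place, a person at a place.
preplates = {"in": {("do-dynamic", "place"), ("person", "place")}}

# (21)a: "stabbed in the park" satisfies a preplate of "in" at step 2.
print(rule_b({"name": "stabbed", "cat": "do-dynamic"},
             girl, "in", "place", {}, preplates))   # stabbed
# (21)b: "loved in the park" fails, so attachment falls through to "girl".
print(rule_b({"name": "loved", "cat": "state"},
             girl, "in", "place", {}, preplates))   # girl
```

Even in this toy form, the serial structure is visible: each step commits as soon as one test succeeds, with no way for a later, better candidate to overturn the choice.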
The rule is still incorrect when context or world knowledge come into play, and it is still susceptible to the kinds of counterexamples noted for rule A. Further, it should be noted that the intended coverage of rule B is much more limited than the intended coverage of the preference trade-off account, making complexity comparisons somewhat irrelevant. Some of the phenomena not covered by rule B are attachment of PPs following a sequence of VPs, attachment of adverbs, participles, infinitives, relative and subordinate clauses, noun premodification, lexical ambiguity, garden path phenomena, the effect of concord errors, and distance effects.

I think it quite unlikely that a serial approach like rule B can be expanded so that, functioning as part of a parser, it will account correctly for all such phenomena, and in addition allow for the effects of context and world knowledge. The great advantage of the trade-off approach is that it allows various preference principles to be formulated more or less independently, adding their effect through potentials to a parser whose computational structure is fixed. Moreover, the ideas contained in the original version of Preference Semantics have a rather natural place within such a scheme.

Finally, an ever-present phenomenon in sentence comprehension tests such as those I have cited is extensive individual variation. This seems rather easy to account for within a model based on competing preferences of various strengths, and hard to explain in a serial model whose behaviour can be modified only by adding or deleting patterns to which the parser is sensitive. Is it not more natural to assume that there are individual differences in the degree of sensitivity to various syntactico-semantic patterns?

Acknowledgements

I am grateful to Jeff Pelletier of the Philosophy and Computing Science Departments for conducting the experiments reported herein.
I have also benefited from discussions with him and other members of the Logical Grammar Study Group, especially Matthew Dryer, who suggested some of the example sentences. The research was supported by NSERC Operating Grant A8818.

References

Crain, S. & Steedman, M. (1981). The use of context by the Psychological Parser. Paper presented at the Symposium on Modelling Human Parsing Strategies, Center for Cognitive Science, Univ. of Texas, Austin.

Ford, M., Bresnan, J. & Kaplan, R. (1981). A competence-based theory of syntactic closure. In Bresnan, J. (ed.), The Mental Representation of Grammatical Relations, MIT Press, Cambridge, MA.

Frazier, L. & Fodor, J. (1979). The Sausage Machine: a new two-stage parsing model. Cognition 6, 291-325.

Hirst, G. (1983). Semantic interpretation against ambiguity. Tech. Rep. CS-83-25, Dept. of Computer Science, Brown Univ., RI.

Hirst, G. (1984). A semantic process for syntactic disambiguation. Proc. AAAI-84, Austin, TX, 148-152.

Kimball, J. (1973). Seven principles of surface structure parsing in natural language. Cognition 2, 15-47.

Marcus, M. (1980). A Theory of Syntactic Recognition for Natural Language, MIT Press, Cambridge, MA.

Milne, R. (1982). Predicting garden path sentences. Cognitive Science 6, 349-373.

Pelletier, F. J. & Schubert, L. K. (1984). Two theories for computing the logical form of mass expressions. Proc. COLING-84, July 2-6, Stanford, CA, 108-111.

Schubert, L. K. (1984). On parsing preferences. Proc. COLING-84, Stanford, CA, 247-250.

Schubert, L. K. & Pelletier, F. J. (1982). From English to logic: context-free computation of 'conventional' logical translations. Am. J. of Computational Linguistics 8, 26-44.

Shieber, S. M. (1983). Sentence disambiguation by a shift-reduce parsing technique. Proc. IJCAI-83, Aug. 8-12, Karlsruhe, W. Germany, 699-703. Also in Proc. of the 21st Ann. Meet. of the Assoc. for Computational Linguistics, June 15-17, Cambridge, MA, 113-118.

Wanner, E. (1980).
The ATN and the sausage machine: which one is baloney? Cognition 8, 209-225.
Wilks, Y. A. (1973). Understanding without proofs. Proc. IJCAI-73, Stanford, CA, 270-277.
Wilks, Y. A. (1975a). A preferential pattern-seeking semantics for natural language inference. Artificial Intelligence 6, 53-74.
Wilks, Y. A. (1975b). Methodology in AI and natural language understanding. Proc. Workshop on Theoretical Issues in Natural Language Processing (TINLAP 1975), Cambridge, MA, 130-3.
Wilks, Y. A., Huang, X., & Fass, D. (1985). Syntax, preference, and right attachment. Proc. IJCAI-85, Aug. 18-23, Los Angeles, CA, 779-784.
APPENDIX: Example of Linguistic Quiz
Your cooperation is requested in completing the following simple psycholinguistic quiz. This is an anonymous quiz - DO NOT PUT YOUR NAME ON IT. The purpose of the quiz is to gain some insights into the processes of syntactic analysis and interpretation of sentences. There is a test sentence, which is followed by a question. PLEASE READ THE SENTENCE AT NORMAL SPEED, AND THEN IMMEDIATELY ANSWER THE QUESTION THAT FOLLOWS IT AS HONESTLY AS YOU CAN.
Test sentence: John saw the bird with the yellow feathers.
Question: Which of the following statements best describes your impressions upon reading this sentence? (Indicate your choice with an 'X'):
(a) You took "the yellow feathers" as referring to a viewing instrument used by John, rather than as part of the bird (however odd this may have seemed).
(b) You eventually took "the yellow feathers" as referring to part of the bird; however, you initially took it as referring to a viewing instrument used by John, and were prompted to reanalyze the sentence because of the oddity of that interpretation.
(c) You became conscious that "the yellow feathers" could in principle refer to either part of the bird or a viewing instrument used by John, and you chose the former (more plausible) interpretation without any sense of correcting an initial misunderstanding.
(d) You took "the yellow feathers" as referring to part of the bird without becoming conscious of another interpretation.
(e) Other (explain):
NATURAL LANGUAGE / 605
COMPREHENSION-DRIVEN GENERATION OF META-TECHNICAL UTTERANCES IN MATH TUTORING*
Ingrid Zukerman and Judea Pearl
Cognitive Systems Laboratory, Computer Science Department, UCLA, Los Angeles, CA 90024
ABSTRACT
A technical discussion often contains conversational expressions like "however," "as I have stated before," "next," etc. These expressions, denoted Meta-technical Utterances (MTUs), carry important information which the listener uses to speed up the comprehension process. In this research we model the meaning of MTUs in terms of their anticipated effect on the listener's comprehension, and use these predictions to select MTUs and weave them into a computer generated discourse. This paradigm was implemented in a system called FIGMENT, which generates commentaries on the solution of algebraic equations.
I INTRODUCTION
When generating tutorial text, a teacher wishes to present the information in the most accessible manner. Clearly, a necessary precondition is that the teacher transmit the appropriate information items. However, we also notice the presence of expressions like "however," "as I have stated before," "next," "generally speaking," etc., which are not part of the subject matter. These expressions, denoted Meta-Technical Utterances (MTUs), carry important information which assists the listener in the assimilation of the transferred knowledge. Previous research on the semantics of a subset of these utterances (Farnes 1973, Winter 1968, Reichman 1984 and Hoey 1979) indicates that the presence of an MTU can signpost what kind of information is to be presented in the forthcoming sentences. Farnes further claims that "the identification and use by readers of such cues, greatly aids comprehension," and Hoey points out that problems of comprehension have been shown to arise due to faulty or missing signaling.
*This work was supported in part by National Science Foundation Grants IST 81-19045 and DCR 83-13875.
The text generated by natural language generation systems designed by Davey (1979), Mann and Moore (1980), McKeown (1982), Swartout (1982) and Kukich (1984) contains mostly MTUs like "however," "next" and "therefore," which directly reflect the speaker's organization of the subject matter, i.e., they represent the relationship between two or more items of knowledge. For example, if item B violates the expectations established by item A, this relationship is expressed by the utterance: "A, however B." This type of MTU shall be denoted Knowledge-Organization MTUs.
These, however, account only for a fraction of the MTUs found in natural discourse. Teachers often use MTUs such as "as I have stated before," "let us try another approach," "in other words" and "this equation is somewhat complicated." These utterances are more intimately connected with the listener's learning process than with the organization of the subject matter. This paper describes a generative model of the meaning of these MTUs, based on simulating important aspects of the comprehension process.
In the following section we shall present a functional classification of both types of MTUs. Then we shall describe the system that generates them.
II FUNCTIONAL TAXONOMY OF MTUS
The classification of Meta-Technical Utterances presented in this section is based on their function, as seen by the tutor, in transmitting the subject matter to the student. In our taxonomy we recognize three main functions of MTUs: (1) Knowledge Organization, (2) Knowledge Acquisition and (3) Affect Maintenance.
A. Knowledge-Organization MTUs
The information residing in a tutor's mind can be visualized as a network whose nodes contain individual information items, and whose links contain the relations between the nodes. For example: NODE1 contains the purpose of NODE2, or NODE3 is an alternative to NODE1.
These relations directly reflect the tutor's knowledge about equations and their solution, and they roughly correspond to Halliday and Hasan's external/internal category (Halliday & Hasan 1976) and Longacre's basic heading (Longacre 1976).
From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved.
The Knowledge-Organization function consists of transmitting these relations. The following headings rely heavily on the taxonomy performed by Halliday and Hasan, however they have been adjusted to support text-generation.
Additive - These MTUs signal additional events ("and," "also"); realization of expectations ("indeed") or availability of additional alternatives ("alternatively," "or").
Adversative - These MTUs signal violation of expectations ("however," "nevertheless," "although," "but," "despite this"); dismissal, which indicates that two different paths in a solution arrive at the same pattern ("either way") or recognition, which signals that an implicit pattern is recognized and made explicit ("notice that").
Causal - These MTUs pertain to the knowledge about the subject matter, and signal the reason for performing an operation ("therefore," "so," "then," "because of this"); its purpose ("for this purpose," "to this end"); expectations ("hopefully," "expecting to get"); result ("as a result," "in consequence"); means ("this can be accomplished by"), or correctness ("this works because").
Attributive - These MTUs signal a generality or particularity relationship ("in general," "certain types of").
Temporal - These MTUs signal sequence ("then," "next," "finally") or partial sequence ("at the same time").
B. Knowledge-Acquisition MTUs
The MTUs that perform the Knowledge-Acquisition function are related to the state of the discourse and the interaction between the teacher and the student, rather than to the subject matter itself.
They ease the assimilation of the subject matter by alerting the student to prepare adequate mental resources. In the context of tutoring algebra, the Knowledge-Acquisition function is performed by the following types of MTUs:
Motivational - A teacher will often use this type of MTU to motivate a student to listen to the forthcoming technical utterance. For example, if a new method is to be taught, the tutor might say: "This method is very quick." If a student has to practice the same type of equation many times, the teacher might say: "Third degree equations are rather difficult and demand lots of practice."
Focal - A student generally attempts to process a forthcoming technical utterance in the currently active focus space (Grosz 1977). If the teacher wants the student to change the active focus space, he needs to present the student with an MTU to this effect. For example, the Focal MTU "Let us now consider the following equation" closes the focus space corresponding to the previous equation, and opens a new focus space for the next equation. Temporary focus shifts are signaled by MTUs like "incidentally" or "by the way" (Reichman 1984 and Grosz & Sidner 1985).
Categorical - These MTUs specify the manner in which a student should use the forthcoming information to update information in the previous technical utterance. For example, a tutor might say: "Let's take the first term on the right hand side, namely x*(x-3), ..." In this example, the MTU "namely" informs the listener that the explicit term merely paraphrases the preceding positional description and is not to be added to the first term on the right hand side. Categorical MTUs are included in Halliday and Hasan's internal category (Halliday & Hasan 1976) and in Longacre's elaborative heading (Longacre 1976).
Other MTUs in this subclass are: "in other words," "to be more specific" and "for example."
Implementational - These MTUs prepare the student to select a computational activity required for assimilating the technical utterance that follows. We have identified two main types of activities: adding an item to one's knowledge pool, and verifying the workings of existing knowledge (for possible revision). For instance, if the tutor wishes a student to use the forthcoming technical utterance to verify existing knowledge, he should signal his intent by means of an MTU like "as I have stated before." On the other hand, if the teacher wants the student to prepare for learning a new subject (i.e., transfer to addition mode) he might say: "Let us now discuss a new topic."
Estimational - These MTUs inform the student that the forthcoming technical utterance is of unusual length and/or complexity. Examples are: "This equation is rather straightforward," or "The following method entails several computations."
In order to illustrate the importance of Knowledge-Acquisition MTUs, let us examine the following imperfect discourse:
"Let us consider a linear equation"
3x - 7 = 4
... description of solution ...
"Let us consider a linear equation"
2x + 4 = 5
... description of solution ...
The dissonance in this discourse stems from the repetition of preparatory directives in lines 1 and 4. Both directives trigger expectations for receiving a new object, while, in fact, the second object is of the same class as the first. The appropriate directive for the fourth line should have been "Let us consider another linear equation."
In order to generate a commentary which includes Knowledge-Acquisition MTUs, we need a module that represents how both technical and meta-technical utterances influence the listener's mental activities.
This module would inspect the technical utterances about to be issued, determine their effect on the comprehension processes of the listener, and generate adequate Knowledge-Acquisition MTUs. We call this module the Comprehension-Processes Module (see section III).
C. Affect-Maintenance MTUs
One of the tutor's goals is to teach which algebraic operations and results are considered favorable and which are not. In addition, the tutor wishes the attitude of the student to remain positive throughout the session. To achieve these goals, a tutor may need to use Affect-Maintenance MTUs, which we divide into two subclasses according to their goals.
Affect-Transference - If the tutor is of the opinion that the forthcoming technical utterance should have an affective impact on the student, he might precede it by an MTU like "Unfortunately" or "Fortunately." For example, in the sentence "Unfortunately, the only way of solving this equation is to remove parentheses and collect terms," the affect-transference MTU "unfortunately" indicates that this approach is considered undesirable.
Consolatory - A teacher can partially attain the goal of maintaining a positive student attitude throughout a tutorial session by using Knowledge-Acquisition MTUs. There exist, however, situations in which negative affects cannot be prevented by means of these MTUs. For instance, a student may fail to understand a solution method, despite having received preparatory Knowledge-Acquisition MTUs. In cases like this, a teacher should reassure and console the student. This is the purpose of consolatory MTUs such as "Don't worry, I will explain this a few more times." Unlike affect-transference MTUs, consolatory MTUs are related to the state of the listener's learning process.
To generate commentaries which have a desired affective influence on a student, a discourse generator needs to neutralize anticipated negative affects by generating adequate Affect-Maintenance MTUs.
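The implementational and estimational decisions described in this taxonomy can be read as small discrimination procedures. The Python sketch below is purely illustrative, not FIGMENT's actual code: the predicates `seen_before`, `familiar`, `difficulty` and `talent` are hypothetical stand-ins for the student model, while the MTU strings are taken from the examples in the text.

```python
def implementational_mtu(seen_before, familiar):
    """Select an implementational MTU for the next technical utterance.

    seen_before: the utterance was already discussed with this student.
    familiar:    the student has mastered it.
    Returns an MTU string, or None if no preparation is needed.
    """
    if seen_before and not familiar:
        # Transfer the student to verification mode, avoiding the
        # confusion of re-adding material already in the knowledge pool.
        return "As I have stated before,"
    if not seen_before:
        # Transfer the student to addition mode for new material.
        return "Let us now consider a new type of equation."
    return None  # familiar material: continue without preparation


def estimational_mtus(difficulty, talent):
    """Select complexity-related (and, if needed, consolatory) MTUs.

    difficulty and talent are rough scores on the same scale; the
    threshold of 2 for 'extreme' mismatch is an arbitrary illustration.
    """
    mtus = []
    if difficulty > talent:
        mtus.append("This equation is very difficult.")
        if difficulty > talent + 2:
            # Extreme difficulty: neutralize anticipated frustration.
            mtus.append("However you should not be concerned, "
                        "as I will go over its solution a couple of times.")
    elif difficulty + 2 < talent:
        mtus.append("This equation is rather straightforward.")
    return mtus
```

A talented student (high `talent`) thus receives no complexity-related MTU for an equation that would elicit both a warning and a consolatory MTU for a weaker one, mirroring the behaviour described for FIGMENT's discrimination nets.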
III DESIGN OF FIGMENT
The system outlined in this section was designed to generate fluent and cogent commentaries on algebraic equations, based on the taxonomy presented in the preceding section. The generation of each commentary is performed in three stages.
In the first stage, the strategic components of FIGMENT produce a technical file, which consists of a list of technical messages (see figure 1). These components are: (1) Problem-Solving Expert, (2) Model of the Student's Knowledge and (3) Tutoring Strategist (see Sleeman and Brown 1982). The Problem-Solving Expert solves the equation and produces a graph in which each branch contains an attempted solution alternative. Next, the Tutoring Strategist modifies this graph by suppressing alternatives and steps which are well known to the student and adding explanations where necessary (e.g., purpose of an operation, its description, etc). Both modules use information about the state of the student's knowledge provided by a Model of the Student's Knowledge.

TOPIC: third-order
METHOD: general specific
EQUATION: x^3 - x^2 - x + 1 = 0
(ALTERNATIVE 1)
RULE: factor out x^2 from terms 1 and 2
PATTERN: x^2 is a factor common to terms 1 and 2
EXPECTATION: result has factor common with rest of terms
RESULT: x^2(x-1) - x + 1 = 0
RULE: rewrite -x+1 as -(x-1)
RESULT: x^2(x-1) - (x-1) = 0
RULE: factor out x-1
RESULT: (x-1)(x^2 - 1) = 0
RULE: apply formula a^2 - b^2 = (a+b)(a-b)
RESULT: (x-1)^2(x+1) = 0
CONTINUE: product of factors
Fig. 1. Stylized Representation of the Technical Part of the Tutoring Strategist's Output

In the next stage, the Comprehension-Processes Module complements and revises the technical file by adding appropriate MTUs. The affect-transference MTUs and most Knowledge-Organization MTUs can be directly derived from the structure of the technical file.
The Knowledge-Acquisition and consolatory MTUs, on the other hand, are generated by simulating some of the comprehension processes activated by a student when reading or listening to an explanation.
In the final stage, the Sentence Composer organizes the completed message into paragraphs and sentences, and translates it into English. Some Knowledge-Organization MTUs, which depend on the final structure of the text, are generated at this stage.
The Comprehension-Processes module, the Sentence Composer and the Model of the Student's Knowledge have been fully implemented, as well as those parts of the Problem Solving Expert's domain knowledge required for the text-generation task. The input to the Comprehension-Processes Module is hand coded, based on the design of the Tutoring Strategist.
A. Comprehension-Processes Module
The Comprehension-Processes Module generates MTUs for each technical message produced by the Tutoring Strategist. Each technical message is composed of a technical part accompanied by processing information.
The Comprehension-Processes Module directly derives affect-transference and most Knowledge-Organization MTUs from the structure of the technical file. Temporal and additive MTUs are derived from the sequence of the rules and alternatives (see figure 1); causal MTUs are extracted from the labels of the different entities (e.g., PATTERN and RULE are translated into a construct such as "Since x^2 is a factor common to the first and second term, we factor it out"). Attributive MTUs correspond to the type of method to be discussed, namely "general" or "specific." Finally, to generate adversative and affect-transference MTUs, FIGMENT traces the expectations established by technical and meta-technical utterances, and examines the effect of forthcoming utterances on these expectations (Zukerman 1986).
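The translation of technical-file labels into causal constructs can be sketched as simple templating. In the hypothetical Python fragment below, the dictionary layout of a technical message and the function name are assumptions for illustration, not FIGMENT's actual representation; the field names and strings come from the example in the text.

```python
# Hypothetical rendering of one technical message from the technical
# file (cf. Fig. 1); the dict layout is illustrative, not FIGMENT's.
message = {
    "PATTERN":     "x^2 is a factor common to the first and second terms",
    "RULE":        "we factor it out",
    "EXPECTATION": "to get a factor common to the rest of the terms",
}

def causal_construct(msg):
    """Translate PATTERN/RULE labels into a causal construct,
    appending the EXPECTATION as a purpose clause when present."""
    text = "Since %s, %s" % (msg["PATTERN"], msg["RULE"])
    if "EXPECTATION" in msg:
        text += ", hoping %s" % msg["EXPECTATION"]
    return text + "."
```

Applied to the message above, `causal_construct` yields "Since x^2 is a factor common to the first and second terms, we factor it out, hoping to get a factor common to the rest of the terms.", the kind of construct quoted in the text.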
Since the primary role of Knowledge-Acquisition MTUs is to prepare the listener for processing the forthcoming technical utterance, we let the Comprehension-Processes Module simulate several processes which the listener undergoes upon hearing an utterance, and use the result to identify the type of preparation required. If any of these processes results in negative affects, then a Knowledge-Acquisition MTU is prefixed to the utterance. The processes simulated correspond to the different types of Knowledge-Acquisition MTUs presented in section II.B, namely: motivational, focal, categorical, implementational and estimational. Figure 2 illustrates the process of establishing the implementation mode for the forthcoming technical utterance and generating an implementational MTU. Figure 3 depicts the process of selecting a complexity-related (estimational) MTU, and an accompanying consolatory MTU, if necessary.
Our model for selecting implementational MTUs reflects the following mental process: if a student receives a technical utterance which was already discussed, but with which he is not very familiar, he will attempt to add it to his knowledge pool (addition mode). Then, upon discovering that the accessed memory location already contains some information, he might experience disrespect or confusion. However, if the tutor transfers him first to verification mode (by generating an MTU such as "As I have said before" or "Let's go over this once more"), the learning process can continue unhindered. Alternatively, if a student is presented with a new technical utterance, and is unable to tell whether he has seen this utterance previously, the teacher should transfer him first to addition mode, calling for an MTU like "Let's now consider a new type of equation."
[Fig. 2. Process for Generating an Implementational MTU - decision tree not legible in this extraction]
In the simplified model presented in figure 3, the difficulty of the information a student can comfortably digest depends on the complexity inherent in the technical utterance, the student's talent and his previous mastery of this utterance. According to this model, an equation which is extremely difficult for a particular student might elicit the following text: "This equation is very difficult, however you should not be concerned, as I will go over its solution a couple of times." A more talented student, on the other hand, may not require a complexity-related MTU for this equation.
[Fig. 3. Process for Generating a Complexity-related MTU - decision tree not legible in this extraction]
A discrimination net similar to the ones depicted above exists for each type of Knowledge-Acquisition MTU (Zukerman 1986). Each technical utterance traverses each of these nets in order to ascertain which MTUs it requires. After the relevant MTUs have been generated, the output of the Comprehension-Processes Module is composed of a list of technical utterances interleaved with codes which specify requirements for MTUs. Table 1 depicts the MTU requirement-codes generated for the sample input in figure 1.
The starred entries correspond to Knowledge-Organization and affect-transference MTUs, while the rest correspond to Knowledge-Acquisition MTUs.
[Table 1. MTU Requirement-codes for Sample Input - column alignment lost in extraction; the utterance column lists TOPIC, METHOD, METHOD, EQUATION, RULE (Factor out), EXPECTATION, RULE (Rewrite), RESULT, RULE (Factor out) and RULE (Formula), paired with MTU types Motivation, Focus, *Affect Transference, *Expectation, Focal, Implementation and *Sequence, and MTU codes including (HIGHLIGHT ATTRIBUTES), CLOSE, NEGATIVE, VIOLATION, OPEN, KNOWN and REALIZATION]
Most of the processes activated by the Comprehension-Processes Module rely on the hierarchical problem-solving structure of the transferred knowledge (i.e., topic, equation and solution alternatives). This structure is shared by many technical tutoring domains. The extensibility of the Comprehension-Processes Module to these domains hinges on its ability to incorporate new types of technical utterances, since the presence of a technical utterance or its accompanying MTUs may influence the need for MTUs in other technical utterances.
B. Sentence Composer
The Sentence Composer collects the technical information and the generated MTU-codes into a stylistically sound representation. To perform this task, it activates the following components.
A Phrasal Dictionary - This component applies a generation process based on the Augmented Transition Network (ATN) formalism to produce words and expressions commonly used in tutoring technical subjects. For example, the word "new," or a sentence like "we have never seen this topic before" may be generated from the following dictionary entry:
NEW = {"new," "$person have (never) (study lp pp) $subject (before)"}
An Attribute-clause Generator - This component produces sentences containing attributes of a given item, and information regarding the knowledge-status of the student with respect to this item.
For instance, the following sentence is generated by this component: "We shall consider a very important topic, which we have not encountered for a while, and is quite challenging." In order to generate this type of sentence, the attribute-clause generator applies rhetorical rules, which determine the number of clauses to be produced, and collect the attributes into clauses.
Utterance Generators - The English representation of an MTU and the manner in which multiple MTUs interact depend on the technical utterance providing the context. Therefore, for each type of technical utterance (e.g., topic, method, pattern, etc), the Sentence Composer features a dedicated text generator, which applies rhetorical rules to determine the order and manner in which technical and meta-technical utterances shall be presented. These generators enable FIGMENT to generate several MTUs that perform the same function (e.g., specifying implementation mode), but whose English representations differ according to the type of the technical utterance under consideration. For instance, the MTUs "As I have stated before" and "This equation is similar to ..." put a student in verification mode; however, while the former refers to explanations, the latter applies to equations.
The following text illustrates a typical output of FIGMENT's Sentence Composer:
1 Let us now look at a rather interesting topic,
2 namely third degree equations, which is also
3 challenging. Unfortunately, we shall not examine
4 a general technique for solving equations in this
5 subject. However, we can solve certain types of
6 third degree equations by factoring out common
7 factors, or, alternatively, applying the appropriate
8 factorization formula. Here is an equation:
9 x^3 - x^2 - x + 1 = 0
10 First, since x^2 is a factor common to the first and
11 second terms, we factor it out. As you know, we
12 perform this operation hoping to get a factor
13 common to the rest of the terms.
Through it we
14 get the following result:
15 x^2(x-1) - x + 1 = 0
16 Next, we rewrite -x+1 as -(x-1), arriving at the
17 result we were hoping for:
18 x^2(x-1) - (x-1) = 0
19 Afterwards we factor out x-1, yielding:
20 (x-1)(x^2 - 1) = 0
21 We continue by applying the factorization
22 formula a^2 - b^2 = (a+b)(a-b) to x^2-1, arriving
23 at the following result:
24 (x-1)^2(x+1) = 0
25 We obtain the solution by solving separately for
26 each factor.
IV CONCLUSIONS
It is generally believed that any system which generates continuous discourse must contain models of both the process by which a listener absorbs information and the affective impact of this information. This paper offers a concrete design of the makeup of these models and their incorporation in a text generation system as tools for generating fluent and cogent text. Specifically, this paper presents a generally applicable operational taxonomy of MTUs, and demonstrates its usefulness in maintaining continuity in multi-sentential text. It also shows the sufficiency of shallow models of the listener's Comprehension-Process to weave appropriate MTUs into technical discourse. The text generated by using these models captures sufficient rhetorical features to support continuous discourse.
REFERENCES
Davey, A. (1979), Discourse Production. Edinburgh University Press, Edinburgh.
Farnes, N.C. (1973, revd. 1975), Comprehension and the use of Context, Unit 4, Reading Development. Educational Studies: a Post-Experience Course and 2nd Level Course P.E. 261.
Grosz, B.J. (1977), The Representation and Use of Focus in Dialogue Understanding. Doctoral Dissertation, University of California, Berkeley.
Grosz, B.J. and Sidner, C.L. (1985), Discourse Structure and the Proper Treatment of Interruptions. In IJCAI-85 Proceedings, pp. 832-839.
Halliday, M.A.K. and Hasan, R. (1976), Cohesion in English. Longman Press, London.
Hoey, M. (1979), Signaling in Discourse.
English Language Research, University of Birmingham, Birmingham Instant Print Limited.
Kukich, K. (1984), The Feasibility of Automatic Natural Language Report Generation, Doctoral Dissertation, The Interdisciplinary Department of Information Science, University of Pittsburgh, Pennsylvania.
Longacre, R.E. (1976), An Anatomy of Speech Notions. Peter de Ridder Press Publications in Tagmemics No. 3.
Mann, W.C. and Moore, J.A. (1980), Computer as Author - Results and Prospects. Report No. ISI/RR-79-82, Information Sciences Institute, Los Angeles, January 1980.
McKeown, K.R. (1982), Generating Natural Language Text in Response to Questions About Database Structure. Doctoral Dissertation, The Moore School of Electrical Engineering, University of Pennsylvania, Philadelphia.
Reichman-Adar, R. (1984), Extended person-machine interface. In Artificial Intelligence 22, pp. 157-218.
Sleeman, D. and Brown, J.S. (Eds.) (1982), Intelligent Tutoring Systems, London: Academic Press.
Swartout, W.R. (1982), XPLAIN: A System for Creating and Explaining Expert Consulting Programs, USC/Information Sciences Institute.
Winter, E.O. (1968), Some Aspects of Cohesion. In Sentence and Clause in Scientific English, by R. D. Huddleston et al., Communication Research Centre, Department of General Linguistics, University College, London, May 1968, pp. 560-604.
Zukerman, I. (1986), Computer Generation of Meta-technical Utterances in Tutoring Mathematics. Doctoral Dissertation, University of California, Los Angeles.
A LOGICAL-FORM AND KNOWLEDGE-BASE DESIGN FOR NATURAL LANGUAGE GENERATION*
Norman K. Sondheimer
USC/Information Sciences Institute, 4676 Admiralty Way, Marina del Rey, CA 90292
ABSTRACT
This paper presents a technique for interpreting output demands by a natural language sentence generator in a formally transparent and efficient way. These demands are stated in a logical language. A network knowledge base organizes the concepts of the application domain into categories known to the generator. The logical expressions are interpreted by the generator using the knowledge base and a restricted, but efficient, hybrid knowledge representation system. This design has been used to allow the NIGEL generator to interpret statements in a first-order predicate calculus using the NIKL and KL-TWO knowledge representation systems. The success of this experiment has led to plans for the inclusion of this design in both the evolving Penman natural language generator and the Janus natural language interface.
1. INTRODUCTION
We have as a general goal the development of natural language generation capabilities. Our vehicle for these capabilities will be a reusable module designed to meet all of a system's needs for generated sentences. The generation module must have an input notation in which demands for expression are represented. The notation should be of general applicability. For example, a good notation ought to be generally useful in a reasoning system. The notation should have a well-defined semantics. In addition, the generator has to have some way of interpreting the demands. This interpretation has to be efficient. In our research, we have chosen to use formal logic as a demand language. Network knowledge-bases are used to define the domain of discourse in order to help the generator interpret the logical forms. And a restricted, hybrid knowledge representation is utilized to analyze demands for expression using the knowledge base.
*This research is supported by the Defense Advanced Research Projects Agency under Contract No. MDA903 81 C 0335 and by the Air Force Office of Scientific Research under FQ8671-8401007. Views and conclusions contained in this report are the authors' and should not be interpreted as representing the official opinion or policy of DARPA, AFOSR, the U.S. Government, or any person or agency connected with them.
Bernhard Nebel
Technische Universitaet Berlin, Sekr. FR 5-8, Franklinstr. 28/29, D-1000 Berlin 10, West Germany
Arguments for these decisions include the following: Formal logic is a well established means of expressing information with a well-defined semantics. Furthermore, it is commonly used in other aspects of natural language processing, as well as other AI systems. Network knowledge-base notations have been shown to be effective and efficient in language processing. Work on network representations has shown that they too can be given formal semantics [Schmolze and Lipkis 83]. Finally, recent work on hybrid knowledge representation systems has shown how to combine the reasoning of logic and network systems [Brachman 85]. Restricted-reasoning hybrid systems have shown this reasoning can be done efficiently. On our project, we have:
1. Developed a demand language based on first order logic,
2. Structured a NIKL (New Implementation of KL-ONE) network [Kaczmarek 86] to reflect conceptual distinctions observed by functional systemic linguists,
3. Developed a method for translation of demands for expression into a propositional logic database,
4. Employed KL-TWO [Vilain 85] to analyze the translated demands, and
5. Used the results of the analyses to provide directions to the Nigel English sentence generation system [Mann & Matthiessen 83].
This paper presents our design and some of our experiences with it.
Others have attempted to design an interface between a linguistic generation engine and an associated software system using an appropriate information representation [Goldman 75, Appelt 83, Hovy 85, Kukich 85, Jacobs 85, McKeown 85]. Still others have depended on information demand representations with similar well-defined semantics and expressive power, e.g., [Shapiro 79]. However, Shapiro's system produces a logician's reading of expressions rather than colloquial English. For example, the popular song "Every man loves a woman." might be rendered "For all men, there exists a woman that they love.". The generation component of HAM-ANS [Hoeppner et al. 83] and one effort of McDonald's [McDonald 83] are probably closest to our design. HAM-ANS also uses a logical language (the same one used for representing the analyzed input), has an extensive network domain model, and has a separable linguistic engine (although not as broad in coverage as Nigel). However, the interface language is close to surface linguistic representation, e.g., there are particular expressions for tense and voice. So while it is easier to generate sentences from such structures, it is correspondingly harder for software systems to produce demands for expression without having access to significant amounts of linguistic knowledge. McDonald accepts statements in the first order predicate calculus, processes them with a grammar, and outputs excellent English forms. It is hard to evaluate the coverage of McDonald's grammar; however, the program does depend on extensive procedurally-encoded domain-dependent lexical entries (actually, the entries are language fragments associated with terms that can appear in the input language). Our domain dependencies are limited to the correct placement of concepts in the NIKL hierarchy and the association of lexical entries with the concepts.

From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved.
These lexical entries are only characterized by syntactic features. In Section 2, we present the component technologies we have applied. Section 3 presents the method by which they are combined. Section 4 presents several examples of their use. We conclude with a section describing the open problems identified by our experiences and our plans for future work.

2. BASIC COMPONENTS

The processes and representations we have employed include a unique linguistic component (Nigel), a frame-based network knowledge representation (NIKL), a propositional reasoner that can take advantage of the network knowledge representation (KL-TWO), and our own first order logic meaning representation.

2.1. Nigel

The Nigel grammar and generator realizes the functional systemic framework [Halliday 76] at the level of sentence generation. Within this framework, language is viewed as offering a set of grammatical choices to its speakers. Speakers make their choices based on the information they wish to convey and the discourse context they find themselves in. Nigel captures the first of these notions by organizing minimal sets of choices into systems. The grammar is actually just a collection of these systems. The factors the speaker considers in evaluating his communicative goal are shown by questions called inquiries inside of the chooser that is associated with each system [Mann 83a]. A choice alternative in a system is chosen according to the responses to one or more of these inquiries. For example, because the sentences describing different types of processes differ grammatically, the grammar has a system distinguishing the four major process types. This is shown graphically in Figure 2-1. The chooser associated with this system is represented as a decision tree in Figure 2-2. The nodes of the tree show inquiries. The branches are labeled with responses to the inquiries immediately to their left. The leaves show the choices made when that point is reached.
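A chooser of this kind behaves like a small decision tree over inquiry responses. A minimal Python sketch follows; the inquiry names come from Figure 2-2, but the traversal order and response values are illustrative, not Nigel's actual implementation:

```python
# Illustrative sketch of a Nigel-style chooser: a decision tree whose
# nodes are inquiries (questions posed to the environment) and whose
# leaves are grammatical choices.  All names here are hypothetical.

def process_type_chooser(ask):
    """ask(inquiry) -> response string; returns the chosen process type."""
    if ask("StaticConditionQ") == "static":
        return "Relational"            # states of affairs
    if ask("VerbalProcessQ") == "verbal":
        return "Verbal"                # acts of communication
    if ask("MentalProcessQ") == "mental":
        return "Mental"
    return "Material"                  # default: physical action

# A toy environment answering the inquiries for "Smith sent a message.":
responses = {"StaticConditionQ": "nonstatic",
             "VerbalProcessQ": "nonverbal",
             "MentalProcessQ": "nonmental"}
choice = process_type_chooser(responses.get)
print(choice)  # prints: Material
```

Passing the environment in as a function keeps the grammar itself free of domain knowledge, which mirrors the separation the paper describes between Nigel and the application system.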
For example, there is an inquiry, VerbalProcessQ, to test whether the process is one of communication. If the answer is "verbal", then the alternative "Verbal" is chosen in the system. There are many inquiries like VerbalProcessQ. Elsewhere, as part of deciding on number, Nigel has an inquiry MultiplicityQ that determines whether an object being described is unary or multiple. These are examples of information characterization inquiries.

[Figure 2-2: Chooser for the Process Type System]

Another type of inquiry, called information decomposition, picks out of the environment the conceptual entities to be described. For example, at appropriate times, Nigel asks for descriptions of the causers of events, CauserID, or the objects affected in them, AffectedID. One very special inquiry, TermSpecificationID, establishes the words that will be used. Nigel asks the environment for a set of lexical entries that can be used to describe an entity. So Nigel might find itself being told to describe some event as a "Send" or some object as a "Message". Nigel currently has over 230 systems and 420 inquiries and covers a large subset of English. Up until the effort described here, the developers of Nigel had only identified the inquiries of the grammar, but not implemented them.

2.2. NIKL

NIKL is a network knowledge-base system descended from KL-ONE [Brachman and Schmolze 85]. This type of reasoner supports description of the categories of objects, actions, and states of affairs that make up a domain. The central components of the notation are sets of concepts and roles, organized in IS-A hierarchies. The concepts are used to identify the categories of entities. The roles are associated with concepts (as "role restrictions"), and identify the relationships that can hold between actual individuals that belong to the categories.
The IS-A hierarchies identify when membership in one category (or the holding of one relationship) entails membership in (or the holding of) another. We have experimented with a mail and calendar NIKL domain model developed for the Consul project [Kaczmarek, Mark, and Sondheimer 83]. It has a concept Send that is meant to identify the activity of sending messages. Send IS-A type of Transmit (intended to identify the general activity of transmission of information). Send is distinguished from Transmit by having a role restriction actee that relates Sends to Messages. The concept of Message is defined as being a kind of a communication object, through the IS-A relation to a concept Communication. In addition, role restrictions connect Message to properties of messages which serve to distinguish it from other communication objects. The overall model has over 850 concepts with over 150 roles.

[Figure 2-1: Sample Nigel System: Process Type]

In flavor, NIKL is a frame system, with the concepts equivalent to frames and the role restrictions to slots. However, the NIKL representation can be given a formal semantics.

2.3. KL-TWO

KL-TWO is a hybrid knowledge representation system that uses NIKL's formal semantics. KL-TWO links another reasoner, PENNI, to NIKL. For our purposes, PENNI, which is an enhanced version of RUP [McAllester 82], can be viewed as restricted to reasoning using propositional logic. As such, PENNI is more restricted than those systems that use first order logic and a general purpose theorem prover. But it is also more efficient. PENNI can be viewed as managing a database of propositions of the form (P a) and (Q a b), where the forms are variable free. The first item in each ordered pair is the name of a concept in an associated NIKL network and the first item in each ordered triple is the name of a role in that network.
So the assertion of any form (P a) is a statement that the individual a is a kind of thing described by the concept P. Furthermore, the assertion (Q a b) states that the individuals a and b are related by the abstract relation described by Q. NIKL adds to PENNI the ability to do taxonomic reasoning. Assume the NIKL database contained the concepts just described in discussing NIKL. Assume that we assert just the following three facts: (Transmit x), (actee x y) and (Message y). Using the knowledge base, PENNI is able to recognize that any Transmit, all of whose actees are Messages, is a Send. So if we ask if (Send x) is true, KL-TWO will reply positively. KL-TWO can also retrieve information from its database. For example, if asked what individuals were the actees of x, it could respond with y.

2.4. THE LOGICAL LANGUAGE

Our logical language is based on first order logic. To it, we have added restricted quantification, i.e., the ability to restrict the set quantified over. In addition, we allow for equality and some related quantifiers and operators, such as the quantifier for "there exists exactly one ..." (∃!) and the operator for "the one thing that ..." (ι). We permit the formation and manipulation of sets, including a predicate for set membership (ELEMENT-OF). And we have some quantifiers and operators based on Habel's η operator [Habel 82]. Figure 2-3 gives three examples of the forms accepted. Included are a few individuals: two people (RICHARD and SMITH), the computer (COMPUTER), a set of messages (MM33), and the current time (NOW). Later on, we will show how these are turned into English by our system. We include in our language a theory of the categories of conceptual entities and their relationships. We have taken what is often referred to as a Davidsonian approach [Davidson 67]. This is marked by quantification over events and states of affairs. We refer to these as ActionOccurrences and RelationshipOccurrences, respectively.
We associate time and place with these entities. We differ from Davidson by identifying a class of abstract Actions and Relationships that are recorded by ActionOccurrences and RelationshipOccurrences. This approach is inspired by representations that associate time and place indices with formulas [Montague 74]. With Actions and Relationships, we associate the participants and circumstances of the actions and states-of-affairs, e.g., the actor and actee. (PENNI actually works with the quantifier-free predicate calculus with equality. It has a demon-like facility capable of some quantificational reasoning as well.)

A. (∃x ∈ ActionOccurrence)((∃p ∈ Past)(timeofoccurrence(x,p)) ∧
   (∃y ∈ Transmit)(records(x,y) ∧ actor(y,SMITH) ∧
   (∃z ∈ Message)actee(y,z)))

B. (∃x ∈ ActionOccurrence)((∃t ∈ Future)(timeofoccurrence(x,t)) ∧
   (∃y ∈ Display)(records(x,y) ∧ actor(y,RICHARD) ∧
   requestedobject(y, ιq((∃!z ∈ ActionOccurrence)
   ((∃p ∈ Past)(timeofoccurrence(z,p)) ∧
   (∃w ∈ Send)(records(z,w) ∧ actor(w,SMITH) ∧ actee(w,q))))) ∧
   beneficiary(y,COMPUTER)))

C. (∃z ∈ ActionOccurrence)
   (∃d ∈ Display)(records(z,d) ∧ actor(d,RICHARD) ∧
   beneficiary(d,COMPUTER) ∧
   (∀m ∈ MM33)(∃!r ∈ {r | (∃s ∈ RelationshipOccurrence)
   (timeofoccurrence(s,NOW) ∧
   (∃t ∈ InspectionStatus)
   (records(s,t) ∧ range(t,r) ∧ domain(t,m)))})
   requestedobject(d,r))

Figure 2-3: Example Logical Expressions

In addition to using the logical language for the demands for expression, we use it to maintain a database of factual information. Besides the "facts of the world", we assume the availability of such knowledge as:
*Hearer, speaker, time and place.
*The theme of the ongoing discussion.
*Objects assumed to be identifiable to the hearer.

Work on maintaining this database is proceeding in parallel with the work reported here. Finally, we have allowed for a speech act operator to be supplied to the generation system along with the logical form. This can be ASSERT, COMMAND, QUERY, ANSWER or REFER. ANSWER is used for Yes/No answers.
The others are given the usual interpretation.

3. CONNECTING LANGUAGE AND LOGIC

Restating the general problem in terms of our basic components, a logical form submitted as a demand for expression must be interpreted by the Nigel inquiries. Nigel must be able to decompose the expressions and characterize their parts. To achieve this, we have used NIKL to categorize the concepts (or terms) of the domain in terms of Nigel's implicit categorizations. We have written Nigel inquiries which use the structure of the logical language and the NIKL model to analyze the logical forms. To do this efficiently, we have developed a way to translate the logical form into a KL-TWO database and use its reasoning mechanisms.

3.1. Functional Systemic Categorizations in a NIKL Knowledge Base

Our NIKL knowledge base is structured in layers. At the top are concepts and roles that reflect the structure we impose on our logical forms. Here we find concepts like ActionOccurrence and Action, as well as roles like records. At the bottom are the entities of the domain. Here we find concepts like Transmit and Send, as well as roles like requestedobject. All of these concepts and roles must be shown as specializing concepts at a third, intermediate level, which we have introduced to support Nigel's generation.

Functional systemic linguists take a Whorfian view: that there is a strong connection between the structures of thought and language. We follow them in categorizing domain concepts in a way that reflects the different linguistic structures that describe them. For example, we have distinguished three types of actions (verbal, mental and material) because the clauses that describe these actions differ in structure. For the same reason, we have at least three types of relationships: ascription, circumstantial and generalized possession. Roughly, ascription relates an object to an intrinsic property, such as its color. Circumstantials involve time, place, instrument, etc. In addition to ownership, generalized possession includes such relationships as part/whole and social association. Some of these categories are shown graphically in Figure 3-1. The double arrows are the IS-A links. The single arrows are role restrictions.

[Figure 3-1: Example Upper and Intermediate Layer Categories]

Relating these distinctions to our earlier examples, the concepts Transmit and Send are modeled as subclasses of MaterialAction. Message is a kind of NonConsciousThing. This modeling extends to the role hierarchy, as well. For example, the role requestedobject is modeled as a kind of actee role. The insertion of systemic distinctions does not compromise other factors, since non-linguistic categorizations can co-exist in the knowledge base with the systemic categories. Once the domain model is built, we expect the systems using the generator never to have to refer to our middle level concepts. Furthermore, we expect Nigel never to refer to any domain concepts. Since the domain concepts are organized under our middle level, we can note that all domain predicates in logical forms are categorized in systemic terms. To be complete, the domain model must identify each unary predicate with a concept and each binary predicate with a role. The concepts in a logical form must either reflect the highest, most general, concepts in the network or the lowest layer. The domain predicates must therefore relate through domain concepts to systemic categories.

3.2. Logical Forms in KL-TWO

Gary Hendrix [Hendrix 75] developed the notion of Partitioned Semantic Networks in order to add the representational power of quantifier scoping, belief spaces, etc., to the semantic network formalism.
This does not pay off in terms of faster inferences, but it allows us to separate the two structures inherent in logical formulas: the quantification scopes and the connections of terms. In partitioned networks, these are represented by hierarchically ordered partitions and network arcs, respectively. This separation of the scope and connection structure is needed. The connection structure can be used to evaluate Nigel's inquiries against the model, and the scope structure can be used to infer additional information concerning quantification. We translate a logical form into an equivalent KL-TWO structure. All predications appearing in the logical form are put into the PENNI database as assertions. Figure 3-2 shows the set of assertions entered for the formula in Figure 2-3A. These are shown graphically in Figure 3-3, which includes the partitions. KL-TWO does not support partitions. Instead of creating scope partitions, a tree is created which reflects the variable scoping. Here we diverge from Hendrix because of the demands of our language. Separate scopes are kept for the range restriction of a quantification and its predication. In addition, the scopes of the term-forming operators, such as ι, are kept in the scope structure. During the translation, the variables and constants are given unique names so that these assertions are not confused with true assertional knowledge (this is not shown in our examples). These new entities may be viewed as a kind of hypothetical object that Nigel will describe, but the original logical meaning may still be derived by inspecting the assertions and the scope structure.

3.3. Implementation of Nigel Inquiries

Our implementation of Nigel's inquiries using the connection and scope structures with the NIKL upper structure is fairly straightforward to describe.
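Before turning to the inquiries themselves, the translation just described can be made concrete. In this toy Python sketch, each predication of Figure 2-3A becomes a variable-free assertion, and a single hand-coded classification rule stands in for NIKL/KL-TWO's taxonomic reasoning; every function and rule here is illustrative, not the actual system:

```python
# Rough sketch of the translation step: predications of a logical form
# are entered as variable-free assertions, and a toy classifier stands
# in for NIKL/KL-TWO taxonomic reasoning.  All names are illustrative.

db = set()

def tell(*assertion):
    db.add(assertion)

def ask(*assertion):
    return assertion in db

# Assertions translated from Figure 2-3A ("Smith sent a message."):
for a in [("ActionOccurrence", "x"), ("Past", "p"),
          ("timeofoccurrence", "x", "p"), ("Transmit", "y"),
          ("records", "x", "y"), ("actor", "y", "SMITH"),
          ("Message", "z"), ("actee", "y", "z")]:
    tell(*a)

def classify():
    """Toy stand-in for NIKL classification: a Transmit whose actee is
    a Message is recognized as a Send."""
    for (role, a, b) in [t for t in db if len(t) == 3]:
        if role == "actee" and ("Transmit", a) in db and ("Message", b) in db:
            tell("Send", a)

classify()
print(ask("Send", "y"))  # prints: True -- y is recognized as a Send
```

The real system's rule is stronger (a Transmit all of whose actees are Messages), but the shape of the computation is the same: assertion, classification, then truth queries against the enlarged database.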
Since the logical forms reflecting the world view are in the highest level of the NIKL model, the information decomposition inquiries use these structures to do search and retrieval.

(ActionOccurrence x) (Past p) (timeofoccurrence x p) (Transmit y) (records x y) (actor y SMITH) (Message z) (actee y z)
Figure 3-2: Sample PENNI Assertions

[Figure 3-3: Sample Partition Structure]

With all of the predicates in the domain specializing concepts in the functional systemic level of the NIKL model, information characterization inquiries that consider aspects of the connection structure can test for the truth of appropriate PENNI propositions. The inquiries that relate to information presented in the quantification structure of the logical form will search the scope structure. Finally, to supply lexical entries, we associate lexical entries with NIKL concepts as attached data and use the retrieval methods of PENNI and NIKL to retrieve the appropriate terms. Let's consider some examples. The generation activity begins with a pointer to the major ProcessOccurrence. By the time CauserID is asked, Nigel has a pointer to what it knows to be a caused Action. CauserID is realized by a procedure that finds the thing or things that are in actor type relationships to the Action. AffectedID works similarly through the actee predicate. When VerbalProcessQ is asked, Nigel simply asks PENNI if a proposition with VerbalAction and the Action is true. These examples emphasize the use of the connection structure to analyze what functional systemic grammarians call the ideational content of an utterance. In addition, utterances are characterized by interpersonal content, e.g., the relation between the hearer and the speaker, and textual content, e.g., relation to the last utterance.
We have been developing methods for storing this information in a PENNI database, so that interpersonal and textual inquiries can also be answered by asking questions of PENNI. MultiplicityQ is an example of a more involved process. When it is invoked, Nigel has a pointer to an individual to be described. The inquiry identifies all sets as multiple and any non-set individuals as unitary. For non-set variables, it explores their scoping environment. Its most interesting property involves an entity whose quantification suggests an answer of unitary. If that entity is shown in the logical form as a property of or a part of some second entity, lies inside the scope of the quantifier that binds the second entity, and the second entity must be treated as multiple, then both entities are said to be multiple. TermSpecificationID is unique in that it explores the NIKL network directly. It is given a pointer to a PENNI individual. It accesses the most specific generic concept PENNI has constructed to describe the individual. It looks at this concept and then up through more general categories until it finds a lexical entry associated with a concept.

4. EXAMPLE SENTENCES

Space constraints forbid presentation of a complete example. Let's look at a few points involved in transforming the three example logical forms in Figure 2-3 into English. Assume for Example 2-3A that, at this moment, the COMPUTER wishes to communicate to RICHARD the information as an assertion, and that SMITH is known by name through the PENNI database. The flow starts with x identified as the central ProcessOccurrence. From there, y is identified as describing the main process. TermSpecificationID is applied to y in one of the first inquiries processed. This is stated to be a Transmit. However, we are also told that its actee is a Message. Assuming the model described in Section 2.2, PENNI concludes that y is not just a Transmit, but a Send as well.
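The hierarchy climb that TermSpecificationID performs (Section 3.3) can be sketched as follows; the IS-A fragment and lexicon are hypothetical stand-ins for the mail-domain NIKL model:

```python
# Illustrative sketch of TermSpecificationID: start at the most
# specific concept describing an individual and climb IS-A links until
# a concept with an attached lexical entry is found.  The hierarchy
# and lexicon below are hypothetical stand-ins for the NIKL model.

is_a = {"Send": "Transmit", "Transmit": "MaterialAction",
        "Message": "Communication", "Communication": "NonConsciousThing"}
lexicon = {"Send": "send", "Transmit": "transmit", "Message": "message"}

def term_specification(concept):
    while concept is not None:
        if concept in lexicon:
            return lexicon[concept]
        concept = is_a.get(concept)    # climb one IS-A link
    return None                        # no lexical entry found

print(term_specification("Send"))           # prints: send
print(term_specification("Communication"))  # prints: None
```

A concept with an entry of its own succeeds immediately; one without an entry anywhere up its IS-A chain yields nothing.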
This leads TermSpecificationID to look first at Send for a lexical entry. Next, Nigel asks for a pointer to the time being referred to and receives back p. Later this is evaluated against the speaking time to establish the tense. Further on, Nigel characterizes the process. The inquiries attempt to prove, in turn, that y is a Relationship, a MentalActive, and a VerbalAction. When none of these queries are answered positively, it concludes that y is a MaterialAction. After establishing that y is a kind of event that is caused, Nigel uses CauserID and AffectedID. It receives back SMITH and z, respectively. The actual decisions on how to describe SMITH and z are arrived at during separate passes through the grammar. During the pass for SMITH, TermSpecificationID returns his name, "Smith". MultiplicityQ is invoked and returns unitary. During the pass for z, TermSpecificationID returns "message", while MultiplicityQ returns unitary. In the end, the sentence "Smith sent a message." is generated. Looking at Example 2-3B, one difference on the level of the outermost ActionOccurrence is the absence of an actee relationship. However, requestedobject is shown in the model as a type of actee relationship and AffectedID returns q. In order to describe q the grammar forms a relative clause, "which was sent by Smith". There is no overt indication of the type of entity q is. However, from the model of Send, PENNI infers that (Message q) is true. TermSpecificationID for q returns "message". Treating the sentence as a command and assuming "show" is associated with Display, Nigel will produce "Show me the message which was sent by Smith.". Example 2-3C allows us to consider the use of the scope structure in deciding the multiplicity of r. We are required to describe the displaying of the single inspection status (or read status) that is found for each message in a set of messages. As noted, we have modeled InspectionStatus as an Ascription relationship.
The grammar uses "of" to describe this sort of relation in a noun phrase. MultiplicityQ evaluates m as multiple. Because r is in m's scope, it too is evaluated as multiple and the noun phrase is rendered as "the read statuses of the messages". If the scopings were reversed, the logical form would indicate that there was only one read status for all the messages. MultiplicityQ would evaluate r as unitary and the noun phrase would be "the read status of the messages". If both the quantifiers were existential, then each scoping would result in MultiplicityQ evaluating both as unitary. The noun phrase would be rendered as "the read status of the message". If m were bound by an ∃!, an ι, or replaced by a unitary constant, and r bound by a universal quantifier, the rendering would be "the read statuses of the message". In Figure 4-1, we display a set of sentences to give the reader some idea of the generator's range as of January 1986. Nigel played the part of both participants in the dialogue, which used hand constructed logical forms and dialogue contexts in the absence of an associated software system. Figure 4-2 gives the logical form which forced the generation of the next to the last sentence in the example set.

*User: What is the read status of the message which was sent by Jones?
*System: Message 40 is unread.
*User: Are any messages which were sent by Jones in the tape file?
*System: Yes. There are messages which were sent by Jones. However, the tape is in use.
*User: Did Smith send any messages?
*System: Message 45 was sent by Smith.
*User: Did Smith schedule his New York meeting with me?
*System: Yes.
*User: What does the window show?
*System: The window shows a request by Jones.
*User: Did I create the window?
*System: The window was created when you asked me to display Message 50. You deleted the message, then you asked me to display it.
Figure 4-1: A Sample Set of Generated Sentences

(∃e ∈ ActionOccurrence)
(∃p ∈ Past)(timeofoccurrence(e,p) ∧
(∃f ∈ ActionOccurrence)
(∃q ∈ Past)(timeofoccurrence(f,q) ∧
(∃r ∈ RelationOccurrence)
(∃t ∈ Simultaneity)
(records(r,t) ∧ domain(t,p) ∧ range(t,q)) ∧
(∃n ∈ NaturalLanguageRequestAction)
(records(f,n) ∧ actor(n,RICHARD) ∧
beneficiary(n,COMPUTER) ∧
(∃x ∈ NaturalLanguageRecord)(actee(n,x) ∧
(∃d ∈ Command)(forceofnlrecord(x,d) ∧
(∃ee ∈ ActionOccurrence)(focus(x,ee) ∧
(∃dd ∈ Display)
(records(ee,dd) ∧ actor(dd,COMPUTER) ∧
actee(dd,M50)))))))) ∧
(∃c ∈ Create)(records(e,c) ∧ actee(c,W34))

Figure 4-2: A More Complex Logical Form

5. CONCLUSION

5.1. Summary

To summarize, we have developed a first-order predicate-calculus language which can be used to make demands for expressions to the Nigel grammar. This works by translating the logical forms into two separate structures that are stored in a PENNI database. Nigel inquiries are evaluated against these structures through the aid of a NIKL knowledge base. Discourse context is also stored in the database and lexical entries are obtained from the knowledge base. Adding this facility to Nigel seems to have added only 10 to 20 percent to Nigel's run time. The system is currently implemented in Interlisp and runs on the Xerox family of Lisp machines. A reimplementation in Common Lisp is underway.

5.2. Limitations and Future Plans

For the sake of presentation, we have simplified our description of the working system. Other facilities include an extensive tense, aspect and temporal reference system. There is also a facility for dynamically constructing logical forms for referring expressions. This is used when constants are found in other logical forms that cannot be referred to by name or through a pronoun. There are also certain limitations in our approach. One, which may have occurred to the reader, is that the language our system produces is ambiguous in ways formal logic is not.
For example, "the read statuses of the messages" has one reading which is different from the logical form we used in our example. While scope ambiguities are deeply ingrained in language, they are not a problem in most communication situations. Related to this problem is a potentially important mismatch between logic and functional systemic grammars. These grammars do not control directly for quantification scope. They treat it as only one aspect of the decision making process about determiner choice and constituent ordering. Certainly, there is a great deal of evidence that logical scoping is not often a factor in the interpretation of utterances. Another set of problems concerns the limits we place on logical connectives in logical forms. One limit is the position of negations: we can only negate ProcessOccurrences, e.g., "John didn't send a message.". Negation on other forms, e.g., "John sent no messages.", affects the basic connection with the NIKL model. Furthermore, certain conjunctions have to be shown with a conjunctive Relationship as opposed to logical conjunction. This includes conjunctions between ProcessOccurrences that lead to compound sentences, as well as all disjunctions. Furthermore, we impose a condition that a demand for expression must concern a single connected set of structures. In operation the system actually ignores parts of the logical form that are independent of the main ProcessOccurrences. Because the underlying grammar can only express one event or state of affairs (not counting dependent processes) and its associated circumstances at a time, in order to fit in one sentence all the entities to be mentioned must be somehow connected to one event or state of affairs. We expect that the limitations in the last two paragraphs will be overcome as we develop our text generation system, Penman [Mann 83b].
A theory of text structure is being developed at USC/ISI that will take less restrained forms and map them into multi-sentence text structures [Mann 84]. The use of this intermediate facility will mediate for logical connectives and connectivity by presenting the sentence generator with normalized and connected structures. The word choice decisions the system makes also need to be enhanced. It currently takes as specific a term as possible. Unfortunately, this term could convey only part of the necessary information. Or it could convey more information than that conveyed by the process alone, e.g., in our transmit/send example, "send", unlike "transmit", conveys the existence of a message. We are currently developing a method of dealing with word choice through descriptions in terms of primitive concepts that will support better matching between demands and lexical resources.

(For example, Keenan and Faltz state "We feel that the reason for the poor correspondence is that NP scope differences in natural language are not in fact coded or in general reflected in the derivational history of an expression. If so, we have a situation where we need something in LF which really doesn't correspond to anything in SF" [Keenan 85, p. 21].)

A related limit is the requirement in the current NIKL that all aspects of a concept be present in the logical form in order for the NIKL classifier to have effect. For example, the logical forms must show all aspects of a Send to identify a Transmit as one. A complete model of Send will certainly have more role restrictions than the actee. However, just having an actee which is a Message should be sufficient to indicate that a particular Transmit is a Send. We are working with the developers of NIKL to allow for this type of reasoning. Two other areas of concern relate directly to our most important current activity, which is described in the next paragraph.
First, it is not clear that first-order logic will be sufficiently expressive for all possible situations. Second, it is not clear the use of hand-built logical forms is sufficient to test our design to its fullest extent.

5.3. JANUS

The success of our work to date has led to plans for the inclusion of this design in the Janus natural language interface. Janus is a joint effort between USC/ISI and BBN, Inc., to build the next generation natural language interface within the natural language technology component of the Strategic Computing Initiative [Walker 85]. One feature of the system will be the use of higher-order logics. Plans are underway to test the system in actual use. The future direction of the work presented here will be largely determined by the demands of the Janus effort.

ACKNOWLEDGMENTS

We gratefully acknowledge the assistance of our colleagues: Bill Mann, with whom many of the ideas were developed; Richard Whitney, who implemented many of the inquiry operator functions; Tom Galloway, who suggested many improvements in the paper, as did Robert Albano, Susanna Cumming, Paul Jacobs, Christian Matthiessen, and Lynn Poulton; and Marc Vilain, who gave us KL-TWO. We also gratefully acknowledge the Technische Universitaet Berlin and the members of ESPRIT Project 311 who made it possible for the second author to participate in the project described here.

References

[Appelt 83] Douglas E. Appelt, "Telegram: a grammar formalism for language planning," in Proceedings of the Eighth International Joint Conference on Artificial Intelligence, pp. 595-599, IJCAI, August 1983.

[Brachman 85] R. J. Brachman, V. P. Gilbert, H. J. Levesque, "An Essential Hybrid Reasoning System: Knowledge and Symbol Level Accounts of KRYPTON," in Proceedings of the Ninth International Joint Conference on Artificial Intelligence, pp. 532-539, Los Angeles, CA, August 1985.
[Brachman and Schmolze 85] Brachman, R.J., and Schmolze, J.G., "An Overview of the KL-ONE Knowledge Representation System," Cognitive Science, August 1985, 171-216.

[Davidson 67] D. Davidson, "The Logical Form of Action Sentences," in N. Rescher (ed.), The Logic of Decision and Action, pp. 81-95, The University of Pittsburgh Press, Pittsburgh, 1967.

[Goldman 75] Goldman, N. M., "Conceptual generation," in R. C. Schank (ed.), Conceptual Information Processing, North-Holland, Amsterdam, 1975.

[Habel 82] Christopher Habel, "Referential nets with attributes," in Horecky (ed.), Proc. COLING-82, North-Holland, Amsterdam, 1982.

[Halliday 76] Halliday, M. A. K., System and Function in Language, Oxford University Press, London, 1976.

[Hendrix 75] G. Hendrix, "Expanding the Utility of Semantic Networks through Partitioning," in Advance Papers of the Fourth International Joint Conference on Artificial Intelligence, pp. 115-121, Tbilisi, September 1975.

[Hoeppner et al. 83] Wolfgang Hoeppner, Thomas Christaller, Heinz Marburger, Katharina Morik, Bernhard Nebel, Mike O'Leary, Wolfgang Wahlster, "Beyond domain-independence: experience with the development of a German natural language access system to highly diverse background systems," in Proceedings of the Eighth International Joint Conference on Artificial Intelligence, pp. 588-594, IJCAI, Aug 1983.

[Hovy 85] E. H. Hovy, "Integrating Text Planning and Production in Generation," in Proceedings of the Ninth International Joint Conference on Artificial Intelligence, pp. 115-121, Los Angeles, CA, August 1985.

[Jacobs 85] Paul Jacobs, A Knowledge-Based Approach to Language Production, Ph.D. thesis, University of California, Berkeley, CA, August 1985.

[Kaczmarek 86] T. Kaczmarek, R. Bates, G. Robins, "Recent Developments in NIKL," in AAAI-86, Proceedings of the National Conference on Artificial Intelligence, AAAI, Philadelphia, PA, August 1986.

[Kaczmarek, Mark, and Sondheimer 83] T. Kaczmarek, W. Mark, and N.
Sondheimer, "The Consul/CUE Interface: An Integrated Interactive Environment," in Proceedings of CHI '83 Human Factors in Computing Systems, pp. 98-102, ACM, December 1983.

[Keenan 85] Edward L. Keenan, Leonard M. Faltz, Boolean Semantics for Natural Language, Reidel, Boston, 1985.

[Kukich 85] Karen Kukich, "Explanation Structures in XSEL," in Proceedings of the 23rd Annual Meeting, ACL, Jul 1985.

[Mann 83a] Mann, W. C., "Inquiry semantics: A functional semantics of natural language grammar," in Proceedings of the First Annual Conference, Association for Computational Linguistics, European Chapter, September 1983.

[Mann 83b] Mann, W. C., "An overview of the Penman text generation system," in Proceedings of the National Conference on Artificial Intelligence, pp. 261-265, AAAI, August 1983. Also appears as USC/Information Sciences Institute, RR-83-114.

[Mann 84] Mann, W., Discourse Structures for Text Generation, USC/Information Sciences Institute, Marina del Rey, CA, Technical Report RR-84-127, February 1984.

[Mann & Matthiessen 83] William C. Mann & Christian M.I.M. Matthiessen, Nigel: A Systemic Grammar for Text Generation, USC/Information Sciences Institute, Technical Report ISI/RR-83-105, Feb 1983.

[McAllester 82] D.A. McAllester, Reasoning Utility Package User's Manual, Massachusetts Institute of Technology, Technical Report, April 1982.

[McDonald 83] David D. McDonald, "Natural language generation as a computational problem: an introduction," in Brady & Berwick (eds.), Computational Problems in Discourse, pp. 209-264, MIT Press, Cambridge, 1983.

[McKeown 85] Kathleen R. McKeown, Text generation: using discourse strategies and focus constraints to generate natural language text, Cambridge University Press, Cambridge, 1985.

[Montague 74] R. Montague, Formal Philosophy, Yale University Press, New Haven, CT, 1974.
[Schmolze and Lipkis 83] James Schmolze and Thomas Lipkis, "Classification in the KL-ONE Knowledge Representation System," in Proceedings of the Eighth International Joint Conference on Artificial Intelligence, IJCAI, 1983.

[Shapiro 79] Shapiro, S. C., "Generalized augmented transition network grammars for generation from semantic networks," in Proceedings of the Seventeenth Meeting of the Association for Computational Linguistics, pp. 25-29, August 1979.

[Vilain 85] M. Vilain, "The Restricted Language Architecture of a Hybrid Representation System," in Proceedings of the Ninth International Joint Conference on Artificial Intelligence, pp. 547-551, Los Angeles, CA, August 1985.

[Walker 85] E. Walker, R. Weischedel, N. Sondheimer, "Natural Language Interface Technology," in Strategic Systems Symposium, DARPA, Monterey, CA, October 1985.
UNDERSTANDING PLAN ELLIPSIS

Diane J. Litman
AT&T Bell Laboratories
3C-308A, 600 Mountain Avenue
Murray Hill, NJ 07974*

ABSTRACT

This paper presents an extended and unified approach to the interpretation of sentence fragments and elliptical utterances within the context of a plan-based theory of dialogue understanding. The approach integrates knowledge about plans and knowledge about discourse, enabling the treatment of a variety of difficult linguistic phenomena within a single framework while maintaining the computational advantages of the plan-based approach.

I INTRODUCTION

Naturally occurring dialogues contain incomplete utterances difficult for existing natural language understanding systems to handle. In particular, the interpretation of many of these utterances depends not on syntactic and semantic knowledge as in most linguistic-based systems [4] [7] [9] [27] [28] but instead on pragmatic knowledge such as the underlying plans and goals of a speaker. For example, Allen [1] uses planning knowledge to interpret sentence fragments, syntactically incomplete utterances occurring in isolation or at the beginning of a dialogue, while Carberry [3] uses planning knowledge to interpret a class of elliptical utterances, syntactically incomplete utterances occurring in the course of a dialogue. This paper presents an approach that extends and unifies the interpretation of sentence fragments and elliptical utterances within the context of a plan-based theory of dialogue understanding [13]. The approach integrates knowledge about plans and knowledge about discourse, enabling the use of a single framework to handle a wide variety of difficult discourse phenomena while maintaining the computational advantages of the plan-based approach. Consider the demands that the following dialogue (recorded at the information booth of a train station in Toronto [11]) would place on a computer system taking the role of the clerk during the understanding process.
1) Passenger: Trains going from here to Ottawa?
2) Clerk: Ottawa. Next one is at four-thirty.
3) Passenger: How about Wednesday?
4) Clerk: One at nine-thirty, nine-thirty in the morning, four-thirty in the afternoon... yeah, that's it.

Dialogue 1

* This paper describes work done at the Department of Computer Science, University of Rochester, Rochester, NY 14627. It was supported in part by DARPA under Grant N00014-82-K-0193, NSF under Grant DCR8351665, and ONR under Grant N00014-80-C-0197.

Traditional ellipsis resolution methods based on substitution into a preceding linguistic context are unable to handle the sentence fragment corresponding to utterance (1). This is because there is no linguistic context for utterance (1). Thus, the system would need to draw upon an extra-linguistic context of knowledge about the world and likely goals of the speaker. For example, the system could use the knowledge that people in train stations often want to take train trips to infer that the speaker wants to know the relevant train times for a trip to Ottawa. In other words, the system would need to recognize that the speaker's plan is to take a train trip, and recognize the utterance's relationship to this plan. Similar points can be made with respect to the interpretation of the elliptical utterance (3). Even though a linguistic context has now been established, i.e. utterances (1) and (2), the information explicitly present in this context is still insufficient for the ellipsis resolution task. This is because the previous utterances do not contain entities that "Wednesday" can replace. Because the system could again relate the utterance to a larger context such as plans and goals, sentence fragments as well as such cases of ellipsis will be referred to as plan ellipsis. In this case the system could substitute into the plan underlying utterance (1) to interpret utterance (3). Furthermore,
the system should be able to exploit the fact that words like "how about" often signal such utterance (and thus plan) relationships [5] [7] [8] [20] [25]. Finally, consider what the ellipsis resolution process would look like if "How about Montreal?" were to replace utterance (3). Although in this case substitution into the preceding linguistic context would suffice (with Montreal replacing Ottawa in utterance (1)), the ellipsis could alternatively be processed by again viewing utterance (3) in terms of the plan underlying utterance (1). A robust system should be able to use and coordinate linguistic and plan-based analyses of the same phenomena. The next two sections of this paper present a plan-based framework that addresses these issues. Section II introduces the framework, followed in Section III by details needed for the plan ellipsis resolution process. Section IV illustrates the approach by tracing the processing of the dialogue above.

II PLAN RECOGNITION AND DISCOURSE ANALYSIS: AN INTEGRATED FRAMEWORK

In a plan-based approach to language understanding, an utterance is considered understood when it has been related to some underlying speaker plan in the domain of discourse. While previous works have explicitly represented and recognized such domain plans (e.g. take a train trip) [1] [2] [7] [24] [25], the ways that utterances could be related to such plans have been limited and not of particular concern. As a result, a variety of subdialogues as well as many forms of plan ellipsis have still proven problematic for the plan-based approach. In the current work a set of domain-independent discourse plans have been introduced to explicitly represent and reason about relationships between utterances and domain plans. Technically, discourse plans refer to domain plans, i.e. they take domain plans as arguments and are thus meta-plans. Intuitively, domain plans model the contents of a topic while discourse plans model the actual manipulations of a topic.
For example, there are discourse plans to introduce domain plans (topics), continue plans, specify plans, debug plans, and so on. In actuality, discourse plans can manipulate other discourse plans as well as domain plans, i.e. discourse plans can also become topics of a conversation.

From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved.

In Dialogue 1 "Trains going from here to Ottawa?" achieves a discourse plan that introduces a domain plan to take a trip. Discussion of the domain plan is then continued by "Ottawa. Next one is at four-thirty" and modified by "How about Wednesday?" While the identification and specification of a small set of such utterance relationships has been inspired by many linguistic models of discourse [10] [17] [18] [20], the reformulation of such relationships within a plan-based framework allows their representation in terms of planning operators and their computation via a plan recognition process [14]. Section III presents detailed representations of both domain and discourse plans, while the plan recognition process [12] [13] [15] is implicitly reviewed when tracing through the example of Section IV. (Briefly, a discourse plan is recognized from every utterance via forward chaining, then a process of constraint satisfaction is used to initiate recognition of the domain and any other discourse plans related to the utterance.) To record and monitor execution of the discourse and domain plans active at any point in a dialogue, a dialogue context in the form of a plan stack will also be introduced. (Many models of discourse [5] [19] [20] have argued that topic manipulations follow a stack-like discipline.) During a dialogue a stack of executing and suspended plans will be built and maintained by the plan recognizer, each discourse plan referring to the plan below it, with the domain-dependent task plan on the bottom and the original discourse plan at the top.
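The stack discipline just described can be sketched as an ordinary pushdown stack. This is an illustrative reconstruction, not the paper's implementation; the plan names are borrowed from the train example for concreteness.

```python
# Minimal sketch of the dialogue-context plan stack: the domain plan sits
# at the bottom, discourse plans are pushed on top of the plans they refer
# to, and a completed topic is popped to resume the one beneath it.

class PlanStack:
    def __init__(self):
        self.plans = []  # bottom ... top

    def push(self, plan):
        self.plans.append(plan)

    def pop(self):
        return self.plans.pop()

    def top(self):
        return self.plans[-1] if self.plans else None

context = PlanStack()
context.push("TAKE-TRAIN-TRIP")     # domain-dependent task plan (bottom)
context.push("IDENTIFY-PARAMETER")  # discourse plan referring to it
context.push("INTRODUCE-PLAN")      # original discourse plan (top)
context.pop()                       # INTRODUCE-PLAN completes
print(context.top())                # IDENTIFY-PARAMETER resumes
```

Popping the completed discourse plan exposes the suspended plan below it, which is the "return to the plan beneath" behavior the coherence heuristics rely on.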
The recognition of discourse plans will be heuristically controlled by taking into account the influence of this plan stack; candidate discourse plans will be searched according to a priority order based on linguistic coherence with the current context. For example, the plan recognizer will prefer discourse plans representing stacked topic continuations to those representing topic changes. Finally, when discourse information can be ascertained through purely linguistic means (e.g. syntactic and semantic ellipsis resolution), the information is input to the plan recognition system along with the utterance parse. Such information can then be used to either reinforce or explicitly modify the plan recognizer's processing. For example, the priority ordering of discourse plans can be overruled based on the presence of linguistic clues (e.g. phrases like "how about") correlated with less likely discourse plans. Note, however, that in the absence of such information the system can still proceed in the purely plan-based manner described above.

III DOMAIN AND DISCOURSE PLAN REPRESENTATION

In terms of the framework described above, resolution of plan ellipsis involves recognition of the domain plan underlying the elliptical utterance, and recognition of the discourse plan that actually relates the utterance to this domain plan. A hearer must thus bring to the resolution task some knowledge about typical speaker domain and discourse plans. Schematic knowledge regarding both domain and discourse plans is represented using a standard STRIPS-based notation [6] [21]. Every plan schema has a header, a parameterized action description that names the plan. The parameters of a plan are the parameters in the header. Action descriptions are defined in terms of prerequisites, decompositions, effects and constraints. Prerequisites are conditions that need to hold (or to be made to hold) before the action can be performed.
Effects are conditions that will hold after the action has been successfully executed. Decompositions enable hierarchical planning. Although the action description of the header may be usefully thought of as a single action achieving a goal, such an action might not be executable. Action descriptions are in actuality composed of executable actions and possibly other action descriptions (i.e. other plans). Finally, associated with each plan is a set of applicability conditions called constraints,** which are similar to prerequisites except that the planner never attempts to achieve a constraint if it is false. The plan recognizer uses plan schemas to recognize plan instantiations underlying the production of an utterance. Figure 1 presents a sample domain plan schema for the train station domain. The plan has header "TAKE-TRAIN-TRIP(agent, departTrain, destination)" and parameters "agent," "departTrain" and "destination," where the naming conventions for the parameters reflect an underlying type hierarchy as commonly found in semantic network systems. The plan is performed by first selecting a train, followed by buying a ticket for the train, then boarding the train. Each of these actions is itself either another action description (i.e. plan schema) or an executable action. The constraints capture the facts that the train taken, i.e. departTrain, goes to destination, that this fact is the only restriction on the set of possible candidates (departTrainSet) for departTrain, and that the ticket purchased will be used to take departTrain. The prerequisites and effects are not shown. Similarly the specification of other plan schemas needed in this domain, e.g. SELECT-TRAIN, BUY-TICKET, BOARD, MEET, and so on, are not shown since they will not be needed to process the example below.
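A plan schema of this kind can be rendered as a simple record. The sketch below is my own encoding, not the paper's data structure; it mirrors the TAKE-TRAIN-TRIP schema described in the text, with the schema components stored as plain strings for illustration.

```python
# Illustrative encoding of a STRIPS-style plan schema: a parameterized
# header naming the plan, a decomposition into sub-actions, and
# prerequisites/effects/constraints (constraints are never planned for).
from dataclasses import dataclass, field

@dataclass
class PlanSchema:
    header: str                     # parameterized action description
    parameters: list
    decomposition: list = field(default_factory=list)
    prerequisites: list = field(default_factory=list)
    effects: list = field(default_factory=list)
    constraints: list = field(default_factory=list)

take_train_trip = PlanSchema(
    header="TAKE-TRAIN-TRIP(agent, departTrain, destination)",
    parameters=["agent", "departTrain", "destination"],
    decomposition=[
        "SELECT-TRAIN(agent, departTrain, departTrainSet)",
        "BUY-TICKET(agent, clerk, ticket)",
        "BOARD(agent, departTrain)",
    ],
    constraints=[
        "EQUAL(destination, arrive-station(departTrain))",
    ],
)
print(len(take_train_trip.decomposition))  # 3 sub-actions
```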
HEADER: TAKE-TRAIN-TRIP(agent, departTrain, destination)
DECOMPOSITION:
  SELECT-TRAIN(agent, departTrain, departTrainSet)
  BUY-TICKET(agent, clerk, ticket)
  BOARD(agent, departTrain)
CONSTRAINTS:
  EQUAL(destination, arrive-station(departTrain))
  EQUAL(destination, arrive-station(departTrainSet))
  EQUAL(departTrain, object(ticket))

Figure 1. Domain Plan Schema for the Train Domain

Although discourse plans encode knowledge about communication, they are represented in the same way as domain plans except for the fact that they refer to other plans (i.e. they take other plans as arguments and are thus technically meta-plans). Figures 2 and 3 present several examples. The first discourse plan, MODIFY-PLAN, represents the replacement of a plan by one of several possible plan modifications. In particular, a new action is constructed from an old plan action by changing the assignment of a parameter. A modified plan is then constructed and executed by replacing the old action with this modification. The constraints specify the relationship between the plan and its modification, using a simple vocabulary for referring to and describing plans (e.g. PARAMETER, STEP). The prerequisite indicates that the plan to be modified must have already been introduced into the discourse context as a previous topic (INTRODUCE-PLAN is shown in Figure 3). The decomposition specifies that MODIFY-PLAN may be achieved by requesting execution of the modified action. As will be discussed below, all discourse plans will be recognized from speech acts [23] such as REQUEST. Finally, the NEXT effect states that newAction will be the next action performed in the modified plan, and the POP effect and REPLACE constraint explicitly overrule the normal stack behavior of the context mechanism. Informally, instead of returning to oldPlan upon completion of the discourse plan, the plan modification newPlan will instead be executed.
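The substitution behind plan modification can be sketched as a one-line rewrite over an action's arguments. This is an illustrative reconstruction: the paper specifies SUBST only through the constraints of MODIFY-PLAN, and the (head, args) tuple representation of actions is my assumption.

```python
# Illustrative SUBST(change, changee, oldAction): construct newAction by
# replacing the old parameter binding (changee) with the new one (change).
# Actions are assumed here to be (head, args) tuples.

def subst(change, changee, action):
    """Return the action with every occurrence of changee replaced by change."""
    head, args = action
    return (head, tuple(change if a == changee else a for a in args))

old_action = ("INFORMREF", ("clerk1", "person1", "departTrainSet1"))
new_action = subst("departTrainSet2", "departTrainSet1", old_action)
print(new_action[1][2])  # departTrainSet2
```

The EQUAL(TYPE(change), TYPE(changee)) constraint would additionally require the new and old values to be of the same type in the underlying type hierarchy, a check omitted from this sketch.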
MODIFY-PLAN is often signaled by the clue phrase "how about," as illustrated in the dialogue of Section I.

HEADER: INTRODUCE-PLAN(speaker, hearer, action, plan)
DECOMPOSITION: REQUEST(speaker, hearer, action)
EFFECTS:
  WANT(hearer, plan)
  NEXT(action, plan)
CONSTRAINTS:
  STEP(action, plan)
  AGENT(action, hearer)

HEADER: IDENTIFY-PARAMETER(speaker, hearer, parameter, action, plan)
DECOMPOSITION: INFORMREF(speaker, hearer, term, proposition)
EFFECTS:
  NEXT(action, plan)
  KNOW(hearer, parameter, action, plan)
CONSTRAINTS:
  PARAMETER(parameter, action)
  STEP(action, plan)
  PARAMETER(parameter, proposition)
  PARAMETER(term, proposition)
  WANT(hearer, plan)

HEADER: MODIFY-PLAN(speaker, hearer, change, changee, newAction, oldAction, oldPlan, newPlan)
PREREQUISITE: WANT(hearer, oldPlan)
DECOMPOSITION: REQUEST(speaker, hearer, newAction)
EFFECTS:
  POP(CLOSURE(oldPlan))
  NEXT(newAction)
CONSTRAINTS:
  PARAMETER(oldAction, changee)
  STEP(oldAction, oldPlan)
  STEP(newAction, newPlan)
  EQUAL(newAction, SUBST(change, changee, oldAction))
  EQUAL(TYPE(change), TYPE(changee))
  -EQUAL(change, changee)
  REPLACE(stack, oldStack)

Figure 2. The Discourse Relationship of Plan Modification

Figure 3 presents the other discourse plans that will be needed for the example (see [13] for a larger set). INTRODUCE-PLAN models topic introduction as well as topic change, i.e. since INTRODUCE-PLAN has no prerequisites it can occur in any discourse context. As specified via the decomposition and constraints, INTRODUCE-PLAN takes a plan of the speaker that involves the hearer and presents it to the hearer, by requesting an action that is in the plan and has the hearer as agent. The effects specify that the hearer (assumed cooperative) will adopt the joint plan as a goal, and that the action requested will be the next action performed in the plan. Just as with MODIFY-PLAN, INTRODUCE-PLAN may be signaled by the clue phrase "how about," e.g. "How about the movies?" The second plan of Figure 3,
IDENTIFY-PARAMETER, models clarifications corresponding to parameter specification. In particular, by executing IDENTIFY-PARAMETER speaker provides hearer with a description of parameter that is informative enough to allow hearer to execute action in plan. As with the previous discourse plans, the decomposition is specified via a speech act and the relationships between the discourse plan and the plan being clarified are specified by the constraints. To illustrate how these discourse plans represent the relationships between an utterance and its plan context, consider the following (slightly cleaned-up) dialogue fragment between a

Figure 3. Plan Introduction and Clarification

computer user and operator [16]:***

1) User: Please mount a magtape for me.
2) It's tape1.
3) Operator: We are not allowed to mount that magtape.
4) You will have to talk to operator2 about it.
5) User: How about tape tape2?

Dialogue 2

The user's first utterance introduces a plan out of the tape domain, a plan which he or she then clarifies (utterance 2) and later modifies (utterance 5). In terms of instantiations of the schemas given above, utterances (1), (2) and (5) would be recognized as executing INTRODUCE-PLAN(user, system, mount a tape, mount plan), IDENTIFY-PARAMETER(user, system, tape, mount a tape, mount plan), and MODIFY-PLAN(user, system, tape2, tape1, mount tape2, mount tape1, tape1 mount plan, tape2 mount plan), respectively. Although new domain plans are needed to process this dialogue (e.g. mount a tape), the discourse plans and the plan recognition algorithm will remain the same across domains [13]. Finally, all that remains to be discussed are the definitions of the speech acts REQUEST and INFORMREF, used in the discourse plan decompositions given above. Basically the treatment of the speech acts is identical to the treatment given in Allen and Perrault [1].
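In the Allen and Perrault style of speech act recognition, sentence mood selects the preferred speech act hypothesis: interrogative utterances are typically REQUESTs and declarative ones INFORMs. A minimal sketch of that preference, with the fallback ordering for fragments being my assumption:

```python
# Illustrative mood-based preference ordering over speech act hypotheses.
# Both hypotheses are kept (a noun-phrase fragment matches either
# decomposition); the mood only determines which is tried first.

def speech_act_hypotheses(mood):
    if mood == "interrogative":
        return ["REQUEST", "INFORM"]   # typical REQUEST is interrogative
    if mood == "declarative":
        return ["INFORM", "REQUEST"]   # typical INFORM is declarative
    return ["REQUEST", "INFORM"]       # fragments: assumed default order

print(speech_act_hypotheses("interrogative")[0])  # REQUEST
```

In the train station trace below, the interrogative fragment "Trains going from here to Ottawa?" is accordingly first interpreted as a REQUEST.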
For example, speech act decompositions are specified in terms of various surface linguistic acts (e.g. SURFACE-REQUEST), utterance templates correlated with sentence mood. However, to allow sentence fragments and elliptical utterances such as definite noun phrases at any point in a discourse, a new surface linguistic act called SURFACE-NP has also been included. (Carberry [3] contains a somewhat similar proposal.) As we will see, the addition of this decomposition connects incomplete utterance resolution with the plan recognition process. In particular, an incomplete utterance will be parsed as a SURFACE-NP, then the underlying speech act, discourse and domain plans recognized from the SURFACE-NP via the normal plan recognition process. Figure 4 presents the details of the SURFACE-NP addition, where CONTAINS(x, noun-phrase) means that x involves the noun-phrase as a parameter, or recursively as a parameter of a parameter, and so on. As in [1] a typical REQUEST is interrogative and a typical INFORM declarative. INFORMREF and INFORMIF are two variations of INFORM needed to handle wh-questions and yes/no questions, respectively. For example, "When does the train leave?" is a REQUEST to INFORMREF.

HEADER: REQUEST(speaker, hearer, action)
DECOMPOSITION: SURFACE-NP(speaker, hearer, noun-phrase)
CONSTRAINT: CONTAINS(action, noun-phrase)

HEADER: INFORM(speaker, hearer, proposition)
DECOMPOSITION: SURFACE-NP(speaker, hearer, noun-phrase)
CONSTRAINT: CONTAINS(proposition, noun-phrase)

Figure 4. Elliptical Speech Act Schemas

IV EXAMPLE

This section uses the framework developed in the last two sections to illustrate the system's processing**** of Dialogue 1.

*** [13] contains a full analysis of this and several other examples.
**** Although the behavior to be described is fully specified by the theory, the implementation is partial and corresponds to the main contribution of the theory (i.e. the new model of plan recognition). However, all
simulated computational processes have been implemented in other systems; [13] contains a full discussion of the implementation.

The example is first traced to show how information missing in fragments and elliptical utterances can be recovered as a side effect of the plan recognition process. Utterance (3) is then simplified as in Section I, illustrating the coordination of plan-based analyses with more traditional linguistic analyses when available. A typical syntactic and semantic analysis of "Trains going from here to Ottawa?" e.g. SURFACE-NP(person1, clerk1, departTrainSet1) with EQUAL(arrive-station(departTrainSet1), Ottawa), is given as input to the plan recognition process of forward chaining. Although the SURFACE-NP matches the decompositions of both the REQUEST and INFORM schemas, since the mood is interrogative the plan recognizer prefers the REQUEST as in [1]. If the mood was declarative, the match with the decomposition of the INFORM schema would have been preferred. Furthermore, since in this particular domain the clerk's role is to provide information via speech acts, the parameter action of REQUEST is hypothesized to be a system INFORM (either an INFORMIF or INFORMREF), constrained to contain the noun-phrase departTrainSet1, e.g. INFORMREF(clerk1, person1, ?departTrainSet, EQUAL(?departTrainSet, departTrainSet1)). This INFORMREF will henceforth be called I1. As we will see, this instantiation will allow the plan recognizer to postulate a passenger discourse plan to introduce a system discourse plan to clarify a passenger domain plan to take a train trip. In contrast no plan interpretation will be constructed from the INFORMIF interpretation of the fragment, i.e. no chain of discourse plans to a domain plan can be constructed. The actual plan recognition process proceeds as follows.
Since at the beginning of the dialogue there is no context of plan instantiations, the system expects that the speaker will try to introduce a domain plan instantiation. In particular, using the INTRODUCE-PLAN schema, the REQUEST to INFORMREF hypothesized above, and the plan recognition process of forward chaining via plan decompositions, the system matches the REQUEST with the decomposition of INTRODUCE-PLAN, yielding the instantiation INTRODUCE-PLAN(person1, clerk1, I1, ?plan) (call it PLAN1), with constraints STEP(I1, ?plan) and AGENT(I1, clerk1). As in [1] this hypothesis is then evaluated using a set of plan heuristics, e.g. constraints of any recognized plan must be satisfiable. To satisfy the STEP constraint a plan containing I1 will be created and arbitrarily called PLAN2. Nothing more needs to be done with respect to the second constraint, since it is already satisfied. The system then attempts to expand PLAN2 using an analogous plan recognition process. The recognizer again uses the domain and discourse schemas and postulates that I1 of PLAN2 is the decomposition of an IDENTIFY-PARAMETER(clerk1, person1, ?parameter, ?action, ?plan). Furthermore, in satisfying the constraints on this plan, i.e.

1. PARAMETER(?parameter, ?action)
2. STEP(?action, ?plan)
3. PARAMETER(?parameter, EQUAL(?departTrainSet, departTrainSet1))
4. PARAMETER(?departTrainSet, EQUAL(?departTrainSet, departTrainSet1))
5. WANT(person1, ?plan)

a third plan is introduced (constraint 5), containing SELECT-TRAIN as a step (constraint 2). This is because SELECT-TRAIN is the only action that can contain a train set parameter (constraints 1 and 3) as described via the equality of the INFORMREF (constraint 4). Just as PLAN2, PLAN3 then becomes input to a new plan recognition process. Using the domain plan schema of Figure 1, SELECT-TRAIN of PLAN3 is hypothesized to be the decomposition of an instantiation of TAKE-TRAIN-TRIP. Since in this case no more plans are introduced,
the process of plan recognition also ends. The final hypothesis is that the passenger executed a discourse plan (PLAN1) that introduced a system discourse plan (PLAN2) to clarify a parameter in a passenger domain plan (PLAN3) to take a trip. The various effects of all the plans are then asserted, the postulated plans are expanded top down to include the rest of their steps (based on the plan schemas), and the context mechanism pushes the plans onto the empty plan stack that represents the discourse context preceding utterance (1). Note that all three plans are recognized before any are placed on the stack. The updated stack is shown in Figure 5, with PLAN1 at the top,

PLAN1 [COMPLETED]
  INTRODUCE-PLAN(person1, clerk1, I1, PLAN2)
    REQUEST(person1, clerk1, I1)
      SURFACE-NP(person1, clerk1, departTrainSet1) [LAST]
      with EQUAL(arrive-station(departTrainSet1), Ottawa)

PLAN2
  IDENTIFY-PARAMETER(clerk1, person1, departTrainSet1, S1, PLAN3)
    I1: INFORMREF(clerk1, person1, ?departTrainSet,
        E1: EQUAL(?departTrainSet, departTrainSet1)) [NEXT]

PLAN3
  TAKE-TRAIN-TRIP(person1, ?departTrain, Ottawa)
    S1: SELECT-TRAIN(person1, ?departTrain, departTrainSet1) [NEXT]
    BUY-TICKET(person1, clerk1, ?departTicket)
    BOARD(person1, ?departTrain)

Figure 5. The Plan Context after Utterance (1)

PLAN2 in the middle, and PLAN3 at the bottom. In other words, the stack encodes the information that PLAN1 was executed, PLAN2 will be executed upon completion of PLAN1, and PLAN3 will be executed upon completion of PLAN2. Solid lines represent plan recognition inferences due to forward chaining, while dotted lines represent inferences due to later plan expansion. As desired, the plan recognizer has constructed a plan-based interpretation of the fragment in terms of expected discourse and domain plans, which could then be used to construct and generate a response such as the clerk's "Ottawa.
Next one is at four-thirty." Unfortunately, although the passenger is currently in the train station the train to be boarded leaves on a later date. The passenger thus uses a new utterance, "How about Wednesday?" to again try to obtain the needed information, by modifying the previous plan recognized by the system. The parser analyses "how about" as a clue phrase (using the plan recognizer's list of standard linguistic clues), "Wednesday" as SURFACE-NP(person1, clerk1, Wednesday), and inputs the information to the plan recognizer. As above the SURFACE-NP is hypothesized to be the decomposition of a REQUEST to perform some type of system INFORM involving Wednesday. Then, using the knowledge that "how about" typically signals either INTRODUCE-PLAN or MODIFY-PLAN, the plan recognizer modifies its processing. In particular, instead of assuming that the REQUEST is a topic continuation (the preferred or most coherent hypothesis in a non-null plan context), the utterance is assumed to be either a topic introduction or a topic modification. The latter hypothesis is most preferred by the coherence heuristics (MODIFY-PLAN builds on the previous context while INTRODUCE-PLAN doesn't), yielding MODIFY-PLAN(person1, clerk1, ?change, ?changee, ?action, ?oldAction, ?oldPlan, ?newPlan), where (1) ?action is some system INFORM involving Wednesday. The plan is then instantiated as follows. Since the REPLACE constraint indicates that MODIFY-PLAN uses an old context (here the stack of Figure 5 rather than the stack after the system's response), the prerequisite WANT(clerk1, ?oldPlan) can be satisfied by PLAN1, PLAN2 or PLAN3. Since PLAN2 is the most recently discussed, but unfinished, topic (and thus preferred via the coherence heuristics), PLAN1 is popped and the PLAN2 binding tried first. The rest of the parameters are bound via satisfaction of the following constraints:

(2) PARAMETER(?oldAction, ?changee)
(3) STEP(?oldAction,
PLAN2)
(4) STEP(?action, ?newPlan)
(5) EQUAL(?action, SUBST(?change, ?changee, ?oldAction))
(6) EQUAL(TYPE(?change), TYPE(?changee))
(7) -EQUAL(?change, ?changee)

Constraint (3) can be satisfied by binding ?oldAction to I1 or IDENTIFY-PARAMETER, but with constraints (1) and (5) we know it must be bound to I1. Then, with (1), (2), (5), (6), and (7) ?action gets further specified to an INFORMREF with departTrainSet2, where EQUAL(time(departTrainSet2), Wednesday). Finally, satisfaction of constraint (4) results in the creation of a new plan (call it PLAN4) containing the new INFORMREF. This INFORMREF then becomes the input to a new plan recognition process. As with the initial INFORMREF I1, from the new INFORMREF (call it I2) the system can recognize an IDENTIFY-PARAMETER of SELECT-TRAIN. PLAN5 is introduced to contain this SELECT-TRAIN, and another recursive recognition procedure connects SELECT-TRAIN to the higher level domain plan TAKE-TRAIN-TRIP. Note that although the first recognized plan used the previous context as a template, once the modified step was found the rest of the plan stack had to be re-recognized in order to propagate the modification. The various effects of all the plans are then asserted, in particular the effect of MODIFY-PLAN pops PLAN2 and its domain plan PLAN3 off the stack. The context mechanism then pushes the new plans on the now empty stack, as shown in Figure 6. As a last example, consider replacing utterance (3) with the utterance "How about Montreal?" In this case the utterance is similar to a type of ellipsis handled linguistically by many existing

[COMPLETED]
  MODIFY-PLAN(person1, clerk1, E2, E1, I2, I1, PLAN2, PLAN4)
    REQUEST(person1, clerk1, I2)
      SURFACE-NP(person1, clerk1, Wednesday) [LAST]

  IDENTIFY-PARAMETER(clerk1, person1, departTrainSet2, S2, PLAN5)
    I2: INFORMREF(clerk1, person1, ?departTrainSet,
        E2: EQUAL(?departTrainSet, departTrainSet2)) [NEXT]

  TAKE-TRAIN-TRIP(person1, ?departTrain,
Ottawa) /ICKET BOARD S2: SELECT-TRAIN (personl.clerkl.?departTicket) (personl,?departTrain,departTrainSet?) (personl.?departTrain) with EQUAL(time(departTrainSet2J.Wednesday) EQUAL(arrive-station(departTrainSet?).Ottawa) Figure 6. The Modified Plan Context NATURAL LANGUAGE / 623 systems [4] [7] [9] [27] [28], since the noun-phrase “Montreal” can replace the prekious lexical item “Ottawa.” As mentioned in Sec- tion II. the system will receive such linguistic analyses of discourse phenomena along with the parser input. and use the analyses to constrain the plan recognition process. Even though such phenomena can alternatively be explained in plan-based terms. since linguistic resolution methods are typically simpler than plan-based methods such an approach increases the effi- ciency of the plan recognizer. For example. in the “How about Montreal?” case a complete parse would be input to the plan recognition system rather than just a SURFACE-NP. Although the complete parse would contain previously intermediate plan- based results. since the results are now known from the start the search and constraint satisfaction processes are much quicker. i.e. much of the work now involves plan verification rather than plan construction. The direct recognition of discourse plans through clue phrases (as opposed to their recognition through bearch processes) illustrates similar sacings. As seen above, however. if such results are unavailable or if they do not lead to any plan interpretations. an alternative plan-based analysis of the discourse phenomena -111 eventually be provided. Y COJIPARISOSS ASD CONCLUSIONS Although many elliptical utterances can be understood by directly modifying a previous utterance [4] [7] [9] [27] [28]. such approaches can not handle elliptical utterances when the missing portions refer to entities in a speaker’s non-linguistic or prag- matic context. 
As discussed above, a pragmatic context is also needed when understanding sentence fragments (as well as when interpreting other linguistic phenomena such as model-interpretative anaphora [22]). Allen and Perrault [1] were among the first to propose a plan-based pragmatic theory for the interpretation of sentence fragments. The theory was restricted, however, in that the plan recognition process could only deal with utterances in isolation rather than in the context of a dialogue. More recently Carberry has addressed the problem of ellipsis resolution by building on a plan recognition framework for dialogue understanding [2]. However, in order to process intersentential ellipsis the non-elliptical understanding process had to be supplemented with discourse knowledge and mechanisms [3]. Furthermore, the framework could not handle elliptical utterances that also represented topic change. This is in contrast to the current work, where knowledge about a general set of discourse plans was incorporated into a plan-based theory from the beginning, enabling the use of a single framework to recognize plans from sentence fragments, elliptical utterances, and non-elliptical utterances in a wide variety of dialogues.

ACKNOWLEDGEMENTS

Thanks to Ron Brachman, Julia Hirschberg, Brian Williams, and Bruce Ballard for comments on earlier versions of this paper.

REFERENCES

1. J. F. Allen and C. R. Perrault, "Analyzing Intention in Utterances", Artificial Intelligence 15, 3 (1980), 143-178.
2. S. Carberry, "Tracking User Goals in an Information-Seeking Environment", AAAI, Washington, D.C., August 1983, 59-63.
3. S. Carberry, "A Pragmatics-Based Approach to Understanding Intersentential Ellipsis", ACL, Chicago, July 1985, 188-197.
4. J. G. Carbonell and P. J. Hayes, "Recovery Strategies for Parsing Extragrammatical Language", AJCL 9, 3-4 (July-December 1983), 123-146.
5. R. Cohen, "A Computational Model for the Analysis of Arguments", Ph.D. Thesis and Tech. Rep. 151, University of Toronto, October 1983.
6. R. E. Fikes and N. J. Nilsson, "STRIPS: A New Approach to the Application of Theorem Proving to Problem Solving", Artificial Intelligence 2, 3/4 (1971), 189-208.
7. B. J. Grosz, "The Representation and Use of Focus in Dialogue Understanding", Technical Note 151, SRI, July 1977.
8. B. J. Grosz and C. L. Sidner, "Discourse Structure and the Proper Treatment of Interruptions", IJCAI, Los Angeles, August 1985, 831-839.
9. G. G. Hendrix, "Human Engineering for Applied Natural Language Processing", IJCAI-77, MIT, August 1977, 183-191.
10. J. R. Hobbs, "On the Coherence and Structure of Discourse", in The Structure of Discourse, Ablex Publishing Corporation, forthcoming. Also CSLI (Stanford) Report No. CSLI-85-37, October 1985.
11. M. K. Horrigan, Modelling Simple Dialogs, May 1977.
12. D. J. Litman and J. F. Allen, "A Plan Recognition Model for Clarification Subdialogues", Coling84, Stanford, July 1984, 302-311.
13. D. J. Litman, Plan Recognition and Discourse Analysis: An Integrated Approach for Understanding Dialogues, PhD Thesis and Technical Report 170, University of Rochester, 1985.
14. D. J. Litman, "Linguistic Coherence: A Plan-Based Alternative", ACL, New York City, 1986.
15. D. J. Litman and J. F. Allen, "A Plan Recognition Model for Subdialogues in Conversation", Cognitive Science, to appear. Also University of Rochester Tech. Rep. 141, November 1984.
16. W. Mann, "Corpus of Computer Operator Transcripts", Unpublished Manuscript, ISI, 1970's.
17. W. C. Mann, "Discourse Structures for Text Generation", Coling84, Stanford, July 1984, 367-375.
18. K. R. McKeown, Generating Natural Language Text in Response to Questions about Database Structure, PhD Thesis, University of Pennsylvania, Philadelphia, 1982.
19. L. Polanyi and R. J. H. Scha, "The Syntax of Discourse", Text (Special Issue: Formal Methods of Discourse Analysis) 3, 3 (1983), 261-270.
20. R. Reichman, "Plain Speaking: A Theory and Grammar of Spontaneous Discourse", Report No. 4681, Bolt, Beranek and Newman, 1981.
21. E. D. Sacerdoti, A Structure for Plans and Behavior, Elsevier, New York, 1977.
22. I. Sag and J. Hankamer, "Toward a Theory of Anaphoric Processing", Linguistics and Philosophy, 1984, 325-345.
23. J. R. Searle, Speech Acts: An Essay in the Philosophy of Language, Cambridge University Press, New York, 1969.
24. C. L. Sidner and D. J. Israel, "Recognizing Intended Meaning and Speakers' Plans", IJCAI, Vancouver, 1981, 203-208.
25. C. L. Sidner, "Plan Parsing for Intended Response Recognition in Discourse", Computational Intelligence 1, 1 (February 1985), 1-10.
26. M. Stefik, "Planning with Constraints (MOLGEN: Part 1)", Artificial Intelligence 16 (1981), 111-140.
27. D. L. Waltz and B. A. Goodman, "Writing a Natural Language Data Base System", IJCAI-77, MIT, August 1977, 144-150.
28. R. M. Weischedel and N. K. Sondheimer, "An Improved Heuristic for Ellipsis Processing", ACL, Toronto, June 1982, 85-88.
DYNAMICALLY COMBINING SYNTAX AND SEMANTICS IN NATURAL LANGUAGE PROCESSING

Steven L. Lytinen
Department of Computer Science, Yale University
Box 2158 Yale Station, New Haven, CT 06520, USA

ABSTRACT

A controversy has existed over the interaction of syntax and semantics in natural language understanding systems. According to theories of integrated parsing, syntactic and semantic processing should take place simultaneously, with the parsing process driven by a single rule base which contains both syntactic and semantic knowledge. This is in sharp contrast to traditional linguistic approaches to language analysis, in which syntactic and semantic processing are performed separately from one another, driven by completely separate sets of syntactic and semantic rules. This paper presents an approach to natural language understanding which is a compromise between these two views. It is an integrated approach, in the sense that syntactic and semantic processing take place at the same time. However, unlike previous integrated systems, the approach described here uses largely separate bodies of syntactic and semantic knowledge, which are combined only at the time of processing.

I. INTRODUCTION

A controversy exists among researchers in natural language processing over the way in which syntax and semantics** should interact with each other. A modular approach to syntactic and semantic processing was argued for in [2]. In this approach, syntactic analysis is performed on an input text, producing a syntactic parse tree, which is then operated on by semantic interpretation rules. This sort of modular approach, or variations in which a limited amount of interaction is permitted between syntactic and semantic components, has been used in many natural language understanding systems, including LUNAR [14], Winograd's [12] system, and PARSIFAL [5]. In contrast, others have argued for an integrated approach to natural language processing.
According to this argument, since semantic information often can be of use in making decisions about the syntactic structure of a text, semantics should be utilized as early as possible in the parsing process. Proponents of the integrated approach have argued that there should be no discernable stages in the language understanding process. Syntactic and semantic processing are performed simultaneously in integrated systems, usually by parsing rules that contain a mixture of both syntactic and semantic information. Also in contrast to the modular approach, no separate syntactic representation is built during language understanding. Instead, a "conceptual" representation, or representation of the meaning of the input text, is built directly during processing of the input. Examples of natural language systems which use the integrated approach to parsing include Wilks' parser [10] [11], ELI [7], the Integrated Partial Parser (IPP) [3], and the Word Expert Parser [9]. In this paper, I will argue that both sides of the syntax-semantics controversy are too extreme. Although semantic information should be brought to bear as quickly as possible so as to resolve syntactic ambiguities, I will argue that the way in which this has been accomplished in previous integrated parsers, by statically combining syntactic and semantic knowledge together in parsing rules, is representationally inefficient.

*This research was done at the Computer Science Department of Yale University. It was supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under contract No. N00014-82-K-0139.

**By semantics, I mean the traditional linguistic concept of semantics, or knowledge about the meanings of words, as well as pragmatics, or knowledge about the world and about how language is used.
As an alternative, I will present an approach to integrated parsing in which syntactic and semantic knowledge are dynamically combined at parse-time. In this approach, an explicit syntactic grammar is used by the system, encoded in syntactic rules similar to those used in PARSIFAL [5]. However, the application of these rules is quite different from syntax-first parsers, in that semantic information is used to determine when to apply particular syntactic rules. This approach to integrated parsing has been implemented in a machine translation system called MOPTRANS, which parses short (1-3 sentences) newspaper stories about terrorism and crime, in English, Spanish, French, German, and Chinese. Translations are produced for these stories in English and/or German. Enough vocabulary, linguistic knowledge, and semantic knowledge have been encoded in the parser to enable it to parse 25-50 stories for each input language. This paper will not include a discussion of MOPTRANS' semantic analyzer. For a detailed description, see [4]. Instead, this paper will focus on the way in which the semantics of the system is integrated with syntactic processing, and why this integration is desirable.

II. WHY SYNTAX NEEDS SEMANTICS

Consider the following sentences:

The cleaners dry-cleaned the coat that Mary found at the rummage sale for $10.
The cleaners dry-cleaned the coat that Mary found in the garbage for $10.

The decision as to where to attach the prepositional phrase "for $10" in these two sentences cannot be made on the basis of syntactic information alone. However, due to the differences in meaning of the word "found," their syntactic structures are not the same. In the first example, since "found" refers to a purchase, it is appropriate to attach "for $10" to it.

From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved.
However, in the second sentence, since it does not make sense to find something in the garbage for $10, the prepositional phrase must attach to "dry-cleaned." In syntax-first parsing, then, the resolution of some syntactic ambiguities must be delayed until semantic interpretation. There is a computational price to pay for this, because often an unresolved syntactic ambiguity can affect the complexity of subsequent syntactic analysis. For example:

The cleaners dry-cleaned the coat that Mary found in the garbage for $10 while she was away in New York.

If semantics is used immediately to resolve the attachment of "for $10" to the verb "dry-cleaned," then there is no ambiguity as to where to attach the clause "while she was away in New York." However, if the PP attachment is not resolved immediately, then it is also possible to attach this clause to "found." Thus, putting off the resolution of the first ambiguity would result in a syntax-first parser finding this sentence to be 3-way ambiguous. The third interpretation would not even have to be considered in an integrated parser. Carrying forward ambiguities in syntactic analysis that could be resolved in an integrated parser can cause a combinatorial explosion in the number of syntactic ambiguities that must be considered as the parse continues. For example, consider the following sentence:

The stock cars raced by the spectators crowded into the stands at over 200 mph on the track at Indy.

This sentence is highly ambiguous syntactically, due to the fact that either "raced" or "crowded" could be the sentence's main verb, and the prepositional phrases in the sentence could be attached in many different ways. In a syntax-first parser, these ambiguities would cascade, resulting in an increasingly large number of interpretations that would have to be considered during the course of the parse.*** However, the use of semantics drastically reduces the number of syntactic ambiguities that would have to be considered.
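The cascading effect just described can be illustrated with a toy calculation: each unresolved ambiguity multiplies the readings a syntax-first parser must carry forward, while a parser that resolves each ambiguity semantically as soon as it arises keeps a single analysis. The numbers below are illustrative only, not counts from the paper.

```python
# Illustrative sketch (not MOPTRANS code): each unresolved ambiguity
# multiplies the number of readings a syntax-first parser carries forward.

def syntax_first_readings(choices_per_ambiguity):
    """Readings kept when no ambiguity is resolved until semantics runs."""
    readings = 1
    for n in choices_per_ambiguity:
        readings *= n
    return readings

def integrated_readings(choices_per_ambiguity):
    """Readings kept when semantics resolves each ambiguity immediately."""
    return 1

# e.g. three successive attachment decisions with 4, 5 and 3 options each
print(syntax_first_readings([4, 5, 3]))  # 60
print(integrated_readings([4, 5, 3]))    # 1
```

The product grows geometrically with the number of open decisions, which is the "combinatorial explosion" the text refers to.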
Semantics can tell us that "raced" in this sentence must be active, because it is unlikely that spectators would race stock cars. This fact also resolves the syntactic ambiguity of "crowded," since both verbs cannot be active. This, in turn, eliminates many prepositional phrase attachments from consideration. As this example demonstrates, the price for separating syntactic and semantic processing can be quite expensive computationally. Unresolved syntactic ambiguities can build on each other, resulting in the need to consider many syntactic attachments which would be eliminated if semantic processing were done in parallel.

III. WHAT'S WRONG WITH PREVIOUS INTEGRATED PARSERS?

In order to bring semantics into the language understanding process as early as possible, previous integrated systems have compiled syntactic and semantic knowledge together into one set of rules. Although this sort of integration accomplishes the goal of utilizing semantic information early on in the parsing process, storing parsing knowledge in this form is highly inefficient. First, the combination of syntactic and semantic information in the parser's rule base results in the inability to write parsing rules which apply to syntactic categories in general. Consider some of the parsing rules used in the Conceptual Analyzer (CA) [1], a descendant of ELI. Like ELI, much of CA's parsing knowledge was encoded in the form of requests [6], or test-action pairs, which were stored mainly in the parser's lexicon. Requests were used to build a conceptual representation of an input text as the parse proceeded. One of the tasks in building a representation was to fill the slots of a representational structure with the appropriate fillers.
For example, to parse the sentence "Fred gave Sally a book," CA built the representation (ATRANS ACTOR FRED OBJECT BOOK RECIPIENT SALLY), where ATRANS was the Conceptual Dependency (CD) [8] primitive meaning "transfer of control of an object." To fill in the ACTOR of this action with FRED, CA used the following request:

"Gave" request: Look back for a noun group which has the semantic property ANIMATE, which is not the object of a preposition, or the object of a verb, or attached syntactically to anything before it. Place the conceptualization in the ACTOR slot of the ATRANS.

Most other verbs in CA had similar requests, looking for a noun group of a certain semantic type before the verb, with the same syntactic restrictions on this noun group, to fill a slot in the conceptualization built by the verb. This slot was not always the ACTOR slot, as it was for "gave." For example, the RECIPIENT of an ATRANS preceded the verb "received." However, the request for "received" still shared much of the same information:

"Received" request: Look back for a noun group which has the property ANIMATE, which is not attached syntactically to anything before it. Place the conceptualization in the RECIPIENT slot of the ATRANS built by "received."

These requests, as well as similar requests stored in the dictionary definition of every verb in CA's dictionary, all shared common syntactic information: namely, that the subjects of verbs precede them, and are not syntactically attached to anything before them. Thus, it would be much more economical to store this common information in only one rule, rather than duplicate it in countless verb-specific rules. However, because this syntactic information was combined with semantic information about the particular slot filled by the subject for each particular verb, and the semantic constraints on what the subject could be, this syntactic information had to be duplicated over and over again.
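The duplication can be seen in a toy encoding of requests as test-action pairs (a hypothetical Python sketch; CA itself was not written this way): the syntactic half of the test is identical across verbs, yet each verb-specific request must restate it.

```python
# Toy encoding of CA-style requests as test-action pairs (hypothetical
# sketch, not CA's actual representation). Note the duplicated test.

def unattached_animate_ng_before(verb_pos, memory):
    """Shared syntactic condition, restated in every verb's request."""
    for c in reversed(memory[:verb_pos]):
        if c["cat"] == "NG" and c["sem"] == "ANIMATE" and not c["attached"]:
            return c
    return None

gave_request = {
    "test":   unattached_animate_ng_before,
    "action": lambda cd, ng: cd.__setitem__("ACTOR", ng["concept"]),
}
received_request = {
    "test":   unattached_animate_ng_before,   # same test, duplicated per verb
    "action": lambda cd, ng: cd.__setitem__("RECIPIENT", ng["concept"]),
}

memory = [{"cat": "NG", "sem": "ANIMATE", "attached": False, "concept": "FRED"}]
atrans = {}
ng = gave_request["test"](1, memory)
gave_request["action"](atrans, ng)
print(atrans)  # {'ACTOR': 'FRED'}
```

Only the slot name differs between the two requests; factoring the shared test into a single subject rule is exactly the economy the text argues for.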
***For instance, in a left-to-right syntactic parse of this sentence, there would be 12 ways to attach the PP "at over 200 mph." Considering only syntax, the 2 verbs could be parsed in 4 ways (either one active, both part of unmarked relative clauses attaching to "cars," or the second relative clause attaching to "spectators"). Then, for each of these 4 interpretations, there are 5 possible places to attach the PP: to "cars," "raced," "spectators," "crowded," or "stands." This makes 20 possible attachments, 8 of which could be eliminated by various constraints on where PP's can be attached. The combinatorics are even worse for the subsequent PP's in the sentence.

Another problem with previous integrated systems and the lack of autonomy of syntax in these systems is evident if we examine the way in which these parsers attempted to resolve certain types of syntactic ambiguities. Consider the way in which CA resolved the syntactic ambiguity in the following sentence:

A small plane stuffed with 1500 pounds of marijuana crashed.

The word "stuffed" can function as either a past participle or a past active verb. To resolve this ambiguity, CA used a request which looked for the word "with" appearing after "stuffed." If it was found, "stuffed" was treated as passive, and the NP to the left of the verb (in this case
By this, I mean that only words in the immediate neighborhood of the ambiguity were checked for particular syntactic properties, or for their presence or absence. In this case, the presence of the word “with” immediately after the verb was the local information. The advantage of this was that it was not necessary for the parser to keep track of a separate syntactic analysis. Syntactic ambiguities were resolved by examining shortterm memory to see what words were there or what semantic constituents had been built. However, it is not always the case that these sorts of local checks are enough. Consider the following examples: The soldier called to his sergeant. I saw the soldier called to his sergeant. The slave boy traded for a sack of grain. I saw the slave boy traded for a sack of grain. In these cases, the appearance of a preposition after the verbs “called” and “traded” does not guarantee that the verbs are passive. This is because both verbs can be used either transitively or intransitively. Instead, the information that must be used to determine whether the verbs are active or passive is whether or not there is another verb in the sentence which functions as the main verb. However, since CA did not keep track of more global syntactic information such as whether a particular verb functioned as the main verb of the sentence, it would be much more difficult to write requests for these examples. In general, then, it appears that some syntactic ambiguities cannot always be resolved by using only local syntactic checks. This is because the resolution of syntactic ambiguities sometimes requires more global knowledge about the syntax of a sentence, such as whether a particular verb functions as the main clause verb. Information like this cannot be determined so easily by rules which examine only immediate context. 
Thus, although we would like for syntactic and semantic processing to be integrated, it seems that a separate syntactic representation must be built during the analysis process in order to resolve some types of ambiguities.

IV. A PARSER WHICH SATISFIES BOTH CONSTRAINTS

The MOPTRANS parser overcomes the difficulties that I have outlined in the last two sections. MOPTRANS is an integrated parser, in the sense that syntactic and semantic processing take place in tandem. However, it is different from previous integrated parsers, in that it uses a largely autonomous set of syntactic rules, and a syntactic representation of the input text is built during parsing. MOPTRANS uses PARSIFAL-like parsing rules [5], which specify how sequences of syntactic constituents in the input text can be attached to each other. However, unlike PARSIFAL and other syntactic parsers, syntax rules in MOPTRANS are only considered and applied if the syntactic attachments that they make are judged by the parser's semantic analyzer to be semantically appropriate. In this way, syntactic and semantic processing are completely integrated. As MOPTRANS parses a piece of text, the semantic and syntactic representations that it builds are kept in its active memory.
During parsing, new constituents are added to active memory as each new word is read. As new constituents are added, semantics is asked if anything in active memory "fits together" well; that is, if there are any semantic attachments that could be made between the elements in active memory. If so, MOPTRANS' syntactic rules are consulted to see if any of these semantic attachments are syntactically legal. In other words, semantics proposes various attachments, and syntax acts as a filter, choosing which of these attachments makes sense according to the syntax of the input. The interaction between syntax and semantics is displayed graphically in Figure 1.

[Figure 1: Interaction Between Syntax and Semantics in MOPTRANS. A flowchart: find possible semantic connections between concepts in active memory; if there are any, choose the "best" connection; find syntactic rules whose semantic actions will make this connection (selecting rules by indexing via syntactic patterns); choose a rule whose syntactic pattern is also satisfied; if there is one, execute the rule, otherwise remove the connection from the list of possible connections and try the next.]

To make this more clear, consider how the following simple sentence is parsed by MOPTRANS:

John gave Mary a book.

MOPTRANS' dictionary definitions contain information about what semantic representation the parser should build when it encounters a particular word. Thus, "John" causes the representation PERSON to appear in the parser's active memory.
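The propose-and-filter loop of Figure 1 can be sketched as follows. This is a minimal illustration with hypothetical function names, not MOPTRANS code: semantics supplies candidate connections in preference order, and syntax filters them.

```python
# Minimal sketch of the Figure 1 loop (hypothetical names, not MOPTRANS code).

def attach_next(active_memory, propose, rules_for, pattern_ok):
    """propose: semantic connections, best first; rules_for: syntactic rules
    whose semantic actions would make a connection; pattern_ok: does the
    rule's syntactic pattern match active memory?"""
    for connection in propose(active_memory):
        for rule in rules_for(connection):
            if pattern_ok(rule, active_memory):
                return rule, connection      # execute this rule
    return None  # no proposed connection is syntactically legal

# Toy use: semantics prefers ACTOR over RECIPIENT; only the Subject Rule's
# pattern matches the current NP-V configuration.
propose    = lambda m: ["ACTOR", "RECIPIENT"]
rules_for  = lambda c: ["SUBJECT-RULE"] if c == "ACTOR" else ["DATIVE-RULE"]
pattern_ok = lambda r, m: r == "SUBJECT-RULE"
print(attach_next(["John", "gave"], propose, rules_for, pattern_ok))
# ('SUBJECT-RULE', 'ACTOR')
```

The ordering matters: because the loop runs over semantically preferred connections first, a syntactically possible but semantically dispreferred rule is never even tried.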
At the same time, since "John" is a proper noun, the syntactic class NP is also activated. When the word "gave" is processed, MOPTRANS' definition of this word causes the CD representation ATRANS to be placed in active memory. At this point, MOPTRANS considers the two semantic representations in active memory, PERSON and ATRANS. The semantic analyzer tries to combine these representations in whatever way it can. It concludes that the PERSON could be either the ACTOR or the RECIPIENT of the ATRANS, since the constraints on these roles are that they must be ANIMATE. It also concludes that the PERSON could be the OBJECT of the ATRANS (that is, the thing whose control or possession is being transferred). However, since this role is expected to be a PHYSICAL OBJECT rather than an ANIMATE, the match is not as good as with the ACTOR or RECIPIENT roles.****

This is the point at which the MOPTRANS parser utilizes its syntactic rules. Semantics has determined that 2 possible attachments are preferred. Now the parser examines its syntactic rules to see if any of them could yield either of these attachments. Indeed, the parser's Subject Rule will assign the PERSON to be the ACTOR of the ATRANS. The Subject Rule looks like this:

Subject Rule
Syntactic pattern: NP, V (active)
Additional restrictions: NP is not already attached syntactically
Syntactic assignment: NP is SUBJECT of V, V is indicative (V-IND)
Semantic action: NP is ACTOR of V (or another slot, if specified by V)
Result: V-IND

This rule applies when an NP is followed by a V, and when the NP can fill the ACTOR slot of the semantic representation of the V. The NP is marked as the SUBJECT of the V, and the V is marked as indicative (V-IND). As dictated by the RESULT of the rule, the V-IND is left in active memory, but the NP is removed, since its role as subject prevents many subsequent attachments to it, such as PP attachments.
In addition to these syntactic assignments, the semantic representation of the NP "John" is placed in the ACTOR slot of the ATRANS representing the verb. The rest of the sentence is parsed in a similar fashion. To determine how "Mary" should be attached to "gave," semantics is asked for its preference. Just as with "John," "Mary" fits well into the ACTOR or RECIPIENT slots. Since "John" has already been selected as the ACTOR, semantics chooses the RECIPIENT slot for "Mary." Syntax is consulted to see if any syntactic rules can make this attachment. This time, the Dative Movement rule is found:

Dative Movement Rule
Syntactic pattern: V-IND, NP
Additional restrictions: V-IND allows dative movement
Syntactic assignment: NP is INDIRECT OBJECT of V-IND
Semantic action: NP is (semantic) RECIPIENT of V-IND (or another slot, if specified by V-IND)
Result: V-IND, NP

When applied, this rule assigns "Mary" as the indirect object of "gave," and places the PERSON concept which represents "Mary" into the RECIPIENT slot of the ATRANS. The final NP in the sentence, "the book," is attached to "gave" in a similar way. Semantics is asked to determine the best attachment of "book," which is represented as a PHYSICAL-OBJECT, to other concepts in active memory, which at this point contains the ATRANS as well as the person representing "Mary." Semantics determines that the best attachment is to the OBJECT role of the ATRANS. The syntactic rule which can perform this attachment is the Direct Object rule, which is similar in form to the Dative Movement rule above. This rule is applied, yielding the final semantic representation (ATRANS ACTOR PERSON OBJECT PHYSICAL-OBJECT RECIPIENT PERSON), and the syntactic markings of "John" as the subject of "gave," "book" as its direct object, and "Mary" as its indirect object.

****The way in which the semantic analyzer reaches these conclusions will not be discussed in this paper. For more details, see [4].
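Rules of this shape lend themselves to a declarative encoding in which the shared syntactic knowledge is stated once per construction, with the verb free to override only the semantic slot. The sketch below is hypothetical (illustrative field names; the paper does not show MOPTRANS' internal notation):

```python
# Hypothetical declarative encoding of the rule format above (illustrative
# field names; not MOPTRANS' actual internal notation).

SUBJECT_RULE = {
    "pattern":      ("NP", "V-ACTIVE"),
    "restrictions": ("NP not already attached syntactically",),
    "syntax":       "NP is SUBJECT of V; V becomes V-IND",
    "default_slot": "ACTOR",          # a verb may specify another slot
    "result":       ("V-IND",),
}

DATIVE_MOVEMENT_RULE = {
    "pattern":      ("V-IND", "NP"),
    "restrictions": ("V-IND allows dative movement",),
    "syntax":       "NP is INDIRECT OBJECT of V-IND",
    "default_slot": "RECIPIENT",
    "result":       ("V-IND", "NP"),
}

def slot_filled(rule, verb_override=None):
    """The semantic slot the rule fills, unless the verb overrides it."""
    return verb_override or rule["default_slot"]

print(slot_filled(SUBJECT_RULE))           # ACTOR
print(slot_filled(DATIVE_MOVEMENT_RULE))   # RECIPIENT
```

This factoring is the paper's answer to the CA requests of Section III: the constituent-order knowledge lives in one rule per construction, while per-verb lexical entries contribute only the slot override and the semantic constraints.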
One important thing to note about the parsing process on this sentence is that although the Direct Object Rule could have applied syntactically when "Mary" was found after the verb, it was never even considered. This is because the semantic analyzer preferred to place "Mary" in the RECIPIENT slot of the ATRANS. Since a syntactic rule was found which accommodated this attachment, namely the Dative Movement rule, the parser never tried to apply the Direct Object rule. The MOPTRANS parser is able to resolve syntactic ambiguities that proved difficult for past integrated parsers. For the sentence discussed earlier, "I saw the soldier called to his sergeant," MOPTRANS has no trouble determining that "called" is an unmarked passive, because according to its syntax rules, another indicative verb at this point is not possible. The rule which is applied instead is the Unmarked Passive rule:

Unmarked Passive Rule
Syntactic pattern: NP, VPP
Syntactic assignment: NP is (syntactic) SUBJECT of VPP, VPP is PASSIVE, VPP is a RELATIVE CLAUSE of NP
Semantic action: NP is (semantic) OBJECT of S (or another slot, if specified by VPP)
Result: NP, VPP

"Called" is represented by the Conceptual Dependency primitive MTRANS, which is used to represent any form of communication. Since "soldier" can be attached as either the ACTOR or the OBJECT of an MTRANS, semantics would be happy with either of these attachments. However, the Subject Rule cannot apply at this point, since "soldier" is already attached as the syntactic direct object of "saw." Thus, this restriction on the Subject Rule prevents this attachment from being made. Instead, the Unmarked Passive Rule applies, since it semantically attaches "soldier" as the OBJECT of the MTRANS, and since "called" is marked as potentially being a past participle (VPP).
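The decisive role of the "not already attached" restriction can be seen in a two-line sketch (hypothetical encoding, not MOPTRANS code): the same NP-plus-participle configuration is routed to different rules purely on the basis of the NP's prior syntactic attachment.

```python
# Sketch (hypothetical encoding): the "already attached" restriction routes
# the same NP + participle configuration to different rules.

def applicable_rule(np):
    if not np["attached"]:
        return "SUBJECT-RULE"        # "The soldier called to his sergeant."
    return "UNMARKED-PASSIVE-RULE"   # "I saw the soldier called to ..."

main_clause_np = {"word": "soldier", "attached": False}
object_np      = {"word": "soldier", "attached": True}   # object of "saw"

print(applicable_rule(main_clause_np))  # SUBJECT-RULE
print(applicable_rule(object_np))       # UNMARKED-PASSIVE-RULE
```

Because attachment status is part of the separate syntactic representation, this global fact is available at rule-selection time, which is exactly what the local-check requests of Section III lacked.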
Unlike syntax-first parsers, the MOPTRANS parser can immediately resolve syntactic ambiguities on the basis of semantic analysis, thereby cutting down on the number of syntactic attachments that it must consider. We have already seen this in the example, "John gave Mary the book," in which the parser does not even consider if "Mary" is the direct object of "gave." Let us return now to two examples discussed earlier:

The cleaners dry-cleaned the coat that Mary found in the garbage for $10.
The cleaners dry-cleaned the coat that Mary found at the rummage sale for $10.

MOPTRANS parses the relative clause "that Mary found" with the following rule:

Clause Rule for Gap After the Verb (CGAV Rule)
Syntactic pattern: NP, RP (relative pronoun) (optional), V-IND
Additional restrictions: V-IND is not followed by an NP
Syntactic assignment: V-IND is a RELATIVE CLAUSE of NP
Semantic action: NP is the semantic OBJECT of the V-IND
The preposition "for" also has a semantic representation, which describes the possible semantic roles that a PP beginning with "for" can fill. One of these roles is called IN-EXCHANGE-FOR. "Dry-cleaned" is represented by the concept PROFESSIONAL-SERVICE, which expects to have its IN-EXCHANGE-FOR role filled with MONEY, since most professional services are done for money. ATRANS, on the other hand, does not explicitly expect an IN-EXCHANGE-FOR role. Thus, semantics prefers to attach the PP "for $10" to PROFESSIONAL-SERVICE and the verb "dry-cleaned." In the second example, on the other hand, when the PP "at the rummage sale" is attached to "found," this triggers an inference rule that the ATRANS representing "found" must actually be the concept BUY, since "rummage sale" is a likely setting for this action. BUY, like PROFESSIONAL-SERVICE, expects the role IN-EXCHANGE-FOR to be filled with MONEY. Thus, semantics has no preference as to which verb to attach "for $10" to. To resolve the ambiguity, a syntactic recency preference is used, thereby attaching "for $10" to "found."

Because of this resolution of ambiguity, the MOPTRANS parser does not have to consider ambiguities further on in the sentence that it might otherwise have to. For example, in the sentence, "The cleaners dry-cleaned the coat Mary found in the garbage for $10 while she was away in New York," the PP attachment rule which MOPTRANS uses removes the representation of "found" from active memory, since the PP attaches to something before the clause containing "found." Therefore, when the parser reads "while she was away in New York," there is only one possible verb, "dry-cleaned," to which this clause can be attached.

V. CONCLUSION

In this paper I have argued that semantic and syntactic analysis should be integrated. By this, I mean that syntactic and semantic processing must proceed at the same time, relying on each other to provide information necessary to resolve both syntactic and semantic ambiguities.
Non-integrated, syntax-first parsers must leave some syntactic ambiguities unresolved until the semantic analysis stage. This can result in a highly inefficient syntactic analysis, because the failure to resolve one syntactic ambiguity can lead to other, "artificial" syntactic ambiguities which would not have to be considered had the original ambiguity been resolved with semantics. These new ambiguities may also be unresolvable using only syntax. If several of these ambiguities are encountered in one sentence, the combinatorics of the situation can get out of hand. Previous integrated parsers have avoided these inefficiencies, but have suffered from problems of their own. Because of the lack of a separate representation of the input text's syntactic structure, these parsers must rely on "local" syntax-checking rules to resolve syntactic ambiguities. Some types of ambiguities cannot easily be resolved with local checks.
P.I.E.S.: An Engineer's "Do-It-Yourself" Knowledge System for Interpretation of Parametric Test Data

Jeff Yung-Choa Pan and Jay M. Tenenbaum
Schlumberger Palo Alto Research
3340 Hillview Avenue, Palo Alto, CA 94304

ABSTRACT

PIES is a knowledge system for interpreting the parametric test data collected at the end of complex semiconductor fabrication processes. The system transforms hundreds of measurements into a concise statement of the overall health of the process, and the nature and probable cause of any anomalies. A key feature of PIES is the structure of the knowledge base, which reflects the way fabrication engineers reason causally about semiconductor failures. This structure permits fabrication engineers to do their own knowledge engineering, building the knowledge base, and then maintaining it to reflect process modifications and operating experience. The approach appears applicable to other process control and diagnosis tasks.

1. Introduction

This report summarizes our experience in building PIES (Parametric Interpretation Expert System), a knowledge-based system that diagnoses problems in semiconductor fabrication processes by analyzing parametric test data. Parametric measurement, performed on test circuits at the end of a complicated semiconductor fabrication process, provides semiconductor engineers with early information to monitor the "health" of the overall fabrication process. Typically, hundreds of measurements are made on each wafer. The problem is to reduce the resulting ream of data to a concise summary of process status: whether it is functioning correctly, and if not, what is the nature and cause of the abnormality. Currently this interpretation task is performed by a group of semiconductor specialists known as failure analysis or yield enhancement engineers. It routinely consumes a large proportion of their time. Moreover, it is critical that problems be identified quickly to avoid a major operational loss.
For any knowledge system to be effective in this application, it must be able to deal with two common characteristics of engineering domains: (1) knowledge about the domain matures progressively with experience following a "learning curve"; and (2) the process sequence is subjected to continual modification. These characteristics entail on-going maintenance of the knowledge base. Unfortunately, it is impractical to use highly-trained AI professionals for this on-going support function. PIES' approach to this problem is to provide a knowledge acquisition environment that permits the failure analysis engineers, themselves, to build up and maintain the actual contents of the knowledge base. The traditional AI knowledge engineering task has been reduced to initially analyzing the domain, and defining an appropriate structure for the knowledge base.

The structure of the knowledge base reflects the way fabrication engineers reason causally about semiconductor failures. First, measurement deviations are used to infer physical defects of wafer structure, such as the thickness or doping density of some layer being too high. These structural anomalies are then linked to problems in particular process steps. For example, a wafer layer may be too thick because the wafer was left in an oven too long or the oven temperature was too high. Finally, process problems are traced to root causes (e.g., the wafer was left in the oven too long because a timer broke).

The multi-level causal structure of the knowledge base permits fabrication engineers to codify their knowledge of and experience with failures of a fabrication process in a form they find natural: causal links that associate evidence at each level with hypotheses at the next. Thus, there are associations linking deviated measurements to structural anomalies, anomalies to process problems, and process problems to root causes. A knowledge editor supports and enforces this conceptual structure.
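The multi-level organization just described can be pictured as a set of link tables, one per adjacent pair of levels. A minimal sketch in Python, using the oven/timer example from the text; the table entries and names are illustrative, not PIES' actual vocabulary:

```python
LINKS = {
    # measurement deviation        -> candidate structural anomalies
    ("LAYER-RESISTIVITY", "low"):  [("LAYER-THICKNESS", "high")],
    # structural anomaly           -> candidate process problems
    ("LAYER-THICKNESS", "high"):   [("OVEN-TIME", "long"),
                                    ("OVEN-TEMPERATURE", "high")],
    # process problem              -> candidate root causes
    ("OVEN-TIME", "long"):         [("OVEN-TIMER", "broken")],
}

def trace(finding, depth=3):
    # Follow causal links backward, level by level, collecting hypotheses.
    frontier, collected = [finding], []
    for _ in range(depth):
        frontier = [h for f in frontier for h in LINKS.get(f, [])]
        collected.extend(frontier)
    return collected

hypotheses = trace(("LAYER-RESISTIVITY", "low"))
```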
The structure of the knowledge base also helps focus the diagnostic reasoning process, by providing natural, intermediate levels for hypothesization and verification. Usually, there are many root causes that could account for an observed set of parameter deviations. Instead of directly associating measurements with root causes, it is computationally more efficient to proceed step by step, hypothesizing and prioritizing or ruling out possibilities at the structural and process levels. In addition to being more efficient, this multi-level diagnosis leads to explanations that fabrication engineers find easy to comprehend.

A working knowledge-based system incorporating the above concepts was implemented in Franz Lisp on a VAX/Unix system at Schlumberger Palo Alto Research. This core system was then installed at Fairchild's fabrication facility in Puyallup, Washington, running on a VAX under VMS. The knowledge base was compiled and is maintained solely by failure analysis engineers at the production site. Performance of the system is currently being evaluated.

From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved.

2. Background

2.1. About Semiconductor Fabrication and Parametric Test

Semiconductor devices are manufactured in two phases, as shown in figure 1: Wafers are first fabricated in batches (known as "lots") in the controlled environment of a clean-room; the wafers are then cut into "dice" which are individually packaged and tested. Parametric testing is performed on lots at the conclusion of the fabrication process, just before the wafers are cut.

The recipe for a modern semiconductor product typically contains more than 100 process steps. Each step is a chemical/physical interaction between a wafer and its environment under precise control of process equipment (e.g., epitaxy, oxidation, etching, ion-implantation).
Although the result of each individual process step is monitored by a so-called in-process test (such as measuring the thickness of an oxide layer) to make sure that it is within tolerance, the combined effect of these process steps cannot be verified until complete execution of the recipe. Hence, the need for parametric testing.

When abnormal measurements of some key parameters are detected, the wafer is rejected and sent for failure analysis, accompanied by a complete test record of the lot. The job of the failure analysis engineer is to diagnose the process step(s) responsible for the failure and take appropriate corrective action. The daily workload of a failure analysis engineer thus depends on the number of rejected wafers during the previous day, and the difficulties of those cases, each of which takes tens of minutes to hours to diagnose. A knowledge-based system, such as PIES, can enhance the productivity of a failure analysis engineer in two ways: first, it focuses an engineer's attention by reducing the flood of raw test data to a few likely failure candidates; second, it ensures an objective analysis by providing a complete and unbiased assessment of the situation.

Semiconductor fabrication was selected as a good experimental domain to pursue our long-term interest in applying AI technology to manufacturing. The choice was based on a number of considerations. First, there is high leverage: because of the high volume (millions of die a year), small percentage increases in yield can result in considerable increases in profit. Second, the processes are not always well understood, so that actual operating experience is critical to achieving acceptable yields. It is important to be able to codify this experience so that it can be widely replicated and shared. Third, semiconductor fabrication is an ideal domain to pursue AI research on qualitative modeling and reasoning.
Due to the ever-changing nature of fabrication technology, a knowledge system that is totally dependent on hand-coded, process-specific, task-specific, experiential knowledge is inefficient to maintain and difficult to generalize. Moreover, semiconductor engineers routinely invoke models of solid-state physics and silicon processing to explain a problem not encountered previously. To achieve the same level of competence as a human engineer, we set as a long-term goal to develop qualitative modeling and reasoning techniques that can supplement PIES' experience-oriented knowledge base.

2.2. Shallow-Level vs. Deep-Level Approach to Expert Systems

A conventional way to build an expert system for diagnosing process faults would be to rely on a knowledge engineer to capture the experience of fabrication engineers in the form of if-then or production rules [1]. An inference mechanism might then use a forward chaining inference process [2] to transform an input set of parametric symptoms into a set of possible faults. The approach so described is sometimes referred to as a shallow-level approach [3], because its knowledge base records only aspects of experience acquired from human experts, and not a model of the domain about which the system is supposed to be an expert. An alternative deep-level approach would be to perform diagnosis by reasoning with models of the (semiconductor) domain [4].

[Figure 1: A Typical Semiconductor Manufacturing Process — a fab line operation of 100+ process steps in clean rooms, followed by parametric test and packaging]

A shallow-level approach is suitable when experience, not the exercise of theory, plays the key role in performing a task. For a fixed problem, a shallow system can be built in a relatively short time, and can be "tuned" to a high level of performance, as demonstrated by MYCIN [5].
However, a shallow-level system will require re-engineering of its knowledge base whenever there is a change in the domain. The deep-level approach complements the weakness of the shallow-level system because of its potential to derive solutions for unanticipated situations from the underlying principles of the domain. It is particularly advantageous in engineering-oriented domains where a complete or partial domain theory already exists. The progress made in the direction of qualitative modeling and reasoning [6, 7, 8, 4] is promising, but the technique needs further development before it can be useful in practice.

PIES' knowledge base approach falls between shallow and deep level approaches (semi-deep). It is similar to a shallow-level system in that it attempts to help domain experts in formalizing their experience and to apply the knowledge so acquired in diagnosis. On the other hand, it explicitly represents the structure of the domain in terms of multiple causal levels, and uses such conceptual levels to communicate naturally with domain experts (in both knowledge acquisition and diagnostic reporting).

3. Approach

3.1. Overview

Figure 2 shows the causal chain through which fabrication failures originate and propagate. The root cause is either a malfunction in some fabrication equipment, contamination in the source materials or clean-room environment, or a human error. Any of these causes will result in variations in the fabrication process which, in turn, will produce physical abnormalities in the wafer structure and corresponding deviations in parametric measurements associated with that structure.

[Figure 2: Multi-Level Propagation of Fabrication Failures — human operation errors and equipment or material faults lead to fabrication process variations, then to physical silicon structure abnormalities, and finally to parametric measurement deviations]
PIES' diagnosis approach is to isolate the possible causes of observed symptoms by "reversing" this causal chain level by level, following the sequence of measurement deviations --> physical structure abnormalities --> process variations --> root causes.

The knowledge base in PIES consists of four levels that correspond directly to those in figure 2. At each level we enumerate observed failure modes. For example, at the physical structure level, such modes would include incorrect thickness or doping density of particular wafer layers (e.g., the epitaxial layer). At the fabrication process level, the failure modes would include incorrect temperatures or gas densities during particular process steps (e.g., oxidation or ion-implantation). Rules, provided by the fabrication engineer, link failure modes at adjacent levels. Thus, EPI-thickness-high is associated with abnormally high temperature during the epitaxial process stage. Fabrication engineers often find it convenient to organize their knowledge around specific failure cases, each corresponding to an observed or expected anomaly in physical structure. Associated with each such structural anomaly are a set of expected symptoms (i.e., measurement deviations) and a set of possible causes (i.e., process failures).

Diagnosis proceeds as a multi-level hypothesis-verification process. Parametric measurements are first preprocessed to transform them from numeric values to qualitative ranges (e.g., normal, high, very high). Each measurement that is abnormal implicates one or more physical structure problems. The expected symptoms associated with each of these hypothesized physical structure problems are compared against the complete set of abnormal measurements. A score is assigned corresponding to how well the expected symptoms match the observed ones. The scores are compared and hypotheses with significantly lower scores are eliminated from consideration.
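The scoring and elimination step just described can be sketched as follows; the numeric weights, the case data, and the EPI-C measurement are invented for illustration:

```python
STRENGTH = {"must": 5, "very-likely": 4, "likely": 3, "probably": 2, "maybe": 1}

def match_score(expected, observed):
    # expected: [((measurement, qualitative value), strength), ...]
    total = sum(STRENGTH[s] for _, s in expected)
    hit = sum(STRENGTH[s] for sym, s in expected if sym in observed)
    return hit / total if total else 0.0

def survivors(cases, observed, keep_ratio=0.5):
    # Keep only hypotheses scoring within keep_ratio of the best score.
    scored = [(match_score(exp, observed), name) for name, exp in cases]
    best = max(score for score, _ in scored)
    return [name for score, name in scored if best and score >= best * keep_ratio]

cases = [
    ("EPI-THICKNESS-high", [(("EPI-R", "low"), "very-likely"),
                            (("EPI-C", "high"), "probably")]),
    ("EPI-DOPING-low",     [(("EPI-R", "high"), "very-likely")]),
]
observed = {("EPI-R", "low"), ("EPI-C", "high")}
remaining = survivors(cases, observed)
```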
The same hypothesis-verification process is then used to select the most probable process failures based on the surviving structural problems. Finally, the root causes are selected that best explain the highest-likelihood process failures. This iterated hypothesis-verification approach will identify the primary (i.e., most likely) failures. In many cases, it will also reveal multiple failures, that may be independent of, or causally related to, the primary failure.

The PIES knowledge editor makes it possible for a fabrication engineer, without AI training, to build and maintain the knowledge base. It does this by directly supporting PIES' multi-level case-centered knowledge organization, thereby guiding an engineer to decompose his knowledge in a way that is both natural for him and required by PIES. Using the editor, an engineer can focus on the failure cases at any level. He can create or delete cases, as well as their associational links to other cases at the same or adjacent levels in the causal chain. For example, having discovered a new type of physical structure failure, he can add it to the knowledge base, along with the expected symptoms and probable causes.

3.2. Knowledge Base

The top level of PIES' knowledge base is organized into four explicit causal levels: measurement, physical structure, process, rootcause. As part of the representational mechanism in PIES, the causal sequence among those four levels is described by a set of symbolic links, which are used by both the knowledge editor and the diagnostic reasoner.

At each causal level, the knowledge base is decomposed into frame-like structures, called failure cases or cases for short, each encoding knowledge about a type of failure at that level. The cases have "slots" for encoding attributes that describe a particular type of failure.
Examples of such attributes in PIES' current implementation are: the "popular" name commonly used by domain experts to refer to a failure case; comments from fabrication engineers about the failure; and most significantly, four types of associational link which describe how this case is causally related to other types of failure. Other slots are used in conjunction with the knowledge base editor (see below) to group failure cases in ways that users find convenient.

A domain expert's knowledge about possible causal connections between two types of failure is represented in PIES by associational links. A link may be one of two types: causes or caused-by, and is further distinguished between intra-level and inter-level, depending on whether the other failure case it refers to is at the same or a different causal level. Each associational link has an associational strength, which is a heuristic estimation of the strength of the causal relationship, and can be one of five quantized states: must, very-likely, likely, probably, maybe.

As an example, a common failure in a bipolar ISO-Z process at the physical structure level occurs when an ion-implantation problem alters the distribution of doping in the base region of a transistor.
PIES' representation for this problem, known as BASE-DISTRIBUTION-deep, is shown (in its pretty-print form) as the following:

Knowledge about a case of physical structure defect: BASE DISTRIBUTION deep
Possible effects at measurement level --
1: ((parametric-measurement WE10BETA low) very-likely)
2: ((parametric-measurement RB1 low) probably)
3: ((parametric-measurement RB2 low) very-likely)
4: ((parametric-measurement WE10-CBO low) probably)
5: ((parametric-measurement SOT2-CBO low) probably)
6: ((parametric-measurement SOT-B-SU very-low) probably)
7: ((parametric-measurement SOTBETAF low) probably)
Possible causes at process level --
1: ((BASE-IMPLANT ENERGY high) likely)
2: ((BASE-DRIVE FURNACE-TEMPERATURE high) likely)
3: ((BASE-DRIVE DIFFUSION-TIME long) likely)
Possible causes at SAME physical-structure level --
1: ((BASE-OXIDE THICKNESS low) likely)

In this example, the failure type of BASE DISTRIBUTION deep is said to be causally related to other types of failure at the process level, measurement level, and the physical structure level itself. As indicated, if it occurs, it may result in seven types of measurement deviation, but some of them are more likely to manifest (e.g., WE10BETA) than others (e.g., RB1).

3.3. Knowledge Editor

The knowledge editor enables domain experts to build and maintain the PIES knowledge base without on-site help from AI specialists. Acquiring knowledge directly from domain experts has several advantages in practice: it relieves AI specialists from on-site visits and lengthy knowledge engineering sessions with domain experts; it avoids misunderstanding, and thus mistranslation, of knowledge from domain experts to AI specialists; and it allows domain experts to incorporate new experience quickly into the knowledge base.
This last feature makes the system more suitable than the traditional expert system approach in dealing with a changing domain.

The primary function of the knowledge editor is to guide domain experts in codifying their knowledge and expertise in a form consistent with the PIES knowledge base. During a knowledge engineering session, the knowledge editor first allows the domain expert to focus his attention on one of PIES' causal levels. Within that particular level, the knowledge editor allows the user to maintain his own hierarchy of failure concepts. For example, at the physical structure level, he may wish to group together all failures associated with the same wafer layer, and within any one layer, all failures of a particular type (e.g., doping problems). This support of concept hierarchies helps the expert to organize the many types of failure known to the knowledge base. The knowledge editor provides its users with easy commands to create and traverse his hierarchy, to define new failure cases, and subsequently to fill in or modify the contents (slots) of a failure case. In summary, the PIES knowledge editor guides a domain expert to decompose his failure-related expertise into the structure required by PIES' knowledge base. It ensures that the knowledge that is codified is both syntactically and semantically correct.

For example, in a knowledge-engineering session to build the knowledge base for diagnosing failures in Fairchild's ISO-Z bipolar process, our collaborator at the Fairchild/Puyallup site chose to focus his attention on the physical structure level. PIES' editor helped him to organize known cases of physical structure failures into a hierarchy, and allowed him to traverse the hierarchy to a particular case of interest: BASE-DISTRIBUTION-deep, as shown in figure 3. To organize what he knew about the failure, the expert conceptualized relevant causalities centered around BASE-DISTRIBUTION-deep, as shown in figure 4.
The knowledge editor allowed him to establish associational links from BASE-DISTRIBUTION-deep to other known failure cases at the effect level (measurement level), cause level (process level), and self (physical structure level). The editor allows him to add, delete, or replace associational links, as necessary.

Welcome to Fairchild/Schlumberger Parametric Interpretation Expert System
Available Commands are: <HELP> show new-perspective up top Mark-Case fill edit write write-and-quit exit abort expert beginner reset reset! display-highlight-on display-highlight-off
Enter Command (<CR> for listing of Case-Path) =>> newp
1: measurement  2: physical-structure  3: process  4: root-causes
Enter selection (0 for redisplay, <CR> for physical-structure) => 2
Case-Library last modified on Wed Jun 12 11:07:18 1985
Total of 82 cases from file <physical.cas> loaded!!
You are now referring to the TOP of physical-structure
1: COLLECTOR  2: EPI  3: ETCHED-SILICON  4: ISO-OX  5: SINK  6: EMITTER  7: BASE  8: FIELD  9: SILICON  10: METAL-1  11: ISO-ISLAND  12: VIA  13: BASE-OXIDE  14: LVCEO-RESISTOR  15: GROUND-TAP  16: GUARD-RING  17: SIDEWALL  18: METAL-2
Enter Command (<CR> for listing of Case-Path) =>> 7
You are now referring to (BASE) of physical-structure
1: EXTRINSIC-Q  2: DISTRIBUTION  3: INTRINSIC-Q
Enter Command (<CR> for listing of Case-Path) =>> 2
You are now referring to (BASE DISTRIBUTION) of physical-structure
1: deep  2: shallow
Enter Command (<CR> for listing of Case-Path) =>> 1
You are now referring to (BASE DISTRIBUTION deep) of physical-structure
CASE-NAME: BASE-DISTRIBUTION-deep at physical-structure level
Following symptoms at measurement level are to be resulted from this case:
1: ((parametric-measurement WE10BETA low) very-likely)
2: ((parametric-measurement RB1 low) probably)
3: ((parametric-measurement RB2 low) very-likely)
4: ((parametric-measurement WE10-CBO low) probably)
5: ((parametric-measurement SOT2-CBO low) probably)
6: ((parametric-measurement SOT-B-SU very-low) probably)
7: ((parametric-measurement SOTBETAF low) probably)
Following causes at process level is to result in this case:
1: ((BASE-IMPLANT ENERGY high) likely)
2: ((BASE-DRIVE FURNACE-TEMPERATURE high) likely)
3: ((BASE-DRIVE DIFFUSION-TIME long) likely)
Following causes at CURRENT physical-structure level can result in this case:
1: ((BASE-OXIDE THICKNESS low) likely)

Figure 3: A Sample Knowledge Engineering Session under PIES' Knowledge Editor

[Figure 4: Organization of Concepts causally related to BASE-DISTRIBUTION-deep — effects at the measurement level (WE10BETA low, RB1 low, RB2 low, WE10-CBO low, SOT2-CBO low, SOT-B-SU very-low, SOTBETAF low), a cause at the same physical-structure level (BASE-OXIDE THICKNESS low), and causes at the process level (BASE-IMPLANT ENERGY high, BASE-DRIVE FURNACE-TEMPERATURE high, BASE-DRIVE DIFFUSION-TIME long)]

From our experience, failure analysis engineers with no AI background were capable of mastering PIES' knowledge editor after a brief (less than an hour) tutorial session.

3.4. Diagnostic Reasoner

PIES' diagnostic reasoning mechanism exploits the multiple causal level structure of the knowledge base to diagnose the root cause of a failure from a given set of parametric test data. Before actually starting the diagnostic process, symbolic "symptoms" have to be abstracted from raw test data (in this experiment, the raw data was recorded by Puyallup's Keithley tester). The symptom abstraction process follows two steps: first, noisy data points (due to bad probe contact or random failure) are removed from the data set by a statistical method; then a statistical average and standard deviation is computed for each parametric measurement over all wafers in a given lot. This information is compared with expert-provided limits to produce a qualitative estimation of the measurement (e.g., EPI-R very-low). The resulting "qualitized" measurements form the initial symptom set.
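The two-step symptom abstraction just described might be sketched as follows; the limits, readings, and the 2-sigma outlier cutoff are invented for illustration, not Puyallup's actual values:

```python
from statistics import mean, stdev

def qualitize(readings, limits, outlier_sigma=2.0):
    # Step 1: discard noisy readings far from the lot average.
    m, s = mean(readings), stdev(readings)
    kept = [r for r in readings if abs(r - m) <= outlier_sigma * s]
    # Step 2: map the cleaned average to a qualitative range.
    avg = mean(kept)
    for upper_bound, label in limits:   # limits in ascending order
        if avg < upper_bound:
            return label
    return "very-high"

# Hypothetical expert-provided limits for the EPI-R measurement:
EPI_R_LIMITS = [(5.0, "very-low"), (8.0, "low"), (12.0, "normal"), (15.0, "high")]

# One bad probe contact (40.0) is discarded before averaging:
readings = [4.0, 4.1, 4.2, 4.1, 4.0, 4.3, 4.1, 4.2, 40.0]
symptom = ("EPI-R", qualitize(readings, EPI_R_LIMITS))
```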
The diagnostic process is performed by progressing level-by-level through a sequence of hypothesization and confirmation steps, as explained in the overview. At each level, a set of probable failures is filtered from initial hypotheses suggested by likely faults isolated at the previous stage of reasoning (or the initial symptom set). The level-to-level isolation cycle repeats itself, following the inverted causal chain, until it reaches a final diagnostic conclusion at the rootcause level.

Let us follow through an example of this reasoning chain. EPI-R is a measurement of electrical resistivity from a test structure within a layer of epitaxial material. (It is designed to monitor the result of the epitaxial process.) One possible explanation for an observed low EPI-R measurement, which readily follows a basic principle of semiconductor physics, is that the EPI layer was too thick -- a physical structure failure directly confirmable by other more expensive, time-consuming material analysis techniques. Tracing further back along the causal chain, a thick EPI layer can result from, among other factors, an abnormally high temperature during the EPI process. The final step is to identify possible root causes of this failure, which leads to, among others, a faulty thermostat (an equipment failure) which resulted in higher than normal EPI process temperature.

At each stage of the level-to-level diagnosis, the isolation of failures from hypotheses at the previous level is achieved in four steps: hypothesization, implication, confirmation, and thresholding.

The hypothesization step is designed to heuristically retrieve from among all known types of failures a suspect set, that includes only those failure cases which are "reasonably" implicated by given symptoms, while the "sensitivity" (i.e., how strong the evidence has to be for a hypothesis to be included in the suspect set) is an adjustable threshold.
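A sketch of the hypothesization step with its adjustable sensitivity threshold; the numeric weights are invented, and the symptom links echo the BASE-DISTRIBUTION-deep example:

```python
STRENGTH = {"must": 5, "very-likely": 4, "likely": 3, "probably": 2, "maybe": 1}

IMPLICATED_BY = {   # symptom -> [(failure case, associational strength), ...]
    ("WE10BETA", "low"): [("BASE-DISTRIBUTION-deep", "very-likely")],
    ("RB1", "low"):      [("BASE-DISTRIBUTION-deep", "probably")],
}

def hypothesize(symptoms, sensitivity=3):
    # Retrieve only failure cases implicated at or above the sensitivity cutoff.
    suspects = set()
    for sym in symptoms:
        for case, strength in IMPLICATED_BY.get(sym, []):
            if STRENGTH[strength] >= sensitivity:
                suspects.add(case)
    return suspects

# The "probably" link from RB1 falls below the cutoff; the "very-likely"
# link from WE10BETA is enough to place the case in the suspect set.
suspects = hypothesize({("WE10BETA", "low"), ("RB1", "low")})
```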
The suspect set so derived is by no means exhaustive -- a potential failure may not have been included because the symptoms stipulated for hypothesizing that failure are not observable from the given test circuit. A reasoning step, known as implication, expands the original suspect set by including additional hypotheses that are implicated by any failure case already included in the suspect set. Such implication is based on the intra-level causalities coded in the knowledge base. For example, one intra-level causal link coded in the ISO-Z knowledge base indicates that the physical structure failure: BASE-OXIDE THICKNESS low is a potential cause of another physical structure failure: BASE DISTRIBUTION deep (as shown in figure 4). How- ever, base oxide thickness is not directly monitored by any X30-Z test structure. Therefore, BASE-OXIDE THICK- NESS low can only be included in the suspect set through the implication step, after a failure it may cause (e.g., BASE DISTRIBUTION deep) has been hypothesized. In the confirmation step, expected symptoms of each failure case in the suspect set are matched against the failure hypotheses concluded by the diagnosis process so APPLICATIONS / 841 far. The matching process will compute a “score” for each failure case, indicating how close the case’s expected symptoms match against conclusion derived from the given measurement data. Following the confirmation step, the failure cases in the suspect set are sorted according to their matching scores. Thresholding is done to exclude those failure cases which have relatively low scores. The remaining suspect set serves as the system’s diagnostic conclusion for the current level, and is passed on to the next stage of the rea- soning. 4. Results of the PIES Experiment The PIES experiment was conducted in three stages: knowledge base construction, system tuning, and perfor- mance evaluation. 
With the PIES knowledge editor installed in the Fairchild/Puyallup production environment, a knowledge base for diagnosing the Fairchild ISO-Z bipolar process was constructed by failure analysis engineers on-site. In the resulting ISO-Z knowledge base, 342 types of failure cases were identified, of which 101 are associated with the measurement level, 82 with the physical structure level, and 159 with the process level. The knowledge base also encodes about 600 associational links among the identified cases, and is today competently maintained by Puyallup's failure analysis engineers. The performance of PIES was evaluated by analyzing parametric test data from problem lots which represent a fair sample of challenging cases encountered and recorded during the production history of the ISO-Z process. For each case of lot-data tested, PIES' diagnostic result was compared with the recorded conclusion reached by failure analysis engineers at the time of its occurrence. Initially, diagnostic results from only 10 of the 25 cases tested were judged to be satisfactory by experts. The major reason for those unsuccessful diagnoses was, not surprisingly, missing knowledge in PIES' knowledge base. The problems were subsequently corrected by Puyallup engineers with a modification of the knowledge base using the PIES knowledge editor. After this initial system tuning, correct diagnosis was achieved on each of the 25 cases in the original set. At the next phase, our Puyallup collaborators tested the updated system against test data from another 18 randomly-selected problem lots. Among those, 12 achieved satisfactory diagnostic results, and according to the fab engineers, some even "outperformed" the original diagnoses. Again, missing knowledge accounted for the misdiagnoses. 5.
Conclusions and Future Research

Experience at Puyallup with the Fairchild ISO-Z process suggests that with continued tuning, PIES can become an effective productivity-enhancement tool for failure analysis engineers. More importantly, the Puyallup experiment demonstrates the feasibility of transferring responsibility for building and maintaining the knowledge base of an expert system from AI specialists to the people who possess first-hand knowledge of a domain. We believe that this transfer is inevitable if expert systems are to become practical in continually evolving domains such as engineering and manufacturing. The experiment also confirms the expected weakness of any shallow-level approach: a system that relies solely on coded experiential knowledge must be expected to fail when encountering a process failure not previously seen. In addition to its primary role in process diagnosis, the PIES knowledge base is also valuable as a knowledge carrier to document, propagate, and replicate engineering experience. In the semiconductor industry, a new process is usually developed in an R&D environment and then transferred to manufacturing facilities in different geographical locations. In the transfer, precious operating experience is lost and it is often necessary to physically transfer personnel along with the process to regain acceptable yields. PIES can be used to document the diagnostic experience acquired during a process-development phase, and then pass that experience to manufacturing engineers at remote sites, without moving people.

5.1. Generalizations

The same multi-level knowledge structure discussed in this article can be used to interpret parametric test data for any semiconductor fabrication process. Currently, Fairchild engineers at several sites are building PIES knowledge-bases for their latest processes.
In a broader sense, PIES can be applied to many other diagnostic problems in which a sequence of causal levels can be clearly identified. Underlying PIES is an explicitly-defined "shell" that can be easily reconfigured to reflect the appropriate causal structure. The extensibility of PIES has already been demonstrated by applying it to diagnose problems in a photolithography process. This knowledge base, constructed by a photolithography expert at Fairchild's Research Center, encodes causal connections between visually-acquired symptoms (e.g., out of focus along only one axis) and their causes (e.g., stepper stage control gain too high). Many other applications to in-process monitoring and control are under consideration. The ability to do one's own knowledge engineering is a very powerful incentive, luring engineers to try new applications.

5.2. Toward a Deeper Knowledge System

We have argued previously that in engineering applications, there is a continuing need to update the knowledge base to reflect changes in the domain. PIES addresses this problem by transferring responsibility for knowledge-base maintenance to the domain experts. An alternative, based on current AI research at SPAR and other laboratories, is to provide the computer with "deeper" models that enable it to account for observed symptoms using fundamental engineering theories of the domain. In the case of semiconductor fabrication, knowledge of device physics and process technology can be used to create models that show how fabrication processes affect wafer structure, and how changes in structure affect the electrical behavior of test circuits. These models can be used to derive explanations for fabrication problems not previously encountered [9]. They can also be used to update automatically the knowledge base when the process recipe or test circuits change.
Finally, they can be used to validate knowledge contributed by domain experts for completeness and correctness (e.g., are there any alternative explanations that could account for an observed symptom?). In the near future, we hope to integrate PIES with a system based on causal process models, to realize these advantages.

ACKNOWLEDGEMENT

The authors would like to express their sincere thanks to Dr. Harry G. Barrow, chief scientist of Schlumberger Palo Alto Research, whose broad vision helped to shape the PIES project. Harry directly contributed many ideas discussed in this paper, and personally implemented part of the PIES system. Mike Slama and Les Smith from the High Speed Memory and Logic division of Fairchild at Puyallup, Washington, originally suggested the topic of parametric test interpretation, served as the domain experts to construct and maintain the PIES knowledge base for this experiment, and evaluated the system's performance. Dr. Harold Brown of the Knowledge System Laboratory, Stanford University, served as a consultant for this project and contributed valuable ideas. Dr. Tony Crossley of Fairchild's Palo Alto Research Center helped to make this paper readable by semiconductor engineers.

REFERENCES

[1] Davis, R., et al. (1975). Production Rules as a Representation for a Knowledge-Based Consultation Program. Stanford University, STAN-CS-75-519.
[2] Winston, P. (1984). Artificial Intelligence. Addison-Wesley.
[3] Hart, P. (1982). Directions for AI in the Eighties. Fairchild Technical Report No. 612.
[4] Pan, Y. (1983). Qualitative Reasoning with Deep-Level Mechanism Models for Diagnoses of Dependent Failures. Ph.D. Dissertation, University of Illinois, Urbana/Champaign. CSL Report T-132.
[5] Shortliffe, E. (1976). Computer-Based Medical Consultation: MYCIN. American Elsevier.
[6] Forbus, K. (1984). Qualitative Process Theory. Ph.D. Dissertation, M.I.T. AI-TR-789.
[7] DeKleer, J. (1979).
Causal and Teleological Reasoning in Circuit Recognition. Ph.D. Dissertation, M.I.T. AI-TR-529.
[8] Kuipers, B. and Kassirer, J. (1983). How to Discover a Knowledge Representation for Causal Reasoning by Studying an Expert Physician. Proceedings, 8th International Joint Conference on Artificial Intelligence.
[9] Mohammed, J. and Simmons, R. (1986). Qualitative Simulation of Semiconductor Fabrication. Proceedings, AAAI Conference, 1986.
DUAL FRAMES: A NEW TOOL FOR SEMANTIC PARSING

Jean-Louis Binot*
IBM Thomas J. Watson Research Center, Hawthorne, P.O. Box 218, Yorktown Heights, N.Y. 10598.

ABSTRACT

The dual frames method is a new tool for specifying and establishing semantic dependencies, which has been implemented in a parser of French called SABA. This method offers solutions to some typical problems of semantic parsing strategies -- such as the difficulty of coping with different types of sentence structures and the amount of work needed to specify the vocabulary of a new domain -- by providing a general and flexible tool which can handle all the kinds of meaningful terms which can appear in a sentence.

1. INTRODUCTION

Attempts at semantic parsing without the support of a full syntactic component have given rise to different methods, among which those of Schank and his group (Schank et al., 1980), Wilks (1975) and Hayes and Carbonell (1981) are well known. These methods have generally been successful at processing simple declarative sentences, but are less suited to process other kinds of -- or more complex -- sentence structures. Among the main problems are the difficulty of coping with different word orders and the fact that the burden of untangling complex structures falls on individual semantic information, rather than general syntactic rules, thus increasing the amount of specification needed for the vocabulary of a given domain. Thus, other authors, such as Heidorn (1972), Sowa and Way (1986) and Boguraev and Sparck Jones (1983) have preferred to add a semantic component to a syntactic parser. However, purely semantic parsers have other advantages, notably the potential for greater robustness, which justify their further study. This paper presents a new method for semantic parsing called "dual frames", which attempts to solve or lessen the above problems. This method has been implemented in a semantic parser of French called SABA.
The main advantages of dual frames, as we see them, are the following:
- a good distribution of semantic information in dictionary entries, in a way which makes specification easier and avoids redundancy;
- the capability of processing dependencies between all kinds of meaningful terms (verbs, nouns, pronouns, adjectives, adverbs, coordinate structures) in a uniform way;
- the capability of handling in the same way different kinds of sentence structures (such as active and passive voices, interrogative, declarative and imperative forms), and also of processing semantically symmetrical sentences;
- the introduction of operations for computing new semantic frames during a parse, and the definition of a powerful inheritance mechanism.

The next section provides an outline of the SABA parser. The rest of the paper introduces the dual frames method and details some of its aspects.

* This research was performed at the University of Liege, Belgium.

Daniel Ribbens
Service d'Informatique, Institut Montefiore, B28, University of Liege, B4000 Liege, Belgium.

2. OVERVIEW OF THE SABA SYSTEM

SABA ("Semantic Analyser, Backward Approach", (Binot, 1985), (Binot et al., 1986)) is a robust and portable semantic parser of written French sentences developed at the University of Liege, Belgium. A prototype of this parser is running in MACLISP and in ZETALISP; it has been tested successfully on a corpus of about 125 French sentences. While it is possible to account to some extent for ill-formedness in a syntactic parser (see for example the "parse fitting" method developed by Jensen et al. (1983) for the PLNLP system (Heidorn, 1972)), we believe that robustness can better and more easily be achieved through a semantic parsing strategy. The SABA parser is not based on a French grammar, but on semantic procedures which build directly a semantic dependency graph from the natural language input.
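As an illustration of the kind of structure involved, a graph of oriented binary dependencies, each pointing from a complement to the term it qualifies, could be represented as follows. This class is our own sketch, not SABA's actual data structure:

```python
# A possible representation of the oriented binary dependency graph that
# SABA-style parsing builds: each edge points from a complement to the
# term it qualifies (its "complementee"). Our own sketch, not SABA's
# actual implementation.

class DependencyGraph:
    def __init__(self):
        self.edges = []  # (complement, relation, complementee) triples

    def add(self, complement, relation, complementee):
        self.edges.append((complement, relation, complementee))

    def complements_of(self, term):
        # All (complement, relation) pairs pointing at a given term.
        return [(c, r) for c, r, t in self.edges if t == term]

# Two of the dependencies for "Le chien aboie furieusement":
g = DependencyGraph()
g.add("chien", "AGENT", "aboie")
g.add("furieusement", "MANNER", "aboie")
```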
These procedures are helped by a fragmentation mechanism which allows the system to process complex sentences by splitting them into clauses. The following example is typical of the level of complexity that can be handled by the system:

(1) Le gros chien noir aboie furieusement quand des enfants qu'il ne connait pas jouent dans le jardin du voisin. (The big black dog barks furiously when children that he doesn't know are playing in the garden of the neighbour.)

To allow for portability, the SABA parser translates its natural language input into an "intermediate" semantic network formalism called SF (for "Sentence Formalism"), the details of which have already been covered elsewhere (Binot, 1984, 1985). The main point of interest here is that before generating the SF output, SABA builds a simplified semantic graph expressing all the semantic dependencies established between the meaningful terms of the sentence. The graph established for sentence (1) is shown in (2). Such a graph is a uniform structure made from oriented binary dependencies, where each dependency points from a complement to the term qualified by this complement (which we shall call for short a "complementee").2 The graph is built by applying a bottom-up strategy based on a repetitive fragmentation mechanism:

Parsing strategy: Repeat the following until success or dead end:
1. Fragment the sentence into clauses;
2. Select the innermost clause;
3. Establish relevant semantic dependencies for that clause;
4. Replace the clause, in the text of the sentence, by a special non-terminal symbol.

The fragmentation procedure extends to the left and to the right of each verb until it finds words which identify the limits of a clause; then heuristic rules based on the nature of these limits determine the

2 A relative pronoun is not processed as other pronouns, but as a complement of its reference, to which it is tied by a special dependency LR ("Liaison Relative").
From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved.

(2) gros --SIZE--> chien; noir --COLOR--> chien; chien --AGENT--> aboie; furieusement --MANNER--> aboie; jouent --MOMENT--> aboie; enfants --AGENT--> jouent; jardin --SITUATION--> jouent; voisin --POSSESSIVE--> jardin; que --LR--> enfants; que --OBJECT--> connait (NEG); il --BENEFICIARY--> connait

innermost clause.3 Step 3 is accomplished by applying the dual frames method described hereafter. Lastly, step 4, combined with the repetitive nature of the fragmentation mechanism, enables the parser to reconstruct correctly the content of higher-level clauses once embedded inner clauses have been parsed. The successive states of the input for example (1) are shown below with the result of each fragmentation step. Each step leads to the building of that part of the graph shown in (2) that corresponds to the selected clause.

(3) Le gros chien noir aboie furieusement quand des enfants qu'il ne connait pas jouent dans le jardin du voisin.
Le gros chien noir aboie furieusement quand des enfants PR jouent dans le jardin du voisin.
Le gros chien noir aboie furieusement PC.
PP.4

3. DUAL FRAMES: BASIC CONCEPTS

Dual frames is a new method for the specification and the establishment of semantic dependencies, which has been designed to handle all possible kinds of constituents in a uniform way. Basically, this method consists of using a dual system of semantic property lists respectively called "Elists" and "Tlists".5

3.1. Elists

Elists can be viewed as kinds of case frames, but are assigned to every meaningful term, not only to verbs. Different meanings of the same term can have different Elists. As an illustration, let us consider a single clause example:

(4) Le chien aboie furieusement dans le jardin. (The dog barks furiously in the garden.)

The Elist of the verb "aboyer" ("to bark") looks like this:6
(5) ABOYER: ELIST: ((AGENT NOUN-CLASS-RESTRICTION (CANINE)) (SITUATION NOUN-CLASS-RESTRICTION (PLACE)) (MOMENT NOUN-CLASS-RESTRICTION (TIME)) (MANNER))

This Elist states that "aboyer" can have an AGENT argument (which must be a canine), and arguments of MOMENT, SITUATION and MANNER. NOUN-CLASS-RESTRICTION introduces a restriction on the semantic category of nominal arguments.

3 Situations of choice arising during the two first steps are handled by a backtracking mechanism.
4 PR, PC and PP denote respectively a relative clause, a conjunctive clause and a main clause.
5 These names stand simply for "LIST of the complementeE" and "LIST of the complemenT".
6 All semantic specifications shown in this paper are only meant as illustrations of the concepts described, and not as parts of some universal model. In practice, separate specifications can be provided for each application domain.

3.2. Tlists

Semantic restrictions alone are not sufficient to obtain correct parses except in very simple cases. The possible roles of a term depend also on the way this term is used in a sentence. The basic idea of Tlists is to list explicitly the possible roles of every meaningful term or construct processed during a parse. Tlists are obtained by the parser in different ways. Tlists of terms such as adjectives and adverbs are specified in the dictionary as intrinsic properties of these terms. Thus the specification shown below states that the adverb "furieusement" ("furiously") can only fill the MANNER role:

(6) Furieusement: TLIST: ((MANNER))

A Tlist can also be inherited from another word. The Tlist of a noun, for example, is inherited from the preposition leading the nominal group. To each preposition is assigned a specific Tlist.
Thus "jardin" ("garden") in (4) will inherit the Tlist assigned to the French preposition "dans":

(7) Dans: TLIST: ((SITUATION))

A noun without a preposition, like "chien" ("dog") in (4), will be said to be introduced by a special dummy preposition called PHI, from which it will inherit its Tlist:

(8) PHI: TLIST: ((AGENT VOICE-RESTRICTION (VA)) (OBJECT VERB-CLASS-RESTRICTION (TRANSITIVE)) (BENEFICIARY VERB-CLASS-RESTRICTION (STATE EVENT)) (MOMENT) (INSTRUMENT) (NAME))

Tlists can also express restrictions, which bear on possible complementees. In fact, Elists and Tlists have exactly the same structure and will hereafter be referred to by the generic name of semantic frames. The above Tlist states that each noun without a preposition can play the following roles: AGENT of an active verb, OBJECT of a transitive verb, BENEFICIARY of a state or an event, MOMENT, INSTRUMENT or NAME. Subordinate clauses are processed like noun groups, except that the Tlist of the subordinated verb is inherited from the conjunction leading the clause. Thus, "jouent" ("play") in (1) will inherit from the conjunction "quand" ("when") a Tlist containing only the role MOMENT.

3.3. Establishing dependencies

The dependencies that can be established between two terms are determined by an "Elist/Tlist intersection mechanism" described here:

A1: Consider only the dependencies which are mentioned both in the Tlist of the complement and in the Elist of the complementee;
A2: Among these, retain only the dependencies for which all restrictions mentioned in the Elist and in the Tlist are satisfied and agreement rules, if any, are also satisfied.

The set of dependencies established between the terms of a given clause must furthermore satisfy global constraints: the same term cannot be tied by two dependencies of the same name, and the set of all dependencies established for any given structure (clause or group) must form a connected graph.
We shall apply these rules to example (4). The French word "chien" has at least two possible meanings: "dog" (which belongs to the class of canines) and "gun hammer", which belongs, say, to the material objects. "Chien" inherits the Tlist of PHI shown in (8). Comparing this Tlist with the Elist of "aboyer" in (5), rule A1 yields 2 possible dependencies: AGENT and MOMENT. For the first meaning of "chien", rule A2 will keep AGENT and discard MOMENT. The second meaning of "chien" doesn't satisfy any of the restrictions in the Elist of "aboyer" and will be discarded because it remains unconnected. Knowing furthermore that "jardin" denotes a place, the system finds easily that the only admissible result for the whole clause is:

(9) chien (dog) --AGENT--> aboie; furieusement --MANNER--> aboie; jardin --SITUATION--> aboie

The distinction between Elists and Tlists, illustrated in the above example, helps to reduce redundancy and ease specification. While Elists are intrinsic properties, Tlists can be inherited from prepositions and conjunctions; the Tlist of PHI shown in (8), which expresses in a few lines the possible semantic roles of prepositionless nouns in French, is a good example of the conciseness that can thus be achieved. It should be noted that unlike Wilks' paraplates (Wilks, 1975), Tlists are not exclusively related to prepositions. A Tlist is assigned to every meaningful term or construct processed by the system; moreover, and again unlike paraplates, new Tlists can be computed from old ones, as we shall show in section 5. It can also be noted that Elists and Tlists offer some similarities to Sowa's conceptual graphs (Sowa, 1984); however, while conceptual graphs are basically a representation formalism for concepts, Elists and Tlists were especially designed as a tool for parsing without the support of a grammar, and, as such, can include syntactic restrictions such as the voice restriction in (8).
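The intersection mechanism of rules A1 and A2 (in restriction mode) can be sketched as follows. The dict-of-dicts frame encoding and the tiny class table are our own illustration, not SABA's internal representation; voice restrictions and agreement rules are omitted:

```python
# Sketch of the Elist/Tlist intersection (rules A1 and A2), restriction
# mode. Frames are encoded as {role: {restriction-name: allowed-values}};
# the encoding and the tiny class table are our own illustration.

CLASSES = {"chien/dog": {"CANINE"},
           "chien/hammer": {"MATERIAL-OBJECT"},
           "jardin": {"PLACE"}}

def check(name, allowed, word_classes):
    # Only noun-class restrictions are modelled in this sketch.
    if name == "NOUN-CLASS-RESTRICTION":
        return bool(word_classes & set(allowed))
    return True

def intersect(tlist, elist, word_classes):
    roles = []
    for role, t_restr in tlist.items():
        if role not in elist:            # A1: role must appear in both frames
            continue
        both = list(elist[role].items()) + list(t_restr.items())
        if all(check(n, v, word_classes) for n, v in both):
            roles.append(role)           # A2: every restriction is satisfied
    return roles

# Toy versions of Elist (5) for "aboyer" and Tlists (7) and (8).
ABOYER_ELIST = {"AGENT": {"NOUN-CLASS-RESTRICTION": ["CANINE"]},
                "SITUATION": {"NOUN-CLASS-RESTRICTION": ["PLACE"]},
                "MOMENT": {"NOUN-CLASS-RESTRICTION": ["TIME"]},
                "MANNER": {}}
DANS_TLIST = {"SITUATION": {}}
PHI_TLIST = {"AGENT": {}, "OBJECT": {}, "BENEFICIARY": {},
             "MOMENT": {}, "INSTRUMENT": {}, "NAME": {}}
```

With these toy frames, the "dog" reading of "chien" keeps only AGENT, while the "gun hammer" reading satisfies no restriction at all, mirroring the walkthrough above.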
Lastly, let us note that the dual frames method supports preferences a la Wilks as well as more traditional mandatory restrictions, and this without needing additional specifications such as Wilks' exhaustive list of bare templates. We only require a slight modification of the rules of section 3.3. In the "preference mode", all dependencies which pass rule A1 will be considered as acceptable by the system, and rule A2 will be used to prefer, among them, the ones satisfying the greatest number of restrictions. This other mode has also been implemented in the SABA system, which can run either in restriction mode or in preference mode.

3.4. Classes and hierarchies

The SABA system offers the possibility of specifying, for each given domain, a hierarchy of classes, which will then be taken into account by the restriction checking mechanism. The system also accepts heterarchies (thus knives, for example, could be classified both as cutting tools and as piercing tools). To each individual concept is assigned a property CLASSLIST, which enumerates entry points for that term in the hierarchy. If several entry points are given, they are interpreted as a disjunction of classes. This rule is useful to specify different aspects of the same concept. Thus the word "departement" can denote a place, say in a store, or an animate collective (the set of employees working in the corresponding place), but does not always denote both concepts simultaneously. It will be simply specified like this:

(10) departement: CLASSLIST: (PLACE EH)

3.5. Robustness

As Carbonell and Hayes (1983) noted, case instantiation systems have some inherent robustness, stemming from the fact that they are more or less insensitive to the order of arguments. A French-speaking person, for example, could say "The big dog black" in a literal translation of (1), since adjectives, in French, can be placed behind the qualified noun. This kind of mistake is easily handled by the SABA parser.
However, the main advantage of semantic parsers does not lie in the insensitivity to some specific kind of mistake, but in the more fundamental fact that such parsers do not require an exhaustive specification of all syntactically admissible constructs. This makes it easier to define procedures searching for specific features in a flexible way. Sentence (11), where the negation marker is misplaced, and (12), where the interrogative construct is incorrect, illustrate some other kinds of mistakes that can be handled by SABA.

(11) Il n'a travaille jamais. (He worked never.)
(12) Tu aimes Marie? (You love Mary?)

4. SUBJECT IDENTIFICATION RULES

There are cases in which the basic Elist/Tlist intersection mechanism fails. One of these cases arises with semantically symmetrical structures, such as:

(13) John loves Mary.

where the AGENT of the loving action cannot be determined by semantic restrictions alone. A similar problem arises with passive voice: thus no amount of semantic restrictions could allow the parser to choose John as an AGENT in (14) and as a semantic OBJECT in (15).

(14) John has cheated.
(15) John was cheated.

Other semantic systems have faced these problems by introducing, in some way, positional restrictions. These restrictions are implicit in Riesbeck's expectation mechanism (Riesbeck, 1974), which is tied to a left-to-right parsing order, and in Wilks' preference system (Wilks, 1975), where everything crucially depends on the template matching order. They appear explicitly in Hayes and Carbonell's system (Hayes and Carbonell, 1981) as positional markers. A general inconvenience of positional restrictions, however, is that they are strongly related to a specific word order (usually the one used in active declarative sentences). In fact, what the above approaches are trying to do, in ad hoc and not fully satisfying ways, is to get around the crucial notion of syntactic subject.
We believe that even in a semantic system, the notion of subject is necessary to solve cleanly the problems mentioned above: John is the AGENT in (13) because he is the subject of an action in the active voice; he is the OBJECT in (15) because he is the subject of an action in the passive voice. We shall propose below a general and semantic way of determining and using the notion of subject in a semantic parser. Our solution is based on Fillmore's "subject selection rule" (Fillmore, 1968). The key idea is to define "subject restriction rules", which are kinds of inverted selection rules, and then to use these rules to identify the subject:7

Subject restriction rules:
1. The AGENT of an active verb must be the subject if it is introduced by PHI;
2. The INSTRUMENT of an active action verb must be the subject if it is introduced by PHI;
3. The OBJECT of a passive action verb must be the subject if it is introduced by PHI;
4. The BENEFICIARY of an event or a state must be the subject if it is introduced by PHI.

Subject identification rules:
1. If, in an attempted parse, an argument must satisfy a subject restriction rule in order to fill some case of a verb, then check if another subject has already been identified for that parse. If so, the restriction fails. If not, the argument will be chosen as the subject of the verb and the restriction will be satisfied.
2. In a successful parse, a subject must have been identified for every verb other than an imperative or infinitive.
3. In a successful parse, the subject must precede the verb, except in interrogative sentences when the subject is a personal pronoun.8

7 These rules do not cover the problem, which appears in English but not in French, of distinguishing between direct and indirect object.

Let us look again at example (13).
Assuming some typical action Elist for "love", the parser will match this Elist with the Tlist of PHI and find two possible interpretations:

(16) AGENT(love,John).OBJECT(love,Mary)
     OBJECT(love,John).AGENT(love,Mary)

In the first case, the subject will be identified as John, and in the second case as Mary. The second interpretation will be discarded because of rule 3. In example (15), "John was cheated", the interpretation taking John as AGENT will not allow the parser to identify a subject because no subject restriction rule will be activated; it will thus be discarded by rule 2 above.

5. COMPUTING SEMANTIC FRAMES

The usefulness of dual frames has been enhanced by defining operations for computing new semantic frames (Elists or Tlists) from existing ones. We distinguish two basic kinds of situations in which such computations are useful.

5.1. Semantic frames Union

In ambiguous situations, where different semantic frames of the same type (Elist or Tlist) could be used for the same term and choosing between them would be impossible or too difficult, it is possible to compute a resulting semantic frame as a kind of "union" of the given frames. The intuitive idea of semantic frame union is to keep all possible dependencies with the weakest restrictions. More precisely, the rules are:

Semantic frames union rules:
1. Every dependency mentioned in any one of the argument frames belongs in the resulting frame;
2. A restriction assigned to a dependency will belong to the resulting frame if and only if it belongs to all the argument frames;
3. If a restriction belongs to the resulting frame, its set of acceptable values is formed as the union of the sets of acceptable values of that restriction in all arguments.

An example of the use of semantic frames union may be found in the processing of pronouns. The roles that prepositionless pronouns can fill can be at least partially determined by taking into account the surface form of the pronoun itself. Thus "whom" can obviously not be the AGENT of an action, nor the BENEFICIARY of a state, while for "he" these two roles are allowed. These distinctions can be introduced by grouping pronouns into
Thus “whom” can obviously not be the AGENT of an action, nor the BENEFICIARY of a state, while for “he” these two roles are allowed. These distinctions can be introduced by grouping pronouns into 8 This rule handles verb/subject inversion in French interrogative sentences. different classes and by assigning to each class a Tlist which will be inherited by the pronouns of that class when they are introduced by PHI. Four classes of pronouns are used in the SABA system: S12 (pronouns of the two first persons that can be subject), S3 (pronouns of the third person that can be subject), OD (pronouns that can be di- rect object) and 01 (pronouns that can be indirect object). The Tlists assigned to S12, OD and 01 are: (17) $12: TLIST: ((BENEFICIARY VERB-~LA~~-RE~TRI~TION (STATE EVENT)) (OBJECT VOICE-RESTRICTION (VP>) (AGENT VOICE.wRESTRICTION (VA))) OD: TLI~T: ((OBJECT VOICE~RESTRICTION (VA))) 01: TLisT: ((BENEFICIARY)) S3 is the same as S12 but with an additional INSTRUMENT role. The only problem with the above method is that a pronoun may belong to several classes, as illustrated by the following examples: (18) Nom mangeons. (We eat) ( 19) II nous voit. (He sees us) (20) II nous parie. (He talks to us) The French pronoun “nous” belongs to S12, OD and OI! The Tlist of such pronouns will be determined by applying the “semantic frames union” operation to the Tlists of the different classes. For the Tlists of (17), this operation yields the following result: (21) nous: resulting TLIST: ((BENEFICIARY) (OBJECT VOICE-RESTRICTION (VP VA)) (AGENT VOICE-RESTRICTION (VA))) 5.2. Semantic frames Intersection In constraint situations, where different semantic frames of the same kind (Elists or Tlists) should sinlultaneously be taken into account for the same term, a resulting semantic frame can be computed as a kind of “intersection” of the given frames. Semantic frames intersection is used in the SABA system for the processing of coordinate structures. 
We apply a generalized version of one of Fillmore’s rules (Fillmore, 1968), stating that meaningful terms or structures can only be coordinated if they can play the same se- mantic role with respect to the rest of the sentence. In the dual frames method, this amounts to saying that the intersection of the Tlists of the conjuncts should not be empty. This intersection will then be taken as the resulting Tlist of the coordinate structure. Thus, in (22): (22) Jean viendra ce soir ou demain. (John will come this evening or tomorrow.) the Tlist of “ce soir” (“this evening”) wilI be inherited from PHI (as shown in (8)), and the Tlist of the adverb “demain” (“tomorrow”), which is an intrinsic property of that adverb, is shown in (23); the re- sulting Tlist of the coordinate structure “ce soir ou demain” will then be computed as shown in (24) (23) Demain: Tlist: ((MOMENT)) (24) ce soir ou demain: resulting TLIST: ((MOMENT)) The above example is deceptively simple. In the general case, the dependencies belonging to the intersection have associated restrictions which must be taken into account, as well as the hierarchies of admis- sible values for these restrictions. The general rules of semantic frames intersection are: Semantic frames intersection rules: 1. A given dependency will belong to the resulting semantic frame if and only if it belongs to all argument frames; 582 / SCIENCE 2. If a restriction is assigned to a dependency in at least one of the argument frames, it will appear in the resulting frame; 3. The set of acceptable values for any restriction in the resulting frame is the set (possibly empty) of all nearest common successors of the sets of acceptable values for the restriction in all argument frames. 6. SEMANTIC FRAMES INHERITANCE The idea of case inheritance was already expressed by Chamiak (1981). However, inheriting case names, as was suggested, is not suf- ficient. 
One must also account for possible restrictions or preferences, with their sets of acceptable values, and for the hierarchies, or even heterarchies, associated with these sets. The operations defined in the previous section allow us to define a powerful Elist inheritance mechanism which can combine Elists at different levels of the hierarchy. As an example, declaration (25) states that every action usually has an animate (EA) AGENT and arguments of SITUATION and of MOMENT, while (26) indicates that the AGENT of the specific action of "programmer" ("programming") must be a human (EH). These Elists will be combined in order to produce (27).

(25) (CLASS-PROPERTY ACTION
       ((AGENT NOUN-CLASS-RESTRICTION (EA))
        (MOMENT NOUN-CLASS-RESTRICTION (TIME))
        (SITUATION NOUN-CLASS-RESTRICTION (PLACE))))

(26) programmer: ELIST: ((AGENT NOUN-CLASS-RESTRICTION (EH)))

(27) programmer: resulting ELIST: ((AGENT NOUN-CLASS-RESTRICTION (EH))
                                   (SITUATION NOUN-CLASS-RESTRICTION (PLACE))
                                   (MOMENT NOUN-CLASS-RESTRICTION (TIME)))

The rules of semantic frames inheritance are the following:

Semantic frames inheritance rules:
1. Elists inherited from different predecessors in the hierarchy will be combined by using semantic frames intersection;
2. Elists inherited by different classes in the CLASSLIST of a term will be combined by using semantic frames union;
3. Elists at different levels of the hierarchy will be combined by using an operation called semantic frames merging, which keeps all possible dependencies but with the strongest restrictions. This merging combines rule 1 of semantic frames union and rules 2 and 3 of semantic frames intersection.

The effect of the third rule can be observed in example (27): all dependencies from (25) and (26) were kept, and the AGENT dependency retained the strongest restriction ("human" being considered as the nearest common successor of "human" (EH) and "animate" (EA) in the hierarchy).
7. CONCLUSION

As illustrated in this paper, the dual frames method offers several advantages with respect to existing semantic systems. The systematic separation of all semantic information between Elists and Tlists provides both flexibility and conciseness of specification; subject identification rules free the method from positional restrictions; and, lastly, a powerful inheritance mechanism based on well defined semantic frames operations eases the task of specifying a new application domain.

The use of well defined list or graph operations in natural language processing has received increasing attention. At first glance, our "semantic frames union" could be likened to Sowa's "join" (Sowa, 1984) or to the concept of unification (Shieber, 1985). However, unlike unification, frames union never fails, since its purpose is to handle ambiguities between possibly conflicting interpretations. In fact, the operation which can be taken as a special form of unification is the "semantic frames merging" of section 6. Let us note however that, while unification is usually presented as a fundamental primitive, frames merging derives from the two basic operations of frames union and frames intersection which, as we have shown, have reasons to exist in their own right.

Many other issues of the SABA parser were not discussed here, including resolution of lexical ambiguities, of attachment ambiguities, of quantifier scope and of pronoun reference. More details can be found in (Binot, 1985).

ACKNOWLEDGMENTS

Thanks are due to Yorick Wilks for his helpful and in-depth comments on the SABA system, to John Sowa, of IBM Systems Research Institute, for an interesting discussion about conceptual graphs, and to George Heidorn, Karen Jensen and Norman Haas, of IBM Research, for many judicious comments on previous versions of this paper.

REFERENCES

J-L.
Binot, A set-oriented semantic network formalism for the representation of sentence meaning, In Proc. ECAI-84, Pisa, 1984, pp. 147-156.

J-L. Binot, SABA: vers un systeme portable d'analyse du francais ecrit, Ph.D. dissertation, University of Liege, 1985.

J-L. Binot, P-J. Gailly and D. Ribbens, Elements d'une interface portable et robuste pour le francais ecrit, In Proc. Huitiemes Journees de l'Informatique Francophone, Grenoble, 1986.

B.K. Boguraev and K. Sparck Jones, How to drive a database front-end using general semantic information, In Proc. Conference on Applied Natural Language Processing, Santa Monica, 1983.

J.G. Carbonell and P.J. Hayes, Recovery strategies for parsing extragrammatical language, AJCL 9:3-4, 1983.

E. Charniak, The case-slot identity theory, Cognitive Science 5, 1981, pp. 285-292.

C.J. Fillmore, The case for case, in (Bach and Harms, eds.) Universals in Linguistic Theory, Holt, Rinehart and Winston Inc., New York, 1968.

P.J. Hayes and J.G. Carbonell, Multi-strategy construction-specific parsing for flexible data base query and update, In Proc. IJCAI-81, Vancouver, 1981, pp. 432-459.

G.E. Heidorn, Natural language inputs to a simulation programming system, Naval Postgraduate School Report, Monterey, California, 1972.

K. Jensen, G.E. Heidorn, L.A. Miller and Y. Ravin, Parse fitting and prose fixing: getting a hold on ill-formedness, AJCL 9:3-4, 1983.

C.K. Riesbeck, Computational understanding: analysis of sentences and context, Memo AIM-238, Stanford University, 1974.

R.C. Schank, M. Lebowitz and L. Birnbaum, An integrated understander, AJCL 6:1, 1980.

S. Shieber, An introduction to unification-based approaches to grammar, Tutorial session, 23rd Meeting of the ACL, 1985.

J.F. Sowa, Conceptual structures - information processing in mind and machine, Addison-Wesley, 1984.

J.F. Sowa and E.C. Way, Implementing a semantic interpreter using conceptual graphs, IBM Journal of Research and Development 30:1, 1986.

Y.
Wilks, An intelligent analyser and understander of English, CACM 18:5, May 1975.
A NEAT* THEORY OF MARKER PASSING

Eugene Charniak
Department of Computer Science
Brown University
Providence, Rhode Island 02912

Abstract

We describe here the theory behind the language comprehension program Wimp. Wimp understands by first finding paths between the open-class words in a sentence using a marker-passing, or spreading-activation, technique. This paper is primarily concerned with the "meaning" (or interpretation) of such paths. We argue that they are best thought of as backbones of proofs that the terms (words) at either end of the paths exist in the story, and show how viewing paths in this way naturally leads to the kinds of inferences which are normally thought to characterize "understanding." In a companion paper we show how this interpretation also accomplishes much of the work normally expected in the parsing of language (noun-phrase reference, word-sense disambiguation, etc.), so we only briefly touch on this topic here. Wimp has been implemented and works on all of the examples herein.

I Introduction

This paper describes Wimp (Wholly Integrated Marker Passer), a program which understands simple stories in English. Wimp uses incoming words (in particular the open-class words) as input to a marker passer which finds connections between these words. These connections, or paths, go to a path checker which makes sure that the paths "make sense" and extracts from them the facts which are needed to plausibly claim that the input has been "understood." (In particular we concentrate on questions of character motivation and causality.)

[Figure: the overall structure of Wimp, with the syntax component and the marker passer feeding the path checker, all resting on the knowledge representation.]

To summarize how Wimp works, consider its operation on "Jack went to the supermarket." We simplify by assuming that Wimp is given some preparsed internal representation. (In section 5 we briefly consider how Wimp works when it starts directly off the input English.)
The pre-parsed version looks like this:

    (inst go1 go)                    ; There is a going event go1
    (= (agent go1) jack1)            ; for which jack1 is the agent
    (= (destination go1) smarket1)   ; and smarket1 is the destination.
    (name jack1 jack)                ; Jack1 has the name "jack"
    (inst smarket1 smarket)          ; and smarket1 is a supermarket.

In line with previous work on story understanding and recognition of speaker intention [Sc77,Sc78,Wi78,Pe80,Wo81] we assume that a minimum "understanding" of the sentence would include the fact that Jack will be shopping at the supermarket mentioned in the sentence. Thus the result of the understanding process should be the addition of the following to the database:

    (inst super-shop1 smarket-shopping)   ; A supermarket shopping event
    (= (agent super-shop1) jack1)         ; for which jack1 is the agent,
    (= (store-of super-shop1) smarket1)   ; smarket1 is the store,
    (= (go-step super-shop1) go1)         ; and go1 (the reason for going) is the go-step.

We call these the abductive assumptions for the sentence. Wimp makes these abductive assumptions because it finds a path between "went" and "supermarket" which goes through smarket-shopping. Wimp's path checker considers this path to be a "proof" of the fact that "going" and "supermarkets" appear in the story. However, to make this proof go through it must make some assumptions, namely the abductive assumptions listed above. For Wimp to "believe" a path simply means to believe the abductive assumptions which are required for the path's proof to go through, and thus the assumptions are added to the database.

* "Neat" here is as opposed to "scruffy", Abelson's terms for the two styles of cognitive science research. This work has benefited from conversations with Lawrence Birnbaum, Tom Dean, Jim Hendler, and Drew McDermott, and has been supported in part by the National Science Foundation under grants IST 8416034 and ET 8515005 and the Office of Naval Research under grant N00014-79-C-0529.
Wimp is related to several strands of work within AI, the most obvious being the language comprehension work of [Gr84,Al85] and [No86]. All three of these systems use a marker passer to find paths which are then evaluated in some fashion to produce the inferences required for story comprehension. (This is also the model suggested in [Ch83].) The major differences between the work reported on here and these models are a) the current work, but not the others, provides a formal basis for evaluating paths, and b) the current work uses the path finding and evaluation process not just for finding important inferences, but also for all the aspects of language parsing which require semantic or real-world knowledge. In this latter regard it is somewhat like the work of [Ri85] and the early work of [Qu68]. Less obviously, Wimp is related to the resolution residues of [Ge84] in the use of resolution to produce explanations, and to the connection graphs of [Ko75] in the use of graphs over first-order formulas to find proofs. Lastly, since Wimp (among other things) tries to determine characters' plans, it is also related to work which has been done on this, such as [Wi78,Pe80,Wo81], although Wimp uses quite different methods.

II Marker Passing

We shall assume a database of first-order predicate-calculus formulas indexed by the terms which appear in them. (This is what our knowledge representation language Frail (FRame-based AI Language) [Ch82] gives us.) So terms have pointers to all formulas in which they appear, and the formulas point to their other terms. Thus we have a network where nodes are terms and links are first-order formulas. For example:

    (if (and (inst ?x smarket-shopping)   ; The store-of a
             (= (store-of ?x) ?str))      ; supermarket-shopping is
        (inst ?str smarket))              ; a supermarket.
(In Frail this rule would actually look like (inst (store-of ?x:smarket-shopping) smarket), where ?x:smarket-shopping says that ?x can be bound to any instance of a smarket-shopping, and the equality would be handled automatically. We ignore this here and use the more bulky traditional representation for such facts.) This rule would form a link between the tokens smarket-shopping and smarket and would be part of the path which Wimp uses to establish Jack's motivation for his action.

From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved.

Until section 5 we assume that the input comes pre-parsed, so Wimp passes marks from each predicate-calculus term as it comes in. Marks are 5-tuples, containing the origin of the mark, the node and link from whence it came, the "date" of creation, and a positive integer called "zorch" which is a rough measure of the "strength" of the mark. When marks are passed from a new origin there is an initial zorch, and each time a mark is passed from one node to the next the zorch is divided by the branching factor at the node just left. The zorch still available is recorded in the mark at the new node. Should it fall below 1, no further marks are propagated. Here we show what a portion of a Frail database would look like after being marked from go1 at time 2 and smarket1 at time 3 with an initial zorch of 75. (Prior nodes were deleted from marks to improve legibility.)

[Figure: a fragment of the marked network; among the recoverable marks are [go1, origin, 75, 2], [go1, isa1, 3, 2], [smarket1, origin, 75, 3], [smarket1, inst2, 75, 3], and [smarket1, store-of2, 15, 3].]

The date part of the mark allows marks to "decay" exponentially over time. Dates are measured by how many times the marker passer has been called to pass marks from a new origin. After a certain half-life (currently 4) the zorch at a node is divided in half. Should this cause it to fall below 1, the mark is removed.
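The zorch bookkeeping just described can be sketched as follows. This is a toy reconstruction, not the Frail code: the graph is a hypothetical rendering of the example network, each hop divides the available zorch by the branching factor of the node just left, propagation stops when zorch falls below 1, and the zorch of a path is the product of the two meeting marks' zorches divided by the branching factor at the meeting node.

```python
# Toy sketch of zorch propagation over the example network (names hypothetical).
# graph: node -> list of (link-name, neighbour); branching factor = len(edges).
from math import prod

GRAPH = {
    "go1":              [("inst1", "go")],
    "go":               [("inst1", "go1"), ("go-step", "shopping")],
    "shopping":         [("go-step", "go"), ("isa1", "smarket-shopping")],
    "smarket-shopping": [("isa1", "shopping"), ("store-of2", "smarket")],
    "smarket":          [("store-of2", "smarket-shopping"), ("inst2", "smarket1")],
    "smarket1":         [("inst2", "smarket")],
}

def pass_marks(origin, date, initial_zorch=75):
    """Return {node: (origin, from-node, link, date, zorch)} for one origin."""
    marks = {origin: (origin, None, None, date, initial_zorch)}
    frontier = [(origin, float(initial_zorch))]
    while frontier:
        node, zorch = frontier.pop()
        zorch /= len(GRAPH[node])      # divide by branching factor on leaving
        if zorch < 1:                  # too weak: stop propagating
            continue
        for link, nxt in GRAPH[node]:
            if nxt not in marks:
                marks[nxt] = (origin, node, link, date, zorch)
                frontier.append((nxt, zorch))
    return marks

marks_go = pass_marks("go1", date=2)
marks_sm = pass_marks("smarket1", date=3)

# An intersection is found wherever both origins left a mark, e.g. at smarket.
meet = "smarket"
path_zorch = marks_go[meet][4] * marks_sm[meet][4] / len(GRAPH[meet])

# This equals initial-zorch squared over the product of all branching
# factors along the path, the closed form given in the paper.
nodes = ["go1", "go", "shopping", "smarket-shopping", "smarket", "smarket1"]
assert path_zorch == 75 ** 2 / prod(len(GRAPH[n]) for n in nodes)
```

The figure of 75 is taken from the example in the text; the branching factors here are artifacts of the toy graph, so the numeric zorch values are illustrative only.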
If in the course of marking a node is found with marks from origins other than the one from which marks are currently flowing, the marker passer reports an intersection, and the link portion of the mark is used to reconstruct the path from the two origins. Zorch on marks also allows for a crude indicator of path "strength." This path zorch is the zorch on the mark, times the zorch of the incoming mark, divided by the branching factor on the node where they meet. This is, in fact, equal to the following:

    path-zorch = initial-zorch^2 / (branch_1 x branch_2 x ... x branch_n)

where branch_i is the branching factor at the i-th node in the path. Paths are reported back in decreasing path-zorch order. In the example network the marker passer would find two paths, of which we concentrate on this one:

    (go1 inst1 go go-step shopping isa1 smarket-shopping store-of2 smarket inst2 smarket1)

The atomic names on the links are the names of formulas, the content of which we describe later.

III Path Checking and the Meaning of Paths

We said that a path is the backbone of a proof that the terms at either end exist in the story. To be a bit more precise, it is a proof that the inst statement associated with the term is true. For the supermarket we want to prove (inst smarket1 smarket) is true. It may not be obvious why this is a reasonable thing to do, so an explanation is in order.

A standard platitude is that understanding something is relating it to what one already knows. The exact nature of such "relating" is not obvious, but one extreme example would be to prove that what one is told must be true on the basis of what one already knows. To a first approximation that is the view we take here, but qualified in two important ways. First, most of what one hears (e.g., "Jack went to the supermarket.") is new information, and thus not deducible from previous knowledge. Rather, we want to prove what one is told given certain assumptions.
That is, Wimp's path checker tries to create a conditional proof, where the conditions are the abductive assumptions.

The second constraint on our "proof" comes from the marker passer. Since Wimp is designed to help with parsing, it must work prior to syntactic analysis, and therefore works directly off the words. Thus there is no possibility of passing marks from the propositional content of the sentence; marks come only from terms denoted by words in the content. To do otherwise would require postponing marker passing until after parsing. Furthermore, marker passing requires having a pre-existing network. Thus to pass marks based upon the propositional content, from, say, "Jack went to the supermarket" to "Jack is at the supermarket" would require that such facts already be in the database, which they are not. Therefore, if our "proof" is to prove anything, it can only prove the inst propositions which started it all, since those are the only ones which can be deduced directly off the incoming terms (or, later, words).

Thus the path checker tries to prove the terms at the ends of the paths and uses the path as the backbone of the proof. In general the proof is a conditional one where the conditions are the abductive assumptions we have been talking about. One major problem in all of this is that if we are allowed to make assumptions we can prove anything at all (if only by assuming something false). Therefore there must be constraints on the assumptions we allow, a topic which we discuss shortly.

First let us make more precise the idea of treating the path as the backbone of a proof. The easiest way to do this is to treat this as a proof by contradiction using resolution. We therefore assume that all of our formulas are in conjunctive normal form, and the proof starts out by negating the conjunction of the inst formulas at either end. Wimp does not actually use resolution, but it is close, and it is easiest to see Wimp from that vantage point.
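A deliberately simplified, ground (variable-free) sketch of this path-proof idea: clauses are sets of string literals ("~" marks negation), the negated goal clause is resolved against the path formulas in order, and whatever literals remain, renegated, are the abductive assumptions. The formulas are pre-instantiated with a hypothetical skolem constant supershop1, so no unification is needed; Wimp itself works with variables.

```python
def resolve(c1, c2):
    """Resolve two ground clauses (sets of literals) on a complementary pair."""
    for lit in c1:
        comp = lit[1:] if lit.startswith("~") else "~" + lit
        if comp in c2:
            return (c1 - {lit}) | (c2 - {comp})
    return None

# Negation of the goal: that (inst go1 go) and (inst smarket1 smarket) hold.
clause = {"~inst(go1,go)", "~inst(smarket1,smarket)"}

# Path formulas, ground-instantiated with ?shp = supershop1 (hypothetical names).
path = [
    # go-step: the go-step of a shopping is a going
    {"~inst(supershop1,shopping)", "~go-step(supershop1,go1)", "inst(go1,go)"},
    # isa1: supermarket shopping is shopping
    {"~inst(supershop1,smarket-shopping)", "inst(supershop1,shopping)"},
    # store-of2: the store-of a supermarket shopping is a supermarket
    {"~inst(supershop1,smarket-shopping)", "~store-of(supershop1,smarket1)",
     "inst(smarket1,smarket)"},
]

for formula in path:
    clause = resolve(clause, formula)

# Renegating the leftover literals yields the abductive assumptions.
assumptions = {lit[1:] for lit in clause}
# assumptions == {"inst(supershop1,smarket-shopping)",
#                 "go-step(supershop1,go1)",
#                 "store-of(supershop1,smarket1)"}
```

Because the resolvents are exactly the formulas on the path, there is no search; the loop simply walks the path, which is the point the text makes about avoiding combinatorial explosion.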
Starting from the negated inst conjunction, each formula in the path is resolved against the remaining disjuncts, starting with the formulas at the two ends and working toward the middle. For reasons discussed later, Wimp may clash against the converse of the formula in the path. For the moment we ignore this. The resolvents in the procedure are those (and only those) found in the path. Thus there is no combinatorial explosion, since there is no search for resolvents. (The search is effectively done by the marker passer.) Let us consider the example of the path between go1 and smarket1 shown in the example network. The formulas used are these:

    go-step:   -(inst ?shp shopping) V -(= (go-step ?shp) ?go) V (inst ?go go)
    isa1:      -(inst ?shp smarket-shopping) V (inst ?shp shopping)
    store-of2: -(inst ?shp smarket-shopping) V -(= (store-of ?shp) ?str) V (inst ?str smarket)

We then start with

    -(inst go1 go) V -(inst smarket1 smarket)

This is then unified with go-step giving

    -(inst smarket1 smarket) V -(inst ?shp shopping) V -(= (go-step ?shp) go1)

We then resolve this against isa1 giving

    -(inst smarket1 smarket) V -(inst ?shp smarket-shopping) V -(= (go-step ?shp) go1)

This is resolved against store-of2 to give us:

    -(inst ?shp smarket-shopping) V -(= (go-step ?shp) go1) V -(= (store-of ?shp) smarket1)

At this point we have resolved against everything in the path. If this is a path that the system chooses to believe (more on why and how particular paths are chosen later) then the system must renegate the remaining clauses and add them to the database. Note that in renegating the clauses the universally quantified variable ?shp gets flipped to an existentially quantified variable, which is then turned into a skolem constant, giving this:

    (inst supershop1 smarket-shopping)
    (= (go-step supershop1) go1)
    (= (store-of supershop1) smarket1)

These are, of course, just the abductive assumptions we suggested would be the minimum for claiming to understand the sentence.

There are two complications to clear up before we move on to judging the relative "believability" of paths as interpreted by the proof process. First, note that some of the abductive assumptions left by the procedure could already be provable from the database. For example, if we already knew that Jack had a plan to shop at the supermarket, say supershop22, then one of the clauses left at the end of the path proof could be removed by clashing it against (inst supershop22 smarket-shopping). This, of course, changes the remaining clauses (which still become abductive assumptions), and they would now become:

    (= (go-step supershop22) go1)
    (= (store-of supershop22) smarket1)

That is, the going and the supermarket would be linked to the previously known supermarket shopping plan, rather than a newly minted one. In fact, Wimp tries to prove all of the abductive assumptions, and when this is possible it creates alternative versions of the path proof, one for each way of binding variables in the proofs, plus one where the initial abductive assumption is left unproved. Which of the alternatives is actually believed is decided by the mechanisms described in the next section.

The last thing which needs clearing up is why we sometimes resolve against the converse of the formula in the path. To see how this could arise, consider this path from smarket1 to milk1:

    store-of2:  -(inst ?shp smarket-shopping) V -(= (store-of ?shp) ?str) V (inst ?str smarket)
    purchased2: -(inst ?shp smarket-shopping) V -(= (purchased ?shp) ?fd) V (inst ?fd food)
    isa3:       -(inst ?mlk milk) V (inst ?mlk food)

Intuitively this corresponds to the chain of reasoning that supermarket shopping predicts the existence of supermarkets (as the store) and food (as the purchased), and milk is food. When we try to apply the path-proof procedure we get stuck after this:

    -(inst smarket1 smarket) V -(inst milk1 milk)                   ; initial disjunction
    -(inst ?shp smarket-shopping) V -(inst milk1 milk)              ; after store-of2
       V -(= (store-of ?shp) smarket1)

At this point nothing else can resolve. The problem is that the formulas are not sufficient to prove (inst milk1 milk) but at best only (inst milk1 food), since shopping at the supermarket does predict that there is food involved, but not necessarily milk. (Another example of this would be the path from "restaurant" to "hamburger" in "Jack went to the restaurant. He decided on a hamburger.") Suppose we resolve against the converse of isa3:

    (inst ?mlk milk) V -(inst ?mlk food)

Then we can carry this forward as follows:

    -(inst ?shp smarket-shopping) V -(inst milk1 food)              ; after converse of isa3
       V -(= (store-of ?shp) smarket1)
    -(inst ?shp smarket-shopping) V -(= (store-of ?shp) smarket1)   ; after purchased2
       V -(= (purchased ?shp) milk1)

We get similar assumptions, although using a converse counts as an assumption.

IV Selecting the Best Paths

We have seen how paths can be interpreted as proofs, and how the assumptions needed to make the proofs go through are the abductive assumptions required by story comprehension. We also noted that some of the paths are "believed", which means adding their abductive assumptions to the database. We now look at how the "believable" subset of the paths is singled out.

Roughly speaking, a path-proof goes through three stages in its route to belief. First, its abductive assumptions must be internally consistent, and consistent with what is already known. This is handled by trying to prove each assumption false, and if this fails, adding it to the database and trying the next one. Second, the assumptions must have predictive power, in that they must make enough true predictions about the text to warrant making the assumptions. And finally, there cannot be any other path which is equally good but which makes incompatible assumptions. (By incompatible we mean to include "contradictory", but allow for other forms of incompatibility as well. More on this later.)

Returning to the second stage, the basic idea is a crude approximation of the justification of scientific theories. A scientific theory is good to the degree it makes true predictions about the world, and bad to the degree that new assumptions are needed to make such predictions. To a first approximation the rule we use is that the number of true predictions minus the number of assumptions (a number we call the path's figure of merit) must be greater than or equal to zero. For example, in the "go to the store" example we have already seen that there are three assumptions:

    (inst supershop1 smarket-shopping)   ; call this shop-assum,
    (= (go-step supershop1) go1)         ; this go-assum,
    (= (store-of supershop1) smarket1)   ; and this store-assum.

There are, as well, three true predictions which follow from these assumptions, so the figure of merit is zero:

    (inst smarket1 smarket)          ; from shop-assum, store-assum, and store-of2
    (inst go1 go)                    ; from shop-assum, go-assum, and go-step
    (= (destination go1) smarket1)   ; from shop-assum, store-assum, go-assum, plus a rule
                                     ; (not given) that destinations of go-steps are the
                                     ; store-of shopping events

The idea of requiring paths to have explanatory power solves a puzzle which crops up in previous work. For example, [Al85] has a rule that prevents the marker passer from finding a path of roughly the following form:

    origin1 . . . plan1  planpart  plan2 . . . origin2

where the connections between planpart and the two plans say that planpart is a substep of both plan1 and plan2, or an object used in both. Alterman justifies this rule because one action or object is seldom used in more than one plan at a time. But killing two birds with one stone is generally considered a good thing to do, so it is hard to see the logical basis for such a rule.

The thing to notice from our point of view is that such paths typically have a large number of assumptions. For example, if we know of two reasons for getting a rope, say making a noose and jumping rope, then a path that went from make-noose to rope to jump-rope would usually require at least the following assumptions:

    (inst make-noose1 make-noose)
    (= (patient make-noose1) rope1)
    (inst jump-rope1 jump-rope)
    (= (instrument jump-rope1) rope1)

plus whatever else was required for the path. Unless noose making and jumping rope account for a great deal of evidence, such a path could not have sufficient explanatory power. Of course, if there were such predictions then this would be exactly the case where Alterman's rule would break down. Most obviously, if we already knew that the agent was planning one of these activities then some of the above would not have to be assumed and a two-birds path would come to mind. For example, "Jack decided to jump rope and then kill himself." Thus we see that Alterman's rule is a first approximation to a more complicated reality, one which the sufficient-explanatory-power rule captures much more accurately.

We said that the rule of at least as many predictions as assumptions is an approximate one. The actual rule is considerably more complicated, with what appear at this point to be special cases. There is, as well, a complication dealing with subsidiary evidence which is also rather ugly. In both cases we take the ugliness as an indication that the theory is inadequate at this point, and the section on future work spells out some of the problems and what we see as the best bet for improving things.

It is possible that there be several paths, each of which has sufficient predictive power by the above criteria, but which are not compatible. Thus all path-proofs with sufficient predictive power are compared by comparing their assumptions and predictions. For example, if two paths have the same assumptions they are equivalent, so obviously there is no point in believing both. We can arbitrarily ignore one or the other. Similarly, if two paths are contradictory Wimp should only believe one or the other, but here the choice is not arbitrary. Wimp chooses the path with the highest figure of merit. If two or more are tied for highest, then none are believed and all computations are thrown away in the hope that later evidence allows a decision.

There is one other case of some interest, and that is where the paths are incomparable since there are no contradictions in believing both, and they make different assumptions and predictions. Here we distinguish two cases: those which are truly compatible and those which are covertly incompatible. The basic idea is that for two path proofs to be truly compatible their conjoined merit (the number of predictions they make jointly, minus the number of assumptions they require jointly) must be greater than or equal to zero. Note that two path-proofs can individually be explanatory while jointly they are not, if they have, say, disjoint assumptions but share some predictions. This comes up in the cases where an action can be explained in more than one way. Both explanations by themselves might be explanatory, but together they are no good because they predict essentially the same facts. So a sentence like "Jack got a rope" could be explained by assuming he is jumping rope or hanging up laundry, but not both, because they share the same predictions (there is a get and a rope and the rope is the patient of the get) yet have different assumptions, so together they are not explanatory. In such cases Wimp believes only the better of the two. If neither is better then neither is believed. So after "rope" in "Jack wanted to kill himself. He got a rope." a path through making a noose is preferred because it explains more, while in "Jack got a rope. He wanted to kill himself." the explanations are initially equal, and it is not until the second sentence with "kill" that a path to rope (through making a noose) eventually finds an explanation of the action in the first sentence.

For the most part all paths reported by the marker passer are judged by the path checker. However, as a minor efficiency measure, if a believable path is found with path zorch p and no believable but conflicting paths are found with path zorch greater than p/10, then no further paths are considered. This eliminates from consideration many marginal paths.

It has also been noticed [Ch83,Al85] that isa plateaus (paths which meet due to two objects being in the same class, e.g., a path from "boy" to "dog" meeting at animal because they are both animals) are essentially useless and have to be pruned out. Similar arguments show how this rule also follows from our theory.

V Parsing with Wimp

We have now done what we set out to do: explain the meaning of the paths found in our marker-passing approach to language comprehension. While we could now stop, we cannot resist the opportunity to show the elegance of this theory by indicating how it solves problems found in parsing language, in particular those where one's understanding of the story is required to aid in the disambiguation of the input (e.g., noun-phrase reference, word-sense disambiguation, etc.). (For a better description, see [Ch86].)

Syntax is still separate from Wimp, so now Wimp gets, in effect, the phrase marker from the syntactic parser. So it is told that a certain word is the main verb, and that certain head nouns stand in various relations to it. For example, "Jack went to the supermarket" would be given to Wimp like this:

    (syntactic-rel subject went1 jack1)   ; Jack is the subject of went.
    (syntactic-rel to went1 smarket1)     ; He went "to smarket1."
    (word-inst jack1 "jack")              ; These relate particular
    (word-inst went1 "go")                ; instances of the words to their
    (word-inst smarket1 "supermarket")    ; dictionary entries.

These formulas are now used by Wimp as things to predict. Thus predictions can be either states of affairs in the story, or descriptions of what is in the sentence.

Note that a path must go from a word to one of the concepts which that word could denote before finding a path to another word. Thus all paths have in them a choice of word meaning, and this choice becomes explicit as an abductive assumption in the course of doing the path proof. In this way word-sense disambiguation is automatically done in the course of path proofs. For example, in "Jack went to the restaurant. He decided on a milkshake. He got the straw." a path is found from "straw" to drink-straw and then to drink, ending up at milkshake. This disambiguates "straw."

We handle noun-phrase reference in much the same way. Each new noun phrase is initially assumed to refer to a newly minted internal object. (We distinguish between "denote", which we take as a relation between a word (or symbol) and an object in the world, and "refer", which we use as a relation between a word and an object in the first-order language.) Wimp decides that a noun-phrase refers to an already present term by including an equality statement in the abductive assumptions of a path it believes. For example, in "Jack went to the supermarket. He found the milk on the shelf. He paid for it."
” the “it” at the end is assumed to refer to milk1 (a term created during the second sentence), because of the abductive assumption (= it1 milk1). This assumption is required because the best path proof for pay1 saw it as the paying step of the shopping already created in line one, and milk1 was already established as the purchased in this shopping event, and thus had to be the same as it1. (Currently our knowledge representation language only allows single objects as the fillers of the “slots” (first-order functions). If this were not the case this example would be more difficult, but the same reasoning should apply.)

Lastly, Wimp gives semantic guidance to its ATN parser. As each open-class word is parsed the ATN finds all possible parses up to that point and they (in the form given above) are handed off to Wimp. Then each path evaluates itself with respect to each of the possible parses. Since the parses produce formulas which are used as predictions, a path may predict the formulas in one parse but not another. For example, in “Alice killed the boy with poison” the “poison/kill” path predicts that “poison” modifies “kill” and not “boy” (the other possible parse). Thus each path selects the parse which maximizes its figure of merit. When a path is selected as the best, it in turn selects a parse (the one it used) and this parse is passed back to the ATN, which then follows up on this parse and kills off the rest. (A less extreme version might relegate the others to a “back burner” queue.)

VI Problems and Future Research

There are many areas where Wimp needs more work, from its basic knowledge representation to its ability to syntactically parse English. We concentrate here on the areas of direct relevance to this paper, namely marker passing and path checking. We noted earlier that the actual algorithm for determining sufficient explanatory power (as represented by the figure of merit) is more complex than we let on. 
For example, currently a “prediction” that there is a physical object in the story, or a person, is not counted as a prediction at all. This is implemented as a check on those candidates put forward as predictions. This is not unreasonable. After all, given the ubiquity of people in stories, predicting that there is a person is no big deal, as opposed to predicting a supermarket. What is unreasonable is that this is implemented as a special-case exception and that it is an all-or-nothing affair. Much better would be to somehow note that predicting a physical object is no prediction at all, a person only a little better, a parent still better, a telephone not bad, while a computer dial-up device is a pretty good prediction.

Another problem with the current scheme is what would happen in our “go to the supermarket” example if it knew that someone would go to the supermarket if he or she were dating someone who worked there. Depending on the exact axiomatization, Wimp might not be able to rule this possibility out, even if no mention of the date had been made.

We are currently looking at the use of probabilities to solve these and other problems. While we would keep the basic idea of path proofs, we would replace the idea of explanatory power by the probability that the abductive assumptions are true given the evidence from the story. So, for example, this would solve the first problem because in normal Bayesian updating the posterior probability of a proposition given some evidence is inversely proportional to the prior probability of the evidence (given various independence assumptions, which typically have to be made to keep the number of statistical parameters in reasonable bounds). This would exactly capture the gradation we suggested in how much various predictions should count.

However, probably the most controversial aspect of this work is the use of marker passing in the first place. 
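One way to cash out the graded-prediction suggestion above is to weight each predicted concept by its depth in the isa hierarchy, so that more specific concepts earn more credit. The hierarchy, the names, and the depth-as-weight choice below are all invented for illustration; this is a sketch, not the authors' proposal.

```python
ISA = {  # child -> parent; "physical-object" is the root
    "person": "physical-object",
    "parent": "person",
    "device": "physical-object",
    "telephone": "device",
    "dialup-device": "telephone",
}

def specificity(concept):
    # Depth below the root: predicting a bare physical object earns
    # nothing, a dial-up device quite a lot.
    depth = 0
    while concept in ISA:
        concept = ISA[concept]
        depth += 1
    return depth

def graded_merit(assumptions, predictions):
    # Figure of merit with graded prediction credit.
    return sum(specificity(p) for p in predictions) - len(assumptions)
```

This reproduces the ordering suggested in the text (physical object counts for nothing, a person a little, a dial-up device a lot), though a probabilistic account would derive the weights rather than stipulate them.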
The problem with marker passing is that it is not obvious if it can do the job of finding important inferences in a very large and interconnected database. Or to be more precise, can it find the important inferences without finding so many unimportant ones that it becomes useless as an attention-focusing device? Since Wimp to date has used a very small database (about 75 nodes and 225 facts) it provides no test. Indeed, Wimp finds a lot of garbage. For the simple examples we have run (the above examples, plus similar ones like “Jack put some ice in a bowl. The bowl was wet.” and “Jack wanted to use the stereo. He pushed the on-off button.”) an average call to the marker passer returns about 40 paths, of which 20 are quickly eliminated by a check for isa plateaus (paths which go up to meet at a common isa ancestor) and similar (but more technical) garbage. Of the remaining, about one out of ten (2 on the average) is actually a good path. At least one other researcher (Norvig, personal communication) has found about the same one-out-of-ten ratio. Nevertheless, the fear is that the ratio will worsen as the size of the database increases. Thus, while we intend to keep exploring the use of marker passing (there is no obvious alternative at this point), we surely intend to keep an open mind on its long-range utility. Interestingly, the major results of this paper, the idea of path-proofs and their relation to story understanding, aid us in thinking about a possible liberation from marker passing. The interpretation for paths we have suggested is independent of how those paths are discovered.

VII References

Al85. Alterman, Richard, “A dictionary based on concept coherence,” Artificial Intelligence 25(2), pp. 153-186 (1985).

Ch82. Charniak, Eugene, Gavin, Michael, and Hendler, James, Frail .Z..Z, Some Documentation, Brown University Department of Computer Science (1982). 
Ch83. Charniak, Eugene, “Passing markers: A theory of contextual influence in language comprehension,” Cognitive Science 7(3) (1983).

Ch86. Charniak, Eugene, “A Single-Semantic-Process Theory of Parsing,” Technical Report, Department of Computer Science, Brown University (1986).

Ge84. Genesereth, Michael R., “The use of design descriptions in automated diagnosis,” Artificial Intelligence 24, pp. 411-436 (1984).

Gr84. Granger, Richard H., Eiselt, Kurt P., and Holbrook, Jennifer K., Parsing with parallelism: a spreading-activation model of inference processing during text understanding, Irvine Computational Intelligence Project (1984).

Ko75. Kowalski, Robert, “A proof procedure using connection graphs,” Journal of the Association for Computing Machinery 22(4), pp. 572-595 (1975).

No86. Norvig, Peter, “Inference processes and knowledge representation for text understanding,” Ph.D. Thesis, Department of Computer Science, University of California at Berkeley (1986).

Pe80. Perrault, C. Raymond and Allen, James F., “A plan-based analysis of indirect speech acts,” American Journal of Computational Linguistics 6(3-4), pp. 167-182 (1980).

Qu69. Quillian, M. Ross, “The teachable language comprehender: a simulation program and theory of language,” Communications of the ACM 12(8), pp. 459-476 (1969).

Ri85. Riesbeck, Christopher K. and Martin, Charles E., “Direct memory access parsing,” Report 354, Department of Computer Science, Yale University (1985).

Sc77. Schank, Roger C. and Abelson, Robert P., Scripts, Plans, Goals, and Understanding, Lawrence Erlbaum, Hillsdale, N.J. (1977).

Sc78. Schmidt, C.F., Sridharan, N.S., and Goodson, J.L., “The plan recognition problem: an intersection of psychology and artificial intelligence,” Artificial Intelligence 11, pp. 45-83 (1978).

Wi78. Wilensky, Robert, “Why John married Mary: Understanding stories involving recurring goals,” Cognitive Science 2(3) (1978).

Wo81. Wong, Douglas, “Language comprehension in a problem solver,” Proc. IJCAI 7 (1981).
Using Commonsense Knowledge to Disambiguate Prepositional Phrase Modifiers

Dr. K. Dahlgren, IBM Los Angeles Scientific Center, 11601 Wilshire Blvd., Los Angeles, CA 90025

Abstract

This paper describes a method using commonsense knowledge for discarding spurious syntactic ambiguities introduced by post-verbal prepositional phrase attachment during parsing. A completely naive parser will generate three parses for sentences of the form NP-V-Det-N-PP. The prepositions alone are insufficiently precise in meaning to guide selection among competing parses. The method is imbedded in the Kind Types System (KT), which employs commonsense knowledge of concepts, including prototype and inherent features (generic information) and ontological classifications. The generic information is drawn from published psycholinguistic studies on how average people typically view the world. This method is employed in preference strategies which appeal to the meaning of the preposition combined with information about the verbs and nouns associated with it, drawn from the text and from the generic and ontological databases. These determine which syntactic structures generated by a semantically naive parser are commonsensically plausible. The method was successful in 93% of cases tested.

1. Semantically Implausible Syntactic Ambiguities

A problem for text understanding systems is that syntactic rules alone produce numerous ambiguities, many of which are not semantically possible (or likely) interpretations. Consider sentence (1), for which any standard parser would produce three distinct syntactic structures. Figure 1 is a syntactic tree showing the parse for (1) in which the key belongs to the lock. The with-phrase is a constituent of the noun phrase headed by lock (NP constituency). Figure 2 displays the parse in which the with-phrase is a constituent of the verb phrase (VP constituency). 
Figure 3 shows the parse in which the with-phrase modifies the sentence (S-modification), so that the event of buying the lock takes place with the key. Only one of these syntactic possibilities is semantically possible for (1), namely the one in which the prepositional phrase is a complement of the NP whose head is lock. Similarly, only VP constituency is semantically possible for (2), and only S-modification for (3).

1) John bought the lock with the key.
2) John bought the lock with five dollars.
3) John bought the lock in the afternoon.
4) John took the key to the lock.

Clearly, the semantically impossible syntactic ambiguities generated for (1)-(3) are spurious. On the other hand, some syntactic ambiguities correspond to possible semantic ambiguities. In sentence (4), both the VP constituency and NP constituency parses are semantically possible. It is easy to imagine a situation in which John physically carries the key over to the lock. However, in this case the preferred reading maps to NP constituency because the head of the to-phrase is typically ‘a part of’ or ‘used for’ the head of the direct object NP. A text understanding system that can guess NP constituency in this case is not only practical and workable, it is also superior to one which chooses randomly. The commonsense disambiguation method to be described in this paper assigns constituency for prepositional phrases according to commonsense preference, and the only ambiguities which remain after the preference strategy has been invoked are the semantically and commonsensically possible ambiguities, such as those in (5).

J. McDowell, University of Southern California, Los Angeles, CA 90069

Figure 1: NP constituency parse tree for (1). Figure 2: VP constituency parse tree. Figure 3: S-modification parse tree.

From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved. 
5) John saw the man in the park. John built the houses by the sea.

(1)-(4) are all of the form NP-V-NP-PP. The same considerations apply when there are multiple PPs. Consider sentence (6), which has eight parses, of which only one is semantically possible (to the pasture is a VP constituent, and in the afternoon modifies the S).

6) The boy took the cow to the pasture in the afternoon.

In addition to a standard syntactic lexicon, KT employs commonsense knowledge encoded in PROLOG axioms in several databases. The Ontological Database encodes basic distinctions such as REAL vs ABSTRACT. The Generic Lexicon lists prototypical and inherent features of nouns and verbs. The Feature Typing Database classifies features as colors, etc. The Kind Types Database lists constraints on feature types associated with kind types such as persons, artifacts, etc.

2. Using Commonsense Knowledge to Disambiguate

One solution to the problem of spurious ambiguities is semantically-driven parsing, which forces you to give up the speed and parsimony of autonomous parsing (Arens, 1981, Wilks, 1985). Another solution is to ask the user to disambiguate, as in Tomita (1985). This works well in the database querying environment, but not for text understanding, where human intervention is not feasible. A third possibility, to be described here, is to use the knowledge already needed to understand the text to eliminate spurious parses. Exemplifying the method with sentences (1)-(3): in (1), the with-phrase must be an NP constituent because locks typically have keys. English speakers know this, and that is why (1) is unambiguous. In (2), the with-phrase is a VP constituent because buy is a verb of exchange. In (3), the fact that afternoon is a temporal noun forces the interpretation in which the PP modifies the S, because only events have a temporal argument. 
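The three facts just cited each select a single attachment for (1)-(3). The following toy is purely illustrative (it is not the KT implementation, and the word lists are invented), but it shows the shape of the decision:

```python
GENERIC_HAS = {("lock", "key")}           # locks typically have keys
EXCHANGE_VERBS = {"buy"}                  # buy is a verb of exchange
TEMPORAL_NOUNS = {"afternoon", "morning"}

def preferred_attachment(verb, dobj, prep, pobj):
    if prep == "with" and (dobj, pobj) in GENERIC_HAS:
        return "NP"   # (1) ... the lock with the key
    if prep == "with" and verb in EXCHANGE_VERBS:
        return "VP"   # (2) ... bought the lock with five dollars
    if pobj in TEMPORAL_NOUNS:
        return "S"    # (3) ... bought the lock in the afternoon
    return "ambiguous"
```

Note that the generic-relation test must precede the exchange-verb test: both apply to buy ... with, and (1) shows that the noun-level fact wins.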
The method described here uses commonsense knowledge associated with concepts to choose among possible parses for a sentence with a prepositional phrase to the right of the verb. (Prepositional phrases in the subject of the sentence are not ambiguous in the same way, because PPs after the head of the subject noun must be NP constituents. We assume that preposed PPs, as in “In the spring, we go dancing”, show up at the end of the S after parsing.) A text understanding system can eliminate spurious parses by employing preference strategies in the spirit of Warren (1981). The disambiguation method is independent of the type of parser and grammar. After each parse is generated, and before semantic interpretation, word-level commonsense knowledge is employed to decide whether the parse is preferred. The knowledge used here derives from empirical psycholinguistic studies of prototypes associated with common nouns and with verbs (Rosch et al., 1976, Graesser, 1985, Ashcraft, 1976, Dahlgren, 1985). In order to make the system seem to understand the text, this knowledge is needed anyway (Hayes, 1985, Hobbs et al., 1985, Lenat et al., 1986). We found that in limited text understanding, where the grammaticality of the text can be assumed, the level of detail called for in Waltz (1981), Gawron (1982), and Herskovits (1985) is not necessary.

This paper will first outline the text understanding system and its use of commonsense knowledge, then describe how this knowledge is used in preference strategies for prepositional ambiguities, and finally list the preference strategy rules and discuss their implementation.

The Ontological Database reflects top-level category cuts in commonsense knowledge. It is an original cross-classification scheme, taking into account previous work (Keil, 1979, Tenenbaum, 1985). The ontology is represented as PROLOG axioms, and every common noun and verb which KT recognizes is attached at one of its leaves in a rule like (7). 
Using ontological information, the KT system deduces taxonomically inherited information about the sorts mentioned in the text, as described below. The Generic Lexicon contains prototypical and inherent features of common nouns which are taken from empirical studies of prototypes. The entry for chicken is shown in (8). The first argument is a list of prototype features, and the second a list of inherent features.

(7) bird(X) <- chicken(X).
(8) chicken({white, scratch(X), farm, meat, eggs, roost(X)}, {lay(X,Y) & egg(Y), handleable, haspart(legs,2), cluck(X)}).

The features with variables are logic translations of verb phrases. The Feature Typing Database types every feature which occurs in the Generic Lexicon as a color, size, behavior, function, and so on. Thirty-three feature types account for all of the features. The text in (9) is representative of the range of vocabulary and syntax KT works with.

9) John is a miner who lives in a small town. John raises a chicken which lays brown eggs. John digs for coal.

The text is translated into PROLOG axioms to form a Textual Database. Queries to the system are translated into PROLOG goals. Queries are answered by a problem solver which consults the Textual, Ontological, Generic and Typing databases. For a higher-level question such as “What color is the coal?”, KT finds the generic information for coal (by Prolog predicate matching). Then it compares all the colors in the Feature Typing Database to the list of generic features and finds that “black” is an inherent feature of coal. For a lower-level question, such as “Is the coal black?”, only the generic information is accessed. The system identifies higher-level questions by looking at a list of higher-level predicates (the types). The answers to queries can come from any of these sources, as illustrated in (10). 
3. Commonsense Knowledge in the Kind Types System (KT)

The Kind Types (KT) system (Dahlgren and McDowell, 1986) is written in VM/PROLOG. KT reads geography text using an IBM-internal parser and logic translator (Stabler & Tarnawsky), and partially understands it because of commonsense knowledge in its Ontology and Generic Lexicon. It can answer queries both of what the text says directly, and of information the typical speaker of English would infer from the text because of commonsense knowledge. KT’s representations are based upon a theory which identifies lexical meaning with commonsense knowledge of concepts, so that there is no difference in form of representation between word meanings and encyclopedic knowledge. This theoretical basis is shared with other approaches such as KL-ONE (Brachman and Schmolze, 1985), which differ in employing representations which are richer than first order logic.

(10) Textual Database
Is John a miner? -- Yes
Where does John live? -- In a small town

Ontological Database
What is a miner? -- A miner is a role, sentient, concrete, social, individual, real, and an entity.
What is digging? -- A motion, goal oriented, temporal, individual, and real.

Generic Database
Is the miner rugged? -- Probably so.
Does the chicken lay eggs? -- Inherently so.
Does John use a shovel? -- Probably so.

Typing and Generic Databases
What color is the coal? -- Typically black.
What size is the chicken? -- Inherently handleable.

4. Commonsense Knowledge used in the Preference Strategy

Our preference strategy assigns PP constituency according to information from several sources: syntactic information about the preposition (PREP) and the verb (V); information from the Ontological Database about the V, direct object (DO), and object of the preposition (POBJ); and information from the Generic Database about the V, DO and POBJ. Table 1 lists the preference rules preposition by preposition. 
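The question-answering behaviour behind “What color is the coal?” in (10) can be sketched as an intersection of the colors known to the Feature Typing Database with a noun's generic features. The feature sets and qualifier wording below are invented for illustration; KT's actual data and Prolog machinery differ.

```python
GENERIC = {  # noun -> inherent and prototype feature sets (illustrative)
    "coal": {"inherent": {"black", "burn"}, "prototype": {"underground"}},
    "chicken": {"inherent": {"handleable", "cluck"},
                "prototype": {"white", "farm"}},
}
COLORS = {"black", "white", "brown"}  # one of the 33 feature types

def color_of(noun):
    # Prefer an inherent color; fall back to a prototypical one.
    for strength, qualifier in (("inherent", "Inherently"),
                                ("prototype", "Typically")):
        hit = COLORS & GENERIC[noun][strength]
        if hit:
            return f"{qualifier} {sorted(hit)[0]}"
    return "Unknown"
```

The point of the sketch is the two-step lookup: the feature type (color) narrows the candidate features, and the strength of the generic entry determines how the answer is hedged.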
4.1 Ontological Class of Object of the Preposition

While it is possible for the POBJ to belong to any ontological class, membership in a small set of such classes restricts the PP-assignment possibilities for many prepositions. For example:

If the POBJ is temporal, the PP modifies the S. (in the morning, for six days)
If the Prep is by, the POBJ is sentient and the DO is propositional, the PP modifies the NP. (the book by Chomsky)
If the Prep is at or in and the POBJ is abstract, the PP modifies the S. (at once, in detail)
If the Prep is on, and the POBJ is propositional, the PP modifies the NP. (the book on love)

Sometimes it is necessary to consider not only the ontological class of the object of the preposition, but also the ontological class of the DO. For example, in the report by the committee it is necessary to know not only that report is PROPOSITIONAL, but that committee is SENTIENT. Other ontological classes which play a role in the preference strategies are PLACE, EMOTION, ROUTE, MEASURE, RELATION, and DIRECTION. A global rule assigns locative and directional PPs to the VP as in (11), though later specific prepositional rules may assign them as S modifiers if the global rule fails, as in (12).

11) John put the book in the living room.
12) John read the book in the living room.

PLACES can be social places (factory, hospital) or natural places (valley, mountain). EMOTIONS enter into PPs in such phrases as under duress, in fear, from hatred, with courage, etc. ROUTES are terms like way, road, path. MEASURES appear in PPs with to (to a degree, to a point). DIRECTIONS figure prominently in physical descriptions (to the North, on the South). PPs headed by with and without are NP constituents if the DO is a RELATION (connection with, contact with).

4.2 Ontological Class of the Direct Object

Much less crucial is information about the ontological class of the DO. 
As described above, the fact that a DO is PROPOSITIONAL is important only in the case of two classes of prepositional objects and then only for certain prepositions. If the DO is a MEASURE, then the PP is an NP constituent (enough food for their needs, much about the world).

4.3 Ontological Class of Verb

The nature of the verb itself can sometimes induce a preference for a PP assignment without reference to any other information. For instance, the verbs be and stand and other intransitive STATIVES like them automatically take any PP as a VP constituent (be on time, stand in the rain). For this reason the global rule for statives must precede the global rule assigning all temporal phrases as S modifiers. Mental verbs force VP constituency for PPs headed by of, for, and about (Gawron, 1982), as illustrated by (13) as opposed to (14).

13) John thought of his sweetheart while he waited.
14) John repaired the roof of the house.

Verbs of exchange, such as buy, typically take three arguments: the object exchanged, the source, and the goal. In addition, the medium of exchange and the benefactee of the exchange can be expressed. With such verbs, PPs headed by for, from, and with are VP constituents.

4.4 Generic Information

So far just the ontological classification of the verb and the NPs to its right have been considered. The Generic Database was encoded originally in order to describe prototypical and inherent features of objects for purposes of understanding the meaning of text. It was found to be useful as well in addressing the problem of PP attachment. The generic relations between nouns illustrated in this section are encoded in the Generic Database for nouns. In the case of with and on it is sufficient that the POBJ is mentioned in the prototype description for the DO (car with a wheel, birds with feathers, hair on the head). A locative relationship can also be part of generic information. A stove is typically in the kitchen, a book on a shelf, etc. 
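The verb-class preferences of Section 4.3 can be sketched with small illustrative word lists (the real system consults its syntactic and ontological databases rather than literal sets like these):

```python
STATIVE = {"be", "stand", "live"}    # illustrative members of each class
MENTAL = {"think", "dream", "know"}
EXCHANGE = {"buy", "sell", "pay"}

def verb_class_attachment(verb, prep):
    # Returns a forced attachment, or None when the verb's class
    # says nothing about this preposition.
    if verb in STATIVE:
        return "VP"                                   # stand in the rain
    if verb in MENTAL and prep in {"of", "for", "about"}:
        return "VP"                                   # thought of his sweetheart
    if verb in EXCHANGE and prep in {"for", "from", "with"}:
        return "VP"                                   # bought it with five dollars
    return None
```

On this sketch, (13) thought of forces VP constituency while (14) repaired ... of falls through to the preposition's own rules, which for of prefer the NP.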
The typical MATERIAL from which an object is made is included in the generic information for physical objects. A house is made from wood, a window from glass. We call SOLVERS the nouns which are typically associated with other nouns by means of to (the key to the lock, the answer to the problem). SIZE is a generic feature and is encoded in terms of 13 reality-related size ranges which have mnemonic names such as microscopic, person-sized, etc. An object must be of the order of at least two sizes larger than the subject of a sentence in order for it to be a suitable location for an action by the subject. That is, a PP headed by a locative preposition will be an S modifier if the prepositional object is a suitable location for the action of the subject, as in (15), and otherwise it will be a VP or NP constituent, as in (16).

15) John mixed the salad in the kitchen.
16) John mixed the salad in the salad bowl.

The relation between a verb and the generic INSTRUMENT with which the verbal action is carried out is encoded in the Generic Database for verbs along with other information, such as selectional restrictions on verb arguments. In (17), because knife is a typical INSTRUMENT for the verb cut, the PP is assigned to the VP, but it is assigned to the S in (18).

17) John cut the loaf with a knife.
18) John cut the loaf with glee.

4.5 Syntax

Certain syntactic constructions can also force PP interpretation. There is a large class of intransitive verbs which are ill-formed unless accompanied by certain prepositions (depend on, look for, make up, get along with, revolve about, cooperate with, turn to, divide into, provide for). These are conventional co-occurrence requirements, and they force the PP to be interpreted as a VP constituent. Adjectives + PP require attachment to the phrase (XP) containing the adjective (suitable for a child, useful for parsing). 
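The size test of Section 4.4 lends itself to a very small sketch. The scale indices below are invented (KT's 13 size ranges have mnemonic names, not these numbers); the point is only the two-ranges-larger comparison behind (15) and (16).

```python
SIZE_RANGE = {  # index into the 13 reality-related size ranges (invented)
    "john": 6,        # person-sized
    "kitchen": 9,
    "salad-bowl": 4,
}

def locative_attach(subj, pobj):
    # A locative PP modifies the S only when its object is at least
    # two size ranges larger than the subject.
    if SIZE_RANGE[pobj] >= SIZE_RANGE[subj] + 2:
        return "S"       # (15) ... mixed the salad in the kitchen
    return "VP-or-NP"    # (16) ... mixed the salad in the salad bowl
```

The kitchen clears the threshold and so counts as a suitable location for John's action; the salad bowl does not, leaving the PP inside the VP or NP.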
On the other hand, the comparative construction forces NP constituency (the largest book in the library, uglier than the man at the desk). We showed above that the ontological class of the POBJ can determine correct PP constituency. Generics and names are syntactic classes and also guide PP parsing.

19) John read the book on Peter the Great.
20) John read the book on dogs.

For each preposition there is a general rule which takes effect when all the specific rules fail. This is a kind of Elsewhere Condition for the syntax of prepositions. Generalizing the results of the study, locative PPs are NP constituents; directional PPs are VP constituents; and time/manner/place PPs are S modifiers. However, whether or not a PP falls into one of these classes is a function of the prepositional head in combination with the verb and the prepositional object, and not of the preposition alone.

5. Success Rate of the Preference Strategy

The PP rules were developed intuitively by considering the interaction of each preposition with one-, two- and three-place verbs. Then the PP rules were hand-tested on 4500 words of schoolbook geography texts, the original corpus upon which the lexicon and ontology in the KT system were built. The PP-attachment rules were developed independently of these texts, but the success rate was a surprising 100%. Then, as a check against these results, the rules were hand-tested on a second group of three short sample texts. These were (1) a 461-word excerpt from a novel, (2) a 415-word excerpt from a work on the history of science, and (3) a 409-word excerpt from a technical article. We assumed parser translation of the texts into strings of the form NP-V-NP-PP for submission to our rules. We also ignored passive by-phrases because the parser recognizes them as distinct from ordinary PPs. On the latter three texts the rules were tested as if the vocabulary was resident in our system. The overall success rate for the second group is 93%. 
The failures in these tests are of two types. The first type of failure was in idiomatic phrases, most of which have the function of asides or sentence qualifiers (at all, in effect, in every case, under my eyes, in particular, according to). We do not view this as a defect in our system, since any system must be able eventually to deal with idiomatic phrases. The second type of failure was outright failure of the rules. If we ignore the failures due to idiomatic phrases then the average success rate for the second group is much higher, 98%. One reason why the success rate is so high is the high occurrence rate of of-phrases. These constitute 32% of the second group. In every case we have seen so far, they attach to the NP immediately to the left.

6. Implementation

Preference rules for thirteen English prepositions are listed in Table 1. First, seven global rules are attempted. If none of these rules applies, the procedure relevant to the preposition is called upon. Although there is no single, general algorithm for assigning constituency for prepositions, three points compensate for this lack of generality: the set of prepositions in English is a closed and small set, some rules are used for several prepositions, and for each preposition the list of rules is short (usually three). The phrase structures which are input to the rules are: VP(V-DO-Prep-POBJ), VP(V-Prep-POBJ), VP(V-Adj-Prep-POBJ), VP(V-comparative) and VP(V-Prep-NP).

The seven global rules are listed below. Lexical (V + Prep) means that the relationship between the verb and the preposition is lexical, as described in Section 4.5. Stative (V) means the verb is stative, measure(DO) that the direct object is a measure. Adj and comparative mean that such a construction occurs in the sentence.

1. lexical(V + Prep) --> vp-attach(PP)
2. stative(V) --> vp-attach(PP)
3. time(POBJ) --> s-attach(PP)
4. xp(...Adj-PP...) --> xp-attach
5. measure(DO) --> np-attach(PP)
6. 
motion(V) & DO B endofclause --> vp-attach(PP)
7. comparative --> np-attach

To illustrate the application of the rules, consider the rule for of, applied to the sentence “John buys the book of poems”. The global rules are tried, and they fail. Then the first of-rule consults the Ontological Database to see whether the verb is mental. This fails, so the solution is NP constituency.

The with-rule illustrates more complex reasoning. Consider the sentence “Sam bought the car with the wheel.” The first with-rule consults the entry for car in the Generic Lexicon, looking for mention of wheel there, and finds it, as cars inherently have wheels. The rule succeeds and the PP is assigned NP constituency. In contrast, consider the sentence “Sam fixed the car with a wrench”. The global rules fail, and the first with-rule tests whether a generic relationship exists between the DO and the POBJ in the Generic Lexicon, and whether the DO is a relation in the Ontological Database, and fails. The next with-rule checks whether wrench is a typical instrument of the verb fix in the Generic Lexicon. This succeeds, so the PP is assigned VP constituency. Finally, consider the sentence “Sam fixed the car with Mary.” No generic relation can be found between car and Mary or fix and Mary, so the elsewhere rule applies, and the PP modifies the S.

These generic relationships exist for a number of prepositions but are not mentioned in the rules because they are subsumed by the elsewhere condition. For example, such relationships exist for uses of for, as in the wheel for the car and the cap for the jar, but since the rules are written so that NP-attachment is the elsewhere rule, this kind of relationship does not show up directly. In the in-rules, notice that first the generic relation of location (a place the DO is typically found) is checked for in the Generic Lexicon. 
If that fails, and VP constituency fails, a check is carried out in the Ontological Database for whether the POBJ is a place. This order captures the difference between (21) and (22). Our system chooses S modification for (23), but it is actually ambiguous.

21) John saw the horse in the barn.
22) John walked the horse in the city.
23) John saw the horse in the city.

The rules work for constructions which have no DO. There are several types of these. One is the type where the verb must always co-occur with a certain preposition (depend on). These are covered by the first global rule, which checks for such constraints in the Syntactic Lexicon. Another type is STATIVE verbs, as in “John lives in the house”, which are covered by the second global rule. Notice that intransitive constructions are excluded from the sixth global rule, which assigns VP constituency for sentences such as “John put the book on the table”. This means that the rules will prefer S-attachment in some cases where sentences are commonsensically ambiguous, as in (24).

24) John ran at the woman. John ran by the park.

Conclusion

The preference strategy presented here can be applied to the output of any type of parser, and the commonsense knowledge can be represented in any language desired. The content of the knowledge derives from available empirical studies. Thus the method is broadly applicable. The method interfaces autonomous syntactic and semantic components of a natural language understanding system, discarding implausible syntactic trees before they are fully interpreted semantically. It is also possible to apply the preference strategy during the parse, by first generating all the possible places to attach a PP, and looking ahead to parse the object of the PP. At this point all of the information needed by the preference strategy is available, and the rules can be applied, thus eliminating the expense of generating parses only to discard them later. 
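The control structure of Section 6 might be sketched as follows: try the global rules in order, then the preposition's own rules, ending with its elsewhere rule. The boolean flags below are stand-ins for lookups in KT's syntactic, ontological and generic databases, and only three of the seven global rules and one preposition are shown.

```python
def attach_pp(s):
    # --- global rules (abridged to three of the seven) ---
    if s.get("lexical_v_prep"):   # depend on, look for, ...
        return "VP"
    if s.get("stative_verb"):     # be, stand, live
        return "VP"
    if s.get("temporal_pobj"):    # in the morning
        return "S"
    # --- preposition-specific rules, e.g. the of-rules ---
    if s["prep"] == "of":
        if s.get("mental_verb"):  # thought of his sweetheart
            return "VP"
        return "NP"               # elsewhere: the book of poems
    return "unhandled"
```

For “John buys the book of poems” no global rule and no specific of-rule fires, so the elsewhere rule yields NP constituency, matching the worked example in the text.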
Table 1

of-rules
mental(V) --> vp-attach(PP)
Elsewhere --> np-attach(PP)

on-rules
location(DO,POBJ) OR generic(POBJ) OR name(POBJ) OR (propositional(DO) & abstract(POBJ)) --> np-attach(PP)
Elsewhere --> s-attach(PP)

for-rules
place(POBJ) OR sentient(POBJ) OR mental(V) OR exchange(V) --> vp-attach(PP)
distance(POBJ) --> s-attach(PP)
Elsewhere --> np-attach(PP)

at-rules
abstract(POBJ) OR place(POBJ) --> s-attach(PP)
Elsewhere --> np-attach(PP)

in-rules
abstract(POBJ) OR emotion(POBJ) OR place(POBJ) --> s-attach(PP)
Elsewhere --> np-attach(PP)

by-rules
location(DO,POBJ) --> np-attach(PP)
propositional(DO) & sentient(POBJ) --> np-attach(PP)
Elsewhere --> s-attach(PP)

under-rules
twosizeslarger(POBJ,SUBJ) OR propositional(POBJ) --> s-attach(PP)
Elsewhere --> np-attach(PP)

about-rules
mental(V) OR motion(V) --> vp-attach(PP)
Elsewhere --> np-attach(PP)

to-rules
solver(DO,POBJ) OR route(DO) --> np-attach(PP)
geometric(V) & direction(POBJ) --> s-attach(PP)
place(DO) & direction(POBJ) --> np-attach(PP)
measurement(POBJ) --> s-attach(PP)
Elsewhere --> vp-attach(PP)

with- and without-rules
partof(DO,POBJ) OR relation(DO) --> np-attach(PP)
instrument(POBJ,V) --> vp-attach(PP)
Elsewhere --> s-attach(PP)

from-rules
material(POBJ) OR emotion(POBJ) --> s-attach(PP)
exchange(V) --> vp-attach(PP)
Elsewhere --> np-attach(PP)

through-rules
Elsewhere --> s-attach(PP)

ACKNOWLEDGEMENTS

We gratefully acknowledge the suggestions of Edward P. Stabler and the support of Juan Rivero.
Beyond Exploratory Programming: A Methodology and Environment for Conceptual Natural Language Processing

Philip Johnson
Wendy Lehnert
Department of Computer and Information Science
University of Massachusetts
Amherst, Mass. 01003

Abstract

This paper presents an attempt to synthesize a methodology and environment which has features both of traditional software development methodologies and exploratory programming environments. The environment aids the development of conceptual natural language analyzers, a problem area where neither of these approaches alone adequately supports the construction of modifiable and maintainable systems. The paper describes problems with traditional approaches, the new "parallel" development methodology, and its supporting environment, called the PLUMber's Apprentice.

Introduction

AI software systems are rarely developed "by the book." Despite the plethora of design methodologies available today, AI programmers often perceive their problems as not amenable to solution through "structured" analysis and design techniques. Instead, we have come to rely on a set of unabashedly unstructured programming techniques or "design heuristics", collectively referred to as "exploratory programming." While these techniques may be sufficient for developing systems to test research hypotheses, they do not provide much aid in attaining the goals of production-quality systems: reliability, extensibility, and maintainability.

This paper describes a software development methodology and environment for conceptual natural language processing, a domain in which the desire for production-quality systems far exceeds their availability. The methodology attempts to find a middle ground between the inflexibility of structured design methodologies and the looseness of exploratory programming.
This is accomplished by extending the exploratory programming paradigm to include the "structured growth" of requirements and specifications as well as implementation-level descriptions of the system. From another perspective, it could be viewed as removing the strict ordering of requirements -> specifications -> design -> implementation from traditional design methodologies. To motivate this approach, we first examine the strengths and weaknesses of current design methodologies, and discuss why these approaches cannot be successfully applied to conceptual natural language processing. Next, the methodology and its supporting environment, called the "PLUMber's Apprentice", is introduced and examples of its use are described. Finally, several approaches for the automation of the design process within this paradigm are discussed.

Current Programming Methodologies

The majority of software development methodologies are based upon the traditional "software lifecycle," which divides the development process into the following stages:

Requirements: The needs of the user community are assessed and described, usually informally.
Specifications: Requirements obtained are used to produce a formal and complete description of the behavior of the final system.
Design: High-level algorithms and data structures are constructed which together implement the functionality of the specifications. Modules and the interfaces between them are specified.
Implementation: The design is translated into the source language.
Testing: The system is checked to ensure it runs correctly and implements all specifications.
Maintenance: The system is modified to support changes in functionality desired by the user community.

Many languages and tools have been developed for these phases, some examples of which are described in [1,15,20,25]. In addition, [3] gives an overview of several major design methodologies.
An assumption underlying this work is "linearity" in the software lifecycle: requirements can be specified and fixed before specifications, specifications before design, and so on. Thus tools and techniques for design can rely on complete and fixed specifications, for example. While this assumption is perfectly valid in many problem areas, it is often violated in AI applications. Sheil [23] describes some of the problems in using these approaches in problem domains where specifications cannot be completed and frozen in advance. Since specifications are used to generate the module structure and the interfaces between modules, the system structure reflects its initial functionality. Changes in the specifications which cut across module boundaries will be difficult to implement, due both to the inherent complexity of such a process and to the fact that features of structured languages (like strong typing) tend to complicate those types of changes.

Automating the implementation process (i.e., making specifications "executable") is one answer to the problem of frequently changing requirements [9,11,28]. If the system is responsible for generating the implementation, then system development centers around the maintenance of specifications. While human intervention is usually required to make the system-generated implementation efficient, this can often be delayed until the requirements for the system have stabilized. However, all of these approaches require a complete specification of the behavior of the system. Unfortunately, this type of specification language remains beyond the state of the art for problem areas such as vision, robotics, or natural language understanding, so the application of this approach in these areas does not appear to be imminent.

From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved.

The exploratory programming approach begins by dismissing this view of the software lifecycle altogether.
Sandewall [21] terms the development process "structured growth":

An initial program with a pure and simple structure is written, tested, and then allowed to grow by increasing the ambition of its modules... The growth can occur both "horizontally", through the addition of more facilities, and "vertically", through a deepening of existing facilities and making them more powerful in some sense.

As Sheil (op. cit.) explains, this approach necessitates a great deal of automated support, including sophisticated environments for entering, inspecting, debugging, and modifying code. In addition, he claims it requires a language with late binding and weak type checking, in order to minimize the amount of language-level design "rigidity". Unfortunately, the lack of higher-level descriptions for requirements and specifications has a cost. One benefit of these descriptions is that they can aid the developer in implementing a system which naturally reflects the structure of the problem domain. An exploratory programmer has no such guidelines, and thus system structures can become highly idiosyncratic, dependent primarily upon the order in which the programmer decided to "increase the ambition of his modules." Requirements and specifications also have an important role as documentation, which can also be lost in exploratory environments.

The PLUMber's Apprentice

Conceptual Natural Language Processing.

Conceptual natural language processing (CNLP) is not well suited to either traditional methodologies or exploratory paradigms. It suffers from a variety of developmental problems, including:

- CNLP development generally proceeds by first developing a system to handle a small number of sentences, eventually extending this system to cover the full domain. Frequently this process resembles the building of a "house of cards", where minor changes can cause the whole structure to come crashing down, requiring extensive redesign.
- Maintenance is a terrible problem.
In addition to the threat of a "fallen house of cards", CNLP designs tend to be highly idiosyncratic, based upon which particular subset of the domain was handled first and how the developer decided to extend it outward from there. It is often difficult or impossible to understand how a particular modification alters the global behavior without re-running the system on a large set of sentences.
- Due to the immaturity of the field, there are a variety of techniques for CNLP: novices learn primarily by experimentation and first-hand experience.
- Compounding the above problems is the fact that natural language interfaces often need to evolve quickly and continuously, both in the front end (the types of sentences to be handled) and the back end (the set of commands or representations output by the interface).

These problems make it difficult to apply existing methodologies successfully. The change in requirements and specifications as the front and back ends evolve makes the traditional life cycle model unsuitable. The limited availability of generic CNLP strategies precludes automated approaches. What appears to be necessary is a combination of traditional and exploratory approaches. If the developer could create and manipulate higher-level descriptions of the system, its conceptual structure and its limitations would become more apparent earlier in the development process. However, the developer must still be able to experiment with different implementations, and thus the high-level descriptions must be able to evolve easily and continuously with the implementation. The PLUMber's Apprentice is an attempt to achieve this goal of fluid yet explicit structure.

The "Consistency Criterion."

The PLUMber's Apprentice includes a set of languages which describe the developing system with differing levels and types of abstraction.
These languages are analogous to those developed for formalizing the initial phases of the traditional lifecycle model: requirements, specifications, design, and implementation. The PLUMber's Apprentice also includes facilities similar to the "power" tools of exploratory environments, including a graphical interface providing several views of the evolution of memory structures during system execution, and editors for creating and modifying each description of the system [10]. Unlike either traditional or exploratory facilities, however, the PLUMber's Apprentice also includes tools for assessing and maintaining the consistency of each system description with each of the other descriptions. This feature has a number of implications:

- Unlike the traditional lifecycle model, where each stage must be completed before moving on to the next, the PLUMber's Apprentice allows the descriptions to be developed in any order. Furthermore, the system allows these descriptions to co-exist in a state of partial completion, as long as what has been specified is not contradictory.
- Consistency checking allows development to occur under a variety of paradigms. The traditional top-down approach, proceeding from abstract to concrete descriptions, is, of course, supported. An "exploratory" paradigm, using simply the implementation language and the power tools, is also possible. But a more powerful "parallel" development process is also possible. Since partial descriptions of the system at any level of abstraction are supported, the developer is free to "explore" the system structure on all of these levels simultaneously. This corresponds more closely to the way in which programmers naturally develop software: for example, implementation issues often arise during requirements analysis, or during implementation the developer may become aware of additional implicit specifications.
This information is usually lost, if not to the original developers, then certainly to their successors. The PLUMber's Apprentice allows these decisions to be captured at whatever point in the development process they are discovered.
- Finally, these tools allow the languages to be used in surprising ways. When the implementation is found to be in conflict with the higher-level descriptions, the developer might modify the implementation to bring it into line with the "conceptual model". On the other hand, the developer might just as easily change the specification to more accurately reflect the new behavior. The mere existence of accurate high-level descriptions serves as an invaluable documentation aid. In addition, novices attempting to understand the system might test their understanding by adding new high-level descriptions and seeing if they are consistent with the actual system. When extensions to the system are made, the Apprentice can ensure that they are faithful to the "spirit" of the system as expressed by the higher-level descriptions.

PLUM: The Implementation Level Language.

PLUM (The Predictive Language Understanding Mechanism) incorporates principles of conceptual sentence analysis developed by a number of researchers in the field of natural language processing [5,6,7,8,12,16,18,19,24,26,27]. Unlike syntactic sentence analyzers that strive to produce a syntactic parse tree in response to a sentence, a conceptual sentence analyzer attempts to produce a conceptual meaning representation that captures the semantic content of a sentence. Particular constituents within a sentence (typically verbs) are associated with predictive concept frames that allow an understander to identify meaningful relationships among other parts of the sentence. For example, the active form of the verb "to give" will predict an actor or agent, an object, and a recipient.
Syntactically, the actor is expected to correspond to the subject of the sentence, the object corresponds to the direct object, and the recipient corresponds to an indirect object or object of a prepositional phrase. The goal of a conceptual analyzer is to fill each slot in its top-level concept frame with appropriate slot fillers found throughout the sentence. Slots inside frames can be filled with memory tokens (normally associated with noun phrases) or other concept frames. For example, the top-level concept frame for "John told Mary that he was going home" contains slots for an information source (John), an information recipient (Mary), and the information being transferred (the fact that John was going home). In this case, another concept frame representing John going home is needed to fill the object slot of the top-level concept frame. An excellent introduction to conceptual sentence analysis can be found in [22]. Documentation specific to PLUM is available in [14] and [13].

In PLUM, a declarative structure is associated with each concept frame. This structure is called a prediction prototype. Prediction prototypes describe not only the concept frame to be instantiated (added to memory) during sentence analysis, but everything PLUM requires in order to achieve this instantiation as well. Four key components give PLUM all it needs to know:

- The Concept Frame describes the frame structure and specifies any slot constraints that narrow the set of possible slot fillers appropriate for a frame instantiation.
- The Control Structure specifies useful search routines for each slot in the Concept Frame. Some searches must look backward for information already present in the sentence, while others must search forward, effectively waiting to see what the rest of the sentence holds in store. In either case, it may be necessary to halt a search at clause boundaries or other sentence boundaries.
The Control Structure describes such search parameters by means of a simple grammar called an Expect Clause. When PLUM reads a Prediction Prototype, it interprets all the Expect Clauses within the Control Structure, producing executable search routines for possible run-time execution during sentence analysis.
- The Predicted Prototypes are a list of other Prediction Prototypes that will be triggered if the current Prediction Prototype is successfully instantiated. Prediction Prototypes can be triggered during sentence analysis in two ways: (1) a word encountered in the sentence can trigger a Prediction Prototype, or (2) an instantiated Prediction Prototype can trigger new Prediction Prototypes. The first method corresponds to "bottom-up" sentence processing, while the second enables "top-down" sentence analysis. In conceptual sentence analysis, the general idea is to go bottom-up until you know enough to go top-down.
- The Required Slots list all slots in the Concept Frame that must be filled before the frame can be instantiated. This is an optional component within the Prediction Prototype, but a default requirement is assumed in the event that no Required Slots are specified. If this component is omitted, PLUM considers the corresponding Concept Frame to be instantiated only if one of its slots is successfully filled. Any slot will suffice, but at least one must be filled.

The idea of a Prediction Prototype is easiest to understand with a concrete example in hand. The following (slightly simplified) Prediction Prototype defines a Conceptual Dependency case frame for ATRANS, an abstract transfer of possession. This prototype is triggered by an active ATRANS verb (e.g. "to give").
(create-pred
  type (ATRANS)
  comment (triggered by 'give' & other active ATRANS verbs)
  concept-frame (actor = (animate)
                 act = ATRANS
                 object = (physobj)
                 source = (same-as actor)
                 recipient = (animate))
  control-structure ((expect actor in past referent)
                     (expect object in future referent)
                     (expect recipient in future referent)
                     (expect recipient in future direction-to value))
  predicts (cd)
  required-slots (actor object recipient))

In this prototype, all slots are filled by referent frames (corresponding to memory tokens created by noun phrases) that satisfy the slot constraints specified by the Concept Frame. The object being transferred is expected to be a physical object, while the recipient of the transfer is presumed to be animate. These slot constraints are needed to sort out direct objects and indirect objects, since the searches specified by the expect clauses only know to search for referents appearing after the verb. Notice that two expect clauses are defined for the recipient slot. One is designed to pick up recipients that appear as indirect objects (John gave Mary the book) and the other is designed to cover cases where the recipient appears in a prepositional phrase (John gave the book to Mary).

When a system designer builds a natural language interface with PLUM, he or she designs Prediction Prototypes as if programming in a highly declarative programming language. Since all slot constraints are readily found in the definition for the Concept Frame, and the searches used to fill these slots are described declaratively in the Control Structure, the role of any prototype in the processing of an individual sentence is relatively easy to understand. The difficulties arise when a large number of prototypes are designed to interact with one another in their slot-filling activities. It is much harder to anticipate all the possible interactions between prototypes that can arise in various sentences.
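The way such a prototype might drive slot filling can be sketched in a few lines. Everything here is a simplified illustration, not PLUM's actual machinery: the dictionary encoding, the fill_slots helper, and the flat token format are assumptions, whereas real Expect Clauses are interpreted into search routines over memory rather than matched against a token list.

```python
# Minimal sketch of prototype-driven slot filling (assumed representation).
ATRANS_PROTO = {
    "constraints": {"actor": "animate", "object": "physobj", "recipient": "animate"},
    # (slot, search direction, required preposition or None): simplified expect clauses
    "expects": [("actor", "past", None),
                ("object", "future", None),
                ("recipient", "future", None),
                ("recipient", "future", "to")],
}

def fill_slots(proto, referents, verb_pos):
    """referents: list of (position, semantic type, word, preposition) tokens."""
    frame = {}
    for slot, direction, prep in proto["expects"]:
        if slot in frame:
            continue  # slot already filled by an earlier expect clause
        for pos, rtype, word, rprep in referents:
            before = pos < verb_pos
            if (direction == "past") != before:
                continue  # wrong side of the verb for this search
            if prep is not None and rprep != prep:
                continue  # this clause demands a particular preposition
            if rtype == proto["constraints"][slot] and word not in frame.values():
                frame[slot] = word
                break
    return frame

# "John gave the book to Mary"
tokens = [(0, "animate", "John", None), (2, "physobj", "book", None),
          (4, "animate", "Mary", "to")]
print(fill_slots(ATRANS_PROTO, tokens, verb_pos=1))
# {'actor': 'John', 'object': 'book', 'recipient': 'Mary'}
```

Note how the animate/physobj constraints, not word order alone, keep "book" out of the recipient slot, echoing the paper's point about sorting out direct and indirect objects.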
The traditional manner of controlling interactions is by periodically checking the system out on a large testbed of sentences. Whenever a new Prediction Prototype is added to a system that previously checked out, there is the possibility that some side effect from the new prototype will cause new interference effects within the previously consistent system of Prediction Prototypes. Unfortunately, this process only uncovers system inconsistencies; it does not provide any direction either in pinpointing the source of the inconsistency within the prototype definitions, or in determining the necessary modifications to the system. Often the modifications made to remove one inconsistency simply produce another.

To combat this problem, the PLUMber's Apprentice provides two languages: abstract prototypes for describing the system structure, and abstract instantiation sequences for describing processing strategies. Use of these languages aids the developer in understanding not only how to extend the system successfully, but also when the current system design is not adequate for the desired modifications.

Abstract Prototypes.

To help the developer manage the complexity of dozens or hundreds of prototypes in a system, the PLUMber's Apprentice provides abstract prototypes. Abstract prototypes are similar to Smalltalk Classes and Zetalisp Flavors, enabling the developer to specify inheritance hierarchies of the properties of prototypes. While Flavors and Classes impose "top-down" development (requiring the abstract description before an instance of it can be made), abstract prototypes can be developed "bottom-up" or "middle-out", in addition to top-down. For example, the developer might first develop a set of prototypes, then develop the abstract prototypes to modularize the system and to capture the common properties between them.
Figure 1 contains one useful hierarchy for VMSPLUM [13], a prototype natural language help facility for the VAX/VMS command language. In this case, the leaf nodes are PLUM prototypes, the others being abstractions. Sometimes the abstraction can specify properties that must hold true of its instances' structures (such as the requirement that <command-frame>s search for command qualifiers). They might also group prototypes by related behavior, rather than structure (for example, <values>). In the former case, the Apprentice can check to ensure that an addition to that class has the required properties. In the latter case, new additions to the class can be checked for the prerequisite behavior. More than one inheritance hierarchy could be mapped onto the same set of prototypes, resulting in multiple "perspectives" or viewpoints on the system.

[Figure 1: A Partial Prototype Abstraction Hierarchy for VMSPLUM. The tree descends from <VMSPLUM-prototypes> through abstractions including <command-frame>, <qualification>, <qualifiers>, <values>, <structural>, <basic>, <extended>, <integers>, <time>, and <top-level> to leaf prototypes such as delete, copy, print, since, confirm, and help-frame, and literal values such as 1, 3, and 11:34.]

The developer could have generated the abstract prototype hierarchy in Figure 1 in several ways. Perhaps a small interface handling only <basic> commands without qualifiers was implemented initially, and the structure grew "upwards" and "outwards" from there. This corresponds to the "exploratory" approach. Or, the developer might have begun by considering all the cases that could occur, and built "downwards" from there, corresponding to the "top-down" approach. The most efficient path is probably somewhere in between; perhaps first developing a "rapid prototype" implementation, then creating an abstract structural model, and then using that model to systematically extend the interface. The hierarchy aids greatly in this extension by providing a kind of "semantic type checking" on the prototypes.
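The "semantic type checking" that the hierarchy enables might be sketched as follows. The class and method names here are invented for illustration; the Apprentice's real representation of abstract prototypes is not described at this level of detail in the paper.

```python
# Sketch: abstract prototypes as an inheritance hierarchy in which an
# abstraction can require properties of every prototype added beneath it.

class AbstractPrototype:
    def __init__(self, name, parent=None, required=()):
        self.name, self.parent = name, parent
        self.required = set(required)   # properties every member must have
        self.members = []               # implementation-level prototypes

    def all_required(self):
        """Collect required properties inherited from all ancestors."""
        req = set(self.required)
        if self.parent is not None:
            req |= self.parent.all_required()
        return req

    def add(self, proto_name, properties):
        """Add a prototype, rejecting it if a required property is missing."""
        missing = self.all_required() - set(properties)
        if missing:
            raise ValueError(f"{proto_name} lacks {sorted(missing)}")
        self.members.append(proto_name)

root = AbstractPrototype("<VMSPLUM-prototypes>")
cmd = AbstractPrototype("<command-frame>", parent=root,
                        required={"searches-for-qualifiers"})
basic = AbstractPrototype("<basic>", parent=cmd)

basic.add("delete", {"searches-for-qualifiers"})  # accepted
try:
    basic.add("copy", set())                      # rejected: inherited requirement
except ValueError as e:
    print(e)
```

The point of the sketch is that the check on "copy" comes from the <command-frame> abstraction two levels up, so extensions are vetted against the conceptual model rather than against the leaf class alone.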
Finally, the hierarchy provides aid to the novice in understanding the similarities between, and requirements of, the dozens or hundreds of prototypes in a PLUM interface.

Abstract Instantiation Sequences.

The control structure of the PLUM implementation language is decentralized, being specified within each of the prototypes. While this has the advantage of automatically defining a "local context" for the search of memory, and a natural notation for description of an individual prototype's control structure, it has the disadvantage of leaving implicit the global behavior of the system, and thus obscuring global processing strategies. Abstract Instantiation Sequences are designed to allow the developer to make explicit the sequence of PLUM events necessary and sufficient for prototype instantiation. By making this sequence explicit, the rationale for the structure of the expect clauses becomes much more obvious. The sequences also serve an important debugging function, by providing a method for catching unanticipated interactions among prototypes which might result in their addition to memory in a context unforeseen by the developer. The abstract instantiation language is related in spirit to Constrained Expressions [2,4]. The developer gives an algebraic expression, whose terms are "atomic PLUM events" and whose operators express sequencing and causality information. Atomic PLUM events are predicates whose arguments are simply prototypes, either abstract or implementation-level, and which denote the fundamental state changes of the system. For example, a few of the atomic PLUM events in VMSPLUM are:

(predict <command-frame>)
(Prediction of a member of the <command-frame> class of prototypes.)

(instantiate help-frame)
(Instantiation of the help-frame prediction prototype.)

(slot-fill <top-level> <command-frame>)
(A member of the <top-level> class of prototypes has a slot filled by a member of the <command-frame> class of prototypes.)
The operators determine the legal orderings of these events during system execution, and their relationship to each other. The current operators are:

- || (Strict Concatenation) System events must be directly contiguous in time.
- ... (Loose Concatenation) One event follows another, with any arbitrary intervening events.
- OR (Alternation) At least one of the events must occur.
- -> (Causality) The first event "causes" the following event. Causality is context-sensitive; verifying that Instantiate(X) -> Predict(Y) requires different analyses than verifying that slot-fill(X,Y) -> Instantiate(X).
- # (Non-ordered Occurrence) The specified set of events must occur, but in any order.
- NOT (Negation) The specified event or expression must not occur.

For example, one global processing strategy used in VMSPLUM is the following: The necessary and sufficient set of events for the instantiation of top-level frames are as follows. First, there must occur a prediction of an instance of a <top-level> prototype. After some (possibly zero) intervening events, an instance of a <command-frame> prototype is instantiated. This event causes the slot-filling of the <top-level> prototype, which causes its instantiation. Following this, no prototype of the class <command-frame> may be instantiated.

This strategy can be expressed by the following abstract instantiation sequence:

(def-instantiation-sequence (<top-level>)
  (predict <top-level>)
  ...
  (instantiate <command-frame>)
  -> (slot-fill <top-level> <command-frame>)
  -> (instantiate <top-level>)
  ...
  (not (instantiate <command-frame>)))

A few of the errors which this abstraction can identify are:

- Ambiguities in the sentence which cause the instantiation of two <command-frame> prototypes.
- Modification to help-frame such that slot filling by a <command-frame> prototype no longer results in instantiation of help-frame.
- Instantiation of help-frame without slot filling by a <command-frame> prototype.

Future Directions.
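A drastically simplified picture of checking a trace of atomic PLUM events against such a sequence can be sketched as follows. The representation and the matches function are inventions for illustration, covering only loose concatenation (causality collapsed into ordering) and a trailing NOT; the Apprentice's actual checker is not specified at this level in the paper.

```python
# Sketch: verify an event trace against a loosely ordered abstract
# instantiation sequence with a final negation constraint.

def matches(trace, pattern):
    """pattern: list of ('event', e) or ('not', e) items.
    ('event', e): e must occur, in order, with arbitrary intervening events.
    ('not', e):   e must not occur anywhere after the last matched event."""
    i = 0
    for kind, event in pattern:
        if kind == "event":
            while i < len(trace) and trace[i] != event:
                i += 1            # loose concatenation: skip intervening events
            if i == len(trace):
                return False      # required event never occurred
            i += 1
        else:
            if event in trace[i:]:
                return False      # forbidden event in the remainder
    return True

seq = [("event", ("predict", "<top-level>")),
       ("event", ("instantiate", "<command-frame>")),
       ("event", ("slot-fill", "<top-level>", "<command-frame>")),
       ("event", ("instantiate", "<top-level>")),
       ("not", ("instantiate", "<command-frame>"))]

good = [("predict", "<top-level>"), ("predict", "<command-frame>"),
        ("instantiate", "<command-frame>"),
        ("slot-fill", "<top-level>", "<command-frame>"),
        ("instantiate", "<top-level>")]
bad = good + [("instantiate", "<command-frame>")]  # a second command frame: ambiguity

print(matches(good, seq), matches(bad, seq))  # True False
```

The bad trace illustrates the first error class listed above: an ambiguity that instantiates two <command-frame> prototypes violates the trailing NOT.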
The PLUMber's Apprentice is currently under construction at the University of Massachusetts. While its facilities have only been applied to small systems, such as the VMSPLUM interface, initial results have been encouraging. A number of directions for future extension of the system are currently under study, including:

- A language for "assertions". Occasionally the developer may want to constrain the behavior of the system in a fashion not directly related either to the structure of the prototypes, or to the sequence of events necessary for their instantiation. Frequently this takes the form of restrictions on the input (form of the query), or output (form of the final memory structure), or some intermediate stage of processing. To accommodate this, the Apprentice provides a "hook" for developers. At the present time, this simply takes the form of a lisp function defined by the developer, which is called at a point in time during execution specified by the developer, and which may access (but not modify) the values of internal memory structures. Greater experience with these "assertions" about system behavior may allow the development of a language specifically for their expression.
- User testing of the methodology and environment. Much more experience with the environment needs to be obtained before definitive conclusions about its worth can be made. It would be interesting to determine what developmental "path" is taken by users in this environment: what are the methodological differences between novice and expert interface developers? Can abstract processing structures and strategies be identified and ported to new domains?
- Automated system extension by analogy. The abstraction languages provide a great deal of information about the structure and function of the implementation.
Enough high-level information can be specified about the VMSPLUM interface that the system can automatically add certain kinds of new VMS commands to the interface by analogy, though this requires making structural changes to certain prototypes, and adding new instances of abstract classes. It may be possible to provide facilities for extension by analogy at the system level, in addition to the prototype level.

- Automated prototype hierarchy generation. Given a set of prototypes, there exists an unbounded number of possible abstraction hierarchies. However, only a few of them reflect the conceptual structure of the system. It seems likely that by analyzing the use and occurrence of prototypes in successfully processed queries, automated aid in construction of the hierarchy could be provided.

Conclusion. Programming methodologies in artificial intelligence serve a dual purpose that is not present in other areas of computer science. On the one hand, those of us involved in AI applications are trying to build useful and reliable systems for large user populations. On the other hand, those of us who engage in basic research often write programs for the sole purpose of testing out an idea or investigating a new problem area. It is not reasonable to assume that a single programming methodology will be equally effective in the service of both goals [17]. But what should we do when technologies discovered by basic research entail a level of conceptual complexity that makes technology transfer into application systems problematic? We can either reject the technology as being unmanageable and ill-conceived (as Dijkstra might recommend), or we can design tools to help us manage these more demanding levels of program complexity.
Techniques in natural language processing provide especially compelling arguments for more powerful software development tools, since the information processing requirements of natural language are both highly demanding and highly idiosyncratic. While traditional programming methods encourage us to identify and exploit linguistic regularities, the heart of the natural language problem is more accurately characterized by inevitable exceptions to almost any rule, non-generalizable irregularities, assumptions that might be wrong, and adequate (as opposed to correct) interpretations.

This paper argues for a new set of languages to aid the design of natural language systems. More specifically, we are implementing a set of specification languages for a conceptually-oriented language analyzer, PLUM. These languages describe declarative and procedural information about a developing language interface at varying levels of abstraction. Unlike most languages for specification or design, the PLUM abstraction languages do not impose a developmental sequence: the designer may freely intermix design, specification, and implementation. The specification-level languages can be used for debugging, program understanding, and prototype synthesis, as well as for specification. This freedom appears to allow a more "ergonomic" development process, one that is better suited to the way people think about systems during their development.

Acknowledgements. This work was supported in part by NSF Presidential Young Investigator Award NSF IST-8351863 and in part by the Advanced Research Projects Agency of the Department of Defense, monitored by the Office of Naval Research under contract no. N00014-85-K-0017. The authors gratefully acknowledge comments and criticisms of the ideas in this paper by John Brolio and Jack Wileden.

REFERENCES

[1] M. Alford, "SREM at the Age of Eight: The Distributed Computing Design System", Computer, April 1985.

[2] G. Avrunin, L.
Dillon, J. Wileden, W. Riddle, "Constrained Expressions: Adding Analysis Capabilities to Design Methods for Concurrent Software Systems", IEEE Transactions on Software Engineering, Vol. SE-12, No. 2, February 1986.

[3] G. D. Bergland, "A Guided Tour of Program Development Methodologies", IEEE Computer, October 1981.

[4] R. Campbell, A. N. Habermann, "The Specification of Process Synchronization by Path Expressions", Lecture Notes in Computer Science, Vol. 16, Springer-Verlag, Heidelberg, 1974, 89-102.

[5] E. Charniak, "A Parser with Something for Everyone", Technical Report No. CS-70, Department of Computer Science, Brown University, Providence, RI, 1981.

[6] G. DeJong, "Prediction and Substantiation: A New Approach to Natural Language Processing", Cognitive Science, 3, 1979.

[7] M. Dyer, In-Depth Understanding: A Computer Model of Integrated Processing for Narrative Comprehension, MIT Press, Cambridge, MA, 1983.

[8] A. V. Gershman, "Knowledge-Based Parsing" (Ph.D. thesis), Research Report 156, Department of Computer Science, Yale University, New Haven, CT, 1979.

[9] C. Green, D. Luckham, R. Balzer, T. Cheatham, C. Rich, "Report on a Knowledge-Based Software Assistant", Tech. Report KES.U.83.2, Kestrel Institute, Palo Alto, CA, July 1983.

[10] P. Johnson, "Requirements Definition for a Plumber's Apprentice", in Proceedings of the Second Annual Workshop on Theoretical Issues in Conceptual Information Processing, May 1985.

[11] P. Kruchten, E. Schonberg, J. Schwartz, "Software Prototyping Using the SETL Programming Language", IEEE Software, October 1984.

[12] M. Lebowitz, "Memory-Based Parsing", Artificial Intelligence, 21, 4, 1983.

[13] W. Lehnert, K. Narasimhan, B. Draper, B. Stucky, M. Sullivan, "Experiments with PLUM", Counselor Project Technical Memo No. 2, May 1985.

[14] W. Lehnert and S. Rosenberg, "The PLUM Users Manual", Counselor Project Technical Memo No. 1, May 1985.

[15] B. Liskov, S.
Zilles, "Specification Techniques for Data Abstractions", IEEE Transactions on Software Engineering, Vol. SE-1, No. 1, March 1975.

[16] S. Lytinen, "The Organization of Knowledge in a Multilingual, Integrated Parser" (Ph.D. thesis), Research Report 940, Department of Computer Science, Yale University, New Haven, CT, 1984.

[17] D. Partridge, Y. Wilks, "Does AI Have A Methodology Different From Software Engineering?", Unpublished Manuscript, New Mexico State University, 1986.

[18] C. K. Riesbeck and C. E. Martin, "Direct Memory Access Parsing", Research Report 354, Department of Computer Science, Yale University, New Haven, CT, 1985.

[19] C. Riesbeck and R. Schank, "Expectation-based Analysis of Sentences in Context", Research Report 78, Department of Computer Science, Yale University, New Haven, CT, 1976.

[20] D. Ross, K. Schoman, Jr., "Structured Analysis for Requirements Definition", IEEE Transactions on Software Engineering, Vol. SE-3, No. 1, Jan. 1977.

[21] E. Sandewall, "Programming in an Interactive Environment: The Lisp Experience", Computing Surveys, 10(1), 1978.

[22] R. C. Schank and C. K. Riesbeck, Inside Computer Understanding, Lawrence Erlbaum Associates, Hillsdale, NJ, 1981.

[23] B. Sheil, "Power Tools for Programmers", Datamation, February 1983.

[24] S. Small, "Word Expert Parsing: A Theory of Distributed Word-based Natural Language Understanding" (Ph.D. thesis), TR-954, Department of Computer Science, University of Maryland, 1980.

[25] D. Teichroew, E. Hershey, "PSL/PSA: A Computer-Aided Technique for Structured Documentation and Analysis of Information Processing Systems", IEEE Transactions on Software Engineering, Vol. SE-3, No. 1, Jan. 1977.

[26] D. L. Waltz and J. B. Pollack, "Phenomenologically Plausible Parsing", in Proceedings of the 1984 American Association for Artificial Intelligence Conference, 1984.

[27] R. Wilensky, "A Knowledge-based Approach to Language Processing", in Proceedings of the Seventh International Joint Conference on Artificial Intelligence, 1981.

[28] P.
Zave, "An Operational Approach to Requirements Specification for Embedded Systems", IEEE Transactions on Software Engineering, Vol. SE-8, No. 5, May 1982.
A SIMPLE MOTION PLANNING ALGORITHM FOR GENERAL ROBOT MANIPULATORS

Tomás Lozano-Pérez
MIT Artificial Intelligence Laboratory
545 Technology Square
Cambridge, Mass. 02139 USA

Abstract: This paper presents a simple and efficient algorithm, using configuration space, to plan collision-free motions for general manipulators. We describe an implementation of the algorithm for manipulators made up of revolute joints. The configuration-space obstacles for an n degree-of-freedom manipulator are approximated by sets of n-1 dimensional slices, recursively built up from one dimensional slices. This obstacle representation leads to an efficient approximation of the free space outside of the configuration-space obstacles.

1. Introduction

This paper presents an implementation of a new motion planning algorithm for general robot manipulators moving among three-dimensional polyhedral obstacles. The algorithm has a number of advantages: it is simple to implement, it is fast for manipulators with few degrees of freedom, it can deal with manipulators having many degrees of freedom (including redundant manipulators), and it can deal with cluttered environments and non-convex polyhedral obstacles. An example path obtained from an implementation of the algorithm is shown in Figure 1.

The ability to automatically plan collision-free motions for a manipulator given geometric models of the manipulator and the task is one of the capabilities required to achieve task-level robot programming [15]. Task-level programming is one of the principal goals of research in robotics. It is the ability to specify the robot motions required to achieve a task in terms of task-level commands, such as "Insert pin-A in hole-B", rather than robot-level commands, such as "Move to 0.1, 0.35, 1.6".
The motion-planning problem, in its simplest form, is to find a path from a specified starting robot configuration to a specified goal configuration that avoids collisions with a known set of stationary obstacles. Note that this problem is significantly different from, and quite a bit harder than, the collision detection problem: detecting whether a known robot configuration or a path would cause a collision [1, 4]. Motion planning is also different from on-line obstacle avoidance: modifying a known robot path so as to avoid unforeseen obstacles [6, 9, 10, 11].

Although general-purpose task-level programming is still many years away, some of the techniques developed for task-level programming are relevant to existing robot applications. There is, for example, increasing emphasis among major robot users on developing techniques for off-line programming, by human programmers, using CAD models of the manipulator and the task. In many of these applications motion planning plays a central role. Arc welding is a good example; specifying robot paths for welding along complex three-dimensional paths is a time-consuming and tedious process. The development of practical motion-planning algorithms could reduce significantly the programming time for these applications.

A great deal of research has been devoted to the motion-planning problem within the last five to eight years, e.g., [2, 3, 5, 7, 8, 12, 13, 14, 16, 17, 19, 20]. But few of these methods are simple enough and powerful enough to be practical. Practical algorithms are particularly scarce for manipulators made up of revolute joints, the most popular type of industrial robot. I know of only three previous motion-planning algorithms that are both efficient and reasonably general for revolute manipulators with three or more degrees of freedom [2, 7, 12]. Brooks's algorithm [2] has demonstrated impressive results, but is fairly complex. Faverjon's algorithm [7], on the other hand, is appealingly simple.
The basic approach of the algorithm described here is closely related to the method described by Faverjon. Many of the details of the present algorithm, however, especially the treatment of three-dimensional constraints and the free space representation, are new and more general.

Figure 1. A path for all six links of a Puma, plus a three-fingered hand, obtained using the algorithm described here.

From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved.

The approach taken in this algorithm is similar to that of [7, 8, 12, 13] in that it involves quantizing joint angles. It differs in this respect from exact algorithms such as [17, 19]. On the other hand, the quantization approach lends itself readily to efficient computer implementation.

The purpose of this paper is to show that motion planning for general manipulators can be both simple and relatively efficient in most practical cases. I see no reason why motion planning should be any less practical than computing renderings of three dimensional solids in computer graphics. In both cases, there are many simple numerical computations that can benefit from hardware support. In fact, it is worth noting that in the examples in Figure 1 it took about the same time to compute the hidden-surface displays in the figures as to compute the paths.

Figure 2. Slice projection of a three-dimensional obstacle into a list of two-dimensional slices that are in turn represented by one-dimensional slices.

2. The Basic Approach: Slice Projection

The configuration of a moving object is any set of parameters that completely specify the position of every point on the object. Configuration space (C-space) is the space of configurations of a moving object. The set of joint angles of a robot manipulator constitute a configuration. Therefore, a robot's joint space is a configuration space. The Cartesian parameters of the robot's end effector, on the other hand, do not usually constitute a configuration because of the multiplicity of solutions to a robot's inverse kinematics. It is possible to map the obstacles in the robot's workspace into its configuration space [3, 4, 5, 13, 14]. These C-space obstacles represent those configurations of the moving object that would cause collisions. Free space is defined to be the complement of the C-space obstacles.

Motion planning requires an explicit characterization of the robot's free space. The characterization may not be complete; for example, it may cover only a subset of the free space. But, without a characterization of the free space, one is reduced to trial and error methods to find a path. In this paper we show how to compute approximate characterizations of the free space for simple manipulators. By simple manipulators we mean manipulators composed of a non-branching sequence of links connected by either revolute or prismatic joints (see [18] for a treatment of the kinematics of simple manipulators). We restrict the position of link zero of a simple manipulator to be fixed. Most industrial manipulators (not including parallel-jaw grippers) are simple manipulators in this sense.

The C-space obstacles for a manipulator with n joints are, in general, n-dimensional volumes. Let C denote an n dimensional C-space obstacle for a manipulator with n joints. We represent approximations of C by the union of n-1 dimensional slice projections [13, 14]. Each n-1 dimensional configuration in a slice projection of C represents a range of n dimensional configurations (differing only in the value of a single joint parameter) that intersects C.

A slice projection of an n dimensional C-space obstacle is defined by a range of values for one of the defining parameters of the C-space and an n-1 dimensional volume. Let q = (q1, ..., qn) denote a configuration, where each qi is a joint parameter, each of which measures either angular displacement (for revolute joints) or linear displacement (for prismatic joints). Let {q | qj ∈ [α, β]} be the set of all configurations for which qj ∈ [α, β], and let πj be a projection operator such that πj(q1, ..., qn) = (q1, ..., qj-1, qj+1, ..., qn). Then, the slice projection of the obstacle C for values of qj ∈ [α, β] is πj(C ∩ {q | qj ∈ [α, β]}). The definition of slice projection is illustrated in Figure 2. In the example above, joint j is called the slice joint while the other joints are known as free joints.

Note that a slice projection is a conservative approximation of a segment of an n dimensional C-space obstacle. An approximation of the full obstacle is built as the union of a number of n-1 dimensional slice projections, each for a different range of values of the same joint parameter (Figure 2). Each of the n-1 dimensional slice projections, in turn, can be approximated by the union of n-2 dimensional slice projections, and so on, until we have a union of one dimensional volumes, that is, linear ranges. This process is illustrated graphically in Figure 2. Note that the slice projection can be continued one more step until only zero dimensional volumes (points) remain, but this is wasteful.

Consider a simple two-link planar manipulator whose joint parameters are q1 and q2. C-space obstacles for such a manipulator are two dimensional. The one dimensional slice projection of a C-space obstacle C for q1 ∈ [α, β] is some set of linear ranges {Ri} for q2. The ranges must be such that if there exists a value of q2, call it w, and a value q1 ∈ [α, β], call it c, for which (c, w) ∈ C, then w is in one of the Ri (Figure 2).

A representation of a configuration space with obstacles is illustrated in Figure 3b, for the two link manipulator and obstacles shown in Figure 3a. The actual configuration space is the surface of a torus since the top and bottom edge of the diagram coincide (0 = 2π), as do the left and right edge. The obstacles are approximated as a set of q2 ranges (shown dark) for a set of values of q1. The resolution is 2 degrees along the q1 axis.

Figure 3. (a) Two link revolute manipulator and obstacles. (b) Two dimensional C-space with obstacles approximated by a list of one dimensional slice projections (shown dark). The initial and final positions of the manipulator are shown in the input space and the C-space.

For general manipulators with n links, the configuration space can be constructed as follows:

To compute C-space(i):
1. Ignore links beyond link i. Find the ranges of legal values of qi by considering rotating link i around the positions of joint i determined by the current value ranges of q1, ..., qi-1.
2. If i = n then stop, else sample the legal range of qi at the specified resolution. Compute C-space(i+1) for each of these value ranges of qi.

Observe that the basic computation to be done is that of determining the ranges of legal values for a joint parameter given ranges of values of the previous joints. This computation is the subject of Section 3.

The recursive nature of the C-space computation calls for a recursive data structure to represent the C-space. In my implementation I use a tree whose depth is n-1, where n is the number of joints, and whose branching factor is the number of intervals into which the legal joint parameter range for each joint is divided (Figure 4). The leaves of the tree are ranges of legal (or forbidden) values for the joint parameter n. Many of the internal nodes in the tree will have no descendants because they produce a collision of some link i < n.

The main advantage of a representation method built on recursive slice projection is its simplicity. All operations on the representation boil down to dealing with linear ranges, for which very simple and efficient implementations are possible.
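The recursive construction just described can be sketched in a few lines. In this hedged sketch (not the paper's code), `legal_ranges` is a hypothetical stand-in for the geometric computation of Section 3: given the joint index and the intervals already fixed for the previous joints, it returns the legal intervals for that joint. The tree layout is one plausible encoding of the data structure of Figure 4.

```python
# Sketch of the recursive C-space construction (assumed interfaces, not the
# paper's implementation). Nodes at depth i are (interval, subtree) pairs for
# joint i; leaves are lists of legal intervals for joint n.

def build_cspace(i, n, prefix, legal_ranges, resolution):
    """Build the tree of legal configurations for joints i..n, given the
    intervals in `prefix` already chosen for joints 1..i-1."""
    ranges = legal_ranges(i, prefix)       # geometric computation (Section 3)
    if i == n:
        return ranges                      # leaves: legal ranges for joint n
    tree = []
    for lo, hi in ranges:
        q = lo
        while q < hi:                      # sample at the given resolution
            interval = (q, min(q + resolution, hi))
            subtree = build_cspace(i + 1, n, prefix + [interval],
                                   legal_ranges, resolution)
            if subtree:                    # prune nodes with no legal children
                tree.append((interval, subtree))
            q += resolution
    return tree
```

With a toy `legal_ranges` that always returns the single interval (0, 1), a two-joint call produces two children (one per sampled interval of joint 1), each holding the leaf ranges for joint 2.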
The disadvantages are the loss of accuracy, and the rapid increase of storage and processing time with the dimensionality of the C-space. Contrast this approach with one that represents the boundaries of the obstacles by their defining equations [4, 5]. Using the defining equations is cleaner and more accurate, but the algorithms for dealing with interactions between obstacle boundaries are very complex. I believe that the simplicity of slice projection outweighs its drawbacks. These drawbacks can be significantly reduced by exercising care in the implementation of the algorithms [6].

Figure 4. The recursive nature of the C-space leads to a recursive data structure: an n-level tree whose leaves represent legal ranges of configurations for the robot manipulator.

3. Slice Projections for Polygons

The key step in our approach is computing one dimensional slice projections of C-space obstacles, that is, determining the range of forbidden values of one joint parameter, given ranges of values for all previous joint parameters. We will illustrate how these ranges may be computed by considering the case of planar revolute manipulators and obstacles.

Assume that joint k, a revolute joint, is the free joint for a one-dimensional slice projection and that the previous joints are fixed at known values. Note that we assume, for now, that the previous joints are fixed at single values rather than ranges of values; we will see in Section 3.3 how to relax this restriction. We require that the configuration of the first k-1 links be safe, that is, no link intersects an obstacle. This is guaranteed by the recursive computation we saw in Section 2. Given these assumptions, we need to find the ranges of values of the single joint parameter qk that are forbidden by the presence of objects in the workspace.

The ranges of forbidden values for qk will be bounded by angles where link k is just touching an obstacle.
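Finding these touching angles reduces to intersecting circular vertex paths with edge lines. The following is a minimal sketch of one such computation, for a link vertex rotating about the joint against a fixed obstacle edge; the names, argument conventions, and the segment filter are our assumptions, not the paper's code.

```python
import math

def critical_angles(joint, v, a, b):
    """Candidate angles at which vertex v, rotating on a circle about `joint`,
    crosses the line through obstacle edge (a, b). Only intersections lying
    inside the segment are kept (a minimal in-edge test)."""
    r = math.hypot(v[0] - joint[0], v[1] - joint[1])   # radius of v's path
    dx, dy = b[0] - a[0], b[1] - a[1]                  # edge direction
    fx, fy = a[0] - joint[0], a[1] - joint[1]
    # Solve |a + t*(dx,dy) - joint|^2 = r^2 as a quadratic in t.
    A = dx * dx + dy * dy
    B = 2 * (fx * dx + fy * dy)
    C = fx * fx + fy * fy - r * r
    if A == 0:
        return []                                      # degenerate edge
    disc = B * B - 4 * A * C
    if disc < 0:
        return []                                      # circle misses the line
    angles = []
    for t in ((-B - math.sqrt(disc)) / (2 * A),
              (-B + math.sqrt(disc)) / (2 * A)):
        if 0.0 <= t <= 1.0:                            # inside the segment
            px, py = a[0] + t * dx, a[1] + t * dy
            angles.append(math.atan2(py - joint[1], px - joint[0]))
    return angles
```

Each returned angle is only a candidate; as the text explains below, the remaining feasibility constraints must still be checked before it bounds a forbidden range.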
For polygonal links moving among polygonal obstacles, the extremal contacts happen when a vertex of one object is in contact with an edge of another object. Therefore, the first step in computing the forbidden ranges for qk is to identify those critical values of qk for which some obstacle vertex is in contact with a link edge or some link vertex is in contact with an obstacle edge (Figure 5).

The link is constrained to rotate about its joint; therefore every point on the link follows a circular path when the link rotates. The link vertices, in particular, are constrained to known circular paths. The intersection of these paths with obstacle edges determines some of the critical values of qk, for example, B in Figure 5. As the link rotates, the obstacle vertices also follow known circular paths relative to the link. The intersection of these circles with link edges determines the remaining critical values for qk, for example, A in Figure 5.

Figure 5. Contact conditions for computing one dimensional slice projections: (a) vertex of obstacle and edge of link; (b) vertex of link and edge of obstacle. The circles indicate the path of the vertices as the link rotates around the specified joint.

Determining whether a vertex and an edge segment can intersect requires first intersecting the circle traced out by the vertex and the infinite line supporting the edge to compute the potential intersection points. The existence of such an intersection is a necessary condition for a contact between link and obstacle, but it is not sufficient. Three additional constraints must hold (Figure 6): in-edge constraint - the intersection point must be within the finite edge segment, not just the line supporting the edge; orientation constraint - the orientation of the edges at the potential contact must be compatible,
that is, the edges that define the contact vertex must both be outside of the contact edge; reachability constraint - for non-convex objects, there must not be other contacts that prevent reaching this point.

The in-edge constraint can be tested trivially given the potential contact point and the endpoints of the contact edge. Since we know that the contact point is on the line of the edge, all that remains to be determined is whether it lies between the endpoints of the edge. This can be done by ensuring that the x and y coordinates of the contact point are within the range of x and y coordinates defined by the edge endpoints. Note that for contacts involving link edges and obstacle vertices, the position of the endpoints of the link edge must be rotated around the joint position by the computed value of the joint angle at the contact.

The orientation constraint can also be tested simply. All that is required is that the two edges forming the contact vertex be on the outside of the contact edge. Polygon edges are typically oriented so that they revolve in a counterclockwise direction about the boundary. Therefore, the outside of the polygon is on the right of the edge as we traverse the boundary. Given this, the feasibility of a contact can be determined from the orientations of the edges involved in the contact.

Figure 6. Given the intersection of a vertex circle and an edge line, the following conditions must be met for a feasible contact: (a) The contact must be in the edge segment; contact 1 satisfies this but 1' does not. (b) The edges that define the contact vertex must both be outside of the contact edge; contact 1 satisfies this but contact 2 does not. (c) The contact must be reachable; contact 1 satisfies this, but contact 3 does not (this condition is only relevant for non-convex objects).

Given all the contacts of the link with a given obstacle that satisfy the first two constraints, for each contact angle q we determine whether values of qk greater than q cause collision or whether values less than q cause collision (Section 3.2). The contact angles together with the collision directions can be merged to form the ranges of forbidden values for qk. This process is illustrated in Figure 7.

Figure 7. Constructing ranges of forbidden values using the potential contact angles and the collision directions.

Our discussion thus far has been limited to situations where all the joints except the last have known fixed values. The definition of one-dimensional slice projections allows all the joints, save one free joint, to be within a range, not just a single value. We can readily convert the slice projection problem (for ranges of joint values) to the simpler cross-section projection problem (for single joint values) we have already discussed. The idea is to replace the shape of the link under consideration by the area it sweeps out when the joints defining the slice move within their specified value ranges [13, 14]. Any safe placement of the expanded link represents a range of legal displacements of the original link within the specified joint ranges.

In most cases, instead of computing the exact swept volumes, we can use a very simple approximation method. Assume the manipulator is positioned at the configuration defined by the midpoint of all the joint value ranges specified for the slice projection. Compute the magnitude, εk, of the largest Cartesian displacement of any point on link k in response to any displacement within the specified range of joint values. If we "grow" each link by its corresponding radius εk, the grown link includes the swept area.

4. Slice Projections for Polyhedra

The basic approach described in Section 3 carries over directly to three dimensional manipulators and obstacles. There is, however, one significant difference: there are three types of contacts possible between three dimensional polyhedra.
The three contact types are (type A) vertex of obstacle and face of link, (type B) vertex of link and face of obstacle, and (type C) edge of link and edge of obstacle.

Let us consider type B contacts first. Each revolute joint is characterized by an axis of rotation. As the joint rotates, link vertices trace circles in a plane whose normal is the joint axis. The intersection of this circle with the plane supporting an obstacle face defines two candidate points of contact. As in the two-dimensional case, possible contacts must satisfy three constraints to be feasible: in-face constraint - the contact must be within the obstacle face; orientation constraint - all of the link edges meeting at the vertex must be outside of the obstacle; and reachability constraint - for non-convex polyhedra, there must not be any earlier contacts that prevent reaching this one.

The in-face constraint can be checked using any of the existing algorithms for testing whether a point is in a polygon. The orientation constraint can be enforced by checking that the dot products of the face normal with each of the vectors from the contact vertex to adjacent vertices is positive [5]. The reachability constraint is enforced exactly as in the two-dimensional case, by merging the forbidden angle ranges.

Type A contacts are handled analogously to type B contacts except that now the vertex belongs to an obstacle and the face to a link. The axis of rotation is still that of the manipulator joint.

Detecting type C contacts requires detecting the intersection of a line (supporting a link edge) rotating about the joint axis and a stationary line (supporting an obstacle edge). Of course, an intersection point must be inside both edge segments to be feasible. There is also an orientation constraint which is a bit more difficult to derive than those for type A and B contacts but not particularly difficult to check (for the derivation, see [5]).
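The circle/plane intersection underlying type A and B contacts can be sketched as follows. This is our own hedged formulation, not the paper's code: the rotating vertex moves on a circle of center c and radius r whose plane is spanned by orthonormal vectors u and v (both perpendicular to the joint axis), and the face lies in the plane n·x = d. Solving n·p(t) = d for p(t) = c + r(cos t · u + sin t · v) gives an equation of the form A cos t + B sin t = D, with at most two solutions.

```python
import math

def dot(p, q):
    return sum(x * y for x, y in zip(p, q))

def circle_plane_angles(c, r, u, v, n, d):
    """Candidate rotation angles where a vertex circling (center c, radius r,
    circle plane spanned by orthonormal u, v) crosses the plane n.x = d.
    The in-face and orientation tests would then be applied to each."""
    A = r * dot(n, u)
    B = r * dot(n, v)
    D = d - dot(n, c)
    m = math.hypot(A, B)        # A cos t + B sin t = m cos(t - phi)
    if m == 0 or m < abs(D):
        return []               # circle never reaches the plane (or lies in it)
    phi = math.atan2(B, A)
    delta = math.acos(D / m)
    return [phi + delta, phi - delta]
```

Each candidate angle yields a candidate contact point p(t), to which the in-face, orientation, and reachability constraints of this section are then applied.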
5. Free Space Representation

Having obtained a conservative approximation of the C-space obstacles, the free space is simply the complement of all the obstacles. Since the obstacles are ultimately represented as sets of linear ranges, the complement is trivial to compute. A two dimensional free space, for example, will be represented as a list of one dimensional slices. Each slice represents the ranges of legal values of q2 for some small range of values of q1. This is in itself a reasonably convenient representation of the free space but not very compact. If we were to try to find paths through the individual slices, a great deal of time would be wasted searching through nearly identical slices. A more compact representation is called for, one that captures some of the coherence of the free space.

The free space representation I use is made up of regions. A region is made up out of overlapping ranges from a set of adjacent slices (Figure 8). The area of common overlap of all the slices in a region is rectangular and called the region's kernel. In practice, we require some minimum overlap between slices in the same region to avoid very narrow kernels. Free space regions are non-convex and so points within the region may not always be connectable by a straight line. There is, however, a simple method for moving between points within the region: move from each point along its slice to the edge of the kernel and connect these kernel points with a straight line.

To search for a path between points in different regions requires representing the connectivity of the regions. We build a region graph where the nodes are regions and the links indicate regions with common boundary. Associated with each region are a set of links to adjacent regions; each link records the area of overlap. Regions have neighbors primarily in the q1 direction; for these neighbors, the range of q2 values at the common region boundary is stored with the link.
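As a rough sketch (the names and data layout are ours, not the paper's), the range complement that yields free slices, the overlap test used to grow a region and its kernel, and a textbook A*-style search over the region graph might look like:

```python
import heapq

def complement(forbidden, lo, hi):
    """Free ranges in [lo, hi], given sorted, disjoint forbidden ranges."""
    free, cur = [], lo
    for a, b in forbidden:
        if a > cur:
            free.append((cur, a))
        cur = max(cur, b)
    if cur < hi:
        free.append((cur, hi))
    return free

def overlap(r1, r2, min_width=0.0):
    """Common part of two ranges from adjacent slices (None if too narrow);
    a region's kernel is the running intersection of such overlaps."""
    lo, hi = max(r1[0], r2[0]), min(r1[1], r2[1])
    return (lo, hi) if hi - lo > min_width else None

def astar(start, goal, neighbors, h):
    """A* over the region graph; neighbors(n) yields (region, cost) pairs
    and h is an admissible heuristic (e.g. distance between kernels)."""
    frontier = [(h(start), 0.0, start, [start])]
    best = {start: 0.0}
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for nxt, cost in neighbors(node):
            g2 = g + cost
            if g2 < best.get(nxt, float('inf')):
                best[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None
```

The interval routines carry the representational burden here, which matches the paper's point that everything boils down to operations on linear ranges.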
Figure 8. (a) Region definition for two link C-space. The rectangular regions are the region kernels. The shaded area shows region Rx. (b) Region graph corresponding to the regions in part (a). The link labels indicate the existence of a common boundary in the q1 and/or q2 directions.

By construction, regions only have q2 neighbors at the 0 = 2π boundary; anywhere else the region is bounded above and below by obstacles.

In general, each n dimensional slice is represented as a list of n-1 dimensional slices, and one dimensional slices are a list of ranges of joint values. We have seen that two dimensional regions are constructed by joining neighboring one dimensional slice projections. In principle, we could construct three dimensional regions by joining neighboring two dimensional regions, and so on. Instead, for three dimensional C-spaces we simply build two dimensional regions for each range of values of the first joint parameter and represent the connectivity among these regions in the region graph (Figure 9). The connectivity is determined by detecting overlap between region kernels in neighboring two dimensional slices, that is, slices obtained by incrementing or decrementing the first joint parameter. When overlap exists, the area of overlap is associated with the corresponding link in the region graph. This method is readily extended to n dimensional slices by considering as neighbors slices obtained by incrementing or decrementing one of the first n-2 joint parameters used to define the two dimensional slice.

Path searching is done by an A* search in the region graph from the region containing the start point to the region containing the goal point.

Figure 9. Region connectivity for three dimensional slices; regions can have neighbors in the q1 direction.

6. Heuristics for Building the C-space

Having built a C-space, it may be searched repeatedly for different paths. Changes to the environment, however,
will cause parts of the C-space to be recomputed. In rapidly changing environments, it may not be appropriate to compute the complete C-space, since only small sections of the C-space will ever be traversed. The path shown in Figure 1 was computed using two simple heuristics to subset the C-space. First, plan a path for the first 3 links, using a simple bounding box for the rest of the manipulator (the last three links, the end-effector and the load). The origin and goal for this path are chosen to be the closest points in free space to the actual origin and goal. Having found such a path, there remains finding paths in the full-dimensional C-space between the actual origin (resp. goal) and the origin (resp. goal) of the path. This strategy has the effect of decoupling the degrees of freedom. For all these paths, we compute only the portion of the C-space bounded by the joint values of the origin and goal configurations.

7. Discussion

The main advantages of the algorithm described here are: it is simple to implement, it is fast for manipulators with few degrees of freedom, it can deal with manipulators having many degrees of freedom including redundant manipulators, and it can deal with cluttered environments and non-convex polyhedral obstacles. The total wall-clock time to compute the C-space obstacles and then plan a path for the two-link example shown in Figures 3 and 10 is six seconds on a Symbolics 3600 Lisp Machine with floating-point operations performed in software. These times could be improved by carefully re-coding the algorithm, but they are already quite a bit faster than a human using an interactive programming system (on-line or off-line).

Figure 10. (a) Regions for the example in Figure 3. (b) Path found between start (1) and goal (4) configurations. (c) Some intermediate configurations.
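Path search in the region graph described in Section 5 is an A* search over region nodes. The paper does not specify the heuristic, so the sketch below uses a zero heuristic (i.e., Dijkstra's algorithm) over a hypothetical adjacency structure; the graph and cost representations are assumptions for illustration:

```python
import heapq

def region_path(graph, cost, start, goal):
    """Shortest path through a region graph: nodes are free-space regions,
    links connect regions sharing a common boundary.
    graph: dict region -> iterable of adjacent regions
    cost:  dict (region, neighbor) -> positive traversal cost
    """
    frontier = [(0, start, [start])]  # (accumulated cost, node, path so far)
    done = set()
    while frontier:
        g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in done:
            continue
        done.add(node)
        for nbr in graph.get(node, ()):
            if nbr not in done:
                heapq.heappush(frontier, (g + cost[(node, nbr)], nbr, path + [nbr]))
    return None  # goal region unreachable
```

A consistent heuristic (e.g., straight-line distance between kernel centers in joint space) could be added to the priority to recover full A* behavior.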
The main disadvantages of the algorithm are: the approximations introduced by the quantization may cause the algorithm to miss legal paths in very tight environments, and the rapid growth in execution time with the number of robot joints. This last drawback is probably inherent in any general motion planner: the worst-case time bound will be exponential in the number of degrees of freedom [19].

The performance of this algorithm shows that motion planning algorithms can be fast enough and simple enough for practical use. I believe that in many applications automatic motion planning will be more time effective than interactive off-line programming of robots. In fact, the planning times will probably be on the order of the times required to perform hidden surface elimination in graphics systems.

630 / SCIENCE

Acknowledgments. This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the Laboratory's Artificial Intelligence research is provided in part by a grant from the System Development Foundation, in part by the Advanced Research Projects Agency under Office of Naval Research contracts N00014-85-K-0214 and N00014-82-K-0334, and in part by the Office of Naval Research under contract N00014-82-K-0494. The author's research is also supported by an NSF Presidential Young Investigator grant.

Bibliography

1. J. W. Boyse, "Interference Detection Among Solids and Surfaces", Comm. of the ACM, Vol. 22, No. 1, Jan. 1979.
2. R. A. Brooks, "Planning Collision-Free Motions for Pick-and-Place Operations", Intl. J. Robotics Research, Vol. 2, No. 4, 1983.
3. R. A. Brooks and T. Lozano-Pérez, "A Subdivision Algorithm in Configuration Space for Findpath with Rotation", in Proc. Eighth Int. Joint Conf. on AI, Aug. 1983. Also IEEE Trans. on SMC, Vol. SMC-15, No. 2, 224-233, Mar/Apr 1985. Also MIT AI Memo 684, Feb. 1983.
4. J. F. Canny, "Collision Detection for Moving Polyhedra", Proc. European Conf.
AI, 1984. Also MIT AI Memo 806, Oct. 1984.
5. B. R. Donald, "Motion Planning with Six Degrees of Freedom", MIT AI Tech. Rep. 791, May 1984.
6. E. Freund, "Collision Avoidance in Multi-Robot Systems", Proc. Second Intl. Symp. Robotics Research, Kyoto, August 1984. Published by MIT Press, Cambridge, Mass.
7. B. Faverjon, "Obstacle Avoidance Using an Octree in the Configuration Space of a Manipulator", Proc. IEEE Intl. Conf. Robotics, Atlanta, March 1984.
8. L. Gouzenes, "Strategies for Solving Collision-Free Trajectory Problems for Mobile and Manipulator Robots", Intl. J. Robotics Research, Vol. 3, No. 4, 1984.
9. N. Hogan, "Impedance Control: An Approach to Manipulation", Amer. Control Conf., June 1984.
10. O. Khatib and J. F. Le Maitre, "Dynamic Control of Manipulators Operating in a Complex Environment", Proc. Third CISM-IFToMM, Udine, Italy, Sept. 1978.
11. B. H. Krogh, "Feedback Obstacle Avoidance Control", Proc. 21st Allerton Conf., Univ. of Ill., Oct. 1983.
12. C. Laugier and F. Germain, "An Adaptive Collision-Free Trajectory Planner", Proc. Int. Conf. Adv. Robotics, Tokyo, Sept. 1985.
13. T. Lozano-Pérez, "Automatic Planning of Manipulator Transfer Movements", IEEE Trans. on SMC, Vol. SMC-11, No. 10, 681-698, Oct. 1981. Also MIT AI Memo 606, Dec. 1980.
14. T. Lozano-Pérez, "Spatial Planning: A Configuration Space Approach", IEEE Trans. on Computers, Vol. C-32, No. 2, 108-120, Feb. 1983. Also MIT AI Memo 605, Dec. 1980.
15. T. Lozano-Pérez, "Robot Programming", Proceedings of the IEEE, Vol. 71, No. 7, 821-841, July 1983. Also MIT AI Memo 698, Dec. 1982.
16. T. Lozano-Pérez and M. A. Wesley, "An Algorithm for Planning Collision-Free Paths Among Polyhedral Obstacles", Comm. of the ACM, Vol. 22, No. 10, 560-570, October 1979.
17. C. Ó'Dúnlaing, M. Sharir, and C. K. Yap, "Retraction: A New Approach to Motion Planning", 15th ACM STOC, 207-220, 1983.
18. R. P. Paul, Robot Manipulators, MIT Press, 1981.
19. J. Schwartz and M. Sharir,
"On the 'Piano Movers' Problem II", Courant Inst. of Math. Sci. Tech. Rep. 41, Feb. 1982.
20. S. Udupa, "Collision Detection and Avoidance in Computer Controlled Manipulators", Proc. Fifth Intl. Joint Conf. AI, Cambridge, 1977.
Tactile Recognition by Probing: Identifying a Polygon on a Plane

R. E. Ellis    Edward M. Riseman    Allen R. Hanson
Laboratory for Perceptual Robotics
Department of Computer and Information Science
University of Massachusetts, Amherst, MA 01003

ABSTRACT

An outstanding problem in model-based recognition of objects by robot systems is how the system should proceed when the acquired data are insufficient to identify uniquely the model instance and model pose that best interpret the object. In this paper, we consider the situation in which some tactile data about the object are already available, but can be ambiguously interpreted. The problem is thus to acquire and process new tactile data in a sequential and efficient manner, so that the object can be recognised and its location and orientation determined. An object model, in this initial analysis of the problem, is a polygon located on a plane; the case of planar objects presents some interesting problems, and is also an important prelude to recognition of three-dimensional (polyhedral) objects.

1. Introduction

This work addresses the question of how a robot equipped with a tactile sensor can recognise and locate an object in its workspace. Our system for the recognition of objects from tactile data has the following overall structure:

1. Acquire the initial set of tactile data.
2. Interpret these data by sequentially applying local and global geometric constraints between the data and the object models, i.e., find the possible translations and rotations of each model that are consistent with the data.
3. Repeatedly:
   • Find a path along which to move a sensor.
   • Execute the path, stopping when the sensor comes into contact with an object.
   • Interpret the acquired datum: either it uniquely identifies the object, or it reduces the set of interpretations to a new, smaller set.
This research was supported in part by the Office of Naval Research under Contract N00014-84-K-0564, by the General Dynamics Corporation under Grant DEY-601550, and by the National Science Foundation under Grant DCR-8318776.

This work concentrates on the problem of intelligently acquiring new data (the third principal item). The questions of how to acquire the initial data and how to interpret them, while important, are peripheral to the present discussion. In our research paradigm we suppose that there is a single object in the robot's workspace, and that some initial data-acquisition strategy, e.g., regular or random sensing, has been used to gather tactile data. These tactile data are contact points on the object's surface; each datum is a pair of vectors, representing the approximate location and local surface normal of that part of the object.

Briefly, our acquisition methodology is to examine unsensed portions of the object that is in the workspace. When there are multiple interpretations of the initial data (e.g., several different models could fit the data) there are a number of faces from different models that have not yet been sensed. If we imagine the models to be superposed, then some of the unsensed faces "line up" - if a sensor placed on the tip of a long rod were moved along a special line, it would pass through (or pierce) these faces. Since only one of these model interpretations can really occur, we can often tell which one is the correct one by determining which face was hit, i.e., which position and local surface normal were actually detected by the sensor. In some cases there will still be ambiguity, but it will usually have been reduced.

Identification of an object from ambiguous data can thus be accomplished if a line can be found that passes through an unsensed face of each valid model interpretation.
Our method for finding these lines involves changing the representation of the problem, and asking what sheaf of lines can possibly pass through each unsensed face of each model. The intersection of the sheaves of a set of faces is the sheaf of lines that pass through all of the faces. We show that it is possible to find an element of this intersection - and thus find a sensing path for the robot - in an efficient and general manner.

From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved.

1.1 Related Work

There is very little work, past or present, on ways of intelligently and automatically acquiring tactile data for the purposes of object identification. There are only two reasonably well-known works of significance: that of Allen and Bajcsy [1], and Luo et al. [6]. The former work used vision to reduce the number of possible models, and used surface-following to verify the model instance; this approach, while effective in the experiments they describe, is very time-consuming. The latter work also used vision initially, and simple tactile features subsequently, to search through a decision tree; the sensing strategy is very simple, consisting of repeated rotation of the sensor about the object. While it is effective in simple cases, the authors point out its shortcomings in dealing with smooth or highly symmetric objects.

The recognition methodology we follow here is that of Grimson and Lozano-Pérez, who in several papers have detailed a model-based tactile recognition approach [3], [4], [5]. In this methodology, an object is represented as a polygon, i.e., as a set of line segments. The tactile data consist of a position, a local surface normal, and the maximum error value of each of these quantities (an error circle and error cone, respectively). Interpretation of tactile data consists in finding an assignment of each datum to a model face.
Our approach extends this methodology by showing how to acquire new data when multiple valid interpretations exist. Interpretation of these new data is very rapid, because the path along which the sensor is moved has been calculated from the known valid interpretations; there are very few assignments of a newly acquired datum, and most of the possible assignments can be rapidly predicted from the geometric relationships between the path and the models. The approach presented here, and related issues, are discussed more fully in [2].

2. Piercing a Set of Line Segments

The principal conceptual problem in finding a path along which to move a tactile sensor is that of finding a line that pierces a number of line segments (or faces - the terms will be used interchangeably below). The line is the sensing path, and the segments are unsensed faces from the various interpretations of the data. There are a number of other non-trivial considerations, e.g., how to evaluate the path and how to find it efficiently, but the core problem is that of finding the parameters of a line that passes through a set of segments.

2.1 Finding the Path Parameters

In the plane, a line has two degrees of freedom, one of position and one of direction. It is possible to restrict the starting position of the sensing path to lie on some locus; without loss of generality, let us suppose that the starting locus is the X axis.¹ The endpoints of each face can be expressed as a pair of points, and the face is a line segment between these points. Thus, we seek the parameters of a line that intersects the X axis, and pierces each of a given set of line segments.

The path parameters can be expressed as an X intercept and a direction. In order to make it clear that some later formulae have important linear forms, the path direction will be expressed as the slope α of the line, taken with respect to the Y axis.
That is, if the angle between the path and the Y axis is θ, we will deal only with the slope of the line, which is α = tan θ.

From these preliminaries, the procedure for finding a sensing path can be developed. Beginning with the simplest possible case, suppose that we wish to find a path that intersects a particular point P. The bounds on the slopes of the lines passing through P will be referred to as α_min and α_max.

The crucial observation is that the problem can be inverted from finding a path that pierces the point, to finding the sheaf of lines going through the point that satisfy the constraints. For any given slope α, the line passing through the point P intersects the X axis at a single, determinable point that we will denote as Q. Let us call the intersection of this line with the X axis the projection of P. The projection function, which varies with the slope of the line, is Q(α) = P_x − α·P_y. This function yields the point on the X axis that pierces the point P with a path whose slope is α.

Now, we can represent this projection function as a line in a new, Euclidean space. One axis of this space is the original X axis, and the other is the angular A axis (pronounced "alpha"). In this new space, the projection function may be represented as

    Q_P(α) = (P_x − α·P_y , α)    (1)

This function may be interpreted as giving the position of a point Q_P in X-A space, derived by projecting the original point P (in X-Y space) onto the X axis in the direction α. By virtue of the definition of this line, it has a very useful property: the coordinates of any point on this line directly encode the parameters of a path starting on the X axis that pierces the original point P. The projection line Q_P in X-A space thus completely describes the sheaf of paths that pierce P, under the constraints we have set out above.

From this basis, we can derive more useful results. In two dimensions, we wish to pierce not points but line segments.
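Equation 1 is easy to state in code. A small illustrative sketch (the function names are mine, not from the paper): given slope α relative to the Y axis, a path starting at x₀ = P_x − α·P_y on the X axis does pass through P, since x advances by α per unit of y.

```python
def project_point(px, py, a):
    """Equation 1: the X-A space point encoding the path of slope a
    (relative to the Y axis) that starts on the X axis and pierces (px, py)."""
    return (px - a * py, a)

def path_x(x0, a, y):
    """X coordinate, at height y, of the path starting at (x0, 0) with
    slope a: x advances by a for each unit of y."""
    return x0 + a * y
```

For example, `project_point(3.0, 2.0, 0.5)` gives the X-A point (2.0, 0.5), and evaluating `path_x(2.0, 0.5, 2.0)` recovers the original x coordinate 3.0, confirming the path pierces P.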
This more complex problem can be solved by employing the parametric representation of a line, and using Equation 1 to determine how each point on the line segment would project. We can represent a line parametrically as a point and a displacement along some direction; letting the point be P and the direction vector D, the equation for a line is

    L(λ) = P + λ·D.

Substitution in Equation 1 gives the projection formula of a line, which is a function of the path slope α and the line parameter λ, as

    Q_L(λ, α) = (L_X(λ) − α·L_Y(λ) , α)    (2)
              = (P_X + λ·D_X − α·P_Y − λ·α·D_Y , α)

where the subscripts indicate components of the line, point, and direction. This formula is non-linear in α and λ. However, for fixed λ, it is linear in α; in particular, the endpoints of a segment project into lines in X-A space.

If D_Y is nonzero, i.e., if the line segment is not parallel to the X axis, then by Equation 2 the projected lines of the endpoints will have different slopes. Figure 1 shows a projection of an edge into X-A space; the boundary is not a parallelogram because the edge was originally tilted with respect to the X axis. The left and right line segments represent the projections of the edge boundary points P1 and P2 into X-A space.

Figure 1: Projection of an edge into X-A space.

Note that the locus of points in X-A that is described by this projection, when α and λ are bounded independently, is a trapezoid. In particular, the parallel segments of the trapezoid are parallel to the X axis, and the other segments have the slopes described above. The points in the interior of this trapezoid have the property that their coordinates represent the parameters of a sensing path that pierces the desired line segment. If we take two distinct line segments, and project each into X-A space, the result is two trapezoids.

¹ We may rotate and translate an arbitrary starting line so that it coincides with the X axis, and transform the faces accordingly.
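Under Equation 2, fixing λ makes the projection linear in α, so bounding λ in [0, 1] and α in [α_min, α_max] independently sweeps out a trapezoid whose corners come from projecting the segment's two endpoints at the two extreme slopes. A sketch under those assumptions (names are illustrative):

```python
def edge_trapezoid(p, d, a_min, a_max):
    """Corners in X-A space of the region swept by projecting the segment
    L(lam) = p + lam*d, lam in [0, 1], over slopes a in [a_min, a_max].
    Equation 2: x = P_x + lam*D_x - a*(P_y + lam*D_y)."""
    def qx(lam, a):
        return p[0] + lam * d[0] - a * (p[1] + lam * d[1])
    # One corner per (extreme slope, endpoint) combination.
    return [(qx(lam, a), a) for a in (a_min, a_max) for lam in (0.0, 1.0)]
```

For a segment lying on the X axis (D_y = 0, P_y = 0) the projected region degenerates to a rectangle, consistent with the text's observation that only segments not parallel to the X axis give endpoint projections with different slopes.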
The intersection of these two trapezoids is a convex set of points, whose coordinates describe the parameters of the set of paths that intersect both of the original line segments. Figure 2 shows the projection of two distinct edges in X-A space; the area of intersection is the set of points that specifies the sheaf of paths that pierce both original edges.

The general two-degree-of-freedom problem can thus be expressed, in these new terms, as finding a point that is in the interior of the intersection of a number of trapezoids in X-A space. The coordinates of any interior point represent the position on the X axis and the direction α of a line that passes through the faces whose projection is part of this intersection. Once these parameter values have been found, the rotation and translation can be reversed, and the start point and path direction expressed in the natural coordinates of the X-Y plane.

Figure 2: Two edges in X-A space, and their intersection.

3. Computing a Sensing Path

Our approach to calculating a sensing path involves examining each unsensed face F_i in turn, and trying to find a path through it and as many other faces as possible. A real-world constraint is that when a sensor contacts a surface at too oblique an angle, it either skids off or returns unreliable data. We can thus form the set of unsensed faces {F_1, ..., F_k} such that the angle between the normal of F_i and the normal of any face in the set is less than the sensor skid angle; let us call this the candidate set formed from F_i.

In outline, our algorithm for finding a sensing path is:

1. Calculate the candidate set of each unsensed face.
2. Sort the candidate sets according to how many interpretations are present in each.
3. Find and test a feasible path through each candidate set:
   (a) Find the projection parameter α' which creates the maximum overlap of candidate faces with the generating face F_i.
   (b) Find an X value, in this projection, that is in the intersection of the projections.
   (c) Determine the ability of the path to distinguish amongst the current interpretations.

This algorithm is more fully described in [2]. Here, we will only outline the computational approach, describe the complexity of the algorithm, and discuss some of the path evaluation issues.

3.1 Computing the Path Parameters

In X-A space, the projection of a given face is a convex polygon, and the coordinates of points in its interior and boundary represent parameters of the sheaf of lines passing through the face. Thus, the parameters of the sheaf passing through a set of faces is represented by the intersection of the sheaves of each face, which is also a convex polygon; let us call this the sheaf polygon. The problem we must solve, then, is finding a single point in the interior of the sheaf polygon that is produced by intersecting the projections of as many faces, from different interpretations, as is possible.

Our method for finding a point in the sheaf polygon is to examine only the regions near its vertices. Because a vertex is formed from the intersection of the projections of endpoints of different faces - which we call the critical points of the projection - this simple geometric observation reduces the search from a full two-dimensional one to a search over a finite set of points. Combinatorially, there are in general O(N²) points to examine if there are N faces in a candidate set. Figure 3 shows the projections of three edges and the critical points that lie within the [α_min, α_max] bounds; some of the critical points in this example have the same α value. The region labelled S (which is bounded above by the line α = α_max) is the sheaf polygon for all three edges.

Figure 3: Three edges in X-A space, and their critical points.

Once these critical points have been found, we must determine how many faces have projections that contain each of these points.
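At a fixed slope α, each face's projection is just the x-interval spanned by its endpoint projections (Equation 1), so testing a candidate slope reduces to interval intersection. An illustrative sketch under that observation (the enumeration of candidate critical-point slopes is assumed done elsewhere; names are mine):

```python
def sheaf_point(faces, a_candidates):
    """Return (x, a) path parameters piercing every face in `faces`
    (each face a pair of endpoint (x, y) tuples), trying each candidate
    slope in turn; None if no candidate slope admits a common path."""
    def interval(face, a):
        xs = [px - a * py for px, py in face]  # Equation 1 per endpoint
        return min(xs), max(xs)
    for a in a_candidates:
        lo = max(interval(f, a)[0] for f in faces)
        hi = min(interval(f, a)[1] for f in faces)
        if lo <= hi:  # common x-interval is nonempty at this slope
            return ((lo + hi) / 2.0, a)
    return None
```

For example, two horizontal faces at heights 1 and 2 spanning x in [0, 2] and [1, 3] are both pierced at slope 0 by a path starting anywhere in x in [1, 2].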
This test is simple, but adds a level of complexity to the algorithm, since we must check N faces for each of O(N²) critical points; the net worst-case complexity is thus O(N³). In practice, however, very good paths can be found well before this worst-case behaviour becomes significant.

3.2 Evaluating a Sensing Path

That a path intersects several unsensed faces does not imply that a tactile sensor can determine which face has been contacted. There are limits to the ability of sensors to discriminate depth and orientation, and regardless, it is possible for several unsensed faces to be exactly coincident. These conditions must be examined to determine how good a path is at reducing the number of interpretations.

Three properties of tactile sensors are that they have a finite ability to discriminate depth and contact normals, and that if the contact angle is too oblique then the sensed normal value is either unreliable, or unavailable because the sensor skids off the object surface. Once a path has been found, all pierced faces must be examined to determine how these constraints apply.

It often happens that unsensed faces appear in similar two-dimensional configurations, and so they could not be distinguished by a tactile sensor. We address such cases by forming an ambiguity tree for the interpretations. At each level of this n-ary tree, the nodes represent the set of interpretations that are possible if a particular sense datum (or class of sense data) is found. The width of the tree at any level indicates how many equivalence classes of interpretations there are with respect to the path from which it was formed, and the depth indicates how many paths must be followed by the sensor, in the worst case, if the object in the workspace is to be uniquely identified. An example ambiguity tree is given in Figure 4.
Beginning at the root, there are 21 interpretations; the first path uniquely identifies 5 of these, one datum indicates an equivalence class of 2 interpretations, and if any other datum were detected (or if no contact took place) then more motion would be required to distinguish among the remaining 14 interpretations. For this complex tree, at most 4 paths would have to be traversed to identify the object, but most likely only 2 would be needed.

4. Experimental Results

An extensive series of simulations have been performed using this algorithm and the six polygonal test objects shown in Figure 5. The experiments involved using only two tactile data, which are shown in the midst of the objects; the dots are the positional information, and the spikes the direction of the local surface normal. These data were chosen because of the considerable ambiguity with which they can be interpreted. There was a very small amount of positional error associated with each datum, and a normal direction error of about 4 degrees. Table 1 gives the number of valid interpretations of each object.

Table 1: Interpretations of 2 points.

  Object Name    Number of Faces    Number of Interpretations
  robot-hand          12                       4
  human-hand          18                       3
  telephone           12                       2
  boot                13                       3
  camera              12                       6
  beer-bottle          8                       3

The simplest experiments attempted to distinguish the four poses of the Robot-Hand object. A number of paths will uniquely identify these four interpretations. Figure 6 shows the four interpretations; the circles indicate where the identifying path contacts the object, and the dots and spikes indicate the position and sensed normal of the given tactile data. It was assumed that the sensor could determine local surface normal and depth very well and had a sensor skid angle of 89 degrees, i.e., any slight touch of the surface would be sufficient to gather data.
The path actually contacts each face parallel to its surface normal, so the latter design parameter could be tightened considerably without affecting the result.

Of interest was how many distinct interpretations of these data could be identified with a single path. The answer is, with the above path constraints, that 20 out of the 21 can be contacted, 17 of these being terminal nodes in the ambiguity tree. (A second path is required to distinguish among the remaining interpretations.) Table 2 summarises the results of various runs, indicating the models used, the number of possible interpretations, and the number of interpretations that had at least one path pierced.

Table 2: Distinguishing Multiple Objects.

  Objects (interpretations)                         Found    Distinguished
  robot-hand (4), human-hand (3)                      7
  robot-hand (4), human-hand (3), telephone (2),
    boot (3), camera (6), beer-bottle (3)            12            9

It is not always possible to distinguish all of the interpretations with a single path; in such cases, multiple paths must be found. To test the system's capability, we reduced the sensor skid angle to 45 degrees, permitted the path-finding to stop when 7 interpretations could be distinguished, and ran it successively on the full object set. As is summarised in Table 3, the first path would distinguish 7 of the 21 interpretations; if none of these interpretations was the correct one (the worst case), the second path would distinguish 7 of the remaining 14, the third path would distinguish 6 (which is optimal), and the last path is trivial. Relative computation times for the interpretation phase, each path-finding phase, and the time needed to find the best single path indicate the efficacy of a multi-path approach to object recognition. In these units, a manipulator could be expected to take about 60 timesteps to execute a path, so after the first one is found the time to compute the next path is comparable to physical transit time.

Table 3: Multiple-Path Identification.
  Pass                          Interps pierced    Cost
  Verification                        -              42
  Candidate Formation                 -             151
  Path 1 (limited search)             7             137
  Path 2 (limited search)             7              54
  Path 3 (exhaustive search)          6              87
  Path 4 (trivial search)             1               1
  TOTAL TIME                                        472
  Optimal Path                       19            1389

The ambiguity tree for this large run is shown in Figure 4. Each level shows the interpretations identified by some distinct datum along the sensing path. The entry { } indicates that the next level should be explored if the datum found is not one of those expected.

Figure 4: Ambiguity tree for 21 interpretations, limited path search.

5. Conclusions

We have defined a methodology for acquiring new tactile data in a model-based recognition scheme when the available data are not sufficient to uniquely identify the object in question. A method was proposed for finding a path along which to move a tactile sensor so that the maximum amount of information can be gained from the sensor motion. Simulations show that this method is practical and effective in gathering tactile data to recognise simple objects on a planar surface.

This method extends to the three-dimensional case, in which objects are represented as polyhedra, but that problem is significantly harder. The non-linearities of the projection equations are not simplified by the boundary conditions (as was the case here), so the problem becomes one of finding a point in the intersection of a four-dimensional structure which is bounded by curved hypersurfaces. Linearising the 3-D problem, and producing both analytical characterisations and search heuristics, is a topic of ongoing research.

REFERENCES

[1] Allen, P., and Bajcsy, R.: 1985. Object recognition using vision and touch. Proceedings of the Ninth International Joint Conference on Artificial Intelligence, pp. 1131-1137.
COINS Technical Report, Department of Computer and Information Science, University of Massachusetts.
[3] Gaston, P.C., and Lozano-Pérez, T.: 1984. Tactile recognition and localization using object models: The case of polyhedra on a plane. IEEE Transactions on Pattern Analysis and Machine Intelligence, 6(3):257-265.
[4] Grimson, W.E.L., and Lozano-Pérez, T.: 1984. Model-based recognition and localization from sparse range or tactile data. International Journal of Robotics Research, 3(3):3-35.
[5] Grimson, W.E.L., and Lozano-Pérez, T.: 1985. Recognition and localization of overlapping parts from sparse data in two and three dimensions. Proceedings of the IEEE Symposium on Robotics (1985), pp. 61-66.
[6] Luo, R.-C., Tsai, W.-H., and Lin, J.C.: 1984. Object recognition with combined tactile and visual information. Proceedings of the Fourth International Conference on Robot Vision and Sensory Controls, pp. 183-196.

Figure 5: Object models and initial tactile data used in the experiments. (The reader can find interpretations of these objects by copying the tactile data onto a transparent sheet, and moving the sheet about to find places on the models where the position and local-surface-normal constraints are simultaneously satisfied.)

Figure 6: Four interpretations, and where the path contacts each.
Abstraction and Representation of Continuous Variables in Connectionist Networks

Eric Saund
Department of Brain and Cognitive Sciences and the Artificial Intelligence Laboratory
Massachusetts Institute of Technology
Cambridge, Massachusetts 02139

Abstract

A method is presented for using connectionist networks of simple computing elements to discover a particular type of constraint in multidimensional data. Suppose that some data source provides samples consisting of n-dimensional feature-vectors, but that this data all happens to lie on an m-dimensional surface embedded in the n-dimensional feature space. Then occurrences of data can be more concisely described by specifying an m-dimensional location on the embedded surface than by reciting all n components of the feature vector. The recoding of data in such a way is a form of abstraction. This paper describes a method for performing this type of abstraction in connectionist networks of simple computing elements. We present a scheme for representing the values of continuous (scalar) variables in subsets of units. The backpropagation weight updating method for training connectionist networks is extended by the use of auxiliary pressure in order to coax hidden units into the prescribed representation for scalar-valued variables.

1 Introduction

A key theme in Artificial Intelligence is to discover good representations for the problem at hand. A good representation makes explicit information useful to the computation, it strips away obscuring clutter, it reduces information to its essentials. An important step in generating a good representation is to expose constraint in the information to be dealt with. Knowledge of constraint allows one to condense a great deal of unwieldy data into a concise description. In the structure

This paper describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology.
Support for the laboratory’s artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-85-K-0124. The author is supported by a fellowship from the NASA Graduate Stu- dent Reseachers Program. from motion problem in vision, for example [Ullman, 19791, the movement of a large number of dots in an image can be described concisely in terms of the motion of a single coordinate frame-provided one possesses knowledge of the constraint, namely, the rigid relative locations of the points in space. For many information processing tasks such as storage, transmission, and matching to memory, a concise description of information is preferable to a redundant one. Often, constraint in a particular domain is found by careful analysis of the problem. One may discover causal relationships between variables, or, as in the structure from motion case, mathematical formulations that capture the structure of the problem. But for many problems there may exist no elegant or transparent expression of the constraint operating. Figure 1 illustrates a sample of (z, y) points generated by some unknown data source. Evidently, the source operates un- Y X Figure 1. Samples of two-dimensional data constrained to lie on a one-dimensional curve. der some constraint because ail data points appear to lie on a one-dimensional curve. If one possesses knowledge of this curve then one may express data samples with only one number, the location along the curve, instead of the two numbers required to spell out the (5,~) coordinates. However, location along the curve in figure 1 cannot be cal- culated from x and y by any formula because the curve has no simple analytical description. In order to take advan- tage of constraint implicit in the data, we must have some way of using knowledge of the underlying curve. 
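To make the setup concrete, the following small sketch generates 2-D samples from a 1-D parameter, so that all samples lie on an embedded curve. The particular curve and its parameterization are invented for illustration and do not appear in the paper.

```python
import math

def sample_on_curve(t):
    """Map a 1-D location t in [0, 1] to a point on a hypothetical
    constraint curve embedded in the 2-D (x, y) feature space."""
    x = 0.5 + 0.4 * math.cos(2.5 * math.pi * t)
    y = 0.5 + 0.4 * math.sin(2.5 * math.pi * t) * t
    return (x, y)

# Each 2-D sample is fully determined by the single number t, so the
# data could be re-encoded with one scalar instead of two.
samples = [sample_on_curve(i / 99) for i in range(100)]
```

Knowing the curve, one number (t) suffices to describe any sample; without it, two coordinates are needed.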
This is the problem addressed by this paper: to provide the means to capture constraint in multidimensional data, even when the constraint is of such arbitrary form as that in figure 1.

638 / SCIENCE
From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved.

2 Background

This form of data abstraction has been called "dimensionality reduction" by Kohonen [1984]. The problem is to define a coordinate system on an m-dimensional surface embedded in an n-dimensional space, plus a mapping between this coordinate system and that of the embedding space, when n-dimensional data samples are all drawn from locations on the embedded surface. Kohonen has implemented a method for performing dimensionality reduction which is modeled after a theory of the self-organization of topographic mappings by cells in the brain. Kohonen's method suffers from a number of drawbacks. The result delivered by this method confounds the shape of any underlying, lower-dimensional, constraint surface with the probability distribution of data samples over that surface. Furthermore, the computation of any given data sample's description in terms of location on the constraint surface requires explicit search over all locations on that surface.

We propose a different mechanism for achieving dimensionality reduction using connectionist networks of simple computing elements. Of particular interest are demonstrations by Hinton, et al [1984], and Rumelhart, et al [1985], that "hidden units" in these networks are able to attain abstract representations capturing constraint in binary input data.

A prototypical example is the encoder problem (see figure 2).

Figure 2. a. 8-3-8 network. Activity at the input layer drives the hidden layer. Activity in the hidden layer drives output. b. Activity in an 8-3-8 network trained for the encoder problem. Input is constrained so that only one input unit is ON at a time. Activity at output matches input. The information as to which input unit is ON is able to be transmitted via the hidden unit layer of only three units. Size of circle represents unit's activity.

Here, the activity, $a_k$, in a layer of eight output units is calculated from the activity, $a_j$, in the middle layer of three units:

$$a_k = f(s_k), \qquad s_k = \sum_j w_{jk}\, a_j \qquad (1)$$

where $w_{jk}$ is the connection weight between the jth middle layer unit and the kth output layer unit. The activity in the middle layer is calculated from the activity in the input layer in a similar way. The middle layer units are called "hidden" units because their activity is not directly accessible either at network input or output. f is typically a sigmoidal nonlinear function, for example,

$$f(x) = \frac{1}{1 + e^{-x}} \qquad (2)$$

The goal of the encoder problem is to set the weights such that if any single unit in the input layer is set ON (near 1), and the rest set OFF (near 0), the activity will propagate so that only the corresponding output unit will be ON. Because data presented to the input layer is constrained, so that only a subset of all possible patterns of activity in the input layer ever actually appear, the information as to which unit is ON may be propagated to the output layer through a middle layer containing fewer than eight units. In particular, the three middle layer units may transmit the information by adopting a binary code.

Rumelhart, Hinton, and their colleagues have described a method, called backpropagation, for training a network to achieve such behavior without directly specifying the behavior of the hidden layer: repeatedly present the network with sample input and allow activity to propagate to the output layer, observe the error between the desired output and the observed activity at the output layer, and use these errors to update the network weights. This method is described in more detail below.
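A minimal sketch of the forward pass through an 8-3-8 encoder network, following the weighted-sum-plus-sigmoid scheme just described. The weights here are random and untrained, and all variable names are our own.

```python
import math
import random

def sigmoid(x):
    # The sigmoidal nonlinearity f(x) = 1 / (1 + e^-x).
    return 1.0 / (1.0 + math.exp(-x))

def forward(activity, weights):
    """Each unit k applies f to the weighted sum of the previous
    layer's activity: a_k = f(sum_j w_jk * a_j)."""
    n_out = len(weights[0])
    return [sigmoid(sum(a * row[k] for a, row in zip(activity, weights)))
            for k in range(n_out)]

random.seed(0)
w_in = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(8)]
w_out = [[random.uniform(-1, 1) for _ in range(8)] for _ in range(3)]

inp = [1.0] + [0.0] * 7        # only one input unit ON
hidden = forward(inp, w_in)    # three "hidden" units
out = forward(hidden, w_out)   # eight output units
```

After training, the three hidden units would carry enough information to reproduce the eight-unit input pattern at the output layer.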
This paper describes a method for extending the backpropagation weight updating scheme for connectionist networks in order to perform dimensionality reduction over continuous-valued data.

3 Representing Scalar Variables

We must start by endowing networks with the ability to represent not just the binary variables, ON and OFF, but variables continuous over some interval. For convenience let this interval be (0,1).

One conceivable way of representing scalars in simple units is via a unit's activity level itself. Since only one weight connects any middle-layer unit to any given output unit, this strategy is clearly inadequate for representing anything but linear relationships between variables. The relationship between x and y in figure 1 is not linear, so the relationship between x and some hidden variable, u, and between y and u must not both be linear.

Another possibility is to quantize the scalar variables and let individual units represent successive intervals. Because quantized numbers can only be as accurate as the number of units (and therefore intervals) provided, we suggest modifying this tactic somewhat by allowing different units to represent different subintervals, but then smearing activity over units covering nearby subintervals. Figure 3a shows such a smearing function, $S_w$, which in this case happens to be the derivative of the function, f, of equation (2). Other smearing functions, such as the gaussian, may be used. The parameter, w, controls the width of the smearing effect. The number expressed in a pattern of activity may be decoded as the placement of the smearing function, S, at the location, x, within the interval, which minimizes the least-square difference (figure 3b),

$$Q(x) = \sum_i \left(S_{x-i} - a_i\right)^2 \qquad (3)$$

Under this method of encoding continuous numbers, resolution of better than 1 part in 50 may be achieved using eight units.
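The encode/decode scheme just described can be sketched as follows. We use a gaussian smearing function (one of the admissible choices mentioned above) and a brute-force grid search for the least-squares placement; the unit centers and width value are our own illustrative choices.

```python
import math

def smear(x, n_units, w=0.1):
    """Encode scalar x in (0,1) as a pattern of activity over n_units,
    placing a gaussian smearing profile of width w at location x."""
    centers = [i / (n_units - 1) for i in range(n_units)]
    return [math.exp(-((c - x) ** 2) / (2 * w * w)) for c in centers]

def decode(activity, w=0.1, steps=1000):
    """Recover x by sliding the smearing profile along (0,1) and
    minimizing the least-square mismatch with the activity pattern."""
    n = len(activity)
    best_x, best_q = None, float("inf")
    for s in range(steps + 1):
        x = s / steps
        profile = smear(x, n, w)
        q = sum((p - a) ** 2 for p, a in zip(profile, activity))
        if q < best_q:
            best_x, best_q = x, q
    return best_x

a = smear(0.43, 9)   # activity pattern over 9 units, peaked near 0.43
x = decode(a)        # recovers approximately 0.43
```

The grid search stands in for whatever continuous minimization one prefers; only the least-squares criterion comes from the text.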
For the remainder of this paper we refer to any subset of units whose activity displays this characteristic form for encoding scalar values as exhibiting "scalarized" behavior.

Figure 3. a. Activity pattern in a set of 9 units representing the number .43. Profile is of the smearing function, $S_w$. b. The number represented by the activity in a set of units is found by sliding the profile of the smearing function, S, along the number line to the location where it best fits the activity in the units. c. Smearing function, $S_w$, for several values of the smearing width parameter, w.

4 Training the Network

The backpropagation method for training connected networks to exhibit some desired input/output behavior may be derived by expressing the relationship between a.) a cost for error in output behavior, and b.) the strengths of individual weights in the network. Following Rumelhart, et al [1985], define cost

$$E = \sum_p E_p = \sum_p \sum_k \left(a_{pk} - t_{pk}\right)^2 = \sum_p \sum_k \delta_{pk}^2 \qquad (4)$$

as the cost over all output units, k, of error between output $a_k$ and target output, $t_k$, summed over all sets of presentations, p, of input data. Weights will be adjusted a slight amount at each presentation, p, so as to attempt to reduce $E_p$. The amount to adjust each weight connecting a middle layer unit and an output unit is proportional to (from (1) and (4))

$$\frac{\partial E_p}{\partial w_{jk}} = \frac{\partial E_p}{\partial a_k}\,\frac{\partial a_k}{\partial s_k}\,\frac{\partial s_k}{\partial w_{jk}} = \delta_{pk}\, f'(s_k)\, a_j \qquad (5)$$

Take

$$\Delta w_{jk} = \eta\, \delta_{pk}\, f'(s_k)\, a_j \qquad (6)$$

as the amount to adjust weight $w_{jk}$ at presentation p. $\eta$ is a parameter controlling learning rate. Adjustment of weights between the input and middle layers is proportional to

$$\frac{\partial E_p}{\partial w_{ij}} = \frac{\partial E_p}{\partial a_j}\,\frac{\partial a_j}{\partial s_j}\,\frac{\partial s_j}{\partial w_{ij}} \qquad (7)$$

$$\frac{\partial E_p}{\partial a_j} = \sum_k \delta_{pk}\, f'(s_k)\, w_{jk} \qquad (8)$$

$$\frac{\partial E_p}{\partial w_{ij}} = \sum_k \left(\delta_{pk}\, f'(s_k)\, w_{jk}\right) f'(s_j)\, a_i \qquad (9)$$

Defining

$$\delta_{pj} = \sum_k \delta_{pk}\, f'(s_k)\, w_{jk} \qquad (10)$$

we arrive at a recursive formula for propagating error in output back through the network. Take

$$\Delta w_{ij} = \eta\, \delta_{pj}\, f'(s_j)\, a_i \qquad (11)$$

Essentially, this method for updating weights performs a localized gradient descent in error cost over weight space.
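One full presentation (forward pass, output error, back-propagated hidden error, and weight updates) can be sketched serially as follows. The indexing and the gradient-descent sign convention are ours; the sigmoid and the learning rate η are assumed from the derivation above.

```python
import math

def f(x):       # sigmoid nonlinearity
    return 1.0 / (1.0 + math.exp(-x))

def fprime(s):  # derivative of the sigmoid at pre-activation s
    return f(s) * (1.0 - f(s))

def backprop_step(a_in, target, w_ij, w_jk, eta=0.5):
    """One gradient-descent update for a two-weight-layer network.
    w_ij: input->hidden weights, w_jk: hidden->output weights.
    Returns the presentation cost E_p before the update."""
    # Forward pass.
    s_j = [sum(a_in[i] * w_ij[i][j] for i in range(len(a_in)))
           for j in range(len(w_ij[0]))]
    a_j = [f(s) for s in s_j]
    s_k = [sum(a_j[j] * w_jk[j][k] for j in range(len(a_j)))
           for k in range(len(w_jk[0]))]
    a_k = [f(s) for s in s_k]
    # Output error delta_pk = a_pk - t_pk.
    d_k = [a - t for a, t in zip(a_k, target)]
    # Hidden error, propagated back through the output weights.
    d_j = [sum(d_k[k] * fprime(s_k[k]) * w_jk[j][k] for k in range(len(d_k)))
           for j in range(len(a_j))]
    # Descend the gradient: subtract eta * delta * f'(s) * a.
    for j in range(len(a_j)):
        for k in range(len(d_k)):
            w_jk[j][k] -= eta * d_k[k] * fprime(s_k[k]) * a_j[j]
    for i in range(len(a_in)):
        for j in range(len(d_j)):
            w_ij[i][j] -= eta * d_j[j] * fprime(s_j[j]) * a_in[i]
    return sum(d * d for d in d_k)
```

Repeated application on the same presentation drives the cost down, which is all "localized gradient descent" requires.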
5 Auxiliary Error to Constrain Hidden Unit Activity

In order to perform dimensionality reduction we will provide one subset of units for each scalar to be encoded by the network. The activity in each such subset of units must display scalarized behavior; such a subset of units is called a scalar-set. See figure 4. An m-dimensional surface embedded in an n-dimensional data space will have n scalar-sets in the input layer, m scalar-sets in the hidden layer, and n scalar-sets in the output layer.

Figure 4. Network of scalar-sets for performing dimensionality reduction. Heavy lines between scalar-sets indicate weight connections between every unit in each scalar-set.

The training procedure will consist of repeatedly presenting data samples to the input layer, observing activity at the output layer, and updating weights in the network according to equations (6) and (11). If all of the data presentations to scalar-sets at the input layer conform to scalarized representations for the scalar components of the data vector, then after a suitable training period the output will come to faithfully mimic the input. Unfortunately, there is no guarantee that the hidden units will adopt the scalarized representation in their transmission of activity from the input layer to the output layer. In general their coding behavior will be unpredictable, depending upon the initial randomized state of the network and the order of data sample presentation.

What is needed is a way to force the scalar-sets in the hidden layer into adopting the prescribed scalarized pattern of activity. For this purpose we introduce an auxiliary error term, $\gamma_j$, to be added to the error in activity at the hidden layer, $\delta_j$, which was propagated back from error in activity at the output layer (10).
The weights connecting the input layer and the middle layer are now updated according to

$$\Delta w_{ij} = \eta\,(\delta_j + \gamma_j)\, f'(s_j)\, a_i \qquad (12)$$

$\gamma$ must be of a suitable nature to pressure the hidden units into becoming scalarized as the training proceeds. We compute a set of $\gamma_j$ for the units of every hidden layer scalar-set independently, as follows:

We may view the activity over the set of units in a scalar-set as the transformation, by the smearing function, S, of some underlying "likelihood" distribution, p(j), over values in the interval, 0 < j < 1. The activity in a scalar-set is the convolution of the likelihood distribution with the smearing function, sampled at every unit. Scalarized activity occurs when the underlying distribution is the Dirac Delta function. The strategy we suggest for adding auxiliary pressure to the scalar-set activity is simply to encourage scalarized behavior: add some factor to sharpen the peaks and lower the valleys of the likelihood distribution, to make it more like the Delta function. A convenient way of doing this is to raise the underlying distribution to some positive power, and normalize so that the total area is unity. In the general case, if this procedure were repeatedly applied to some distribution, one peak would eventually win out over all others. The procedure is summarized by the following equation:

$$\gamma(j) = \left(S * N\left\{\left[S^{-1} * a(j)\right]^q\right\}\right) - a(j) \qquad (13)$$

The activity in the scalar-set, a(j), is deconvolved with the smoothing function, S, to reveal the underlying likelihood distribution. This is raised to the power, q > 0, and then normalized (by N). This new underlying likelihood is now convolved with the smoothing function, S, and $\gamma$ is taken as the error between this result and the current activity in the scalar-set.
Now, on every training trial the weight updating term, $\delta_j$, pressures hidden units to adopt activities that will reduce the error between input layer activity and output layer activity, while the auxiliary error term, $\gamma_j$, pressures hidden units to adopt scalarized activity. In reality, a tradeoff parameter is introduced in equation (12) to weight the relative pressures from $\delta$ and $\gamma$.

In an actual implementation, a(j) is not a continuous function, but rather consists of the activity in the finite, usually small, number of units in the scalar-set. Therefore the bandwidth available for conveying the underlying likelihood, p(j), is small; sharp peaks in p(j) are not representable because high spatial frequency information cannot be carried in a. An alternative expression for $\gamma$ has been found acceptable in practice:

$$\gamma_j = N\left[S * a_j^2\right] - a(j)$$

Square the activity in each unit, convolve this squared activity in the scalar-set with the smearing function, S, then normalize so that the total activity in the scalar-set sums to a constant. This procedure for generating $\gamma_j$ approximates the effect of encouraging scalarized patterns of activity in the scalar-set.

6 Sculpting the Energy Landscape

As noted above, the network training procedure carries out gradient descent. Weights are updated at each training presentation so as to reduce the energy cost, E. This cost is high when activity in the output layer differs from activity in the input layer, and, due to the auxiliary error term, $\gamma$, the cost is high when activity in hidden layer scalar-sets does not conform to the characteristic scalarized representation for scalar numbers. If, as is usually the case, no prior knowledge of constraint operating upon the data source is available, the network is initialized with random values for all weights, and E will be large at the outset.
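The practical recipe for the auxiliary error given in Section 5 (square each unit's activity, smear with S, renormalize, and difference against the current activity) can be sketched as follows; the discrete gaussian smearing and the width value are our own illustrative choices.

```python
import math

def convolve(a, w):
    """Smear a discrete activity pattern with a gaussian of width w
    (measured in units), standing in for the smearing function S."""
    n = len(a)
    return [sum(a[i] * math.exp(-((i - j) ** 2) / (2 * w * w))
                for i in range(n)) for j in range(n)]

def auxiliary_error(a, w):
    """gamma_j: sharpen peaks and lower valleys while preserving the
    total activity in the scalar-set."""
    smeared = convolve([x * x for x in a], w)
    scale = sum(a) / sum(smeared)      # normalize total activity
    return [s * scale - x for s, x in zip(smeared, a)]

gamma = auxiliary_error([0.2, 0.8, 0.2], w=0.5)
# The peaked center unit is pushed up, the flanks are pushed down,
# and the corrections sum to zero (total activity is preserved).
```

Adding γ to the hidden-layer error thus nudges each scalar-set toward a single sharp peak, the scalarized form.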
Simple gradient descent methods commonly suffer from the problem that there may exist local minima in the energy landscape that are far from the global minimum. Once a network falls into a local minimum there is no escape.

The local minimum phenomenon has been reported by Rumelhart, et al [1985], in normal binary-variable connectionist training, where the only pressure to adjust weights comes from error between output and input activity. It should perhaps not then be surprising to encounter local minima in the dimensionality reduction problem, where we impose an energy cost factor due to non-scalarlike behavior in hidden units, in addition to the normal cost for output activity deviation from input. In effect, what we are doing is adding two energy landscapes to one another. The weight adjustment that reduces one energy cost component may raise the other. Figure 5 is a simple illustration of one way in which adding two energy landscapes can create local minima.

Figure 5. a. Neither of the energy potentials whose contours are shown has local minima by itself. b. But if they are moved near one another and added, local minima can be created in the resulting landscape.

Two strategies have been proposed for surmounting the local minimum problem. One is simulated annealing in a Boltzmann machine [Kirkpatrick, et al, 1983; Hinton, et al, 1984]. Briefly, simulated annealing allows the training process to probabilistically adjust weights so as to increase energy cost. This allows the procedure to jump out of local minima in energy. Boltzmann machine learning can be slow, and it requires certain conditions on the shape of the energy landscape in order to have a good chance of working. We have not investigated its applicability to the dimensionality-reduction problem.

Another strategy for skirting local minima involves changing the shape of the energy landscape itself as training proceeds.
The idea is to introduce a parameter that makes the landscape very smooth at first, so that the network may easily converge to the local (and global) minimum. Then, gradually reduce this parameter to slowly change the landscape back into the "bumpy" energy potential whose minimum defines the network behavior actually desired. A variant on this technique has been used by Hopfield and Tank [1985] to train networks to find good (but not optimal) solutions to the traveling salesman problem (see also [Koch, et al, 1985]).

For the dimensionality reduction problem we take as the energy landscape smoothing parameter the parameter, w, of the smearing function, $S_w$. At the beginning of a training session, the activity in all scalar-sets describing scalar-valued numbers is smeared across virtually all of the units within each scalar-set. Figure 3c illustrates the activity across a scalar-set under a variety of smoothing parameters, w.

This strategy creates two related effects. First, it reduces the precision to which the data values presented as input activity, and sought by the output error term, are resolved. Thus, local kinks and details of any constraint curve constraining the input data are blurred over more or less, depending upon w. Second, under smearing with a large w, auxiliary error on the hidden layers pressures each unit's activity to be not too different from its neighbor's activity. The activity in hidden unit layers is thereby encouraged to organize itself into adopting the scalarized representation.

Training begins with the smearing parameter, w, set to a high value. The parameter is gradually reduced to its final, highest resolution smearing value according to a training schedule. Typically several thousand data-sampling/weight-updating trials are performed for each of five intermediate values for w.
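The coarse-to-fine schedule just described can be sketched as a simple generator. The particular width values are our own placeholders; the text specifies only several thousand trials at each of five intermediate widths.

```python
def training_schedule(widths=(0.5, 0.3, 0.2, 0.1, 0.05),
                      trials_per_width=2000):
    """Yield (smearing width, trial index) pairs: w starts large and is
    stepped down, with many data-sampling/weight-updating trials at
    each intermediate value."""
    for w in widths:
        for trial in range(trials_per_width):
            yield w, trial

steps = list(training_schedule())   # 5 * 2000 = 10000 training trials
```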
7 Performance

To date, this dimensionality-reduction method has been tested for cases where the input data is two-dimensional, but constrained to lie on a one-dimensional curve (n = 2, m = 1), and where the input data is three-dimensional, but constrained to lie on a two-dimensional surface (n = 3, m = 2).

Figure 6 illustrates the underlying constraint curve for an n = 2, m = 1, test case. The X's represent locations indicated by output activity computed by the network when the input is drawn from points on the constraint curve. The extent to which X's lie on the curve simply demonstrates that network output conforms to input. Circles represent network output when scalar values are injected into the hidden layer. (This is done only to evaluate network behavior and is not part of the training procedure.) The number next to each circle indicates the scalar value presented to the hidden layer.

Figure 7 illustrates the n = 3, m = 2 case in a similar way. Figure 7a is the true underlying constraint surface. Figure 7b represents network output for input data drawn from the constraint surface. Figure 7c illustrates network output when successive (u, v) pairs, 0 < u < 1, 0 < v < 1, are injected directly into the hidden layer.

Note in figure 6 that the constraint surface is found successfully even though it doubles back on itself in both the x and y dimensions. In general, the more units presented to a scalar-set, the better resolution available for capturing a constraint curve. Analysis of the behavior of individual units in a hidden layer scalar-set indicates that each unit in the hidden layer typically encodes a locally linear approximation to one portion of the constraint curve.

Some types of constraint curve cannot be discovered by this procedure. These are curves that double back on themselves radically. Figure 8 illustrates. The reason for this failure is that points such as a. and b.
in figure 8 appear indistinguishable to the network early in the training procedure when $S_w$ causes very heavy smearing of their coordinate representations. They are therefore assigned similar encodings in the hidden unit layer. As w is decreased, later in the training procedure, the network remains stuck in a local minimum of trying to encode both a. and b. using nearby hidden scalar values, when in fact it turns out that they are on opposite ends of the constraint curve and so should be assigned very different encodings in the hidden scalar-set. The energy landscape sculpting strategy does not work when, as the landscape smoothing parameter is decreased, the global minimum in energy potential suddenly appears in a very different location in weight space from where the previous local minimum had been.

A training protocol of 2000 trials for each of five values of the smearing parameter, w, takes approximately 30 minutes for the n = 2, m = 1, case, with resolution of eight units per scalar-set, on a Symbolics 3600 lacking floating point hardware.

We anticipate that the extension of this technique to larger dimensionalities of input data, n, should be straightforward. The extension to greater dimensionality of the underlying constraint surface, m, remains somewhat uncertain and must be the subject of future research. It would also be important to develop some way to decide automatically what the appropriate constraint dimensionality, m, is for a given set of input data.

8 Conclusion

We have presented a mechanism for performing a dimensionality reduction type of abstraction over multidimensional data constrained to lie on a lower-dimensional surface embedded in the data feature space. A technique is given for representing in connectionist networks the scalar components of continuous vector-valued data. An auxiliary error pressure is introduced in order to pressure hidden units in the network into adopting this representation for scalar values.
This method has been shown capable of capturing a wide variety of underlying constraints implicit in data samples, despite the lack of any concise mathematical description of the constraint itself. Note that nowhere is the constraint curve described explicitly; its shape remains implicit in the weight connections in the network. The network constructed by this method is able to use knowledge of constraint in order to encode information in a more concise representation than its original description as a data vector. We conjecture that such an abstraction mechanism may prove a useful building block for intelligent information processing systems.

Acknowledgements

This work was carried out under the supervision of Professors Eric Grimson and Whitman Richards. I thank Dr. Jay McClelland and especially Aaron Bobick for valuable technical discussion. The presentation was much improved by comments from Jim Mahoney, Chris Atkeson, and Anselm Spoerri.

References

Ackley, D., Hinton, G., and Sejnowski, T., [1985], "A Learning Algorithm for Boltzmann Machines", Cognitive Science, 9, 147-169.
Hinton, G., Sejnowski, T., and Ackley, D., [1984], "Boltzmann Machines: Constraint Satisfaction Networks that Learn", Technical Report CMU-CS-84-119, Carnegie Mellon University.
Hopfield, J., and Tank, D., [1985], "Neural Computation in Optimization Problems", Biological Cybernetics, 1985.
Kirkpatrick, S., Gelatt, C.D., and Vecchi, M., [1983], "Optimization by Simulated Annealing", Science, 220, 671-680.
Koch, C., Marroquin, J., and Yuille, A., [1985], "Analog 'Neural' Networks in Early Vision", MIT AI Memo 751, MIT.
Kohonen, T., [1984], Self-Organization and Associative Memory, Springer-Verlag, Berlin.
Rumelhart, D., Hinton, G., and Williams, R., [1985], "Learning Internal Representations by Error Propagation", ICS Report 8506, Institute for Cognitive Science, UCSD.
Rumelhart, D., McClelland, J., and the PDP Research Group, [1986], Parallel Distributed Processing: Explorations in the Structure of Cognition, Bradford Books, Cambridge, MA.
Ullman, S., [1979], The Interpretation of Visual Motion, MIT Press, Cambridge, MA.

Figure 6. a. One-dimensional constraint in two-dimensional data. X's represent network output when input is taken from the constraint curve. Each circle represents network output when the hidden layer is injected with the scalarized representation of the number next to the circle. In this example scalar-sets were of size 14 units.

Figure 8. Failure of network training occurs when the constraint surface doubles back on itself sharply. Note how the number represented in hidden unit activity jumps around with respect to distance on the constraint curve.

Figure 7. a. Two-dimensional constraint surface in three dimensions. b. Activity at output layer when input layer is fed data on the constraint surface. c. Activity at output layer when hidden layers are injected with scalar values between 0 and 1. In this example scalar-sets were of size 14 units.
SIMD Tree Algorithms for Image Correlation
Hussein A. H. Ibrahim, John R. Kender, David Elliot Shaw
Department of Computer Science, Columbia University, New York, N.Y. 10027

Abstract

This paper examines the applicability of fine-grained "pure" tree SIMD machines, which are amenable to highly efficient VLSI implementation, to image correlation, which is representative of low-level image window-based operations. A particular massively parallel machine called NON-VON is used for purposes of explication and performance evaluation. Several algorithms are presented for image shifting and correlation operations. Novel algorithmic techniques are described, such as vertical pipelining, subproblem partitioning, associative matching, and data duplication, that effectively exploit the massive parallelism available in fine-grained SIMD tree machines. Limitations of SIMD pure tree machines are also addressed. They tend to correspond to situations in which the root of the tree may become a significant communication bottleneck, or in situations in which MIMD techniques would be more effective than the SIMD approaches considered in this paper. Performance results have been projected for the NON-VON machine (using only its tree connections, in order to address the issues of concern in this paper).

Index terms: Vision hardware, image correlation, parallel processing

1 Introduction

Image understanding applications frequently involve processing large quantities of input image data. These computations can be performed simultaneously on many or all of the image elements. Consequently, parallel machines have great potential for the rapid and cost-effective execution of image understanding tasks.
Although parallel architectures show considerable promise for the execution of tasks characteristic of all levels of computer vision, from low-level signal processing and the extraction of primitive geometric properties through high-level object recognition and scene analysis, our concern in the present paper will be with the class of low-level window-based tasks. In window-based operations, the output value of a local operation at a specific image point is a function of the image values at this point and at a number of points in its immediate neighborhood.

A number of parallel architectures [2], [6], [3], [7], [8], [9] have been proposed for application to image analysis problems. Of particular concern in this paper is the class of parallel machines characterized by a very large number of relatively small, simple processing elements (PE's), interconnected to form a binary tree. We will refer to such machines as fine-grained tree-structured machines. Finally, we will restrict our attention to machines in which all PE's simultaneously execute the same instruction on different data elements -- a mode Flynn referred to as single instruction stream, multiple data stream (SIMD) execution [4].

This paper reports the results of investigations into the applicability of fine-grained tree-structured SIMD machines to image correlation. In order to make possible a detailed performance analysis, a number of image correlation algorithms were developed for a particular massively parallel machine, called NON-VON. The NON-VON 1 prototype, which implements only some of the features of the full NON-VON architecture, was developed at Columbia University, and has been operational since January, 1985.
While the full architecture supports other interconnection topologies and execution modes that might be expected to offer significant performance enhancements in a number of vision applications, only its tree-structured communication capabilities and its SIMD mode of execution are used in the algorithms described in this paper.

Novel algorithmic techniques are described that effectively exploit the massive parallelism available in fine-grained SIMD tree machines, and the capability of trees to perform algebraically commutative and associative operations (such as addition) in time logarithmic in the number of pixels. These techniques can be summarized as vertical pipelining, subproblem decomposition, hardware-assisted associative matching, and data duplication, each of which is characterized by little or no cross-tree data flow. Limitations of SIMD pure tree machines are also addressed. They tend to correspond to situations in which the root of the tree may become a significant communication bottleneck, or in situations in which MIMD techniques would be more effective than the SIMD approaches considered in this paper.

In the following section, we describe the NON-VON architecture and introduce a Pascal-based parallel programming language that will be used to present the algorithms that form the central focus of this paper. In Section 3 we discuss issues related to image I/O and system initialization. The algorithms themselves are presented and discussed in Section 4. The final section summarizes the apparent advantages and limitations of fine-grained SIMD "pure" tree machines for the kinds of window-based image understanding algorithms we have studied.

2 The NON-VON Architecture

The name NON-VON is used to describe a family of massively parallel machines designed to provide high performance on a number of computational tasks, with special emphasis on various artificial intelligence, database management, and other symbolic information processing applications.
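The logarithmic-time associative reduction mentioned above can be sketched with a serial simulation of the tree, in which each loop iteration stands for one parallel child-to-parent communication step (the zero-padding convention for odd-sized levels is our own):

```python
def tree_sum(values):
    """Combine N leaf values pairwise up a binary tree; the number of
    parallel steps grows as log2(N) rather than N."""
    level = list(values)
    steps = 0
    while len(level) > 1:
        if len(level) % 2:             # pad odd levels with the identity
            level.append(0)
        level = [level[i] + level[i + 1] for i in range(0, len(level), 2)]
        steps += 1                     # one parallel transfer to parents
    return level[0], steps

total, depth = tree_sum(range(16))     # 16 leaves summed in 4 steps
```

The same structure works for any commutative and associative operation (maximum, logical OR, and so on) by swapping the combining function.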
Figure 1 depicts the top-level organization of the general NON-VON architecture [10], which includes:

1. A massively parallel active memory based on a very large number (as many as a million) of small processing elements (SPE's). The SPE's are configured as a complete binary tree whose leaves are also interconnected to form a two-dimensional orthogonal mesh.
2. A smaller number of large processing elements (LPE's), interconnected by a high-bandwidth interconnection network.
3. A secondary processing subsystem (SPS) based on a bank of intelligent disk drives.

Only some of the subsystems depicted in figure 1, however, are directly relevant to the concerns of this paper. In particular, the algorithms we will describe are strictly SIMD in nature, and do not require the use of multiple LPE's (or of the high-speed network used to connect them), or the mesh communication. For purposes of this paper, we will thus assume only a single LPE, which will be referred to as the control processor (CP).

The active memory is constructed using custom VLSI circuits, the most recently fabricated of which contains four 8-bit SPE's. Each SPE comprises a small local RAM, a modest amount of processing logic, and an I/O switch that permits the machine to be dynamically reconfigured to support various forms of inter-processor communication. Two modes of communication will be employed in the algorithms presented in this paper. First, global bus communication, supporting both broadcast by the CP to all SPE's in the active memory, as required for SIMD execution, and data transfers from a single selected SPE to the CP. No concurrency is achieved when data is transferred from one SPE to another through the CP using the global communication instructions. Second, tree communication, supporting data transfers among SPE's that are physically adjacent within the active memory tree. Instructions support data transfers to the parent (P), left child (LC), and right child (RC) SPE's.
Full concurrency is achieved in this mode, since all nodes can communicate with their physical tree neighbors in parallel.

PERCEPTION AND ROBOTICS / 645
From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved.

Figure 1: Organization of the General NON-VON Machine

Special modes of communication are employed in the execution of two NON-VON instructions. The RESOLVE instruction is used to disable all but a single SPE chosen from among a specified set of SPE's. This is an example of a hardware multiple match resolution scheme, in the terminology of the literature of associative processors. Upon executing a RESOLVE instruction, the CP is able to determine whether the operation resulted in any SPE being enabled. If only a single SPE is enabled, the REPORT instruction may be used to transfer data from that SPE to the CP using global bus communication. The first member of the NON-VON family, NON-VON 1, has been operational at Columbia since January, 1985. Constructed using chips containing only a single SPE, the NON-VON 1 prototype was assembled primarily to evaluate certain electrical, timing, and layout area characteristics, and to validate the essential architectural principles underlying the NON-VON machine. NON-VON 3 [14] is based on modified chips containing four 8-bit SPE's, and incorporating the mesh connections. For purposes of calculating the running time of the algorithms described in this paper, we have used the NON-VON 3 instruction set, and have assumed an execution speed of four million instructions per second [14]. From the viewpoint of "pure" tree machine implementation, the less extensive connectivity of the strictly tree-structured topology results in lower implementation costs.
In addition, "pure" tree machines permit the use of a processor embedding scheme having a fixed chip pinout, independent of the number of embedded SPE's (unlike those involving mesh connections, in which the number of required pins grows as the square root of the number of embedded SPE's). This makes it possible to exploit decreasing VLSI device dimensions without redesigning the machine, by simply replacing the old active memory chips with new ones containing a larger number of SPE's. A detailed discussion of the implementation and performance advantages of tree machines is presented in [7]. To describe the NON-VON algorithms presented in this paper, we will use a PASCAL-based parallel language [1], referred to as N-PASCAL. One new data type and two extra constructs distinguish it from standard PASCAL. The new data type, the vector variable, is used to express parallelism at the level of the individual data element. Vector variables, which are indicated by upper-case letters in the N-PASCAL procedures that follow, refer to a set of individual variables, one element of which is found in each SPE. Operations involving vector variables result in the simultaneous manipulation of all elements in the set. Standard PASCAL scalar variables, whose names appear in italics, reside in the CP, and represent individual data elements. Small bold letters are used for the reserved keywords of the language. When a vector variable appears on the left hand side of an assignment statement, for example, with a scalar expression on the right hand side, the value of the scalar expression is simultaneously assigned to all members of the set of data elements represented by that vector variable. If the right hand side is itself a vector variable or expression, the assignment statement results in a different local assignment within each SPE.
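These assignment semantics can be pictured with a small simulation. The following Python sketch (hypothetical, not part of the paper) models a vector variable as a list of per-SPE values: a scalar right-hand side is broadcast to every SPE, while a vector right-hand side produces an independent local assignment in each SPE.

```python
# Hypothetical simulation of N-PASCAL vector assignment (not the paper's
# code): a vector variable is modeled as one value per SPE.
def assign(vector, rhs):
    """Simulate V := rhs over a list of per-SPE values."""
    if isinstance(rhs, list):      # vector right-hand side:
        return list(rhs)           # independent local assignment per SPE
    return [rhs] * len(vector)     # scalar right-hand side: broadcast

A = [0, 0, 0, 0]
B = assign(A, 7)             # broadcast: every SPE receives 7
C = assign(A, [1, 2, 3, 4])  # elementwise: each SPE gets its own value
```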
A given operation may be made applicable to a particular subset of the SPE's through the use of the where statement, which implements a form of parallel conditional statement. The where statement has the following syntax:

where <conditional expression> do <statement> [elsewhere <statement>]

The where statement causes the statement following the do keyword to be executed in exactly those SPE's in which the boolean expression <conditional expression> (which will typically involve one or more vector variables) is satisfied. If the optional elsewhere clause is included, the statement following the elsewhere keyword is executed in those SPE's in which <conditional expression> is not satisfied. NON-VON's tree communication capabilities are supported in N-PASCAL through the use of built-in functions. Function names that start with 'N_' correspond to NON-VON machine instructions, and their parameters correspond to the arguments of these instructions.

3 Initializing and Loading the NON-VON Tree

Two hierarchical data structures that represent images in the NON-VON tree, namely multi-resolution pyramids and binary image trees, are described in [8]. In these schemes, each PE is used to represent a rectangle of pixels from the original image; the coordinates and dimensions of this rectangle are stored within the PE. All PE's at the same tree level correspond to rectangles of the same size. The leaf PE's correspond to single pixels of the image, while non-leaf PE's correspond to higher-order rectangles (the size of the rectangles increases by a factor of two going from one level to the level above it in the tree). Prior to the execution of any of the algorithms described in this paper, the NON-VON tree is initialized to explicitly store certain image-independent spatial and control information in each PE.
In particular, the variables XADD and YADD used in the N-PASCAL procedures presented in the following sections are initialized to contain the horizontal and vertical coordinates, respectively, of the upper leftmost pixel in the rectangle in question (where the origin is the upper leftmost pixel in the image and the coordinate values increase to the right and downward). The variables XSIDE and YSIDE are initialized to contain the width and length, respectively, of the rectangle in pixels. The variable CURLEV is initialized to contain the level number within the image hierarchy, where the root is at level 0. This image-independent initialization procedure takes time proportional to the number of levels in the tree (19 levels in the case of a 512 X 512 image). The NON-VON 3 code for this procedure initializes a tree of 15 levels in approximately 0.3 msec [8]. Image-dependent information is also stored within each PE during the process of image loading or pyramid construction. For a gray-scale image, the N-PASCAL integer vector variable GRAY-LEVEL is used to store the gray-scale value of the pixel or the image rectangle corresponding to each PE. In the case of a binary image, the image values are stored in the boolean vector variable BINARY. Likewise, other similar vectors would be used to hold other image-dependent features (for example, color or texture information). Conceptually, an image is loaded or unloaded through the root of the tree, with the image pixels presented to the root in raster scan order. Each pixel value, together with its image coordinates, is broadcast throughout the tree; only the leaf pixel with matching coordinates stores it. This procedure can be modified to store blocks of data in intermediate-level SPE's and then use root-to-leaf pipelining to load them into the leaf SPE's [8].
Unloading the tree can be done similarly, with the appropriate PE presenting data to the root in response to the broadcasting of the image coordinates of the desired pixel. In this fashion, the tree acts as a simple random access memory. However, the tree can also be referenced as an associative memory, with PE's presenting to the root the image coordinates of a given pixel value. Since the components of the triple (XADD, YADD, GRAY-LEVEL) are stored explicitly, this entails no added cost, and permits other exotic output modes based on standard data base retrieval operations. Depending on its definition, the creation of a multi-resolution pyramid for either gray-scale or binary images can usually be integrated with image loading in the following way. In the most common case, each non-leaf node of the pyramid is defined as the simple average of all its descendants. It is not difficult for each PE to compute whether the broadcast image pixel is a descendant; the image coordinates of the pixel must lie within the image rectangle the PE represents. (This computation is described in detail in Section 4.5.) These selected PE's add and store the image value in an accumulator; all accumulators are then normalized in parallel at the end of image load. For gray-scale image pyramids, the normalization takes the form of division by the area of the PE's rectangle. For binary image pyramids, this normalization is done by a comparison; in the simplest case it is a comparison against half the area of the rectangle. (Note that all normalization computations involve only simple shift operations.) The cost of this integration of image load with pyramid creation is the cost of doing a range comparison by each PE: a constant incremental term. In pure tree machines, loading and unloading through the root can be a bottleneck for algorithms with extensive I/O operations.
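The accumulate-and-normalize pyramid construction described above can be illustrated with a sequential analogue. The Python sketch below (hypothetical; on NON-VON the accumulation is folded into the parallel image load) builds a gray-scale pyramid in which each non-leaf value is the average of the four values beneath it.

```python
# Hypothetical sequential analogue of pyramid construction (not the
# paper's tree-parallel code): each coarser level averages 2 x 2 blocks.
def build_pyramid(image):
    """image: 2^k x 2^k list of lists of gray values. Returns the list of
    levels, finest first; each coarser level halves both dimensions."""
    levels = [image]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        n = len(prev) // 2
        nxt = [[(prev[2*r][2*c] + prev[2*r][2*c+1] +
                 prev[2*r+1][2*c] + prev[2*r+1][2*c+1]) / 4
                for c in range(n)] for r in range(n)]
        levels.append(nxt)
    return levels

levels = build_pyramid([[0, 4], [8, 4]])  # root = average of all four
```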
In an actual NON-VON configuration, image data would be loaded and unloaded in parallel through all PE's at some intermediate level in the active memory tree, or through the edges of the mesh at the leaf level. (Image input from sensors, however, would still be dominated by the sensor speed, unless the camera were modified to also allow such subimage partitioning.) In the following sections, we assume that the image has already been loaded in the NON-VON tree, and that the above N-PASCAL variables have been set accordingly.

4 Image Correlation

Correlation techniques are special cases of image convolution techniques. They are widely used in many image understanding tasks, including simple filtering to detect a particular feature in an image, edge detection, image registration, motion and stereo analysis, and object detection by template matching [2]. In this section, we discuss the utility of fine-grained tree-structured SIMD machines for image correlation and related image operations, known as local or window-based operations. Unlike pixel-based operations such as histogramming or thresholding, where spatial location is unimportant, the output value of a local operation at a specific point is a function of the image values at this point and at a number of points in its immediate neighborhood. The techniques and algorithms developed in this section to compute image correlation are applicable to most other local operations; consequently, their performance is indicative of window-based operations in general. For the most part, pure tree architectures must finesse their lack of nearest-neighbor (mesh) connections. This calls for some care in decomposing problems into the recursive subproblem format that trees excel in. Occasionally, limited amounts of vertical pipelining can be exploited as well. However, as the following correlation algorithms demonstrate, without nearest-neighbor connections even clever correlation algorithms do not perform very well on a pure tree.
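A window-based operation of this kind is easy to state sequentially. The Python sketch below (hypothetical, not from the paper; for simplicity it evaluates only interior positions, ignoring the image border) computes a plain cross-correlation of a small template against an image, the direct baseline that the tree algorithms of this section aim to beat.

```python
# Hypothetical sequential reference for window-based correlation (not the
# paper's code): O(n * t) for n image pixels and t template pixels.
def cross_correlate(image, template):
    n, t = len(image), len(template)
    b = t // 2  # half-width of the template border
    out = [[0] * n for _ in range(n)]
    for r in range(b, n - b):
        for c in range(b, n - b):
            out[r][c] = sum(image[r + dr - b][c + dc - b] * template[dr][dc]
                            for dr in range(t) for dc in range(t))
    return out

img = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
res = cross_correlate(img, [[0, 0, 0], [0, 1, 0], [0, 0, 0]])  # identity template
```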
Image correlation involves determining the location at which a relatively small template image best matches the input image; the goodness of the match is measured by the value of a correlation function. The correlation function can be defined in many ways. One common correlation measure is the cross-correlation, defined for each possible relative location of the input image and the template. Let us assume an image array X and a template array Y, with x and y representing the elements of X and Y respectively. Then:

    cross correlation = Σ_i x_i y_i    (1)

On a sequential machine, the time required to execute such a function, using a direct approach for small templates, is O(nt), where n is the number of pixels in the image and t is the number of pixels in the template, since the value of the cross correlation must be evaluated with the template centered over each pixel of the image. Conceptually, the image is repeatedly shifted under the template and the sum evaluated in each possible position. On a parallel machine, the efficient implementation of this image shift operation is paramount. (In fact, the mesh connections incorporated in NON-VON 3 were intended in large part to support it.) Thus, we devote the next two subsections to algorithms to perform the image shift operation on a pure tree machine, one for images represented as binary image trees, and one for those stored as a two-dimensional array in the leaves. These algorithms then form the basis for the image correlation algorithms introduced in the following subsections.

4.1 Image Shift Algorithms: Binary Image Trees

Shifting a binary image that is represented as a binary image tree can be done in two steps. First, a full, shifted binary image is created from the source binary image tree; second, a new binary image tree is created from this new image. The first step involves reporting the size and location information of all white ("figure") rectangles, one by one, to the CP using the RESOLVE instruction.
For each reported rectangle, the new location of the rectangle is computed using the reported location and the horizontal and vertical shifting required. The shifted location information and rectangle size are then rebroadcast, and all leaf PE's in parallel compute whether their pixels fall within the new rectangle boundary (including the effects of wraparound, if desired). Those that associatively match then reset their initially black ("background") new image pixels to white. The second step, creating the new binary image tree representation of the shifted image, is described in [8]. This algorithm executes in time proportional to the number of white rectangles in the binary image tree representation of the image. Typically, this number is O(d), where d is the diameter of the image [5]. Thus, the time required to execute this operation is O(n^(1/2)), where n is the image size, since the creation of the image dominates the subsequent creation of the new image tree. Although the performance of the algorithm is dependent on the number of rectangles, it is independent of the distance to be shifted; this is the converse of the performance of a mesh-connected machine. The NON-VON 3 code executes about 55 instructions per reported rectangle [8]. Thus, shifting a 128 X 128 binary image containing 500 rectangles requires about 6.875 msec.

4.2 Image Shift Algorithms: Gray Scale Images

Next, we describe the algorithm to perform image shifting for a gray-scale image stored as a two-dimensional array on the leaf level. For the sake of simplicity, we consider the case of shifting the gray-scale image one position in the left direction. Slightly modified versions of this algorithm may be used to shift the gray-scale image one position in the three other directions. Recall that gray-scale images are stored in the leaf PE's, and that the leaf PE's of a subtree in the NON-VON tree correspond to a rectangle of the stored image.
Figure 2 shows two adjacent k x k squares of the image and the NON-VON tree representation of these two subimages. Shifting the image one position in the left direction involves a sequence of stages of increasing complexity and decreasing efficiency. In the first step, half the pixels of the image can easily be left shifted in parallel, since they occupy the right leaves of a parent; they are transferred through it to the left leaf. In the second step, those leaves which are on the leftmost boundary of 2 X 2 squares can be transferred through their great-grandparents to the rightmost boundary of their left neighboring 2 X 2 squares. However, full parallelism is no longer possible, because two values must pass through each great-grandparent. Similarly, as shown in the figure, for any k which is a power of two, the leftmost k image values from all k x k right subtrees can be transferred to the rightmost side of their corresponding left subtrees through the common roots of the two subtrees (PE3).

Figure 2: A 2k x k Subimage and its NON-VON Tree Representation

Parallelism is degraded further, since k values must pass through a common root; however, all k x k subtrees are still ganged together in SIMD fashion. The last step transfers n^(1/2)/2 pixels through each of the two sons of the full tree's root; since they must be transferred one at a time, this number gives a trivial lower bound on the complexity of any left shift algorithm. This bound can actually be attained by a limited form of vertical pipelining. The basic procedure to transfer k elements sends them up the right subtree one by one in a pipelined fashion. Since the k values all lie along the leftmost border of the subimage square, they all have relative subimage x coordinates of zero, and relative subimage y coordinates in the range 0 to k-1. The pipelining is therefore ordered by their relative y coordinate.
After a number of steps equal to the height of the right subtree, the first element reaches its root, PE1. This element is then transferred to the root of the left subtree, PE2, through the common root of the two subtrees, PE3. The algorithm continues to send the elements as they arrive in PE1 through PE3 to PE2, and moves the elements that have arrived at PE2 one level down the left subtree. Thus, in time proportional to (k + log k), image elements on the boundary of k x k subimages are shifted one position left. Shifting the whole image requires repeating this operation for k = 1, 2, 4, ..., n^(1/2)/2, where n is the image size. The time required to shift the whole image is proportional to the sum 1 + 2 + 4 + ... + n^(1/2)/2, which is equal to n^(1/2) - 1. Thus, the time required to shift the whole image one position left is O(n^(1/2)); the algorithm is therefore optimal to within a constant factor. The N-PASCAL algorithm to perform the shifting of k elements on the leftmost boundary of all k x k right subimages to the rightmost boundary of their neighboring k x k left subimages follows. The main routine, given last, calls five other ancillary routines which separately: select an element, pipeline it up, pass it through the common root, pipeline it down, and deposit it correctly.

procedure subimage_left_shift(k, h: integer);
var i, j: integer;
vector-var RELATIVE_X, RELATIVE_Y, SHIFT_UP: integer;
    SHIFT_LC, SHIFT_RC, SHIFT_DOWN, NEW_VAL: integer;
    LEAF: boolean;

/* The following procedure enables in the right subtrees those PE's corresponding to pixels with relative subimage locations (0,n), and prepares the gray-scale values for transfer via SHIFT_UP. */
procedure pick_element(n: integer);
begin
  SHIFT_UP := 0;
  where (LEAF = true) and (RELATIVE_X = 0) and (RELATIVE_Y = n) do
    SHIFT_UP := GRAY_VALUE;
end;

/* This procedure pipelines SHIFT_UPs up the right subtrees.
Since it applies to the full tree, it gets its limited pipeline effect by vertically pipelining zeros everywhere else and ORing them at the parents. */
procedure move_up;
begin
  where LEAF = false do
  begin
    N_RECV8(LC, SHIFT_UP, SHIFT_LC);
    N_RECV8(RC, SHIFT_UP, SHIFT_RC);
    N_OR8(SHIFT_UP, SHIFT_LC, SHIFT_RC);
  end;
end;

/* The following procedure transfers the up-pipelined SHIFT_UPs in the roots of the right subtrees into the down-pipelining SHIFT_DOWNs in the roots of the left subtrees. */
procedure move_around;
begin
  N_RECV8(RC, SHIFT_UP, SHIFT_RC);
  N_SEND8(LC, SHIFT_RC, SHIFT_LC);
  where (CURLEV = h-1) do
    SHIFT_DOWN := SHIFT_LC;
end;

/* This procedure pipelines SHIFT_DOWNs down the left subtrees. */
procedure move_down;
begin
  N_RECV8(P, SHIFT_DOWN, SHIFT_DOWN);
end;

/* The following procedure enables in the left subtrees those PE's corresponding to pixels with relative subimage locations (k-1,n), and stores the transferred value in NEW_VAL. */
procedure assign_element(n: integer);
begin
  where (LEAF = true) and (RELATIVE_X = k-1) and (RELATIVE_Y = n) do
    NEW_VAL := SHIFT_DOWN;
end;

/* MAIN ROUTINE */
begin
  /* 1. Compute the address of each image point relative to its k x k block, and mark leaf PE's. */
  RELATIVE_X := XADD mod k;
  RELATIVE_Y := YADD mod k;
  mark_leaf(LEAF);

  /* 2. Call the various procedures to pipeline the boundary elements between the two blocks. Note that h is the level of the subtree roots. */
  for i := 1 to (k+2*h) do
  begin
    if i <= k then pick_element(i-1);
    if i < k+h then move_up;
    if (i >= h) and (i <= h+k) then move_around;
    if i >= h then move_down;
    if i >= 2*h-1 then assign_element(i-2*h+1);
  end;
end;

To shift the whole gray-scale image one position, this procedure is called with values of k ranging from 1 to n^(1/2)/2. (For somewhat greater efficiency, the cases of k equal to 1 and 2 can be explicitly coded.) For each element shifted, 72 instructions are executed, requiring 18 μsec.
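The per-stage costs above can be checked numerically. In the following sketch (hypothetical, not part of the paper), the stage costs 1 + 2 + 4 + ... + n^(1/2)/2 sum to n^(1/2) - 1; for a 128 X 128 image that is 127 boundary-element transfers, which at 18 μsec each comes to roughly 2.3 msec, in line with the whole-image timing.

```python
# Hypothetical check of the geometric-series cost bound (not the paper's
# code): summing the per-stage costs k = 1, 2, 4, ..., sqrt(n)/2.
def total_shift_cost(n):
    """n: image size in pixels, with sqrt(n) a power of two."""
    root = int(round(n ** 0.5))
    k, total = 1, 0
    while k <= root // 2:
        total += k   # one stage moves k boundary elements per subimage pair
        k *= 2
    return total

cost = total_shift_cost(128 * 128)  # stages k = 1 .. 64
```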
For a 128 X 128 gray-scale image, 2.4 msec is needed to shift the whole image one pixel to the left. Shifting the gray-scale image more than one pixel is performed by executing this algorithm a number of times equal to the number of shifts required. This pure tree-based algorithm is still very slow by comparison with the trivial algorithm that would be used in a machine containing physical mesh connections. In the current version of NON-VON 3, for example, where the leaves are interconnected (through one-bit ports) to form a two-dimensional orthogonal mesh, a single-pixel shift would require approximately 2 μsec in the case of a gray-scale image, and 250 nsec for a binary image.

4.3 Image Correlation Algorithms: A Direct Approach

The first algorithm is a direct parallel implementation of the standard sequential machine algorithm; it exploits the limited vertical pipelining of the image shift algorithm given above, and serves as a (fairly expensive) baseline. In this algorithm, each leaf PE stores exactly one pixel of the image, and at each step it computes and accumulates one term of the cross correlation sum for that pixel's eventual output. To compute all but the central correlation function term, each leaf PE reads the value of image points in its neighboring PE's using the shift operation described in the previous subsection, and multiplies it by the corresponding template value, which has been broadcast to all PE's. Consequently, the algorithm consists of a repeated sequence of image shift and compute steps. This sequence is repeated a number of times depending on the template size. For simplicity, we assume that the template size is 3 X 3.

procedure image_corr1(temp: array[-1..1,-1..1] of integer);
vector-var SHIFT_VAL, CORR_VAL: integer;
begin
  /* 1. Compute the first term of the correlation function. The array temp stores the template values, and CORR_VAL is the accumulator. */
  CORR_VAL := temp[0,0] * GRAY_VALUE;

  /* 2.
Compute the rest of the correlation terms by shifting in a spiral fashion. The function shift_l(X, Y) shifts left the image represented by the vector variable X, and stores the result image in the vector variable Y. Shift_u, shift_d, and shift_r are defined similarly. */
  shift_l(GRAY_VALUE, SHIFT_VAL);
  CORR_VAL := CORR_VAL + temp[0,1] * SHIFT_VAL;
  shift_d(SHIFT_VAL, SHIFT_VAL);
  CORR_VAL := CORR_VAL + temp[1,1] * SHIFT_VAL;
  shift_r(SHIFT_VAL, SHIFT_VAL);
  CORR_VAL := CORR_VAL + temp[1,0] * SHIFT_VAL;
  shift_r(SHIFT_VAL, SHIFT_VAL);
  CORR_VAL := CORR_VAL + temp[1,-1] * SHIFT_VAL;
  shift_u(SHIFT_VAL, SHIFT_VAL);
  CORR_VAL := CORR_VAL + temp[0,-1] * SHIFT_VAL;
  shift_u(SHIFT_VAL, SHIFT_VAL);
  CORR_VAL := CORR_VAL + temp[-1,-1] * SHIFT_VAL;
  shift_l(SHIFT_VAL, SHIFT_VAL);
  CORR_VAL := CORR_VAL + temp[-1,0] * SHIFT_VAL;
  shift_l(SHIFT_VAL, SHIFT_VAL);
  CORR_VAL := CORR_VAL + temp[-1,1] * SHIFT_VAL;
end;

Note that the image shifts are order dependent, principally because diagonal shifts are not supported. If the template is an odd square (the most common case), an outwardly spiraling shift order, as shown in Figure 3, is always possible, thus visiting every required image position without any wasted shifts.

Figure 3: Data Flow in Spiral Convolution Algorithm

The time required to execute the function is O(t(s+c)), where t is the template size, c is the time required to compute a term in the correlation function, and s is the execution time of the image shift operation. On a pure tree machine like the mesh-free NON-VON configuration considered in this paper, the image shift time is O(n^(1/2)) for typical images, where n is the image size. Thus, the time required can be expressed in terms of image size as O(t(n^(1/2)+c)). If NON-VON's mesh connections were used, the image shift operation would be performed in constant time, and the computation of image correlation would require only O(tc) time.
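The spiral accumulation can be mimicked in ordinary Python. The sketch below (hypothetical; it substitutes a wraparound roll for NON-VON's shift operation, so cumulative offsets replace chained unit shifts) computes a full 3 X 3 correlation with one multiply-accumulate per neighbor position.

```python
# Hypothetical analogue of the spiral correlation (not the paper's code):
# roll(img, dr, dc) plays the role of the image shift, with wraparound.
def roll(img, dr, dc):
    n = len(img)
    return [[img[(r + dr) % n][(c + dc) % n] for c in range(n)]
            for r in range(n)]

def spiral_corr(img, temp):
    """temp is a 3 x 3 template indexed as temp[dr+1][dc+1]."""
    n = len(img)
    # Central term first, as in step 1 of the N-PASCAL procedure.
    corr = [[temp[1][1] * img[r][c] for c in range(n)] for r in range(n)]
    # Cumulative offsets reached by the spiral of unit shifts l,d,r,r,u,u,l,l.
    for dr, dc in [(0, 1), (1, 1), (1, 0), (1, -1),
                   (0, -1), (-1, -1), (-1, 0), (-1, 1)]:
        shifted = roll(img, dr, dc)
        for r in range(n):
            for c in range(n):
                corr[r][c] += temp[dr + 1][dc + 1] * shifted[r][c]
    return corr

result = spiral_corr([[0, 0, 0], [0, 1, 0], [0, 0, 0]],
                     [[1, 1, 1], [1, 1, 1], [1, 1, 1]])
```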
In the procedure described above, the correlation function term is computed by first multiplying two 8-bit integers, and then performing a 32-bit integer add (for a 15-level tree). This operation takes about 30 μsec on the present version of NON-VON 3. As noted earlier, the image shift operation for a 128 X 128 image takes about 2.4 msec on NON-VON when the mesh connections are not used. For a 3 X 3 template, this correlation function executes in 19.2 msec, a time dominated by the image shift time.

4.4 Image Correlation Algorithms: Template Duplication

The second approach incorporates two ideas to minimize the number of image shifts. The first idea actually sacrifices some parallelism by not computing all points' correlations in parallel. It partitions the points over time, and only computes a carefully chosen subset of points at each iteration. These, however, it computes without interleaved image shifts, and with the advantages of pure parallelism for multiplies and logarithmic vertical pipelining for summation. Secondly, it recovers some of the lost parallelism by pre-shifting the template and redundantly storing these values so that many of these subsets of points can be computed without any image shifting at all. The net result is that, although the number of iterations of the algorithm as a whole increases as a function of the size of the partition, the time of the algorithm decreases, since the dominant cost, the number of image shifts, decreases. This approach logically partitions the whole image into subimages of a constant size greater than the template size; each subimage becomes the focus of correlation operations that operate only on pixels entirely within its boundary. This partitioning is strictly logical; the image representation remains the same, with one pixel stored in each PE, and with subimages represented by subtrees of equal size.
However, the image representation is augmented by having each PE separately (and therefore redundantly) store pre-shifted template values. In extreme cases, each PE has one image pixel but many of these values; since pre-shifting can introduce dummy zeros, there may even be more of these values than there are values in the template itself. Though costly in space, the only need for time-demanding image shifting occurs when attempting a correlation on an image point near a subimage boundary. The following specific example illustrates the algorithm. Figure 4 depicts a subimage of size 4 X 4 and a template of size 3 X 3. There are four central locations in this subimage at which the correlation function can be computed using only the subimage gray values: they are at the relative subimage locations 5, 6, 9, and 10.

Figure 4: Image Correlation Template in a 4 X 4 Subimage

The computation of the correlation function for these four locations is performed by storing in every PE of the subtree four values; they are the values that the template achieves at that PE when the template is centered over each of the locations 5, 6, 9, and 10. This pre-shifting and storage can be done in parallel over all the subimages in time proportional to the central area size (four) plus the template size (nine), as follows. In each PE, an array indexed by relative central location is initialized with dummy zero values. Template coordinate and value triples are then broadcast one at a time, with each PE calculating which relative image location would be the template center if it were to take on the given broadcast template coordinates. Only if the relative location is one of the central ones is the template value stored opposite the appropriate central location index. After the template is broadcast, the correlation is computed for all pixels, in every subtree, at relative location 5.
All the products necessary for the correlation are formed in parallel by the 16 PE's; seven of the products are dummies. The sum is formed using vertical pipelining to the root of the subtree, from whence it is deposited into the proper output image pixel at location 5. All correlations for pixels in relative locations 6, 9, and 10 are then computed likewise. There are many ways to compute the correlation at the remaining border pixels. The following way is optimal, and is easily generalizable under certain assumptions; see the discussion below. To compute the correlation for relative locations 7 and 11, the whole image is shifted one position to the left, into a temporary integer vector variable. Since the values formerly at 7 and 11 are now in the central area, their necessary pixel neighborhood is entirely within the subtree. Two correlations are now computed by the subtree as if for relative locations 6 and 10; however, the respective results are deposited in relative locations 7 and 11. Shifting this temporary left-shifted image one position down, into a second temporary, makes it possible to compute the correlation for relative locations 2 and 3 in a similar way. Shifting the temporary left-shifted image one position up, into the second temporary, suffices for relative locations 14 and 15. The other six boundary points can be done in mirror image fashion, starting with a right shift of the original image. Note that only six image shifts are required in this approach, instead of the eight shifts in the standard algorithm described earlier in this section. This is the minimum number of shifts possible: there are 12 border locations that must be shifted into the central area, and a unit image shift can bring at most two new image locations there. The example algorithm is therefore optimal with respect to shifts, given the subimage and template sizes.
However, it should be evident from the unusual sequence of shifts and the need for temporary variables that some care was necessary to attain full shift efficiency. We now present the example algorithm in N-PASCAL in a simplified form, and then discuss its generalization. The algorithm assumes that the PE's have stored in REL_LOC their relative location in their subtree; this is computable over the entire tree in constant time, via shifts and adds on XADD and YADD. It also assumes that the templates have been pre-loaded so that in every PE, TEMP[i] is the appropriate template coefficient for calculating the correlation for relative location i; here, only values of i of 5, 6, 9, and 10 are valid indices. In practice, TEMP would be addressed by relative two-dimensional coordinates within the central area (i.e. TEMP[0..1,0..1]), at a slight incremental cost of indexing overhead. Additionally, the algorithm and the set-up code for REL_LOC and TEMP would be parameterized with respect to template size.

procedure image_corr2;
var i, j, subtree_height: integer;
vector-var CORR_VAL, HOR_SHIFT_VAL, VER_SHIFT_VAL: integer;
    CORR_L, CORR_R, X, Y, REL_LOC: integer;
    TEMP: array[5..10] of integer;
    LEAF: boolean;

/* The following procedure computes the correlation function using the template values for central location rel, and deposits the result at relative location final. The vector array TEMP stores the template values for calculating the central locations (in this case, four of them). Since the subimage size is 16, the root of the subimage is at level log(16); stored in subtree_height, it is used for vertical pipeline summation and deposit of the final correlation value into location final. */
procedure comp_corr(final, rel: integer);
begin
  X := Y * TEMP[rel];
  for j := 1 to subtree_height do
  begin
    N_RECV8(LC, X, CORR_L);
    N_RECV8(RC, X, CORR_R);
    X := CORR_L + CORR_R;
  end;
  for j := 1 to subtree_height do
    N_RECV8(P, X, X);
  where REL_LOC = final do
    CORR_VAL := X;
end;

begin
/* 1.
Compute the correlation function for points at central locations. */
    Y := GRAY_VALUE;
    for i := 5, 6, 9, 10 do comp_corr(i, i);
/* 2. Compute the rest of the correlation terms. Because of the shifts, the relative template locations are not the same as the deposit locations. */
    shift_l(GRAY_VALUE, HOR_SHIFT_VAL);
    Y := HOR_SHIFT_VAL;
    comp_corr(7, 6); comp_corr(11, 10);
    shift_d(HOR_SHIFT_VAL, VER_SHIFT_VAL);
    Y := VER_SHIFT_VAL;
    comp_corr(2, 5); comp_corr(3, 6);
    shift_u(HOR_SHIFT_VAL, VER_SHIFT_VAL);
    Y := VER_SHIFT_VAL;
    comp_corr(14, 9); comp_corr(15, 10);
    shift_r(GRAY_VALUE, HOR_SHIFT_VAL);
    Y := HOR_SHIFT_VAL;
    comp_corr(4, 5); comp_corr(8, 9);
    shift_d(HOR_SHIFT_VAL, VER_SHIFT_VAL);
    Y := VER_SHIFT_VAL;
    comp_corr(0, 5); comp_corr(1, 6);
    shift_u(HOR_SHIFT_VAL, VER_SHIFT_VAL);
    Y := VER_SHIFT_VAL;
    comp_corr(12, 9); comp_corr(13, 10);
end;

The algorithm is generalizable over all template sizes, although full shift efficiency is not always possible. Assume that the template size, t, is an odd square (t = (2b+1)^2; b is for "border"). Assume that the subimage area, a, is a square of a power of two (a = w^2; w is for "width"); trivially, a must be greater than t. We show that the algorithm generalizes when w is chosen to be greater than or equal to 4b. Such a choice of w is always possible, except for the unlikely case that the template width is more than half the image width. The number of central locations is given by (w-2b)^2; this is obtained by trimming a border of width b away from the subimage on all its sides. The remaining a-(w-2b)^2 locations are subject to image shifts. A unit image shift can bring at most w-2b new image locations into the central locations. A trivial lower bound on the number of shifts is therefore given by 4b(w-b)/(w-2b); this is rarely an integer. (It is not hard to prove that it is an integer if and only if the central locations form a square whose side is a power of two.)
An image shift one position left enables computation in a one-pixel-wide strip of w-2b pixels on the rightmost border of the central locations. Left shifts can be repeated a total of b times; this enables all pixels immediately to the right of the central area. After saving this fully left-shifted image, b repeated down shifts enable computation in the right half of the pixels immediately above the central location. This is where the constraint w >= 4b is necessary. It ensures that the one-pixel-high strip of w-2b pixels that are enabled after each down shift will stretch backwards from the right subimage boundary to at least the subimage center line. Starting again from the stored fully left-shifted image, b repeated up shifts enable the rest of the pixels in the right half of the subimage. The left half of the subimage is done in mirror image symmetry. Thus, the total number of shifts is 6b, which is always less than t-1, the number of shifts necessary for the naive algorithm. The algorithm is actually quite efficient, since if b is held constant while w grows, the trivial lower bound asymptotically approaches 4b. Further, for some commonly employed sizes of templates (b = 1, 2, and 4) the number of shifts actually attains its lower bound. Since the number of shifts is only proportional to b, the square root of the template size, the choice of w has no effect on shift timing. It should therefore be chosen as small as the constraint allows, in order to minimize the amount of sequential correlations and the size of the local pre-shifted template arrays. The above N-PASCAL algorithm executes in time proportional to TEMP clear time plus template broadcast time plus shift times plus correlation times; correlations are performed once per relative location in the subimage. Thus, cost is proportional to (w-2b)^2 + t + bs + a log a, where s is the unit shift time.
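The shift-count claims above are easy to check numerically. The following Python sketch (ours) evaluates the trivial lower bound 4b(w-b)/(w-2b) and confirms that, for b = 1, 2, and 4 with the minimal admissible width w = 4b, it coincides with the 6b shifts the algorithm actually uses, and beats the naive t-1 shift count:

```python
# Sketch (ours): check the shift-count analysis for the generalized
# algorithm.  b is the template border width, w the subimage width.
def trivial_lower_bound(b, w):
    # (a - (w-2b)^2) border locations, each shift enables at most
    # (w-2b) new ones; a - (w-2b)^2 simplifies to 4b(w-b).
    return 4 * b * (w - b) / (w - 2 * b)

for b in (1, 2, 4):
    w = 4 * b                       # smallest width the constraint allows
    assert trivial_lower_bound(b, w) == 6 * b     # bound attained
    # the naive algorithm needs t-1 = (2b+1)^2 - 1 shifts; 6b is fewer
    assert 6 * b < (2 * b + 1) ** 2 - 1
```

As w grows with b fixed, the bound 4b(w-b)/(w-2b) decreases toward 4b, matching the asymptotic statement in the text.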
Since w is chosen to dominate b, a dominates t; in terms of image and template sizes, this becomes O(t n^(1/2) + a log a). For moderate a, time is again dominated by shifting. In the case of 3 x 3 templates, this time is approximately 14.4 msec.

4.5 Image Correlation Algorithms: Image Duplication

The third algorithm for correlation on fine-grain tree-structured SIMD machines integrates correlation with image load in order to redundantly store image information. In limited cases, this adds only a small constant factor to image load time, but totally eliminates image shifting, yielding an algorithm whose complexity is proportional to template size. It has two drawbacks: it cannot be used on data already present in the tree, and it requires O(t) amounts of local storage in each PE. However, neither of these is usually a problem in practice, since correlation is usually among the first operations performed, and templates are usually of moderate size with respect to PE memory capacity. We briefly sketch the algorithm here. On image load, the triples (xadd, yadd, gray-value) are broadcast throughout the tree as before. However, each PE executes a slightly more complex associative match to test whether the incoming gray value ought to be stored. Instead of a strict comparison to its local XADD and YADD coordinates, it instead sees if the incoming point is within a half-template width of its local coordinates, and if so, it stores it within a gray value array addressed by offsets relative to the local coordinate.
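The redundant-storage idea can be illustrated in a few lines of Python (ours; the paper's own version, in N-PASCAL, follows). Each simulated PE stores its whole (2b+1) x (2b+1) neighborhood at load time, so correlation becomes a purely local dot product with no communication:

```python
# Sketch (ours): the image-duplication scheme on a toy sequential model.
def load_with_duplication(image, b):
    """Per-pixel neighborhood store; out-of-range offsets are omitted."""
    h, w = len(image), len(image[0])
    store = {}
    for y in range(h):
        for x in range(w):
            store[(x, y)] = {
                (dx, dy): image[y + dy][x + dx]
                for dy in range(-b, b + 1)
                for dx in range(-b, b + 1)
                if 0 <= y + dy < h and 0 <= x + dx < w
            }
    return store

def correlate(store, temp, b, x, y):
    """Local dot product with the template; no shifting needed."""
    return sum(temp[(dx, dy)] * v
               for (dx, dy), v in store[(x, y)].items()
               if (dx, dy) in temp)

image = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
box = {(dx, dy): 1 for dy in (-1, 0, 1) for dx in (-1, 0, 1)}  # 3x3 sum
store = load_with_duplication(image, b=1)
assert correlate(store, box, 1, 1, 1) == 45   # sum of all nine pixels
```

The storage cost per PE is O(t), as noted in the text; the per-pixel correlation cost is also O(t), with no image shifts at all.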
That is, assuming a template of size (2b+1)^2, the image load code is:

procedure image_corr_load(xadd, yadd, gray_value: integer);
vector var XADD, YADD, XOFFSET, YOFFSET: integer;
    GRAY_VALUE: array[-b..b, -b..b] of integer;
begin
    XOFFSET := xadd - XADD;
    YOFFSET := yadd - YADD;
    if abs(XOFFSET) <= b and abs(YOFFSET) <= b then
        GRAY_VALUE[XOFFSET, YOFFSET] := gray_value;
end;

/* Correlation code is straightforward: */

procedure image_corr3(temp: array[-b..b, -b..b] of integer);
var i, j: integer;
vector var GRAY_VALUE: array[-b..b, -b..b] of integer;
    CORR_VAL: integer;
begin
    CORR_VAL := 0;
    for i := -b to b do
        for j := -b to b do
            CORR_VAL := CORR_VAL + temp[i, j] * GRAY_VALUE[i, j];
end;

This algorithm can be used in a partial form, in which only some of the image gray values that surround a point are stored in each PE; the rest are obtained in the manner of the first, direct algorithm by shifting. In fact, the first and third algorithms form a continuum, in which a varying amount of pre-shifted image information is stored locally. The first, direct method stores only the image point represented by a PE; the third redundantly stores each point's full template neighborhood; intermediate algorithms can be easily devised. This continuum has a second dimension as well: the amount of template information pre-shifted and stored in each PE. As shown previously, the first, naive method stores no template information locally; the second method stores a subset determined by the relation of template size to subtree size. This second amount of pre-shifted information can be varied as well. It is not hard to write algorithms that redundantly store both some image points and some template points. Such redundancy appears to be the only way around the communication bottlenecks in pure tree machines.

5 Conclusion

In this paper, we have investigated the applicability of fine-grained SIMD "pure" tree machines to the execution of window-based low-level image understanding tasks.
Parallel algorithms were introduced and analyzed for image shifting and correlation. Issues related to the representation of images in tree machines, and to the rapid parallel input and output of images in such machines, have also been briefly addressed. The algorithms incorporate novel techniques that exploit vertical pipelining of the tree-structured communication topology, and that reduce the effects of communication bottleneck that might otherwise be associated with the root of the tree. These algorithms also illustrate the relative disadvantage of a pure tree machine (by comparison with a mesh-connected parallel machine) in the case of such window-based operations as image correlation. This limitation may be seen even more clearly by comparing the performance of a correlation algorithm that uses only NON-VON's tree connections with one that employs its mesh connections as well. Using the mesh connections, NON-VON is able to complete the correlation task in approximately 0.5 msec -- a major improvement over the pure tree algorithm that executes in 19.2 msec for a 3 x 3 template. One technique we have adopted to deal with the problem of communication within the tree involves partitioning the problem into a number of smaller problems that fit within a set of independent subtrees, within each of which communication is performed locally. This approach, which is exemplified in the second image correlation algorithm described in Section 4, takes advantage of the fact that far less communication may be required between the subtrees allocated to the different subproblems than would be required if the problem were executed by the full tree in the absence of subdivision. Performance results have been projected for the NON-VON machine (using only its tree connections, in order to address the issues of concern in this paper). Image correlation with templates of size 7 x 7 executes in 114 msec on NON-VON using only the tree part of the machine and the first algorithm.
The MPP executes the same algorithm in one msec, while the estimated execution times on a VAX 11/750, on a CLIP 4, and on an ICL DAP are 3000, 16, and 16 msec respectively. In general terms, the favorable performance and cost/performance results obtained have tended to derive from:

1. The effective use of an unusually high degree of parallelism, made possible by the machine's very fine granularity, and augmented by the redundant storage of data.
2. The extensive use of broadcast communication, content-addressable matching and other associative processing techniques.
3. The natural mapping of hierarchical and multi-resolution techniques developed by other researchers onto the machine's tree-structured physical topology.
4. The use of the tree to perform algebraically commutative and associative operations (such as addition) in time logarithmic in the number of pixels.
5. The simplicity and cost-effectiveness with which tree-structured machines can be implemented using VLSI technology.

Those problems for which the limitations of SIMD pure tree machines are most apparent tend to correspond to:

1. Situations in which the root of the tree may become a significant communication bottleneck.
2. Situations in which MIMD techniques would be more effective than the SIMD approaches considered in this paper.

Although techniques have been described that may be used to minimize the impact of these limitations in certain circumstances, it would appear that the incorporation of other communication topologies and modes of instruction execution may offer significant performance and cost/performance advantages over fine-grained SIMD "pure" tree machines in certain image understanding applications. The construction and evaluation of such machines in the context of practical image understanding problems thus remains an interesting area for future research.
ON THE RECONSTRUCTION OF A SCENE FROM TWO UNREGISTERED IMAGES Harit P Trivedi* GEC Research Limited Long Range Research Laboratory Wembley, Middlesex, UK ABSTRACT It is sometimes desirable to compute depth from unregistered pairs of images. I show that it is possible to calculate the two 'epicentres' and the relation governing pairs of epipolar lines, given 8 corresponding points in the two images in any coordinate system. This reduces the matching problem to one dimensional searches along pairs of epipolar lines and can be readily automated using any stereo algorithm. Depth, however, does not seem to be derivable without extra information. I show how to compute depth in two such instances, each involving two 'pieces' of information. 1 INTRODUCTION One often encounters unregistered pairs of stereo images (e.g. in microscopy) from which three dimensional information is nevertheless desired. This provided the motivation for the work reported here. Longuet-Higgins (1981) has shown that the camera geometry is fixed (assuming perspective projection) by the coordinates of 8 corresponding points in a certain coordinate frame. The latter entails knowledge of the 'natural origins' (defined as the point where the respective optic axis meets the image plane) and the orientations of both the image coordinate systems - in other words, the registration information. He also gave an algorithm to compute depth given this information. When images are unregistered, however, neither the natural origins nor the relative image orientation may be known. To what extent can one then succeed in recovering structure (depth)? 
I show that it is possible in the absence of any registration information whatever (i.e., given just the 8 corresponding points in arbitrary image coordinate systems) to work out the location of the 'epicentres' - where the interocular axis intersects the image planes and through which all epipolar lines pass - and the relation governing pairs of epipolar lines (defined in Section 4), one in each image. This reduces the rest of the matching problem to one dimensional searches along pairs of epipolar lines - which can be automated using any stereo algorithm.

*Now at BP Research Centre, Chertsey Road, Sunbury-on-Thames, Middlesex TW16 7LN, UK

Although it seems that structure cannot be inferred from the image data alone in the absence of any registration information whatever, full registration information is also not necessary. For example, given either (a) the direction of displacement (two direction cosines) of one camera with respect to the optic axis of the other, or (b) the orientation of the optic axis of one camera with respect to that of the other, I show how structure can be recovered.

2 BACKGROUND

I keep to the notation used by Longuet-Higgins (1981). Let a point in the scene have 3D coordinates (X1,X2,X3) and (X'1,X'2,X'3) with respect to the left and the right optic centres. Then its left and right image coordinates (measured from the natural origins) are (x1,x2) = (X1/X3, X2/X3) and (x'1,x'2) = (X'1/X'3, X'2/X'3), assuming unit focal length in both images without loss of generality. Thus image coordinates x3 = 1 = x'3, so that xi = Xi/X3 and x'j = X'j/X'3 (i,j = 1,2,3). Let the right camera position and orientation be obtained by displacing the left camera by a vector T and then rotating it so that its new orientation can be obtained from the old by applying the rotation matrix R^T. Then the two sets of 3D coordinates are related by X'j = Rjk(Xk - Tk), summation convention implied hereinafter.
Now from the Cartesian components of T, construct an antisymmetric matrix S (the cross-product matrix of T). Longuet-Higgins shows that the matrix Q = RS satisfies the relations

X'i Qij Xj = 0, (i,j = 1,2,3) (1)

and hence

x'i Qij xj = 0, (i,j = 1,2,3) (2)

for any point. Notice that (1) and (2) continue to hold under image magnification and length-scale changes to the displacement T. For convenience, one chooses |T| = 1.

From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved.

Given eight independent pairs of corresponding points - barring special cases (see Longuet-Higgins (1984)) - it is straightforward to compute the 8 independent ratios of the elements of Q as solutions to an 8 by 8 linear simultaneous system of equations. In the same paper, Longuet-Higgins also shows how to extract R and T (from Q), and hence structure.

3 TRANSFORMATION UNDER ROTATION AND TRANSLATION

Now consider a rotation of the right image (described by the rotation matrix Rz'(g)) about its optic axis - the z' axis - by some angle 'g' as introducing registration error in the orientation. By writing (2) as a matrix equation

(x')^T Q x = 0; (3)

i.e.,

(Rz'(g) x')^T (Rz'(g) Q) x = 0, (4)

we immediately see that the image pair still satisfies an equation of the form (2) but with

Q -> Q' = Rz'(g) Q. (5)

All that needs to be done to get things right is to absorb the extra rotation in R, i.e.,

R -> Rz'(g).R. (6)

Next we consider the effect of displacing the image origins by (u1,u2) and (u'1,u'2) in the left and the right images respectively. Then xi -> ξi = xi - ui,
and x'j -> ξ'j = x'j - u'j, (i,j = 1,2,3; u3 = u'3 = 0). Starting with (3) yields, after algebraic manipulation, the relation ξ'i Q''ij ξj = 0, or,

(ξ')^T Q'' ξ = 0; (7)

where

Q''ij = Qij, (i,j = 1,2) (7a)
Q''13 = Q13 + r, (7b)
Q''23 = Q23 + s, (7c)
Q''31 = Q31 + r', (7d)
Q''32 = Q32 + s', (7e)
Q''33 = Q33 + t0 + t'0 + v; (7f)

i.e.,

Q -> Q'' = [ Q11       Q12       Q13 + r
             Q21       Q22       Q23 + s
             Q31 + r'  Q32 + s'  Q33 + t0 + t'0 + v ] . (8)

Here

[r, s]^T = [ Q11 Q12 ; Q21 Q22 ] [u1, u2]^T, (9a)
[r', s'] = [u'1, u'2] [ Q11 Q12 ; Q21 Q22 ], (9b)
t0 = Q31.u1 + Q32.u2, (9c)
t'0 = u'1.Q13 + u'2.Q23, (9d)

and

v = r'.u1 + s'.u2 = u'1.r + u'2.s. (9e)

Combined rotations and translations of the image coordinate systems can be readily described by replacing Q in (7)-(9) with Q' of (5). The image coordinates, therefore, always obey a relation of the form (2), or equivalently, (3), whatever the coordinate system. Using this observation, I show how to work out the locations of the epicentres and the relation governing pairs of epipolar lines.

4 EPICENTRES AND EPIPOLAR LINES

Where the interocular axis intersects the image planes are the two epicentres. Now imagine a family of planes passing through the interocular axis. Each such plane intersects each image plane in a straight line (which naturally passes through the respective epicentre), giving rise to pairs of epipolar lines. Let the left and the right epicentres be located at (π1,π2) and (π'1,π'2). The equation of a straight line of slope m passing through (π1,π2) is (ξ2 - π2) = m(ξ1 - π1). Similarly, denoting by m' the slope of the corresponding epipolar line, the equation of the latter is (ξ'2 - π'2) = m'(ξ'1 - π'1). [The geometric motivation presented here is not essential. One can simply postulate the existence of epicentres and epipolar lines and the arguments go through.] Now any point on a certain epipolar line in one image can match any point on the corresponding epipolar line in the other image. Given that all matched points obey (7), one finds, by inserting for ξ'2 and ξ2 from above into the matrix representation of (7), that

[ξ'1, π'2 + m'(ξ'1 - π'1), 1] Q'' [ξ1, π2 + m(ξ1 - π1), 1]^T = 0 (10)

for all values of ξ1 and ξ'1.
The left hand side is a second order inhomogeneous polynomial in ξ1 and ξ'1 and can vanish identically if and only if the coefficient of each term vanishes. This yields four equations. The first of them, arising from the vanishing coefficient of the (ξ'1)(ξ1) term, immediately gives the relation

m = -(Q''11 + m'.Q''21)/(Q''12 + m'.Q''22) (11)

governing the slopes of a pair of epipolar lines. Note that it is independent of the normalisation of Q''. The solution to the rest of the matching problem can be mechanised by the use of any stereo algorithm. The condition that the coefficient of the term in ξ'1 must vanish yields, after substituting (11) for m, a polynomial in m' which must vanish. Equating the coefficient of each power of m' to zero gives two linear inhomogeneous equations in the two unknowns π1 and π2:

[ Q''11 Q''12 ; Q''21 Q''22 ] [π1, π2]^T = [-Q''13, -Q''23]^T. (12)

Similarly, the condition that the coefficient of the term in ξ1 vanish yields

[π'1, π'2] [ Q''11 Q''12 ; Q''21 Q''22 ] = [-Q''31, -Q''32]. (13)

That the constant term also vanishes can be verified by inserting the coordinates of the two epicentres in (7) and using (12) and (13). In the process, one obtains two interesting equations - one for each epicentre:

[π'1, π'2, 1] Q'' = 0, (14)

and

Q'' [π1, π2, 1]^T = 0; (15)

implying that

det|Q''| = 0. (16)

This serves as a check on the accuracy of the data and the calculations. Alternatively, observing that the last row and column of Q'' in (8) are linear combinations of the rows and columns of Q, it is readily seen that det|Q''| = 0 if and only if det|Q| = 0. That det|Q| = 0 follows from the fact that det|Q| = det|R|.det|S|, and it can be verified that det|S| = 0.
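The two computations just described - recovering the ratios of the elements of Q from 8 correspondences, and reading the epicentres off the null vectors of the singular matrix, as in (14) and (15) - can be exercised on synthetic data. The following Python sketch (ours; it uses NumPy and a particular sign convention for the cross-product matrix S, which the paper does not spell out) does both:

```python
# Sketch (ours): eight-point recovery of Q and epicentres as null vectors.
import numpy as np

rng = np.random.default_rng(0)
T = np.array([1.0, 0.2, 0.1])
g = 0.3                                  # a z-axis rotation, for simplicity
R = np.array([[np.cos(g), np.sin(g), 0.0],
              [-np.sin(g), np.cos(g), 0.0],
              [0.0, 0.0, 1.0]])
S = np.array([[0.0, T[2], -T[1]],        # assumed cross-product convention
              [-T[2], 0.0, T[0]],
              [T[1], -T[0], 0.0]])
Q = R @ S

# Synthesise 8 correspondences: x, x' are homogeneous image points.
X = rng.uniform(-3, 3, (8, 3))
X[:, 2] = rng.uniform(4, 8, 8)           # scene points in front of cameras
Xp = (X - T) @ R.T                       # X' = R(X - T)
x = X / X[:, 2:3]
xp = Xp / Xp[:, 2:3]

# Each correspondence gives one linear equation in the 9 entries of Q.
A = np.stack([np.outer(xp[i], x[i]).ravel() for i in range(8)])
_, _, Vt = np.linalg.svd(A)
Q_est = Vt[-1].reshape(3, 3)
Q_est /= np.linalg.norm(Q_est)
Qn = Q / np.linalg.norm(Q)
assert min(np.abs(Q_est - Qn).max(), np.abs(Q_est + Qn).max()) < 1e-6

# The left epicentre is the right null vector of Q, as in (15).
_, _, Vt = np.linalg.svd(Q)
pi = Vt[-1] / Vt[-1][2]                  # normalize third coordinate to 1
assert np.allclose(Q @ pi, 0, atol=1e-9)
```

Since Q is rank two (det|Q| = 0, as shown above), the recovered epicentre is simply the direction T scaled so its third component is 1.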
Starting with (3) and using the equivalents of (14) and (15) in the 'natural' coordinate system, i.e.,

[p'1, p'2, 1] Q = 0, (14')

and

Q [p1, p2, 1]^T = 0, (15')

where (p1,p2) and (p'1,p'2) are the epicentres in the natural coordinate system, an alternative form of Q'' can also be given:

Q''ij = Qij, (i,j = 1,2)
Q''3i = (u'1 - p'1).Q1i + (u'2 - p'2).Q2i, (i = 1,2) (17)

5 SCENE RECONSTRUCTION

Longuet-Higgins gives a method of recovering structure from Q. He also points out three equations relating the diagonal and the off-diagonal elements of the matrix Q^T Q (his eqn. (17)), the rotation matrix dropping out in the process. Three equations are not sufficient to determine the four unknowns u1, u2, u'1 and u'2 needed to recover Q from (8) or (17). Thus given Q'' alone, it does not seem possible to recover Q (whence structure). It is possible to recover structure, however, given either (a) the direction of displacement of one camera with respect to the optic axis of the other, or, (b) the orientation of the optic axis of one camera with respect to that of the other. Note that Qij = Q''ij, (i,j = 1,2). From the image data, therefore, one can obtain three ratios between these four elements. Now, from Q = RS,

Q11 = T2.R13 - T3.R12,   Q12 = T3.R11 - T1.R13,
Q21 = T2.R23 - T3.R22,   Q22 = T3.R21 - T1.R23. (18)

Given R, and using R_i x R_j = R_k, (i,j,k a cyclic permutation of 1,2,3), where R_m refers to the mth row of R regarded as a vector, (18) yields

T3 = (R13.Q21 - R23.Q11)/R31 = (R13.Q22 - R23.Q12)/R32. (19)

The two expressions for T3 in (19) provide an accuracy check. More importantly, it can happen that the right image (say) was rotated about its original position. This corresponds to an unknown rotation about the z' axis - represented by Rz'(g) - g being the angle. The two expressions for T3 then force a constraint on tan(g). To see this, write the final rotation matrix as R -> Rz'(g).R, where R is known (for example, (Arfken 1970)).
That is,

Rz'(g) = [ cos g   sin g   0
           -sin g  cos g   0
           0       0       1 ] . (20)

Equating the two expressions for T3 in (19) and substituting for the new R from (20), one obtains

tan(g) = -a/b,

where

a = (R13.Q21 - R23.Q11)/R31 - (R13.Q22 - R23.Q12)/R32,

and

b = (R23.Q21 + R13.Q11)/R31 - (R23.Q22 + R13.Q12)/R32. (21a)

There are two possible solutions for g given tan(g). If the two images are coarsely aligned (by eye, say) then the small angle solution is the desired solution. Next consider known displacement (T1,T2,T3). Denoting the ratio Q11/Q12 by ax (computed from data measurement), and setting R12/R11 = a1 and R13/R11 = a2, it can be readily shown that

a2 = T3.(a1 + ax)/(T2 + ax.T1) = f1(a1) (22)

is a linear function of a1. Similarly, denoting the ratio Q22/Q21 by ay (measured), and setting R21/R22 = b1 and R23/R22 = b2, it can be verified that

b2 = T3.(b1 + ay)/(T1 + ay.T2) = f2(b1) (23)

is a linear function of b1. Then R11^2 + R12^2 + R13^2 = 1 and R21^2 + R22^2 + R23^2 = 1 imply

R11^2 = 1/(1 + a1^2 + f1^2(a1)) (24)

and

R22^2 = 1/(1 + b1^2 + f2^2(b1)). (25)

The rotation matrix R is characterised by the four unknowns R11, R22, a1 and b1, and has the form

R = [ R11        a1.R11     f1(a1).R11
      b1.R22     R22        f2(b1).R22
      R31        R32        R33 ] , (26)

and using R_i x R_j = R_k, (i,j,k being a cyclic permutation of 1,2,3), where R_m refers to the mth row of R regarded as a vector, gives

a1 + b1 + f1(a1).f2(b1) = 0. (27)

Since f1(a1) and f2(b1) are linear functions of a1 and b1 respectively, (27) takes the form

b1 = (c4 - c2.a1)/(c3 + c1.a1), (28)

where

c1 = T3^2, (28a)
c2 = (T2 + ax.T1)(T1 + ay.T2) + ay.T3^2, (28b)
c3 = (T2 + ax.T1)(T1 + ay.T2) + ax.T3^2, (28c)
c4 = -ax.ay.T3^2. (28d)

There is now the last piece of unused information, the ratio Q22/Q12 = ayx (measured). Writing this out explicitly, squaring it [to get rid of the square-roots from (24) and (25)], and using (22)-(28), one obtains a fourth degree polynomial equation in a1, (29), whose coefficients involve combinations such as ayx^2.(h1.h2 - h3.h4) and ayx^2.(e1.h2 + h1.e2) - (e3.h4 + h3.e4), with

n1 = ax.n2, n2 = T3/(T2 + ax.T1),
n3 = ay.n4, n4 = T3/(T1 + ay.T2); (29a)

d1 = (T3 - T1.n1)^2,
d2 = (1 + n3^2).c3^2 + 2.n3.n4.c3.c4 + (1 + n4^2).c4^2,
d3 = (1 + n1^2),
d4 = [(T3 - T1.n4).c4 - n3.c3.T1]^2; (29b)

e1 = -2.T1.n2.(T3 - T1.n1),
e2 = 2.[(1 + n3^2).c1.c3 + n3.n4.(c1.c4 - c2.c3) + (1 + n4^2).c2.c4],
e3 = 2.n1.n2,
e4 = -2.[(T3 - T1.n4).c4 - n3.c3.T1].[c1.n3.T1 + c2.(T3 - T1.n4)]; (29c)

h1 = (T1.n2)^2,
h2 = (1 + n3^2).c1^2 - 2.c1.c2.n3.n4 + (1 + n4^2).c2^2,
h3 = (1 + n3^2),
h4 = [c1.n3.T1 + c2.(T3 - T1.n4)]^2. (29d)

Efficient subroutines exist (e.g. NAG) for obtaining the four roots of the polynomial. Having obtained a1, R can be calculated using (22)-(26) and (28). Since R is real, only real roots are of interest. Of the real roots, only those which yield positive depth (both X3 and X'3 > 0) for all points are acceptable. Empirically, the polynomial always appears to have two real roots. Each root has a single combination of the signs of R11 and R22 which yields positive depth for all data points. The nonveridical solution, however, produces a large origin shift (typically five times the image width) in one image, and small depths (typically a few tenths of the interocular distance). If the positions of the natural origins are known even roughly (e.g., they may be known to lie somewhere within the pictures), the veridical solution can be chosen quite unambiguously. Given T it is thus possible to compute R, and vice versa. Hence Q can also be computed. From (8) or (17), after rescaling Q'', the unknown coordinates (u1,u2) and (u'1,u'2) of the natural origins can be obtained. The image coordinates can then be appropriately transformed into their natural systems, whence depth can be calculated by the method prescribed by Longuet-Higgins:

X3 = [(R_1 - x'1.R_3).T] / [(R_1 - x'1.R_3).x], (30)

X1 = x1.X3, X2 = x2.X3, (31)

and

X'j = Rjk.(Xk - Tk). (i,j,k range over 1,2,3) (32)

Note that x, x', X, X' are now in the natural image coordinate system.
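The depth formula (30) is easy to verify numerically. The following Python sketch (ours) builds a synthetic camera pair with known R and T, projects a scene point, and recovers its depth; R_1 and R_3 denote the first and third rows of R and x = (x1, x2, 1):

```python
# Sketch (ours): numerical check of the depth equation (30).
import numpy as np

g = 0.2
R = np.array([[np.cos(g), 0.0, np.sin(g)],   # rotation about the y axis
              [0.0, 1.0, 0.0],
              [-np.sin(g), 0.0, np.cos(g)]])
T = np.array([1.0, 0.0, 0.2])
X = np.array([0.5, -0.3, 6.0])               # a scene point, left frame
Xp = R @ (X - T)                             # right-frame coordinates
x = X / X[2]                                 # left image point (x3 = 1)
xp1 = Xp[0] / Xp[2]                          # right image x-coordinate

num = (R[0] - xp1 * R[2]) @ T                # (R_1 - x'1.R_3) . T
den = (R[0] - xp1 * R[2]) @ x                # (R_1 - x'1.R_3) . x
X3 = num / den                               # eq. (30)
assert abs(X3 - X[2]) < 1e-9
```

Once X3 is known, (31) gives X1 and X2 by simple back-projection, and (32) transfers the point into the right camera's frame.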
6 SUMMARY

Given 8 corresponding points in two images without any registration information whatsoever, it is possible to calculate the two epicentres and the relation governing the pairs of epipolar lines. The rest of the matching problem reduces to one dimensional searches along the epipolar lines and can be automated using any stereo matching algorithm. Although it would appear that structure cannot be inferred from the image data alone in the absence of any registration information whatever, full registration information is also not necessary. For example, given either (a) the direction of displacement of one camera with respect to the optic axis of the other, or, (b) the orientation of the optic axis of one camera with respect to that of the other, methods were described to obtain structure.

ACKNOWLEDGEMENTS

It is a pleasure to thank Bernard Buxton for his critical reading of the manuscript.

REFERENCES

[1] Longuet-Higgins, H.C. "A computer algorithm for reconstructing a scene from two projections". Nature, 293 (1981) 133-135.

[2] Longuet-Higgins, H.C. "The reconstruction of a scene from two projections - configurations that defeat the 8-point algorithm". In the Proceedings of the First International Conference on the Applications of Artificial Intelligence, Denver, Colorado, USA, (1984) pp. 395-397.

[3] Arfken, G. Mathematical Methods for Physicists. (1970)

[4] The NAG (Numerical Algorithms Group) Fortran Library, subroutine C02AEF.
Depth and Flow From Motion Energy

David J. Heeger
CIS Department, University of Pennsylvania, Philadelphia, Pa. 19104
AI Center, SRI International, Menlo Park, Ca. 94025

Abstract

This paper presents a model of motion perception that utilizes the output of motion-sensitive spatiotemporal filters. The power spectrum of a moving texture occupies a tilted plane in the spatiotemporal-frequency domain. The model uses 3-D (space-time) Gabor filters to sample this power spectrum. By combining the outputs of several such filters, the model estimates the velocity of the moving texture - without first computing component (or normal) velocity. A parallel implementation of the model encodes velocity as the peak in a distribution of velocity-sensitive units. For a fixed 3-D rigid-body motion, depth values parameterize a line through image-velocity space. The model estimates depth by finding the peak in the distribution of velocity-sensitive units lying along this line. In this way, depth and velocity are simultaneously extracted.

1 Introduction

Image motion may be used to estimate both the motion of objects in 3-space and 3-D structure/depth. Motion information may also be utilized for perceptual organization, since regions that move in a "coherent" fashion may correspond to meaningful segments of the world around us. Optical flow, a 2-D velocity vector for each small region of the visual field, is one representation of image motion. To compute a velocity vector locally for each region of an image, there must be motion information, i.e., changes in intensity over time, everywhere in the visual field. Depth may be recovered from image motion given prior knowledge of the 3-D rigid-body motion parameters. A dense depth map is recoverable only if there is motion information throughout the visual field. Without texture, a perfectly smooth surface yields an image sequence in which most local regions do not change over time.
But in a highly textured world (e.g., natural outdoor scenes with trees and grass), there is motion information throughout the visual field. This paper addresses the issues of extracting velocity and depth for each region of the visual field by taking advantage of the abundance of motion information in highly textured image sequences. Most machine vision efforts that try to extract information from image motion utilize just two frames from an image sequence -- either matching features from one frame to the next [1] or computing the change in intensity between successive frames along the image gradient direction [2]. In a highly textured world neither of these approaches seems appropriate, since there may be too many features for matching to be successful and the image gradient direction may vary randomly from point to point.

Figure 1: (from Adelson and Bergen [5]) Spatiotemporal Orientation. (a) a vertical bar translating to the right (b) the space-time cube for a vertical bar moving to the right. (c) an x - t slice through the space-time cube.

There have recently been several approaches to motion measurement based on spatiotemporal filtering [3,4,5,6,7] that utilize a large number of frames sampled closely together in time. These papers describe families of motion sensitive mechanisms each of which is selective for motion in different directions. In the next section, I describe a family of motion-sensitive Gabor filters. The mathematics of motion in the spatiotemporal-frequency domain, discussed in Section 3, is used in Section 4 to derive a model for extracting image velocity from the outputs of these filters. Section 5 presents a parallel implementation of the model that operates as a collection of velocity-sensitive mechanisms. Section 6 discusses how depth is encoded by these velocity-sensitive mechanisms given prior knowledge of the 3-D rigid-body motion parameters.
Section 7 discusses the model's outputs for strongly oriented patterns that suffer from the aperture problem and suggests some future directions for this research.

2 Motion-Sensitive Filters
The concept of orientation in space-time is well explained by Adelson and Bergen [5]. Figure 1 shows the space-time cube for a vertical bar moving to the right. The slope of the edges in an x - t slice through this cube equals the horizontal component of the bar's velocity. The most successful technique for estimating edge orientation, e.g., as depicted in Figure 1(c), has been based on linear systems theory: convolution with linear filters.

PERCEPTION AND ROBOTICS / 657
From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved.

Figure 2: The power spectra of the 12 motion-sensitive, Gabor-energy filters are positioned in pairs on a cube in the spatiotemporal-frequency domain.

A 1-D Gabor filter [8] is simply a sine wave multiplied by a Gaussian window:

Gabor(t) = G(t) sin(ωt + φ₀)   (1)

where G(t) is a Gaussian. The power spectrum of a sine wave, sin(ωt), is a pair of impulses located at ω and -ω in the frequency domain. The power spectrum of a Gaussian is itself a Gaussian (i.e., it is a lowpass filter). Since multiplication in the space (or time) domain is equivalent to convolution in the frequency domain, the power spectrum of a Gabor filter is a pair of Gaussians centered at ω and -ω in the frequency domain, i.e., it is an oriented bandpass filter. Thus, a Gabor function is localized in a Gaussian window in the space (or time) domain, and it is localized in a pair of Gaussian windows in the frequency domain. Similarly, an example of a 3-D Gabor filter is

Gabor(x, y, t) = G(x, y, t) sin(ω_x0 x + ω_y0 y + ω_t0 t)   (2)

where G(x, y, t) is a 3-D Gaussian. This function looks like a stack of plates, with small plates on the top and bottom of the stack and the largest plate in the middle of the stack.
The stack can be tilted in any orientation in space-time. The power spectrum of Equation (2) is a pair of 3-D Gaussians. The model uses a family of Gabor-energy filters, each of which is the sum of the squared responses of a sine- and a cosine-phase Gabor filter, giving an output that is invariant to the phase of the signal. The present implementation uses 12 filters, each tuned to the same range of spatial frequencies but to different spatiotemporal orientations. Their power spectra are positioned in pairs on a cube in the spatiotemporal-frequency domain (Figure 2): four of them are at the eight corners of the cube, two at the centers of four of the sides, and six at the midpoints of the twelve edges. For example, the filter that is most sensitive to down-left motion has the following power spectrum:

G(ω_x - ω₀, ω_y - ω₀, ω_t - ω₀) + G(ω_x + ω₀, ω_y + ω₀, ω_t + ω₀)   (3)

where G(ω_x, ω_y, ω_t) is a 3-D Gaussian, ω_x, ω_y, and ω_t are spatial and temporal frequencies, and ω₀ specifies the tuning frequency at which the filter achieves its peak output. Gabor filters can be built from separable components, thereby greatly increasing the efficiency of the computations.

3 Motion in the Frequency Domain
Now let us review some properties of image motion, first presented by Watson and Ahumada [3,4], that are evident in the spatiotemporal-frequency domain. I shall begin by describing 1-D motion in terms of spatial and temporal frequencies, and observe that the power spectrum of a moving 1-D signal occupies a line in the frequency domain. Analogously, the power spectrum of a translating 2-D texture occupies a tilted plane in the frequency domain.

3.1 One-Dimensional Motion
The spatial frequency of a moving sine wave is expressed in cycles per unit of distance (e.g., cycles per pixel), and its temporal frequency is expressed in cycles per unit of time (e.g., cycles per frame).
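The phase invariance of the Gabor-energy construction described above can be sketched in 1-D; the particular σ and ω values below are illustrative assumptions, not the paper's:

```python
import math

def gaussian(t, sigma=1.0):
    # Gaussian window G(t) (unnormalized; the scale is irrelevant here).
    return math.exp(-t * t / (2.0 * sigma * sigma))

def gabor(t, omega=2.0, phase=0.0, sigma=1.0):
    # Equation (1): a sine wave multiplied by a Gaussian window.
    return gaussian(t, sigma) * math.sin(omega * t + phase)

def gabor_energy(t, omega=2.0, sigma=1.0):
    # Sum of the squared responses of a sine- and a cosine-phase filter.
    s = gabor(t, omega, 0.0, sigma)            # sine phase
    c = gabor(t, omega, math.pi / 2.0, sigma)  # cosine phase
    return s * s + c * c

# Since sin^2 + cos^2 = 1, the energy collapses to the squared Gaussian
# envelope, independent of where the sinusoid's zero crossings fall.
for t in (-1.5, 0.0, 0.7):
    assert abs(gabor_energy(t) - gaussian(t) ** 2) < 1e-12
```

The same cancellation holds pointwise for the 3-D filters, which is why the energy output does not depend on the phase of the underlying signal.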
Velocity, which is distance over time or pixels per frame, equals the temporal frequency divided by the spatial frequency:

v = ω_t / ω_x   (4)

Now consider a 1-D signal, moving with a given velocity v, that has many spatial-frequency components. Each such component ω_x has a temporal frequency of ω_t = v ω_x, while each spatial-frequency component 2ω_x has twice the temporal frequency, 2v ω_x. In fact, the temporal frequency of this moving signal, as a function of its spatial frequency, is a straight line passing through the origin, where the slope of the line is v.

3.2 Two-Dimensional Motion
Analogously, 2-D patterns (textures) translating in the image plane occupy a plane in the spatiotemporal-frequency domain:

ω_t = u ω_x + v ω_y   (5)

where v⃗ = (u, v) is the velocity of the pattern. For example, a region of a translating random-dot field or a translating field of normally distributed intensity values fills a plane in the frequency domain uniformly, i.e., the power of such an image sequence is constant within that plane and zero outside of it (a dot or impulse, and a normally distributed random texture, have equal power at all spatial frequencies). Because the motion of a small region of an image is approximated by translation in the image plane, the velocity of such a region may be computed in the Fourier domain by finding the plane in which all the power resides. The motion-sensitive spatiotemporal Gabor filters introduced earlier are an efficient way of "sampling" these power spectra (imagine a plane passing through the center of Figure 2).

4 Motion Energy to Extract Image Flow
Spatiotemporal bandpass filters like Gabor-energy filters and those filters discussed in previous papers [4,5,7] are not velocity-selective mechanisms, but rather are tuned to particular spatiotemporal frequencies.
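The tilted-plane relation of Equation (5) can be checked numerically on a drifting sinusoid; the particular frequencies and velocity below are arbitrary choices:

```python
import math

u, v = 0.5, -0.25      # assumed pattern velocity (pixels per frame)
wx, wy = 1.3, 0.7      # spatial frequencies of a 2-D sinusoid
wt = u * wx + v * wy   # Equation (5): temporal frequency on the plane

def moving_grating(x, y, t):
    # A pattern translating with velocity (u, v): f(x - u*t, y - v*t).
    return math.sin(wx * (x - u * t) + wy * (y - v * t))

def separable_form(x, y, t):
    # The same sequence written with an explicit temporal frequency wt.
    return math.sin(wx * x + wy * y - wt * t)

# The two expressions agree everywhere, so each spatial-frequency
# component of a translating pattern lies on the plane of Equation (5).
for (x, y, t) in [(0.0, 0.0, 1.0), (2.0, -1.0, 3.5), (0.3, 0.9, -2.0)]:
    assert abs(moving_grating(x, y, t) - separable_form(x, y, t)) < 1e-12
```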
Consider, for example, two sine gratings with the same velocity but different spatial frequencies (i.e., they have proportionately different temporal frequencies as well). A spatiotemporal bandpass filter will respond differently to these two signals even though they have the same velocity. Motion-energy filters were first presented by Adelson and Bergen [5]. Watson and Ahumada [3,4] first explained that a moving texture occupies a tilted plane in the frequency domain. This section combines these two concepts and derives a technique that uses a family of motion-energy filters to extract velocity. The role of the filters is to sample the power spectrum of the moving texture. By combining the outputs of several filters, the model estimates the slope of the plane (i.e., the velocity of the moving texture) directly from the motion energies without first computing component (or normal) velocity.

4.1 Extracting Image Flow
First, I derive equations for Gabor energy resulting from motion of random textures or random-dot fields. Based on these equations, I then formulate a least-squares estimate of velocity. Parseval's theorem states that the integral of squared values over the spatial domain is proportional to the integral of the squared Fourier components over the frequency domain. Convolution with a bandpass filter results in a signal that is restricted to a limited range of frequencies. Therefore, the integral of the square of the convolved signal is proportional to the integral of the power within the original signal over this range of frequencies. The power spectrum of a normally distributed random texture (or random-dot field) fills a plane in the spatiotemporal-frequency domain uniformly (Equation 5). The power spectrum of a Gabor filter is a 3-D Gaussian. By Parseval's theorem, Gabor energy, in response to a moving random texture, is proportional to the integral of the product of a Gaussian and a plane.
For example, the formula for the response of the Gabor-energy filter most sensitive to down-left motion is derived by substituting Equation (5) for ω_t in Equation (3), and integrating over the frequency domain:

∫_{-∞}^{∞} ∫_{-∞}^{∞} k exp(-c[(ω_x - ω₀)² + (ω_y - ω₀)² + (u ω_x + v ω_y - ω₀)²]) dω_x dω_y   (6)

where k is a scale factor and c depends on the bandwidth of the filter. This integral evaluates to

R(u, v) = f₁(u, v, k) exp(-f₂(u, v))   (7)

where

f₁(u, v, k) = 2kπ / (c √(u² + v² + 1))   (8)

f₂(u, v) = c ω₀² (u + v - 1)² / (u² + v² + 1)   (9)

Similar equations can be derived for all twelve Gabor-energy filters, thus yielding a system of twelve equations in the three unknowns (u, v, k). The factor f₁(u, v, k) appears in each of these twelve equations. We can eliminate it by dividing each equation by the sum of all twelve of them, resulting in a system of equations that depend only on u and v. These equations predict the output of Gabor-energy filters due to local translation. The predicted energies are exact for a pattern with a flat power spectrum (e.g., random-dot or random-noise fields).

Now let us formulate the "best" choice for u and v as a least-squares estimate for this nonlinear system of equations. Let Oᵢ (i = 1 to 12) be the twelve observed Gabor energies, and let

Ōᵢ = Oᵢ / Σ_{j=1}^{12} Oⱼ   (10)

be the normalized energies. Let R̄ᵢ(u, v) be the twelve predicted energies as in Equation (7), normalized in the same way. The least-squares estimate of v⃗ = (u, v) minimizes

E(u, v) = Σ_{i=1}^{12} [Ōᵢ - R̄ᵢ(u, v)]²   (11)

There are standard numerical methods for estimating v⃗ = (u, v) to minimize Equation (11), e.g., the Gauss-Newton gradient-descent method. In Section 5, I describe a parallel approach for finding this minimum. Since the system of equations is overconstrained (12 equations in two unknowns), the residuals [Ōᵢ - R̄ᵢ(u, v)] may be used to compute a confidence index for the solution. I am investigating the possibility of using, as potential confidence indices, the sum of the squares of the residuals, the variance of the residuals divided by their mean, as well as the sharpness/width/curvature of the minima.
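The normalized least-squares recipe of Equation (11) can be sketched as a brute-force search. The filter centers, the bandwidth constant, and the quadrature below are illustrative assumptions; the predicted energies are computed directly as the numerical integral of a Gaussian over the tilted plane (the idea behind Equation 6), so the sketch is self-consistent regardless of the closed form:

```python
import math

# Assumed filter centers: one member of each +/- pair on a cube in
# (wx, wy, wt); energy is even in frequency, so one member suffices.
CENTERS = ([(x, y, 1.0) for x in (-1.0, 1.0) for y in (-1.0, 1.0)] +  # 4 corner pairs
           [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)] +                       # 2 face pairs
           [(0.0, 1.0, 1.0), (0.0, 1.0, -1.0), (1.0, 0.0, 1.0),
            (1.0, 0.0, -1.0), (1.0, 1.0, 0.0), (1.0, -1.0, 0.0)])     # 6 edge pairs
C = 2.0  # assumed bandwidth constant

def predicted_energy(center, u, v):
    # Numerically integrate the filter's Gaussian bump over the
    # tilted plane wt = u*wx + v*wy (Equations 5 and 6).
    a, b, c0 = center
    step, n, total = 0.25, 15, 0.0
    for i in range(-n, n + 1):
        for j in range(-n, n + 1):
            wx, wy = i * step, j * step
            wt = u * wx + v * wy
            total += math.exp(-C * ((wx - a) ** 2 + (wy - b) ** 2 + (wt - c0) ** 2))
    return total * step * step

def normalized(energies):
    s = sum(energies)
    return [e / s for e in energies]

# Synthesize observed energies at a known velocity, then recover it by
# minimizing Equation (11) over a coarse velocity grid.
true_u, true_v = 0.5, -0.5
observed = normalized([predicted_energy(c, true_u, true_v) for c in CENTERS])
grid = [g * 0.5 for g in range(-2, 3)]
best = min(((u, v) for u in grid for v in grid),
           key=lambda uv: sum((o - p) ** 2 for o, p in zip(
               observed,
               normalized([predicted_energy(c, *uv) for c in CENTERS]))))
assert best == (true_u, true_v)
```

The grid evaluation here is exactly the "parallel approach" of Section 5: each grid point plays the role of one velocity-sensitive unit.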
Computations that use the flow vectors as inputs, e.g., for estimating 3-D structure and motion, could weight each vector according to its confidence. In summary, an algorithm for extracting image flow proceeds as follows:

1. Convolve each image in the image sequence with a center-surround filter to remove the dc and lowest spatial frequencies.
2. Compute motion energy as the sum of the squared outputs of the sine- and cosine-phase Gabor filters.
3. Smooth the resulting motion energies and divide each by the sum of all twelve.
4. Find the best choice of u and v to minimize Equation (11), e.g., by employing the Gauss-Newton method or the parallel approach presented in Section 5.

4.2 Results
The system of equations discussed above is precisely correct only for images with a flat power spectrum, but the model has been successfully tested on a variety of computer-generated and natural images. Figure 4 shows the flow field extracted from a random-dot image sequence of a sphere rotating about an axis through its center, in front of a stationary background. Figure 6 shows the flow field extracted from a computer-generated image sequence flying through Yosemite valley. Each frame of the sequence was generated by mapping an aerial photograph onto a digital-terrain map (altitude map). The observer is moving toward the rightward horizon. The clouds in the background were generated with fractals (see recent SIGGRAPH conference proceedings) and move to the right while changing their shape over time. One way to test the accuracy of the flow field is to use it to compensate for the image motion by shifting each local region of each image in the sequence opposite to the extracted flow. This should result in a new image sequence that is motionless, i.e., the intensity at each pixel should not change over time. Figure 6(d) shows the variance in the intensity at each pixel after compensating for the image motion in this way.
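The compensation check just described can be sketched in 1-D; the integer velocity and circular boundary handling are simplifying assumptions:

```python
def variance(samples):
    m = sum(samples) / len(samples)
    return sum((s - m) ** 2 for s in samples) / len(samples)

base = [3, 1, 4, 1, 5, 9, 2, 6]   # arbitrary 1-D "texture"
N, T, vel = len(base), 5, 2       # frame count T, true velocity vel

# A translating sequence: frame t is the texture shifted by vel*t.
frames = [[base[(x - vel * t) % N] for x in range(N)] for t in range(T)]

def temporal_variance(seq):
    # Mean per-pixel variance of intensity over time.
    return sum(variance([f[x] for f in seq]) for x in range(N)) / N

# Shifting each frame back against the extracted flow should yield a
# motionless sequence, i.e., zero temporal variance at every pixel.
compensated = [[f[(x + vel * t) % N] for x in range(N)]
               for t, f in enumerate(frames)]
assert temporal_variance(frames) > 0.0
assert temporal_variance(compensated) == 0.0
```

Residual variance after compensation, as in Figure 6(d), then flags regions (occlusions, nonrigid clouds) where the translation model breaks down.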
The variance is very small for most of the landscape region, indicating that the extracted flow field is quite accurate. Most of the high-variance regions can be attributed to occlusions - more of the landscape comes into view over time, thereby changing the grey levels. The extracted flow field is erroneous in two regions: (1) the cloud motion is blurred over the landscape near the horizon; (2) some of the flow vectors near the bottom of the image are incorrect, probably due to temporal aliasing and/or the aperture problem. The clouds change their shape over time while moving rightward. Compensating for the extracted rightward flow yields stationary clouds that still change their shape over time, resulting in the high variance at the top of 6(d). The procedure of estimating image flow and then compensating for it has allowed us to separate the cloud region from the rigidly moving landscape. A fractal-based model for the recognition of nonrigid, turbulent flows is presented by Heeger and Pentland [9].

5 A Parallel Implementation
The last step in the above algorithm is to find the minimum of a two-parameter function. One way to locate this minimum is to evaluate the function in parallel at a number of points (say, on a fixed square grid), and pick the smallest result. In the context of the model, each point on the grid corresponds to a velocity. Thus, evaluating the function for a particular point on the grid gives an output that is velocity-sensitive. Local image velocity may be encoded as the minimum in the distribution of the outputs of a number of such velocity-sensitive units, each tuned to a different v⃗. The units tuned to velocities close to the true velocity will have relatively small outputs (small error), while those tuned to velocities that deviate substantially from the true velocity will have large outputs (large error).
For a fixed velocity, the predicted motion energies from the system of equations discussed above (e.g., Equation 7) are fixed constants - denote them by wᵢⱼ. Thus, we may rewrite Equation (11) for a fixed v⃗ as

Fⱼ = Σ_{i=1}^{12} [Ōᵢ - wᵢⱼ]²   (12)

where Fⱼ is the response of a single velocity-sensitive unit and the wᵢⱼ are constant weights, each corresponding to one of the motion energies for a particular velocity. A mechanism that computes a velocity-tuned output from the motion-energy measurements performs the following simple operations:

1. Divides each motion energy by the sum or average of all twelve motion energies.
2. Subtracts a constant from each of the results of Step (1).
3. Sums the squares of the results of Step (2).

An example of the outputs of these velocity-tuned units is shown in Figure 3(a), which displays a map of velocity space, with each point corresponding to a different velocity - for example, v⃗ = (0,0) is at the center of each image, v⃗ = (2,2) at the top-right corner. The brightness at each point is the output of a velocity-sensitive unit tuned to that velocity; therefore, the minimum in the distribution of responses corresponds to the velocity extracted by the model.

6 Motion Energy to Recover Depth
This section presents a technique for recovering a dense depth map from the velocity-sensitive units discussed above, given prior knowledge of the 3-D rigid-body motion parameters.

Figure 3: (a) velocity-sensitive units responding to a moving random-dot field. The minimum in the distribution of responses corresponds to the velocity extracted by the model. (b) velocity-sensitive units responding to a single moving sinusoidal grating; the aperture problem is evident as there is a trough of minima.

There are a number of situations in which we have prior estimates of these parameters - for example, they may have been estimated from sparse image motion data or from other sensors, e.g., a mobile robot equipped with an inertial
guidance system moving through a static environment. First, I show that for a fixed 3-D rigid-body motion, depth values parameterize a line through image-velocity space. Each point on the surface of a rigidly moving surface patch has an associated velocity vector, V⃗ = (V_x, V_y, V_z), given by

V⃗ = Ω⃗ × R⃗ + T⃗   (13)

where Ω⃗ = (ω_x, ω_y, ω_z) are the rotational components of the rigid motion, T⃗ = (t_x, t_y, t_z) are the translational components of the rigid motion, and R⃗ = [x, y, z(x, y)] is the 3-D position of each point on the rigidly moving surface [10]. Under orthographic projection, image velocity v⃗ = (u, v) = (V_x, V_y). Thus, we may rewrite Equation 13 as

u = ω_y z - ω_z y + t_x   (14)
v = -ω_x z + ω_z x + t_y   (15)

For fixed Ω⃗, T⃗, x, and y, this is the parametric form of the equation of a line - changing z corresponds to sliding along this line. Now, I explain how to recover depth from the collection of velocity-sensitive units presented in the preceding section. Since depth parameterizes a line through velocity space, the local depth estimate corresponds to the minimum in the distribution of the outputs of those velocity-sensitive units that lie along this line. We need only locate the minimum along this line. Formally, we substitute Equations 14 and 15 for u and v in Equation 11, giving

F'(z) = E(u(z), v(z)) = Σ_{i=1}^{12} [Ōᵢ - R̄ᵢ(u(z), v(z))]²   (16)

and pick the z that minimizes F'(z). In this way, depth and velocity are simultaneously extracted from motion energy. A depth map recovered from the random-dot sphere sequence discussed above is shown in Figure 5. The technique may be extended to perspective projection by approximating with locally orthographic projection.

7 The Aperture Problem
An oriented pattern, such as a two-dimensional sine grating or an extended step edge, suffers from what has been called the aperture problem (for example, see Hildreth [11]): there is not enough information in the image sequence to disambiguate the true motion of the pattern.
At best, we may extract only one of the two velocity components, as there is one extra degree of freedom. In the spatiotemporal-frequency domain, the power spectrum of such an image sequence is restricted to a line, and the many planes that contain the line correspond to the possible velocities. Normal flow, defined as the component of motion in the direction of the image gradient, is the slope of that line. Figure 3(b) shows the outputs of velocity-sensitive units for a moving sinusoidal grating. The aperture problem is evident as there is a trough of minima. Preliminary investigation indicates that the velocity extracted by the model defaults to normal flow for such strongly oriented patterns. Depth may be recovered even for local regions that suffer from the aperture problem - though the velocity-sensitive units output a trough of minima, there will be a single minimum along a line passing across the trough. Future research will study how the velocity and depth estimates vary for patterns that are more and more strongly oriented.

8 Summary
This paper presents a model for extracting velocity and depth at each location in the visual field by taking advantage of the abundance of motion information in highly textured image sequences. The power spectrum of a moving texture occupies a tilted plane in the spatiotemporal-frequency domain. The model uses 3-D (space-time) Gabor filters to sample this power spectrum and estimate the slope of the plane (i.e., the velocity of the moving texture) without first computing component velocity. A parallel implementation of the model encodes velocity as the peak in a distribution of velocity-sensitive units. For a fixed 3-D rigid-body motion, depth values parameterize a line through image-velocity space. The model estimates depth by finding the peak in the distribution of velocity-sensitive units lying along this line. In this way, depth and velocity are simultaneously extracted from motion energy.
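The central geometric claim of Section 6 - that varying depth sweeps out a line through image-velocity space (Equations 13-15) - can be checked directly; the motion parameters below are arbitrary:

```python
def image_velocity(z, omega, t_vec, x, y):
    # Equations (14)-(15): orthographic image velocity of the point
    # (x, y, z) under rigid motion (rotation omega, translation t_vec).
    ox, oy, oz = omega
    tx, ty, _ = t_vec
    u = oy * z - oz * y + tx
    v = -ox * z + oz * x + ty
    return (u, v)

omega, t_vec, x, y = (0.1, -0.3, 0.2), (1.0, 0.5, 0.0), 2.0, -1.0
p0 = image_velocity(0.0, omega, t_vec, x, y)
p1 = image_velocity(1.0, omega, t_vec, x, y)
direction = (p1[0] - p0[0], p1[1] - p0[1])   # equals (omega_y, -omega_x)

# Every depth value lands on the same line through p0: the 2-D cross
# product of the offset with the direction vector must vanish.
for z in (-3.0, 0.5, 7.0):
    u, v = image_velocity(z, omega, t_vec, x, y)
    assert abs((u - p0[0]) * direction[1] - (v - p0[1]) * direction[0]) < 1e-12
```

Minimizing F'(z) in Equation (16) is then a 1-D search along exactly this line of velocity-sensitive units.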
Acknowledgments
This research is supported at the University of Pennsylvania by contracts ARO DAA6-29-84-K-0061, AFOSR 82-NM-299, NSF MCS-8219196-CER, NSF MCS 82-07294, AVRO DAAB07-84-K-F077, and NIH 1-RO1-HL-29985-01, and at SRI International by contracts NSF DCR-83-12766, DARPA MDA 903-83-C-0027, DARPA DACA76-85-C-0004, NSF DCR-8519283, and by the Systems Development Foundation. Special thanks to Ted Adelson for motivating this research, to Sandy Pentland for his invaluable help and advice, to David Marimont for his mathematical insight, and to Mark Turner for introducing me to Gabor filters. I also want to thank Jack Nachmias, Grahame Smith, and Dorothy Jameson for commenting on earlier drafts of this paper, and especially Lynn Quam for generating the Yosemite fly-through image sequence.

References
[1] Barnard, S.T., and Thompson, W.B. (1980) Disparity analysis of images, IEEE Transactions on Pattern Analysis and Machine Intelligence, 2, No. 4, pp. 333-340.
[2] Horn, B.K.P., and Schunck, B.G. (1981) Determining optical flow, Artificial Intelligence, 17, pp. 185-203.
[3] Watson, A.B., and Ahumada, A.J. (1983) A look at motion in the frequency domain, NASA Technical Memorandum 84352.
[4] Watson, A.B., and Ahumada, A.J. (1985) Model of human visual-motion sensing, Journal of the Optical Society of America A, 2, No. 2, pp. 322-342.
[5] Adelson, E.H., and Bergen, J.R. (1985) Spatiotemporal energy models for the perception of motion, Journal of the Optical Society of America A, 2, No. 2, pp. 284-299.
[6] van Santen, J.P.H., and Sperling, G. (1985) Elaborated Reichardt detectors, Journal of the Optical Society of America A, 2, No. 2, pp. 300-321.
[7] Fleet, D.J. (1984) The early processing of spatio-temporal visual information, MSc. Thesis (available as RBCV-TR-84-7, Department of Computer Science, University of Toronto).
[8] Daugman, J.G.
(1985) Uncertainty relation for resolution in space, spatial frequency, and orientation optimized by two-dimensional visual cortical filters, Journal of the Optical Society of America A, 2, No. 7, pp. 1160-1169.
[9] Heeger, D.J., and Pentland, A.P. (1986) Seeing structure through chaos, Proceedings of the IEEE Workshop on Motion: Representation and Analysis, Kiawah Island Resort, Charleston, South Carolina, pp. 131-136.
[10] Hoffman, D.D. (1982) Inferring local surface orientation from motion fields, Journal of the Optical Society of America, 72, No. 7, pp. 888-892.
[11] Hildreth, E.C. (1984) Computations underlying the measurement of visual motion, Artificial Intelligence, 23, No. 3, pp. 309-355.

Figure 4: A rotating random-dot sphere. (a) one frame from the image sequence. (b) actual flow field. (c) extracted flow - the brightness of each vector is weighted by a confidence index computed from the residuals of the least squares. (d) difference between (b) and (c).

Figure 5: (a) Actual depth map. (b) Recovered depth map. (c) Absolute value of the difference between (a) and (b).

Figure 6: (a) one frame of an image sequence flying through Yosemite valley. (b) extracted flow field. (c) the variance of image intensity over time at each pixel of the original image sequence. (d) variance after compensating for the image motion.
Saturn: An Automatic Test Generation System for Digital Circuits
Narinder Singh
Department of Computer Science, Stanford University, Stanford, CA 94305

ABSTRACT
This paper describes a novel test generation system, called Saturn, for testing digital circuits. The system differs from existing test generation systems in that it allows a designer to specify the structure and behavior of a design at a collection of abstraction levels that mirror the design refinement process. The system exploits the abstract design formulations to increase the efficiency of test generation by ignoring irrelevant detail whenever possible. These capabilities are made possible by using general representation and reasoning methods based on logic, which provide a declarative representation of a design, and permit using a single inference procedure for reasoning both forwards and backwards through the design for test generation.

I Introduction
With the advances being made in the technology of manufacturing digital devices, it is now possible to build systems of unprecedented complexity. An integral part of manufacturing such systems involves testing them in order to ensure their correct operation. The complexity of these systems has led directly to the complexity of generating a set of test vectors to verify the correct operation of the device. Traditional approaches to testing devices have either failed to distinguish between the most detailed device structure and its design, or have provided a limited vocabulary for capturing higher level design descriptions. The inability to represent and reason with abstract design formulations will make it impractical to generate tests for complex devices using traditional methods, where the cost is exponential in the size of the design. It is possible to manage the complexity of generating tests for complex devices by capturing their description (their subparts and the relationships between them) at a collection of abstraction levels.
By capturing higher-level design formulations we can dramatically improve the efficiency and quality of solutions for test generation, and also reduce the size of a design. It is possible to capture such high-level design descriptions by recording the evolving descriptions of a design during the refinement process. Though not ideal for test generation, these descriptions are much more efficient to reason with compared to a flat low-level (e.g., gate level) description of a device. This paper describes Saturn, a novel test generation system that permits a designer to specify the structure and behavior of a design at a collection of abstraction levels. The system exploits abstract design formulations to increase the efficiency of test generation by reducing the depth and branching factor of nodes in the search space. This paper is outlined as follows: section 2 defines the test generation task, which is followed by the description of the Saturn test generation system. Section 4 will examine the important dimensions along which a design can be abstracted to capture its higher level formulations, and section 5 will present some experimental results that demonstrate the utility of reasoning with abstract design formulations. Finally, the last section will conclude with a summary and a description of our current research efforts.

Schlumberger Palo Alto Research, 3340 Hillview Ave., Palo Alto, CA 94304

II Test Generation
Test generation involves generating a collection of tests which check the functionality of a device. The design of the device is assumed to be correct. However, the manufacturing process which realizes the physical device from the design specification is assumed to be imperfect. In this paper we are focusing on testing the functionality of a device, and are not addressing other properties, e.g., power consumption, and the steady-state voltage and current parameters.
In addition, we are only interested in checking whether a device is functioning or not, rather than identifying the source of any failures. The goal of test generation is to come up with a sequence of tests such that if the device satisfies these tests it is guaranteed to be consistent with its design. This goal must be satisfied subject to certain constraints (e.g., minimizing the length of the test vectors). In practice, it is impractical to generate the minimal set of test vectors to test a device, so a small set is acceptable. The result of the test generation process is a collection of tests, where each test specifies the values for some inputs and the expected values at some outputs. If the possible manufacturing failures for a device are either stuck-at-1 or stuck-at-0 faults at the boolean gate level, the test generation process must generate tests to check each possible fault for all the boolean gates in the device. For example, Figure 1 shows a 2-bit adder device, which is made up of 2 full-adders, which themselves are implemented using a collection of boolean gates. Testing this device includes testing the inputs and output of the exclusive-or gate xor1 to see if they are stuck-at 1, or stuck-at 0. A test for checking if the output is stuck-at 0 requires controlling the inputs so that the output should be 1 if the device is working, and checking the actual output for these inputs. For example, if the inputs of the exclusive-or gate are 1 and 0 and the output is 1, then the output cannot be stuck-at 0. Testing a device is complicated by the fact that usually a small fraction of its ports are directly controllable or directly observable. In order to test a subpart of a device we must set the directly controllable inputs so that the part being tested has the desired inputs and its output is propagated to a directly observable port.
For the previous test for xor1 this involves controlling the inputs of the adder to 1 and 0, and observing a 1 at the output, if none of the internal nodes can be directly controlled/observed. The exact value propagated to a directly observable output is not important as long as it depends on the output of the part being tested. The key problem in testing is to generate a reasonably small set of vectors that test all possible failures of a device. The execution of a single test is relatively inexpensive, and the number of vectors required to test a device is linear in the number of parts [6].

Figure 1: A design for an adder with 2-bit inputs.

A naive test generation algorithm would enumerate all input/state combinations; however, the number of test vectors generated will be unacceptably large. The key problem in reducing the number of test vectors is the cost of test generation, which is exponential in the depth of the circuit [9]. This complexity is compounded for sequential circuits, where we may have to "unfold" a copy of a finite state machine for each possible state. The full-adder example was used to illustrate the test generation task for a simple device. For large real-world devices this task is significantly more complex. In practice, the most optimistic empirical estimates show that the cost of test generation is proportional to the cube of the circuit size [6].

III Saturn
The inputs to the Saturn test generation system are: the design description for the device, the directly controllable inputs and outputs of the device, user-specified tests for parts of the device, and the user specifications of the abstraction levels at which the parts of a design are to be tested. These are specified in the language Corona [20], which defines a vocabulary of terms in a prefix form of predicate calculus.
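The stuck-at test for the exclusive-or gate discussed above can be sketched with a tiny fault-injection harness; the function names and fault encoding here are our own illustrative choices, not Saturn's:

```python
def xor_gate(a, b, fault=None):
    # fault is None, or ('out', 0) / ('out', 1) for an output stuck-at.
    out = a ^ b
    if fault is not None and fault[0] == 'out':
        out = fault[1]
    return out

# Test vector for "output stuck-at 0": pick inputs whose good output
# is 1, then compare the observed output against the expected one.
test_inputs, expected = (1, 0), 1
assert xor_gate(*test_inputs) == expected                    # good device passes
assert xor_gate(*test_inputs, fault=('out', 0)) != expected  # stuck-at-0 is detected
# Note the same vector cannot detect the output stuck-at 1:
assert xor_gate(*test_inputs, fault=('out', 1)) == expected
```

A second vector with expected output 0 (e.g., inputs 0 and 0) would be needed to cover the stuck-at-1 case, which is why each fault of each gate contributes to the test set.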
The system uses the resolution residue planning procedure [3] implemented in the knowledge representation system MRS [19] to automatically generate tests from the design descriptions. The system exploits the meta-level control capabilities of MRS to increase the efficiency of test generation by selecting the most abstract formulation for propagating values through a design. The performance of the system is further enhanced by checking the consistency of evolving solutions, and by sharing the results for one test with others by caching tests and solutions to subtasks for these tests. Saturn is an algorithmic test generation system similar to the D-algorithm [18]. Both systems achieve tests by propagating values forwards and backwards through a design. However, the D-algorithm can only generate tests for stuck-at faults at the boolean gate level, which is extremely inefficient for large designs. Saturn, on the other hand, can generate tests for designs described at arbitrary abstraction levels, with user-specified fault models. Hitest [17] and SCIRTSS [8] are examples of more recent systems that attempt to exploit higher level design formulations for generating tests, and allow the user to specify heuristics to control the reasoning process. The SCIRTSS system only permits specification of designs at two specific abstraction levels, as opposed to an arbitrary collection of abstraction levels. The Hitest system uses the PODEM [7] algorithm for achieving tests, where the system randomly generates inputs for the design and propagates these forwards to see if they achieve a test. Unfortunately, the random generation of inputs does not work well for testing sequential circuits, which require a specific sequence of stimuli to propagate values from their inputs to an output. The remainder of this section will illustrate the operation of the Saturn test generation system for the 2-bit adder device pictured in Figure 1. A.
Describing Designs

The design specification for the adder includes the behavior for: the adder as a whole, the full-adders, the ten boolean gates, and the connections. The following collection of facts specifies the structure of the design by specifying the components and their type.

Andg(a1)  Port(a1-in1 a1) ...  Conn(a1-out o1-in2) ...
Org(o1)   Port(o1-in1 o1) ...  Conn(o1-out a3-in1) ...
Xorg(x1)  Port(x1-in1 x1) ...  Conn(x1-in1 a1-in1) ...
Full-adder(f1)  Port(f-in1 f1) ...  Conn(f-in1 x1-in1) ...
Adder(a)  Port(a-in1 a) ...  Conn(a-in1-s f1-in1) ...

The facts on the first line assert that a1 is an and gate, a1-in1 is a port of the module a1, and that the port a1-out is connected to the port o1-in2. The remaining lines similarly define the type of the other components in the design and their interconnection.

The behavior of the modules and the connections is specified by a collection of facts in conjunctive-normal-form (CNF)¹. The lower case letters in the following behavior descriptions stand for universally quantified variables, and the '@' character is used to define the time at which a fact is true. For example, the fact a1-in1@3=0 asserts that the value of the port a1-in1 at time 3 is 0. In this design, connections have 0 delay, and the gates have a delay of 5 time units.

¬Conn(y z) ∨ ¬y@t=u ∨ z@t=u
¬a1-in1@t=0 ∨ a1-out@t+5=0
¬a1-in2@t=0 ∨ a1-out@t+5=0
¬a1-in1@t=1 ∨ ¬a1-in2@t=1 ∨ a1-out@t+5=1
...
¬f1-in1@t=u ∨ ¬f1-in2@t=v ∨ ¬f1-cin@t=w ∨ ¬Majority(u v w)=x ∨ f1-cout@t+15=x
Majority(x y y)=y    Majority(y x y)=y    Majority(y y x)=y
...
¬a-in1@t=u ∨ ¬a-in2@t=v ∨ a-out@t+30=u+v

The fact on the first line defines the zero delay behavior for connections. It states that if y is connected to z, and if y at time t has some value u, then z has the same value at the same time. The next three facts define the behavior of the and gate a1. The first two state that the output is 0 if any input is 0, and the third states that the output is 1 when both inputs are 1.
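Read operationally, the timed behavior clauses above amount to a forward simulator: connections copy values with zero delay, and a gate's output appears five time units after its inputs. A minimal Python sketch of that reading (the fact encoding is my own, not Saturn's internals):

```python
# Facts are {(port, time): value}.  Connections copy values with 0 delay;
# an and gate emits its output 5 time units after both inputs are known.

def propagate(facts, connections, and_gates):
    """Close the fact set under the connection and and-gate rules."""
    changed = True
    while changed:
        changed = False
        for (src, dst) in connections:            # Conn(y z): z@t = y@t
            for (port, t), v in list(facts.items()):
                if port == src and (dst, t) not in facts:
                    facts[(dst, t)] = v
                    changed = True
        for (in1, in2, out) in and_gates:         # out@t+5 = in1@t AND in2@t
            for (port, t), v in list(facts.items()):
                if port == in1 and (in2, t) in facts and (out, t + 5) not in facts:
                    facts[(out, t + 5)] = v & facts[(in2, t)]
                    changed = True
    return facts

facts = {("a1-in1", 27): 1, ("a1-in2", 27): 1}
facts = propagate(facts, [("a1-out", "o1-in2")], [("a1-in1", "a1-in2", "a1-out")])
print(facts[("a1-out", 32)])   # both inputs 1 at t=27 -> output 1 at t=32
print(facts[("o1-in2", 32)])   # copied through the zero-delay connection
```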
The next fact defines the behavior of the carry output of the first full-adder in terms of the majority function Majority. The next three lines define this function, where the result is equal to the two arguments that are identical. The fact on the last line defines the behavior of the adder as a whole. In specifying these behaviors, we have enumerated the input combinations (e.g., the and gate a1), made use of built in functions (e.g., +), and defined new functions (e.g., Majority).

¹The rule A ∧ B → C is equivalent to the clause ¬A ∨ ¬B ∨ C.

B. Automated Deduction

The Saturn test generation system uses the resolution residue planning procedure [3] for propagating values through a design. This procedure takes a collection of facts in CNF, a set of assumable facts (primitive actions that can be directly executed), and a specification of a goal. The inference procedure returns a set of assumable facts which together with the original design description entail the goal. An example of the basic rule of inference for resolution is given below:

α ∨ β
¬α ∨ γ
---------
β ∨ γ

This rule states that if you know that the clauses on the top two lines are true, then you can conclude that the clause on the bottom line is also true. The resolution procedure matches a literal in one clause against the negation of that literal in the second clause. In this example the literal α in the first clause is matched with its negation ¬α in the second clause. If such a match can be found, then we can conclude that the clause consisting of the disjunction of the remaining literals from the matching clauses must also be true. In this example the remaining literals from the first clause are {β}, and those from the second clause are {γ}. Therefore, the clause β ∨ γ must also be true.

The resolution residue planning procedure adds the negated goal to the original design, and repeatedly applies the resolution rule of inference until a clause with all assumable literals is derived.
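The resolution step itself is easy to mechanize. A small illustrative sketch in Python (clauses as frozensets of string literals, negation written as a leading '-'; this is textbook resolution, not Saturn's implementation):

```python
# One step of propositional resolution.  A literal is a string; a clause is
# a frozenset of literals; negation toggles a leading '-'.

def negate(lit):
    return lit[1:] if lit.startswith("-") else "-" + lit

def resolve(c1, c2):
    """All resolvents of two clauses (one per complementary literal pair)."""
    out = []
    for lit in c1:
        if negate(lit) in c2:
            out.append((c1 - {lit}) | (c2 - {negate(lit)}))
    return out

c1 = frozenset({"a", "b"})        # alpha OR beta
c2 = frozenset({"-a", "g"})       # NOT-alpha OR gamma
print(resolve(c1, c2))            # the single resolvent: {beta, gamma}
```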
This procedure can be used to control and observe values in propagating tests through a design. For the adder example, suppose we are trying to control the output of the and gate a1 to 1 at time 32. The negation of the goal, ¬a1-out@32=1, is resolved with the last clause of the behavior of the and gate (defined earlier) to give ¬a1-in1@27=1 ∨ ¬a1-in2@27=1 by matching the variable t to 27. In other words, in order to control the output of the gate to 1 at time 32 both inputs must be controlled to 1 at time 27. This inference procedure is repeated until the port to be controlled is one of the directly controllable inputs.

The same inference procedure can also be used to observe port values. For example, suppose we want to observe the value 1 at the first input of the same and gate at time 3. The negation of the goal², a1-in1@3=1, is resolved with the last clause of the behavior of the and gate to give ¬a1-in2@3=1 ∨ a1-out@8=1 by binding the variable t to 3. That is, in order to observe the value 1 at the first input of the and gate at time 3 we must control the other input to 1 at the same time and observe the value 1 at its output at time 8. This inference procedure is repeated until the port to be observed is one of the directly observable outputs.

C. Algorithm

The system first examines the topology of the design and computes estimates of the number of inference steps required for controlling and observing every port (separate estimates for controlling/observing) based on the directly controllable inputs and the directly observable outputs. The cost of controlling the directly controllable inputs is 1, and the cost of controlling an end port of a connection is one more than the cost of controlling the starting port. The cost of controlling an output of a module is one more than the sum of the costs of controlling all its inputs.
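The controllability cost rules just stated form a simple recursion over the netlist. A hedged sketch (the netlist encoding and port names are mine, chosen to match the adder example):

```python
# Controllability cost estimate: a directly controllable input costs 1,
# a connection adds 1 to the cost of its starting port, and a module
# output costs 1 plus the sum of the costs of controlling all its inputs.

from functools import lru_cache

# port -> ("input",) | ("conn", start_port) | ("module", (input_ports...))
NETLIST = {
    "a1-in1": ("input",),
    "a1-in2": ("input",),
    "a1-out": ("module", ("a1-in1", "a1-in2")),
    "o1-in2": ("conn", "a1-out"),
}

@lru_cache(maxsize=None)
def control_cost(port):
    kind = NETLIST[port]
    if kind[0] == "input":
        return 1
    if kind[0] == "conn":
        return 1 + control_cost(kind[1])
    return 1 + sum(control_cost(p) for p in kind[1])

print(control_cost("a1-out"))   # 1 + (1 + 1) = 3
print(control_cost("o1-in2"))   # one more through the connection -> 4
```

The memoization mirrors the paper's point that costs are computed once as a preprocessing step and then looked up.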
Similarly, the cost of observing a directly observable output is 1, and the cost of observing the starting port of a connection is one more than the minimum of the costs of observing all the end ports (e.g., the minimum at a fanout point). Finally, the cost of [...]

²Goals for observing port values are negated, e.g., the goal for this example is ¬a1-in1@3=1.

[...] strategies that the Saturn test generation system employs in order to increase the efficiency of inference for test generation: consistency checking, heuristics to guide search, and caching.

1. Consistency Checking

In general, there will be more than one choice at each decision point in the search space. The goal of consistency checking is to prune inconsistent paths in the search space as early as possible. For example, it is impossible to control the output of the or gate in a full-adder to 1 by controlling both its inputs to 1. In choosing a subgoal sgᵢ at a node in the search space, consistency checking corresponds to seeing if it is possible to prove ¬sgᵢ, in which case this node is pruned. The utility of consistency checking is dependent upon selecting the appropriate amount of effort in attempting to prove ¬sgᵢ. The drawback of too little effort is that inconsistencies are not caught early, and too much effort has the potential drawback that fruitless work may be done for consistent nodes.

The Saturn system performs consistency checking incrementally by propagating the consequences of every choice through the design. These consequences can propagate information forward and backward through the design. In general, it is computationally inefficient to prove that a given choice is consistent, since this task is not semi-decidable. The Saturn system performs limited consistency checking by only propagating the values of individual ports and state variables. This is sufficient for detecting contradictions for atomic clauses, but not for disjunctive clauses.
For example, it cannot detect that the four clauses (a ∨ b), (¬a ∨ b), (a ∨ ¬b), and (¬a ∨ ¬b) are mutually inconsistent. If an alternative at a choice point is inconsistent with the current state, it is pruned from the search space, and the system backtracks to try an alternate path. Since inconsistencies are not always detected immediately, chronological backtracking can be inefficient. Saturn keeps track of the justifications for each fact, follows the justifications back to a choice point, and tries another alternative at the source of the inconsistency. This corresponds to dependency-directed backtracking [2], which dramatically improves the efficiency of test generation.

2. Heuristics to Guide Search

Consistency checking cannot eliminate search, since there may be more than one consistent alternative at a choice point. The Saturn system uses heuristics to select the most promising alternative at a choice point. These heuristics are based on the estimated deductive cost for controlling/observing a port value, and the probability of being able to achieve this goal given the current problem constraints. The cost of a task is equal to the sum of the costs of its subtasks, which can be looked up directly in the knowledge base after the cost estimates have been calculated. The probability of being able to achieve a task is equal to

∏_{i ∈ common fanouts} (1/Dᵢ)^(Nᵢ−1)

where common fanouts are the fanout points in the design that need to be controlled for the task, Dᵢ is the size of the domain of possible values at fanout point i, and Nᵢ is the number of subgoals of the task that need to control the fanout point i. As a preprocessing step the system records the number Dᵢ with every fanout node, and also records the fanout nodes that may need to be controlled for controlling/observing every internal port. We can calculate Nᵢ for a task by looking up the fanout nodes that need to be controlled for its subtasks.
Let C(Tᵢ) be the cost of executing task Tᵢ, let P(Tᵢ) be the probability of succeeding in achieving task Tᵢ, and let C(Tᵢ,Tⱼ) be the cost of executing task Tᵢ followed by executing task Tⱼ. Since the tasks are independent of each other:

C(Tᵢ,Tⱼ) = C(Tᵢ) + (1 − P(Tᵢ)) × C(Tⱼ)

C(Tᵢ,Tⱼ) < C(Tⱼ,Tᵢ)  ⟺  P(Tᵢ)/C(Tᵢ) > P(Tⱼ)/C(Tⱼ)

That is, the task with the largest probability-of-success-to-cost ratio should be executed first.

The cost heuristics automatically select the more abstract design descriptions for propagating values since these have a lower cost associated with them. For example, in order to control the sum output of a full-adder it is cheaper to use the behavior of the full-adder as a whole, since this only requires controlling the inputs of the full-adder. The cost of controlling the output of the exclusive-or gate driving the sum output must be greater since it includes the cost of propagating the values at the input of the full-adder through the other gates and connections. Generating tests for an adder with four bit inputs required 272 seconds on a Symbolics 3600 using the cost estimates to select tasks. When the tasks were selected based on their order of generation, the time required was more than 2 hours, a factor of 26 slower.

3. Caching

In the Saturn test generation system, caching is used to share the definition of how to test one component with other similar components, and to share the solutions to subtasks for one test with other tests. Without test caching, the number of tests to generate is proportional to the number of modules in the design, as opposed to the number of different module types. For a hierarchically formulated design with l levels and approximately n modules per level (five for the full-adder), the number of modules to generate tests for is approximately n^l. If the average number of different types of submodules of a module is m (three for the full-adder), the number of modules to generate tests for is m^l.
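The caching arithmetic above is easy to check numerically (a trivial sketch; n, m, and the per-level counts are the figures given in the text for the full-adder):

```python
# Without test caching, tests must be generated for roughly n**l modules;
# with caching, only for the m**l distinct module types (n modules per
# level, m distinct types per level, l levels of hierarchy).

def modules_without_caching(n, l):
    return n ** l

def modules_with_caching(m, l):
    return m ** l

n, m = 5, 3          # full-adder: five gates, three distinct gate types
for l in (1, 2, 3):
    print(l, modules_without_caching(n, l), modules_with_caching(m, l))
# the ratio (n/m)**l grows with each added level of hierarchy
```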
Since n > m, these savings can be quite substantial. Generating tests for an adder with four bit inputs (with subparts that are four full-adders) required 272 seconds with test caching, and 1230 seconds without it. This represents a factor of 4.5 improvement.

Test generation involves testing all the possible faults for a device. These tasks share many subtasks in common. For example, it is possible to control the output of an and gate to 0 by controlling its first input to 0. This backward propagation is followed to the boundary of the enclosing hierarchy to find a solution. This same solution (the values of the inputs at the enclosing hierarchy boundary) can be cached with the other subgoals back to the hierarchy boundary (e.g., the first input of the and gate can be controlled to 0 with these inputs). For the 4-bit adder, the cost of test generation is reduced from 591 seconds to 272 seconds by caching solutions to subgoals. This represents a factor of 2.2 improvement. These examples have illustrated the savings for designs with one level of hierarchy. These savings are multiplied for complex designs with additional levels of hierarchy.

IV Exploiting Abstract Descriptions

One of the key features of the Saturn test generation system is that it permits the user to extend the vocabulary for describing designs. This allows a designer to specify a design in terms of the objects, functions and relations that he thinks of. We can exploit the higher level formulations of a design for test generation by capturing them in the design refinement process. By capturing design descriptions in predicate calculus it is possible to use the same descriptions to reason forwards and backwards through a design, as is needed for an automatic test generation system. Reasoning with higher level design formulations increases the efficiency of test generation by reducing the complexity of the descriptions.
In this section we will examine the following dimensions along which a design can be abstracted to capture these formulations: structural abstraction, functional abstraction, and value abstraction. By abstracting a design we are augmenting the original description with additional facts. We do not discard the lower level descriptions.

A. Structural Abstraction

Structural abstraction corresponds to the case where a design is augmented by a new module whose subparts are a collection of existing modules, for example, creating a new full-adder module whose subparts are the appropriate interconnection of the five boolean gates. We must define the sum and carry outputs of the full-adder as a function of its three inputs. The behavior of the newly created module must be related to its subparts by defining the relations between the values at its ports and the values at the ports of its subparts (e.g., via connections).

By allowing the user to define structural abstractions we can replace a subtree in the search space with a single node and its children. For example, we can define the input/output relations for the full-adder as a whole as a table (e.g., the Majority function), and directly look up the solutions for controlling an output. Using the lower-level design formulation we must reason backwards through the gates of the full-adder. The search space at the lower level includes: paths with no solutions, different paths with identical solutions, and solutions with redundant conjuncts [22]. Reasoning with structurally abstracted design formulations for test generation provides a savings which is exponential in the difference between the depth of the original and reformulated description. In addition, by sharing the descriptions of identical components using prototypes, the size of a structurally abstracted design can be made proportional to the number of distinct module types, as opposed to the total number of modules.

B.
Functional Abstraction

Functional abstraction corresponds to the case where the behavior of an existing module is specified in terms of a newly defined function. For example, we can define the behavior of the 2-bit adder by writing clauses that enumerate all the input/output combinations (16 clauses in all). Alternatively, we can define the addition function + (including its properties, e.g., commutativity, associativity), and define the behavior of the adder in terms of this function, as is done in the following CNF clause: ¬in1=x ∨ ¬in2=y ∨ out=x+y.

Functionally abstracting designs permits achieving tests using constraint propagation. For example, if a subgoal of a test requires controlling the output of an adder to 4122, we can achieve this goal by propagating symbolic constraints through the design. That is, the goal out=4122 is replaced by the new subgoals in1=x ∧ in2=y ∧ x+y=4122. The original design formulation forces the inference mechanism to employ search to solve this problem. That is, the system must prematurely select input combinations that add up to 4122, e.g., 0 and 4122, 1 and 4121, etc. The size of the search space using this strategy is exponential in the depth of the circuit. For a system of linear constraints, functional abstraction reduces the cost to a polynomial function of the number of constraints. However, if the constraints are nonlinear it may be advantageous to use search if there are good heuristics to guide the inference mechanism to paths likely to have solutions.

C. Value Abstraction

Value abstraction corresponds to the case where the behavior of an existing module is defined in terms of more abstract values. The original values are partitioned into equivalence classes such that all values in the same equivalence class map to a unique value at the abstract level.

Table 1: Utility of refining designs. (Columns: # Tests Flat, # Tests Refined, Flat Time, Refined Time.)
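The contrast in the functional-abstraction discussion above, between premature enumeration and symbolic constraint propagation for the goal out=4122, can be caricatured in a few lines (illustrative only; names are mine, and Saturn's actual constraint mechanism is not shown):

```python
# Premature selection enumerates concrete input pairs for out = in1 + in2;
# the symbolic route carries the constraint x + y = target and commits to
# a concrete value for one input only at the very end.

def enumerate_solutions(target, limit):
    """Premature enumeration: every concrete pair 0+target, 1+(target-1), ..."""
    return [(x, target - x) for x in range(limit)]

def symbolic_solution(target, pick_in1):
    """Carry the constraint x + y = target; bind x once, derive y."""
    x = pick_in1                 # deferred choice, made once at the end
    return (x, target - x)

print(len(enumerate_solutions(4122, 4123)))   # 4123 candidate pairs
print(symbolic_solution(4122, 2000))          # one pair: (2000, 2122)
```

The point is the branching factor: enumeration offers thousands of alternatives at one choice point, while the symbolic form leaves a single constraint to be solved later.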
For example, at the lower level we can model the behavior of a multiplier in terms of integer values at its inputs and outputs. At a more abstract level we can define the behavior of the multiplier in terms of the objects positive and negative, i.e., ¬in1=positive ∨ ¬in2=negative ∨ out=negative, etc. Controlling the output of the multiplier to a negative number using the original design formulation requires prematurely selecting specific integers at the inputs with opposite sign. Using the value abstracted design description we find the two solutions: in1=positive ∧ in2=negative, and in1=negative ∧ in2=positive. This delays the actual selection of specific values at the inputs, some of which may be inconsistent with the current state.

Value abstraction reduces the branching factor of the nodes in the search space by reducing the number of alternatives that achieve a goal. A linear reduction in the branching factor reduces the size of the search space by a factor that is exponential in the depth of the circuit.

V Experimental Results

In this section we will present examples that illustrate the utility of abstracting and refining designs, and describe the complexity of devices for which Saturn has been used. In these examples it is assumed that the possible failures are stuck-at faults for the boolean gates.

Table 1 shows the utility of refining designs for a collection of devices of increasing complexity. For example, adder_i stands for an adder with i-bit inputs. The columns for the flat design formulation correspond to a black box description of the device at a high level, which must be tested by testing all input combinations. The columns for the refined design formulations correspond to descriptions that have been refined to the gate level³. The number of tests and the time grow exponentially using the high-level formulation alone, whereas the number of tests remains roughly constant using the refined descriptions and the time grows quadratically.
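The sign-abstracted multiplier described in the value-abstraction discussion above can be written as a four-entry rule table; finding the abstract solutions for a goal is then a lookup (a sketch with names of my choosing, mirroring the clause ¬in1=positive ∨ ¬in2=negative ∨ out=negative):

```python
# Abstract (sign-level) behavior of a multiplier: each entry maps an input
# sign pair to the sign of the product.

SIGN_RULES = {
    ("positive", "positive"): "positive",
    ("positive", "negative"): "negative",
    ("negative", "positive"): "negative",
    ("negative", "negative"): "positive",
}

def control_output_to(goal_sign):
    """Abstract solutions: input sign pairs whose product has goal_sign."""
    return [ins for ins, out in SIGN_RULES.items() if out == goal_sign]

print(control_output_to("negative"))
# two abstract solutions, each standing for infinitely many integer pairs
```

Each abstract solution defers the choice among the (unboundedly many) concrete integer pairs it stands for, which is exactly the branching-factor reduction the text describes.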
For small devices the cost of test generation is cheaper using only the high level descriptions. However, for large devices both the time and the quality of solutions (smaller number of tests) improve using the refined descriptions.

An example illustrating the utility of abstracting designs is given in Table 2 for an adder with 4-bit inputs. The flat gate level formulation corresponds to a description of the adder in terms of boolean gates alone, while the hierarchical description defines the adder in terms of 4 full-adders, which are themselves defined in terms of boolean gates. In addition to reducing the time required to generate tests by a factor of 3.5, the abstracted description provided better tests (a smaller set). This is due to the imperfections of the test minimization algorithm. The repeated application of the minimization algorithm at the boundaries of the full-adders with a smaller set of tests provides better solutions compared to a single application with a large set of test vectors.

³Each adder has been refined into a collection of full-adders, which themselves are refined into a collection of boolean gates.

Table 2: Advantage of abstracting a design description. (Columns: Formulation, Time, # Tests, Cost Factor.)

The largest design for which we have generated tests consists of 3 multipliers and 2 adders (with 4 bit inputs). This design has approximately 650 objects, seven levels of hierarchy, and values ranging from bits to integers. The Printer Adapter card of the IBM PC is the most realistic example for which we have generated tests. The interesting aspects of this board are that it has feedback paths, bi-directional signals, and tri-state busses. The system generated 40 tests using a high level formulation of the board in approximately 30 minutes⁴. Currently we are in the process of generating tests for the motherboard of the IBM PC-AT.
This device is significantly more complex since it includes complicated microprocessors (Intel 80286/87) and peripheral chips.

VI Conclusion

In order to generate tests efficiently for complex devices we must reason with higher level design formulations. Existing test generation systems have restricted the vocabulary for describing designs, thus preventing a designer from specifying the higher level design formulations that are created in the design refinement process.

By capturing higher level formulations of a device we can reduce the size of the search space exponentially by reducing its depth and branching factor. We have empirically validated this by demonstrating that both the time and the quality of solutions can be improved by reasoning with higher level design formulations. Thus, by capturing a design formulation at a collection of abstraction levels we can increase the complexity of the devices that we can generate tests for.

We believe that this approach is suitable for large real-world designs. At present we have demonstrated this for the printer adapter card of the IBM PC, and we are in the process of modeling and generating tests for the motherboard of the IBM PC-AT.

Acknowledgments

This paper has benefited from comments and suggestions by Mike Genesereth, Glenn Kramer, Mark Shirley, and Vineet Singh. This research was funded in part by Schlumberger Palo Alto Research.

REFERENCES

[1] Comerford, R. and Lyman, J. "Self-Testing Special Report," Electronics, March 10, 1983, pp 109-124.
[2] Doyle, J. "Truth Maintenance Systems for Problem Solving," AI-TR 419, Massachusetts Institute of Technology, January, 1978.
[3] Finger, J. and Genesereth, M. "Residue: A Deductive Approach to Design Synthesis," HPP-85-1, Stanford University Heuristic Programming Project, January, 1985.
[4] Garey, M. and Johnson, D. Computers and Intractability: A Guide to the Theory of NP-Completeness, W.H. Freeman and Company, 1979, pp 161-164.
[5] Genesereth, M. et al. "The MRS Dictionary," HPP-80-24, Stanford University Heuristic Programming Project, January, 1984.
[6] Goel, P. "Test Generation Cost Analysis and Projections," Proceedings of the 17th Design Automation Conference, June, 1980.
[7] Goel, P. "An Implicit Enumeration Algorithm to Generate Tests for Combinational Logic Circuits," IEEE Transactions on Computers, vol. C-30, no. 3, 1981, pp 215-222.
[8] Hill, F. and Huey, B. "SCIRTSS: A Search System for Sequential Circuit Test Sequences," IEEE Transactions on Computers, May 1977, pp 490-502.
[9] Ibarra, H. and Sahni, S. "Polynomially Complete Fault Detection Problems," IEEE Transactions on Computers, vol. C-24, no. 3, March 1976, pp 242-250.
[10] Kramer, G. "Employing Massive Parallelism in Digital ATPG Algorithms," Proceedings of the 1983 IEEE International Test Conference, IEEE Press, pp 108-114.
[11] Lai, K. "Functional Testing of Digital Systems," CMU-CS-81-148, Carnegie-Mellon University, December, 1981.
[12] Mark, G. "Parallel Testing of Non-volatile Memories," Proceedings of the 1983 IEEE International Test Conference, IEEE Press, pp 738-743.
[13] McCluskey, E. "A Survey of Design for Testability Scan Techniques," VLSI Design, December, 1984, pp 38-61.
[14] McCluskey, E. "Minimization of Boolean Functions," Bell System Technical Journal, 35, no. 6, November, 1956, pp 1417-1444.
[15] Moszkowski, B. "Reasoning about Digital Circuits," STAN-CS-83-970, Stanford University, June, 1983.
[16] Nilsson, N. Principles of Artificial Intelligence, Tioga Publishing Company, Palo Alto, 1980.
[17] Robinson, G. "Hitest: Intelligent Test Generation," Proceedings of the 1983 IEEE International Test Conference, IEEE Press, pp 311-323.
[18] Roth, J. "Diagnosis of Automata Failures: A Calculus and a Method," IBM Journal of Research and Development, vol. 10, pp 278-291, 1966.
[19] Russell, S. "The Complete Guide to MRS," KSL-85-12, Stanford Knowledge Systems Laboratory, June, 1985.
[20] Singh, N.
"Corona: A Language for Describing Designs," HPP-84-37, Stanford University Heuristic Programming Project, September, 1984.
[21] Singh, N. "MARS: A Multiple Abstraction Rule-Based Simulator," HPP-83-43, Stanford University Heuristic Programming Project, December, 1983.
[22] Singh, N. Exploiting Design Morphology to Manage Complexity. PhD thesis, Stanford University, August 1985.

⁴Additional information for these examples can be found in [22].
SHAPE FROM DARKNESS: Deriving Surface Information from Dynamic Shadows

John R. Kender* and Earl M. Smith
Department of Computer Science, Columbia University, New York, New York 10027

1 Abstract

We present a new method, shape from darkness, for extracting surface shape information based on object self-shadowing under moving light sources. It is motivated by the problem of human perception of fractal textures under perspective. One-dimensional dynamic shadows are analyzed in the continuous case, and their behavior is categorized into three exhaustive shadow classes. The continuous problem is shown to be solved by the integration of ordinary differential equations, using information captured in a new image representation called the suntrace. The discretization of the one-dimensional problem introduces uncertainty in the discrete suntrace; however, it is successfully recast as the satisfaction of 8n constraint equations in 2n unknowns. A form of relaxation appears to quickly converge these constraints to accurate surface reconstructions; we give several examples on simulated images. The shape from darkness method has two advantages: it does not require a reflectance map, and it works on non-smooth surfaces. We conclude with a discussion of the method's accuracy and practicality, its relation to human perception, and its future extensions.

2 Introduction

We present a new, active method for obtaining shape information from low level cues. It exploits the information implicit in the shadows that an object or an object part casts upon itself or another object. In spirit, it is most like the photometric stereo method of Woodham (Woodham, 1981), in that it requires control over illuminant position. However, it also extends the existing work on shadow geometry of Shafer (Shafer, 1985) and others, and gives additional insight into the nature of shadows, especially in the cases where the objects are neither polyhedra nor smooth, or where the shadows are dynamically changing.
The method has two major advantages. It appears to work best for textured objects, that is, where existing methods fail most badly. And it is more robust than existing methods, in that it requires little a priori information about a surface's reflectance. Further, it illustrates the inherent utility--and complexity--of static or dynamic shadow-based cues for any integrated vision system, whether active or passive.

3 Historical Background

The method, which can be called shape from darkness, was motivated by an interest in the human perception of fractal textures. As Pentland has shown (Pentland, 1984), the fractal dimension of textured surfaces is a powerful feature on which the segmentation of an image can be based. He further observed that the image of a single fractal surface viewed under perspective has non-constant fractal dimension. It is conjectured that this change in measured feature is closely related to the change in overall local surface orientation of the surface with respect to the observer. If this is the case, then fractal dimension can serve as a basis for a "shape from fractal" method, similar to other gradient-based shape from x methods.

However, the mathematics behind such relationships appear formidable. This is because the observed change in fractal dimension appears to be due to the increasing self-occlusion of the fractal surface as it is viewed at increasingly oblique angles. That is, unlike an airborne observer of a mountain range, an observer down in the foothills sees very little of the mountain peaks: he sees mostly the sides of foothills. The mathematical difficulty stems from the intractability of the threshold-like non-linear functions that express the nature of object occlusion; the difficulties are similar to the ones faced when trying to integrate object segmentation with standard shape from x methods.
*This research was supported in part by ARPA grant #N00039-84-C-0165, by a NSF Presidential Young Investigator Award, and by Faculty Development Awards from AT&T, Ford Motor Co., and Digital Equipment Corporation.

Nevertheless, the problem does have the following analogue, which ultimately suggested the method reported here. It is that self-occlusion is very similar to shadowing: were a light source moved to the observer's position, the self-occluded areas would now be the ones in shadow. Thus, instead of attempting to investigate the effect that varying surface orientations have on observed fractal properties (or, equivalently, the effect that varying observer positions have), one can explore the effects that varying light source positions have on the generation of a fractal's shadows. Ideally, one would like to look into the shadows in order to see what information has been lost.

Generating and analyzing shadow information allows for several computational efficiencies. Essentially, when working with shadows, one is doing rendering and shading under extreme conditions. The capture of shadow information from real imagery or the generation of shadows synthetically both result in binary imagery. Instead of collecting shading information that has a range of values, one obtains a characteristic function instead: zero means shadow, one means illuminated. Simple thresholding of actual imagery is usually all that is required, and the synthetic casting of shadows is a straightforward computation.

The imagery that results can be seen as extreme shape from shading in another sense. A synthetic shadow image can be obtained in the standard graphic rendering way by first thresholding the reflectance map: all gradients which reflect any light at all are set to one, and the remainder of the map stays at zero (for self-occluding). What results when an image is rendered with such a map is an image with extreme contrast; indeed, the contrast cannot be more extreme.
Recovering the depth or orientation of those surface fragments that have been shadowed is clearly a difficult task given only one shadow image. As with many other problems in vision, many influences are conflated into the simple image observable, the shadow. The beginning of a shadow is determined not only by surface orientation and illuminant direction, but also by the absence of any prior surface to overshadow it. The termination of a shadow depends on the relative heights and orientations of both the shadowing and shadowed surface.

Deconflating these influences in a single image is not necessarily impossible; it depends on the additional information and assumptions one also brings to the task. For example, if it is known that the surface is that of a hemisphere, its position and radius are easily recovered, even without knowledge of the illuminant direction. Less restrictive assumptions, such as the surface having a band-limited Fourier spectrum (and therefore "smooth" in exactly this sense of smooth), may also admit to solutions, perhaps in a form analogous to the Logan theorem characterizing a signal by its zero-crossings (Logan, 1977). But still weaker assumptions, such as the surface simply being twice differentiable, probably do not lead to solutions at all. This is because smoothness as defined by differentiability is the assumption implicit in true shape from shading, and true shape from shading depends heavily on the amount of curvature in the reflectance map (Lee, 1985); the thresholded reflectance map has none.

4 Problem Formalization

The shape from darkness problem is more straightforward to solve by using multiple images. The observer and the objects can be held stationary, obviating any image-to-image correspondence problem, and what is moved is the light source, in a manner similar to photometric stereo.
Photometric stereo usually can be done with three illuminant positions, although four is the usual number used in practice in order to prevent exactly the problem discussed here: objects in self-shadow. It is apparent that even four shadow images is woefully inadequate for shape from darkness under reasonable surface assumptions. Thus, the problem is relaxed to allow a fixed number of illuminant positions, the exact count and location of which are to be determined. The added complexity of increased imagery is mitigated in part by its binary nature, and in part by the lack of any necessity to calibrate the shadow reflectance map, since the latter is determined solely by the illuminant direction. One only needs to define the locations of the shadow terminators.

From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved.

For simplicity in the discussion that follows, the problem is further reduced to its natural one-dimensional subproblem. That is, the algorithms presented here will discuss the recovery of a planar curve rather than a surface, given illuminants that lie in the plane of the curve. (The extension of the method to the full two-dimensional case, including a discussion of the degrees of freedom of illuminant placement, is sketched later.) Thus, we assume that depth is a function solely of x, z = f(x), rather than z = f(x,y), and that the illuminants lie wholly within the xz plane. Note that photometric stereo has a similar one-dimensional analogue, with one-dimensional reflectance maps that are functions of curve derivative rather than of surface gradient. In one-dimensional photometric stereo, three lights are necessary to prevent objects (here, curves) from self-shadowing.

5 The Continuous Problem

It is instructive to consider the shape from darkness problem as a continuous problem first.
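In this one-dimensional setting, the raw observable is just a binary illumination mask per sun position. A minimal sketch of how such a mask could be generated synthetically for a sampled height curve (the function and variable names are illustrative, not from the paper):

```python
import numpy as np

def shadow_image(f, s, dx=1.0):
    """Binary shadow image of a 1-D height curve f under an eastern sun of slope s.

    A sample is illuminated iff no point east of it rises above the ray of
    slope s through it, i.e. f[i] - s*x[i] >= max over j > i of (f[j] - s*x[j]).
    Returns 1 for illuminated samples and 0 for shadowed ones.
    """
    x = np.arange(len(f)) * dx
    g = f - s * x                                  # heights in the ray-aligned frame
    # running maximum of g strictly to the east of each sample
    east_max = np.concatenate([np.maximum.accumulate(g[::-1])[::-1][1:], [-np.inf]])
    return (g >= east_max).astype(int)

# A small two-peak curve: at dawn (slope 0) the valleys lie in cast shadow,
# while a very steep noon-like sun illuminates everything.
curve = np.array([0.0, 2.0, 0.0, 0.0, 1.0, 0.0])
print(shadow_image(curve, 0.0))    # [0 1 0 0 1 1]
print(shadow_image(curve, 100.0))  # [1 1 1 1 1 1]
```

The one-pass running maximum mirrors the observation above that shadow casting under parallel illumination is a cheap, extreme form of rendering.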
Assume that the illuminant is an infinitely distant point source, and that the observer is infinitely far in the positive z direction. (Thus, instead of investigating the surface properties of a fractal seen under perspective, we are now exploring the recovery of curve information from shadows generated under parallel illumination.) Given that the illuminant will appear in many orientations, it will be convenient to identify the illuminant with the sun, the positive direction of the x axis with the east, illumination at zero slope with dawn, illumination at positive slopes with morning, and illumination from the positive z axis with noon: often these terms are more immediate and compact. As shown in the figure, it is easy to show that under these conditions all curve points fall into one of three classes of dynamic shadow behavior under increasing morning illumination, with analogous classes in the afternoon. A point either can become illuminated because it gradually is moved out from self-shadowing, or it can be always illuminated, or it can become illuminated because it gradually moves out from a cast shadow. These definitions can be made precise at the given points: A minus point m has f'(m) >= 0 such that for all x, x > m implies f(x) <= f(m) + f'(m)(x - m). (Implicitly, f''(m) <= 0.) Intuitively, a minus point can only be in shadow when it is (or would be) self-shadowed. It becomes illuminated precisely at the time of day when the rising illuminant's slope is equal to f'(m). When m becomes illuminated, points to the immediate west of m remain in shadow; therefore, in the direction of illumination the transition at m is from illumination into darkness. This terminator travels west with increasing illumination, and crosses descending values of f. Note that the shadow is caused by light grazing the curve at m, and it is therefore diffuse, especially at low illuminant slopes.
Such a point is therefore called minus for five negatively flavored reasons (a sixth becomes apparent shortly): its second derivative is negative, its terminator goes from light to dark, the terminator travels west, the terminator descends, and the shadow is not sharp. A zero point z is such that for all x, x > z implies f(x) <= f(z). (Implicitly, f'(z) <= 0.) Intuitively, a zero point is never shadowed (in the morning), not even at dawn. It becomes illuminated when the rising illuminant has slope equal to zero. It never experiences a terminator: it is characterized by zero shadow and zero change. A plus point p is every other point. Negating and manipulating quantifiers yields: either f'(p) <= 0 and it is not a zero point, or f'(p) >= 0 and it is not a minus point. Intuitively, a plus point can only be shadowed due to cast shadows. It becomes illuminated when the rising illuminant grazes a minus point m (illuminant slope is f'(m)), such that f(m) = f(p) + f'(m)(m - p). When p becomes illuminated, points to the immediate east of p remain in shadow; therefore, in the direction of illumination the transition at p is from darkness into illumination (thus, plus). This terminator travels east (plus) with increasing illumination. Note that the shadow is caused by occlusion, and is therefore sharp (plus). (However, f'(p) is not necessarily positive, and the terminator does not necessarily cross ascending values of f.) The function f can therefore be partitioned into segments and the segments labeled by their shadow class. The grammar of segment labels is simple; in the morning it is given by the regular expression ((+-)*0)*. Such strings have three significant transitions. Plus to minus occurs at f' = 0 with f at a local maximum. Minus to zero occurs at f' = 0 with f at a local maximum.
Minus to plus occurs at curious "second grazing" points, those points m where f'(m) is equal to the illuminant slope, but where there is also a p > m with f'(p) also equal to the illuminant slope, and f(p) = f(m) + f'(m)(p - m). (The fourth transition, zero to plus, appears to have no special significance.)

5.1 The Continuous Suntrace

Quantitative reconstruction can be based on the integration of the derivative information intrinsic in the minus points. The reconstruction requires an additional representation of image information, called the suntrace, from which the requisite derivative information is obtained. The suntrace is a mapping from the domain of the original curve into (morning) illumination slopes. For each x, it records the slope at which the value f(x) first became illuminated. The suntrace is a function of x, since a given f(x) can become illuminated only once. Depending on the underlying curve, the suntrace may be unbounded: although the entire curve must be illuminated no later than noon, noon corresponds to an unbounded illumination slope. Since zero points are illuminated at dawn, they have suntrace values identically zero; see Figure 1. Minus points are likewise easy to detect and label: they are exactly those points (in the morning) with negative (minus) suntrace derivatives, since their terminators move west with increasing illuminant slope. What remains are the plus points; they have positive (plus) suntrace derivatives.

5.2 Solution Using ODEs

Given a morning suntrace, the underlying curve can be partially reconstructed. A contiguous curve segment with minus labels can be integrated into a function segment by using the suntrace value of each point as the value of f' at that point. The segment, however, must "float" at an unknown height until it is given an absolute height by the appropriate constant of integration.
By definition, the function values of all plus points can be determined relative to the positions of their corresponding minus points that shadow them. For a plus point p, the calculation is based on the relation f(m) = f(p) + f'(m)(m - p), where the corresponding minus point m is found in the suntrace as the least m greater than p that has the same illumination slope, f'(m), that p has. Entire contiguous segments of plus edges can therefore be fixed in space, and joined to their integrated minus segment. The now completed plus-minus complexes can themselves be joined one to another at their common "second grazing points" (that is, at minus-plus transitions). In this way, long, self-consistent segments of the curve result, but with each "floating" with respect to a constant of integration; see Figure 2. A fuller recovery can never be made since a simple morning suntrace provides no information about zero points. Their relative and actual depths can attain arbitrarily high values, and any self-consistent segments separated by zero points can freely float relative to each other, as long as the slopes of the intervening zero segments remain negative. Pinning down the constant of integration and restricting the behavior of zero points can be achieved by using a second suntrace, usually the afternoon suntrace, which maps illumination slopes from noon to dusk. It is apparent that the only point that can be labeled a zero point for both suntraces is the global maximum. All other points are shadowed at least once and can therefore be assigned a function value relative to some constant. What results, within the accuracy of the suntrace and the integration, is a reconstruction of the underlying curve with depth values relative to a single constant of integration: the global maximum.
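A minimal sketch of a discrete morning suntrace, recording for each shadowed sample the slope at which it first becomes lit together with its grazing shadower, and of the backprojection relation f(m) = f(p) + f'(m)(m - p) used to fix plus points. The names and the quadratic-time scan are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def suntrace_with_shadowers(f, dx=1.0):
    """Morning suntrace of heights f (eastern sun), plus each sample's shadower.

    Sample i first becomes lit at slope max(0, max over j > i of the eastern
    difference quotient (f[j]-f[i])/((j-i)*dx)); zero points get slope 0 and
    shadower -1.  The grazing shadower is the least j attaining the maximum.
    """
    n = len(f)
    trace, shadower = np.zeros(n), np.full(n, -1)
    for i in range(n - 1):
        j = np.arange(i + 1, n)
        q = (f[j] - f[i]) / ((j - i) * dx)
        if q.max() > 0:
            trace[i], shadower[i] = q.max(), j[np.argmax(q)]
    return trace, shadower

def fix_plus_height(heights, p, trace, shadower, dx=1.0):
    """Height of a plus point p from its minus shadower m = shadower[p]:
    f(p) = f(m) - trace[p]*(m - p), backprojecting along the grazing ray."""
    m = shadower[p]
    return heights[m] - trace[p] * (m - p) * dx

curve = np.array([0.0, 2.0, 0.0, 0.0, 1.0, 0.0])
trace, shad = suntrace_with_shadowers(curve)
# Sample 2 is a cast-shadow (plus) point grazed by the minus point at 4;
# backprojection recovers its height exactly when the shadower's height is known.
assert shad[2] == 4
assert abs(fix_plus_height(curve, 2, trace, shad) - curve[2]) < 1e-12
```

The global maximum (sample 1 here) is the only point with suntrace value zero, matching its role above as the single absolute anchor.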
The shape from darkness method begins by collecting from the discrete suntrace, for every element x in the domain of the curve, information about such shadowers. The last shadower of f(x) is found in the morning in the following way. If f(x) first became illuminated at time t+1, the last shadower of f(x) was the nearest eastern illuminated neighbor of f(x) at time t. The failing shadower of f(x) is the nearest eastern illuminated neighbor at time t+1. Fortunately, such information can be collected in one pass through the suntrace. Assuming both a morning and an afternoon suntrace, each element x of the domain will gather eight pieces of information: for each of the four morning or afternoon last or failing shadowers, it stores their position and the time of their shadowing (t or t+1).

6.2 The Eight Constraints per Point

Given this information, each point in the domain affects and is affected by these four critical shadowers. Each point therefore participates in eight constraints, four to do the affecting, and four to be affected by. Given that the morning and afternoon suntraces are completely symmetrical, there are only four basic conceptual relations: forward or backward constraints on upper or lower bounds. The forward constraints propagate constraint information in the direction of the illuminant; the backward constraints propagate it against the illuminant. The forward constraints are based on the following observations. At point x, x's upper bound can be no higher than the projected shadow of the upper bound of its last shadower. (If x's upper bound were any higher, x would not be shadowed at time t.) Similarly, at point x, x's lower bound can be no lower than the projected shadow of the lower bound of its failing shadower. (If x's lower bound were any lower, x would instead be shadowed at time t+1.) In the morning, the forward constraint equations are therefore:

UPPER    u(x) <= u(ls(x)) - (ls(x) - x) * sls(x)
LOWER    l(x) >= l(fs(x)) - (fs(x) - x) * sfs(x)

where u(.) and l(.) represent the upper and lower limits in effect at any time, ls(.) and fs(.) are the coordinates of the last shadower and failing shadower, and sls(.) and sfs(.) are the illumination slopes at the times of last shadow and failing shadow. The backward constraints are a bit trickier, but it is their feedback that seems to account for the method's power. Consider the upper bound at x. Since the failing shadower must fail to shadow x, the upper bound of the failing shadower is limited by the height at which it just barely fails to shadow x: the maximum allowable height for the failing shadower occurs when x itself is at its maximum. (If the failing shadower's upper bound were higher, it would instead shadow x.) This height can be determined by backprojecting the upper bound of x along the slope in effect at the failing shadow time, t+1. Similarly, consider the lower bound at x. Since the last shadower must successfully shadow x, the lower bound of the last shadower is limited by the depth at which it just barely succeeds in shadowing x; the minimum allowable depth for the last shadower occurs when x itself is at its minimum. (If the last shadower's lower bound were smaller, it would instead fail to shadow x.) This height can be determined by backprojecting the lower bound of x along the slope in effect at the last shadow time, t. See Figure 4. In the morning, the backward constraint equations are therefore:

UPPER    u(fs(x)) <= u(x) + (fs(x) - x) * sfs(x)
LOWER    l(ls(x)) >= l(x) + (ls(x) - x) * sls(x)

Four similar constraints apply to the information gathered for x from the afternoon suntrace. It is surprising that these appear to be all the constraints possible (aside from the trivial constraint that u(x) > l(x)).
Other relationships between the upper and lower bounds of x, the upper and lower bounds of its last shadower, and the upper and lower bounds of its failing shadower do not appear to be constraining. For example, if x's upper bound decreases, it has no effect on the upper bound of its last shadower.

6.3 Solution Using Relaxation

The specific family of constraints that result from a given suntrace have a complex interrelated structure. It is not apparent whether there is any special solution method applicable to this problem in general, or even for well-defined subclasses of curves. There are 8n inequalities in 2n unknowns, and there is a well-defined objective function to minimize: the sum, over all x, of u(x) - l(x). Although the problem might be solved using linear programming, a more attractive solution method is the use of a version of relaxation. Conceptually this consists of a number of successive iterations, in each of which the eight constraint equations are successively applied to each point x in the domain. If the application of any constraint results in better estimates for u(x) or l(x), they are updated. As in the continuous case, the only valid initial values are those of the global maximum (the only point labelled zero in both suntraces); its upper and lower limits are set arbitrarily to a pleasant value (say, zero) before the relaxation begins. In practice, convergence seems very rapid. Unlike some relaxation algorithms, updating is based on thresholds, so upper and lower bounds are only altered if they are moved closer together. The method is therefore more likely to terminate when it recognizes a lack of measurable progress.

7 Experimental Results

In the experiments that follow, some of the generalities of the algorithm were made particular. For ease of comparing the final reconstructed curve to the original, the global maximum of the reconstruction was initialized to its true known height.
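The relaxation sweep just described can be sketched as follows; the array layout (per-point shadower indices and slopes) and the tiny hand-built two-point example are illustrative assumptions, and only the four morning constraints are shown:

```python
import math

def relax(u, l, ls, sls, fs, sfs, iters=10):
    """Tighten upper bounds u and lower bounds l by the four morning constraints.

    ls[x]/fs[x]: index of x's last/failing shadower (-1 if none);
    sls[x]/sfs[x]: illuminant slopes at the last-shadow time t and the
    failing time t+1.  Bounds are updated only when a constraint improves them.
    """
    for _ in range(iters):
        for x in range(len(u)):
            m, s = ls[x], sls[x]
            if m >= 0:
                u[x] = min(u[x], u[m] - (m - x) * s)   # forward upper
                l[m] = max(l[m], l[x] + (m - x) * s)   # backward lower
            m, s = fs[x], sfs[x]
            if m >= 0:
                l[x] = max(l[x], l[m] - (m - x) * s)   # forward lower
                u[m] = min(u[m], u[x] + (m - x) * s)   # backward upper
    return u, l

# Point 0 is last shadowed by point 1 (height pinned at 2) at slope 1 and
# first lit at slope 2, so its height is bracketed between 0 and 1.
u, l = relax([math.inf, 2.0], [-math.inf, 2.0],
             [1, -1], [1.0, 0.0], [1, -1], [2.0, 0.0])
assert u == [1.0, 2.0] and l == [0.0, 2.0]
```

Only the pinned point starts with finite bounds; all other bounds begin at plus or minus infinity, as in the initialization described above.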
Sun positions were simulated at constant slope increment; thus, sun angles in the morning linearly increase in tangent. (Under this scenario, the sun literally rises, rather than travels an arc!) This policy of constant increment seems to be closely related to the encouraging accuracy obtained in the final processing step, where the final estimate of the curve is defined to be the curve midway between the computed upper and lower bounds. Each series of test images shows the following. The first figure of a series is the original curve, with its morning and evening suntraces. The domain of the original curve is aligned with the domain of the suntraces. Both suntraces have the axes for increasing sun slope pointing toward the curve. Thus, on all suntraces, the line nearest the curve is pure black, indicating all pixels have been illuminated. The second figure of a series is a record of the constraint processing. Initial estimates for upper and lower bounds as propagated from the global maximum have gradually approached each other, subject to the suntrace data. The third figure of a series shows the final upper and lower bounds, the original curve, and the superimposed best estimate. The first series is an image of a self-similar mountain. It is approximately 300 points wide by 85 points peak-to-peak. The suntrace was taken at increments of .1, that is, at approximately four degrees, to a maximum of 30 increments. The final estimate has a cumulative total error of less than 68 (about .2 error per pixel, on average), and a maximum single point error of less than 1.2. The second series is the same image, but with a suntrace increment of 1: that is, the first non-dawn suntrace is taken at 45 degrees, and only four increments are possible. Although not a realistic test, it demonstrates more visibly the method and its results, especially the goodness of the final estimate even under extremely severe conditions.
The third series demonstrates the applicability of the processing to very smooth imagery: a semicircle of radius 50, again under 30 increments of .1 each. Maximum error occurs at the extreme left and right of the "table", although reconstruction error within the circle is no more than 0.5.

8 Discussion

It appears that the accuracy of the final estimate is surprisingly good, and may be related to the use of a constant illumination slope increment. Choosing the midway curve is guaranteed to minimize worst case error, since the midpoint can never be off by more than half the available range.

8.1 Performance

Aside from the empirical data given above, little is known about the theoretic performance of the algorithms except in two worst cases. In terms of accuracy, the worst case image occurs in a monotonically decreasing function with positive curvature (as in z = 1/(x+c)). Here, points at the extreme asymptotic end have little opportunity for feedback, so the range between upper and lower bounds is virtually the same as the initial forward constraints, length*(slope(t+1) - slope(t)); if slopes increase in constant increments, this is simply length*increment. In terms of convergence, it appears that certain square wave trains take n iterations, where n is the number of pulses in the train. Shape from darkness has several advantages, most notably that it can exploit the surface information implicit in a class of dynamic shadows, with very few restrictions placed on the class of surfaces being shadowed: they need not be smooth. In particular, it can probably be useful in increasing the accuracy with which finely textured surfaces are viewed, especially under oblique illumination. It can also exploit smart cameras that run-length encode the incoming binary shadow imagery, but the exact information content of a shadow image, especially with respect to the information content in a gray scale image, remains to be explored.
8.2 Practicality

The utility of the method depends upon the extent to which shadowed imagery can be accurately obtained. This does not necessarily imply a completely controllable artificial light source: natural sources such as the sun can be used if there is concurrent accurate slope (or time of day) information. In effect, the method establishes an upper bound on the error of reconstruction for any series of shadowed imagery, artificial or otherwise. Like other shape from x methods, it is best seen as one of many possible sources of surface information.

8.3 Relation to Human Perception

The complexity of the data interaction does suggest why humans do not appear to derive much surface information from dynamic shadows. The necessity to store, in effect, an entire suntrace is probably excessive. On the other hand, if our earth rotated much faster (say, once every three seconds), there might have been more reason for natural systems to develop at least an approximate solution to the shape from darkness problem.

8.4 Extensions

The method admits of many extensions. The application of the method to real imagery must address the difficulties of specularity, mutual illumination, and diffuse shadows. However, in a robot environment, much of the environment can be structured to make the problem easier. For example, having the knowledge that the object is on a fixed table at a given depth can aid in the setting of lower bounds. The extension to two-dimensional surfaces is probably the most critical. The problem can probably be decomposed for parallel processing in ways beyond the trivial one of partitioning the images into strips parallel to the illuminant direction; it may even be done in a hierarchical way. Selecting optimal sun positions with two degrees of freedom is challenging, but may reduce to two simple perpendicular transits.
The problem is especially acute if sun and observers are allowed to be near, and observers are allowed to view off the normal axis; this is once again the original problem of fractals under perspective.

9 Acknowledgements

Paul Douglas provided an early version of the code for the generation of suntrace information, and coined the name of the representation.

References

[1] Lee, D. A Provably Convergent Algorithm for Shape from Shading. In Proceedings of the DARPA Image Understanding Workshop, pages 489-496, December 1985.
[2] Logan, B.F. Information in the Zero-Crossings of Bandpass Signals. Bell System Technical Journal 56:487-510, 1977.
[3] Pentland, A.P. Fractal-Based Description of Natural Scenes. IEEE Transactions on Pattern Analysis and Machine Intelligence PAMI-6(6):661-674, November 1984.
[4] Shafer, S.A. Shadows and Silhouettes in Computer Vision. Kluwer Academic Publishers, Hingham, MA, 1985.
[5] Woodham, R.J. Analysing Images of Curved Surfaces. Artificial Intelligence 17(1-3):117-140, August 1981.

[Figure 1: Suntrace]
[Figure 2: Curve segments solved by ODE or by reference to a minus edge; unresolved segments "float"]
[Figure 3]
[Figure 4: Backprojection geometry relating x, fs(x), and ls(x)]
[First series: original curve and suntrace; constraint propagation; final bounds and estimate]
[Second series: original curve and suntrace; constraint propagation; final bounds and estimate]
[Third series: original curve and suntrace; constraint propagation; final bounds and estimate]
3-D MOTION RECOVERY FROM TIME-VARYING OPTICAL FLOWS

Kwangyoen Wohn and Jian Wu
Division of Applied Sciences
Harvard University
Cambridge, MA 02138

Abstract

Previous research on analyzing time-varying image sequences has concentrated on finding the necessary (and sufficient) conditions for a unique 3-D solution. While such an approach provides useful theoretical insight, the resulting algorithms turn out to be too sensitive to be of practical use. We claim that any robust algorithm must improve the 3-D solution adaptively over time. As the first step toward such a paradigm, in this paper we present an algorithm for 3-D motion computation, given time-varying optical flow fields. The surface of the object in the scene is assumed to be locally planar. It is also assumed that 3-D velocity vectors are piecewise constant over three consecutive frames (or 2 snapshots of the flow field). Our formulation relates 3-D motion and object geometry with the optical flow vector as well as its spatial and temporal derivatives. The deformation parameters of the first kind, or equivalently, the first-order flow approximation (in space and time), is sufficient to recover rigid body motion and local surface structure from the local instantaneous flow field. We also demonstrate, through a sensitivity analysis carried out for synthetic and natural motions in space, that 3-D inference can be made reliably.

1. INTRODUCTION

When an object moves relative to a viewer, the projected image of the object also moves in the image plane. By analyzing this evolving image sequence, one hopes to extract the instantaneous 3-D motion and surface structure of the object. The path from time-varying imagery to its corresponding 3-D description may be divided into two relatively independent steps: (1) computation of 2-D image motion from the image sequence, and (2) computation of 3-D motion and structure of objects from 2-D motion. This paper deals with the latter issue.
This work was supported by the Office of Naval Research under Contract N00014-84-K-0504 and by the Joint Service Electronics Program under Contract N00014-84-K-0465.

The relations between 2-D motion and the 3-D environment are formulated in terms of non-linear equations. This non-linearity prevents us from solving them in a trivial way, which makes the problem mathematically interesting. The schemes used to interpret 2-D motion information can be classified into two categories depending on the kind of 2-D motion representation utilized. One may use the motion of distinct, well-isolated feature points [6]. The other approach uses the continuous flow field within a small region [4]. While either method has its own merits and drawbacks, the second approach leads to stable solutions provided that the partial derivatives of the flow field are available [8]. Waxman and Wohn [9] developed methods of extracting the partial derivatives of the flow field directly from evolving contours over time. They have demonstrated that the combined algorithms of 2-D flow computation and 3-D structure and motion computation are quite stable with respect to input noise and changes in surface structure. Although the approach of Waxman et al. behaves much better than its predecessors, it is still questionable whether all the partial derivatives of flow up to second order can be recovered reliably enough to produce a meaningful 3-D solution. While no rigorous analysis of the behavior of this algorithm has so far been conducted, it has turned out that, in general, the second-order derivatives determine the accuracy of the 3-D solution, and they are not reliable as the field of view decreases under 20°. Our approach is based on the derivatives of flow up to the first order.
In our recent experiments conducted on various natural images, we found that the first-order derivatives can be recovered with greater accuracy than those of second order. However, the first-order derivatives alone do not provide a sufficient condition for solving 3-D motion. We obtain additional constraints by introducing the temporal derivatives of optical flow. The new representational scheme of 2-D motion then consists of four first-order spatial derivatives and two first-order temporal derivatives of the local flow field. We call these partial derivatives "deformation parameters of the first kind". The idea of utilizing multiple frames has been proposed by several researchers for a restricted class of rigid-body motion [3], for semi-rigid motion under orthographic projection [7], and for determining the focus of expansion (or contraction) [1]. In Section 2, we begin our discussion of the 3-D motion recovery process, adopting the new flow representation scheme. The formulation given here relates this new local representation of an optical flow to object motion and structure in terms of eight non-linear algebraic equations. This formulation requires that the object surface be approximated as locally planar and that 3-D velocities do not change over a short period of time. Our method reveals certain families of degenerate cases for which the temporal derivatives of flow do not provide sufficient independent information. In certain cases, the temporal change of the first-order derivatives may be used. In some other cases, multiple solutions may result due to the non-linearity. The complete solution tree is also presented. We conduct a stability analysis of the algorithm in Section 3. Some results of experiments conducted on synthetic data as well as real time-varying images are presented. Concluding remarks and our future direction follow in Section 4.

2.
SPATIO-TEMPORAL IMAGE DEFORMATION

Adopting a 3-D coordinate system (X, Y, Z) as in [4] (see Figure 1), the relative motion is represented in terms of viewer motion: the translational velocity V = (V_X, V_Y, V_Z), and the rotational velocity Ω = (Ω_X, Ω_Y, Ω_Z). The origin of the image coordinate system (x, y) is located at (X, Y, Z) = (0, 0, 1). As a point P in space (located by position vector R) moves with a relative velocity U = -(V + Ω × R), the corresponding image point p moves with a velocity:

v_x = (-V_X + x V_Z)/Z + x y Ω_X - (1 + x^2) Ω_Y + y Ω_Z
v_y = (-V_Y + y V_Z)/Z + (1 + y^2) Ω_X - x y Ω_Y - x Ω_Z        (2.1)

These equations define an instantaneous image flow field, assigning a unique 2-D image velocity v to each direction (x, y) in the observer's field of view.

2.1. Spatial Coherence of Optical Flow

Equations (2.1) constitute two independent relations among seven unknowns. Various techniques for recovering the 3-D parameters differ by the way the additional constraints are provided. We shall consider only a single, smooth surface patch of some object in the field of view, such that the surface is differentiable with respect to the image coordinates. Let us further assume that such a surface patch can be locally approximated by a planar surface in space as:

1/Z = (1 + p x + q y)/Z_0        (2.2)

Substituting the above into the flow relation (2.1), we get expressions in the form of a second-order flow field with respect to the image coordinates (x, y). On the other hand, non-planar surfaces generate flows which are not simple polynomials in the image coordinates. Following [8], we form the partial derivatives of flow in equations (2.1) with respect to the image coordinates and evaluate them at the image origin; we get the following four independent relations:

v_x,x = ∂v_x/∂x |_0 = V'_Z + V'_X p        (2.3a)
v_x,y = ∂v_x/∂y |_0 = V'_X q + Ω_Z        (2.3b)
v_y,x = ∂v_y/∂x |_0 = V'_Y p - Ω_Z        (2.3c)
v_y,y = ∂v_y/∂y |_0 = V'_Z + V'_Y q        (2.3d)

where we have introduced the normalized motion parameters

V'_X = -V_X/Z_0,   V'_Y = -V_Y/Z_0,   V'_Z = V_Z/Z_0.        (2.4a,b,c)

The quantities on the left-hand side represent the relative motion (or geometrical deformation) in an infinitesimal neighborhood. The above process adds 4 relations while replacing the unknown Z with p and q. Now we have 6 relations (equations (2.1) evaluated at the origin and (2.3)) and 8 unknowns.

2.2. Temporal Coherence of Optical Flow

Unlike [4] and [8], in which the additional constraints are obtained by introducing higher-order derivatives of flow, we observe that the flow field itself changes smoothly over time unless the following situation(s) occur: 1) abrupt change of 3-D motion in time, and/or 2) abrupt change of object distance in time. For the latter case, it can be easily shown that the temporal change of object distance is given by:

(1/Z) dZ/dt = -V'_Z + p (V'_X - Ω_Y) + q (V'_Y + Ω_X)        (2.5)

and the previous assumption of surface planarity eliminates this possibility. This observation leads us to investigate the way the flow field changes over time. Now, let us assume that the 3-D motion parameters do not change at all during an infinitesimal time period. Differentiating (2.1) with respect to time, and utilizing (2.5):
(2.4a,b,c) The quantities on the left-hand side represent the relative motion (or geometrical deformation) in an infinitesmal neighborhood. The above process adds 4 relations while replacing the unknown Z with p and q. Now we have 6 relations (equations (2.1) evaluated at the origin and (2.3)) and 8 unknowns. 2.2. Temporal Coherence of Optical Flow Unlike [4] and [8] in which the additional con- straints are obtained by introducing higher-order deriva- tives of flow, we observe that the flow field itself changes smoothly over time unless the following situation(s) occur: 1) abrupt change of 3-D motion in time, and/or 2) abrupt change of object distance in time. For the latter case, it can be easily shown that the tem- poral change of object distance is given by: 1 dZ i b - =- -gy dt v; + p (If,{ + sz,) + 9 (Vb - W(2.5) and the previous assumption on surface planarity elim- inates this possibility. This observation leads us to investigate the way flow field changes over time. Now, let us assume that 3-D motion parameters do not change at all during an infinitesimal time period. Differentiating (2.1) with respect to time, and utilizing PERCEPTION AND ROBOTICS / 6’1 (2.5): (2.6a) (2.6b) where S = - Vi -P vu, -9 vy. S describes the rate of depth change in time, normalized by Z, the absolute dis- tance. The above quantities represent the flow changes in time, at the center of the image coordinates. Notice that we have obtained two independent relations without introducing any additional unknown. There is a class of motion for which equations (2.6) become redundant and do not provide any new con- straint. In such a case, we observe that the temporal change of spatial deformation may also provide indepen- dent relations: \ z - Vf: f12,, (2.7b) V;q f-& - V; f-ly, (2.7~) = -v; s + v;q y + v;p St, + v; clt, where y = v;- pv;I - qv;. (2.74 2.3. 
Recovery of 3-D Motion

Once the deformation parameters are obtained, we can proceed to recover the 3-D motion and structure of a surface patch. The eight equations to be solved ((2.1), (2.3) and (2.6)) involve eight unknowns: V'_X, V'_Y, V'_Z, Ω_X, Ω_Y, Ω_Z, p and q. We first observe that V'_X and V'_Y are coupled with p and q, as seen in equations (2.3). This suggests a two-step method: (1) solve for the products as a whole, and (2) separate them into individual parameters. The detailed derivation may be found in [11]. When V'_Z = 0, 3-D motion is restricted to motion parallel to the image plane. Substituting V'_Z = 0 into the original equations (2.3), one can see that the coupled terms V'_X p, V'_X q, V'_Y p, V'_Y q cannot be separated. We then utilize the temporal change of the spatial deformation parameters, v_x,xt, v_x,yt, v_y,xt and v_y,yt, as depicted in equations (2.7). They provide sufficient constraints for solving for the 3-D parameters. In summary, one can determine 3-D motion uniquely except in the following cases: (i) Stationary flow: the flow field does not change over time. Our method fails to recover 3-D motion. (ii) V'_Z = 0: motion is parallel to the image plane. A dual solution may exist. The complete solution tree is presented in Figure 3.

3. SENSITIVITY ANALYSIS AND EXPERIMENT

Unlike other existing algorithms such as [5,8], our method solves for 3-D motion without involving any searching or iterative improvement. Given perfect deformation parameters (or flow field), the algorithm recovers the exact 3-D solution. In practice, however, there are various factors which reduce the accuracy of deformation parameters. Besides the sources of error common to any early vision processing, such as the error due to digitization, camera distortion, noise, truncation error due to discrete arithmetic, etc., there are several other factors which affect the accuracy of the calculated values of the deformation parameters.
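The behavior of these quantities can be probed numerically. The sketch below builds the standard perspective flow field of a planar patch (the form that equations (2.1)-(2.5) instantiate; the sign conventions and all numerical values here are illustrative assumptions) and checks the spatial deformation parameters and the normalized depth-change rate against finite differences:

```python
import numpy as np

# Illustrative rigid motion and planar patch (assumed conventions:
# U = -(V + Omega x R), image plane at Z = 1, plane Z + p*X + q*Y = Z0).
VX, VY, VZ = 0.3, -0.2, 0.5
OX, OY, OZ = 0.1, -0.05, 0.2
Z0, p, q = 2.0, 0.4, -0.3

def flow(x, y):
    """Perspective image flow of the planar patch at image point (x, y)."""
    invZ = (1 + p * x + q * y) / Z0
    vx = (-VX + x * VZ) * invZ + x * y * OX - (1 + x * x) * OY + y * OZ
    vy = (-VY + y * VZ) * invZ + (1 + y * y) * OX - x * y * OY - x * OZ
    return vx, vy

VXn, VYn, VZn = -VX / Z0, -VY / Z0, VZ / Z0    # normalized motion parameters

# Central differences of the flow at the image origin ...
h = 1e-6
vxx = (flow(h, 0)[0] - flow(-h, 0)[0]) / (2 * h)
vxy = (flow(0, h)[0] - flow(0, -h)[0]) / (2 * h)
vyx = (flow(h, 0)[1] - flow(-h, 0)[1]) / (2 * h)
vyy = (flow(0, h)[1] - flow(0, -h)[1]) / (2 * h)
# ... match the four closed-form spatial deformation parameters.
assert np.allclose([vxx, vxy, vyx, vyy],
                   [VZn + VXn * p, VXn * q + OZ, VYn * p - OZ, VZn + VYn * q],
                   atol=1e-6)

# Depth-change rate: move plane points rigidly for a short dt, refit the
# plane, and compare the normalized change of Z0 with S = -V'_Z + p*vx + q*vy.
dt = 1e-6
V, Om = np.array([VX, VY, VZ]), np.array([OX, OY, OZ])
XY = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
R = np.c_[XY, Z0 - XY @ np.array([p, q])]          # points on the plane
Rn = R + dt * (-(V + np.cross(Om, R)))             # one small rigid step
A = np.c_[Rn[:, 0], Rn[:, 1], np.ones(4)]          # refit p', q', Z0'
Z0_new = np.linalg.lstsq(A, Rn[:, 2], rcond=None)[0][2]
vx0, vy0 = flow(0, 0)
S = -VZn + p * vx0 + q * vy0
assert abs((Z0_new - Z0) / (dt * Z0) - S) < 1e-4
```

Such a consistency check only exercises the forward model; it says nothing about noise sensitivity, which is the subject of the discussion that follows.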
In principle, the deformation parameters may be obtained first by recovering optical flow and then by taking the partial derivatives of the optical flow. But since the differentiation process will amplify noise, we are unlikely to recover these partial derivatives reliably.

3.1. Recovery of Optical Flows

In our experiment conducted on natural time-varying images, contours are used as the primary source of information. The following factors concerning the various stages of the flow computation must be considered and their effects analyzed.

Imperfect contour extraction and false matching: Since we utilize the contours to sample the image deformation experienced by a neighborhood in the image, for all points on a contour in the first image there must be corresponding points on the matching contour in the second image, and vice versa. Hence contours such as those which correspond to extremal boundaries or shadow boundaries must be excluded. However, there is no reliable method for classifying edges into meaningful categories based on local intensity measurements. In fact, the analytic boundaries of the flow field suggest a way of classifying edges [2]. The performance of edge operators severely affects the subsequent computation. In our current implementation, contours are obtained from the zero-crossings of ∇²G∗I. Although zero-crossings possess many attractive features, they may not correspond to actual meaningful edges.

Imperfect normal flow estimation: Having determined a pair of matching contours, one can measure only the normal flow around the contour, whereas the motion along the contour is invisible. We make use of geometrical displacements normal to contours, as shown in Figure 2. Since the points along the contour generally have some component of motion tangential to the contour as well, the normal flow measured in this way is not exactly equal to the true normal flow.
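A small sketch (ours, not code from the paper) makes the ambiguity above concrete: projecting the full image flow onto the contour normal discards whatever component of the motion lies along the contour.

```python
import numpy as np

def normal_flow(full_flow, tangent):
    """Observable normal flow at a contour point: the projection of the
    full image flow onto the unit normal of the contour.  The tangential
    component of the motion is invisible to a contour-based measurement."""
    t = np.asarray(tangent, dtype=float)
    t = t / np.linalg.norm(t)
    n = np.array([-t[1], t[0]])            # unit normal to the contour
    return np.dot(full_flow, n) * n

# A horizontal contour moving with full flow (3, 1): only the vertical
# (normal) part of the motion is measurable.
v = np.array([3.0, 1.0])
t = np.array([1.0, 0.0])
print(normal_flow(v, t))
```

This is why the measured geometrical displacement normal to the contour generally differs from the true normal component of the full flow.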
In most cases, the effect of this tangential component on the resulting full flow is not negligible and the 3-D solution becomes of no use at all. In this regard, we developed an algorithm which iteratively improves the normal flow estimates [12].

Inaccurate flow model: Given the normal flows, the deformation parameters (or, equivalently, the optical flow) can be recovered by the Velocity Functional Method proposed by [9]. It considers the second-order flow approximation as the starting point of flow recovery, then computes the best-fitting second-order flow from the local measure of normal flow. Although the second-order terms will not be used at the later stage of the 3-D motion computation, they are included here in order to "absorb" noise, and thereby to obtain more accurate spatial deformation parameters. Currently, temporal deformation parameters are obtained by subtracting the spatial deformation parameters over two consecutive image frames. Alternatively, as a better approach, the velocity can be approximated as a truncated Taylor series in the spatio-temporal coordinates (x, y, t).

In general, such an approximation is not exact. It involves a truncation error which is characterized by many quantities. However, as we mentioned in Section 2, the second-order approximation is exact for planar surfaces. For curved surfaces, it has been shown that the exact error formula is determined mainly by the surface curvature and the size of the neighborhood [10]. By keeping the neighborhood size small enough, one can still rely on the second-order model.

Non-uniform 3-D motion: In the mathematical sense, non-uniformity of motion is not a problem, since all we need is the change of flow over an infinitesimal time period. In practice, we have to subtract (or differentiate) two flow fields, which means at least three snapshots of images are needed. The motion is assumed to be constant during this time interval. The effect of 3-D motion change, i.e.
acceleration, on the first-order image deformation can be shown to be proportional to the amount of acceleration.

3.2. Experiment

In the first experiment, we test the sensitivity of the proposed algorithm using synthetically generated data. Typical sensitivity is well illustrated by the following case. A planar surface in space is described by Z = Z₀ + pX + qY with Z₀ = 10 units, and p and q corresponding to slopes of 30° and 45°, respectively. The observer moves with translational velocity V = (5, 4, 3) units/frame and rotational velocity Ω = (20, -10, 30) degrees/frame. On the image plane we specify two algebraic curves along which the normal flow can be computed exactly (see Figure 4a). We then perturb the magnitude and direction of the normal flow vectors randomly (from a uniform distribution), bounded by a specified percentage of the exact normal flow (Figure 4b). The viewing angle is fixed at 20°. The Velocity Functional Method is then applied to the input normal flows (Figure 4c). We consider three measures of sensitivity which characterize the relative error in surface orientation e_S, in translation e_T, and in rotation e_R. One can see from Table 1 that the sensitivity of the 3-D structure and motion predictions is fairly linear in the normal-flow noise. We have found that perturbations to the normal flow of about 20% can be tolerated down to fields of view of about 10°.

In the next example, the normal flow vectors are measured from synthetic images. The same contours defined above undergo the following motion: translational velocity V' = (0.06, 0.03, -0.04) and rotational velocity Ω = 0. The slope of the surface is measured roughly as p = 30° and q = 45°. Three frames of digital images (of size 256 by 256) were generated from the graphical simulation program (Figure 5a). Normal flows along the contours are measured from pairs of consecutive frames (Figure 5b).
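Recovering a full flow field from scattered normal-flow measurements is, at heart, a linear least-squares fit of a flow model. The sketch below fits a first-order (affine) model for brevity; the Velocity Functional Method used in the paper fits a second-order model, so this is an illustration of the idea, not the authors' algorithm.

```python
import numpy as np

def fit_affine_flow(points, normals, normal_flows):
    """Least-squares fit of an affine (first-order) flow model
        u(x, y) = u0 + ux*x + uy*y,   v(x, y) = v0 + vx*x + vy*y
    to normal-flow magnitudes measured along unit normals at contour
    points.  Returns (u0, ux, uy, v0, vx, vy); the last four entries of
    each pair are the spatial deformation parameters of the fitted field."""
    A = np.array([[nx, nx * x, nx * y, ny, ny * x, ny * y]
                  for (x, y), (nx, ny) in zip(points, normals)])
    b = np.asarray(normal_flows, dtype=float)
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params

# Synthetic check: sample a known affine flow along random unit normals
# and recover its six parameters exactly from noise-free data.
rng = np.random.default_rng(0)
true = np.array([0.5, 0.1, -0.2, 1.0, 0.3, 0.05])
pts = rng.uniform(-1, 1, size=(50, 2))
ang = rng.uniform(0, 2 * np.pi, size=50)
nrm = np.stack([np.cos(ang), np.sin(ang)], axis=1)
u = true[0] + true[1] * pts[:, 0] + true[2] * pts[:, 1]
v = true[3] + true[4] * pts[:, 0] + true[5] * pts[:, 1]
nf = u * nrm[:, 0] + v * nrm[:, 1]
est = fit_affine_flow(pts, nrm, nf)
```

With noisy normal flow, as in the experiment above, the same fit returns the least-squares estimate rather than the exact parameters.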
The iterative process in [12] was used to recover the full flows (Figure 5c). The 3-D parameters computed from this full flow are: V' = (0.059, 0.028, -0.038) units/frame, Ω = (-0.0021, 0.0006, -0.0007) degrees/frame, p = 31.87° and q = 46.26°.

As the last example, Figure 6a shows one of three consecutive images obtained from the natural environment. A CCD camera with known viewing angle and focal length was attached to a robot arm so that the camera could follow a predetermined trajectory. The images are 385 by 254 in size, with 6 bits of resolution. The motion between the successive frames is given as V' = (0.6, -0.25, 0.4) and Ω = 0. The orientation of the table was given as p = -10° and q = 55°. A pyramid-based flow recovery scheme being developed is applied to the input images. Figure 6b shows the flow field obtained from the first two frames, assuming that the entire image consists of a single planar surface. The 3-D parameters computed from this flow field are: V' = (0.45, 0.18, 0.57) units/frame, Ω = (0.028, -0.037, 0.021) degrees/frame, p = -25.11° and q = 56.29°.

4. CONCLUDING REMARKS

This work is part of the hand-eye coordination project at the Robotics Laboratory, Harvard University. In order to provide 3-D geometrical information about the scene from visual data, binocular/time-varying images will be used as the main visual source.

We presented an algorithm which recovers 3-D structure and motion from the first-order deformation parameters. In most cases, such deformation parameters can be recovered quite reliably. We carried out a sensitivity analysis for synthetic and natural motions in space, to demonstrate that 3-D information can also be recovered reliably. Although many independent factors affect the accuracy of the 3-D solution, we have found, through various experiments, that the temporal deformation given by v_{x,t} and v_{y,t} plays the major role in the sensitivity. More work should be done on the flow recovery scheme.
We are currently investigating a multi-resolution approach which integrates several different methods of flow computation. The 3-D solution can also be improved further by exploiting the idea of 3-D motion coherence at the level of the 3-D motion computation. We are developing a predictive filtering scheme based on a dynamic model.

REFERENCES

[1] A. Bandyopadhyay and J. Aloimonos, "Perception of Rigid Motion from Spatio-Temporal Derivatives of Optical Flow", Computer Science TR-157, Univ. Rochester, March 1985.

[2] W. F. Clocksin, "Computer Prediction of Visual Thresholds for Surface Slant and Edge Detection from Optical Flow Fields", Ph.D. Dissertation, Univ. Edinburgh, 1980.

[3] D. D. Hoffman, "Inferring Local Surface Orientation from Motion Fields", Journal Optical Soc. Am. 72, 888-892, 1982.

[4] H. Longuet-Higgins and K. Prazdny, "The Interpretation of a Moving Retinal Image", Proc. Royal Soc. London B 208, 385-397, 1980.

[5] A. Mitiche, "On Kineopsis and Computation of Structure and Motion", IEEE Trans. Pattern Anal. Mach. Intell. 6, 109-112, 1986.

[6] R. Y. Tsai and T. S. Huang, "Uniqueness and Estimation of 3-D Motion Parameters of Rigid Objects with Curved Surfaces", IEEE Trans. Pattern Anal. Mach. Intell. 6, 13-27, 1984.

[7] S. Ullman, "Maximizing Rigidity: The Incremental Recovery of 3-D Structure from Rigid and Rubbery Motion", A.I. Memo 721, M.I.T., June 1983.

[8] A. Waxman and S. Ullman, "Surface Structure and 3-D Motion From Image Flow: A Kinematic Analysis", Intl. Journal of Robotics Research 4, 72-94, 1985.

[9] A. M. Waxman and K. Wohn, "Contour Evolution, Neighborhood Deformation and Global Image Flow: Planar Surfaces in Motion", Intl. Journal of Robotics Research 4, 95-108, 1985.

[10] K. Wohn and A. M. Waxman, "Contour Evolution, Neighborhood Deformation and Local Image Flow: Curved Surfaces in Motion", Computer Science Tech. Report TR-1531, July 1985.

[11] K. Wohn and J.
Wu, "Recovery of 3-D Structure and Motion from 1st-Order Image Deformation", Robotics Lab. Tech. Report, Harvard University, January 1986.

[12] K. Wohn, J. Wu and R. Brockett, "A Contour-based Recovery of Optical Flows by Iterative Improvement", Robotics Lab. Tech. Report, Harvard University, June 1986.

Table 1. Relative error in 3-D solution with respect to the noise in normal flow.

    noise     e_T        e_R
     0%       0.000%     0.000%
     5%       0.171%     1.493%
    10%       1.294%     4.532%
    15%       4.324%     8.218%
    20%      10.080%    11.115%
    25%      18.338%    12.523%

Figure 1. 3-D coordinate system and 2-D image coordinates, with motion relative to an object.

Figure 2. Estimating normal flow. The normal flow cannot be obtained exactly due to the tangential component.

Figure 3. Solution tree for the 3-D structure and motion algorithm.

Figure 4. Recovering full flow from normal flow. a) contours on the image plane. b) normal flow along the contours as input (no noise added). c) optical flow field recovered.

Figure 5. Recovering optical flow from contours. a) contours on the image planes as input (3 frames are shown). b) measured normal flow along the contours (from frames #1 and #2). c) optical flow field recovered.

Figure 6. Recovering optical flow from natural images. a) input images (one frame is shown). b) flow field recovered (with zero-crossings).
A STOCHASTIC APPROACH TO STEREO VISION

Stephen T. Barnard
Artificial Intelligence Center
SRI International
333 Ravenswood Avenue
Menlo Park, California 94025

Abstract

A stochastic optimization approach to stereo matching is presented. Unlike conventional correlation matching and feature matching, the approach provides a dense array of disparities, eliminating the need for interpolation. First, the stereo matching problem is defined in terms of finding a disparity map that satisfies two competing constraints: (1) matched points should have similar image intensity, and (2) the disparity map should be smooth. These constraints are expressed in an "energy" function that can be evaluated locally. A simulated annealing algorithm is used to find a disparity map that has very low energy (i.e., in which both constraints have simultaneously been approximately satisfied). Annealing allows the large-scale structure of the disparity map to emerge at higher temperatures, and avoids the problem of converging too quickly on a local minimum. Results are shown for a sparse random-dot stereogram, a vertical aerial stereogram (shown in comparison to ground truth), and an oblique ground-level scene with occlusion boundaries.

1 Introduction

To solve the stereo matching problem, one must assign correspondences between points on two lattices (the left and right images), such that corresponding points are the projections of the same point in the scene. The problem can be viewed as a complex optimization in which two criteria must be satisfied simultaneously. First, the corresponding points should have similar local features (in particular, similar intensity). Second, the spatial distribution of disparities, or, equivalently, the spatial distribution of depth estimates, should be plausible with respect to the depths likely to be observed in real scenes.
Several authors have noted that, because surfaces are spatially coherent, the result of the stereo process should also be coherent, except at the relatively rare occlusion boundaries (for example, see Julesz [1] and Marr and Poggio [2]). The first criterion, similarity of local features, is insufficient because stereo correspondences are locally ambiguous. The second criterion, which is sometimes called the smoothness constraint, provides a heuristic for deciding which of the many combinations of feature-preserving correspondences are best. The two major conventional approaches to stereo matching, feature matching and area correlation, suffer from two serious problems:

1. Areas of nearly homogeneous image intensity are difficult to match because they lack local spatial structure. Edge-matching approaches never even attempt to match in such areas because no edges are found, and area correlation approaches fail because no significant peaks appear in the correlation surface. For most stereo vision applications, however, a dense matching is required. Dense estimates of depth are also more consistent with the subjective quality of human stereo experience, as revealed, for example, in random-dot stereograms. To obtain dense depth maps with the conventional approaches, one must resort to a post-matching interpolation step.

2. Even where local structure is abundant, stereo correspondences may be ambiguous. Small-scale periodic structures are particularly difficult to match. To resolve these ambiguities, stereo matchers usually rely on a propagation of information, either from nearby areas, or from matching at larger scales, or both.

Support for this work was provided by the Defense Advanced Research Projects Agency under contracts DACA 7685-C-0004 and MDA903-83-C-0027. Support was also provided by FMC Corporation.

This paper describes an approach to stereo matching that is quite different from conventional area-based and feature-based matching.
It is essentially an undirected Monte Carlo search that simulates the physical process of annealing, in which a physical system composed of a large number of coupled elements is reduced to its lowest-energy configuration (or ground state) by slowly reducing the temperature while maintaining the system in thermal equilibrium. The system is composed of the lattice sites of the left image, and the state of each site encodes a disparity assignment. The total energy of the system is the sum of the energies of the local lattice sites. The local energy, which is a function of the states of the lattice site and its neighbors, has two terms: one term is proportional to the absolute intensity difference between the matching points, and the other term is proportional to the local variation of disparity (that is, to the lack of smoothness). The effect of a heat bath is simulated by considering local random state changes and accepting or rejecting them depending on the change in energy and the current temperature.

2 Simulated Annealing

Simulated annealing is a stochastic optimization technique that was inspired by concepts from statistical mechanics [3], [4]. It has been applied to a wide variety of complex problems that involve many degrees of freedom and do not have convex solution spaces. See Carnevali [5] for examples of image-processing applications. At the heart of simulated annealing is the Metropolis algorithm [6], which samples states of a system in thermal equilibrium.

From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved.
When a system is in thermal equilibrium, its states have a Boltzmann distribution:

    P(E) = exp(-E/T)                                            (1)

where E is energy, P(E) is the probability of a state having energy E, and T is the temperature of the system.¹ The Metropolis algorithm takes the system to equilibrium by considering random, local state transitions on the basis of the change in energy that they imply: if the change is negative, the transition is accepted; whereas, if the change is positive, the transition is accepted with probability exp(-ΔE/T).

Starting at a very high temperature, simulated annealing uses the Metropolis algorithm to bring the system to equilibrium. Then the temperature is lowered slightly and the procedure is repeated until a very low temperature is achieved. If the temperature is lowered too quickly, the system may get stuck in locally optimal configurations and the ground state may not be reached. The algorithm is shown in Figure 1.

    Select a random state S.
    Select a sufficiently high starting temperature T.
    while T > 0 do
        Make a random state change S' ← R(S).
        ΔE ← E(S') - E(S)
        if ΔE < 0 then S ← S'        ; accept lower-energy states
        else                         ; accept higher-energy states
            P ← exp(-ΔE/T)           ;   with probability P
            x ← random number in (0, 1]
            if x < P then S ← S'
        if there has been no significant decrease in E for many
            iterations, then lower the temperature T.

Figure 1: The Simulated Annealing Algorithm

Simulated annealing tends to exhibit good average-case performance. It has the advantage of being a very simple algorithm that is inherently massively parallel. Furthermore, the parallelism is easily implemented because the processors need only short interconnections, may run asynchronously, and can even be unreliable. To be a good candidate for simulated annealing, a problem should follow the analogy of physical annealing.
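As a runnable illustration, the loop of Figure 1 might look as follows in Python; replacing the informal "no significant decrease for many iterations" test with a fixed number of trials per temperature level is our simplification, not part of the paper.

```python
import math
import random

def anneal(state, energy, neighbor, t0=100.0, cooling=0.9,
           t_min=1.0, trials_per_temp=200):
    """Simulated annealing in the style of Figure 1: accept every
    downhill move; accept an uphill move of size dE with probability
    exp(-dE/T); cool the temperature geometrically.  Tracks and returns
    the best state seen, with its energy."""
    e = energy(state)
    best, best_e = state, e
    t = t0
    while t > t_min:
        for _ in range(trials_per_temp):
            cand = neighbor(state)
            de = energy(cand) - e
            if de < 0 or random.random() < math.exp(-de / t):
                state, e = cand, e + de
                if e < best_e:
                    best, best_e = state, e
        t *= cooling
    return best, best_e

# Toy use: minimize (x - 3)^2 over the integers by random +/-1 moves.
random.seed(0)
x, ex = anneal(0, lambda s: (s - 3) ** 2,
               lambda s: s + random.choice((-1, 1)))
print(x, ex)
```

At high temperature nearly every move is accepted and the search wanders freely; as T drops, the acceptance test increasingly rejects uphill moves and the state settles into a low-energy configuration.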
The function to be optimized should be expressed as an analog to the energy of a system composed of many local elements, and the interaction between the local elements should be short-range. A small random change in the state of the system should be possible by switching the microstate of a local element, and the resulting change in energy should be quickly computed by evaluating only the effects of the element's neighbors.

¹The Boltzmann distribution is usually written as exp(-E/kT), where k is Boltzmann's constant. Because we define energy and temperature as pure numbers, no constant is necessary.

3 Stochastic Stereo Matching

If the relative positions and orientations of the two cameras are known, as well as the internal camera parameters, we can use the epipolar constraint to restrict the correspondences to the epipolar lines [7]. With no loss of generality, we can assume that the epipolar lines are parallel to the horizontal lines of lattice sites.² The correspondence problem then reduces to the assignment of a single horizontal disparity to each pixel in, say, the left image lattice.

Suppose that we have left and right image lattices, L_k and R_k, with k = (i, j), 0 ≤ i, j ≤ n-1, that constitute a stereo pair with horizontal epipolar lines. The intensity of the left lattice point L_k is I_L(k) = I_L(i, j), and similarly for right lattice points. For every k there is a (horizontal) disparity D(k) such that the lattice point L_k = L_{i,j} in the left image matches the point R_{k'} = R_{i,j+D(k)} in the right image. The problem is to find an assignment of disparities to lattice points that satisfies the two criteria discussed in Section 1: similar intensity and smoothness.

We assume that the upper and lower limits of disparity, D_min and D_max, are known. Furthermore, we consider only integer values of disparity. Even with these restrictions, the system has N = (D_max - D_min + 1)^(n²) possible states.
Typical values in our examples are D_max = 9, D_min = 0, and n = 128, in which case N = 10^16384. Exhaustive search is obviously out of the question.

The disparity map should satisfy two criteria that are, to some extent, incompatible. The first criterion, which we call the photometric constraint, dictates that the disparity assignments should map points in L to points in R with comparable intensity: I_L(k) ≈ I_R(k'). The second criterion is the smoothness constraint, which limits the variation in the disparity map.

Both criteria cannot be perfectly satisfied except in trivial situations. The photometric constraint can only be approximately satisfied due to sensor noise, quantization, slight lighting differences, and the presence of areas in one image that are occluded in the other. As discussed above, areas of homogeneous intensity will lead to ambiguous disparities based on photometry alone. The smoothness constraint will be perfectly satisfied only with a uniform disparity map.

In an attempt to satisfy the two criteria simultaneously, we minimize a function of the form:

    E = Σ_k [ ||I_L(k) - I_R(k + D(k))|| + λ ||∇D(k)|| ]        (2)

The first term inside the sum represents the photometric constraint and the second term the smoothness constraint. The constant λ determines their relative importance. We implement the ||∇D(k)|| operator as the sum of the absolute differences between disparity D(k) and the disparities of the kth lattice point's eight neighbors. Equation (2) is similar to the nonquadratic Tikhonov stabilizer proposed for stereo by Poggio et al. [8].

Following the simulated annealing algorithm, the system begins in a state chosen at random. Individual lattice points are considered in scan-line order, new disparities are selected at random, and the changes in energy are computed from equation (2).

²If the epipolar lines are not horizontal, the images can be mapped into a rectified stereo pair.
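Because both terms of equation (2) are local, the energy effect of a candidate disparity can be evaluated from one site and its eight neighbors alone. A minimal sketch follows; the border handling (clamping the shifted column and skipping out-of-range neighbors) is our assumption, since the paper does not specify it.

```python
import numpy as np

LAM = 5.0  # relative weight of the smoothness term (the paper uses 5)

def local_energy(IL, IR, D, i, j, d):
    """Local contribution of assigning disparity d to site (i, j):
    the photometric term |IL(i, j) - IR(i, j + d)| plus LAM times the
    sum of absolute disparity differences with the 8 neighbors, as in
    equation (2).  Clamping/skipping at borders is a sketch assumption."""
    jr = min(j + d, IR.shape[1] - 1)          # clamp at the right border
    e = abs(float(IL[i, j]) - float(IR[i, jr]))
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == 0 and dj == 0:
                continue
            ii, jj = i + di, j + dj
            if 0 <= ii < D.shape[0] and 0 <= jj < D.shape[1]:
                e += LAM * abs(d - int(D[ii, jj]))
    return e

# A Metropolis update at site (i, j) would compare
# local_energy(IL, IR, D, i, j, d_new) against
# local_energy(IL, IR, D, i, j, D[i, j]) to obtain the energy change.
```

Only this local difference is needed per trial, which is what makes the per-site updates cheap and massively parallelizable.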
Instead of monitoring the energy distribution to test for thermal equilibrium, we use a fixed annealing schedule.

4 Results

We have tested the stochastic matching algorithm on a variety of images, including random-dot stereograms, vertical aerial stereograms, and oblique ground-level stereograms. Identical parameters were used for all the examples shown in this section. In particular, the intensity ranged between 0 and 255, and we used λ = 5. We used a fixed annealing schedule: the temperature begins at T = 100 and is repeatedly reduced by 10% until it falls below T = 1. A total of ten scans through the lattice are performed for each temperature in this sequence.

Figure 2 shows a four-level "wedding cake" random-dot stereogram composed of 10% white and 90% black pixels. The background has zero disparity, and each successive layer has an additional two pixels of disparity. The figure shows the results with disparities encoded as grey values. Pixels with higher disparity are "closer" and are displayed as brighter values. Intermediate results for T = 47 and T = 25 and the final result for T = 0 are shown.

Figures 2c-e illustrate an important advantage of stochastic matching: the large-scale structure of the scene begins to emerge at higher temperatures, and as the temperature decreases finer structures become apparent. Temperature therefore provides a mechanism for dealing with problems of scale that is simpler than the complex search strategies employed by conventional methods. Note that the final disparity map is dense and that it corresponds very well to the three-dimensional wedding-cake shape. The errors are confined to the occlusion boundaries.

The next example, shown in Figure 3, is a vertical aerial stereogram supplied by the Engineering Topographic Laboratory (ETL). The original images have been bandpassed to remove the DC component. Again, intermediate results for T = 47 and T = 25 and the final result for T = 0 are shown.
In addition, a disparity map supplied by ETL is shown for comparison.³ The stochastic matching algorithm produces a result that is quite similar to the ETL data, although it is somewhat smoother. To some extent, this difference can be explained by the fact that the ETL result was produced from higher-resolution stereo images. The errors on the right border of Figure 3e are due to the fact that the stereo images do not have 100% overlap.

The final example, shown in Figure 4, is an oblique view of an outdoor scene containing a number of trees in both the foreground and background. The result in Figure 4e is certainly plausible, although we do not have a quantitative disparity model to compare it with, as in the previous examples. The matching algorithm seems to have smoothed over the foreground trees more than necessary, although we must be careful when relying on our subjective impressions of depth. When we interpret a scene like this one, we do not use stereo exclusively.

³The ETL disparity map was made with an interactive digital correlation device that depends on a human operator to detect and correct errors. The disparity map in Figure 3f has been sampled from a larger map compiled from much higher-resolution imagery.

Figure 2: A 10% Random-Dot Stereogram. (a) left image; (b) right image; (c) T = 47; (d) T = 25; (e) T = 0.

5 Conclusions

Stochastic stereo matching provides an attractive alternative to conventional stereo-matching techniques in several respects. The algorithm is simple and, with suitable parallel hardware, can be very fast. Unlike conventional approaches, it produces a dense disparity map. As noted by Geman and Geman [9], stochastic optimization by simulated annealing is in some ways similar to relaxation labeling [10]. In both approaches, objects are classified in such a way as to be consistent with a global context and to satisfy local constraints. There are, however, important differences.
Relaxation labeling is a nonstochastic approach that, unlike simulated annealing, finds the local optimum closest to the initial state. Simulated annealing is intended to find the global optimum, or at least a local optimum nearly as good as the global one. Relaxation labeling has no counterparts for two important concepts in simulated annealing: temperature and thermal equilibrium.

Figure 3: A Vertical Aerial Stereogram. (a) left image; (b) right image; (c) T = 47; (d) T = 25; (e) T = 0; (f) ETL disparity map.

The concept of temperature in simulated annealing provides a way to handle different scales in the problem instance. At higher temperatures, objects are only weakly coupled, and long-range interactions among large collections of objects can dominate the behavior of the system. At lower temperatures, local interactions take over. This effect was clearly seen in the examples of Section 4. Some physical systems exhibit a phase transition at some critical temperature. When simulating such systems, one must be careful to lower the temperature very slowly in the vicinity of the critical temperature. We have not observed phase transitions in the stereo problem and have been able to use fixed annealing schedules.

We are considering two extensions of the simple model presented here. First, the effective range of disparity could be increased by using lattices of several scales, allowing the coarser ones to bias the finer, in a manner similar to the hierarchical control structures used in many other matching techniques. Second, following Geman and Geman [9], a "line process" could be used to model depth discontinuities; although, in addition to lines, the process would also model occluded areas.

Figure 4: An Oblique Stereogram. (a) left image; (b) right image; (c) T = 47; (d) T = 25; (e) T = 0.

References

[1] B. Julesz, Foundations of Cyclopean Perception, Univ. of Chicago Press, Chicago, Ill., 1971.

[2] D. Marr and T.
Poggio, "Cooperative computation of stereo disparity," Science, 194, 1976, pp. 283-287.

[3] S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi, "Optimization by simulated annealing," Science, vol. 220, no. 4598, May 13, 1983, pp. 671-680.

[4] S. Kirkpatrick and R. H. Swendsen, "Statistical mechanics and disordered systems," Comm. ACM, vol. 28, no. 4, April 1985, pp. 363-373.

[5] P. Carnevali, L. Coletti, and S. Patarnello, "Image processing by simulated annealing," IBM J. Res. Develop., vol. 29, no. 6, November 1985, pp. 564-579.

[6] N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller, "Equations of state calculations by fast computing machines," J. Chem. Phys., vol. 21, no. 6, June 1953, pp. 1087-1092.

[7] S. T. Barnard and M. A. Fischler, "Computational stereo," Computing Surveys, vol. 14, no. 4, December 1982, pp. 553-572.

[8] T. Poggio, V. Torre, and C. Koch, "Computational vision and regularization theory," Nature, vol. 317, September 1985, pp. 314-319.

[9] S. Geman and D. Geman, "Stochastic relaxation, Gibbs distributions, and Bayesian restoration of images," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. PAMI-6, no. 6, November 1984, pp. 721-741.

[10] R. A. Hummel and S. W. Zucker, "On the foundations of relaxation labeling processes," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. PAMI-5, May 1983, pp. 267-287.
Determining the 3-D Motion of a Rigid Surface Patch without Correspondence, under Perspective Projection: I. Planar Surfaces. II. Curved Surfaces.

John (Yiannis) Aloimonos and Isidore Rigoutsos
Department of Computer Science
The University of Rochester, Rochester, New York 14627.

Abstract

A method is presented for the recovery of the 3-D motion parameters of a rigidly moving textured surface. The novelty of the method is based on the following two facts: 1) no point-to-point correspondences are used, and 2) "stereo" and "motion" are combined in such a way that no correspondence between the left and the right stereo pairs is required.

1. Introduction

An important problem in Computer Vision is to recover the 3-D motion of a moving object from its successive images. Dynamic visual information can be produced by a sensor moving through the environment and/or by independently moving objects in the observer's visual field. The interpretation of such dynamic imagery consists of dynamic segmentation, recovery of the 3-D motion (of the sensor and the objects in the environment) as well as determination of the structure of the environmental world. The results of such an interpretation can be used to control behavior, as for example in robotics, tracking, and autonomous navigation. Up to now there have been, basically, three approaches towards the solution of this problem:

1) The first assumes the dynamic image to be a three-dimensional function of two spatial arguments and a temporal argument. Then, if this function is locally well behaved and its spatiotemporal derivatives are computable, the image velocity or optical flow may be computed ([7, 9, 10, 17, 23, 35, 39]).

2) The second method for measuring image motion considers the cases where the motion is "large" and the previous technique is not applicable. In these instances the measurement technique relies upon isolating and tracking highlights or feature points in the image through time.
In other words, operators are applied on both dynamic frames which output a set of points in both images, and then the correspondence problem between these two sets of points has to be solved, i.e. finding which points on both dynamic frames are due to the projection of the same world point ([3, 21a, 21b, 6, 32, 33]). In both of the above approaches, after the optical flow field or the discrete displacement field (which can be sparse) is computed, algorithms are constructed for the determination of the three-dimensional motion, based on the optical flow or discrete displacement values ([1, 4, 5a, 5b, 8, 18, 19, 24, 25, 26, 27, 28, 29, 30, 32, 33, 34, 36, 38]).

3) The three-dimensional motion parameters are computed directly from the spatial and temporal derivatives of the image intensity function. In other words, if f is the intensity function and (u, v) the optical flow at a point, then the equation f_x u + f_y v + f_t = 0 holds approximately. All the methods in this category are based on the substitution of the optical flow values, in terms of the three-dimensional motion parameters, into the above equation, and there is very good work in this direction ([22, 11, 2]).

As the problem has been formulated over the years, one camera is used, and so the three-dimensional motion parameters that have to be computed, and can be computed, are five (two for the direction of translation and three for the rotation). In our approach, we consider a binocular observer, and so all six parameters of the motion can be recovered.

2. Motivation and Previous Work

The basic motivation for this research is the fact that optical flow (or discrete displacement) fields produced from real images by existing techniques are corrupted by noise and are partially incorrect ([33]). Most of the algorithms in the literature that use the retinal motion field to recover three-dimensional motion fail when the input (retinal motion) is noisy. Some algorithms work reasonably for images in a specific domain.
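The constraint f_x u + f_y v + f_t = 0 from approach 3 can be verified numerically on a toy case where the derivatives are exact, e.g. a linear intensity ramp translating at a known velocity (our illustration, not an experiment from the paper):

```python
import numpy as np

# Two frames of a linear intensity ramp f(x, y) = a*x + b*y translating
# with image velocity (u, v); derivatives of a linear image are exact,
# so the brightness-constancy residual vanishes identically.
Y, X = np.mgrid[0:16, 0:16].astype(float)
a, b = 2.0, -1.5
u, v = 3.0, 1.0                      # image velocity in pixels/frame
f = a * X + b * Y                    # frame at time t
g = a * (X - u) + b * (Y - v)        # frame at time t+1 (pattern shifted)

fx = np.gradient(f, axis=1)          # spatial derivatives
fy = np.gradient(f, axis=0)
ft = g - f                           # temporal derivative (frame difference)

residual = fx * u + fy * v + ft
print(np.abs(residual).max())        # the constraint holds exactly here
```

On real images, noise, higher-order intensity variation, and the aperture problem make the equation only approximate, which is exactly the fragility the next section discusses.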
Some researchers ([26, 40, 41, 42, 8, 43]) developed sets of non-linear equations with the three-dimensional motion parameters as unknowns, which are solved by iteration and initial guessing. These methods are very sensitive to noise, as is reported in [26, 40, 8, 43]. On the other hand, other researchers ([30, 18]) developed methods that require the solution not of non-linear systems but of linear ones. Despite that, in the presence of noise the results are not satisfactory ([30, 18]). Bruss and Horn ([5a]) presented a least-squares formalism that computes the motion parameters by minimizing a measure of the difference between the input optic flow and the flow predicted from the motion parameters. The method, in the general case, results in solving a system of non-linear equations, with all the inherent difficulties of such a task, and it seems to behave well with respect to noise only when the noise in the optical flow field has a particular distribution. Prazdny, Rieger, and Lawton presented methods based on the separation of the optical flow field into its translational and rotational components, under different assumptions ([24, 25]). But difficulties are reported with the approach of Prazdny in the presence of noise ([44]), while the methods of Rieger and Lawton require the presence of occluding boundaries in the scene, something which cannot be guaranteed. Finally, Ullman in his pioneering work ([32]) presented a local analysis, but his approach seems to be sensitive to noise, because of its local nature. Several other authors ([19, 38]) use the optical flow field and its first and second spatial derivatives at corresponding points to obtain the motion parameters. But these derivatives seem to be unreliable under noise, and there is no known algorithm which can determine them reasonably in real images.
From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved. PERCEPTION AND ROBOTICS / 681

Others ([1]) follow an approach based partially on local interpretation of the flow field, but it can be proved ([34]) that any local interpretation of the flow field is unstable. At this point it is worth noting that all the aforementioned methods assume an unrestricted motion (translation and rotation). In the case of restricted motion (only translation), a robust algorithm has been reported by Lawton ([45]), which was successfully applied to some real images. His method is based on a global sampling of an error measure over the potential positions of the focus of expansion (FOE); finally, a local search is required to determine the exact location of the minimum value. However, the method is time-consuming, and is likely to be very sensitive to small rotations. Also the inherent problems of correspondence, in the sense that there may be drop-ins or drop-outs in the two dynamic frames, are not taken into account. All in all, most of the methods presented up to now for the computation of three-dimensional motion depend on the values of flow or retinal displacements. There is probably no algorithm to date that can compute retinal motion reasonably (for example, with 10% accuracy) in real images. Even if we had some way to compute retinal motion in a reasonable (acceptable) fashion, i.e. with at most an error of 10%, for example, all the algorithms proposed to date that use retinal motion as input would still produce non-robust results. The reason for this is the fact that the motion constraint (i.e., the relation between three-dimensional motion and retinal displacements) is very sensitive to small perturbations ([47]).
Table 1 shows how the error of the motion parameters grows as the error in image point correspondence increases when 8-point correspondence is used, and Table 2 shows the same relationship when 20-point correspondence is used, with 2.5% error in the point correspondences, based on a recent algorithm of great mathematical elegance. (Tables 1 and 2 are from [30].)

Table 1: Error of motion parameters for 8-point correspondence, for 2.5% error in point correspondence.
Error of E (essential parameters): 73.91%
Error of rotation parameters: 38.70%
Error of translations: 103.60%

Table 2: Error of motion parameters for 20-point correspondence, for 2.5% error in point correspondence.
Error of E (essential parameters): 19.49%
Error of rotation parameters: 2.40%
Error of translations: 29.66%

It is clear from the above tables that the sensitivity of the algorithm in [30] to small errors is very high. It is worth noting at this point that the algorithm in [30] solves linear equations, but its sensitivity to error in the point correspondences is not improved with respect to algorithms that solve non-linear equations. It is also worth mentioning that the same behavior is present in the algorithms that compute 3-D motion in the case of planar surfaces ([30]). Finally, the third approach, which computes the motion parameters directly from the spatiotemporal derivatives of the image intensity function, gets rid of the correspondence problem and seems very promising. In [11, 22, 14], the behavior with respect to noise is not discussed. But extensive experiments ([31]) implementing the algorithms presented in [2] show that noise in the intensity function affects the computed three-dimensional motion parameters a great deal. We should also mention that the constraint f_x*u + f_y*v + f_t = 0 is a very gross approximation of the actual constraint under perspective projection ([46]).
So, despite the fact that no correspondences are used in this approach, the resulting algorithms seem to have the same sensitivity to small errors in the input as in the previous cases. This fact should not be surprising, because even if we avoid correspondences, the constraint between three-dimensional motion and retinal motion (regardless of whether the retinal motion is expressed as optic flow or as the spatiotemporal variation of the image intensity function) will be essentially the same when one camera is used (monocular observer, traditional approach). This constraint cannot change, since it relates three-dimensional motion to two-dimensional motion through projective geometry. So, as the problem has been formulated (monocular observer), it seems to present a great deal of difficulty. This is again not surprising, and the same difficulty is encountered in many other problems in computer vision (shape from shading, structure from motion, stereo, etc.). There has recently been an effort to combine information from different sources in order to achieve uniqueness and robustness of low-level visual computations ([47]). With regard to the problem of determining the three-dimensional motion parameters, why not combine motion information with some other kind of information? It is clear that in this case the constraints will not be the same, and there is some hope for robustness in the computed parameters. As the other kind of information to be combined with motion, we choose stereo. The need for combining stereo with motion has recently been appreciated by a number of researchers ([13, 37, 12, 47]). Jenkin and Tsotsos ([13]) used stereo information for the computation of retinal motion, and they presented good results for their images. Waxman et al. ([37]) presented a promising method for dynamic stereo, which is based on the comparison of image flow fields obtained from cameras in known relative motion, with passive ranging as the goal.
Whitman Richards ([48]) combines stereo disparity with motion in order to recover correct three-dimensional configurations from two-dimensional images (orthography-vergence). Finally, Huang and Blostein ([12]) presented a method for three-dimensional motion estimation that is based on stereo information. In their work, the static stereo problem as well as the three-dimensional matching problem have to be solved before the motion estimation problem. The emphasis is placed on the error analysis, since the amount of noise (at typical image resolutions) in the input of the motion estimation algorithm is very large. So a natural question arises: is it possible to recover three-dimensional motion from images without having to go through the very difficult correspondence problem? And if such a thing is possible, how immune to noise will the algorithm be? In this paper, we prove that if we combine stereo and motion in a certain sense and avoid any static or dynamic correspondence, then we can compute the three-dimensional motion of a moving object. At this point, it is worth noting recent results by Kanatani ([15, 16]) that deal with finding the three-dimensional motion of planar contours under small motion, without point correspondences. These methods seem to suffer a great deal from numerical errors, but they have great mathematical elegance. As the problem has been formulated over the years, usually one camera is used and so the 3-D motion parameters that can be computed are five: 2 for the direction of translation and 3 for the rotation. In our approach, we assume a binocular observer and so we recover 6 motion parameters: 3 for the translation and 3 for the rotation. With the traditional one-camera approach to the estimation of the 3-D motion parameters of a rigid planar patch, it was mentioned ([26]) that one should use the image point correspondences of object points not on a single planar patch when estimating the 3-D motions of rigid objects.
But it was not known how many solutions there were, what was the minimum number of points and views needed to assure uniqueness, and how those solutions could be computed without using any iterative search (i.e., without having to solve non-linear systems). It was proved ([27, 28, 30]) that there are exactly two solutions for the 3-D motion parameters and plane orientations, given at least 4 image point correspondences in two perspective views, unless the 3x3 matrix containing the canonical coordinates of the second kind ([20]) for the Lie transformation group that characterizes the retinal motion field of a moving planar patch has multiple singular values. However, the solutions are unique if three views of the planar patch are given, or two views with at least two planar patches. In our approach, the duality problem does not exist for two views, since two cameras are used (and so the analysis is done in 3-D). In this paper, we present a method for the recovery of the 3-D motion of a rigidly moving surface patch by a binocular observer, without using correspondence either for the stereo or for the motion. We first analyze the case of planar surfaces and then develop the theory for any surface. The organization of the paper is as follows: Section 3 describes how to recover the structure and depth of a set of 3-D planar points from their images in the left and right flat retinae, without using any point correspondences. We also discuss the effect of noise on the procedure and describe a method for improving the two-camera model using three cameras (trinocular observer). Section 4 gives a method for the recovery of the 3-D direction of translation of a translating set of planar points from their images without using any correspondence; it furthermore introduces the reader to Section 5, which deals with the solution of the general problem (the case where the set of 3-D planar points is moving rigidly, i.e.
translating and rotating). Section 6 describes the theory for the determination of the 3-D motion of any kind of surface that moves with an unrestricted motion.

3. Stereo without correspondence

In this section we present a method for the recovery of the 3-D parameters of a set of 3-D planar points from their left and right images without using any point-to-point correspondence; instead we consider all point correspondences at once, and so there is no need to solve the difficult correspondence problem of static stereo. Let an orthogonal Cartesian coordinate system OXYZ be fixed with respect to the left camera, with O at the origin (O being also the nodal point of the left eye) and the Z-axis pointing along the optical axis. Let the image plane of the left camera be perpendicular to the Z-axis at the point (0,0,f) (focal length = f). Let the nodal point of the right camera be at the point (d,0,0) and its image plane be identical to the left one; the optical axis of the right camera (eye) also points along the Z-axis and passes through the point (d,0,0) (see Figure 1). Consider a set of 3-D points A = { (Xi, Yi, Zi) / i = 1,2,3 ... n } lying on the same plane (see Figure 1), the latter being described by the equation:

Z = p*X + q*Y + c    (1)

Let O_l, O_r be the origins of the two-dimensional orthogonal coordinate systems on each image plane; these origins are located on the left and right optical axes, while the corresponding coordinate systems have their y-axes parallel to the axis OY and their x-axes parallel to OX. Finally, let { (x_li, y_li) / i = 1,2,3 ... n } and { (x_ri, y_ri) / i = 1,2,3 ... n } be the projections of the points of set A on the left and right retinae, respectively, i.e.:

x_li = f*Xi / Zi    (2)        y_li = f*Yi / Zi    (3)        / i = 1,2,3 ... n
x_ri = f*(Xi - d) / Zi    (4)        y_ri = f*Yi / Zi    (5)        / i = 1,2,3 ... n

Let (x_li, y_li) and (x_ri, y_ri) be corresponding points in the two frames.
Then we have that:

x_li - x_ri = f*d / Zi    (6)

where Zi is the depth of the 3-D point having those projections; combining (1) with (2) and (3),

1/Zi = (f - p*x_li - q*y_li) / (c*f)    (7)

It can be proved ([49]) that the quantity

(1/(f*d)) * [ Σ_{i=1..n} y_li^k * x_li  -  Σ_{i=1..n} y_ri^k * x_ri ]    (8)

where k ≥ 0, k = m/n, m, n ∈ Z - {0}, is directly computable (each sum involves one frame only, and y_li = y_ri for corresponding points by (3) and (5)) and, by (6), equal to:

Σ_{i=1..n} y_li^k / Zi    (9)

Substituting (7) into (9):

(1/(f*d)) * [ Σ_{i=1..n} y_li^k * x_li - Σ_{i=1..n} y_ri^k * x_ri ] = Σ_{i=1..n} y_li^k * (f - p*x_li - q*y_li) / (c*f)    (10)

The left-hand side of equation (10) is computable without using any point-to-point correspondence (see above). If we write equation (10) for three different values of k, we obtain a linear system that determines the unknowns p, q, c and that in general has a unique solution (except for the case where the projections of all points of set A have the same y-coordinate in both frames); here we used equation (9) for the left-hand sides. The solution of the above system recovers the structure and the depth of the points of set A without any correspondence.

3.3 Practical Considerations

We have implemented the above method for different values of k1, k2, k3 and especially for the cases:
a) k1 = 0, k2 = 1/3, k3 = 2/3
b) k1 = 0, k2 = 1/3, k3 = 1/5

The noiseless cases give extremely accurate results. Before we proceed, we must explain what we mean by noise introduced in the images. When we say that one frame (left or right) has noise of a%, we mean that if the plane contains N projection points, we added [(N*a)/100] randomly distributed points. (Note: [] denotes the integer part of its argument.) When the noise in both frames is kept below 2%, the results are still very satisfactory. When the noise exceeds 5%, only the value of p gets corrupted; the values of q and c remain very satisfactory. To correct this and get satisfactory results for high noise percentages, we devised the following method that uses three cameras: "We consider the three-camera configuration system as in Figure 2, where the top camera has only a vertical displacement with respect to the left one.
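The linear system behind equation (10) can be illustrated numerically. The sketch below is our own reconstruction, not the paper's implementation: it uses integer moment weights k = 0, 1, 2 instead of the fractional values in the experiments above, and solves for the intermediate unknowns (1/c, p/c, q/c), in which form equation (10) is strictly linear. No left-right pairing is used anywhere: each sum is taken over one frame.

```python
import numpy as np

rng = np.random.default_rng(0)
f, d = 1.0, 0.5                      # focal length, stereo baseline
p, q, c = 0.3, -0.2, 5.0             # ground-truth plane Z = p*X + q*Y + c

# 3-D points on the plane, and their left/right projections (eqs. (2)-(5))
X = rng.uniform(-1.0, 1.0, 200)
Y = rng.uniform(-1.0, 1.0, 200)
Z = p * X + q * Y + c
xl, yl = f * X / Z, f * Y / Z        # left image
xr, yr = f * (X - d) / Z, f * Y / Z  # right image (yr == yl for this rig)

# Per-frame moment sums: the left and right sums are formed independently,
# so no point-to-point correspondence is required.
def S(k):
    return np.sum(yl**k * xl) - np.sum(yr**k * xr)

# Equation (10) with u = 1/c, v = p/c, w = q/c:
#   S(k) = d*f*u*sum(yl^k) - d*v*sum(yl^k * xl) - d*w*sum(yl^(k+1))
ks = (0, 1, 2)
A = np.array([[d * f * np.sum(yl**k),
               -d * np.sum(yl**k * xl),
               -d * np.sum(yl**(k + 1))] for k in ks])
b = np.array([S(k) for k in ks])
u, v, w = np.linalg.solve(A, b)
c_hat, p_hat, q_hat = 1.0 / u, v / u, w / u
print(p_hat, q_hat, c_hat)
```

In this noiseless setting the recovery is exact up to floating-point error; adding spurious points to one frame, as in the noise model above, perturbs the two sides of S(k) unequally and degrades the estimates, p first, consistent with the behavior the text reports.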
If all three images are corrupted by noise (ranging from 5% to 20%), then application of the algorithm (Proposition 3.1) to the left and top frames will give very reasonable values for p and c and a corrupted q; q, as well as c, is accurately computed from the application of the same algorithm to the right and left frames." So, by applying our stereo (without correspondence) algorithm to the 3-camera configuration vision system, we obtain accurate results for the parameters describing the 3-D planar patch, even for noise percentages of 20% or slightly more, and for different amounts of noise in the different frames.

3.1 Proposition: Using the aforementioned nomenclature, the parameters p, q and c of the plane in view are directly computable ([49]) without using any point-to-point correspondence between the two frames.

4. Recovering the direction of translation

Here we treat the case where the points of set A just rigidly translate, and we wish to recover the direction of the translation. In this case the depth is not needed, but the orientation of the plane is required. The general case is treated in the next section.

4.1 Technical prerequisites

Consider a coordinate system OXYZ fixed with respect to the camera; O coincides with the nodal point of the eye, while the image plane is perpendicular to the Z-axis (focal length = f), which points along the optical axis (see Figure 3). Let us represent points on the image plane with small letters (e.g. (x,y)) and points in the world with capital ones (e.g. (X,Y,Z)). Let us consider a point P = (X1,Y1,Z1) in the world, with perspective image (x1,y1), where x1 = (f*X1)/Z1 and y1 = (f*Y1)/Z1. If the point P moves to the position P' = (X2,Y2,Z2) with

X2 = X1 + ΔX    (14)
Y2 = Y1 + ΔY    (15)
Z2 = Z1 + ΔZ    (16)

then we wish to find the direction of the translation (ΔX/ΔZ, ΔY/ΔZ).
If the perspective image of P' is (x2,y2), then the observed motion of the world point in the image plane is given by the displacement vector (x2 - x1, y2 - y1) (which in the case of very small motion is also known as "optical flow"). We can easily prove that:

x2 - x1 = (f*ΔX - x1*ΔZ) / (Z1 + ΔZ)    (17)
y2 - y1 = (f*ΔY - y1*ΔZ) / (Z1 + ΔZ)    (18)

Under the assumption that the motion in depth is small with respect to the depth, the equations above become:

x2 - x1 = (f*ΔX - x1*ΔZ) / Z1    (19)
y2 - y1 = (f*ΔY - y1*ΔZ) / Z1    (20)

The above equations relate the retinal motion (left-hand sides) to the world motion ΔX, ΔY, ΔZ.

4.2 Detecting the 3-D direction of translation without correspondence

Consider again a coordinate system OXYZ fixed with respect to the camera as in Figure 4, and let A = { (Xi,Yi,Zi) / i = 1,2,3 ... n } such that Zi = p*Xi + q*Yi + c / i = 1,2,3 ... n, that is, the points are planar. Let the points translate rigidly with translation (ΔX, ΔY, ΔZ), and let { (xi,yi) / i = 1,2,3 ... n } and { (xi',yi') / i = 1,2,3 ... n } be the projections of the set A before and after the translation, respectively. Consider a point (xi,yi) in the first frame which has a corresponding one (xi',yi') in the second (dynamic) frame. For the moment we do not worry about where the point (xi',yi') is, but we do know that the following relations hold between these two points:

xi' - xi = (f*ΔX - xi*ΔZ) / Zi    (21)
yi' - yi = (f*ΔY - yi*ΔZ) / Zi    (22)

where Zi is the depth of the 3-D point whose projection (on the first dynamic frame) is the point (xi,yi).
Taking now into account that

1/Zi = (f - p*xi - q*yi) / (c*f)    (23)

the above equations become:

xi' - xi = (f*ΔX - xi*ΔZ) * (f - p*xi - q*yi) / (c*f)    (24)
yi' - yi = (f*ΔY - yi*ΔZ) * (f - p*xi - q*yi) / (c*f)    (25)

If we now write equation (24) for all the points in the two dynamic frames and sum the resulting equations up, we get:

Σ_i (xi' - xi) = Σ_i (f*ΔX - xi*ΔZ) * (f - p*xi - q*yi) / (c*f)    (26)

Similarly, if we do the same for equation (25), we get:

Σ_i (yi' - yi) = Σ_i (f*ΔY - yi*ΔZ) * (f - p*xi - q*yi) / (c*f)    (27)

At this point it has to be understood that equations (26) and (27) do not require finding any correspondence: each side can be evaluated frame by frame. By dividing equation (26) by equation (27), we get:

[ Σ_i xi' - Σ_i xi ] / [ Σ_i yi' - Σ_i yi ] = [ Σ_i (f*ΔX - xi*ΔZ) * (f - p*xi - q*yi) ] / [ Σ_i (f*ΔY - yi*ΔZ) * (f - p*xi - q*yi) ]    (28)

Equation (28) is a linear equation in the unknowns ΔX/ΔZ, ΔY/ΔZ, and its coefficients consist of expressions involving summations of point coordinates in both dynamic frames; the computation of the latter requires the establishment of no point correspondences. So, if we consider a binocular observer, applying the above procedure in both left and right "eyes", we get two linear equations (of the form of equation (28)) in the two unknowns ΔX/ΔZ, ΔY/ΔZ, which constitute a linear system that in general has a unique solution.

What the previous method is not about

If one is not careful when analyzing the previous method, one might think that all the method does is to correspond the center of mass of the image points before the motion with the center of mass of the image points after the motion, and then, based on that retinal motion, recover the three-dimensional motion. But this is wrong, because perspective projection does not preserve simple ratios, and so the center of mass of the image points before the motion does not correspond to the center of mass of the image points after the motion.
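The two-eye construction above can be sketched on synthetic data. The snippet below is our own hedged illustration: the plane parameters p, q, c are assumed already recovered (as in Section 3), the small-ΔZ approximation of equations (19)-(20) is used, and one equation of the form (28), in cross-multiplied form, is built per eye before solving the 2x2 system.

```python
import numpy as np

rng = np.random.default_rng(1)
f, d = 1.0, 0.5                      # focal length, stereo baseline
p, q, c = 0.2, -0.1, 5.0             # plane Z = p*X + q*Y + c (known, Section 3)
T = np.array([0.04, -0.03, 0.02])    # true translation; |dZ| << depth

X = rng.uniform(-1.0, 1.0, 300)
Y = rng.uniform(-1.0, 1.0, 300)
Z = p * X + q * Y + c

def eye_equation(ox):
    """One linear equation (28) in a = dX/dZ, b = dY/dZ for the eye at (ox,0,0)."""
    x1, y1 = f * (X - ox) / Z, f * Y / Z                          # before motion
    x2, y2 = f * (X + T[0] - ox) / (Z + T[2]), f * (Y + T[1]) / (Z + T[2])
    Sx, Sy = x2.sum() - x1.sum(), y2.sum() - y1.sum()             # frame-wise sums only
    # In this eye's frame the plane is Z = p*x + q*y + (c + p*ox), so
    # 1/Zi is proportional to wi = f - p*x1i - q*y1i; the constant cancels in (28).
    w = f - p * x1 - q * y1
    P, Qx, Qy = w.sum(), (x1 * w).sum(), (y1 * w).sum()
    # cross-multiplied (28):  -Sy*f*P*a + Sx*f*P*b = Sx*Qy - Sy*Qx
    return [-Sy * f * P, Sx * f * P], Sx * Qy - Sy * Qx

rows, rhs = zip(*(eye_equation(ox) for ox in (0.0, d)))
a, b = np.linalg.solve(np.array(rows), np.array(rhs))
print(a, b)   # approximates dX/dZ = 2.0, dY/dZ = -1.5
```

The residual error here comes only from the Z1 + ΔZ ≈ Z1 approximation; it is small because the motion in depth is small relative to the depth, exactly the regime assumed in 4.1.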
All the above method does is aggregate the motion constraints; it does not correspond centers of mass.

4.3 Practical considerations

We have implemented the above method with a variety of planes as well as displacements; noiseless cases give extremely accurate results, while cases with noise percentages up to 20% (even with different amounts of noise in all four frames (first left and right, second left and right)) give very satisfactory results (an error of at most 5%). We now proceed to the general case.

5. Determining unrestricted 3-D motion of a rigid planar patch without point correspondences

Consider again the (binocular) imaging system of Figure 4, as well as the set A = { (Xi,Yi,Zi) / i = 1,2,3 ... n } such that Zi = p*Xi + q*Yi + c / i = 1,2,3 ... n, i.e. the points are planar; let B be the plane on which they lie. Suppose that the points of the set A move rigidly in space (translation plus rotation) and become members of a set A' = { (Xi',Yi',Zi') / i = 1,2,3 ... n }. Since all of the points of set A move rigidly, it follows that the points of set A' are also planar; let B' be the (new) plane on which these points lie. In other words, the set A becomes A' after the rigid motion transformation. We wish to recover the parameters of this transformation. From the projections of the sets A and A' on the left and right image planes, and using the method described in Section 3, the sets A and A' can be computed. In other words, we know exactly the positions in 3-D of all the points of the sets A and A' (and this has been found without using any point correspondences - Section 3).
So, the problem of recovering the 3-D motion has been transformed to the following: "Given the set A of planar points in 3-D and the set A' of new planar points, which has been produced by applying to the points of set A a rigid motion transformation, recover that transformation." Any rigid body motion can be analyzed into a rotation plus a translation; the rotation axis can be considered as passing through any point in space, but after this point is chosen, everything else is fixed. If we consider the rotation axis as passing through the center of mass (CM) of the points of set A, then the vector which has as its two endpoints the centers of mass CM_A and CM_A' of sets A and A' respectively represents the exact 3-D translation. So, for the translation we can write:

translation = T = (X,Y,Z) = CM_A' - CM_A

It remains to recover the rotation matrix. Let, therefore, n1 and n2 be the surface normals of the planes B and B'. Then the angle θ between n1 and n2, where

cos θ = (n1 · n2) / (||n1|| * ||n2||),   with '·' the inner-product operator,

represents the rotation around an axis o1o2 perpendicular to the plane defined by n1 and n2, where

o1o2 = (n1 × n2) / ||n1 × n2||,   with '×' the cross-product operator.

From the axis o1o2 and the angle θ we develop a rotation matrix R1. The matrix R1 does not represent the final rotation matrix, since we are still missing the rotation around the surface normal. Indeed, if we apply the rotation matrix R1 and the translation T to the set A, we will get a set A'' of points which is different from A', because the rotation matrix R1 does not include the rotation around the surface normal n2. So we now have a matching problem: on the plane B' we have two sets of points, A' and A'' respectively, and we want to recover the angle Φ by which we must rotate the points of set A'' (with respect to the surface normal n2) in order to make them coincide with those of set A'. Suppose that we can find the angle Φ. From Φ and n2 we construct a new rotation matrix R2.
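Both R1 (axis o1o2, angle θ) and R2 (axis n2, angle Φ) are rotations built from an axis and an angle. The text does not spell out this construction; the standard Rodrigues formula, sketched below, is one way to do it (the normals chosen here are illustrative only):

```python
import numpy as np

def axis_angle_to_matrix(axis, angle):
    """Rodrigues formula: rotation by `angle` about the unit vector along `axis`."""
    a = np.asarray(axis, dtype=float)
    a = a / np.linalg.norm(a)
    # K is the skew-symmetric matrix with K @ v == np.cross(a, v)
    K = np.array([[0.0, -a[2], a[1]],
                  [a[2], 0.0, -a[0]],
                  [-a[1], a[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

# e.g. R1 from the axis n1 x n2 and the angle between the two surface normals:
n1 = np.array([0.0, 0.0, 1.0])
n2 = np.array([0.0, np.sin(0.3), np.cos(0.3)])
axis = np.cross(n1, n2)
theta = np.arccos(np.dot(n1, n2) / (np.linalg.norm(n1) * np.linalg.norm(n2)))
R1 = axis_angle_to_matrix(axis, theta)
print(R1 @ n1)   # maps n1 onto n2
```

R2 is obtained the same way, with axis n2 and angle Φ, and the final rotation is the product R = R1 R2 described next.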
The final rotation matrix R can be expressed in terms of R1, R2 as follows: R = R1*R2. It therefore remains to explain how we can compute the angle Φ. For this we need the statistical definition of the mean direction.

Definition 1. Consider a set A = { (Xi,Yi) / i = 1,2,3 ... n } of points, all of which lie on the same plane. Consider the center of mass, CM, of these points to have coordinates (Xcm,Ycm). Let also circle(CM,1) be the circle having its center at (Xcm,Ycm) and radius of length equal to 1. Let Pi be the intersections of the vectors CM-Ai with the circumference of the circle (CM,1), i = 1,2,3 ... n. Then the "mean direction" of the points of the set A is defined to be the vector MD, where MD = Σ_i CM-Pi.

It is clear that the vector of the mean direction is intrinsically connected with the set of points considered each time, and if the set of points is rotated around an axis perpendicular to the plane and passing through CM by an angle ω, the new mean direction vector is the previous one rotated by the same angle ω. So, returning to the analysis of our approach, the angle Φ is the angle between the vectors of mean directions of the sets A' and A'' (which have, obviously, common CMs). Moreover, it is obvious that the angle Φ, and therefore the rotation matrix R2, cannot be computed in the case where the mean direction is 0 (i.e. in the case where the set of points is characterized by a point symmetry).

6. Determining unrestricted 3-D motion of a rigid surface without point correspondences

In this section we consider the problem of the recovery of unrestricted 3-D motion of non-planar surfaces. Again, we consider a set of rigidly moving points, and we assume that the depth information is available. In another work ([49]), we describe how to recover the depth of a set of non-planar points from their stereo images without having to go through the correspondence problem. So consider the (binocular) imaging system of Figure 5, and a set A = { Pi = (Xi, Yi, Zi) / i = 1,2,3 ...
n } of 3-D non-planar points. The coordinates are with respect to a fixed coordinate system that will be used throughout the paper (we can consider as this system either the system of the left or right camera, or the head-frame coordinate system). Applying the method described in [49] to the left and right images of the points of set A, we can recover the members of A themselves, i.e. their 3-D coordinates. Suppose now that the points of the set A move rigidly in space (translation plus rotation) and that they become members of the set A' = { Pi' = (Xi', Yi', Zi') / i = 1,2,3 ... n }. It is evident that the set A' can be recovered exactly as the set A with the method described in [49]. In other words, the set A becomes A' after the rigid motion transformation. We wish to recover the parameters of this transformation. We have already stated that from the projections of the sets A and A' on the left and right image planes, and using the method described in [49], the sets A and A' can be computed. Hence we know exactly the positions of the points of the sets A and A' (and we came up with this result without relying on any point-to-point correspondence). So, for the purposes of this section we will assume that the depth information is available. From the above discussion, we see that the problem of recovering the 3-D motion has been transformed to the following: "Given the set A of nonplanar points and the set A' corresponding to the new positions of the initial points after they have experienced a rigid motion transformation, recover that transformation, without any point-to-point correspondences!" Any rigid motion can be analyzed into a rotation plus a translation; the rotation axis can be considered as passing through any point in space, but after this point is chosen, everything else is fixed.
If we consider the rotation axis as passing through the origin of the coordinate system, then if the point (Xi, Yi, Zi) ∈ A moves to a new position (Xi', Yi', Zi') ∈ A', the following relation holds:

(Xi', Yi', Zi')^t = R * (Xi, Yi, Zi)^t + T    / i = 1,2,3 ... n    (29)

where R is the 3x3 rotation matrix and T = (ΔX, ΔY, ΔZ)^t is the translation vector. We wish to recover the parameters R and T without using any point-to-point correspondences. Let (Xi,Yi,Zi)^t ≡ Pi and (Xi',Yi',Zi')^t ≡ Pi' / i = 1,2,3 ... n. Then equation (29) becomes:

Pi' = R*Pi + T    / i = 1,2,3 ... n

Summing up the above n equations and dividing by the total number of points, n, we get:

(Σ_i Pi') / n = R * (Σ_i Pi) / n + T    (30)

From equation (30) it is clear that if the rotation matrix R is known, then the translation vector T can be computed. So, in the sequel, we will describe how to recover the rotation matrix R. In order to get rid of the translational part of the motion, we shall transform the 3-D points to "free" vectors by subtracting the center-of-mass vector. Let, therefore, CM_A and CM_A' be the center-of-mass vectors of the sets of points A and A' respectively; i.e. CM_A = Σ_i (Pi / n) and CM_A' = Σ_i (Pi' / n). We furthermore define:

vi = Pi - CM_A    / i = 1,2,3 ... n
vi' = Pi' - CM_A'    / i = 1,2,3 ... n

With these definitions, the motion equation (29) becomes:

vi' = R*vi    / i = 1,2,3 ... n

where R is the (orthogonal) rotation matrix. If we know the correspondences of some points (at least three), then the matrix R can in principle be recovered, and such efforts have been published [12]. But we would like to recover the matrix R without using any point correspondences. Let

vi = (v_xi, v_yi, v_zi)    / i = 1,2,3 ... n
vi' = (v'_xi, v'_yi, v'_zi)    / i = 1,2,3 ... n

Note that vi and vi' are the position vectors of the members of sets A and A' respectively, with respect to their center-of-mass coordinate systems.
We wish to find a quantity that will uniquely characterize the whole sets A and A' in terms of their "relationship" (rigid motion transformation). We have found that the matrix consisting of the second-order moments of the vectors vi and vi' has these properties. In particular, let

V  = [ Σ_i v_xi^2        Σ_i v_xi*v_yi     Σ_i v_xi*v_zi
       Σ_i v_yi*v_xi     Σ_i v_yi^2        Σ_i v_yi*v_zi
       Σ_i v_zi*v_xi     Σ_i v_zi*v_yi     Σ_i v_zi^2   ]

V' = [ Σ_i v'_xi^2       Σ_i v'_xi*v'_yi   Σ_i v'_xi*v'_zi
       Σ_i v'_yi*v'_xi   Σ_i v'_yi^2       Σ_i v'_yi*v'_zi
       Σ_i v'_zi*v'_xi   Σ_i v'_zi*v'_yi   Σ_i v'_zi^2   ]

that is, V = Σ_i vi*vi^t and V' = Σ_i vi'*vi'^t. From these relations we have that:

V' = Σ_i vi'*vi'^t = Σ_i (R*vi)*(R*vi)^t = R * ( Σ_i vi*vi^t ) * R^t

so,

V' = R*V*R^t    (31)

At this point it should be mentioned that equation (31) represents an invariance between the two sets of 3-D points A and A', since the matrices V and V' are similar. In other words, we have discovered that the matrix V remains invariant under the rigid motion transformation. The reason that the quantity (matrix) V remains invariant is much deeper and very intuitive, and it comes from the principles of Classical Mechanics (see also APPENDIX). From now on, the recovery of the rotation matrix R is simple and comes from basic Linear Algebra. Equation (31) implies that the matrices V and V' have the same set of eigenvalues ([50]). But since V and V' are symmetric matrices, they can be expanded in their eigenvalue decompositions, i.e. there exist matrices S, T such that:

V = S*D*S^t    (32)
V' = T*D*T^t    (33)

where S, T are orthogonal matrices having as columns the eigenvectors of the matrices V and V' respectively (e.g. the i-th column corresponding to the i-th eigenvalue), and D is a diagonal matrix consisting of the eigenvalues of the matrices V and V'. We have to mention at this point that in order to make the decomposition unique we require that the eigenvectors in the columns of the matrices S and T be orthonormal.
From equations (31), (32), (33) we derive that the matrices T and R*S both consist of the orthonormal eigenvectors of the matrix V'. In other words, the columns of the matrices R*S and T must be the same, with a possible change of sign. So, the matrix R*S is equal to one of eight possible matrices Ti, i = 1,...,8. Thus R = Ti*S^t, i = 1,...,8. But the rotation matrix is orthogonal and has determinant equal to one. Furthermore, if we apply the matrix R to the set of vectors vi, then we should get the set of vectors vi'. So, given the above three conditions and Chasles' theorem, the matrix R can be computed uniquely. There is something to be said about the uniqueness properties of the algorithm. When all the eigenvalues of the matrix V have multiplicity one, then the problem has a unique solution. When there are eigenvalues with multiplicity more than one, then there is some inherent symmetry in the problem that exhibits degeneracy properties. For example, if the surface in view (i.e. the surface on which the points lie) is a solid of revolution, then there is an eigenvalue (of the matrix V) with multiplicity 2, and only the eigenvector corresponding to the axis of revolution can be found. The other two eigenvectors define a plane perpendicular to the axis of revolution. So, in this case there is an inherent degeneracy. We are currently working towards a complete mathematical characterization of the degenerate cases of the problem. We are also developing experiments to test the robustness of the method, as well as setting up the equipment for experimentation on natural images.

7. Conclusion and future work

We have presented a method by which a binocular (or trinocular) observer can recover the structure, depth, and 3-D motion of a rigidly moving surface patch without using any static or dynamic point correspondences.
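The Section 6 procedure can be sketched end-to-end on synthetic data. The snippet below is our own hedged illustration: it assumes exact 3-D coordinates are available (as the text does, via [49]), and it resolves the eight sign combinations by a brute-force set-matching cost, which is one concrete way to realize the condition that R applied to the set {vi} must reproduce the set {vi'} without any pairing being known.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(2)

def axis_angle(axis, angle):
    """Rodrigues formula for a rotation matrix from an axis and an angle."""
    a = np.asarray(axis, float); a = a / np.linalg.norm(a)
    K = np.array([[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

R_true = axis_angle([1.0, 2.0, 0.5], 0.7)
T_true = np.array([0.3, -0.2, 0.1])

P  = rng.normal(size=(100, 3))                       # set A (3-D points, depths known)
Pp = (P @ R_true.T + T_true)[rng.permutation(100)]   # set A', ordering scrambled

v, vp = P - P.mean(0), Pp - Pp.mean(0)               # "free" vectors
V, Vp = v.T @ v, vp.T @ vp                           # second-order moment matrices
_, S  = np.linalg.eigh(V)                            # V  = S D S^t
_, Tm = np.linalg.eigh(Vp)                           # V' = Tm D Tm^t

def set_cost(A, B):
    """Mismatch between two unordered point sets (sum of nearest-neighbor distances)."""
    dist = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return dist.min(axis=1).sum()

# R*S = Tm up to column signs: enumerate the sign choices, keep det(R) = +1,
# and pick the candidate that maps the set {vi} onto the set {vi'}.
candidates = [(Tm * s) @ S.T for s in product((1.0, -1.0), repeat=3)]
candidates = [R for R in candidates if np.linalg.det(R) > 0]
R_est = min(candidates, key=lambda R: set_cost(v @ R.T, vp))
T_est = Pp.mean(0) - R_est @ P.mean(0)               # equation (30)
print(T_est)   # recovers T_true up to numerical error
```

The degeneracy discussed above shows up here directly: for a point cloud with a repeated eigenvalue of V (e.g. sampled from a solid of revolution), the eigenvector columns of S and Tm are no longer determined individually, and several candidates attain the same cost.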
We are currently setting up the experiment for the application of the method to natural images. We are also working towards a theoretical error analysis of the presented methods, as well as the development of experiments to test their robustness. 8. Acknowledgments. We would like to thank Christopher Brown for his constructive criticism during the preparation of this paper. This work was supported by the National Science Foundation (DCR-8320136) and by the Defense Advanced Research Projects Agency (DACA76-85-C-0001, N00014-82-K-0193). REFERENCES 1. G. Adiv, Determining 3-D Motion and Structure from Optical Flow Generated from Several Moving Objects, COINS Tech. Rep. 84-07, July 1984. 2. J. Aloimonos and C. M. Brown, Direct Processing of Curvilinear Sensor Motion from a Sequence of Perspective Images, Proc. IEEE Workshop on Computer Vision: Representation and Control, Annapolis, Maryland, 1984. 686 / SCIENCE 3. A. Bandyopadhyay, A Multiple Channel Model for the Perception of Optical Flow, Proc. IEEE Workshop on Computer Vision: Representation and Control, 1984, pp. 78-82. 4. A. Bandyopadhyay and J. Aloimonos, Perception of Rigid Motion from Spatio-Temporal Derivatives of Optical Flow, Tech. Report 157, Dept. of Computer Science, University of Rochester, 1985. 5a) A. Bruss and B.K.P. Horn, Passive Navigation, CVGIP 21, 1983, pp. 3-20. 5b) D.H. Ballard and O.A. Kimball, Rigid Motion from Depth and Optical Flow, Computer Graphics and Image Processing, 22:95-115, 1983. 6. S.T. Barnard and W.B. Thompson, Disparity Analysis of Images, IEEE Trans. PAMI, Vol. 2, 1980, pp. 333-340. 7. L.S. Davis, Z. Wu and H. Sun, Contour Based Motion Estimation, CVGIP 23, 1983, pp. 313-326. 8. J.Q. Fang and T.S. Huang, Solving Three Dimensional Small-Rotation Motion Equations: Uniqueness, Algorithms and Numerical Results, CVGIP 26, 1984, pp. 183-206. 9. R.M. Haralick and J.S. Lee, The Facet Approach to Optical Flow, Proc.
Image Understanding Workshop, Arlington, Virginia, June 1983, pp. 84-93. 10. B.K.P. Horn and B.G. Schunck, Determining Optical Flow, Artificial Intelligence 17, 1981, pp. 185-204. 11. T.S. Huang, Three Dimensional Motion Analysis by Direct Matching, Topical Meeting on Machine Vision, Incline Village, Nevada, 1985, pp. FA1.1-FA1.4. 12. T.S. Huang and S.D. Blonstein, Robust Algorithms for Motion Estimation Based on Two Sequential Stereo Image Pairs, Proc. CVPR 1985, pp. 518-523. 13. M. Jenkin and J. Tsotsos, Applying Temporal Constraints to the Dynamic Stereo Problem, Department of Computer Science, University of Toronto, CVGIP (to be published). 14. T. Kanade, Camera Motion from Image Differentials, Proc. Annual Meeting, Optical Society of America, Lake Tahoe, March 1985. 15. K.-I. Kanatani, Detecting the Motion of a Planar Surface by Line and Surface Integrals, CVGIP, Vol. 29, 1985, pp. 13-22. 16. K.-I. Kanatani, Tracing Planar Surface Motion from a Projection without Knowing the Correspondence, CVGIP, Vol. 29, 1985, pp. 1-12. 17. L. Kitchen and A. Rosenfeld, Gray-Level Corner Detection, TR 887, Computer Science Dept., U. of Maryland, 1980. 18. H.C. Longuet-Higgins, A Computer Algorithm for Reconstructing a Scene from 2 Projections, Nature 293, 1981. 19. H.C. Longuet-Higgins and K. Prazdny, The Interpretation of a Moving Retinal Image, Proc. Royal Society London B 208, 1980, pp. 385-397. 20. Y. Matsushima, Differentiable Manifolds, New York: Marcel Dekker, 1972. 21a) H.P. Moravec, Towards Automatic Visual Obstacle Avoidance, Proc. 5th IJCAI, 1977. 21b) H. Nagel, Displacement Vectors Derived from Second Order Intensity Variations in Image Sequences, CVGIP, Vol. 21, 1983, pp. 85-117. 22. S. Negahdaripour and B.K.P. Horn, Determining 3-D Motion of Planar Objects from Image Brightness Patterns, IJCAI 85, 1985, pp. 898-901. 23. J.M. Prager and M.A. Arbib, Computing the Optical Flow: The MATCH Algorithm and Prediction, CVGIP, Vol. 21, 1983, pp. 272-304. 24.
K. Prazdny, Determining the Instantaneous Direction of Motion from Optical Flow Generated by a Curvilinearly Moving Observer, CVGIP, Vol. 17, 1981, pp. 94-97. 25. J.H. Rieger and D.T. Lawton, Determining the Instantaneous Axis of Translation from Optic Flow Generated by Arbitrary Sensor Motion, COINS Tech. Report 83-1, January 1983. 26. J.W. Roach and J.K. Aggarwal, Determining the Movement of Objects from a Sequence of Images, PAMI, Vol. 2, No. 6, November 1980, pp. 554-562. 27. R.Y. Tsai, T.S. Huang and W. Zhu, Estimating Three Dimensional Motion Parameters of a Rigid Planar Patch II: Singular Value Decomposition, IEEE Trans. A.S.S.P. ASSP-30, August 1982, pp. 525-534. 28. R.Y. Tsai and T.S. Huang, Estimating Three Dimensional Motion Parameters of a Rigid Planar Patch III: Finite Point Correspondences and the Three View Problem, Proc. IEEE Conf. ASSP, Paris, May 1982. 29. R.Y. Tsai and T.S. Huang, Uniqueness and Estimation of Three Dimensional Motion Parameters of Rigid Objects with Curved Surfaces, IEEE Trans. PAMI 6, January 1984, pp. 13-27. 30. R.Y. Tsai and T.S. Huang, Uniqueness and Estimation of Three Dimensional Motion Parameters of Rigid Objects, in Image Understanding 1984, eds. Shimon Ullman and Whitman Richards, Ablex Publ. Co., New Jersey, 1984, pp. 135. 31. B. Sarachan, Experiments in Rotational Egomotion Calculation, TR 152, Computer Science Dept., U. of Rochester, 1985. 32. S. Ullman, The Interpretation of Visual Motion, Ph.D. Thesis, 1977. 33. S. Ullman, Analysis of Visual Motion by Biological and Computer Systems, IEEE Computer, 14 (8), 1981, pp. 57-69. 34. S. Ullman, Computational Studies in the Interpretation of Structure and Motion: Summary and Extension, AI Memo No. 706, MIT AI Lab., March 1983. 35. S. Ullman and E. Hildreth, The Measurement of Visual Motion, in Physical and Biological Processing of Images (Proc. Int. Symp. Rank Prize Funds, London), O.J. Braddick and A.C. Sleigh (eds.), Springer-Verlag, September 1982, pp. 154-176. 36. B. Yen and T.S.
Huang, Determining 3-D Motion and Structure of Rigid Objects Containing Lines Using the Spherical Projection, in Image Sequence Processing and Dynamic Scene Analysis, T.S. Huang (ed.), 1983. 37. A.M. Waxman and S.S. Sinha, Dynamic Stereo: Passive Ranging to Moving Objects from Relative Image Flows, CAR Tech. Report 74, College Park, Maryland 20742, July 1984. 38. A. Waxman and S. Ullman, Surface Structure and 3-D Motion from Image Flow: A Kinematic Approach, CAR-TR-24, Center for Automation Research, Univ. of Maryland, 1983. 39. K. Wohn and A. Waxman, Contour Evolution, Neighbourhood Deformation and Local Image Flow: Curved Surfaces in Motion, CAR-TR-134, Computer Vision Lab., Univ. of Massachusetts. 40. K. Prazdny, Egomotion and Relative Depth Map from Optical Flow, Biol. Cybernetics 36, 1980. 41. H. Nagel, On the Derivation of Three-Dimensional Rigid Point Configurations from Image Sequences, Proc. PRIP, Dallas, TX, 1981. 42. H.-H. Nagel and B. Neumann, On Three-Dimensional Reconstruction from Two Perspective Views, Proc. 7th IJCAI, Vancouver, Canada, 1981. 43. J.Q. Fang and T.S. Huang, Estimating Three-Dimensional Movement of Rigid Objects: Experimental Results, Proc. 8th IJCAI, Karlsruhe, West Germany, 1983. 44. C. Jerian and R. Jain, Determining Motion Parameters for Scenes with Translation and Rotation, Proc. Workshop on Motion, Toronto, Canada, 1983. 45. D. Lawton, Motion Analysis via Local Translational Processing, Proc. Workshop in Computer Vision, Rindge, NH, 1982. 46. B. Schunck, personal communication, 1984. 47. J. Aloimonos, Low-Level Visual Computations, Ph.D. thesis, Computer Science Dept., U. of Rochester, in preparation. 48. W. Richards, Structure from Stereo and Motion, JOSA A 2, 2, February 1985. 49. J. Aloimonos and I. Rigoutsos, Determining the 3-D Motion of a Rigid Surface Patch, without Correspondence, under Perspective Projection, TR 178, Dept. of Computer Science, University of Rochester, December 1985. 50.
G. Stewart, Introduction to Matrix Computations, Academic Press, 1983. 51. J.L. Synge and A. Schild, Tensor Calculus, Toronto: University of Toronto Press, 1949. 52. H. Goldstein, Classical Mechanics, Addison-Wesley, 1980. APPENDIX We know that the quotient of two quantities is often not a member of the same class as the dividing factor; it may belong to a more complicated class. To support this statement we need only recall that the quotient of two integers is in general a rational number. Similarly, the quotient of two vectors cannot be defined consistently within the class of vectors. PERCEPTION AND ROBOTICS / 687 We need a class that is a superset of that of vectors, namely the class of tensors. The quantity known as the moment of inertia of a rigid body with respect to its axis of rotation is defined by I = L / ω, where I, L, and ω are, respectively, the moment of inertia of the body, the (total) angular momentum of the body, and its angular velocity with respect to its axis of rotation, say OO'. It is not therefore surprising to find that I is a new kind of quantity, namely a tensor of the second rank. In a Cartesian space of three dimensions, a tensor T of the k-th rank may be defined for our purposes as a quantity having 3^k components T_{i1 i2 i3 ... ik} that transform under an orthogonal transformation of coordinates, A, according to the following relation (see [51]):

T'_{i'1 i'2 ... i'k}(x') = Σ_{i1, i2, ..., ik} a_{i'1 i1} a_{i'2 i2} ... a_{i'k ik} T_{i1 i2 ... ik}(x)

By this definition, the 3^2 = 9 components of a tensor of the second rank transform according to the equation:

T'_{ij} = Σ_{k,l=1}^{3} a_{ik} a_{jl} T_{kl}

If one wants to be rigorous, one must distinguish between a second order tensor T and the square matrix formed from its components.
A tensor is defined only in terms of its transformation properties under orthogonal coordinate transformations. In the case of matrices, however, there is no restriction on the kind of transformation they may undergo. But within the restricted domain of orthogonal transformations, there is a practical as well as important identity: the tensor components and the matrix elements are manipulated in exactly the same fashion; as a matter of fact, for every tensor equation there is a corresponding matrix equation, and vice versa. Consider now an orthogonal transformation of coordinates defined by a matrix A. The components of a square matrix V then become

V' = A V A^t , or equivalently, V'_{ij} = Σ_{k,l=1}^{3} a_{ik} V_{kl} a_{jl} .

If we now denote by I the 3x3 matrix that corresponds to the inertia tensor of the second rank, we are able to write I' = A I A^t, where, in the inertia matrix, m_i is the mass of the i-th "particle" (point) and (x_i, y_i, z_i) ≡ r_i is its position vector with respect to the considered coordinate system. Restricting ourselves to the center-of-mass coordinate system, with respect to which the rigid motion is viewed as consisting only of a rotational part (see the previous discussion and [52]), and recalling that the rotation matrix R defines an orthogonal transformation of the coordinates, we can write

[ I'_xx  I'_xy  I'_xz ]       [ I_xx  I_xy  I_xz ]
[ I'_yx  I'_yy  I'_yz ]  =  R [ I_yx  I_yy  I_yz ] R^t ,   (1)
[ I'_zx  I'_zy  I'_zz ]       [ I_zx  I_zy  I_zz ]

where the primed and the unprimed factors refer to quantities measured with respect to the center-of-mass coordinate system after and before the transformation (rigid motion), respectively. Consider now the diagonal matrix

D = [ Q 0 0 ; 0 Q 0 ; 0 0 Q ] ,

where Q is an arbitrary scalar. From basic Linear Algebra it follows that D = R D R^t (2). The above relation (2) will clearly hold for the case Q ≡ Σ_i m_i (x_i^2 + y_i^2 + z_i^2) = Σ_i m_i (r_i · r_i), where r_i is the position vector of the i-th "particle" (point) with mass m_i with respect to the center-of-mass coordinate system.
At this point recall that orthogonal transformations preserve inner products. Hence, if r'_i is the new position vector, with respect to the same (center-of-mass) coordinate system, of the i-th "particle" (point), the following equation will obviously hold:

r'_i · r'_i = r_i · r_i ,   i = 1, 2, ..., n .

Therefore Q' ≡ Σ_i m_i (x'_i^2 + y'_i^2 + z'_i^2) = Σ_i m_i (x_i^2 + y_i^2 + z_i^2) ≡ Q, and equation (2) can now be written as follows: D' = R D R^t (3). Recall that the primed quantities refer to the center-of-mass coordinate system after the rigid motion. Finally, subtracting equation (3) from equation (1), and recalling from Linear Algebra that R A1 R^t - R A2 R^t = R (A1 - A2) R^t for any two matrices A1 and A2 of appropriate order, we conclude that

[ Σ_i m_i x'_i^2      Σ_i m_i x'_i y'_i   Σ_i m_i x'_i z'_i ]       [ Σ_i m_i x_i^2      Σ_i m_i x_i y_i   Σ_i m_i x_i z_i ]
[ Σ_i m_i y'_i x'_i   Σ_i m_i y'_i^2      Σ_i m_i y'_i z'_i ]  =  R [ Σ_i m_i y_i x_i   Σ_i m_i y_i^2      Σ_i m_i y_i z_i ] R^t .
[ Σ_i m_i z'_i x'_i   Σ_i m_i z'_i y'_i   Σ_i m_i z'_i^2    ]       [ Σ_i m_i z_i x_i   Σ_i m_i z_i y_i   Σ_i m_i z_i^2    ]

In other words, the right-hand matrix is an invariant under orthogonal transformations, and such a transformation is the rigid motion as viewed from the center-of-mass coordinate system. Certainly the moment of inertia matrix I can be used instead of the matrix V (recall Section 6), but the matrix V has a simpler form and so is better suited for calculations. The moment of inertia matrix I, however, facilitates a uniqueness analysis of the problem. Figure 4. Figure 5.
Stereo Integral Equation Grahame B. Smith Artificial Intelligence Center, SRI International Menlo Park, California 94025 Abstract A new approach to the formulation and solution of the problem of recovering scene topography from a stereo image pair is presented. The approach circumvents the need to solve the correspondence problem, returning a solution that makes surface interpolation unnecessary. The methodology demonstrates a way of handling image analysis problems that differs from the usual linear-system approach. We exploit the use of nonlinear functions of local image measurements to constrain and infer global solutions that must be consistent with such measurements. Because the solution techniques we present entail certain computational difficulties, significant work still lies ahead before they can be routinely applied to image analysis tasks. 1 Introduction The recovery of scene topography from a stereo pair of images has typically proceeded by three quasi-independent steps. In the first step, the relative orientation of the two images is determined. This is generally achieved by selecting a few scene features in one image and finding their counterparts in the other image. From the positions of these features, we calculate the parameters of the transformation that would map the feature points in one image into their corresponding points in the other image. Once we have the relative orientation of the two images, we have constrained the position of corresponding image points to lie along lines in their respective images. Now we commence the second phase in the recovery of scene topography, namely, determining a large number of corresponding points. The purpose of the first step is to reduce the difficulty involved in finding this large set of corresponding points.
Because we have the relative orientation of the two images, we only have to make a one-dimensional search (along the epipolar lines) to find points in the two images that correspond to the same scene feature. This step, usually called solving the "correspondence" problem, has received much attention. Finding many corresponding points in stereo pairs of images is difficult. Irrespective of whether the technique employed is area-based correlation or edge-based matching, the resultant set of corresponding points is usually small compared with the number of pixels in the image. The solution to the correspondence problem, therefore, is not a dense set of points over the two images but rather a sparse set. Solution of the correspondence problem is made more difficult in areas of the scene that are relatively featureless or where there is much repeated structure, constituting local ambiguity. [Footnote: The work reported here was supported by the Defense Advanced Research Projects Agency under Contracts MDA903-83-C-0027 and DACA76-85-C-0004.] To generate the missing intermediate data, the third step of the process is one of surface interpolation. Scene depth at corresponding points is calculated by simple triangulation; this gives a representation in which scene depth values are known for some set of image plane points. To fill this out and obtain a dense set of points at which scene depth is known, an interpolation procedure is employed. Of late there has been significant interest in this problem, and various techniques that use assumptions about the surface properties of the world have been demonstrated [1,3,5,8]. Such techniques, despite some difficulties, have made it possible to reconstruct credible scene topography. Of the three steps outlined, the initial one of finding the relative orientation of the two images is really a procedure designed to simplify the second step, namely, finding a set of matched points.
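The "simple triangulation" step mentioned above can be illustrated for the standard rectified two-camera geometry (a background sketch with made-up numbers, not this paper's formulation): with focal length f in pixels, baseline s, and disparity d = x_left - x_right, the depth of a matched point is z = f s / d.

```python
# Rectified-stereo triangulation sketch; all values are hypothetical.
f = 500.0                        # focal length in pixels
s = 0.1                         # baseline between optical centers, meters
x_left, x_right = 320.0, 300.0   # matched image coordinates, pixels
disparity = x_left - x_right     # 20 pixels of disparity
z = f * s / disparity            # depth of the scene point, meters
print(z)                         # -> 2.5
```

It is exactly this per-match computation that leaves depth known only at the sparse matched points, motivating the interpolation step the text goes on to criticize.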
We can identify several aspects of these first two steps that suggest the need for an alternative view of the processes entailed in reconstructing scene topography from stereo image pairs. The techniques employed to solve the correspondence problem are usually local processes. When a certain feature is found in one image, an attempt is made to find the corresponding point in the other image by searching for it within a limited region of that image. This limit is imposed not just to reduce computational costs, but to restrict the number of comparisons so that false matches can be avoided. Without such a limit, many points may "match" the feature selected. Ambiguity cannot be resolved by a local process; some form of global postmatching process is required. The difficulties encountered in featureless areas and where repeated structure exists are those we bring upon ourselves by taking too local a view. In part, the difficulties of matching even distinct features are self-imposed by our failure to build into the matching procedure the shape of the surface on which the feature lies. That is, when we are doing the matching we usually assume that a feature lies on a surface patch that is orthogonal to the line of sight - and it is only at some later stage that we calculate the true slope of the surface patch. Even when we try various slopes for the surface patch during the matching procedure, we rarely return after the surface shape has been estimated to determine whether that calculated shape is consistent with the best slope actually found in matching. In the formulation presented in the following sections, the problem is deliberately couched in a form that allows us to ask the question: what is the shape of the surface in the world that can account for the two image irradiances we see when we view that surface from the two positions represented by the stereo pair?
We make no assumptions about the surface shape to do the matching - in fact, we do not do any matching at all. What we are interested in is recovering the surface that explains simultaneously all the parts of the irradiance pattern that are depicted in the stereo pair of images.
From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved.
We seek the solution that is globally consistent and is not confused by local ambiguity. In the conventional approach to stereo reconstruction, the final step involves some form of surface interpolation. This is necessary because the previous step - finding the corresponding points - could not perform well enough to obviate the need to fabricate data at intermediate points. Surface interpolation techniques employ a model of the expected surface to fill in between known values. Of course, these known data points are used to calculate the parameters of the models, but it does seem a pity that the image data encoding the variation of the surface between the known points are ignored in this process and replaced by assumptions about the expected surface. In the following formulation we eliminate the interpolation step by recovering depth values at all the image pixels. In this sense, the image data, rather than knowledge of the expected surface shape, guide the recovery algorithm. We previously presented a formulation of the stereo reconstruction problem in which we sought to skirt the correspondence problem and in which we recovered a dense set of depth values [6]. That approach took a pair of image irradiance profiles, one from the left image and its counterpart from the right image, and employed an integration procedure to recover the scene depth from what amounted to a differential formulation of the stereo problem. While successful in a noise-free context, it was extremely sensitive to noise.
Once the procedure, which tracked the irradiance profiles, incurred an error, recovery proved impossible. Errors occurred because there was no locally valid solution. It is clear that that procedure would not be successful in cases of occlusion, when there are irradiance profile sections that do not correspond. The approach described in this paper attempts to overcome these problems by finding the solution at all image points simultaneously (not sequentially, as in the previous formulation) and making it the best approximation to an overconstrained system of equations. The rationale behind this methodology is based on the expectation that the best solution to the overconstrained system will be insensitive both to noise and to small discrepancies in the data, e.g., at occlusions. While the previous efforts and the work presented here aimed at similar objectives, the formulation of the problem is entirely different. However, the form of the input - image irradiance profiles - is identical. The new formulation of the stereo reconstruction task is given in terms of one-dimensional problems. We relate the image irradiance along epipolar lines in the stereo pair of images to the depth profile of the surface in the world that produced the irradiance profiles. For each pair of epipolar lines we produce a depth profile, from which the profile for a whole scene may then be derived. The formulation could be extended directly to the two-dimensional case, but the essential information and ideas are better explained and more easily computed in the one-dimensional case. We couch this presentation in terms of stereo reconstruction, although there is no restriction on the acquisition positions of the two images; they may equally well be frames from a motion sequence.
2 Stereo Geometry As noted earlier, our formulation takes two image irradiance profiles - one from the left image, one from the right - and describes the relationship between these profiles and the corresponding depth profile of the scene.
[Figure 1: Stereo Geometry. The two-dimensional arrangement in the epipolar plane that contains the optical axis of the left imaging system. D is the point (x, -z); DN = (s - x) and O_R N = (h - z).]
The two irradiance profiles we consider are those obtained from corresponding epipolar lines in the stereo pair of images. Let us for the moment consider a pair of cameras pointed towards some scene. Further, visualize the plane containing the optical axis of the left camera and the line joining the optical centers of the two cameras, i.e., an epipolar plane. This plane intersects the image plane in each camera, and the image irradiance profiles along these intersections are the corresponding irradiance profiles that we use. Of course, there are many epipolar planes, not just the one containing the left optical axis. Consequently, each plane gives us a pair of corresponding irradiance profiles. For the purpose of this formulation we can consider just the one epipolar plane containing the left optical axis, since the others can be made equivalent. A description of this equivalence is given in a previous paper [6]. Figure 1 depicts the two-dimensional arrangement. AB and GH are in the camera image planes, while O_L and O_R are the cameras' optical centers. D is a typical point in the scene, and AD and GD are rays of light from the scene onto the image planes of the cameras. From this diagram we can write two equations that relate the image coordinates x_L and x_R to the scene coordinates x and z. These are standard relationships that derive from the geometry of stereo viewing.
For the left image,

x_L / f_L = -x / z ,

while for the right image,

g_R(x_R) = (x - s) / (h - z) ,

where

g_R(x_R) = ( x_R cos φ - f sin φ ) / ( x_R sin φ + f cos φ ) .

In addition, it should be noted that the origin of the scene coordinates is at the optical center of the left camera, and therefore the z values of all world points that may be imaged are such that z < 0. 3 Irradiance Considerations From any given point in a scene, rays of light proceed to their image projections. What is the relationship between the scene radiance of the rays that project into the left and the right images? Let us suppose that the angle between the two rays is small. The bidirectional reflectance function of the scene's surface will vary little, even when it is a complex function of the lighting and viewing geometry. Alternatively, let us suppose that the surface exhibits Lambertian reflectance. The scene radiance is independent of the viewing angle; hence, the two rays will have identical scene radiances, irrespective of the size of the angle between them. For the model presented here, we assume that the scene radiance of the two rays emanating from a single scene point is identical. This assumption is a reasonable one when the scene depth is large compared with the separation distance between the two optical systems, or when the surface exhibits approximate Lambertian reflectance. It should be noted that there are no assumptions about albedo (i.e., it is not assumed to be constant across the surface); nor, in fact, is it even necessary to know or calculate the albedo of the surface. Since image irradiance is proportional to scene radiance, we can write, for corresponding image points,

I_L(x_L) = I_R(x_R) .

I_L and I_R are the image irradiance measurements for the left and right images, respectively. It should be understood that these measurements at positions x_L and x_R are made at image points that correspond to a single scene point x.
While the above assumption is used in the following formulation, we see little difficulty in being less restrictive by allowing, for example, a change in linear contrast between the image profile and the real profile. 4 Integral Equation Let us consider a single scene point x. For this scene point, we can write I_L(x) = I_R(x). This equality relation holds for any function F of the image irradiance, that is, F(I_L(x)) = F(I_R(x)). If we let p select the particular function we want to use from some set of functions, we shall write

F(p, I_L(x)) = F(p, I_R(x)) .

The set of functions we use will be the set of all nonlinear functions for which F(p1, I) ≠ α(p1, p2) F(p2, I) for all p1, p2. A specific example of such a function is F(p, I) = I^p. The foregoing functions relate to the image irradiance. We can combine them with expressions that are functions of the stereo geometry. In particular, for the as yet unspecified function T of -x/z, we can write

F(p, I_L(x)) (d/dx) T( -x / z(x) ) = F(p, I_R(x)) (d/dx) T( -x / z(x) ) .

We have written z as z(x) to emphasize the fact that the depth profile we wish to recover, z, is a function of x. Should a more concrete example of our approach be required, we could select T(ζ) = ln(ζ), which, when combined with the example for F above, gives us

I_L^p(x) (d/dx) ln( -x / z(x) ) = I_R^p(x) (d/dx) ln( -x / z(x) ) .

We now propose to develop the left-hand side of the above expression in terms of quantities that can be measured in the left stereo image, and to develop the right-hand side in terms of quantities from the right stereo image. If we were to substitute x_L for x in the left-hand side of the above expression and x_R for x in the right-hand side, we would have to know the correspondence between x_L and x_R. This is a requirement we are trying to avoid. Instead, we shall first integrate both sides of the above expression with respect to x before attempting substitution for the variable x:

∫_a^b F(p, I_L(x)) (d/dx) T( -x / z(x) ) dx = ∫_a^b F(p, I_R(x)) (d/dx) T( -x / z(x) ) dx ,

where a and b are specific scene points.
Now let us change the integration variable in the left-hand side of the above expression to x_L, and the integration variable in the right-hand side to x_R:

∫_{a_L}^{b_L} F(p, I_L(x_L)) w(x_L) dx_L = ∫_{a_R}^{b_R} F(p, I_R(x_R)) u(x_R) dx_R ,   (1)

where

w(x_L) = (d/dx_L) T( x_L / f_L )    and    u(x_R) = (d/dx_R) T( g_R(x_R) - ( s + g_R(x_R) h ) / z(x_R) ) ;

here we have used the fact that, along a ray, -x/z equals x_L/f_L in left-image coordinates and equals g_R(x_R) - (s + g_R(x_R)h)/z in right-image coordinates. Equation (1) is our formulation of the stereo integral equation. Given that we have two image irradiance profiles that are matched at their end points - i.e., a_L and b_L in the left image correspond, respectively, to a_R and b_R in the right image - then Equation (1) expresses the relationship between the image irradiance profiles and the scene depth. It will be noted that the left-hand side of Equation (1) is composed of measurements that can be made in the left image of the stereo pair, while the measurements in the right-hand side are those that can be made in the right image. In addition, the right-hand side has a function of the scene depth as a variable. Our goal is to recover z as a function of the right-image coordinates x_R, not as a function of the world coordinates x. Once we have z(x_R), we can transform it into any coordinate frame whose relationship to the image coordinates of the right image is known. The recovery of z(x_R) is a two-stage process. After first solving Equation (1) for u(x_R), we integrate the latter to find z(x_R) by using

T( g_R(x_R) - ( s + g_R(x_R) h ) / z(x_R) ) = T( g_R(a_R) - ( s + g_R(a_R) h ) / z(a_R) ) + ∫_{a_R}^{x_R} u(x'_R) dx'_R .

In this expression one should note that z(a_R) is known, since a_R and a_L are corresponding points. It is instructive, as regards the nature of the formulation, to look at the means of solving this equation when we have discrete data.
In particular, let us take another look at an example previously introduced, namely F(p, I) = I^p, and hence

∫_{a_L}^{b_L} ( I_L^p(x_L) / x_L ) dx_L = ∫_{a_R}^{b_R} I_R^p(x_R) u(x_R) dx_R ,

and then

z(x_R) = ( s + g_R(x_R) h ) / ( g_R(x_R) - K exp( ∫_{a_R}^{x_R} u(x'_R) dx'_R ) ) ,

where

K = g_R(a_R) - ( s + g_R(a_R) h ) / z(a_R) .

Suppose that we have image data at points x_L1, x_L2, ..., x_Ln that lie between the left integral limits and, similarly, that we have data from the right image, between its integral limits, at points x_R1, x_R2, ..., x_Rn. Further, let us approximate the integrals as follows:

Σ_{j=1}^{n} I_L^p(x_Lj) / x_Lj = Σ_{j=1}^{n} I_R^p(x_Rj) u(x_Rj) .

In actual calculation, we may wish to use a better integral formula than that above (particularly at the end points), but this approximation enables us to demonstrate the essential ideas without being distracted by the details. Although the above approximation holds for all values of p, let us take a finite set of values, p_1, p_2, ..., p_m, and write the approximation out as a matrix equation, namely

[ I_R^{p_i}(x_Rj) ] [ u(x_Rj) ] = [ Σ_{j=1}^{n} I_L^{p_i}(x_Lj) / x_Lj ] ,   i = 1, ..., m ,

an m x n matrix of right-image measurements multiplying the n-vector of unknowns u(x_Rj) to give an m-vector of left-image measurements. Let us now recall what we have done. We have taken a set of image measurements, along with measurements that are just some nonlinear functions of these image measurements, multiplied them by a function of the depth, and expressed the relationship between the measurements made in the right and left images. Why should one set of measurements, however purposefully manipulated, provide enough constraints to find a solution with almost the same number of variables as there are image measurements? The matrix equation helps in our understanding of this. First, we are not trying to find the solution for the scene depth at each point independently, but rather for all the points simultaneously. Second, we are exploiting the fact that, if the functions of image irradiance used by us are nonlinear, then each equation represented in the above matrix is linearly independent and constrains the solution.
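As a concrete illustration of assembling the discrete system just described, the sketch below builds one constraint row per exponent p for the illustrative choice F(p, I) = I^p with T(ζ) = ln(ζ). The irradiance profiles, sample points, and variable names are synthetic stand-ins of ours, not the paper's data:

```python
import numpy as np

n = 64
x_L = np.linspace(1.0, 2.0, n)        # left-image sample positions
x_R = np.linspace(1.1, 2.1, n)        # right-image sample positions
I_L = 1.0 + 0.5 * np.sin(3.0 * x_L)   # synthetic left irradiance profile
I_R = 1.0 + 0.5 * np.sin(3.0 * x_R)   # synthetic right irradiance profile

# One constraint row per exponent p_i:
#   sum_j I_L(x_Lj)^p / x_Lj  =  sum_j I_R(x_Rj)^p * u(x_Rj)
ps = np.linspace(0.5, 4.0, 2 * n)     # about twice as many rows as unknowns
A = I_R[None, :] ** ps[:, None]       # A[i, j] = I_R(x_Rj)^{p_i}
b = (I_L[None, :] ** ps[:, None] / x_L[None, :]).sum(axis=1)

print(A.shape, b.shape)               # -> (128, 64) (128,)
```

The unknowns u(x_Rj) would then be recovered by solving A u ≈ b, after which the depth profile follows by the integration step described in the text.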
There is another way of saying this: even though we have only one set of measurements, requiring that the one depth profile relate the irradiance profile in the left image to the irradiance profile in the right image, and also relate the irradiance-squared profile in the left image to the irradiance-squared profile in the right image, and also relate the irradiance-cubed profile, etc., provides constraints on that depth profile. The question arises as to whether there are sufficient constraints to enable a unique solution to the above equations to be found. This question really has three parts. Does an integral equation of the form of Equation (1) have a unique solution? This is impossible to answer when the irradiance profiles are unknown; even when they are known, an exceedingly difficult problem confronts us [2,4]. Does the discrete approximation, even with an unlimited number of constraints, have the same solution as the integral equation? Again, this is extremely difficult to answer even when the irradiance profiles are known. The final question relates to the finite set of constraint equations, such as those shown above. Does the matrix equation have a unique solution, and is it the same as the solution to the integral equation? Yes, it does have a unique solution - or at least we can impose solution requirements that make a unique answer possible. But the question of whether the solution we find is a solution of the integral equation remains unanswered. From an empirical standpoint, we would be satisfied if the solution we recover is a believable depth profile. Issues about sensitivity to noise, function type, and the form of the integral approximation will be discussed later in the section on solution methods. Let us return to considerations of the general equation, Equation (1). We have just remarked upon the difficulty of solving this equation, so any additional constraints we can impose on the solution are likely to be beneficial.
In the previous section on geometrical constraints, we noted that an acceptable solution has z < 0 and hence U(x_R) < 0. Unfortunately, solution methods for matrix equations (that have real coefficients) find solutions that are usually unrestricted over the domain of the real numbers. To impose the restriction U(x_R) < 0, we follow the methods of Stockham [7]; instead of using the function itself, we formulate the problem in terms of the logarithm of the function. Consequently, in Equation (1) we usually set T(z) = ln(z), just as we have done in our example. It should be noted that use of the logarithm also restricts us to z > 0; to construct the z < 0 side of the stereo reconstruction problem, we have to employ reflected coordinate systems for the world and image coordinates. Use of the logarithmic function then ensures z < 0 and allows us to use standard matrix methods for solving the system of constraint equations. Once we have found the solution to the matrix equation, we can integrate that solution to find the depth profile. In our previous example, we picked F(p, I) = I^p. In our experiments, we have used combinations of different functions to establish a particular matrix equation. For example, we have used functions such as

F(p, I) = I^p ,   F(p, I) = |cos pI| ,   F(p, I) = |sin pI| ,   F(p, I) = (p + I)^{1/2} ,

and we often use image density rather than image irradiance. The point to be made here is that the form of the function F in the general equation is unrestricted, provided that it is nonlinear. Equation (1) provides a framework for investigating stereo reconstruction in a manner that exploits the global nature of the solution. This framework arises from the realization that nonlinear functions provide a means of creating an arbitrary number of constraints on that solution. In addition, the framework provides a means of avoiding the correspondence problem, except at the end points, for we never match points.
Solutions have the same resolution as the data, and this allows us to avoid the interpolation problem.

5 Solution Methods

Equation (1) is an inhomogeneous Fredholm equation of the first kind whose kernel function is the function F(p, I_R(x_R)). To solve this equation, we create a matrix equation in the manner previously shown in our example. We usually approximate the integral with the trapezoidal rule, where the sample spacing is that corresponding to the image resolution. Typically we use more than one functional form for the function F, each of which is parameterized by p. We have noticed that the sensitivity of the solution to image noise is affected by the choice of these functions, although we have not yet characterized this relationship. In the matrix equation, we usually pick the number of rows to be approximately twice the number of columns. However, owing to the rank-deficient nature of the matrix and hence to the selection of our solution technique, the solution we recover is only marginally different from the one obtained when we use square matrices. Unfortunately, there are considerable numerical difficulties associated with solving this type of integral equation by matrix methods. Such systems are often ill-conditioned, particularly when the kernel function is a smooth function of the image coordinates. It is easy to see that, if the irradiance function varies smoothly with image position, each column of the matrix will be almost linearly dependent on the next. Consequently, it is advisable to assume that the matrix is rank-deficient and to utilize a procedure that can estimate the actual numerical rank. We use singular value decomposition to estimate the rank of the matrix; we then set the small singular values to zero and find the pseudoinverse of the matrix. Examples of results obtained with this procedure are shown in the following section.
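The rank-truncated pseudoinverse solve described above can be sketched as follows. This is a minimal NumPy sketch, not the authors' implementation; the function name and the relative threshold `tol` used to estimate the numerical rank are my assumptions (the paper does not state a threshold).

```python
import numpy as np

def solve_rank_deficient(A, b, tol=1e-8):
    """Solve A u ~= b for an ill-conditioned matrix via truncated SVD.

    Singular values below tol * (largest singular value) are treated
    as zero, which estimates the numerical rank; the pseudoinverse is
    then applied to b, as described in the text.
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    rank = int(np.sum(s > tol * s[0]))      # estimated numerical rank
    s_inv = np.zeros_like(s)
    s_inv[:rank] = 1.0 / s[:rank]           # small singular values set to zero
    return Vt.T @ (s_inv * (U.T @ b))       # pseudoinverse times b
```

Truncation discards the nearly dependent directions contributed by smooth irradiance profiles, which is what keeps the recovered solution stable.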
An alternative approach to solving the integral equation is to decompose the kernel function and the dependent variable into orthogonal functions, and then to solve for the coefficients of this decomposition using the aforementioned techniques. We have used Fourier spectral decomposition for this purpose. The Fourier coefficients of the depth function were then calculated by solving a matrix equation composed of the Fourier components of image irradiance. However, the resultant solution did not vary significantly from that obtained without spectral decomposition. While the techniques outlined can handle various cases, they are not as robust as we would like. We are actively engaged in overcoming the difficulties these solution methods encounter because of noise and irradiance discontinuities.

6 Results and Discussion

Our examples make use of synthetic image profiles that we have produced from known surface profiles. The irradiance profiles were generated under the assumptions that the surface was a Lambertian reflector and that the source of illumination was a point source directly above the surface. This choice was made so that our assumption concerning image irradiance, namely, that I(x_L) = I(x_R) at matched points, would be complied with. In addition, synthetic images derived from a known depth profile allow comparison between the recovered profile and ground truth. Nonetheless, our goal is to demonstrate these techniques on real-world data. It should be noted that the examples used have smooth irradiance profiles; they therefore represent a worst case for the numerical procedures, as the matrix is most ill-conditioned under these circumstances. Our first example, illustrated in Figure 2, is of a flat surface with constant albedo.

Figure 2: Planar Surface. At the upper left is depicted the recovered depth from the two irradiance profiles shown in the lower half. For comparison, the actual depth is shown in the upper right.
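A synthetic irradiance profile of the kind described can be generated along the following lines. The paper does not spell out its generator, so this is a sketch under stated assumptions: a distant overhead source of unit strength and unit albedo, for which the standard Lambertian shading rule gives I(x) = cos(theta), with tan(theta) = dz/dx the surface slope. The function name is hypothetical.

```python
import numpy as np

def lambertian_profile(z, dx=1.0, albedo=1.0):
    """Synthetic 1-D irradiance for a Lambertian surface lit from
    directly overhead (distant source assumed).

    The surface normal tilts from the vertical by theta, where
    tan(theta) = dz/dx, so I(x) = albedo / sqrt(1 + (dz/dx)**2).
    """
    dzdx = np.gradient(z, dx)               # finite-difference slope
    return albedo / np.sqrt(1.0 + dzdx**2)  # cos(theta) shading
```

A flat profile yields constant irradiance, matching the "worst case" smooth profiles mentioned in the text.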
In the lower half of the figure, the left and right irradiance profiles are shown, while in the upper right, ground truth - the actual depth profile as a function of the image coordinates of the right image, x_R - is shown. The upper left of the figure contains the recovered solution. The limits of the recovered solution correspond to our selection of the integral end points. This solution was obtained from a formulation of the problem in which we used image density instead of irradiance in the kernel of the integral equation, and for which the function T was ln(z). The second example, Figure 3, shows a spherical surface with constant albedo, except for the stripe we have painted across the surface. The recovered solution was produced from the same formulation of the problem as in the previous example. The ripple effects in the recovered profile appear to have been induced by the details of the recovery procedure; the attendant difficulties are in part numerical in nature. However, any changes made in the actual functions used in the kernel of the equation do have effects that cannot be dismissed as numerical inaccuracies. As we add noise to the irradiance profiles, the solutions tend to become more oscillatory. Although we suspect numerical problems, we have not yet ascertained the method's range of effectiveness. This aspect of our approach, however, is being actively investigated. In the formulation presented here, we have used a particular function of the stereo geometry, T, in the derivation of Equation (1), but we are not limited to this particular form. Its attractiveness is based on the fact that, if we use this particular function of the geometry, the side of the integral equation related to the left image is independent of the scene depth. We have used other functional forms, but these result in more complicated integral equations. Equations of these forms have been subjected to relatively little study in the mathematical literature.
Consequently, the effectiveness of solution methods on these forms remains unknown.

Figure 3: Spherical Surface with a Painted Stripe.

In most of our study we have taken T(z) to be ln(z), and the properties of this particular formulation should be noted. It is necessary to process the right half of the visual field separately from the left half. The integral is more sensitive to image measurements near the optical axis than to those measurements off-axis. In fact, the irradiance is weighted by the reciprocal of the distance off-axis. If we were interested in an integral approximation exhibiting uniform error across the extent of that integral, we might expect measurements taken at interval spacings proportional to the off-axis distance to be appropriate. While it is obvious that two properties of a formulation that match those of the human visual system do not in themselves give cause for excitement, it is worthy of note that the formulation presented is at least not at odds with the properties of the human stereo system. On balance, we must say that significant work still lies ahead before this method can be applied to real-world images. While the details of the formulation may be varied, the overall form presented in Equation (1) seems the most promising. Nonetheless, solution methods for this class of equations are known to be difficult and, in particular, further efforts towards the goal of selecting appropriate numerical procedures are essential. In formulating the integral equation, we took a function of the image irradiance and multiplied it by a function of the stereo geometry. To introduce image measurements, we changed variables in the integrals. If we had not used the derivative of the function of the stereo geometry, we would have had to introduce derivative terms such as dz/dx_L and dz/dx_R into the integrals. By introducing the derivative we avoided this.
However, we did not really have to select the function of the geometry for this purpose; we could equally well have introduced the derivative through the function of image irradiance. We would then have exchanged the calculation of irradiance gradients for the direct recovery of scene depth (thus eliminating the integration step we now use). Our selection of the formulation presented here was based on the belief that irradiance gradients are quite susceptible to noise; consequently, we preferred to integrate the solution rather than differentiate the data. In a noise-free environment, however, both approaches are equivalent (as integration by parts will confirm).

7 Conclusion

The formulation presented herein for the recovery of scene depth from a stereo pair of images is based not on matching of image features, but rather on determining which surface in the world is consistent with the pair of image irradiance profiles we see. The solution method does not attempt to determine the nature of the surface locally; it looks instead for the best global solution. Although we have yet to demonstrate the procedure on real images, it does offer the potential to deal in a new way with problems associated with albedo change, occlusions, and discontinuous surfaces. It is the approach, rather than the details of a particular formulation, that distinguishes this method from conventional stereo processing. This formulation is based on the observation that a global solution can be constrained by manufacturing additional constraints from nonlinear functions of local image measurements. Image analysis researchers have generally tried to use linear-systems theory to perform analysis; this has led them, consequently, to replace (at least locally) nonlinear functions with their linear approximation.
Here we exploit the nonlinearity; "What is one man's noise is another man's signal." While the presentation of the approach described here is focused on stereo problems, its essential ideas apply to other image analysis problems as well. The stereo problem is a convenient problem on which to demonstrate our approach; the formulation of the problem reduces to a linear system of equations, which allows the approach to be investigated without diversion into techniques for solving nonlinear systems. We remain actively interested in the application of this methodology to other problems, as well as in the details of the numerical solution.

References

[1] Boult, T.E., and J.R. Kender, "On Surface Reconstruction Using Sparse Depth Data," Proceedings: Image Understanding Workshop, Miami Beach, Florida, December 1985.

[2] Courant, R., and D. Hilbert, Methods of Mathematical Physics, Interscience Publishers, Inc., New York, 1953.

[3] Grimson, W.E.L., "An Implementation of a Computational Theory of Visual Surface Interpolation," Computer Vision, Graphics, and Image Processing, Vol. 22, pp 39-69, April 1983.

[4] Hildebrand, F.B., Methods of Applied Mathematics, 2nd ed., Prentice-Hall, Inc., Englewood Cliffs, New Jersey, 1965.

[5] Smith, G.B., "A Fast Surface Interpolation Technique," Proceedings: Image Understanding Workshop, New Orleans, Louisiana, October 1984.

[6] Smith, G.B., "Stereo Reconstruction of Scene Depth," IEEE Proceedings on Computer Vision and Pattern Recognition, San Francisco, California, June 1985, pp 271-276.

[7] Stockham, T.G., "Image Processing in the Context of a Visual Model," Proceedings of the IEEE, Vol. 60, pp 828-842, July 1972.

[8] Terzopoulos, D., "Multilevel Computational Processes for Visual Surface Reconstruction," Computer Vision, Graphics, and Image Processing, Vol. 24, pp 52-96, October 1983.
PARTS: STRUCTURED DESCRIPTIONS OF SHAPE

Alex P. Pentland
Artificial Intelligence Center, SRI International, Menlo Park, California
and Center for the Study of Language and Information, Stanford University

ABSTRACT

A shape representation is presented that has been shown competent to accurately describe an extensive variety of natural forms (e.g., people, mountains, clouds, trees), as well as man-made forms, in a succinct and natural manner. The approach taken in this representational system is to describe scene structure at a scale that is similar to our naive perceptual notion of "a part," by use of descriptions that reflect a possible formative history of the object, e.g., how the object might have been constructed from lumps of clay. For this representation to be useful it must be possible to recover such descriptions from image data; we show that the primitive elements of this representation may be recovered in an overconstrained and therefore potentially reliable manner.

1 Introduction

Most models used in vision and reasoning tasks have been of only two kinds: high-level, specific models, e.g., of people or houses, and low-level models of, e.g., edges. The reason research has almost exclusively focused on these two types of model is a result more of historical accident than conscious decision. The well-developed fields of optics, material science and physics (especially photometry) have provided well worked out and easily adaptable models of image formation, while engineering, especially recent work in computer-aided design, has provided standard ways of modeling industrial parts, airplanes and so forth. Both the use of image formation models and of specialized models has been heavily investigated. It appears to us that both types of models, although useful for many applications, encounter insuperable difficulties when applied to the problems faced by, for instance, a general purpose robot.
In the next two subsections we will examine both types of model and outline their advantages and disadvantages for recovering and reasoning about important scene information. In the remainder of this section we will then motivate, develop and investigate an alternative category of models.

* This research was made possible by National Science Foundation Grant No. DCR85-19283, by Defense Advanced Research Projects Agency Contract No. MDA 903-83-C-0027, and by a grant from the Systems Development Foundation. I wish to thank Marty Fischler, Ruzena Bajcsy and Andy Witkin for their comments and insights.

Figure 1: A scene constructed of 100 primitives, less than 1K bytes of information.

1.1 Models of Image Formation

Most recent research in computational vision has focused on using point-wise models borrowed from optics, material science and physics. This research has been pursued within the general framework originally suggested by Marr [1] and by Barrow and Tenenbaum [2], in which vision proceeds through a succession of levels of representation. Processing is primarily data-driven (bottom-up), i.e., the initial level is computed directly from local image features, and higher levels are then computed from the information contained in small regions of the preceding levels.

Problems for vision. Despite its prevalence, there are serious problems that seem to be inherent to this research paradigm. Because scene structure is underdetermined by the local image data [3], researchers have been forced to make unverifiable assumptions about large-scale structure (e.g., smoothness, isotropy) in order to derive useful information from their local analyses of the image. In the real world, unfortunately, such assumptions are often seriously in error: in natural scenes the image formation parameters change in fairly arbitrary ways from point to point, making any assumption about local context quite doubtful.
As a result, those techniques that rely on strong assumptions such as isotropy or smoothness have proved fragile and error-prone; they are simply not useful for many natural scenes. That such difficulties have been encountered should not, perhaps, be too surprising. It is easily demonstrated (by looking through a viewing or reduction tube) that people can obtain little information about the world from a local image patch taken out of its context. It is also clear that detailed, analytic models of the image formation process are not essential to human perception; humans function quite well with range finder images (where brightness is proportional to distance rather than a function of surface orientation), electron microscope images (which are approximately the reverse of normal images), and distorted and noisy images of all kinds - not to mention drawings [4].

From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved.

Problems for reasoning. Perhaps even more fundamentally, however, even if depth maps and other maps of intrinsic surface properties could be reliably and densely computed, how useful would they be in reasoning tasks? Industrial vision work using laser range data has demonstrated that the depth maps, reflectance maps and the other maps of the 2-1/2D sketch are still basically just images. Although useful for obstacle avoidance and other very simple tasks, they still must be segmented, interpreted and so forth before they can be used for any more sophisticated task [5].

1.2 Specialized Models

The alternative to models of image formation has been engineering-style representations, e.g., CAD-CAM models of specific objects that are to be identified and located. Such detailed, specific models have provided virtually all of the success stories in machine vision; nonetheless, such models have important inherent limitations.

Problems for vision.
As the object's orientation varies, these models produce a very large number of different pixel configurations. The large number of possible appearances for such models makes the problem of recognizing them very difficult - unless an extremely simplified representation is employed. The most common type of simplified representation is that of a wireframe model whose components correspond to the imaged edges. The use of an impoverished representation, however, generally means that the flexibility, reliability and discriminability of the recognition process is limited. Thus research efforts employing specific object models have floundered whenever the number of objects to be recognized becomes large, when the objects may be largely obscured, or when there are many unknown objects also present in the scene.

Problems for reasoning. An even more substantive limitation of systems that employ only high-level, specific models is that there is no way to learn new objects: new models must be specially entered, usually by hand, into the database of known models. This is a significant limitation, because the ability to encounter a new object, enter it into a catalog of known objects, and thereafter recognize it is an absolute requirement of a truly general purpose robot.

1.3 Part and Process Models

In response to these difficult problems some researchers have begun to search for a third type of model, one with a grain size intermediate between the point-wise models of image formation and the complex, specific models of particular objects [6,7].
Recent research in graphics, biology, and physics has provided us with good reason to believe that it may be possible to accurately describe our world by means of a few, commonly-occurring types of formative processes [1,8,9,10]; i.e., that our world can be modeled as a relatively small set of generic processes - for instance, bending, twisting, or interpenetration - that occur again and again, with the apparent complexity of our environment being produced from this limited vocabulary by compounding these basic forms in myriad different combinations. Moreover, some modern psychologists [18,19,20], as well as the psychologists of the classic Gestalt movement, have argued that the initial stages of human perception function primarily to discover image features that indicate the presence of these generic categories of shape structure. They have presented strong evidence showing that we conceive of the world in terms of parts, and that the first stages of human perception are primarily concerned with detecting features that indicate the structure of those parts. This part-structure, then, seems to form the building blocks upon which we build the rest of our perceptual interpretation. Such part-and-process models offer considerable potential for reasoning tasks, because they describe the world in something like "natural kind" terms: they speak qualitatively of whole forms and of relations between parts of objects, rather than of local surface patches or of particular instances of objects. It seems, for instance, that we employ such intermediate-grain descriptions in commonsense reasoning, learning, and analogical reasoning [13,14,15]. The problem with forming such "parts" models is that they must be complex enough to be reliably recognizable, and yet simple enough to reasonably serve as building blocks for specific object models. Current 3-D machine vision systems, for instance, typically use "parts" consisting of rectangular solids and cylinders.
Unfortunately, such a representation is only capable of an extremely abstracted description of most natural and biological forms. It cannot accurately and succinctly describe most natural animate forms or produce a succinct description of complex inanimate forms such as clouds or mountains. If we retreat from cylinders to generalized cylinders we can, of course, describe such shapes accurately. The cost of such retreat is that we must introduce several 1-D functions describing the axis and cross-section shape; this makes the representation neither succinct nor intuitively attractive.

2 A Representation

The idea behind this representational system is to provide a vocabulary of models and operations that will allow us to model our world as the relatively simple composition of component "parts," parts that are reliably recognizable from image data. The most primitive notion in this representation is analogous to a "lump of clay," a modeling primitive that may be deformed and shaped, but which is intended to correspond roughly to our naive perceptual notion of "a part." For this basic modeling element we use a parameterized family of shapes known as superquadrics [10,11], which are described (adopting the notation cos η = C_η, sin ω = S_ω) by the following equation:

x(η, ω) = ( a_1 C_η^{ε1} C_ω^{ε2} ,  a_2 C_η^{ε1} S_ω^{ε2} ,  a_3 S_η^{ε1} ) ,

where x(η, ω) is a three-dimensional vector that sweeps out a surface parameterized in latitude η and longitude ω, with the surface's shape controlled by the parameters ε1 and ε2. This family of functions includes cubes, cylinders, spheres, diamonds and pyramidal shapes, as well as the round-edged shapes intermediate between these standard shapes. Some of these shapes are illustrated in Figure 2(a). Superquadrics are, therefore, a superset of the modeling primitives currently in common use.

Figure 2: (a) A sampling of the basic forms allowed, (b) deformations of these forms, (c) a chair formed from Boolean combinations of appropriately deformed superquadrics.
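The superquadric surface can be sampled directly from its parametric equation. The sketch below follows the standard superquadric formulation; the signed-power helper and the function name are my own, and the signed exponent (which keeps fractional powers of negative cosines well defined) is a common implementation convention rather than something stated in the text.

```python
import numpy as np

def superquadric(eta, omega, a=(1.0, 1.0, 1.0), e1=1.0, e2=1.0):
    """Point on a superquadric surface.

    eta is latitude, omega is longitude; e1 and e2 control
    squareness/roundness, and a gives the three axis lengths.
    """
    spow = lambda x, e: np.sign(x) * np.abs(x) ** e   # signed exponent
    Ce, Se = np.cos(eta), np.sin(eta)
    Cw, Sw = np.cos(omega), np.sin(omega)
    return np.array([a[0] * spow(Ce, e1) * spow(Cw, e2),
                     a[1] * spow(Ce, e1) * spow(Sw, e2),
                     a[2] * spow(Se, e1)])
```

With e1 = e2 = 1 the surface is an ellipsoid; pushing the exponents toward 0 squares it off toward a cube, and larger exponents pinch it toward diamond and pyramidal shapes.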
These basic "lumps of clay" (with various symmetries and profiles) are used as prototypes that are then deformed by stretching, bending, twisting or tapering, and then combined using Boolean operations to form new, complex prototypes that may, recursively, again be subjected to deformation and Boolean combination [12]. As an example, the back of a chair is a rounded-edge cube that has been flattened along one axis, and then bent somewhat to accommodate the rounded human form. The bottom of the chair is a similar object, but rotated 90°, and by "or-ing" these two parts together with elongated rectangular primitives describing the chair legs we obtain a complete description of the chair, as illustrated in Figure 2(c). We have found that this representational system has a surprisingly powerful generative power that allows the creation of a tremendous variety of form, such as is illustrated by Figure 1. This descriptive language is designed to describe shapes in a manner that corresponds to a possible formative history, e.g., how one would create a given shape by combining lumps of clay. Thus the description provides us with an explanation of the image data in terms of the interaction of generic formative processes. This primitive explanation can then be refined by application of specific world knowledge and context, eventually deriving causal connections, affordances, and all of the other information that makes our perceptual experience appear so rich and varied. For instance, if we have parsed the chair in Figure 2(c) into its constituent parts we could deduce that the bottom of the chair is a stable platform and thus might be useful as a seat, or we might hypothesize that the back of the chair can rigidly move relative to the supporting rod, given the evidence that they are separate "parts" and thus likely separately formed.
We believe that this process-oriented, possible-history form of representation will prove to be extremely useful for commonsense reasoning tasks.

2.1 Building 3-D models

This type of representation seems to produce models that represent the shape "naturally." We have, for instance, performed a protocol analysis in which we found [14] that when adult human subjects are required to verbally describe imagery with completely novel content, their typical spontaneous strategy is to employ a descriptive system analogous to this one - i.e., form is described by modifying and combining prototypes. Moreover, the non-proper-noun terms used were limited and stereotyped: they resorted largely to terms indicating interpenetration (Boolean combination), squareness-roundness, bending, tapering, and stretching. We have also investigated the psychological reality of this descriptive framework using the psychophysical techniques developed by Treisman [17]. Using this experimental paradigm and employing monocular imagery depicting shaded, perspective views of three-dimensional forms, we have collected experimental evidence indicating [21] that convexity-concavity (equivalent to Boolean combination), squareness-roundness, bending, tapering and relative axis size (stretching) may all be preattentively perceived; that is, there appear to be parallel "detectors" that search for the presence (but not absence) of these features within a 3-D scene. We have also attempted to verify this psychological evidence in a more practical manner. The fact that "natural" man-machine interaction requires that the machine use a representation that closely matches that of the human operator provides a practical test for our descriptive framework. That is, if an interface based on this representation appears "natural" to users, then we can conclude that the representation must closely match at least one way that people think about 3-D shapes.
We have, therefore, constructed a 3-D modeling system called "SuperSketch" that employs the shape representation described here. This real-time, interactive* modeling system is implemented on the Symbolics 3600, and allows users to interactively create "lumps," change their squareness/roundness, stretch, bend, and taper them, and finally to combine them using Boolean operations. This system was used to make the images in this paper. We have found that interaction is surprisingly effortless: it took less than a half-hour to assemble the faces in Figure 1, and about four hours total to make the complete Figure 1. This is in rather stark contrast to more traditional 3-D modeling systems. It thus appears that the primitives, operations and combining rules used by the computer closely match the way that the human operators think about 3-D shape.

* Because these forms have an underlying analytical form, we can use fast, qualitative approximations to accomplish hidden surface removal, intersection and image intensity calculations in "real time"; e.g., a "lump" can be moved, hidden surface removal accomplished, and the result drawn as a 200-polygon line-drawing approximation in 1/8th of a second.

2.2 Biological forms

In Figure 1 (as in all cases examined to date) when we try to model a particular 3-D form we find that we are able to describe - indeed, it is quite natural to describe - the shape in a manner that corresponds to the organization our perceptual apparatus imposes upon the image. That is, the components of the description match one-to-one with our naive perceptual notion of the "parts" in the figure, e.g., the face in Figure 1 is composed of primitives that correspond exactly to the cheeks, chin, nose, forehead, ears, and so forth. This correspondence indicates that we are on the right track; e.g., that this representation will be useful in understanding commonsense reasoning tasks.
Similarly, the ability to make the right "part" distinctions offers hope that we can form qualitative descriptions of specific objects ("Ted's face") or of classes of objects ("a long, thin face") by specifying constraints on part parameters and on relations between parts, in the manner of Winston [15]. Finally, we note that the extreme brevity of these descriptions makes many otherwise difficult reasoning tasks relatively simple; e.g., even NP-complete problems can be easily solved when the size of the problem is small enough. The human bodies shown in Figure 1, for instance, require combining only 45 primitives, or approximately 450 bytes of information (these informational requirements are not a function of body position). Similarly, the description for the face requires the combination of only 18 primitives, or fewer than 200 bytes of information.

2.3 Complex inanimate forms

This method for representing the three-dimensional world, although excellent for biological and man-made forms, becomes awkward when applied to complex natural surfaces such as mountains or clouds. The most pronounced difficulty is that, like previously proposed representations, our superquadric lumps-of-clay representation becomes implausibly complex when confronted with the problem of representing, e.g., a mountain, a crumpled newspaper, a bush or a field of grass. Why do such introspectively simple shapes turn out to be so hard to represent? Intuitively, the main source of difficulty is that there is too much information to deal with. Such objects are amazingly bumpy and detailed; there is simply too much detail, and it is too variable. People escape this overwhelming complexity by varying the level of descriptive abstraction - the amount of detail captured - depending on the task. In cases like the crumpled newspaper, or when recognizing classes of objects such as "a mountain" or "a cloud," the level of abstraction is very high.
Almost no specific detail is required, only that the crumpledness of the form comply with the general physical properties characteristic of that type of object. In recognizing a specific mountain, however, people will require that all of the major features be identical, although they typically ignore smaller details. Even though these details are "ignored," however, they must still conform to the constraints characteristic of that type of object: we would never mistake a smooth cone for a rough-surfaced mountain even if it had a generally conical shape.

Our previous work with fractal models of natural surfaces [16] allows us to duplicate this sort of physically-meaningful abstraction from the morass of details encountered in natural scenes. It lets us describe a crumpled newspaper by specifying certain structural regularities - its crumpledness, in effect - and leave the rest as variable detail. It lets us specify the qualitative shape - i.e., the surface's roughness - without (necessarily) worrying about the details.

Figure 3: (a) - (c) show the construction of a fractal shape by successive addition of smaller and smaller features, with number of features and amplitudes described by the ratio 1/r; (d) shows spherical shapes with surface crenulations ranging from smooth (r ≈ 0) to rough (r ≈ 1).

We may construct fractal surfaces by using our superquadric "lumps" to describe the surface's features; specifically, we can use the recursive sum of smaller and smaller superquadric lumps to form a true fractal surface. This construction is illustrated in Figures 3(a) - (c). We start by specifying the surface's qualitative appearance - its roughness - by picking a ratio r, 0 ≤ r ≤ 1, between the number of features of one size to the number of features that are twice as large. This ratio describes how the surface varies across different scales (resolutions, spatial frequency channels, etc.)
and is related to the surface's fractal dimension D by D = T + r, where T is the topological dimension of the surface. We then randomly place n² large bumps on a plane, giving the bumps a Gaussian distribution of altitude (with variance σ²), as seen in Figure 3(a). We then add to that 4n² bumps of half the size, and altitude variance σ²r², as shown in Figure 3(b). We continue with 16n² bumps of one quarter the size, and altitude variance σ²r⁴, then 64n² bumps of one eighth size, and altitude variance σ²r⁶, and so forth, as shown in Figure 3(c). The final result, shown in Figure 3(c), is a true Brownian fractal shape. Different shaped lumps will produce different textures on the resulting fractal surface.

When the larger components of this sum are matched to a particular object we obtain a description of that object that is exact to the level of detail encompassed by the specified components. This makes it possible to specify a global shape while retaining a qualitative, statistical description at smaller scales: to describe a complex natural form such as a cloud or mountain, we specify the "lumps" down to the desired level of detail by fixing the larger elements of this sum, and then we specify only the fractal statistics of the smaller lumps, thus fixing the qualitative appearance of the surface. Figure 3(d) illustrates an example of such a description. The overall shape is that of a sphere; to this specified large-scale shape, smaller lumps were added randomly. The smaller lumps were added with six different choices of r (i.e., six different choices of fractal statistics) resulting in six qualitatively different surfaces - each with the same basic spherical shape. The ability to fix particular "lumps" within a given shape provides an elegant way to pass from a qualitative model of a surface to a quantitative one - or vice versa.
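The recursive bump-summing recipe above can be sketched numerically. The following is a minimal illustration, not the paper's implementation: isotropic Gaussian bumps stand in for the superquadric "lumps," and the function name and parameter values are our own assumptions.

```python
import numpy as np

def fractal_surface(n=4, r=0.5, sigma=1.0, levels=4, grid=64, seed=0):
    """Sum random bumps at halving scales: n^2, 4n^2, 16n^2, ... bumps
    whose altitude variance falls as sigma^2 * r^(2k), per the ratio-r recipe."""
    rng = np.random.default_rng(seed)
    xs = np.linspace(0.0, 1.0, grid)
    X, Y = np.meshgrid(xs, xs)
    Z = np.zeros_like(X)
    width0 = 0.25                        # size of the largest bumps (assumed)
    for k in range(levels):
        count = (n * 2**k) ** 2          # n^2, 4n^2, 16n^2, ... bumps
        width = width0 / 2**k            # each level's bumps are half the size
        amp_sd = sigma * r**k            # altitude std dev shrinks by r per level
        cx, cy = rng.random(count), rng.random(count)
        h = rng.normal(0.0, amp_sd, count)
        for j in range(count):
            Z += h[j] * np.exp(-((X - cx[j])**2 + (Y - cy[j])**2) / (2 * width**2))
    return Z

Z = fractal_surface()
```

Varying the single parameter r between 0 and 1 moves the result from smooth to rough, as in Figure 3(d).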
3 Recognizing Our Modeling Primitives

The major difficulty in recovering such descriptions is that image data is mostly a function of surface normals, and not directly a function of the surface shape. This is because image intensity, texture anisotropy, contour shape, and the like - the information we have about surface shape - is largely determined by the direction of the surface normal. To recover the shape of a general volumetric primitive, therefore, we must (typically) first compute a dense depth map from information about the surface normals. The computation of such a depth map has been the major focus of effort in vision research over the last decade and, although the final results are not in, the betting is that such depth maps are impossible to obtain in the general, unconstrained situation. Even given such a depth map, the recovery of a shape description has proven extremely difficult, because the parameterization of the surface given in the depth map is generally unrelated to that of the desired description.

Because image information is largely a function of the surface normal, one of the most important properties of superquadrics is the simple "dual" relation between their surface normal and their surface shape. It appears that this dual relationship can allow us to form an overconstrained estimate of the 3-D parameters of such a shape from noisy or partial image data, as outlined by the following equations. The surface position vector of a superquadric with length, width and breadth $a_1$, $a_2$ and $a_3$ is (again writing $\cos\eta = C_\eta$, $\sin\omega = S_\omega$)

$$R(\eta,\omega) = \begin{pmatrix} a_1 C_\eta^{\varepsilon_1} C_\omega^{\varepsilon_2} \\ a_2 C_\eta^{\varepsilon_1} S_\omega^{\varepsilon_2} \\ a_3 S_\eta^{\varepsilon_1} \end{pmatrix} \qquad (1)$$

and the surface normal at that point is

$$n(\eta,\omega) = \begin{pmatrix} a_1^{-1} C_\eta^{2-\varepsilon_1} C_\omega^{2-\varepsilon_2} \\ a_2^{-1} C_\eta^{2-\varepsilon_1} S_\omega^{2-\varepsilon_2} \\ a_3^{-1} S_\eta^{2-\varepsilon_1} \end{pmatrix} \qquad (2)$$

Therefore the surface vector $x = (x, y, z)$ is dual to the surface normal vector $n = (x_n, y_n, z_n)$.
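The duality of Equations (1) and (2) can be checked numerically: the vector of Equation (2) should be orthogonal to both tangent directions of the surface of Equation (1). A small sketch (our own code, not the paper's; the signed-power convention and the sample parameter values are assumptions):

```python
import numpy as np

def spow(x, p):
    # signed power, the usual convention for superquadric exponents
    return np.sign(x) * np.abs(x)**p

def surface(eta, om, a, e1, e2):
    # Equation (1): surface position vector R(eta, omega)
    return np.array([a[0] * spow(np.cos(eta), e1) * spow(np.cos(om), e2),
                     a[1] * spow(np.cos(eta), e1) * spow(np.sin(om), e2),
                     a[2] * spow(np.sin(eta), e1)])

def normal(eta, om, a, e1, e2):
    # Equation (2): the "dual" surface normal
    return np.array([spow(np.cos(eta), 2 - e1) * spow(np.cos(om), 2 - e2) / a[0],
                     spow(np.cos(eta), 2 - e1) * spow(np.sin(om), 2 - e2) / a[1],
                     spow(np.sin(eta), 2 - e1) / a[2]])

# check: n is orthogonal to both finite-difference surface tangents
a, e1, e2 = (2.0, 1.0, 0.5), 0.8, 1.3
eta, om, h = 0.6, 0.7, 1e-6
t_eta = (surface(eta + h, om, a, e1, e2) - surface(eta - h, om, a, e1, e2)) / (2 * h)
t_om = (surface(eta, om + h, a, e1, e2) - surface(eta, om - h, a, e1, e2)) / (2 * h)
n = normal(eta, om, a, e1, e2)
```

The dot products n·t_eta and n·t_om come out at numerical zero, confirming that Equation (2) really is the surface normal.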
From (2), then, we have

$$\left(\frac{a_2}{a_1}\,\frac{y_n}{x_n}\right)^{1/(2-\varepsilon_2)} = \tan\omega \qquad (3)$$

We may also derive an alternative expression for $\tan\omega$ from (1):

$$\left(\frac{a_1}{a_2}\,\frac{y}{x}\right)^{1/\varepsilon_2} = \tan\omega \qquad (4)$$

Combining these expressions for $\tan\omega$ and letting $r = y_n/x_n$, $k = (a_1/a_2)^{2/\varepsilon_2}$ and $\xi = 2/\varepsilon_2 - 1$, we find that

$$r = k\left(\frac{y}{x}\right)^{\xi} \qquad (5)$$

so that

$$\frac{\partial r}{\partial y} = \frac{\xi r}{y}, \qquad \frac{\partial r}{\partial x} = -\frac{\xi r}{x} \qquad (6)$$

This gives us two equations relating the unknown shape parameters to image-measurable quantities, i.e.,

$$\frac{r}{\partial r/\partial y} = \frac{y}{\xi}, \qquad \frac{r}{\partial r/\partial x} = -\frac{x}{\xi} \qquad (7)$$

Thus Equations (7) allow us to construct a linear regression to solve for the center and orientation of the form, as well as the shape parameter $\varepsilon_2$, given only that we can estimate the surface tilt direction $r$.

When we generalize these equations to include unknown orientation and position parameters for the superquadric shape, we obtain a new set of nonlinear equations that can then be solved (in closed form) for the unknown shape parameters $\varepsilon_1$ and $\varepsilon_2$, the center position, and the three angles giving the object's orientation. Once these unknowns are obtained the remaining unknowns ($a_1$, $a_2$, and $a_3$, the three dimensions of the object) may be directly obtained.

3.1 Overconstraint and reliability

Perhaps the most important aspect of these equations is that we can form an overconstrained estimate of the 3-D parameters: thus we can check that our model applies to the situation at hand, and we can check that the parameters we estimate are correct. This property of overconstraint comes from using models: when we have used some points on a surface to estimate 3-D parameters, we can check if we are correct by examining additional points. The model predicts what these new points should look like; if they match the predictions then we can be sure that the model applies and that the parameters are correctly estimated. If the predictions do not match the new data points, then we know that something is wrong. The ability to check your answer is perhaps the most important property any vision system can have, because only when you can check your answers can you build a reliable vision system.
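The regression suggested by Equations (7) can be illustrated on synthetic, noise-free data: build the tilt ratio r = k(y/x)^ξ on a patch, difference it, and recover ξ (and hence the squareness ε₂) from the slope of r/(∂r/∂y) against y. The code and its parameter values are our own illustrative assumptions, not the paper's experiment.

```python
import numpy as np

# synthetic tilt ratio tau = k * (y/x)^xi on a patch away from the axes
a1, a2, e2 = 1.5, 1.0, 0.6
k = (a1 / a2) ** (2.0 / e2)
xi = 2.0 / e2 - 1.0
x = np.linspace(1.0, 2.0, 50)
y = np.linspace(1.0, 2.0, 50)
X, Y = np.meshgrid(x, y)
tau = k * (Y / X) ** xi                 # tau stands for the tilt ratio r

# Equation (7): tau / (d tau / dy) should equal y / xi, i.e. linear in y
dtau_dy = np.gradient(tau, y, axis=0)   # Y varies along axis 0 of the grid
ratio = tau / dtau_dy
slope, intercept = np.polyfit(Y.ravel(), ratio.ravel(), 1)
xi_est = 1.0 / slope
```

As Equations (7) predict, the fitted line passes through the origin (the axis of the form) with slope 1/ξ, so the squareness parameter is recovered directly from the regression.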
And it is only when you have a model that relates many different image points (such as a model of how rigid motion appears in an image sequence, or a CAD-CAM model, or this 3-D shape model) that you can have the overconstraint needed to check your answer.

Another aspect of Equations (7) that deserves special note is that the only image measurement needed to recover 3-D shape is the surface tilt r, the component of shape that is unaffected by projection and, thus, is the most reliably estimated parameter of surface shape. It is, for instance, known exactly at smooth occluding contours, and both shape-from-shading and shape-from-texture methods produce a more reliable estimate of r than of slant, the other surface shape parameter. That we need only the (relatively) easily estimated tilt to estimate the 3-D shape parameters makes robust recovery of 3-D shape much more likely.

One final note about Equations (7) is that they become singular when the superquadric becomes rectangular; i.e., when the sides of the superquadric have zero curvature. This, however, is the case of the blocks world. We may view this work with superquadric shapes, therefore, as a natural extension of the blocks world to a domain that also encompasses smoothly curved shapes.

3.2 Recovering Part Descriptions

Figure 4: Recovering the part structure of a scene: (a) the original scene, (b) ratio of r to dr/dy, (c) ratio of r to dr/dx, (d) recovered scene description.

Figure 4 illustrates how we may use Equations (7) for image segmentation and shape recovery. In these examples we will not consider rotation in depth; the extension to three degrees of freedom is straightforward, although from a numerical view considerably more complex. Figure 4 shows an actual example of recovering a part description from depth information.
We started with the complex scene shown in Figure 4(a), and generated a depth array with approximately eight bits of accuracy. We then computed the gradient direction (the tilt r) over the entire depth array (with about seven bits accuracy), and finally computed the x and y derivatives of the tilt array. From this we calculated the ratios of r to dr/dy (shown in Figure 4(b)) and r to dr/dx (shown in Figure 4(c)).

Equations (7) predict that within each superquadric form: (1) the value of these ratios should be a linear function of the image y (x) coordinate, (2) the zero-crossing of this ratio should lie along the x (y) axis of the imaged form, and (3) the slope of this ratio as a function of y (x) should be proportional to the squareness-roundness of the form along that axis. It can be seen that these relations are in fact obtained, except for a vertical bar caused by the tilt field's singular transition point.

We may use the image regularity shown in Figures 4(b) and (c) to segment the image: as each imaged superquadric produces a linearly sloping region with a particular orientation and axis direction, we need only segment the ratio of (1) r to dr/dy, and (2) r to dr/dx into linearly varying domains in order to completely segment the image into its component parts.

We can even use this regularity to "match up" the various portions of a partially occluded object. It can be seen, for instance, that there are two disjoint areas of the block-like shape. How can we infer that these two visible portions in fact belong to a single whole? From Figures 4(b) and (c) we can observe that the x axes of both visible portions are collinear, and that they have the same slope and zero-crossing when considered as a function of both x and y. This, then, gives

Figure 5: Examples of recovering bent and tapered part descriptions.
us enough information to combine these two separate segments into a single 3-D part: for it is extremely unlikely that two surface segments would be collinear, of the same size and shape, and even share the same centroid, without both being portions of the same object.

Finally, after segmenting the figure into linearly varying domains, the position of their x and y axes and the shape along these axes can be calculated using Equations (7), and then the extent along each axis determined. The resulting recovered description is shown in Figure 4(d); the most pronounced error is that the squareness of the cylindrical shape was somewhat overestimated.

There are two important things to remember about this demonstration: One is that we are recovering a large-grain, part-by-part description, rather than simply a surface description. That means that we can predict how the form will look from other views, and reason about the functional aspects of the complete shape. The second is that the estimation process is overconstrained, and thus it can be made reliable.

3.2.1 Recovering deformed primitives

So far we have not talked about recovering deformed part primitives; deformations, of course, are an important part of our shape representation theory. Figure 5, therefore, shows how we may apply these same ideas to the problem of recovering deformed part primitives. Figure 5(a) shows a bent cylinder; Figures 5(b) and (c) show the ratio of r to dr/dx. It can be seen that the linear relation still holds over most of the form, thus allowing segmentation. Perhaps even more important, however, is that the axis of the figure is clearly defined in Figures 5(b) and (c). It appears, therefore, that we may be able to locate the axis of the figure, and then estimate the amount of bending that has occurred. Once we know the deformation, we can then undeform the shape, and proceed as before. Figure 5(d) shows the case of a tapered cylinder.
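The "match up" test just described - merge two visible regions when their fitted ratio lines agree - can be caricatured in a few lines. The function, tolerance and numbers below are illustrative assumptions, not values from the paper:

```python
def same_part(seg_a, seg_b, tol=0.05):
    """Each segment is summarized by the (slope, zero_crossing) of its
    linearly varying tilt-ratio region; merge the segments when both the
    slopes (squareness) and zero-crossings (axis position) agree."""
    (sa, za), (sb, zb) = seg_a, seg_b
    return abs(sa - sb) < tol and abs(za - zb) < tol

# two visible pieces of one occluded block vs. an unrelated cylinder
assert same_part((0.43, 1.20), (0.44, 1.22))
assert not same_part((0.43, 1.20), (0.90, 2.50))
```

As the text notes, agreement on both quantities is an unlikely coincidence for unrelated surfaces, which is what licenses the merge.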
Figures 5(e) and (f) show that a linear ratio of r to dr/dy is still obtained, allowing not only segmentation but also estimation of the shape along the y direction. The amount of tapering can be determined from the tapering extent of the linearly varying region.

4 Summary

We have described a shape representation that is able to accurately describe a wide variety of natural forms (e.g., people, mountains, clouds, trees), as well as man-made forms, in a succinct and natural manner. The approach taken in this representational system is to describe scene structure at a scale that is more like our naive perceptual notion of "a part" than the point-wise descriptions typical of current image understanding research.

We have been able to use this representation to make several interesting points, in particular:

• We have demonstrated that this formative-history-oriented representational system is able to accurately describe a wide range of natural and man-made forms in an extremely simple, and therefore useful, manner.

• We have shown that this approach to perception formulates the problem of recovering shape descriptions as an overconstrained problem, thus potentially allowing reliable shape recovery while still providing the flexibility to learn new object descriptions.

• We have collected experimental evidence about the constituent elements of this representation, and have found that (1) evidence from the Treisman paradigm indicates that they are features detected during the early, preattentive stage of human vision, and (2) evidence from protocol analysis indicates that people standardly make use of these same descriptive elements in generating verbal descriptions, given that there is no similar named object available.
• And finally, we have presented evidence from our 3-D modeling work showing that descriptions framed in the representation give us the right "control knobs" for discussing and manipulating 3-D forms in a graphics environment.

The representational framework presented here is not complete. It seems clear that additional process-oriented modeling primitives, such as branching structures or particle systems [22], will be required to accurately represent objects such as trees, hair, fire, or river rapids. Further, it seems clear that domain experts form descriptions differently than naive observers, reflecting their deeper understanding of the domain-specific formative processes and their more specific, limited purposes. Thus, accounting for expert descriptions will require additional, more specialized models. Nonetheless, we believe this descriptive system makes an important contribution to current research by allowing us to describe a wide variety of forms in a surprisingly succinct and natural manner, using a descriptive vocabulary that offers hope for the reliable recovery of shape.

REFERENCES

[1] Marr, D. (1982) Vision, San Francisco: W.H. Freeman and Co.

[2] Barrow, H. G., and Tenenbaum, J. M. (1978) Recovering intrinsic scene characteristics from images. In Computer Vision Systems, Hanson, A. and Riseman, E. (Ed.) New York: Academic Press.

[3] Pentland, A. (1984) Local analysis of the image, IEEE Transactions on Pattern Analysis and Machine Intelligence, 6, 2, 170-187.

[4] Witkin, A. P., and Tenenbaum, J. M. (1985) On perceptual organization. In From Pixels to Predicates, Pentland, A. (Ed.) Norwood, N.J.: Ablex Publishing Co.

[5] Bolles, R. and Horaud, R. (1985) 3DPO: An inspection system. In From Pixels to Predicates, Pentland, A. (Ed.) Norwood, N.J.: Ablex Publishing Co.

[6] Nevatia, R., and Binford, T. O. (1977) Description and recognition of curved objects.
Artificial Intelligence, 8, 1, 77-98.

[7] Pentland, A. and Witkin, A. (1984) "On Perceptual Organization," Second Conference on Perceptual Organization, Pajaro Dunes, CA, June 12-15.

[8] Thompson, D'Arcy (1942) On Growth and Form, 2d Ed., Cambridge, England: The University Press.

[9] Stevens, Peter S. (1974) Patterns in Nature, Boston: Atlantic-Little, Brown Books.

[10] Gardner, M. (1965) The superellipse: a curve that lies between the ellipse and the rectangle, Scientific American, September 1965.

[11] Barr, A. (1981) Superquadrics and angle-preserving transformations, IEEE Computer Graphics and Applications, 11-20.

[12] Barr, A. (1984) Global and local deformations of solid primitives. Computer Graphics, 18, 3, 21-30.

[13] Hayes, P. (1985) The second naive physics manifesto. In Formal Theories of the Commonsense World, Hobbs, J. and Moore, R. (Ed.), Norwood, N.J.: Ablex.

[14] Hobbs, J. (1985) Final Report on Commonsense Summer. SRI Artificial Intelligence Center Technical Note 370.

[15] Winston, P., Binford, T., Katz, B., and Lowry, M. (1983) Proceedings of the National Conference on Artificial Intelligence (AAAI-83), pp. 433-439, Washington, D.C., August 22-26.

[16] Pentland, A. (1984a) Fractal-based description of natural scenes, IEEE Transactions on Pattern Analysis and Machine Intelligence, 6, 6, 661-674.

[17] Treisman, A. (1985) Preattentive processing in vision, Computer Vision, Graphics and Image Processing, Vol. 31, No. 2, pp. 156-177.

[18] Hoffman, D., and Richards, W. (1985) Parts of recognition. In From Pixels to Predicates, Pentland, A. (Ed.) Norwood, N.J.: Ablex Publishing Co.

[19] Leyton, M. (1984) Perceptual organization as nested control. Biological Cybernetics, 51, pp. 141-153.

[20] Biederman, I. (1985) Human image understanding: recent research and a theory, Computer Vision, Graphics and Image Processing, Vol. 32, No. 1, pp. 29-73.

[21] Pentland, A. (1986) On perceiving 3-D shape and texture, Computational Models in Human Vision, Center for Visual Science, University of Rochester, Rochester,
N.Y., June 14 21.

[22] Reeves, W. T. (1983) Particle systems - a technique for modeling a class of fuzzy objects, ACM Transactions on Graphics, 2, 2, 91-108.
CONSTRAINT-THEOREMS ON THE PROTOTYPIFICATION OF SHAPE

Michael Leyton
Harvard University and State University of New York at Buffalo*

ABSTRACT

Mathematical results are presented that strongly constrain the prototypification of complex shape. Such shape requires local prototypification in two senses: (1) prototypification occurs in parallel at different parts of the figure, and (2) prototypification varies differentially (smoothly) across an individual part. With respect to (1), we present a theorem that states that every Hoffman-Richards codon has a unique Brady Smooth Local Symmetry. The theorem solves the issue of defining units for parallel decomposition, for it implies that a codon is the minimal unit with respect to the existence of prototypification via symmetry, and is maximal with respect to prototypification via non-ambiguous symmetry. Concerning issue (2) above, a further theorem is offered that severely limits the possible shapes that result from the sequential application of prototypifying operations to smoothly varying deformation. This second result explains why considerably fewer prototype classes exist than one would otherwise expect.

I. INTRODUCTION

It has usually been assumed that, in human cognition, the prototypification of an object (e.g. a shape) occurs in a single step (e.g. Rosch, 1978). However, in a number of papers (Leyton, 1984, 1985, 1986a, 1986b, 1986c, 1986d), I have argued that prototypification is decomposed into a sequence of well-defined psychologically-manageable stages; that is, an object is assigned a backward history of successively greater stages of prototypification. In the present paper, theorems are offered that allow us to extend this decompositional analysis to the prototypification of complex shape.
The prototypification of such shape requires stages that are local, in two senses: (1) prototypification occurs in parallel, at different regions of the figure; and (2) prototypification removes deformation that differentially (smoothly) varies over an individual region. Our theorems constrain local prototypification in these two senses; that is: (1) they help to establish an optimal decomposition with respect to parallel prototypification, and (2) they strongly constrain the possible results of the sequential decomposition of differential prototypification.

Let us, however, first give an example of sequential prototypification for simple shape, in order to identify more clearly what needs to be extended to handle complex shape. In a converging set of experiments (e.g. Leyton, 1984, 1985, 1986a, 1986b, 1986c, 1986d) I found that, when subjects are presented

*Send correspondence to Michael Leyton, Departments of Computer Science and Psychology, State University of New York, Buffalo, NY 14260. This research was supported by NSF grants IST-8418164 and IST-8312240, and by AFOSR grant F49620-83-C-0135.

with a rotated parallelogram, Fig 1a, they reference it to a non-rotated one, Fig 1b, which they then reference to a rectangle, Fig 1c, which they then reference to a square, Fig 1d. We can characterize these results in the following way. Let us assume that the relationship between the ultimate prototype, the square, and the first figure, the rotated parallelogram, is given by a linear transformation, T. Any non-singular linear transformation (without reflection) can be represented as a product of three more primitive linear transformations, thus:

T = Stretch × Shear × Rotation.

This decomposition of a linear transformation will be called its Iwasawa decomposition. The three transformations, comprising the decomposition, characterize the three stages in Fig 1.
That is, working from right to left, Fig 1d → Fig 1c is a stretch, Fig 1c → Fig 1b is a shear, and Fig 1b → Fig 1a is a rotation. Therefore, the experimental results indicate that this decomposition is psychologically salient and follows in a specific order.

For later usage, in this paper, it is worth having a matrix representation of the decomposition, thus:

$$\text{Stretch} \times \text{Shear} \times \text{Rotation} = \begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix} \begin{pmatrix} 1 & \mu \\ 0 & 1 \end{pmatrix} \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$$

Observe that (1) in the first matrix, the eigenvalues λ₁ and λ₂ represent the amounts of stretch along the directions of the eigenvectors of that matrix; (2) in the middle matrix, μ is the amount of movement in the x-direction of the y-basis vector; and (3) in the last matrix, θ is the extent of rotation. This decomposition is reminiscent of the Gram-Schmidt orthogonalization (Hoffman & Kunze, 1961), where a set of linearly independent vectors (i.e. a non-singular matrix) is transformed into an orthonormal set (i.e. a rotation matrix) by first shearing the set and then stretching it. Although the order is different in the Gram-Schmidt process, this does not ultimately matter, because the subgroup of stretches group-theoretically normalizes the subgroup of shears (Lang, 1975), and therefore an equivalent representation can be found with the required ordering.

Figure 1. One of the successive reference phenomena discovered by Leyton (1984, 1985, 1986a).

From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved.

The purpose of the present paper is to extend the above simple use of the Iwasawa decomposition to analyze the prototypification of complex shape. The two main problems for such an investigation are that (1) the decomposition, as used so far, has been applied globally, and (2) the decomposition is that of linear transformations. For example, consider the seal shown in Fig 2 (third row, second column). To prototypify this shape, one would want to apply different operations to different regions of the figure (the head, the back, etc.).
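The Stretch × Shear × Rotation factorization just displayed is easy to compute for 2×2 matrices. A minimal sketch (our own code, not from the paper; it assumes det T > 0): choose θ so that T composed with the inverse rotation is upper triangular, then read off the stretch eigenvalues λ₁, λ₂ and the shear amount μ.

```python
import numpy as np

def iwasawa_2x2(T):
    """Factor a non-singular 2x2 matrix (det > 0 assumed) as
    Stretch @ Shear @ Rotation, the decomposition used in the text."""
    # pick theta so that T @ R(theta).T is upper triangular
    theta = np.arctan2(T[1, 0], T[1, 1])
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    U = T @ R.T                       # upper triangular: [[l1, l1*mu], [0, l2]]
    l1, l2 = U[0, 0], U[1, 1]
    mu = U[0, 1] / l1
    stretch = np.diag([l1, l2])
    shear = np.array([[1.0, mu], [0.0, 1.0]])
    return stretch, shear, R

T = np.array([[2.0, 1.0], [0.5, 1.5]])  # a sheared, stretched, rotated square
S, N, R = iwasawa_2x2(T)
assert np.allclose(S @ N @ R, T)
```

Reading the three factors from right to left reproduces the experimental sequence of Fig 1: first undo the rotation, then the shear, then the stretch.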
Again, given an individual region, the simple use of a linear transformation would not usually have the desired effect; e.g. a linear transformation would not straighten the arched back. Thus, the problem is that, with complex shape, one requires local prototypification in two senses: prototypification (1) applies to the subparts and (2) varies differentially. The purpose of this paper is to present a set of mathematical results that yield solutions to these two problems.

II. HOW TRANSFORMATIONS ACT ON PROTOTYPE STRUCTURE

It can be assumed that the prototypicality ranking of a shape corresponds to the latter's degree of symmetry. However, although this introduces into consideration the crucial factor of the symmetries of the shape, a much tighter relationship between the shape symmetries and the deforming transformations has been proposed in Leyton (1984), and has been corroborated using a considerable number of empirical studies in several areas of perceptual organization (Leyton, 1984, 1985, 1986a, 1986b, 1986c, 1986d). The relationship is summarized in:

INTERACTION LAW (Leyton, 1984): The symmetry axes of the prototype are interpreted as eigenspaces of the most allowable transformations.

An eigenspace is a linear subspace (e.g. a line through the origin) that maps to itself under a linear transformation. Visually, an eigenspace-line or eigenvector is interpreted as a direction of flexibility. As an illustration of the Interaction Law, observe in Fig 1 that the salient symmetry axes of the square (i.e. the side-bisectors) become the eigenspace-lines in the transition of the square (Fig 1d) to the rectangle (Fig 1c).

Now let us examine the validity of the Interaction Law with respect to complex shape. In order to investigate the local prototypification of complex shape, I gave human subjects, under experimentally controlled testing conditions, the twenty-two outlines of complex natural and abstract shapes shown in Fig 2.
The subjects were asked to give, at each of four points in each shape, the direction of perceived maximal flexibility of the region local to the point. The results were that the subjects chose a local symmetry axis at each point. Thus they converted local symmetry axes into local eigenspaces. (The statistical significance was considerable: n = 12, 88 choices per subject; expected mean = 44; actual mean = 77.58; p < 0.0005, one-tailed.) These results therefore lead us to the conclusion that the Interaction Law is valid in complex shape, and that it applies locally.

The usefulness of the conclusion is that it gives us an indication as to the nature of local prototypification - for one can assume that prototypification occurs along lines of maximal flexibility. However, the Interaction Law (i.e. symmetry axes are converted into eigenspaces) requires, as input, a symmetry analysis; and since we are using the Interaction Law locally, what we require is a local symmetry analysis.

The symmetry analysis we shall use is the Smoothed Local Symmetry (SLS) of Brady (1983). It can be regarded as a natural means of describing the local structure of a contour, because it is yielded by the set of reflectional symmetries between tangent vectors. For example, the bold curved line in Fig 3 shows a segment of contour. Points A and B are paired because their tangent vectors, t_A and t_B, are symmetric about some vector t. The dotted line, which is the locus of the midpoints P of the chords AB, is taken to be the symmetry axis. Let us now investigate the relationship between subparts and the SLS, as follows.

III. THE SYMMETRY STRUCTURE OF PARTS

We begin by investigating the relationship between the Brady SLS and the part-analysis provided by Hoffman & Richards (1985) and Richards & Hoffman (1985). These latter researchers have put forward compelling evidence that contours are perceptually partitioned at points of negative curvature extrema; i.e. points of maximal "indentation".
For example, if one were to partition the contour of a face at such points, the resulting segments would be the chin, the lips and the nose. In fact, Hoffman's and Richards' basic primitive is a segment whose endpoints are curvature minima - and they call such a segment a codon. Thus, they define a codon representation of a contour to be the sequence of codons obtained by traversing the contour.

Figure 2. The twenty-two complex natural and abstract shapes in which human subjects converted local symmetry axes into local eigenvectors.

Figure 3. The Brady SLS.

Codon representations have two important advantages: (1) Any contour has a unique representation as a codon string; and (2) there are only five types of non-trivial codon, where a type corresponds to a unique sequence of singularities. Fig 4 shows the five types. The dots along the codon represent the curvature singularities (minima, maxima, and zeros) that define the particular codon-type.

The question we now ask is whether codons are related to the Interaction Law and therefore to the problem of local prototypification. A theorem, which I proposed and proved in Leyton (1986e), is crucial in answering this question.

SYMMETRY-CURVATURE DUALITY THEOREM (Leyton, 1986e): Any segment of smooth planar curve, bounded by two consecutive curvature extrema of the same type (either both maxima or both minima), has a unique SLS symmetry axis, and the axis terminates at the curvature extremum of the opposite type (respectively, minimum or maximum).

COROLLARY: The SLS of a codon is unique, and terminates at the point of maximal curvature on the codon.
It should be observed that the above theorem relates two previously unrelated branches of perceptual research: (1) Symmetry research, starting with the Gestalt movement and going up to modern AI symmetry extraction programs; and (2) Curvature research, starting with Attneave's (1954) work on information maximization at extrema and going up to, for example, a recent formalization of Attneave's results by Resnikoff (1987).

It is also worth observing that the theorem defines what is a minimal local region to consider with respect to symmetry, in the following sense: Observe that any codon is itself built from a number of examples of only one primitive subpart. Each subpart is a spiral. (A spiral is a curve with monotonically changing curvature of the same sign.) In Fig 4, any curve-segment bounded by two adjacent dots (singularities) is a spiral. Thus, each codon is a sequence of two, three or four spirals. Any smooth curve can therefore be represented as a string of spirals. We shall call this representation the s-code of the curve.

The importance of basing one's representation on spirals arises from a theorem I proved in Leyton (1986e), which states that an SLS cannot be constructed on a spiral. (We are assuming that the curve's normals cannot change sides.) That is, an SLS cannot be constructed on a single unit of the s-code. Furthermore, it is easy to show that an SLS cannot be constructed on any adjacent pair of units in the s-code where the pair both have increasing or both have decreasing curvature. Thus, to allow a symmetry axis to appear, the s-code needs to contain two consecutive spirals where one spiral has increasing and the other has decreasing curvature. But any such pair is either a codon, or part of a codon that must exist in the curve at that point. Thus, the appearance of symmetry requires that, minimally, the curve must contain a codon.

Figure 4. The five non-trivial codons.
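The partitioning step that underlies codons - cutting a smooth contour at its curvature minima - can be sketched numerically. This illustration is our own, not Leyton's or Hoffman-Richards' code: it samples a three-lobed closed contour, computes discrete signed curvature, and finds the three minima that bound the codons.

```python
import numpy as np

# contour r(t) = 1 + 0.3 cos(3t): three lobes, so three curvature minima
t = np.linspace(0.0, 2 * np.pi, 600, endpoint=False)
rad = 1.0 + 0.3 * np.cos(3 * t)
x, y = rad * np.cos(t), rad * np.sin(t)

dt = t[1] - t[0]
def d(u):
    # periodic central difference, since the contour is closed
    return (np.roll(u, -1) - np.roll(u, 1)) / (2 * dt)

# discrete signed curvature kappa = (x'y'' - y'x'') / (x'^2 + y'^2)^(3/2)
dx, dy = d(x), d(y)
ddx, ddy = d(dx), d(dy)
kappa = (dx * ddy - dy * ddx) / (dx * dx + dy * dy) ** 1.5

# codon boundaries = local minima of curvature (periodic comparison)
is_min = (kappa < np.roll(kappa, 1)) & (kappa < np.roll(kappa, -1))
boundaries = np.flatnonzero(is_min)
```

For this contour the three minima fall in the "indentations" between the lobes, so cutting there yields one codon per lobe, in the Hoffman-Richards sense.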
The second thing to observe is that the codons are maximal local regions with respect to symmetry uniqueness. That is, as soon as one continues a curve past either end-point minimum of a codon, an extra symmetry axis must appear terminating at that minimum. The uniqueness of the symmetry axis within the codon follows from the fact that, at any pair of SLS points A and B, as in Fig 3, a unique circle can be drawn that is tangential to the curve at A and B. It is shown in Leyton (1986e) that: (1) given any point A on a codon, there is at most one circle that is tangential to A and some other point B on the codon, and (2) this circle is not tangential to a third point on the codon. This result proves the uniqueness of the symmetry point associated with A, and hence the uniqueness of the symmetry axis within the codon. The above considerations therefore reveal that there is a set of properties (uniqueness, maximality, minimality, etc.) that make the relationship between smooth local symmetries and codons significant.

IV. DIFFERENTIAL PROTOTYPIFICATION

Having seen how the symmetry analysis interacts with the local structure, where local means subpart, we shall now look at how the symmetry analysis interacts with the local structure, where local means differential. We require a differential characterization of the SLS such that prototypification via the Iwasawa decomposition (Stretch x Shear x Rotation) becomes both possible and meaningful. It turns out that the latter requirements strongly constrain the type of characterization that is allowable, as follows: It is clear that, at any point P along the SLS axis, two vectors characterize the SLS structure: (1) (h/2)u, where u is the unit cross-section vector based at P (as shown in Fig 3) and h/2 is the scalar measuring half the cross-section; and (2) t_P, which is the unit tangent to the SLS axis. This pair of local vectors defines a local frame, F, which varies as F moves along the curved axis.
What we need to do is to characterize F as the consequence of transformations, T, on some other frame, E, such that when T is factorized (that is, F becomes E), the resultant shapes are regarded as psychologically more prototypical. Thus we have to decide how to choose E. Two candidates for E seem obvious: (1) E1, which has, as basis vectors, the tangent t_P to the symmetry line and the normal n_P to that line; and (2) E2, which has, as basis vectors, the unit vector t (about which the tangents at A and B are symmetrical) and the unit vector u which is normal to t and lies along the cross-section. The linear transformations E1 -> F and E2 -> F each comprise stretch and shear. Furthermore, when F propagates along the axis, it undergoes rotation. Observe that, if E1 is used, rotation is conveniently described as rotation of the axis tangent t_P, whereas if E2 is used then rotation is conveniently described as rotation of the cross-section.

In order to see how important the choice of basis E1 or E2 is, let us consider what happens when the frames E_i undergo no rotation. For example, consider again the contour shown in Fig 3. It has a curved symmetry axis. Thus, when the basis E1 is propagated along the axis, it undergoes rotation. Now let us prototypify by removing rotation. The resultant shape is shown in Fig 5. However, observe that, even though the axis is straight, the shape itself (e.g. as given by the contour) does not seem significantly more prototypical. It seems therefore that E1 is an inappropriate basis. Thus let us reject it, as a basis, and investigate what happens when E2 is the chosen basis. However, before we do this, it is important to observe that there is a good reason why we could have suspected, in advance, that E1 would be a bad choice, and why we might believe that E2 will be more successful.

Figure 5. A local frame that does not accord with the Interaction Law can lead to psychologically meaningless results when the frame is prototypified.

704 / SCIENCE
Recall the Interaction Law. It states that, perceptually, symmetry axes provide an appropriate basis of eigenvectors for actions on shape. Observe also that the symmetry axes here are t and u (t is such in the plane of the page, and u is such in the plane of the cross-section of the implied three-dimensional shape; see Leyton, 1985, 1986c, 1986f, for details). Now, returning to the definitions of E1 -> F and E2 -> F, one finds that it is only in the latter case that the initial basis (i.e. E2) is a basis of eigenvectors (for example, the cross-section vector, u, is an eigenvector of stretch).

Let us now compute the full linear transformation associated with the basis E2 = (t, u). Let phi be the angle between t and t_P, and theta be the angle between the basis E2 = (t, u) and the corresponding fixed basis (t1, u1) at the beginning of the SLS axis, i.e. at the beginning of the protrusion. Then the matrix describing the linear transformation (t1, u1) -> ((h/2)u, t_P) is given by:

( (h/2) cos phi cos theta                -(h/2) cos phi sin theta
  (h/2) sin phi cos theta + sin theta    -(h/2) sin phi sin theta + cos theta )

Furthermore, crucially, we can now compute the Iwasawa decomposition.

V. PROTOTYPIFICATION CONSTRAINTS

Now let us investigate whether the factorization of these transformations, from the shape, results in psychologically salient prototypes. Consider Fig 6. The top node represents an arbitrary shape characterized by the Iwasawa decomposition. Prototypification occurs by removing the factors, i.e. by progressing downward in the tree. The middle row of nodes of the tree represents the first level of prototypification and the bottom row represents the second level of prototypification. The tree shows all possible factorizations.

Observe now that when the transformations are used globally, each node of the figure represents a mathematically realizable shape. For example, starting with a rotated parallelogram at the top node, the middle level yields, from left to right, a rotated rhombus, a rotated rectangle, a parallelogram; and the bottom level yields a rhombus, a rotated square and a rectangle. The question which concerns us is what shapes are obtained at the nodes when one uses the transformations locally. A theorem which I proposed and proved in Leyton (1986f) is crucial in answering this question:

THEOREM (Leyton, 1986f): Let a shape be locally characterized by the Iwasawa decomposition defined by the coordinate system of eigenvectors that are the symmetry vectors of the SLS (i.e. in accord with the Interaction Law). Then the removal of one of the factor subgroups necessarily involves the removal of one of the other factor subgroups.

That is, the theorem states that one level of prototypification is mathematically impossible; i.e. a shape with only rotation and shear is impossible, a shape with only shear and stretch is impossible, and a shape with only rotation and stretch is impossible.

Having established that there are mathematically no realizable shapes at the middle level of the tree in Fig 6, let us move on to the bottom level. It is easy to show that no shapes are mathematically possible at the bottom left node. This leaves shapes at only the Rotation and Stretch nodes on that row. The shapes corresponding to these nodes are, respectively, (1) the flexed symmetries such as the worm in Fig 7, and (2) the global symmetries such as the goblet in Fig 7. Thus the conclusion is that although the Iwasawa decomposition, using basis E2 = (t, u), disallows one level of prototypification, the prototypes that it does produce are, psychologically, highly salient as prototypes. This contrasts with the use of basis E1, which allows shapes at other nodes of the hierarchy (e.g. Fig 5 is at node Stretch x Shear), but where the shapes are not significantly prototypical. Thus again we have here a corroboration of the Interaction Law, because the basis E2 accords with that law, whereas the basis E1 does not.

Figure 6. All possible prototypifications via factorizations of the Iwasawa decomposition of a linear transformation. (Top node: Stretch x Shear x Rotation; middle row: Shear x Rotation, Stretch x Rotation, Stretch x Shear; bottom row: Shear, Rotation, Stretch.)

Figure 7. A flexed symmetry (the worm) and a global symmetry (the goblet).

If we regard prototypes as shapes that have a global symmetry structure, and deformed prototypes as shapes that have a local symmetry structure that is the image of the former global structure under deformation, then the minimality condition implies that prototypification cannot take place if one does not have codons. Recall also that the Duality Theorem can be regarded as yielding the following maximality constraint: a codon is the maximal possible region possessing a unique symmetry axis. This constraint provides two natural means by which prototypification can be decomposed: (1) The codon, being the maximal unit for a unique axis, corresponds to a prototypification stage, in the sense that the removal of the codon (e.g. a protrusion), by shrinking, removes the curvature extremum and the axis at the same time (by the Duality Theorem). Thus the contour is made more uniform by exactly one extremum and exactly one axis. It is important to observe that if the Duality Theorem did not constrain the symmetry axis of a codon to terminate at the extremum, the removal of the axis by shrinking would not necessarily remove the extremum and thus yield a more prototypical contour. (2) The codon, being the maximal contour unit for a unique axis, corresponds to a unit that can be manipulated in parallel with other such units. Thus, the Duality Theorem implies a prototypification decomposition that is both serial and parallel.
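The Iwasawa factorization invoked above can be made concrete for a single linear transformation. The sketch below is mine (the standard QR-based construction for 2x2 matrices of positive determinant, not code from the paper): it factors a matrix into Rotation x Stretch x Shear, i.e. K . S . N with K a rotation, S positive diagonal, and N unit upper-triangular, so that removing a factor corresponds to one step of prototypification.

```python
import math

def iwasawa_2x2(A):
    """Factor a 2x2 matrix with positive determinant as K @ S @ N:
    K a rotation, S a positive diagonal (stretch), N unit upper-triangular (shear)."""
    (a, b), (c, d) = A
    det = a * d - b * c
    if det <= 0:
        raise ValueError("requires positive determinant")
    r11 = math.hypot(a, c)                 # length of the first column
    cos_t, sin_t = a / r11, c / r11        # rotation angle from the first column
    r12 = cos_t * b + sin_t * d            # projection of the second column
    r22 = det / r11                        # positive because det > 0
    K = [[cos_t, -sin_t], [sin_t, cos_t]]
    S = [[r11, 0.0], [0.0, r22]]
    N = [[1.0, r12 / r11], [0.0, 1.0]]
    return K, S, N

def matmul2(X, Y):
    """Multiply two 2x2 matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]
```

For instance, discarding K from the factorization of a frame transformation "un-rotates" the shape while leaving its stretch and shear intact, which is the kind of factor removal the prototypification tree describes.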
(We should note that Pizer, Oliver & Bloomberg (1986) have implemented an algorithm that hierarchically orders protrusions in an SAT-based analysis, and obtains psychologically natural results.)

Finally, the theorem of the last section strongly constrains the further decomposition of the prototypification units just defined. It states that, when this decomposition is characterized by the Iwasawa decomposition, prototypification with respect to one of the Iwasawa factors necessarily involves prototypification with respect to one of the others. Thus the theorem tells us why there are only a few shape prototypes, i.e. those that are global symmetries or flexed symmetries.

REFERENCES

1. Attneave, F. (1954). Some informational aspects of visual perception. Psychological Review, 61, 183-193.
2. Brady, M. (1983). Criteria for representations of shape. In A. Rosenfeld & J. Beck (Eds.), Human and Machine Vision. Hillsdale, NJ: Erlbaum.
3. Hoffman, D.D., & Richards, W.A. (1984). Parts of recognition. Cognition, 18, 65-96.
4. Hoffman, K., & Kunze, R. (1961). Linear Algebra. New York: Prentice Hall.
5. Lang, S. (1975). SL2(R). London: Addison-Wesley.
6. Leyton, M. (1984). Perceptual organization as nested control. Biological Cybernetics, 51, 141-153.
7. Leyton, M. (1985). Generative systems of analyzers. Computer Vision, Graphics, and Image Processing, 31, 201-241.
8. Leyton, M. (1986a). Principles of information structure common to six levels of the human cognitive system. Information Sciences, 38, 1-120, entire journal issue.
9. Leyton, M. (1986b). A theory of information structure I: General principles. Journal of Mathematical Psychology. (In press)
10. Leyton, M. (1986c). A theory of information structure II: A theory of perceptual organization. Journal of Mathematical Psychology. (In press)
11. Leyton, M. (1986d). Nested structures of control: An intuitive view. Computer Vision, Graphics, and Image Processing. (In press)
12. Leyton, M. (1986e). A theorem relating symmetry structure and curvature extrema. Technical Report, Harvard University.
13. Leyton, M. (1986f). Prototypification of complex shape by local Lie group action. Technical Report, Harvard University.
14. Pizer, S.M., Oliver, W., & Bloomberg, H. (1986). Hierarchical shape description via the multiresolution of the symmetric axis transform. Technical Report, University of North Carolina. Submitted for publication.
15. Resnikoff, H.L. (1987). The Illusion of Reality: Topics in Information Science. New York: Springer Verlag.
16. Richards, W., & Hoffman, D.D. (1985). Codon constraints on closed 2D shapes. Computer Vision, Graphics, and Image Processing, 31, 265-281.
17. Rosch, E. (1978). Principles of categorization. In E. Rosch & B.B. Lloyd (Eds.), Cognition and Categorization. Hillsdale, NJ: Lawrence Erlbaum.
LINEAR IMAGE FEATURES IN STEREOPSIS

Michael Kass
Schlumberger Palo Alto Research
3340 Hillview Ave.
Palo Alto, CA 94304

ABSTRACT

Most proposed algorithms for solving the stereo correspondence problem have used matching based in some way on linear image features. Here the geometric effect of a change in viewing position on the output of a linear filter is modeled. A simple local computation is shown to provide confidence intervals for the difference between filter outputs at corresponding points. Examples of the use of the confidence interval are provided. For some widely used filters, the confidence intervals are tightest at isolated vertical step edges, lending support to the idea of using edge-like features in stereopsis. However, the same conclusion does not apply to image regions with more complicated variation on the scale of the filter support.

I Introduction

Most proposed algorithms for solving the stereo correspondence problem have used matching based in some way on linear image filters. The algorithms are usually based on the assumption that the filter outputs will be very similar at corresponding points in the two images. Differences in viewing geometry between the two views can, however, introduce fairly large distortions. For any given filter, there are some local image patterns for which even small changes in the viewing geometry will cause large changes in the filter output. Here, a simple computation will be developed to identify such points by placing confidence intervals on the difference between filter outputs at corresponding points. For some widely used filters, the confidence intervals are tightest at isolated vertical step edges, lending support to the idea of using edge-like features in stereopsis. However, the same conclusion does not apply to image regions with more complicated variation on the scale of the filter support.
In general, the confidence interval is constructed from two linear filters, one measuring sensitivity of the filter output to horizontal compression and the other measuring sensitivity of the filter output to vertical skew.

The stereo correspondence problem is the problem of matching two images of the same scene from different viewing positions. Let I1(x, y) and I2(x, y) be the light intensity functions for two images whose correspondence is to be computed and let T : R^2 -> R^2 be the mapping from points in the first image to corresponding points in the second image. Then for any point p in the domain C of T, I1(p) and I2(T(p)) are projections of the same physical point. The problem is to recover T from I1 and I2.

All solutions to the stereo correspondence problem are based on finding some type of similarity between the local image intensities surrounding corresponding points. Understanding in detail how image intensities change under a change of viewpoint is critical in constructing good measures of similarity for computing correspondence. A large number of stereo algorithms use measures of similarity based in some way on the outputs of linear image filters. Multi-resolution correlation-based algorithms [e.g. Hannah, 1974; Gennery, 1977; Moravec, 1977; Barnard and Thompson, 1980] typically use the outputs of linear low-pass filters for coarse matching. Edge-based algorithms [e.g. Marr and Poggio, 1979; Grimson, 1981a; Mayhew and Frisby, 1981; Baker and Binford, 1981; Medioni and Nevatia, 1983; Ohta and Kanade, 1983] typically use linear filters to identify the locations of edges. The combination of a large number of independent linear filters [Kass 1983a; Kass 1983b] has also been used effectively to compute correspondence. Even if there is no important change in photometry between views, the outputs of these filters at corresponding image points will in general be different because of differences in the projection geometry of the two views.
Since typical similarity measures make use of linear filters with local support, it is the local change in geometry between views which is of concern. To keep the analysis manageable, we will assume that locally (on the scale of the filter support), the transformation T can be accurately represented by a first-order approximation. If T : (x, y) -> (x', y'), the assumption is that

T(x, y) ≈ (DT) · (x, y)^t    (1)

where

DT = ( dx'/dx   dx'/dy
       dy'/dx   dy'/dy )

is the Jacobian matrix of T and the lower-case superscript t denotes matrix transposition. The operator D will be used throughout to represent the Jacobian of a vector field. Equation 1 defines an affine transformation, which will be a good local approximation as long as T is smooth and continuous. For planar surfaces under orthogonal projection, affine mappings correctly describe the stereo transformation. For curved surfaces under perspective projection, affine mappings are the best linear approximations to the transformation. The spatial extent of the affine approximation will be the region of support of the linear filter in question.

Even when T is limited to locally affine transformations, arbitrarily large distortions are possible between I1 and I2. These distortions can change the output of linear filters by arbitrarily large amounts. Fortunately, large distortions by T occur rarely. In the next section, we will develop confidence limits on the components of DT. Then in section III, we will use these limits to construct a simple local computation which provides confidence limits on the difference between corresponding values of filtered images. The confidence intervals vary over the image, so some points can be identified as unusually good or bad points to try to match. Finally, in section IV, the relevance of these results to the Marr-Poggio-Grimson algorithm and to the Kass algorithm will be discussed.

From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved.
Figure 1: Definition of disparity field: the shift between p and T(p) is the disparity of the midpoint (p + T(p))/2.

II Disparity Gradient Limits

An important observation about the transformation T was made by Arnold and Binford [1980]. Assuming a uniform distribution of surfaces on the Gaussian sphere, they were able to show that because of foreshortening, surfaces with steep depth gradients occupy only a small portion of most images. As a result, local transformations which cause extreme geometric distortion are rare.

In order to apply the Arnold and Binford results to the problem of geometric distortion of linear filters, it is convenient to introduce the notion of a disparity field to represent the shift between the positions of corresponding points in a pair of images. Using a generalized version of the Burt-Julesz coordinate system, the disparity field chi(x, y) can be defined by the relationship:

chi([p + T(p)]/2) = T(p) - p    (2)

where p is a vector quantity (x, y). In the interest of symmetry, the shift T(p) - p is defined to be the disparity of the point halfway between p and T(p) (see figure 1). Other definitions of the disparity field have been used; the advantage of this coordinate system is that if I1 and I2 are exchanged, the disparity field merely changes its sign. One problem with this definition of disparity is that under pathological conditions it can become multivalued. The problem is not very serious because for affine T, chi is uniquely defined by the above formula except on a set of transformations of measure zero [Kass, 1984]. Thus the possible ambiguity in the definition of chi is not a major difficulty. Moreover, for affine T, Dchi is constant.

Let H be the horizontal component of the disparity field and let V be its vertical component. Then if chi(0, 0) = (0, 0) and T is affine, the disparity field can be written

chi(x, y) = Dchi · (x, y)^t = ( H_x  H_y
                                V_x  V_y ) (x, y)^t.    (3)
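The sign-symmetry property of this coordinate system (exchanging I1 and I2 merely negates the disparity field) is easy to verify numerically for an affine T. The sketch below is my own; the matrix is an arbitrary made-up affine transformation, not data from the paper.

```python
def apply2(A, p):
    """Apply a 2x2 linear map A to the point p."""
    return (A[0][0] * p[0] + A[0][1] * p[1], A[1][0] * p[0] + A[1][1] * p[1])

def inv2(A):
    """Inverse of a 2x2 matrix."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [[A[1][1] / det, -A[0][1] / det], [-A[1][0] / det, A[0][0] / det]]

A = [[1.1, 0.05], [0.02, 0.98]]   # made-up affine stereo transformation T
p = (2.0, 3.0)
q = apply2(A, p)                  # corresponding point T(p)

mid = ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)
disp_12 = (q[0] - p[0], q[1] - p[1])      # chi(mid) with image order (I1, I2)

# Exchange the images: the transformation becomes T^-1 and q maps back to p,
# so the same midpoint carries the negated disparity.
p_back = apply2(inv2(A), q)
disp_21 = (p_back[0] - q[0], p_back[1] - q[1])
```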
In general, the non-translational component of a two-dimensional affine transformation can be decomposed into compression or expansion along two orthogonal axes, a rotation, and some skew. The compression and expansion components of T are determined by the diagonal elements of Dchi, while the rotation and skew components are determined by its off-diagonal elements. A good discussion of the details of one possible decomposition as it relates to the disparity field can be found in Koenderink and Van Doorn [1976].

Under ordinary stereo viewing conditions, V_x and V_y are quite small, so the geometric distortion is due primarily to H_x and H_y. As a consequence, the range of likely distortions is restricted to horizontal compression and vertical skew.

Suppose V_x = V_y = H_y = 0 and H_x != 0. Then the two images are related by pure horizontal compression. Let (x2, y2) = T((x1, y1)). Since chi(x, y) = (H_x x, 0), we know y2 = y1 and

x2 - x1 = H_x (x1 + x2)/2.    (4)

Hence x2 = x1 (2 + H_x)/(2 - H_x). If we define delta = (2 + H_x)/(2 - H_x) then the transformation T can be described by the equation T((x1, y1)) = (delta x1, y1), which describes horizontal compression by a factor of delta^-1.

If H_x = 0 and H_y != 0 then the two images are related by vertical skew. Consider a point (x1, y1) in the first image. Its corresponding point in the second image is (x1 + H_y y1, y1). Hence the line ax = by in the first image will map to the line ax = (b + aH_y)y in the second. Horizontal lines (a = 0) will be unchanged by the transformation, but all other lines are rotated by an angle that reaches a maximum of tan^-1(H_y) for vertical lines.

When neither H_x nor H_y is zero, compression and skew occur simultaneously. If the points (x1, y) in the first image and (x2, y) in the second image correspond, then since H(x, y) = H_x x + H_y y, we have

x2 - x1 = H_x (x1 + x2)/2 + H_y y    (5)
x2 = x1 delta + y H_y / (1 - H_x/2).    (6)

Thus the horizontal compression is unaffected by the presence of skew, but the skew is adjusted by the factor 1/(1 - H_x/2).
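The closed form for the corresponding point under simultaneous compression and skew can be checked directly against the defining relation. This is a small self-check of my own; the numerical values are arbitrary.

```python
def compression_factor(hx):
    """delta = (2 + H_x) / (2 - H_x), the horizontal length ratio."""
    return (2 + hx) / (2 - hx)

def corresponding_x(x1, y, hx, hy):
    """Equation 6: x2 = x1 * delta + y * H_y / (1 - H_x / 2)."""
    return x1 * compression_factor(hx) + y * hy / (1 - hx / 2)

# Consistency with the defining relation (equation 5):
# x2 - x1 = H_x * (x1 + x2) / 2 + H_y * y.
hx, hy, x1, y = 0.4, 0.2, 2.0, 3.0
x2 = corresponding_x(x1, y, hx, hy)
residual = (x2 - x1) - (hx * (x1 + x2) / 2 + hy * y)
```

With hy = 0 the map reduces to pure compression by delta, as in equation 4.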
Likely values of H_x and H_y are heavily constrained by foreshortening effects. Based on the Arnold-Binford assumption that surface orientations are uniformly distributed on the Gaussian sphere, complete distributions for H_x and H_y can be calculated. These distributions allow confidence limits on H_x and H_y to be established, so that the range of image compressions and skews to be considered can be suitably restricted.

Let a be the ratio between the inter-ocular distance and the viewing distance. Arnold and Binford calculate the cumulative distribution function of delta, the ratio of horizontal lengths in the two images, to be that of a Cauchy random variable. The cumulative distribution function can be written as follows:

Pr[delta < z] = 1/2 + (1/pi) tan^-1 [2(z - 1) / (a(z + 1))].    (7)

Since H_x = 2(delta - 1)/(delta + 1), the cumulative distribution of H_x can be written

Pr[H_x < z] = 1/2 + (1/pi) tan^-1(z/a).    (8)

Negative values of delta are due to occlusion. Substituting z = 0 into equation 7, we see that occlusion occurs on the Gaussian sphere with probability

1/2 - (1/pi) tan^-1(2/a).    (9)

The upper quartile of the distribution of H_x begins where (1/pi) tan^-1(z/a) = 1/4. Multiplying through by pi and taking the tangent of both sides shows that the equation is satisfied when z = a. Hence

Pr[-a < H_x < a] = 1/2.    (10)

Figure 2: Line orientations for calculating the distribution of H_y.

Differentiating the cumulative distribution function yields a Cauchy density for H_x:

(d/dz) Pr[H_x < z] = a / (pi (a^2 + z^2)).    (11)

Thus the standard deviation of H_x is infinite. At z = a, the density falls to half its height at the origin, so in addition to marking the upper quartile, a is also the half-width at half-maximum (HWHM) for H_x.

The exact distribution of H_y is considerably more difficult to calculate. Suppose lines rotated counterclockwise from the y-axis by an angle theta in the first image correspond to lines rotated clockwise by theta in the second (see figure 2). Then H_y = 2 tan theta. Arnold and Binford have calculated the joint distributions of line angles in stereo images assuming a uniform distribution of surface orientations on the Gaussian sphere. For a = .07, which is typical of human vision at a range of about one meter, Arnold and Binford find the HWHM of the angular difference between the two views (2 theta) to be about 9 degrees. For a = .7, which is typical of wide-angle aerial photography, the HWHM is 30 degrees. These values correspond to HWHM values for H_y of .16 and .54. The following table summarizes probability of occlusion and HWHM values for H_x and H_y assuming values of a corresponding to human vision at a distance of about one meter and corresponding to typical wide-angle aerial photography.

On the basis of psychophysical experiments with stereograms consisting of pairs of dots, Burt and Julesz [1980a, 1980b] have discovered that the human visual system seems unable to achieve fusion unless |∇H| < 1. The Burt-Julesz experiments are somewhat controversial (see Krol and van de Grind [1982] and the response by Burt and Julesz [1982]), but they are consistent with earlier work on sine wave gratings done by Tyler [1973, 1977] suggesting a disparity gradient limit for human stereopsis. Based on the probability distributions calculated by Arnold and Binford, a visual system able to tolerate disparity gradients near the Burt-Julesz limit should have little difficulty with geometric distortion for conditions typical of human vision. Aerial photographs of mountainous regions could be expected to cause some problems, but for most other aerial photographs, a disparity gradient tolerance near one would probably be sufficient.

III Geometric Distortion Estimate

Given bounds on H_x and H_y, we can investigate limits on how much the outputs of linear filters can be distorted by the geometric differences between stereo images. Let N_T(p) be the difference between the outputs of the filter f at corresponding points in the two images.
Then

N_T(p) = [f * I2](T(p)) - [f * I1](p).    (12)

In general, the behavior of N_T is quite complex because it depends on both the transformation T(p) and the image I(p). At some image points, large distortions between images caused by T(p) will have only a small effect on the filter output. At these points, the filter outputs will be reliably preserved between views. At other image points, however, even a small amount of compression or skew induced by T(p) will have a large effect on the filter output. At these points, [f * I1](p) is a poor predictor of [f * I2](T(p)).

The disparity gradient limits developed in section II provide a method of computing confidence limits on the transformation T(p). These confidence limits will be used here to develop confidence limits on N_T(p). Since N_T(p) depends on the local behavior of I(p), the confidence limits will vary over the image.

By a simple change of variables, N_T(p) can be rewritten as the convolution of I(p) with a point-spread function that depends on the transformation T. This will make it easier to discuss the dependence of N_T(p) on I(p). Convolving the deformed image I o T^-1 with the point-spread function f is the same as convolving the original image I with the mask f o T and multiplying by the Jacobian determinant of T. This follows easily from a change of variables. At the origin, we have

(I o T^-1) * f = ∫∫ I o T^-1(x', y') f(-x', -y') dx' dy'.    (13)

Transforming into the (x, y) coordinate system, we obtain

(I o T^-1) * f = ∫∫ I(x, y) (f o T)(-x, -y) |DT| dx dy = I * (f o T)|DT|    (14)

where

DT = ( dx'/dx   dx'/dy
       dy'/dx   dy'/dy )    (15)

is the Jacobian matrix of T and |DT| is its determinant. Thus the geometric distortion noise N_T can be expressed as a single filter f_NT applied to the first image:

N_T = I * [(f o T)|DT| - f] = I * f_NT.    (16)

Unfortunately, the filter f_NT is not known exactly until the correspondence problem is solved. However, away from depth discontinuities, it is restricted by the surface orientation constraint on T.
In order to apply the constraint, we need to represent f_NT in terms of Dchi, the Jacobian of the disparity field. To do so, we first compute the Jacobian determinant |DT|.

A. Jacobian Determinant

Let H be the horizontal component and V be the vertical component of the disparity field chi. If the origins of the two image coordinate systems correspond and T is affine then chi = (Dchi) · (x, y)^t where

Dchi = ( H_x  H_y
         V_x  V_y )    (17)

is the Jacobian matrix of the disparity field. The diagonal elements of Dchi are responsible for horizontal and vertical compression and expansion, while the off-diagonal elements cause rotation and skew. Using a Burt-Julesz type coordinate system, (x, y) in the first image corresponds to (x', y') in the second if H((x + x')/2, (y + y')/2) = x' - x and V((x + x')/2, (y + y')/2) = y' - y. Since chi = (H_x x + H_y y, V_x x + V_y y), the transformation T can be described by the equations

F1(x, y, x', y') = H_x (x + x')/2 + H_y (y + y')/2 - x' + x = 0    (18)
F2(x, y, x', y') = V_x (x + x')/2 + V_y (y + y')/2 - y' + y = 0.    (19)

The Jacobian DT and its determinant can be computed by means of the implicit function theorem from the equations F1 = 0 and F2 = 0 that define T:

DT = - (∂(F1, F2)/∂(x', y'))^-1 (∂(F1, F2)/∂(x, y)).    (20)

Substituting in the partial derivatives, we obtain

DT = - ( H_x/2 - 1   H_y/2
         V_x/2       V_y/2 - 1 )^-1 ( H_x/2 + 1   H_y/2
                                      V_x/2       V_y/2 + 1 ).    (21)

Since the determinant of a product is the product of the determinants, we have

|DT| = [(1 + H_x/2)(1 + V_y/2) - H_y V_x/4] / [(1 - H_x/2)(1 - V_y/2) - H_y V_x/4].    (22)

B. First-Order Approximation

Equation 16 gives the total geometric distortion N_T as the convolution of the intensity I(p) with the filter f_NT = (f o T)|DT| - f. We already have an expression for |DT| in terms of the components of Dchi. To express the entire geometric distortion noise in terms of Dchi and I(p), we will make a first-order approximation to N_T. Let S = (x, y, H_x, H_y, V_x, V_y) and S0 = (0, 0, 0, 0, 0, 0).
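Before continuing, note that for affine chi the implicit equations can be solved outright: they give (I - Dchi/2) p' = (I + Dchi/2) p, so DT = (I - Dchi/2)^-1 (I + Dchi/2), and the determinant of equation 22 is the ratio of the two 2x2 determinants. The quick numerical cross-check below is my own sketch; the gradient values are arbitrary.

```python
def det2(A):
    """Determinant of a 2x2 matrix."""
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def dt_matrix(hx, hy, vx, vy):
    """DT = (I - Dchi/2)^-1 (I + Dchi/2) for an affine disparity field."""
    m_minus = [[1 - hx / 2, -hy / 2], [-vx / 2, 1 - vy / 2]]
    m_plus = [[1 + hx / 2, hy / 2], [vx / 2, 1 + vy / 2]]
    d = det2(m_minus)
    inv = [[m_minus[1][1] / d, -m_minus[0][1] / d],
           [-m_minus[1][0] / d, m_minus[0][0] / d]]
    return [[sum(inv[i][k] * m_plus[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def dt_determinant(hx, hy, vx, vy):
    """|DT| in closed form (equation 22)."""
    num = (1 + hx / 2) * (1 + vy / 2) - hy * vx / 4
    den = (1 - hx / 2) * (1 - vy / 2) - hy * vx / 4
    return num / den
```

When the disparity gradients vanish, DT is the identity and the determinant is 1, as the first-order analysis assumes.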
The surface orientation constraint assures us that H_x, H_y, V_x, and V_y are usually quite small, and since N_T = I * f_NT, we can approximate N_T at the origin as

N_T ≈ N_ge = I * S · ∇f_NT |_{S = S0}    (23)

where the gradient is in the variables x, y, H_x, H_y, V_x, and V_y. In order to compute ∇f_NT, we can use equation 16 directly. Hence

(∇f_NT)|_{Dchi = 0} = ∇((f o T)|DT| - f)|_{Dchi = 0}.    (24)

It is convenient here to extend T such that it maps vectors (x, y, H_x, H_y, V_x, V_y) to vectors (x', y', H'_x, H'_y, V'_x, V'_y) with H_x = H'_x, H_y = H'_y, etc. Note that the Jacobian determinant |DT| is unaffected by the change. With this extension of T, the multidimensional product rule can be applied to the gradient in equation 24 to yield

(∇f_NT)|_{Dchi = 0} = ((f o T) ∇|DT|)|_{Dchi = 0} + (|DT| D(f o T))|_{Dchi = 0} - ∇f |_{Dchi = 0}    (25)

where f and |DT| are regarded as functions of the six variables x, y, H_x, H_y, V_x, and V_y. When Dchi = 0, the function T becomes the identity, so f o T = f and |DT| = 1. Hence equation 25 can be rewritten as

(∇f_NT)|_{Dchi = 0} = f (∇|DT|)|_{Dchi = 0} + D(f o T)|_{Dchi = 0} - ∇f |_{Dchi = 0}.    (26)

The Jacobian D(f o T), according to the chain rule, is ∇f · DT. Substituting into equation 26 leaves

(∇f_NT)|_{Dchi = 0} = f (∇|DT|)|_{Dchi = 0} + (∇f) · (DT)|_{Dchi = 0} - ∇f |_{Dchi = 0}.    (27)

The last term on the right in equation 27 is simply the vector (-f_x, -f_y, 0, 0, 0, 0). Straightforward calculation of the derivatives from equation 22 shows that the first term is (0, 0, f, 0, 0, f). Thus we have

(∇f_NT)|_{Dchi = 0} = (-f_x, -f_y, f, 0, 0, f) + (∇f) · (DT)|_{Dchi = 0}.    (28)

The extended Jacobian DT can be calculated exactly as before by adding the equations

F3(x, y, x', y') = H_x - H'_x = 0    (29)
F4(x, y, x', y') = H_y - H'_y = 0    (30)
F5(x, y, x', y') = V_x - V'_x = 0    (31)
F6(x, y, x', y') = V_y - V'_y = 0.    (32)

The implicit function theorem states that DT is the product of two matrices. When Dchi = 0, the first matrix becomes the identity and we have x = x', y = y'. Thus DT is simply

DT|_{Dchi = 0} = ( 1 0 x y 0 0
                   0 1 0 0 x y
                   0 0 1 0 0 0
                   0 0 0 1 0 0
                   0 0 0 0 1 0
                   0 0 0 0 0 1 ).    (33)

Multiplying out (∇f) · (DT)|_{Dchi = 0} and substituting into the expression for ∇f_NT |_{Dchi = 0} leaves

∇f_NT |_{Dchi = 0} = (0, 0, f + x f_x, y f_x, x f_y, f + y f_y).    (34)

Hence the estimated geometric distortion noise is

N_ge = I * S · (0, 0, f + x f_x, y f_x, x f_y, f + y f_y)
     = I * (H_x (f + x f_x) + H_y y f_x + V_x x f_y + V_y (f + y f_y)).    (35)

Under ordinary stereo viewing conditions, |∇V| << |∇H|, so the estimate of N_ge can be simplified to

N_ge ≈ I * (H_x (f + x f_x) + H_y y f_x).    (36)
Thus DT is simply lOzyO0 OlOOzy DTIDx,o = 001000 i 1 000100 (33) 000010 000001 Multiplying out (Vf) 0 (DT)IDx,s and substituting in the eXpreSSiOn for v frJTIDXCo leaves VfNTIDX=O = (O,O, f + Zfz, Yfi,zfv, f + yfv). (34 Hence the estimated geometric distortion noise is Nge = I*S .(O,O,f+ 2fz,Yf+fv,f +Yfx,) = I * (Hz(f + Zfz) + Hvyfz + Vzzfv + V,(f + yfv)). (35) Under ordinary stereo viewing conditions, VV < c VH so the estimate of N,, can be simplified to % = I * (Hz(f + zfz) + Hvyfz). (36) ? 10 / SCIENCE the filter used is that of the Marr-Hildreth edge-detector [Marr and Hildreth, 19801. Then we have: Sigma /Mph:’ Figure 3: Correlation between f * I and fH * I as a function of o/i based on first-order Mark& image model Figure 4: The ratio between the standard deviations of fv * I and f * I as a function of a/a based on the first-order Markov image model C. Application of Orientation Constraint Since Hz and Hv are unknown, equation 36 does not provide a method of calculating No, directly from an image. It does, however, give a method of translating constraints on Hz and Hy into constraints on NBe. Using the Arnold-Binford analysis, for any ratio a between the inter-ocular distance and the viewing distance, confidence limits on Hz and Hv can be computed. If the limits are such that with probability q, lHzl < Hz, and ] HvI < Hvm then with probability at least q, we have INoel < II* &mf~l + II* &mfvI. (37) For the Burt-Julesz psychophysical constraint that ]VH] < 1, the situation is much the same. Clearly [Hz1 < 1 and lHvl < 1 so ]Noe] is bounded by the equation INgel < II* fHI + II* fvl. (38) In both cases, ] Nse ] is bounded by the sum of the absolute values of the outputs of two linear filters. The first filter measures the sensitivity of I* f to small horizontal compression and the second measures the sensitivity of I * f to small vertical skew. The computation is simple enough to be performed at every image point without excessive cost. 
IV Applications of Distortion Estimate

A. Marr-Hildreth Special Case

An interesting special case of the estimate N_ge occurs when f is the Laplacian of a Gaussian,

f = ∇²G = (1/(2πσ⁴)) ((x² + y²)/σ² − 2) e^{−r²/2σ²}, (40)

where r² = x² + y². The Marr-Poggio theory argues that zeros of I₁ * f reliably correspond to zeros of I₂ * f because they will both be caused by physical edges or surface markings. At a vertical step edge, this view is easily confirmed by the preceding analysis of geometric distortion. Let I₁(x, y) be a step edge defined as follows:

I₁(x, y) = 1 if x < 0; 0 otherwise. (41)

Direct calculation shows that

f * I₁ = (x/(√(2π) σ³)) e^{−x²/2σ²}. (42)

Since f_V is odd along the y-direction, f_V * I₁ = 0 everywhere. The other component f_H of N_ge can be easily evaluated along the edge by noticing that f_H = ∂(xf)/∂x. Hence

I * f_H = I * ∂(xf)/∂x. (44)

Along the edge x = 0, the convolution integrates ∂(xf)/∂x over the half-plane where I₁ is nonzero, and the resulting boundary term xf vanishes both at x = 0 and at infinity, so I * f_H = 0. Since I * f_V is also zero, the estimate N_ge of the geometric distortion noise must vanish along the edge. Thus zero-crossings in I₁ * ∇²G can be trusted to correspond to zero-crossings in I₂ * ∇²G.

The fact that N_ge vanishes along the edge should not be particularly surprising. A step edge under an affine transformation simply changes its orientation. For a radially symmetric filter like ∇²G, the change in orientation does not introduce any geometric distortion noise. Thus, for images which consist entirely of sparse, straight, step-edges, geometric distortion is not a problem for Marr-Poggio based correspondence algorithms except perhaps near edge intersections. Nevertheless, for more general images, geometric distortion can pose severe problems. If the image spectrum is symmetric and representable at least locally as a Gaussian process, then f * I₁ and f_V * I₁ are independent random variables. Thus points where f * I = 0 (edges in the Marr-Hildreth theory) are no less susceptible to geometric distortion due to skew than are other image points.
With geometric distortion due to compression, the situation is more favorable for the Marr-Poggio approach. Suppose for example that the image is a stationary first-order Gaussian Markov process with autocorrelation exp(−|r/a|) [Kass 1984]. Then f * I and f_H * I are somewhat correlated, so points where f * I = 0 should on the average have somewhat less geometric distortion due to compression than randomly selected image points. Figure 3 shows the correlation between f * I and f_H * I as a function of the ratio σ/a between the space constants of the filter and the image. As a result of these correlations, the Marr-Poggio approach to stereopsis should be slightly more tolerant of horizontal disparity gradients than vertical ones.

When complex scenes are viewed, f_V * I and f_H * I can take on reasonably large values with high probability even at zero-crossings of ∇²G * I. Attempting to match such points on the basis of the value of ∇²G * I seems imprudent.

Figure 5: University of British Columbia Acute Care Center from the air (right image).

The ratio of the standard deviation of f_V * I to the standard deviation of f * I is shown in figure 4, again as a function of σ/a based on the first-order Markov model. Assuming a vertical disparity gradient at the Burt-Julesz limit of 1 and a large σ/a ratio, up to 48 percent of the zero-crossings will have geometric distortion in excess of σ(∇²G * I)/2 and up to 15 percent will have geometric distortion in excess of σ(∇²G * I). Under such conditions, geometric distortion is clearly a major problem.

Equations 37 and 38 provide a simple method of modifying the Marr-Poggio approach to substantially improve its immunity to geometric distortion. At each zero-crossing in ∇²G * I, the bound on |N_ge| can be computed from f_H * I and f_V * I. If the bound on |N_ge| is too large, the zero-crossing should not be matched, since any match would be very unreliable.
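The proposed modification amounts to a simple mask: detect sign changes of ∇²G * I, then discard those whose distortion bound exceeds a threshold. Everything below (the names, the horizontal-neighbor test, the threshold tau) is a hypothetical illustration, not the paper's code.

```python
import numpy as np

def matchable_zero_crossings(log_resp, bound, tau):
    # log_resp: V2G * I; bound: per-pixel bound on |N_ge| (equations 37-38).
    # Mark horizontal sign changes, then keep only those with bound < tau.
    zc = np.zeros(log_resp.shape, dtype=bool)
    zc[:, :-1] = log_resp[:, :-1] * log_resp[:, 1:] < 0
    return zc & (bound < tau)
```

Zero-crossings that fail the test are simply left unmatched rather than matched unreliably.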
Empirical investigations of this approach are planned.

B. Gaussian Filter Special Case

Another interesting special case for the geometric distortion analysis is when f is a Gaussian filter. Then

f = (1/(2πσ²)) e^{−(x²+y²)/2σ²}, (45)

f_H = f + x f_x = (1 − x²/σ²) f = −σ² f_xx, (46)

f_V = y f_x = x f_y = −(xy/σ²) f = −σ² f_xy. (47)

Thus the estimate N_ge becomes

N_ge = −σ² (H_x I * f_xx + H_y I * f_xy). (48)

Once again, there is a connection with the Marr-Hildreth theory of edge detection. Not all intensity edges are guaranteed to give rise to zero-crossings in I * ∇²G. However, Marr and Hildreth [1980] showed that under a condition known as linear variation, edges of all orientations cause zero-crossings in I * ∇²G (for a detailed discussion, see Torre and Poggio [1986]). At edge points, the condition of linear variation states roughly that the image intensities are locally linear, so the Hessian matrix vanishes. At a zero-crossing in I * ∇²G where the condition of linear variation holds, both I * f_xx and I * f_xy are zero, so the estimate N_ge goes to zero. Hence, under the condition of linear variation, edges in the Marr-Hildreth theory are points where I * f is best preserved between views when the filter f is a Gaussian. Note that this does not imply anything about how well I * ∇²G is preserved between views.

C. Application to Kass Stereo Algorithm

The Kass stereo algorithm [Kass 1983a,b, 1984] computes correspondence based on combining indications from a set of nearly independent linear filters at each point. A decision about whether point p₁ in the first image can match p₂ in the second is made by comparing vectors of linear filters at the two points. If the output of any linear filter differs in the two images by more than the threshold for that operator, then the potential match is rejected.
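Equations 46-47 are easy to verify numerically with finite differences; the point and σ below are arbitrary illustrative values.

```python
import numpy as np

sigma, h = 1.5, 1e-3
f = lambda x, y: np.exp(-(x**2 + y**2) / (2 * sigma**2))  # unnormalized Gaussian
x0, y0 = 0.7, -0.4
fx = (f(x0 + h, y0) - f(x0 - h, y0)) / (2 * h)
fxx = (f(x0 + h, y0) - 2 * f(x0, y0) + f(x0 - h, y0)) / h**2
fxy = (f(x0 + h, y0 + h) - f(x0 + h, y0 - h)
       - f(x0 - h, y0 + h) + f(x0 - h, y0 - h)) / (4 * h**2)
# Equation 46: f_H = f + x f_x = (1 - x^2/sigma^2) f = -sigma^2 f_xx
fH = f(x0, y0) + x0 * fx
assert abs(fH - (1 - x0**2 / sigma**2) * f(x0, y0)) < 1e-6
assert abs(fH - (-sigma**2 * fxx)) < 1e-4
# Equation 47: f_V = y f_x = -sigma^2 f_xy
assert abs(y0 * fx - (-sigma**2 * fxy)) < 1e-4
```

The normalization constant of f drops out of these identities, so an unnormalized Gaussian suffices for the check.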
Using the geometric distortion estimate N_ge, these thresholds can be adjusted dynamically across the image so that the relative weighting of the different filters depends on how invariant they are with respect to changes in geometry. Figures 5 and 6 show a stereo pair of the University of British Columbia Acute Care Center from the air. Using the Kass stereo algorithm and the geometric distortion estimate N_ge, these stereo images were matched. Figure 7 shows the results plotted as contours of constant height above the ground. Note that the buildings are accurately separated from the ground. Matching without the geometric distortion estimate resulted in far noisier results.

V Conclusions

The geometric distortion estimate N_ge is applicable to any stereo algorithm which uses a linear filtering step. In particular, this includes coarse-to-fine techniques which blur the images prior to matching, as well as edge-based techniques which detect edges using linear filters. The estimate makes it possible to identify the points in an image where geometric distortion is likely to pose a large problem. The cost of doing so is minimal because the estimate N_ge can be computed trivially from the outputs of two linear filters. Computational experiments with the Kass stereo algorithm have shown that the theoretical advantages of using the distortion estimate are easily attainable.

Figure 7: Contours of constant height above the ground for the stereo pair of figures 5 and 6.

Acknowledgements

Tomaso Poggio and Keith Nishihara provided important guidance. R.J. Woodham and Eric Grimson made the stereo photographs available.

References

[1] Arnold, R.D. and Binford, T.O. “Geometric constraints in stereo vision,” Proc. SPIE, San Diego 238, (1980), 281-292.
[2] Baker, H.H. and Binford, T.O. “Depth from edge and intensity based stereo,” Seventh International Joint Conference on Artificial Intelligence, August (1981), 631-636.
[3] Barnard, S.T.
and Thompson, W.B. “Disparity analysis of images,” IEEE Pattern Analysis and Machine Intelligence PAMI-2, 4, (1980), 333-340.
[4] Burt, P. and Julesz, B. “A disparity gradient limit for binocular fusion,” Science 208 (1980a), 615-617.
[5] Burt, P. and Julesz, B. “Modifications of the classical notion of Panum’s fusional area,” Perception 9 (1980b), 671-682.
[6] Burt, P. and Julesz, B. “The disparity gradient limit for binocular fusion: an answer to J. D. Krol and W. A. van de Grind,” Perception 11 (1982), 621-624.
[7] Gennery, D.B. “A system for stereo computer vision with geometric models,” Fifth International Joint Conference on Artificial Intelligence, Cambridge, Massachusetts, (1977), 576-582.
[8] Grimson, W.E.L. “A computer implementation of a theory of human stereo vision,” Phil. Trans. Roy. Soc. Lond. B 292 (1981a), 217-253.
[9] Grimson, W.E.L. From Images to Surfaces: A Computational Study of the Human Early Visual System, MIT Press, Cambridge, MA, (1981b).
[10] Hannah, M.J. “Computer matching of areas in stereo images,” Stanford Artificial Intelligence Laboratory memo AIM-239, July (1974).
[11] Kass, M. “Computing visual correspondence,” Proc. ARPA Image Understanding Workshop, Washington, D.C., (1983a), 54-60. Also reprinted in Pentland, A. (ed.), From Pixels to Predicates, Ablex, Norwood, NJ, 1986.
[12] Kass, M. “A computational framework for the visual correspondence problem,” Eighth International Joint Conference on Artificial Intelligence, Karlsruhe, W. Germany, August (1983b), 1043-1045.
[13] Kass, M. “Computing visual correspondence,” S.M. Thesis, Department of Computer Science and Electrical Engineering, Massachusetts Institute of Technology, 1984.
[14] Koenderink, J. and van Doorn, A. “Geometry of binocular vision and a model for stereopsis,” Biol. Cybernetics 21 (1976), 29-35.
[15] Krol, J.D. and van de Grind, W.A. “Rehabilitation of a classical notion of Panum’s fusional area,” Perception 11 (1982), 621-624.
[16] Marr, D. and Hildreth, E. “Theory of edge detection,” Proc. R. Soc. Lond. B 207, (1980), 187-217.
[17] Marr, D. and Poggio, T. “A theory of human stereo vision,” Proc. Roy. Soc. Lond. B 204 (1979), 301-328. (An earlier version appeared as MIT AI Lab Memo 451, 1977.)
[18] Mayhew, J.E.W. and Frisby, J.P. “Psychophysical and computational studies towards a theory of human stereopsis,” Artificial Intelligence 17 (1981), 349-385.
[19] Medioni, G.G. and Nevatia, R. “Segment-based stereo matching,” Proceedings of the DARPA Image Understanding Workshop, Washington, D.C., June, (1983), 128-136.
[20] Moravec, H.P. “Towards automatic visual obstacle avoidance,” Fifth International Joint Conference on Artificial Intelligence, Cambridge, Massachusetts (1977), 584.
[21] Nishihara, H.K. “PRISM: A practical realtime imaging stereo matcher,” Proceedings, Third International Conference on Robot Vision and Sensory Controls, SPIE Cambridge Symposium on Optical and Electro-Optical Engineering, November (1983), 449.
[22] Ohta, Y. and Kanade, T. “Stereo by intra- and inter-scanline search using dynamic programming,” Carnegie-Mellon University Technical Report CMU-CS-83-162, 1983.
[23] Torre, V. and Poggio, T. “On edge detection,” IEEE Pattern Analysis and Machine Intelligence, PAMI-8, 2 (1986).
[24] Tyler, C.W. “Stereoscopic vision: cortical limitations and a disparity scaling effect,” Science 181 (1973), 276-278.
[25] Tyler, C.W. “Spatial limitations of human stereoscopic vision,” Proceedings, SPIE 120 (1977).
SIGNAL MATCHING THROUGH SCALE SPACE

Andrew Witkin, Demetri Terzopoulos, Michael Kass
Schlumberger Palo Alto Research
3340 Hillview Ave.
Palo Alto, CA 94304

ABSTRACT

Given a collection of similar signals that have been deformed with respect to each other, the general signal matching problem is to recover the deformation. We formulate the problem as the minimization of an energy measure that combines a smoothness term and a similarity term. The minimization reduces to a dynamic system governed by a set of coupled, first-order differential equations. The dynamic system finds an optimal solution at a coarse scale and then tracks it continuously to a fine scale. Among the major themes in recent work on visual signal matching have been the notions of matching as constrained optimization, of variational surface reconstruction, and of coarse-to-fine matching. Our solution captures these in a precise, succinct, and unified form. Results are presented for one-dimensional signals, a motion sequence, and a stereo pair.

I Introduction

Given a collection of similar signals that have been deformed with respect to each other, the general signal matching problem is to recover the deformation. Important matching problems include stereo vision, motion analysis, and a variety of registration problems such as template matching for speech and vision.

We cast the problem as the minimization of an energy functional E(V), where V is the deformation. The energy functional is the sum of two terms, one based on the correlation of the deformed signals, and the other based on the smoothness of the deformation.

In general, the energy functional E(V) can be highly non-convex, so that ordinary optimization methods become trapped in local minima. Optimization by simulated annealing can be attempted, but at severe computational expense. Instead, we rely on continuation methods to solve the problem.
By introducing a scale parameter σ, the minimization problem is embedded within a larger space. A suitable minimum can be achieved relatively easily for large σ because the signals, and hence the energy landscape, are very smooth. The solution of the original minimization problem is then obtained by continuously tracking the minimum as σ tends to zero. This is analogous to a coarse-to-fine tracking of extrema through scale-space in the sense of [Witkin, 1983].

The entire procedure consists of solving the first-order dynamic system

σ̇ = −k₁ exp(−k₂ |∇E(V, σ)|),    V̇ = −∇E(V, σ),

where the dot denotes a time derivative, σ is the scale parameter, and k₁ and k₂ are constants. Given an initial crude estimate for V at a coarse scale σ₀, the system minimizes E at σ₀ and follows a trajectory of minima through finer scales, thereby increasing the resolution of V. Any of a number of well known numerical techniques can be used to solve for the trajectory.

Through a series of incremental deformations, correlations of deformed signals are optimized and balanced against the smoothness of the deformations while moving from coarse to fine scale. Thus, the first-order system compactly unifies a number of important yet seemingly disparate signal-matching notions.

In the remainder of this section, the relation of our technique to previous work on matching is discussed. Then in section 2, a framework for the minimization problem is introduced. In section 3, the solution of the problem by continuation methods and the resulting single differential equation are developed. Section 4 describes the specific similarity term employed and, in section 5, the details of the smoothness term are discussed. Finally, section 6 presents several examples of matching results for one- and two-dimensional signals.

A. Background

An enormous amount of work has been done on signal matching, giving precedent for several components of our approach.
Optimization of constrained deformations guided by correlation or L₂ metrics can be found in prior work. In speech recognition, the problem of time warping speech segments to match input utterances with stored prototypes has been addressed in this context. Dynamic programming has been used to compute constrained warping functions (see [Rabiner and Schafer, 1978] and [Sankoff and Kruskal, 1983], Part II). This particular optimization technique is readily applicable in matching situations involving sequentially ordered signals, such as speech, and unilateral continuity constraints. However, its stringent requirements on the energy functional appear incompatible with the unordered multi-dimensional signals and isotropic smoothness constraints which are of primary concern to us.

Smoothness constraints have been popular in computational vision. Consider the important problem of stereo matching. In the past, dense disparity maps have been computed through a two step process of local matching followed by smooth [Grimson, 1983], multiresolution [Terzopoulos, 1983], or piecewise continuous [Terzopoulos, 1986] surface reconstruction from the sparse disparities. The approach in the present paper unifies matching and piecewise smooth reconstruction into a single iterative optimization process.

Broit’s [1981] work in registering a deformed image to a model image resembles ours, in that matching is explicitly formulated as a minimization problem involving a cost functional that combines both a deformation constraint and a similarity measure. His deformation model, which involves the strain energy of a globally smooth elastic body, is more elaborate than the deformation constraints inherent in the spring loaded subtemplate matching technique of Fischler and Elschlager [1973] or the iterative Gaussian smoothed deformation models proposed by Burr [1981]. Our controlled-continuity deformation model provides us
From: AAAI-86 Proceedings.
Copyright ©1986, AAAI (www.aaai.org). All rights reserved.
with the additional capability to regulate the order of smoothness and to preserve discontinuities in the deformation.

Coarse-to-fine matching schemes have previously been treated as a multistage process in which a matching operation is performed at each successive level [Mori, et al., 1973; Hannah, 1974; Moravec, 1977; Marr and Poggio, 1979; Gennery, 1980]. We have extended this idea into a matching process which evolves continuously towards finer spatial scale. The idea of progressing continuously through scale space derives from Witkin [1983].

As our matching process computes the deformation iteratively, it is best to perform the similarity measurements by deforming the signals according to the current approximation of the deformation. This concern has also been addressed by the matching algorithms described in [Mori, et al., 1973; Burr, 1981; Broit, 1981; Quam, 1984].

II Framework

Consider a vector of n similar signals f(x) = [f₁(x), ..., f_n(x)] defined in d-dimensional space x = [x₁, ..., x_d] ∈ ℜ^d, and a deformation mapping V : ℜ^d → ℜ^{nd}, such that V(x) = [v₁(x), ..., v_n(x)], where each of the n disparity functions v_i : ℜ^d → ℜ^d is a vector-valued function v_i(x) = [v₁(x), ..., v_d(x)]^T. Given a set of deformed signals f' such that f'(x) = f(V*(x)), the matching problem is to recover the deformation V*(x).

Suppose that the similarity between the signals f for a given deformation V is measured by a functional Q(V) : ℜ^{nd} → ℜ bounded from above by a value achieved by the best possible match. A reasonable objective is to find the deformation U which maximizes the quality of the match; i.e., to minimize −Q(V) over possible deformations V. Thus, U represents an optimal approximation to V*.
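As a small illustration of this setup (one signal, d = 1, with an assumed smooth warp), the deformed signal f'(x) = f(v*(x)) can be produced by resampling:

```python
import numpy as np

# Illustrative only: a bump signal f and a hypothetical deformation v*.
x = np.linspace(-4.0, 4.0, 200)
f = np.exp(-x**2)
v_star = x + 0.5 * np.exp(-x**2 / 4)      # a smooth warp of the axis
f_prime = np.interp(v_star, x, f)         # f'(x) = f(v*(x)) by resampling
# Resampling through v* reproduces f evaluated at the warped positions,
# up to linear-interpolation error on this grid.
err = np.max(np.abs(f_prime - np.exp(-v_star**2)))
```

The matching problem runs this construction in reverse: given f and f', recover v*.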
This minimization problem is clearly ill-posed in the absence of constraints on admissible deformations, since, e.g., degenerate or chaotic deformations can always be contrived that achieve the minimum value. Such constraints may be encoded by a second functional S(V) : U^{nd} → ℜ, where U^{nd} ⊂ ℜ^{nd} is the subset of admissible deformations. Useful instances of similarity and constraint functionals will be formulated shortly. Their combination, however, leads to the following minimization problem: Find the deformation U ∈ U^{nd} such that E(U) = inf_{V∈U^{nd}} E(V), where the energy functional is given by

E = −(1 − λ)Q − λS (1)

and where λ ∈ (0, 1) is a weighting parameter.

Stabilization offers a general approach to a numerical solution through the construction of a discrete dynamic system whose fixed points include a discrete solution of the above optimization problem [Bakhvalov, 1977]. A simple dynamic system with this property is characterized by the differential equations

V̇ + ∇E = 0, (2)

where the dot denotes differentiation with respect to time t and ∇E denotes the gradient of E with respect to the free variables of the discrete deformation. Optimization occurs by dissipation of energy; energy cannot increase along the system’s trajectory V(x, t) in ℜ^{nd}, which follows the direction of the gradient of E. Although the trajectory terminates at a local minimum of E, there is no guarantee that the global minimum U will be attained by solving this initial value problem starting from an arbitrary initial condition V(x, 0).

III Continuation over Scale

The key remaining difficulty is that for obvious choices of Q, such as linear correlation, E is likely to have many local minima, making the minimization problem highly non-convex and therefore extremely difficult to solve. There are two options: solving this hard problem directly (for example by simulated annealing) or simplifying the problem by choosing Q to be convex or nearly so.
We pursue the second option because annealing is expensive.

A. Continuation Methods

Q may be smoothed by subjecting f to a smoothing filter of characteristic width σ. We observe empirically that the best solution for V as σ increases tends to be an increasingly smoothed version of the correct solution. This means that slightly deblurring V by reducing σ produces a slightly better solution close to the one just obtained. To the extent this is so, we can solve the problem using equation 2 by means of continuation methods [Dahlquist and Björck, 1974].

Continuation methods embed the problem to be solved, g(v) = 0, in a family of problems g(v, s) = 0, parameterized by s. Let s_{i+1} = s_i + Δs, let g(v, s_n) be the problem we wish to solve, presumably difficult, and g(v, s₁) a readily solvable member of the family, and let u(s) = H(g, s, v₀) be the solution for g at s given v₀ as an initial condition. Then u(s_{i+1}) is obtained from u(s_i) by the iteration

u(s_{i+1}) = H(g, s_{i+1}, u(s_i)); i = 1, ..., n − 1;

that is, each solution is used as an initial condition to obtain the next one. For the current problem, the continuation parameter is σ, with Δσ < 0. We continue from an initial coarse scale σ₁ and an initial guess V₁ by

V_{i+1} = H(V̇ + ∇E, σ_{i+1}, V_i),

to a fine scale σ_n and a final answer V_n.

To visualize this method, imagine the energy landscape at each value of σ as a contoured surface in 3-space. The surfaces are stacked one above the other, so that the topmost surface is very smooth, while the lower ones become increasingly bumpy. Imagine a hole drilled at each local minimum on each surface. A ball bearing dropped onto the topmost surface will roll down to the bottom of the hill. At this point, it falls through to the next level, rolls down again, falls through again, and so on to the bottom.

B. A Scale Space Equation

This iteration solves a separate initial value problem at each step.
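The stepwise continuation just described can be sketched on a toy problem: recovering a single global shift v between two 1-D bumps by gradient descent on a smoothed squared-difference energy, warm-starting each scale from the previous answer. The energy, signals, and scale schedule below are assumptions for illustration, not the paper's E, Q, or S.

```python
import numpy as np

def smooth(sig, sigma, x):
    # Gaussian smoothing via an explicit row-normalized kernel matrix.
    k = np.exp(-0.5 * ((x[:, None] - x[None, :]) / sigma) ** 2)
    return (k / k.sum(axis=1, keepdims=True)) @ sig

def match_by_continuation(f1, f2, x, sigmas, step=0.05, iters=300):
    # H(g, sigma, v0): plain gradient descent on the energy at this scale,
    # started from the previous (coarser) scale's answer.
    v, dx = 0.0, x[1] - x[0]
    for sigma in sigmas:                    # coarse -> fine schedule
        g1, g2 = smooth(f1, sigma, x), smooth(f2, sigma, x)
        for _ in range(iters):
            shifted = np.interp(x + v, x, g2)       # g2(x + v)
            dg2 = np.gradient(shifted, x)           # d/dv of g2(x + v)
            v -= step * 2 * np.sum((shifted - g1) * dg2) * dx
    return v

x = np.linspace(-8.0, 8.0, 161)
f1 = np.exp(-x**2)                # bump at 0
f2 = np.exp(-(x - 2.0)**2)        # same bump shifted to 2
v = match_by_continuation(f1, f2, x, [3.0, 1.5, 0.75, 0.3])
```

At the coarsest scale the two heavily blurred bumps overlap, so descent finds the broad basin; each finer scale then only refines the warm-started answer, which is the point of the method.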
A more attractive alternative is to collapse the continuation over σ into a single differential equation. Ideally, the solution should follow a curve V(σ) satisfying |∇E(V(σ))| = 0; i.e., a continuous curve of solutions over scale. A differential equation for this curve is

V_σ = −(∇E)_σ (∇∇E)^{−1}. (3)

The solution to this equation tracks a given coarse-scale solution continuously to fine scale, in precise analogy with the coarse-to-fine tracking through scale space of [Witkin, 1983]. Unfortunately, it is impractical to solve this equation for arbitrary S and Q, since ∇∇E is high dimensional.

To construct an approximate equation, we introduce the quantity

N = −k₁ e^{−k₂|∇E|}, (4)

so that N = −k₁ at a solution to equation 2, diminishing with distance from the solution at a rate determined by the space constant k₂. The equation

σ̇ = N,    V̇ = −∇E (5)

approximates the desired behavior. Far from a solution, where N is small, equation 5 approaches equation 2, changing V but not σ. Approaching a solution, σ begins to decrease. At a solution, V̇ = 0 and σ̇ = −k₁. From an initial V(t₀), σ(t₀), the solution V(t), σ(t) moves through V at nearly constant scale until a minimum in E is approached, then it begins descending in scale, staying close to a solution.¹

A local measure of similarity over position can be used to build the similarity functional: if K_{i,j}(x) is a local measure of the similarity of f_i(v_i(x)) and f_j(v_j(x)) around x, then Q_{i,j} = ∫ K_{i,j}(x) dx is a global measure of similarity for f_i and f_j. By simply adding up pairwise similarities, a global measure of similarity can be constructed for n signals: Q = Σ_{i≠j} Q_{i,j}.

A number of possibilities exist for the local similarity measure K_{i,j}(x). Normalized cross-correlation produces good results for several matching problems that we have examined.
If W_γ(x) is a window function, where γ denotes the width parameter,

μ_i(x) = ∫ f_i(v_i(x − y)) W_γ(y) dy,

and

ν_i(x) = ∫ [f_i(v_i(x − y)) − μ_i(x − y)]² W_γ(y) dy,

then the normalized cross-correlation can be written

K_{i,j}(x) = [ν_i(x) ν_j(x)]^{−1/2} ∫ {[f_i(v_i(x − y)) − μ_i(x − y)] [f_j(v_j(x − y)) − μ_j(x − y)] W_γ(y)} dy.

The resulting functional Q(V) generally has many local minima. In order to apply the continuation method, we compute Q for signals f which have been smoothed by Gaussians of standard deviation σ. The resulting functional Q_σ(V) can then be made as smooth as is desired. Equation 5 finds a solution at the initial scale, then tracks it continuously to finer scales. To use equation 5, we choose a coarse scale σ(t₀), a crude initial guess V(t₀), and a terminal fine scale σ_T. We then run the equation until σ(t) = σ_T, taking V(t) as the solution.

C. Ambiguous Solutions

From time to time, we expect to encounter instabilities in the solutions of equation 5, in the sense that a small perturbation of the data induces a large change in the solution curve’s trajectory through scale space. These instabilities correspond to bifurcations of the solution curve, analogous to bifurcations that can be observed in Gaussian scale space. We have considered two approaches to dealing with them. First, by adding a suitable noise term to E, equation 5 becomes a hybrid of scale space continuation and simulated annealing. We believe that local ambiguities can be favorably resolved using low-amplitude noise, hence with little additional computational cost. A second approach is to regard these instabilities as genuine ambiguities whose resolution falls outside the scope of the method. In that case, a set of alternative solutions can be explored by the addition of externally controlled bias terms to E. These terms can reflect outside constraints of any kind, for example, those imposed by the operation of attentional processes.
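A discrete form of this windowed normalized cross-correlation might look as follows; the Gaussian window, the small guard term in the denominator, and the assumption that the inputs are the already-deformed, resampled signals f_i(v_i(x)) and f_j(v_j(x)) are all illustrative choices.

```python
import numpy as np

def ncc(a, b, gamma):
    # Windowed normalized cross-correlation K_ij (hypothetical discrete
    # sketch). a, b: the deformed, resampled signals; W_gamma: a
    # normalized Gaussian window of width gamma.
    half = int(3 * gamma)
    w = np.exp(-0.5 * (np.arange(-half, half + 1) / gamma) ** 2)
    w /= w.sum()
    mu_a = np.convolve(a, w, mode="same")       # windowed means
    mu_b = np.convolve(b, w, mode="same")
    nu_a = np.convolve((a - mu_a) ** 2, w, mode="same")   # windowed variances
    nu_b = np.convolve((b - mu_b) ** 2, w, mode="same")
    cov = np.convolve((a - mu_a) * (b - mu_b), w, mode="same")
    return cov / np.sqrt(nu_a * nu_b + 1e-12)
```

Identical signals score near +1 everywhere (away from the boundary), and the window width γ plays exactly the role discussed in the text: it must be wide enough for stable mean and variance estimates.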
In the following sections, we turn to specific choices for S and Q.

IV Similarity Functional

In general the similarity measure Q should capture what is known about the specific matching problem. In many cases, the undeformed signals are sufficiently similar that a simple correlation measure suffices. In this section we formulate a generic choice for this class of problems. Note that, by assumption, it is the undeformed signals f(x) which are similar, so the quality of a potential solution V should be measured by the similarity of the signals f(V(x)).

Consider the case of two signals f_i and f_j. A general family of similarity measures is obtained by integrating a local measure of similarity over position.

¹The solution to equation 5 oscillates around the exact solution (equation 3) with frequency and amplitude controlled by k₁ and k₂. This oscillation can be damped by the addition of second order terms in t, but we have not found it necessary to do so in practice.

The correlation window size W_γ should be large enough to provide an accurate local estimate of the mean and variance of the signals, but small enough that non-stationarities in the signals do not become a problem. A convenient way to set γ to a reasonable value is to make it a fixed multiple of the average autocorrelation widths of the smoothed signals. Then γ can be regarded as a function of σ. Note that Q_σ(V) must be recomputed at each iteration, with the signals resampled to reflect the current choice of V. If the deformation is very small, the distortion induced by failing to resample can be ignored, but the value of such resampling in stereo matching, for example, is well established [Mori, et al., 1973; Quam, 1984]. The Gaussian signal smoothing should also take place on the resampled functions f(V(x)).

V Smoothness Functionals

The functional S(V) places certain restrictions on admissible deformations in order to render the minimization problem better behaved.
Perhaps the simplest possible restriction, and one that has been used often in the past, is to limit possible disparities between signals to prespecified ranges. A deformation can then be assigned within disparity bounds on a point by point basis according to maximal similarity criteria. Although simple, such limited searches are unfortunately error prone, since they are based on purely local information.

This problem can be resolved by imposing global constraints on the deformation that are more restrictive yet remain generic. Such constraints may be based on a priori expectations about deformations; for example, that they are coherent in some sense. In particular, admissible deformations may be characterized according to the controlled-continuity constraints defined in [Terzopoulos, 1986]. These constraints, which are based on generalized splines, restrict the admissible space of deformations to a class of piecewise continuous functions. Not only is the deformation’s order of continuity controllable, but discontinuities of various orders (e.g., jump, slope, and curvature discontinuities) are permitted to form, subject to an energy penalty. A general controlled-continuity constraint is imposed on the deformation by the functional.

Figure 2 shows a more challenging example in which 4 signals are matched simultaneously. The signals are intensity profiles from a complex natural image. On the left the four signals are shown superimposed at several points in the matching process. As before, the original signals appear at the top and the final result is at the bottom. Note that coarse-scale features are aligned first in the matching process while fine scale features are matched later. The four corresponding deformation functions v_i(x) are shown to the right.

C. Motion Sequence

Figure 3 shows two frames from a motion sequence showing M. Kass moving against a stationary background. The frames are separated in time by about 1.5 seconds.
Results of the matching process are shown as follows: the original image has been mapped onto a surface that encodes estimated speed as elevation. The raised area shows the region in which the algorithm detects motion.

D. Stereo Matching

Figure 4 contains a stereogram showing a potato partly occluding a pear. The matching results are rendered as two shaded surfaces with depth computed from the disparity. An image coordinate grid is mapped onto the first surface and the left image is mapped onto the second. The reconstructed surfaces are rendered from an oblique viewpoint showing the computed surface discontinuities. Only those portions of the scene visible in the original stereogram are shown.

VII Conclusion

The main contribution of this paper is two-fold. First, we introduced the notion of tracking the solution to the matching problem continuously over scale. Second, we developed a single system of first-order differential equations which characterizes this process. The system is governed by an energy functional which balances similarity of the signals against smoothness of the deformation. The effectiveness of this approach has been demonstrated for both one- and two-dimensional signals.

Acknowledgements

We thank Al Barr for introducing us to continuation methods, and for helping us with numerical solution methods for differential equations. Keith Nishihara provided us with stereo correlation data.

References

Bakhvalov, N.S., Numerical Methods, Mir, Moscow (1977).

Broit, C., “Optimal Registration of Deformed Images,” Ph.D. Thesis, Computer and Information Science Dept., University of Pennsylvania, Philadelphia, PA (1981).

Burr, D.J., “A dynamic model for image registration,” Computer Graphics and Image Processing, 15 (1981), 102-112.

Dahlquist, G., and Björck, A., Numerical Methods, N. Anderson (trans.), Prentice-Hall, Englewood Cliffs, NJ (1974).
Fischler, M.A., and Elschlager, R.A., "The representation and matching of pictorial structures," IEEE Trans. Computers, C-22 (1973) 67-92.
Gennery, D.B., "Modeling the Environment of an Exploring Vehicle by Means of Stereo Vision," Ph.D. thesis, Stanford Artificial Intelligence Laboratory, also Artificial Intelligence Laboratory Memo 339 (1980).

Figure 3: Two frames from a motion sequence about 1.5 seconds apart. The original image has been texture mapped onto a surface that encodes speed as elevation. The raised area is moving while the background remains stationary.

Figure 1: Matching two one-dimensional signals. The signals are measurements of the resistivity of a geological structure as a function of depth at two different locations. From top to bottom, the signals first appear in their original form, then partially deformed at intermediate stages of the matching process, finally showing the end result. Above the signals are shown the deformation function v and the correlation gradient ∇Q.

The positive integer p indicates the highest order generalized spline that occurs in the functional, and this determines the maximum order of continuity (p-1) of the admissible deformations. The nonnegative continuity control functions w(x) = [w₀(x), ..., w_p(x)] determine the placement of discontinuities. A discontinuity of order q < p is permitted to occur at x₀ by forcing w_i(x₀) = 0 for i > q (see [Terzopoulos, 1986] for details). The p = 2 order controlled-continuity constraint is employed in our implementations to date.
If, for convenience, a "rigidity" function ρ(x) and a "tension" function [1 - τ(x)] are introduced such that w₀(x) = 0, w₁(x) = ρ(x)[1 - τ(x)], and w₂(x) = ρ(x)τ(x), then it is natural to view the functional as characterizing "generalized piecewise continuous splines under tension." In particular, for the case of n signals in 1 dimension, the functional reduces to

S(v) = (1/2) ∫ ρ(x){[1 - τ(x)]|v_x|² + τ(x)|v_xx|²} dx,

while for the case of 2 signals in 2 dimensions, it becomes

S(v) = (1/2) ∫∫ ρ(x,y){[1 - τ(x,y)](|v_x|² + |v_y|²) + τ(x,y)(|v_xx|² + 2|v_xy|² + |v_yy|²)} dx dy,

where x₁ = x and x₂ = y.

Figure 2: Simultaneous matching of four signals. The signals are intensity profiles from a complex natural image. On the left the four signals are shown superimposed at several points in the matching process. As before, the original signals appear at the top and the final result is at the bottom. Note that coarse-scale features are aligned first in the matching process while fine-scale features are matched at the end. The four corresponding deformation functions v_i(x) are shown to the right.

VI Results

A. Implementation Notes

Discretization of the continuous variational form of the matching problem can be carried out using standard methods. Although finite element methods offer the greatest flexibility, for simplicity we employ standard multidimensional finite difference formulas for uniform meshes to approximate the spatial derivatives in S(v). These approximations yield local computations analogous to those in [Terzopoulos, 1983]. Equation 5 is a standard first-order initial value problem, for which solution methods abound. We have employed numerical methods of varying sophistication, each giving satisfactory results. In order of sophistication, these include Euler's method, a fourth-order Runge-Kutta method, and Adams-Moulton predictor-corrector methods.
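As a concrete illustration of the simplest of these integration schemes, the sketch below applies forward Euler to a generic first-order gradient-flow system dv/dt = -∇E(v). This is a minimal hypothetical sketch: the toy quadratic energy stands in for the matching functional, whose actual gradient couples the similarity and smoothness terms described above.

```python
import numpy as np

def euler_gradient_flow(grad, v0, dt=0.01, steps=1000):
    """Integrate the first-order system dv/dt = -grad(v) with forward Euler.

    `grad` maps the current deformation estimate v to the gradient of the
    energy; each step moves v a small distance downhill."""
    v = np.asarray(v0, dtype=float).copy()
    for _ in range(steps):
        v = v - dt * grad(v)
    return v

# Toy stand-in energy E(v) = 0.5 * ||v - target||^2, whose gradient is
# (v - target); the flow converges to the energy minimum at `target`.
target = np.array([1.0, -2.0, 0.5])
v_final = euler_gradient_flow(lambda v: v - target, np.zeros(3))
```

A fixed step size suffices for this toy energy; as the text notes, predictor-corrector methods earn their keep by adapting the step size automatically.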
The latter offer the advantage that the step size can be automatically adapted, making them particularly robust [Dahlquist and Björck, 1974].

B. One-dimensional Signals

The method is applicable to matching n signals, each of which is d-dimensional. Figure 1 shows the simplest case, that of matching two one-dimensional signals. The signals are measurements of the resistivity of a geological structure as a function of depth at two different locations. From top to bottom, the signals first appear in their original form, then partially deformed at intermediate stages of the matching process, finally showing the end result. The deformation function v and the correlation gradient ∇Q are shown above the signals.

Figure 4: A stereogram showing a potato partly occluding a pear (the images are reversed for free fusing). The matching results are rendered as two shaded surfaces with depth computed from the disparity. An image coordinate grid is mapped onto the first surface and the left image is mapped onto the second. The reconstructed surfaces are rendered from an oblique viewpoint showing the computed surface discontinuities. Only those portions of the scene visible in the original stereogram are shown.

Grimson, W.E.L., "An implementation of a computational theory of visual surface interpolation," Computer Vision, Graphics, and Image Processing, 22 (1983) 39-69.
Hannah, M.J., "Computer matching of areas in stereo images," Stanford Artificial Intelligence Laboratory Memo, AIM-239, July (1974).
Marr, D., and Poggio, T., "A theory of human stereo vision," Proc. Roy. Soc. Lond. B, 204 (1979) 301-328.
Moravec, H.P., "Towards automatic visual obstacle avoidance," Fifth International Joint Conference on Artificial Intelligence, Cambridge, Massachusetts (1977) 584.
Mori, K., Kidode, M., and Asada, H., "An iterative prediction and correction method for automatic stereo comparison," Computer Graphics and Image Processing, 2 (1973) 393-401.
Quam, L.H., "Hierarchical warp stereo," Proc. Image Understanding Workshop, New Orleans, LA, October, 1984, 149-155.
Rabiner, L.R., and Schafer, R.W., Digital Processing of Speech Signals, Prentice-Hall, NJ (1978).
Sankoff, D., and Kruskal, J.B. (eds.), Time Warps, String Edits, and Macromolecules: The Theory and Practice of Sequence Comparison, Addison-Wesley, Reading, MA (1983).
Terzopoulos, D., "Multilevel computational processes for visual surface reconstruction," Computer Vision, Graphics, and Image Processing, 24 (1983) 52-96.
Terzopoulos, D., "Regularization of inverse visual problems involving discontinuities," IEEE Trans. Pattern Analysis and Machine Intelligence, PAMI- (1986).
Witkin, A., "Scale Space Filtering," Proceedings of International Joint Conference on Artificial Intelligence, Karlsruhe (1983) 1019-1021.
A GRAPH-ORIENTED KNOWLEDGE REPRESENTATION AND UNIFICATION TECHNIQUE FOR AUTOMATICALLY SELECTING AND INVOKING SOFTWARE FUNCTIONS

William F. Kaemmerer and James A. Larson
Honeywell Computer Sciences Center
Artificial Intelligence Section
1000 Boone Avenue North
Golden Valley, Minnesota 55427

ABSTRACT

An interface to information systems that can automatically select, sequence, and invoke the sources needed to satisfy a user's request can have great practical value. It can spare the user from the need to know what information is available from each of the sources, and how to access them. We have developed and implemented a graph-oriented technique for representing software modules and databases, along with unification and search algorithms that enable an interface to perform this automatic programming function. The approach works for a large class of useful requests, in a tractable amount of run time. The approach permits the logical integration of pre-existing batch application programs and databases. It may also be used in other situations requiring automatic selection of software functions to obtain information specified in a declarative expression.

I INTRODUCTION

Users of computerized information systems often need to access multiple sources of information and multiple software programs in the course of performing a single, practical task. For example, a bank loan officer may need to access credit records, automobile book values, and amortization software to determine whether to grant a car loan. The use of multiple systems can burden users with the task of choosing which system to invoke to obtain each piece of desired information, and with the mechanical details of obtaining and combining intermediate results. A means is needed by which a person can access diverse information sources and software functions without being distracted by these details.
A way to meet this need is to provide a user-system interface that allows a person to access diverse information sources as if they were a single, virtual information system. We have developed and implemented an algorithm that automatically selects and sequences the "servers" needed to respond to a request for information stated in server-independent terms. (We use the term "server" to refer collectively to pre-existing batch application software as well as databases residing under database management systems.) The output consists of a series of expressions sufficient to invoke the servers and obtain the desired information.

II PROBLEM DEFINITION AND TERMINOLOGY

We use the term "server-unit" to refer to each retrievable unit of data (e.g., each type of tuple in a relational database) or each invokable function provided by an individual server. Our justification for applying the same term to data and functions is the observation that an invokable function of a server can also be considered a type of retrievable unit of data: it may be represented as a virtual relation between its input and output arguments. Each individual server may provide multiple server-units. For a relational database management system, a server-unit corresponds to each of the relations in the database. For an application program, a server-unit corresponds to each entry point of the program. We view the functions and information available from a set of servers as collectively defining the "capability space" of a single, virtual server (Ryan and Larson, 1986). A representation of this space is derived by merging the representations of the server-units for each of the actual servers. Given a means of representing the semantics of the information collectively available from the server-units, the user may request information in server-independent terms by declaratively expressing the desired result in terms of the capability space.
Satisfying the request is then a matter of finding and sequencing a set of server-units that is a procedural equivalent to the user's declarative expression. The basic problem we have addressed is as follows: given a set of servers and a user's request expressed in server-independent terms, how can server-units be automatically selected and invoked to satisfy the user's request? Solving this problem involves solving three subproblems:

a. The knowledge representation problem--Given a collection of servers, represent the data and functions supported by the servers. Essentially, the problem is to represent the semantics and relationships among entities in the capability space, and to define server-units in terms of that space.

b. The formulation problem--Given a request expressed in terms of the capability space, transform the request into an equivalent one expressed in terms of server-units.

c. The planning/execution problem--Given a request re-expressed in terms of server-units, determine a sequence in which to invoke those server-units that will obtain the information that satisfies the request.

The focus of this paper is a solution to the formulation problem. We discuss the knowledge representation approach to the extent necessary in this context. The planning/execution problem is one of finding a sequence for invocations that is sufficient to yield the proper result, then optimizing the sequence for execution efficiency. The optimization step is beyond the scope of this paper.

III RELATED WORK

Among major approaches to automatic programming (e.g., construction by theorem provers [Nilsson, 1980], knowledge-based program construction [Barstow, 1979], etc.), our work takes the "transformation" approach (e.g., [Burstall and Darlington, 1977]), in which an expression of a problem is successively transformed into a more specific form.

From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved.
Kim (1985) used a transformation approach to generate examples given a constraint formula expressed as a conjunction of predicates. In both Kim's work and ours, the goal is to return sets of variable bindings which satisfy the input expression. Kim's approach reduces the constraint formula to simpler terms for which known examples are stored, or from which variables' values can be found by algebraic solution. The stored examples for individual terms are tested to find those that satisfy the entire constraint expression; those that survive the test are combined to generate the desired result. This approach is not well suited to accessing databases and application programs, however, since it is not feasible to generate results by successively testing each database record or potential application output for consistency with the input expression. Gray and Moffat (1983) developed a method for transforming requests for information expressed as relational algebra queries into programs to access Codasyl databases. In their approach, multiple access paths are stored for each database relation, giving the alternative sequences in which the data items corresponding to the columns of the relation can be found. Combinations of access paths for the relations involved in a query are tested to find a combined path equivalent to the relational joins in the query. Our work is similar, in that we dynamically generate the necessary joins by finding the "overlap" of items involved in relations. However, Gray and Moffat assume that the user's request has specified the particular relations to be used; they then generate an efficient way to access them. In contrast, our work focuses on how to identify the particular database relations (and application programs) to be used, given a request expressed in server-independent terms.
In general, distributed database management systems (Ceri and Pelagatti, 1984) handle server-independent queries by replacing each object with its equivalent server-specific object, using a list of mappings from server-independent to server-dependent terms. This step, sometimes called "query modification" (Stonebraker, 1975), is used to solve the formulation problem for distributed databases. A problem with the query modification approach, however, is its requirement that a potentially large number of mappings be explicitly stored. The graph-oriented searching and unification technique we present avoids this problem, and provides a way to achieve the logical integration of application programs and databases. The technique may be useful in other situations that require the selection of primitive functions to solve problems stated at a higher level.

IV GRAPH REPRESENTATION OF SERVER-UNITS AND REQUESTS

A key to our approach is that the semantics of both users' requests and the information available from the server-units comprising the capability space can be expressed using graphs. We represent the semantic relationships among the information provided by server-units in structures comparable to the conceptual graphs described by Sowa (1984). Several types of nodes exist in our graphs. "Concept nodes" are one-place predicates, C(x), denoting that entity x is a member of the class of entities C. "Role nodes" are two-place predicates, R(x,y), denoting that entity x bears relation R to the entity y (x and y are implicitly existentially quantified). The C and R predicates meaningful in the domain are derived from a hierarchically structured, slot-and-frame-based domain model (cf. Brachman, 1983) which provides the definitions of the corresponding concepts and roles. "Selection nodes" are two-place predicates, S(x,c), serving to restrict the entities denoted by x to those for which the relation S between x and c holds.
For example, the selection node EQUAL(name,"John") restricts name to be equal to "John". Finally, a "function node" is a multiple-place predicate specifying that the named functional relation holds among its arguments, e.g., SUM(x,y,sum). Currently, we restrict function nodes to simple arithmetic functions. A connected graph is formed by a collection of nodes such that each node shares at least one argument with another node. Each predicate is a node in the graph. Each arc connects an argument that is common to two nodes. A connected graph represents an expression that is interpreted as the conjunction of the predicates that are the nodes of the graph. Thus, the graph:

person(x), birthday-of(x,d), date(d), name-of(x,n), string(n), equal(n,"John")

denotes the set of x,d,n combinations such that x is a person, d is a date that is that person's birthday, n is a string that is that person's name, and n is equal to "John". A server-unit is represented as a single predicate with an (arbitrary) predicate name, and a list of formal arguments. The semantics of the server-unit are represented by asserting that the server-unit predicate is equivalent to the appropriate graph. For example, a relation in a relational database between a person's name and birthday is represented as follows:

birthday-relation(n,d) <=> person(x), birthday-of(x,d), date(d), name-of(x,n), string(n)

In general, a server-unit predicate for a database relation has as many arguments as there are columns in the relation. A program that computes w, the day of the week on which the date d falls, is represented as follows:

day-of-week(d,w) <=> date(d), weekday-of(d,w), string(w)

A server-unit predicate for an entry point of an application program has as many arguments as there are input and output arguments in that entry point. Attached to the server-unit predicate are properties giving the owners of the server-unit, its mandatory inputs, and its available outputs.
The "owners" property is the list of servers which provide the server-unit (several servers may provide the same information). The "mandatory inputs" property identifies which of the predicate's formal arguments must be bound or restricted before the server-unit may be meaningfully invoked. For database relations, this is nil, since no selection conditions need be specified (e.g., when retrieving all tuples in the relation). For a software function, the mandatory inputs property identifies the input arguments that must be supplied to the function. The "available outputs" property identifies which of the predicate's formal arguments are available as output from the invocation. For database relations, this is all columns of the relation. For a software function, the available outputs are the output arguments of the function. For example, the mandatory inputs property of the day-of-week program is the list (d), and the available outputs property is the list (w). A user's request for information is encoded as an expression with a "head" and a "body." The head is a predicate with an arbitrary name, and an argument list specifying the arguments to be returned. The body of the request is a set of nodes forming a connected graph. For example, a user's request for the day of the week on which "John" was born may be represented as follows:

answer(w) :- person(x), birthday-of(x,d), date(d), name-of(x,n), string(n), equal(n,"John"), weekday-of(d,w), string(w)

If the argument list for the head predicate of the request is empty, the request is considered to be a "true/false" question. Otherwise, it is considered to be a request for all possible sets of variable bindings that can be found that satisfy the semantics of the body of the request. The variables that are the first arguments of selection nodes, if any, are called the "known" variables of the request.
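The server-unit and request structures just described can be rendered as plain data. The following is a hypothetical Python encoding of the paper's birthday example (the paper's own prototype was written in Zetalisp); graph nodes are (predicate-name, argument-list) pairs.

```python
birthday_relation = {                      # a database relation
    "args": ["n", "d"],
    "graph": [("person", ["x"]), ("birthday-of", ["x", "d"]),
              ("date", ["d"]), ("name-of", ["x", "n"]), ("string", ["n"])],
    "mandatory_inputs": [],                # nil: no inputs required
    "available_outputs": ["n", "d"],       # all columns of the relation
}

day_of_week = {                            # an application program entry point
    "args": ["d", "w"],
    "graph": [("date", ["d"]), ("weekday-of", ["d", "w"]), ("string", ["w"])],
    "mandatory_inputs": ["d"],             # d must be bound before invoking
    "available_outputs": ["w"],
}

# The request: the day of the week on which "John" was born.
request = {
    "head": ("answer", ["w"]),
    "body": [("person", ["x"]), ("birthday-of", ["x", "d"]), ("date", ["d"]),
             ("name-of", ["x", "n"]), ("string", ["n"]),
             ("equal", ["n", "John"]),     # selection node: n is a "known" variable
             ("weekday-of", ["d", "w"]), ("string", ["w"])],
}

# Every non-selection predicate name in the request body appears in some
# server-unit graph, so the request can potentially be covered.
server_units = [birthday_relation, day_of_week]
unit_names = {name for su in server_units for name, _ in su["graph"]}
coverable = all(name in unit_names
                for name, _ in request["body"] if name != "equal")
```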
Note that although this representation is similar to PROLOG, the body of the request is not processed by a software interpreter. Rather, the formulation problem is to find an expression in terms of server-units that is equivalent to the body of the request. From this expression, a sequence of server-unit invocations is readily generated. The representation allows the formulation problem to be solved by searching the set of server-unit graphs to find a set of graphs that collectively matches the body of the user's request, exclusive of the selection nodes. The selection nodes are then used to supply the input arguments to the server-units selected through this matching process.

V GRAPH UNIFICATION

Transforming a request expressed in terms of the capability space into an equivalent expression in terms of server-units is accomplished by unifying server-unit graphs and the body of the request until the request body is completely "covered." A node in the request body is considered covered when it is unified with a node from the graph for at least one server-unit. The request body is considered covered when all its non-selection nodes are covered. Once the body is covered, its non-selection nodes are replaced by the set of server-unit predicates that are equivalent to the server-unit graphs that have been unified with the request. The result is an expression, consisting of server-unit predicates and selection nodes, that is equivalent to the user's request. When matching server-unit graphs to the body of a request, we do not require the entire server-unit graph to "unify" with the request body. Rather, we merge the server-unit graph with the request body only if at least one role or function node unifies with the request body. (This requirement prevents the inclusion of server-units that contribute nothing to coverage of relationships among variables in the request.)
The additional nodes of the server-unit graph, not present in the request body, become available for unification with further server-unit graphs, as the algorithm proceeds in its search to cover the request graph. This allows the algorithm to find connections among server-units that are necessary to solve the user's problem, but which are not explicit in the original request. ("Hidden database joins" [Sowa, 1984] are one class of such necessary connections.) We define the unification of a server-unit graph to the body of a request by giving the conditions under which a node from a server-unit graph may be unified to a node in the request body. Nodes may only be unified to nodes of the same type (concept to concept, role to role, etc.) that have the same predicate name. (An extension of this approach, allowing unification of nodes to superordinate nodes in a type hierarchy, is beyond the scope of this paper.) Given a concept node C1(x1) from a server-unit graph and a concept node C2(y1) in the request graph, C1 can be unified to C2 if C1 and C2 are the same concept name. Unification takes place when x1 replaces all occurrences of y1 in the request graph. We require that x1 has not already been unified to any other concept in the server-unit graph, in order to prevent "equivalencing" two distinct arguments in the request graph. Given a role node R1(x1,x2) from a server-unit graph and a role node R2(y1,y2) in the request graph, R1 can be unified to R2 if R1 and R2 are the same role name, and for i=1,2, the concept node for xi in the server-unit graph may be unified to the concept node for yi in the request graph. Given a functional node F1(x1,x2,...,xn) from a server-unit graph and a functional node F2(y1,y2,...,yn) in the request graph, F1 can be unified to F2 if F1 and F2 are the same functional node and for i=1,2,...,n, the concept node for xi in the server-unit graph may be unified to the concept node for yi in the request graph.
Given a selection node S1(x1,c1) from a server-unit graph and a selection node S2(y1,d1) in the request graph, S1 can be unified to S2 if S2(x1,d1) implies S1(x1,c1) and the concept node for y1 in the server-unit graph may be unified to the concept node for x1 in the request graph. For example, LESS-THAN(x1,15) in a server-unit graph may be unified with LESS-THAN(y1,10) in a request graph, because the information requested is more restricted than the information available from the server. Once all nodes in a server-unit graph that may be unified with a request body are identified, the unification is performed by substituting the arguments from the request body for the arguments in a copy of the server-unit graph. The resulting server-unit graph is merged with the request body by "superimposing" the two, and removing duplicate nodes. Note that there may be multiple ways to unify a server-unit graph and the request body, involving different nodes of the request body. This arises, for example, in requests whose solution involves joining a database relation to itself. In these cases, multiple copies of the server-unit graph are made and merged with the request. For example, a database relating a person's name, employee id, and the employee id of his/her manager would be used twice to cover a request for the name of some person's manager. The fact that the employee id's are used as the link between an employee and his/her manager emerges as a result of the multiple unifications.

VI THE FORMULATION ALGORITHM

The overall algorithm is as follows. The search space of server-units is first pruned by eliminating server-units whose graphs contain no role or function nodes with predicate names in common with those in the request body. These server-units can never be selected for merging with the request body. Conversely, the request may be immediately identified as one that cannot be handled by the server-units if it contains one or more nodes not found in any of them.
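This pruning step can be sketched directly. The following is a hypothetical Python rendering (the prototype was written in Zetalisp); `relational_names` marks which predicate names denote role or function nodes, and `price-relation` is an invented irrelevant server-unit included to show a unit being pruned.

```python
def prune_server_units(server_units, request_body, relational_names):
    """Keep only those server-units whose graphs contain at least one role
    or function node whose predicate name also appears in the request body;
    all other units can never be selected for merging with the request."""
    request_names = {name for name, _ in request_body}
    return [su for su in server_units
            if {n for n, _ in su["graph"]} & request_names & relational_names]

# Request body for "day of the week on which John was born" (selection
# node omitted for brevity).
request_body = [("person", ["x"]), ("birthday-of", ["x", "d"]), ("date", ["d"]),
                ("name-of", ["x", "n"]), ("string", ["n"]),
                ("weekday-of", ["d", "w"]), ("string", ["w"])]
relational_names = {"birthday-of", "name-of", "weekday-of", "price-of"}

units = [
    {"name": "birthday-relation",
     "graph": [("person", ["x"]), ("birthday-of", ["x", "d"]), ("date", ["d"]),
               ("name-of", ["x", "n"]), ("string", ["n"])]},
    {"name": "day-of-week",
     "graph": [("date", ["d"]), ("weekday-of", ["d", "w"]), ("string", ["w"])]},
    {"name": "price-relation",   # shares no role/function names: pruned
     "graph": [("part", ["p"]), ("price-of", ["p", "c"]), ("cost", ["c"])]},
]
kept = prune_server_units(units, request_body, relational_names)
```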
Next, a generate-and-test approach is used to perform a best-first search of the space of the power set of server-units to find a set that covers the request. Starting with the empty set, the generate portion takes the set most recently tested and found insufficient to cover the request, and generates its successors. It will have N successors, each created by adding to the set one of the N server-units not already in that set. The best set of server-units among all those already generated but not yet tested is selected as the candidate set for testing. The heuristic for "best" is to choose a set with the minimum score, defined as follows:

H = c1 * card({su}) + c2 * card({predicate-names(r)} - {predicate-names(su)})

where c1 and c2 are positive weighting coefficients, card is the cardinality function, {su} is the set of server-units being scored, and {predicate-names(r)} - {predicate-names(su)} is the set of nodes in the request for which nodes with identical names are not found in the server-units. This heuristic gives a better score to smaller sets of server-units, and to sets leaving fewer nodes in the request body that have no potential covering in the set of server-units. If c2 is zero, the heuristic yields a breadth-first search of the power set of server-units; if c1 is zero, the search is depth-first. Because the cardinality of the power set of server-units is finite, the search will always terminate. The heuristic improves the search by causing small sets with a greater likelihood of covering the request body to be examined first. The test portion of the algorithm performs the unification of the candidate set with the request body, then tests whether all non-selection nodes of the request body are covered, and the server-units form a single, connected, acyclic dataflow graph that is sufficient to obtain the desired output. The latter condition is tested by forming a graph in which each server-unit predicate is a node.
Nodes are connected by directed arcs from the available outputs of a server-unit to those server-units containing the same variables on their mandatory inputs list. Starting from the known variables and the output variables of server-units whose mandatory inputs are nil, a "mark" is propagated through the graph. The mark propagates from the mandatory input variables to the available output variables of a node, if and only if all mandatory input variables of the node are marked. The mark propagates from node to node along the directed arcs of the graph. After all propagation is complete, the test succeeds if all mandatory input variables for each server-node are marked, and a mark has reached each argument found in the head of the request. For example, the request to find the day of the week on which John was born, after unification with the set of server-units comprised of birthday-relation and day-of-week, appears as follows:

answer(w) :- birthday-relation(n,d), equal(n,"John"), day-of-week(d,w)

This formulation covers all non-selection nodes of the request. The mandatory inputs of birthday-relation are nil, and its available outputs are (n,d). The mandatory inputs of day-of-week are (d), and its available outputs are (w). A mark starting from the known variable n, and the outputs of the server-units whose mandatory inputs are nil, propagates to all mandatory inputs of each server-unit, and to the desired variable, w. Thus, this set of server-units succeeds as a formulation of the original request in terms of the capability space. From this, invocations to the servers can be readily generated and sequenced.

VII EXTENSIONS

As described above, the graph representation allows expression of requests for information that are simple conjunctions of predicates, equivalent to the "select/project/join" operations of relational algebra (Maier, 1983).
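The mark-propagation test of the preceding section can be sketched as follows. This is a hypothetical Python rendering rather than the paper's Zetalisp prototype; it replays the birthday-relation/day-of-week worked example.

```python
def dataflow_test(server_units, known_vars, goal_vars):
    """Mark-propagation test: starting from the known variables, repeatedly
    mark the available outputs of any server-unit whose mandatory inputs
    are all marked.  Succeed if, at fixpoint, every mandatory input of
    every unit and every goal variable is marked."""
    marked = set(known_vars)
    changed = True
    while changed:
        changed = False
        for su in server_units:
            if all(v in marked for v in su["mandatory_inputs"]):
                new = set(su["available_outputs"]) - marked
                if new:
                    marked |= new
                    changed = True
    all_inputs_marked = all(v in marked
                            for su in server_units
                            for v in su["mandatory_inputs"])
    return all_inputs_marked and set(goal_vars) <= marked

# The worked example: birthday-relation(n,d) has no mandatory inputs;
# day-of-week(d,w) requires d.  Known variable n, goal variable w.
units = [{"mandatory_inputs": [], "available_outputs": ["n", "d"]},
         {"mandatory_inputs": ["d"], "available_outputs": ["w"]}]
ok = dataflow_test(units, known_vars={"n"}, goal_vars={"w"})
```

With these two units the mark reaches d (from birthday-relation) and then w (through day-of-week), so the candidate set passes the test; a unit whose mandatory input is never marked would cause the test to fail.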
It must be extended to enable representation of requests involving disjunctions, aggregations, and recursion. The key to handling these is to extend the graph representation of requests to include subgraphs. A subgraph is a request (head and body) whose head predicate appears in the body of another request. Disjunctions are represented by alternative subgraphs defining the same head predicate. Recursive requests are represented by alternative subgraphs, one of which has a body that directly or indirectly contains its own head predicate. An aggregation is represented by a node in the body of a request which contains the head predicate of a subgraph as one of its arguments. Extension of the basic formulation algorithm to handle subgraphs involves performing the unification of server-units to each subgraph of the overall request body. A given graph is considered covered if and only if its body and all subgraphs referenced in its body are covered.

VIII IMPLEMENTATION

A prototype has been implemented on a Symbolics 3600 using Zetalisp. The implementation handles "select/project/join" types of requests, and requests involving aggregations. (Extensions for disjunctive and recursive requests are being designed.) The prototype accesses three separate databases and three application programs in the general domain of parts inventory and equipment maintenance. The databases and application programs run on two personal computers using commercially available software. Table 1 shows the performance of the prototype for some typical requests, exclusive of the time taken by the personal computers in retrieving or computing answers. Out of a total of 17 server-units in the capability space for these databases and applications, more than half may be pruned from the search space for a typical request. The best-first search frequently yields the formulation of the request in server-unit terms with a search of a small fraction of this space. This is because the heuristic function readily discriminates between server-units based on the relevance of their server-graphs to nodes in the request graph. On the Symbolics 3600, the formulation of requests is accomplished in the order of 1 to 3 seconds of processing.

TABLE 1: Performance of the prototype on several requests in a sample domain

Request (English equiv.)                        Pruned search      # of sets of     CPU time used to
                                                space (# of sets)  server-units     formulate request
                                                                   tested
Is X a widget?                                  2**5               1                .345 sec.
What component is X a part of?                  2**8               6                2.888 sec.
On what date was X replaced?                    2**6               1                1.035 sec.
When was X last serviced?                       2**6               1                .725 sec.
Where can more parts of type X be obtained?     2**5               2                1.544 sec.
On what day of the week was part X replaced?    2**7               2                1.928 sec.

IX EVALUATION

Our approach to the formulation problem has two major strengths. First, it is independent of the domain. It can be ported to another domain by creating the appropriate server-unit representations, and the software to create an invocation message to a server given its server-unit predicate. The second strength is the wide variety of requests that can be handled. Not only can a simple request be routed to the appropriate server, but a request can be automatically partitioned into subrequests, each processed by a different server. Our current implementation has several shortcomings. First, the request language as implemented cannot yet deal with disjunctions and recursive requests (although it is possible to invoke server-units that are internally recursive). Second, rather than invoke a more efficient database query, the algorithm may choose to run an application program to respond to a true/false question, treating failure as negation.
Finally, the input language is not useful as an end-user language; a more user-friendly interface is needed (Larson, et al., 1985).

X CONCLUSION

We have developed a graph-oriented technique for representing server capabilities. Used in conjunction with unification and heuristic search algorithms, it provides an approach to automatically selecting and invoking the appropriate servers to solve a user's problem. Our implementation shows that the approach can formulate a set of server invocations equivalent to a user's request, stated in server-independent, declarative terms, in a tractable amount of time. Included as a component of a larger user interface, this approach can yield the benefits of a uniform interface to multiple heterogeneous, pre-existing servers, hiding the existence of those servers by automatically generating sequences of server invocations to solve the user's problem.

REFERENCES

[1] Barstow, D. Knowledge-based Program Construction. New York: Elsevier North Holland, 1979.
[2] Brachman, R. J., R. E. Fikes, and H. J. Levesque. "Krypton: A Functional Approach to Knowledge Representation." Computer, 16:10 (1983) 67-73.
[3] Ceri, S. and G. Pelagatti. Distributed Databases: Principles and Systems. New York: McGraw-Hill, 1984.
[4] Burstall, R. M. and J. Darlington. "A Transformation System for Developing Recursive Programs." Journal of the Association for Computing Machinery, 24:1 (1977) 44-67.
[5] Gray, P. M. D. and D. Moffat. "Manipulating Descriptions of Programs for Database Access." Proc. IJCAI-83, Karlsruhe, W. Germany, August 1983, pp. 21-24.
[6] Kim, M. W. "EGS: A Transformational Approach to Automatic Example Generation." Proc. IJCAI-85, Los Angeles, California, August 1985, pp. 155-161.
[7] Larson, J. A., W. F. Kaemmerer, K. L. Ryan, J. Slagle, and W. T. Wood. "ATOZ: A Prototype Intelligent Interface to Multiple Information Systems." Proceedings of the IFIP Working Conference on the Future of Command Languages, Rome, Italy, September 1985.
[8] Maier, D. The Theory of Relational Databases. Rockville, Maryland: Computer Science Press, 1983.
[9] Nilsson, N. J. Principles of Artificial Intelligence. Palo Alto, California: Tioga Publishing, 1980.
[10] Ryan, K. R. and J. A. Larson. "The Use of E-R Data Models in Capability Schemas." Technical Report, Honeywell Computer Sciences Center, Golden Valley, Minnesota, March 1986.
[11] Sowa, J. F. Conceptual Structures: Information Processing in Mind and Machine. Menlo Park, California: Addison-Wesley, 1984.
[12] Stonebraker, M. "Implementation of Integrity Constraints and Views by Query Modification." ACM/SIGMOD International Symposium on Management of Data, San Jose, California, May 1975.
The Butterfly™ Lisp System

Seth A. Steinberg, Don Allen, Laura Bagnall, Curtis Scott
Bolt, Beranek and Newman, Inc.
10 Moulton Street
Cambridge, MA 02238

ABSTRACT

This paper describes the Common Lisp system that BBN is developing for its Butterfly™ multiprocessor. The BBN Butterfly™ is a shared memory multiprocessor which may contain up to 256 processor nodes. The system provides a shared heap, parallel garbage collector, and window-based I/O system. The future construct is used to specify parallelism.

THE BUTTERFLY™ LISP SYSTEM

For several decades, driven by industrial, military and experimental demands, numeric algorithms have required increasing quantities of computational power. Symbolic algorithms were laboratory curiosities; widespread demand for symbolic computing power lagged until recently. The demand for Lisp machines is an indication of the growth of the symbolic constituency. These machines possess architectural innovations that provide some performance increases, but they are still fundamentally sequential systems. Serial computing technology is reaching the point of diminishing returns, and thus both the numeric and symbolic computing communities are turning to parallelism as the most promising means for obtaining significant increases in computational power.

BBN has been working in the field of parallel computers since the early 1970's, having first developed the Pluribus multiprocessor and more recently, the Butterfly, the machine whose programming environment we concern ourselves with in this paper. The Butterfly multiprocessor consists of a set of up to 256 nodes, each containing both a processor and memory, connected by a Butterfly switch (a type of Omega network) (see figure 1). Each node has from 1 to 4 megabytes of memory, a Motorola 68000 series processor and a special purpose Processor Node Controller (PNC).
The PNC is microprogrammed to handle inward and outward Butterfly switch transactions, and to provide multiprocessing extensions to the 68000 instruction set, particularly in cases where atomicity is required [1]. To date, Butterfly programs have been written exclusively in C, with most numeric applications using the Uniform System package. The Uniform System provides and manages a large shared address space and has subroutines which can be used to distribute subtasks to all of the active processors. It has been used to speed up algorithms for matrix multiplication, image processing, determining elements of the Mandelbrot set, and solving differential equations and systems of linear equations [2,3]. (Butterfly™ is a trademark of Bolt, Beranek and Newman.)

Under DARPA sponsorship, BBN is developing a parallel symbolic programming environment for the Butterfly, based on an extended version of the Common Lisp language. The implementation of Butterfly Lisp is derived from C Scheme, written at MIT by members of the Scheme Team [4]. The simplicity and power of Scheme make it particularly suitable as a testbed for exploring the issues of parallel execution, as well as a good implementation language for Common Lisp. The MIT Multilisp work of Professor Robert Halstead and students has had a significant influence on our approach. For example, the future construct, Butterfly Lisp's primary mechanism for obtaining concurrency, was devised and first implemented by the Multilisp group. Our experience porting Multilisp to the Butterfly illuminated many of the problems of developing a Lisp system that runs efficiently on both small and large Butterfly configurations [5,6].

In the first section, this paper describes future-based multitasking in Butterfly Lisp and how it interacts with more familiar Lisp constructs. The second section describes how Butterfly Lisp deals with the problems of task distribution and memory allocation. It contrasts our approach and the Uniform System approach.
The third section describes the Butterfly Lisp parallel garbage collector and the fourth section describes the user interface.

[Figure 1: 16x16 Butterfly Switch]

From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved.

BUTTERFLY LISP

Experiments in producing parallel programs that effectively use the available processing power suggest that the fundamental task unit must execute on the order of no more than 100-1000 instructions [7,8,9]. If the task unit is larger, there will be insufficient parallelism in the program. This task size is only slightly larger than the size of the typical subroutine. This similarity in scale implies that the tasking overhead must be within an order of magnitude of the subroutine overhead. To encourage the programmer to use subtasks instead of calls, the tasking syntax should be similar to the calling syntax.

Butterfly Lisp uses the future mechanism as its basic task-creating construct. The expression:

(future <s-expression>)

causes the system to note that a request has been made for the evaluation of <s-expression>, which can be any Lisp expression. Having noted the request (and perhaps begun its computation, if resources are available), control returns immediately to the caller of future, returning a new type of Lisp object called an "undetermined future" or simply, a future. The future object serves as a placeholder for the ultimate value of <s-expression> and may be manipulated as if it were an ordinary Lisp object. It may be stored as the value of a symbol, consed into a list, passed as an argument to a function, etc. If, however, it is subjected to an operation that requires the value of <s-expression> prior to its arrival, that operation will automatically be suspended until the value becomes available. future provides an elegant abstraction for the synchronization required between the producer and consumer of a value. This permits results of parallel evaluations to be manipulated without explicit synchronization. Thus, Butterfly Lisp programs tend to be quite similar to their sequential counterparts.

This similarity is illustrated by the following example. To convert a simplified serial version of mapcar:

(defun simple-mapcar (function list)
  (if (null list)
      nil
      (cons (function (car list))
            (simple-mapcar function (cdr list)))))

into a parallel version requires the addition of a single future form:

(defun parallel-simple-mapcar (function list)
  (if (null list)
      nil
      (cons (future (function (car list)))
            (parallel-simple-mapcar function (cdr list)))))

In this version of mapcar, the primary task steps down the original list, spinning off subtasks which apply function to each element. The function parallel-simple-mapcar does not return until all of the subtasks have been spawned. We can create the futures more aggressively, as in the following example:

(defun aggressive-mapcar (function list)
  (if (null list)
      nil
      (cons (future (function (car list)))
            (future (aggressive-mapcar function (cdr list))))))

A call to aggressive-mapcar would quickly start two subtasks and immediately return a cons containing the two futures. This makes it possible to start using the result of the subroutine well before the computation has been completed. If a pair of aggressive-mapcars is cascaded:

(aggressive-mapcar f (aggressive-mapcar g x-list))

subtasks spawned by the outer aggressive-mapcar may have to wait for the results of those spawned by the inner call.

As might be expected, the introduction of parallelism introduces problems in a number of conventional constructs. For example, the exact semantics of Common Lisp do are extremely important as in the following loop:

(do ((element the-list (cdr element)))
    ((null element))
  (future (progn (process-first (caar element))
                 (process-second (cadar element)))))

In a serial system it doesn't matter if do is implemented tail recursively or not. In a parallel system, if the loop is implemented tail recursively the semantics are pretty clear:

(defun stepper (element)
  (cond ((null element) nil)
        (t (future (progn (process-first (caar element))
                          (process-second (cadar element))))
           (stepper (cdr element)))))

There will be a new environment and so a new binding of element for each step through the loop. It is easy to determine which value of element will be used in the body of the future. This is not the case if the do loop is implemented using a prog, side-effecting the iteration variable:

(prog (element)
 loop (if (null element) (return))
      (future (progn (process-first (caar element))
                     (process-second (cadar element))))
      (setq element (cdr element))
      (go loop))

Since all iterations share a single binding of element, we have a race condition.

IMPLEMENTATION

In some ways, Butterfly Lisp is similar to the Uniform System. Both systems provide a large address space which is shared by all of the processors. Processors are symmetric, each able to perform any task. In both systems programs are written in a familiar language which, for the most part, executes in a familiar fashion. Since the Uniform System has been well tuned for the Butterfly we can use it to study the effectiveness of our implementation. The Uniform System was designed for numeric computing, taking advantage of the regularity of structure of most numeric problems. At any given time all of the processors are repeatedly executing the same code.
This consists of a call to a generator to determine which data to operate on, followed by a call to the action routine which does the computation. Loops in typical numeric algorithms can be broken into two parts: the iteration step and the iteration body. On a multiprocessor the iteration step must be serialized, but the iteration bodies may be executed in parallel. To use all processors effectively, the following condition should be met:

    Tbody >= Tstep * Nprocessors

The Uniform System produces its best performance when the iteration step is completely independent of the iteration body, as is the case in many numeric programs. While symbolic tasks are usually expressed recursively rather than iteratively, this shouldn't be a problem. Recursion can be broken into a recursion step and a recursion body. Stepping through a list or tracing the fringe of a tree is not significantly more expensive than incrementing a set of counters. If we are scanning a large static data structure, then our recursion step and recursion body will be independent and we can obtain performance improvements similar to those produced by the Uniform System. Unfortunately, this condition cannot always be met. Many symbolic programs repeatedly transform a data structure from one form to another until it has reached some desired state. The Boyer-Moore theorem prover applies a series of rewrite rules to the original theorem and converts it into a truth table-based canonical form. Macsyma repeatedly modifies algebraic expressions as it applies various transformations. In these cases, the recursion step often must wait until some other recursion body has finished before the next recursion body can begin; the future mechanism is better suited to dealing with this interdependency. It was because of these differences between the numeric and symbolic worlds that we implemented the future mechanism, which can be used to write programs in a number of styles.
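The future-style producer/consumer synchronization has a compact analogue outside Lisp. The following Python sketch (our analogy, not BBN's code) uses a thread pool as a stand-in for idle processors pulling tasks from the shared work queue, and `Future.result()` as the operation that suspends until a future's value arrives:

```python
from concurrent.futures import ThreadPoolExecutor

# The pool stands in for the Butterfly's idle processors, which take
# tasks off a shared work queue as they become free.
pool = ThreadPoolExecutor(max_workers=4)

def parallel_simple_mapcar(function, lst):
    # Spawn one future per element, like (future (function (car list))).
    futures = [pool.submit(function, x) for x in lst]
    # result() suspends until the value is available, as touching an
    # undetermined future does in Butterfly Lisp.
    return [f.result() for f in futures]

print(parallel_simple_mapcar(lambda x: x * x, [1, 2, 3]))  # [1, 4, 9]
```

One difference: a Butterfly future transparently suspends any operation that touches it, whereas here the force is the explicit `result()` call.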
The basic units of work are either procedures or continuations, both of which are first class Scheme objects. When a task is first created by the execution of the future special form, a procedure is created and placed on the work queue. Whenever a processor is idle it takes the next task off this queue and executes it. When a task must wait for another task to finish evaluating a future, its continuation is stored in that future. Continuations are copies of the control stack created by the call-with-current-continuation primitive. When a future is determined, any continuations waiting for its value are placed on the work queue so they may resume their computations.

The Butterfly Lisp memory allocation strategy is different from that of the Uniform System. While the Butterfly switch places only a small premium on references to memory located on other nodes, contention for the nodes themselves can be a major problem. If several processors attempt to reference a location on a particular target node, only one will succeed on the first try; the others will have to retry. With a large number of processors, this can be crippling. To minimize contention, the Uniform System assists the programmer in scattering data structures across the nodes. In addition, it encourages the programmer to make local copies of data, providing a number of highly optimized copying routines that depend on the contiguous nature of numeric data structures. The complicated, non-contiguous data structures used in symbolic computation make the copying operation far more expensive, and thus worthwhile in fewer cases. Copying also introduces a number of consistency and integrity problems. Lisp makes a fundamental distinction between the identity primitive (eq) and the equality primitives (eql, equal, =). While other languages draw this distinction, it is frequently fundamental to the working of many symbolic algorithms.
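Because copying preserves equality but not identity, an algorithm keyed on eq can silently change behavior when handed a local copy. A small Python analogue of the eq/equal distinction (`is` vs `==`; our illustration, not the paper's):

```python
import copy

a = [1, [2, 3]]
b = copy.deepcopy(a)   # a local copy, as the Uniform System encourages

print(a == b)   # True  -- structural equality survives copying (like equal)
print(a is b)   # False -- identity does not (like eq)
```

Any table or cache keyed on identity would treat the copy as a brand-new object, which is one reason copying is riskier for symbolic programs than for numeric arrays.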
To diffuse data structures throughout the machine, the Lisp heap is a striated composite of memory from all of the processors. The garbage collector, which is described in the next section, is designed to maintain this diffusion. Each processor is allotted a share of the heap in which it may create new objects. This allows memory allocation operations to be inexpensive, as they need not be interlocked.

GARBAGE COLLECTION

Butterfly Lisp has a parallel stop-and-copy garbage collector which is triggered whenever any processor runs out of heap space. (This strategy runs the risk that garbage collections might occur too frequently, since one processor might use memory much more quickly than the others. Experiments indicate that there is rarely more than a 10% difference in heap utilization among the processors [10].) When a processor realizes that it must garbage collect, it generates a local garbage collection interrupt. The handler for this local interrupt uses a global interrupt to notify every processor that systemwide activity is needed. A global interrupt works by sending a handler procedure to each of the other processors, which they will execute as soon as they are able. Global interrupts do not guarantee synchrony. Since all processors must start garbage collecting at once and must not return from the interrupt handler until garbage collection is completed, the processors must synchronize before and after garbage collecting. This is accomplished by use of the await-synchrony operation, which uses a special synchronizer object. Each processor awaits synchrony until all processors are waiting for the same synchronizer, at which time they may all continue. Internally, a synchronizer contains a waiting-processor count that is atomically decremented as each processor starts waiting for it. When this count goes to zero, all of the processors may proceed.
The garbage collection interrupt handler uses global interrupts and synchronizers something like this:

(defun handle-local-gc-interrupt ()
  (let ((start-synch (make-synchronizer))
        (end-synch (make-synchronizer)))
    (global-interrupt
     (lambda ()                       ; The Handler
       (await-synchrony start-synch)
       (garbage-collect-as-slave)
       (await-synchrony end-synch)))
    (await-synchrony start-synch)
    (garbage-collect-as-master)
    (await-synchrony end-synch)))

On a serial machine, a copying garbage collector starts by scanning the small set of objects called the root, which can be used to find all accessible data. Scanning consists of checking each pointer to see if it points into old space. Old space objects are copied into new space and the original old space pointer is replaced by a pointer to the copied object in new space. A marker is left in old space so that subsequent checks do not copy the object again. Once an object has been copied into new space it must be scanned as well, since it may still contain pointers into old space. The scanning and copying continues until everything in new space has been scanned, at which time there are no more pointers into old space and garbage collection is complete.

The Butterfly Lisp garbage collector works by breaking new space into partitions. These are the basic units of memory which are scanned or copied into. Each processor waits for a partition to appear on the work queue and begins to scan it, copying objects from old space into new space. When it needs memory to copy into, a processor grabs a fresh partition from new space. When it fills a partition, it puts it on the queue to be scanned. The garbage collection starts with one processor scanning the root, but all of the processors are quickly engaged in scanning and copying (see figure 2). Garbage collection continues until all the processors are idle and the queue is empty [10].
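The synchronizer described above, a waiting-processor count that is atomically decremented and releases every waiter when it reaches zero, can be sketched with ordinary threads. This Python class is our illustration of the await-synchrony idea, not BBN's implementation:

```python
import threading

class Synchronizer:
    """A counting barrier: each arriving worker decrements the count and
    blocks; the last worker to arrive releases everybody."""
    def __init__(self, n_workers):
        self.count = n_workers
        self.cond = threading.Condition()

    def await_synchrony(self):
        with self.cond:                  # the lock makes the decrement atomic
            self.count -= 1
            if self.count == 0:
                self.cond.notify_all()   # last worker in releases the rest
            else:
                while self.count > 0:
                    self.cond.wait()

# Each worker would call await_synchrony() before and after collecting,
# mirroring the start-synch / end-synch pair in the interrupt handler.
```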
USER INTERFACE

The Butterfly Lisp User Interface is implemented on a Symbolics 3600-series Lisp Machine, communicating with the Butterfly using Internet protocols. This system provides a means for controlling and communicating with tasks running on the Butterfly, as well as providing a continuously updated display of the overall system status and performance. Special Butterfly Lisp interaction windows are provided, associated with tasks running on the Butterfly. These windows may be selected, moved, resized, or folded up into task icons. There is also a Butterfly Lisp mode provided for the ZMACS editor, which connects the various evaluation commands (e.g. evaluate region) to an evaluation service task running in the Butterfly Lisp system. A version of the Lisp machine-based data Structure Inspector is also being adapted for examining task and data structures on the Butterfly.

[Figure 2: Parallel Garbage Collection - Copy and Scan]

Each task is created with the potential to create an interaction window on the Lisp machine. The first time an operation is performed on one of the standard input or output streams a message is sent to the Lisp machine and the associated window is created. Output is directed to this window and any input typed while the window is selected may be read by the task. This multiple window approach makes it possible to use standard system utilities like the trace package and the debugger. A pane at the top of the screen is used as a "face panel" to display the system state. This is information collected by a separate process, which spies on the Butterfly Lisp system. The major feature of this pane is a horizontal rectangle broken vertically into slices. Each slice shows the state of a particular processor. If the top half of the slice is black then the processor is running; if gray, it is garbage collecting; and if white, it is idle.
The bottom half of each slice is a bar graph that shows how much of each processor's portion of the heap is in use (see figure 3). This status pane also shows the number of tasks awaiting execution, the effective processor utilization, the rate of memory consumption, and an estimate of Butterfly switch contention. The graphical display makes such performance problems as task starvation easy to recognize.

FUTURE DIRECTIONS

Butterfly Lisp is currently interpreted. While this has been adequate during the development of various aspects of the system, such as the user interface and metering facilities, compilation is essential to the realization of the performance potential of the Butterfly. We are currently working on integrating a simple compiler, which will be used to explore the behavior of compiled code in our parallel environment. Later, we will substitute a more sophisticated compiler currently under development at MIT. The User Interface will continue to be developed, aided by the experiences of a user community that is just beginning to form. We expect more facilities for both logical and performance debugging, with emphasis on graphical representations. We will continue work already underway to provide compatibility with the Common Lisp standard. We expect that Butterfly Lisp will become a testbed for exploring data structures and procedural abstractions designed specifically for parallel symbolic computing.

ACKNOWLEDGEMENTS

The authors would like to thank the following people for their contributions to our efforts: James Miller, for developing C Scheme, and for critical assistance in creating a parallel C Scheme for the Butterfly. Anthony Courtemanche, for MultiTrash, our parallel garbage collector. The MIT Scheme Team, especially Bill Rozas, Chris Hanson, Stew Clamen, Jerry Sussman, and Hal Abelson, for Scheme, their help, and advice.
Robert Halstead, for Multilisp, an excellent possibility proof, and for many enlightening discussions. DARPA, for paying for all of this. DARPA Contract Number MDA 903-84-C-0033.

[Figure 3: User Interface Display - Idle and Running]

1 Bolt, Beranek and Newman Laboratories. Butterfly Parallel Processor Overview. Bolt, Beranek and Newman, Cambridge, Massachusetts, December 1985.
2 Crowther, W., Goodhue, J., Starr, R., Milliken, W., and Blackadar, T. Performance Measurements on a 128-Node Butterfly Parallel Processor. Internal Bolt, Beranek and Newman Laboratories Paper.
3 Crowther, W. The Uniform System Approach to Programming the Butterfly Parallel Processor. Internal Bolt, Beranek and Newman Laboratories Paper, December 1985.
4 Abelson, H. et al. The Revised Revised Report on Scheme, or An UnCommon Lisp. AI Memo 848, M.I.T. Artificial Intelligence Laboratory, Cambridge, Massachusetts, August 1985.
5 Halstead, R. Implementation of Multilisp: Lisp on a Multiprocessor. ACM Symposium on Lisp and Functional Programming, Austin, Texas, August 1984.
6 Halstead, R. Multilisp: A Language for Concurrent Symbolic Computation. ACM Transactions on Programming Languages and Systems, October 1985.
7 Gupta, A. Talk at MIT, March 1986.
8 Douglass, R. A Qualitative Assessment of Parallelism in Expert Systems. IEEE Software, May 1985.
9 Bawden, A. and Agre, P. What a Parallel Programming Language Has to Let You Say. AI Memo 796, M.I.T. Artificial Intelligence Laboratory, Cambridge, Massachusetts, September 1984.
10 Courtemanche, A. MultiTrash, a Parallel Garbage Collector for MultiScheme. M.I.T. B.S.E.E. Thesis, Cambridge, Massachusetts, January 1986.
DESIGNING FOR MANUFACTURABILITY IN RIVETED JOINTS

A. R. Kilhoffer
Cincom Systems, Inc.
2300 Montana Avenue
Cincinnati, OH 45211, USA

K. G. Kempf
FMC Corporation - Artificial Intelligence Center
1185 Coleman Avenue, Box 580
Santa Clara, CA 95052, USA

ABSTRACT

The study of human experts in the areas of design and manufacturing has led to two hypotheses concerning the problem solving methods which these engineers utilize to attack difficult problems. The basis of both hypotheses is a modular approach to problem solving. One hypothesis addresses the nature of the modules utilized while the other hypothesis deals with the organization of the modules. A knowledge-based system has been designed and implemented under the philosophy expressed in these hypotheses. The domain is the design and manufacture of riveted joints in sheet metal. Special emphasis is given to the integration of design knowledge and manufacturing knowledge for the concept of "designing for manufacturability". The implementation is described in some detail and two example problems are presented with their solutions.

I INTRODUCTION

There are at least three stages in the production of mechanical goods including product and part design, process planning and scheduling, and shop floor execution. There are a variety of reasons to attempt to automate the flow of information within and between these stages, many of which have to do with increasing the quality and decreasing the cost of the goods produced, some of which have to do with the rapidly shrinking base of human experts capable of performing the necessary tasks. Application of the numeric processing power of the digital computer has already lifted part of the burden from the human engineer, making routine calculations more accurate (e.g. tolerance charting) and reducing previously impractical calculations to standard practice (e.g. finite element analysis). Although this ongoing first stage of computer automation can be considered as eminently successful, it has not solved all problems.

It is often the case that a significant portion of the information flow within a particular engineering task cannot be described numerically, and so has eluded automation. More importantly in the context of this research, it is frequently the case that numeric processing does not provide assistance in automating the information flow between tasks. In most cases it is obvious that the essential ingredient which has not been captured numerically is the experiential knowledge of the engineer. Authors have therefore suggested that the second wave of automating the information flow in design and manufacturing will be based on the techniques of Artificial Intelligence [Simmons 1984, Kempf 1985]. This assertion has been tested in some areas of product and part design [Brown and Chandrasekaran 1983, Dixon and Simmons 1983], as well as some topics in process planning and scheduling [Descotte and Latombe 1985, Phillips and Mouleeswaran 1985, Fox 1983]. From the limited amount of data available at this time, it certainly appears that AI techniques are very useful within specific engineering functions. More recently, work has been started to assess the value of AI techniques in managing and automating the process planning and scheduling functions and shop floor execution functions [Fox and Kempf 1985a, Newman and Kempf 1985, Fox and Kempf 1985b, Fox and Kempf 1986]. Once again, the preliminary results are encouraging. The purpose of the research described here is to begin an assessment of the utility of applying AI techniques to the integration of product and part design functions with process planning and scheduling functions. In the jargon of the engineer, we are concerned with the concept of "designing for manufacturability". The specific domain of concern is the detailed design and process planning for riveted joints between sheets of aluminum alloy.
From the design perspective, the engineer already knows that a joint must be present, but has not yet designed the joint in detail. The input from the design side is a rough idea of the geometry of the joint and the detailed functional requirements that the manufactured joint must exhibit. From the process perspective, the engineer already knows his resource catalog - the tools, equipment, and capabilities that exist on the factory floor - and this serves as the manufacturing input. The designer is interested in producing as output a design blueprint with detailed sheet positions including overlap and any stiffening plates required for a joint which will meet the functional requirements. The output which the process planner is seeking is a manufacturing blueprint with the type, number, and positions of the rivets which need to be installed. Our concern in pursuing this research was to ascertain whether the knowledge of the design engineer and the process engineer can be captured and utilized, and, more importantly, whether the knowledge from these sources can be integrated.

II THEORETICAL APPROACH

The fact that it usually takes a new college graduate between ten and twenty years to achieve expert status in the design and manufacturing community gives some indication of the inescapable difficulty of capturing such expertise in a computational model. Furthermore, the fact that few design and manufacturing engineers have made much progress in effectively integrating their respective knowledge makes our problem even more challenging. But these human experts still serve as our existence proof that designing for manufacturability is possible. It is through careful observation of these experts that a phenomenological model of their reasoning methods has begun to emerge.
This model is still tentative, containing errors of omission and commission, but serves as the basis for the implementation described here. Design and manufacturing experts give the impression that they solve problems in a modular fashion. Our current model of their methods includes two major hypotheses. One hypothesis concerns the organization of the reasoning modules used while the other involves the identity of the modules included in the process. It is encouraging to note that both of these hypotheses map onto existing AI techniques.

In terms of module organization, the experts observed decompose the problems which they attempt to solve in two distinct ways (Figure 1). On one hand, they use techniques such as top down reasoning and hierarchical abstraction, solving the problem at many levels of detail, usually working from less detail to more detail (the vertical bars in Figure 1). In this process they usually have a different representation of the problem and bring different knowledge to bear at each level. On the other hand, they use techniques such as divide and conquer and cooperative problem solving. Within the same level of abstraction, using a consistent problem representation, the problem is dissected into separate but related pieces (the horizontal bars in Figure 1). These subproblems are then attacked using the appropriate knowledge for each.

[Figure 1. Basic Organization of Design and Manufacturing Expertise]

In terms of module identity, the experts observed use a wide variety of distinct reasoning methods (Figure 2). Some of these modules are described here with the qualification that the list is in no way exhaustive, including only those modules which are needed to describe our implementation. Notice that while the modules presented are all described at a common level, each can be applied at any level of abstraction or dissection presented in Figure 1.
A module which is often encountered is the algorithmic module shown in Figure 2a. The basic idea here is to numerically compute a result given some data, but this often involves application of expert knowledge to select the appropriate equations or tables, to correctly insert or extract the data, and to interpret the result in the context of the problem at hand. A second commonly encountered module is the knowledge-based selection module shown in Figure 2b. Here the idea is to apply knowledge, often held as rules, to determine the heuristically best selection from a set, all the members of which might potentially suffice as solutions. The knowledge-based restriction module shown in Figure 2c is also encountered fairly frequently. In this module, knowledge, often held as constraints, is used to prune a search space with the goal of ascertaining whether any solution remains. The response set includes no, yes, and maybe, the latter two accompanied by subspaces which require further investigation.

[Figure 2. Some Modules of Design and Manufacturing Expertise: a) algorithmic module (data in, result out); b) rule-based selection module (a satisficing data set plus rules yield a selection); c) constraint-based restriction module (constraints yield Go, Maybe, or NoGo, the first two with a subspace for further investigation).]

In summary, the first hypothesis concerns an abstraction and dissection hierarchy of problem solving activity while the second hypothesis deals with a subset of current problem solving techniques (also expressible as an abstraction hierarchy from a more global perspective). Our implementation attempts a mapping between these two structures in the context of designing for manufacturability for riveted joints.

III IMPLEMENTATION APPROACH

The implementation described here represents an initial attempt to provide intelligent automated assistance to engineers concerned with the riveting, welding, bonding (with glue), and fastening (with bolts) of aluminum, steel, and titanium sheets in various geometries.
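As a concrete illustration, the constraint-based restriction pattern of Figure 2c can be sketched as follows (a minimal Python sketch, not the paper's Lisp implementation; the `bendable` constraint, its field names, and the 0.9 border threshold are invented for illustration):

```python
# Sketch of a knowledge-based restriction module (Figure 2c).
# Each constraint inspects the problem and returns "go", "maybe", or "nogo";
# the module aggregates them: any "nogo" fails, any "maybe" flags the border.

def restrict(problem, constraints):
    verdicts = [c(problem) for c in constraints]
    if any(v == "nogo" for v in verdicts):
        return "nogo"
    if any(v == "maybe" for v in verdicts):
        return "maybe"   # a solution space remains, but needs investigation
    return "go"

# Hypothetical constraint: sheets thicker than the shop's bend gauge
# cannot be joggled; near the limit is flagged as a border case.
def bendable(problem):
    t = problem["min_thickness"]
    if t > problem["max_bend_gauge"]:
        return "nogo"
    return "maybe" if t > 0.9 * problem["max_bend_gauge"] else "go"

print(restrict({"min_thickness": 0.2, "max_bend_gauge": 0.25}, [bendable]))
```

The aggregation mirrors the module's three-valued response set: a single hard violation rejects the whole space, while any borderline constraint downgrades a clean "go" to "maybe".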
It also serves as a test for the two hypotheses outlined earlier. The subsystem described here is called REX (Riveting Expert) and addresses the riveting of aluminum sheets. REX is implemented in VAX Common LISP (version 1.0) and runs under VMS (version 4.1A) on a VAX 8600. The current system represents roughly 350 hours of knowledge collection, including direct interaction with a domain expert and study of documents provided by the expert. A few dozen example problems were solved during the collection process. The system provides useful assistance to an engineer with many years of experience in that it frees the engineer from making detailed considerations, thereby promoting a more global view of the problem at hand, and provides consistent, quality solutions quickly (roughly one minute of user input followed by one to two minutes of REX run time). Knowledge representations in REX are composed of frames, each frame with any number of slots, each slot having value and default facets. In some cases, slots contain the attributes of objects while the value and default facets contain individual or inclusive (or exclusive) sets of reals, integers, or strings. Slots can also indicate inheritance, with facet values taking on the frame names of parents and children. Values about the problem at hand are supplied by the user while the domain expert supplies values about the domain, the defaults, and inheritance. In other cases, default facets are not used, but value facets can contain executable constraints or rules supplied by the domain expert as well as textual explanation for the user and source information for system maintenance. Slots can also provide focus of attention, containing frame names of related or associated constraints and rules. The input to REX consists of two frames. One is user supplied and contains a detailed description of the problem to be solved. The contents are divided into two major sections - materials and requirements.
In the materials section, information about composition is made available, including metal type (currently limited to aluminum), alloy number, and temper condition. Also stored in the materials section are the length, width, and thickness of the pieces to be joined. Note that the two pieces can have different compositions and/or dimensions. Finally, the materials section contains information about the basic joint geometry (currently limited to flat joints, although L-shaped and T-shaped joints have been considered) and about how many sides of the joint will be accessible during manufacturing. In the requirements section, the loads to which the finished joint will be subjected are described by a type (currently limited to axial shear, although other types have been considered) and a magnitude. Appearance requirements are also stored here and include flushness relative to sheet edges and relative to rivet heads. Finally, the requirements section contains information about the operational environment that the finished joint must withstand (currently limited to vibration, although corrosion and temperature have been considered). The other input frame is user modifiable and contains design and manufacturing criteria. The design criteria include safety margin and weight criticality information. REX uses the safety margin data to design the joint to handle worst-case overloads. The weight criticality data is considered whenever there is a design tradeoff to be made and there is an option to minimize the amount of sheet overlap or minimize the number or size of rivets. The manufacturing criteria are included in two forms. On one hand there is riveting technology data about the limits of riveting in any shop regardless of facilities. Examples might include the standard lengths and dimensions of rivets supplied by manufacturers or the minimum rivet "drive-up" possible (the amount of shank flattened during rivet installation).
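The frame scheme described above might be modeled as follows (an illustrative Python sketch, not REX's actual VAX Common LISP data structures; the frame and slot names are hypothetical):

```python
# Sketch of REX-style frames: each slot carries a value facet and a default
# facet; lookup falls back from value to default to an inherited parent frame.

class Frame:
    def __init__(self, name, parent=None):
        self.name, self.parent = name, parent
        self.slots = {}          # slot -> {"value": ..., "default": ...}

    def put(self, slot, facet, v):
        self.slots.setdefault(slot, {})[facet] = v

    def get(self, slot):
        facets = self.slots.get(slot, {})
        if "value" in facets:
            return facets["value"]        # user- or expert-supplied value
        if "default" in facets:
            return facets["default"]      # expert-supplied default
        return self.parent.get(slot) if self.parent else None

rivets = Frame("Rivets")
rivets.put("material", "default", "aluminum")   # domain default
problem = Frame("CurrentJoint", parent=rivets)  # inherits from Rivets
problem.put("load_lbs", "value", 1500)          # user-supplied value
print(problem.get("material"), problem.get("load_lbs"))
```

The value/default split keeps the user's problem data and the expert's domain defaults in the same structure without letting one overwrite the other.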
On the other hand there is data about the particular facilities with which the user operates. Examples could include the minimum translational motion of automatic riveting equipment or the maximum gauge of sheet which can be bent to form joggled joints. The output from the system consists of a precise design and manufacturing specification of the joint to be riveted. One part of the specification contains information especially interesting to the design engineer and consists of the detailed positions of the sheets, including overlap, and the description and location of any additional strengthening sheets that REX has decided are needed. The other part of the specification is particularly interesting to the manufacturing engineer since it contains the type, number, and position of the rivets which need to be installed.

[Figure 3. Organization of Riveting Expertise: modules I through IV invoked in series, with Fail and Succeed exits.]

[Figure 4. Joint Types: a) simple lap joint; b) joggle lap joint; c) single splice butt joint; d) double splice butt joint.]

REX operates at three levels of abstraction, each with its own problem representation, each with access to the problem and criteria frames. At the abstract level of module I (Figure 3), the problem is to decide the suitability of using riveting technology to produce the joint described under the criteria given. The problem representation is thus in terms of the overall limits of riveting for any joint regardless of geometry. At the intermediate level of module II (Figure 3), the problem is to design the geometry of the joint. Thus, the representation is in terms of the number and relative geometric position of the sheets involved, as shown in Figure 4 for flat joints. At the detailed level of modules III and IV (Figure 3), the problem is to select and position the individual rivets.
The representation here includes the detailed geometry of each sheet and is used to solve the problem by considering the sheets in pairwise fashion, since all joints are arrangements of some number of simple lap joints (Figure 4a). The most detailed level of REX shows a dissection into two cooperative problem solvers. Module III reasons over the selection of rivets of the appropriate type. Module IV reasons over the diameter and placement of the rivets in rows relative to the sheet edges. Both of the modules have access to an abstraction hierarchy of frames containing information about each of the rivets available in the production facility. The contents are arranged in seven levels with individual rivets as leaves, and with each intermediate node expressing the range of properties of all of the rivet instances below it. Intermediate levels classify by such criteria as access type (one side/two sides), head type (flush/protruding), shank type (solid/tubular), and rivet material. The overall control among the modules is by simple serial invocation. Module I is activated after the user closes the input files. If module I fails, control is passed back to the user along with an explanation of the difficulty. Otherwise control is passed to module II, although the user may be notified if module I has detected that the solution to the problem is close to the limitations of riveting technology. After module II has completed its reasoning, control is passed to module III along with a description of the joint geometry of choice. Module III, if it cannot find a rivet which it considers to be a good candidate, reports failure with an explanation to the user. Otherwise it passes control and a description of the rivet selected to module IV. If the rivet suggested can be placed so as to solve the problem, the user is notified of the details of the solution.
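The serial invocation of modules I through IV, including the path by which module IV can reject module III's candidate and ask for another, can be sketched roughly as follows (Python for illustration; the module functions are hypothetical stand-ins for REX's Lisp code):

```python
# Sketch of REX's serial module invocation: I -> II -> III -> IV, with
# module IV able to reject module III's candidate and request another.

def run_rex(problem, module1, module2, module3_rank, module4_place):
    verdict = module1(problem)                 # restriction: go/maybe/nogo
    if verdict == "nogo":
        return "fail: outside riveting technology"
    joint = module2(problem)                   # selection: joint geometry
    for rivet in module3_rank(problem, joint): # ranked candidate rivets
        layout = module4_place(problem, joint, rivet)
        if layout is not None:                 # placement succeeded
            return (joint, rivet, layout)
    return "fail: no rivet could be placed"

# Toy stand-ins to exercise the control flow (hypothetical values).
result = run_rex(
    {}, lambda p: "go", lambda p: "simple lap",
    lambda p, j: ["rivet-A", "rivet-B"],
    lambda p, j, r: None if r == "rivet-A" else {"rows": 1},
)
print(result)   # rivet-A is rejected by placement, rivet-B is placed
```

Keeping module III's whole ranked list available makes the IV-to-III retry a simple iteration rather than a re-invocation of the ranking rules.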
If module IV fails in its attempt to place the rivet, control is returned to module III with a request for an alternative suggestion. Considering the individual components, module I (Figure 3) is a knowledge-based restriction module (Figure 2c). The problem and criteria frames supply the input. The search space implicitly contains all imaginable joints regardless of their manufacturability. A basic model of riveting is expressed in a network of constraints represented in frames. Values from the criteria frame parameterize the model while values for the current problem are extracted from the problem frame. Module I compares the key problem requirements with the crucial limitations of the technology. Each such comparison results in an indication (go, maybe, nogo) and an explanation (for maybe and nogo). This module attempts to decide whether the stated problem lies inside or outside of the set of manufacturable joints, or is close to the border. It is here that the system holds knowledge that, for example, a joint cannot be flush relative to sheet edges on both sides, or that it is not a good idea to try to rivet foil. This module has the power, for example, to understand that a flat joint with a one-sided edge flushness requirement between sheets of different gauges will need one sheet to be joggled, but that the thinnest sheet being over the maximum gauge which the shop can bend will preclude manufacture. It can also explain this to the user. Module II (Figure 3) is a knowledge-based selection module (Figure 2b). The problem and criteria frames supply the input. The satisficing set for flat joints contains the arrangements shown in Figure 4. A basic model for making a selection is expressed in a set of rules represented in frames.
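Module II's rule-based choice among the joint types of Figure 4 could look roughly like this (an illustrative Python sketch; the scoring rules and weights are invented for illustration and are not REX's actual rules):

```python
# Sketch of knowledge-based selection (Figure 2b): score each candidate
# joint type with simple heuristic rules and pick the best for the problem.

JOINTS = ["simple lap", "joggle lap", "single splice butt", "double splice butt"]

def score(joint, problem):
    s = 0
    if problem["flush_edges"] and joint == "joggle lap":
        s += 2                       # flush appearance without weight penalty
    if problem["load"] > 10000 and joint == "double splice butt":
        s += 2                       # carries high loads
    if problem["one_sided_access"] and joint == "double splice butt":
        s -= 3                       # easy manufacture needs two-sided access
    if problem["weight_critical"] and "splice" in joint:
        s -= 1                       # extra splice plates add weight
    return s

def select_joint(problem):
    return max(JOINTS, key=lambda j: score(j, problem))

print(select_joint({"flush_edges": True, "load": 1500,
                    "one_sided_access": True, "weight_critical": True}))
```

With these toy rules, a low-load, one-sided-access problem with an edge flushness requirement lands on the joggle lap joint, in line with the tradeoffs the paper describes.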
Module II reasons (mainly) over the load and appearance criteria, with strong influences from (mainly) the manufacturing access and weight restrictions, to select the appropriate basic joint design in the context of the problem at hand. For example, the joggle lap joint presents a flush appearance without weight penalty, but is difficult to manufacture. The double splice joint can carry high loads, but adds significant weight and requires double-sided access for easy manufacture. Module III (Figure 3) is a hybrid between knowledge-based restriction (Figure 2c) and knowledge-based selection (Figure 2b). The input to both is supplied from the problem and criteria frames along with the joint type passed from the previous module. The search space for the restriction section is the rivet frame. The goal of the constraints contained in this section is to prune all the rivets which will not work for the problem of concern. The satisficing set for the selection section is the set of remaining rivets. The goal of the rules contained in the selection section is to rank the rivets in the context of the problem. The best candidate is selected and passed to module IV, but the ranked list is maintained in case module IV fails and asks for another selection. Module III fails when its list is empty and module IV has not yet produced a satisfactory solution. It is often the case that module III reasons about ranking a large number of rivets. This is because the problem characteristics do not always allow the restriction section to prune much from the rivet frame. For example, a flushness requirement relative to rivet heads will eliminate further consideration of protruding rivets, but the lack of a fine flushness requirement does not allow flush rivets to be pruned. Module IV (Figure 3) is an algorithmic module (Figure 2a) containing three algorithms and knowledge in the form of frames about the algorithms.
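Module III's hybrid prune-then-rank behavior can be sketched as follows (Python for illustration; the constraint and rule functions, and the two catalog entries, are hypothetical stand-ins):

```python
# Sketch of module III: the restriction section prunes the rivet catalog
# with hard constraints, then the selection section ranks the survivors.

def prune_and_rank(rivets, constraints, rules, problem):
    # Restriction: keep only rivets that satisfy every constraint.
    survivors = [r for r in rivets
                 if all(c(r, problem) for c in constraints)]
    # Selection: rank survivors by summed rule scores, best first.
    return sorted(survivors,
                  key=lambda r: sum(rule(r, problem) for rule in rules),
                  reverse=True)

catalog = [{"name": "rivet-P", "head": "protruding"},   # invented entries
           {"name": "rivet-F", "head": "flush"}]
flush_only = lambda r, p: not p["flush_heads"] or r["head"] == "flush"
prefer_flush = lambda r, p: 1 if r["head"] == "flush" else 0

ranked = prune_and_rank(catalog, [flush_only], [prefer_flush],
                        {"flush_heads": True})
print([r["name"] for r in ranked])   # only the flush rivet survives pruning
```

As the paper notes, when the problem imposes few hard requirements the constraint stage prunes little and the ranking rules carry most of the load.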
Input comes from the problem frame, the criteria frame, and the rivet frame, along with the rivet passed from the previous module. The first algorithm is concerned with selecting a rivet diameter and calculating the rivet spacing. This task utilizes lookup in tables associated with rivet instances and requires rules for table and entry selection. The second algorithm distributes the rivets into rows, but since various tradeoffs need to be made here, rules are associated with this distribution process. The final algorithm checks the rivet and row spacing to verify load carrying capacity in light of the safety margin. Success of the last algorithm triggers a success report to the user. Failure causes a request to be sent back to module III for another candidate. At this point a fifth module could be added to complete the manufacturing blueprint in terms of clamping the sheets, drilling and countersinking the holes including deburring, and inserting and seating the rivets. While this module has not yet been implemented, initial studies indicate that it will be a knowledge-based selection module over manufacturing equipment and will include rules for sequencing the selected manufacturing operations.

IV VERIFICATION

One human expert provided all of the domain knowledge about riveting which was needed to bring REX to a prototype stage. Various trace and debug facilities which REX contains were used, as well as consulting sessions with the domain expert, to verify the initial system performance. At that point, a second domain expert from a different division was contacted to provide an independent test of the system. Two approaches were taken to obtain adequate test coverage. One set of four simpler problems was solved by REX and by the second expert independently, and the answers were compared. A second set of four harder problems was solved by REX and the second expert was asked to review and comment on REX's solutions.
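Module IV's three algorithms (diameter and spacing selection, row distribution, capacity check) can be sketched as a single pipeline (an illustrative Python sketch with invented formulas; REX's actual table lookups and distribution rules are not reproduced here):

```python
import math

# Sketch of module IV: compute the rivet count, pack rivets into rows at a
# minimum pitch, and verify capacity against the safety margin. The shear
# strength and 4-diameter minimum pitch are placeholder assumptions.

def place_rivets(load_lbs, margin, rivet_shear_lbs, row_length_in,
                 diameter_in, min_pitch_mult=4):
    needed = math.ceil(load_lbs * margin / rivet_shear_lbs)
    min_pitch = min_pitch_mult * diameter_in     # minimum rivet-rivet spacing
    per_row = int(row_length_in // min_pitch)    # how many fit in one row
    if per_row == 0:
        return None                              # placement fails outright
    rows = math.ceil(needed / per_row)
    if needed * rivet_shear_lbs < load_lbs * margin:
        return None                              # capacity check failed
    return {"rivets": needed, "rows": rows}

print(place_rivets(load_lbs=1500, margin=1.5, rivet_shear_lbs=800,
                   row_length_in=12.0, diameter_in=0.156))
```

Returning `None` on failure is what lets the outer control loop hand the problem back to module III for an alternative rivet.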
The input and REX's output for one problem from the first set are presented in detail. This problem is of interest because it is the only problem from either set for which REX's solution was not verified by the second expert. The input and REX's output for one problem from the second set are also presented in detail. This problem is the most difficult problem in the entire set and demonstrates that REX is capable of solving problems of moderate complexity. Note that the numeric accuracies quoted reflect manufacturing data in the criteria data frame. In the first problem, one sheet is a 36" long, 12" wide, 0.25" thick piece of 2024-T6 alloy and the other sheet is a 24" long, 12" wide, 0.1875" thick piece of 2024-T86 alloy. Manufacturing access can be gained from only one side of the joint and there is a one-sided sheet edge flushness requirement but no rivet head flushness requirement. The load is relatively low at 1500 pounds. REX correctly selected a joggle lap joint and a NAS1398D one-sided rivet with an 0.156" shank diameter and an 0.5" shank length, and placed a single row of rivets with an 0.3125" rivet-sheet edge spacing. Unfortunately, REX suggested that only three rivets were required, with a 5.6875" rivet-rivet spacing. The expert was quick to point out that while that might be enough rivets to handle the load, the spacing between rivets was too large and the sheets might buckle when load was applied. On reflection, the system does contain knowledge about the minimum allowable spacing between rivets, since the original focus was on heavy loads, but no knowledge about the maximum allowable spacing. The expert contributed the heuristic that sixteen times the rivet diameter should be used as the maximum, a chunk of knowledge easily incorporated into REX. In the second problem, one sheet is a 36" long, 12" wide, 0.375" thick piece of 2024-T6 alloy and the other sheet is a 24" long, 12" wide, 0.25" thick piece of 2024-T86 alloy.
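The expert's maximum-spacing heuristic is easy to check numerically against REX's layout for the first problem (Python, using the figures quoted for that problem):

```python
# Check the first problem's layout against the expert's added heuristic:
# maximum allowable rivet-rivet spacing = 16 x rivet diameter.

diameter = 0.156          # inches, the NAS1398D shank diameter
actual_spacing = 5.6875   # inches, REX's suggested rivet-rivet spacing

max_spacing = 16 * diameter
print(round(max_spacing, 3))          # 2.496
print(actual_spacing > max_spacing)   # True: the layout violates the rule
```

The three-rivet layout's 5.6875" spacing is more than double the 2.496" maximum, which is exactly the buckling risk the second expert flagged.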
Manufacturing access can be gained from both sides of the joint and there is a one-sided rivet head flushness requirement but no sheet edge flushness requirement. The load is relatively high at 45000 pounds. REX correctly selected a single splice butt joint and a NAS20426AD two-sided rivet with an 0.156" shank diameter and an 0.75" shank length. REX suggested that 88 rivets were required in seven rows. Rows one, three, five, and seven hold 13 rivets each while rows two, four, and six hold 12 rivets each. The layout includes a rivet-sheet edge spacing of 0.375", a row-row spacing of 0.75", and a rivet-rivet spacing of 0.9375".

V CONCLUSIONS

Although the two hypotheses put forward concerning the problem solving methods of human design and manufacturing engineers are simplistic, the prototype system designed and implemented under the philosophy expressed in these hypotheses appears to be reasonably useful. Both design and manufacturing expertise have been captured and integrated under the concept of design for manufacturability. The combined expertise has been utilized to provide solutions which have been validated not only by the expert who contributed the initial knowledge to the system, but also by a second independent expert. However, as with any AI system, the real test of the ideas lies in the future as the prototype is expanded and extended towards a tool for daily use by engineers.

ACKNOWLEDGEMENTS

The work described here is extracted from the Masters thesis of A. Kilhoffer at the University of Missouri and owes much to the support of Prof. D. St Clair. The help of George Palcheff as our primary domain expert and of Richard Stover as our verification expert is gratefully acknowledged.

REFERENCES

1. M. K. Simmons, "Artificial Intelligence for Engineering Design", Computer-Aided Engineering Journal, 1, 3, pp. 75-83, 1984.
2. K. G. Kempf, "Manufacturing and Artificial Intelligence", Robotics, 1, 1, pp. 13-26, 1985.
3. D. C. Brown and B.
Chandrasekaran, "An Approach to Expert Systems for Mechanical Design", Proc. Automating Intelligent Behavior: Applications and Frontiers (Nat. Bureau Stds., Gaithersburg, MD), pp. 173-180, 1983.
4. J. R. Dixon and M. K. Simmons, "Computers that Design: Expert Systems for Mechanical Engineers", Comp. Mech. Eng., 2, 3, pp. 10-18, 1983.
5. Y. Descotte and J.-C. Latombe, "Making Compromises among Antagonist Constraints in a Planner", Artificial Intelligence, 27, 2, pp. 183-218, 1985.
6. R. H. Phillips and C. B. Mouleeswaran, "A Knowledge-Based Approach to Generative Process Planning", Proc. AUTOFACT '85 (Detroit, MI), Sect. 10, pp. 1-15, 1985.
7. M. S. Fox, Constraint Directed Search: A Case Study of Job-Shop Scheduling, Ph.D. dissertation, Computer Science Dept., Carnegie-Mellon University, Pittsburgh, PA, October, 1983.
8. B. R. Fox and K. G. Kempf, "Opportunistic Scheduling for Robotic Assembly", Proc. IEEE Inter. Conf. Rob. Auto. (St. Louis, MO), pp. 880-889, 1985a.
9. P. A. Newman and K. G. Kempf, "Opportunistic Scheduling for Robotic Machine Tending", Proc. IEEE Conf. AI Appl. (Miami Beach, FL), pp. 168-175, 1985.
10. B. R. Fox and K. G. Kempf, "Complexity, Uncertainty, and Opportunistic Scheduling", Proc. IEEE Conf. AI Appl. (Miami Beach, FL), pp. 487-492, 1985b.
11. B. R. Fox and K. G. Kempf, "A Representation for Opportunistic Scheduling", Proc. 3rd Inter. Symp. Rob. Res. (Gouvieux, France, 1985), to appear, 1986.
APPLICATION OF KNOWLEDGE BASED SYSTEMS TECHNOLOGY TO TRIPLE QUADRUPOLE MASS SPECTROMETRY (TQMS)

Hal R. Brand and Carla M. Wong
Lawrence Livermore National Laboratory
P.O. Box 808, Mail Stop L-365, Livermore, Ca. 94550

ABSTRACT

The complexity of chemical instrumentation is such that automation of certain instrument functions by conventional algorithmic means is either very difficult or completely unsuitable. This paper details work in progress on the application of knowledge based systems technology to the tuning of a complex analytical instrument, a triple quadrupole mass spectrometer (TQMS). The knowledge representation schemes and interface design between the expert system and the TQMS instrument are discussed. Preliminary results of optimizing the TQMS on chemical standards are presented.

1. Introduction

In the past twenty years, chemical instrumentation has become very powerful and complex. This complexity occurs not only in the operational principles and physical construction of the instrument, but also in the acquisition and interpretation of data. In many cases, micro- or mini-computers are required for the operation of the instrument; and in most of these, a dedicated computer collects data, transforms the data when necessary and displays the results. This automation has traditionally been achieved using standard algorithmic methods implemented in procedural languages such as FORTRAN, FORTH and assembly code. However, these techniques are not suitable when the automation task involves poorly specified heuristics such as tuning or optimizing the operational parameters for very complex instruments or processes. Recent availability of commercial knowledge based systems tools has made the automation of these problems possible. This paper presents work in progress on the application of knowledge based systems techniques to the tuning of an analytical chemistry instrument, a triple quadrupole mass spectrometer (TQMS).

"Work performed under the auspices of the U.S. Department of Energy by the Lawrence Livermore National Laboratory under contract number W-7405-ENG-48."

2. The TQMS Domain

A triple quadrupole mass spectrometer is a sophisticated chemical measurement instrument that has been described in detail elsewhere (Yost and Enke, 1978; Yost et al., 1979; Yost and Enke, 1979). It can be thought of as a two stage mass spectrometer with a collision cell between the two stages (Figure 1). As in a normal, single stage mass spectrometer, a sample is introduced into the source where it is ionized and the fragments are accelerated into the first quadrupole. This quadrupole can be set to act as a mass filter, allowing only those ions with a specific mass to charge ratio to pass into the second quadrupole. If the instrument is operated in normal mass spectrometry (MS) mode, then all of the ions are detected and a mass spectrum (a plot of intensity vs. mass to charge ratio) is produced (Figure 2). If the second quadrupole region is used as a strongly focusing reaction chamber and is pressurized with an inert gas, the "parent" ions selected by the first quadrupole collide with the inert gas molecules and are further fragmented to form "daughter" ions. These daughter ions are then mass selected in the third quadrupole and analyzed as in normal mass spectrometry, producing a mass spectrum of a mass spectrum (MS/MS mode).
[Figure 1. TQMS Schematic: ion source, quad 1, collision chamber (quad 2), quad 3, electron multiplier. Operation modes:
Mode 1 - Quad 1: separated by mass; Quad 2: all masses passed, no gas; Quad 3: all masses passed; Result: normal mass spectrum.
Mode 2 - Quad 1: fixed on specific mass; Quad 2: all masses passed, collision gas; Quad 3: separated by mass; Result: spectrum of all daughter ions from the selected parent ion.
Mode 3 - Quad 1: separated by mass; Quad 2: all masses passed, collision gas; Quad 3: fixed on specific mass; Result: spectrum of parent ions that fragment to give a specific daughter ion.
Mode 4 - Quad 1: separated by mass; Quad 2: all masses passed, collision gas; Quad 3: separated by mass; Result: a fixed mass difference between the two scanning quads gives specific neutral mass loss.
Mode 5 - Quad 1: fixed on specific mass; Quad 2: all masses passed, collision gas; Quad 3: fixed on specific mass; Result: single or multiple reaction monitoring.]

[Figure 2. Normal Mass Spectrum (MS mode): intensity vs. mass, roughly 300 to 700 mass units.]

Each stage of this process (ionization, mass filtering, collision in the second quadrupole and another stage of mass filtering) is inefficient. In order to obtain maximum sensitivity and selectivity from this instrument, each of these inefficiencies must be minimized by careful tuning of the instrumental parameters in either MS or MS/MS mode. The TQMS built at Lawrence Livermore National Laboratory has over 30 operational parameters controlled by a DEC LSI-11/23 micro-computer (Wong et al., 1983). The LSI-11 is programmed to acquire data for the chemist after the instrumental parameters have been manually optimized (tuned). The tuning of the TQMS is a labor intensive process, requiring at least 30 minutes to obtain a parameter set where the resulting "tune" is a compromise over the mass range of interest. Studies have shown that the ion intensities may be increased by factors of 2 to 30 if the instrument is tuned over many small mass ranges instead of one large range.
For routine analyses, however, the operator time required to tune for many mass ranges cannot be justified. The process of tuning the TQMS is simply an optimization problem with many independent parameters. The operational parameters of the instrument are varied until optimum sensitivity is obtained within the constraints that peak shape must be "good" and adjacent peaks must be resolved. This translates to maximizing peak height while maintaining a nearly parabolic peak shape with a peak width of less than one mass unit (Wong, Kunz, and Kehler, 1984). Figure 3 demonstrates the effect of one parameter on peak shape, and shows both an acceptable and an unacceptable peak shape. The manual process of tuning the TQMS consists of the expert adjusting the operational parameters of the instrument while watching an oscilloscope display of several peaks distributed over the mass range of interest. The operator then maximizes peak heights while ensuring that peak width and shape remain within the constraints.

[Figure 3. Good and Bad Peak Shape Comparison: real data from the TQMS showing the importance of measuring peak shape; intensity vs. DAC setting for a bad tuning and a good tuning.]

This process creates an average tune over the entire mass range where the sensitivity of each ion is compromised (Wong, Kunz, and Kehler, 1984; Wong, Crawford, et al., 1984; Wong, Lanning, et al., 1984). The ability to automate the tuning procedure would allow the chemist to tune the TQMS over small mass ranges (or even tune for specific parent/daughter combinations) to obtain increased sensitivity for every ion. To accomplish this, we tried two approaches: an algorithmic approach and an expert system approach. Experimentation indicated that the time required to tune the instrument using the algorithmic approach (in this case, a Simplex algorithm) was excessive due to the data acquisition time.
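The tuning objective stated above, maximizing peak height subject to shape and width constraints, can be written down directly (a Python sketch; the quadratic-fit shape test and the 0.95 threshold are illustrative assumptions, not the instrument's actual criteria):

```python
import numpy as np

# Sketch of the tuning objective: a candidate parameter setting is scored
# by peak height, but only if the measured peak is acceptably shaped.

def peak_score(mass, intensity, max_width=1.0, min_fit=0.95):
    height = intensity.max()
    half = intensity >= height / 2
    width = mass[half][-1] - mass[half][0]       # full width at half max
    # Near-parabolic shape: how well a quadratic fits the top of the peak.
    coeffs = np.polyfit(mass[half], intensity[half], 2)
    fit = np.corrcoef(np.polyval(coeffs, mass[half]), intensity[half])[0, 1]
    if width >= max_width or fit < min_fit:
        return 0.0                               # reject badly shaped peaks
    return float(height)

m = np.linspace(218.5, 219.5, 101)
peak = 8000 * np.maximum(0.0, 1 - ((m - 219) / 0.4) ** 2)   # synthetic peak
print(peak_score(m, peak) > 0)
```

A scalar score of this form is what lets either a numerical optimizer or a rule-based tuner compare parameter settings on a common footing.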
A manual tuning required approximately 30 minutes while the Simplex tuning required approximately 300 minutes (Figure 4). Comparing the Simplex approach with manual tuning revealed that the Simplex algorithm could not take advantage of any knowledge of the TQMS that would permit acquiring less data. For example, each step of the Simplex algorithm adjusts every instrumental parameter, which requires many data points to be taken in order to evaluate the effect on peak shape and therefore on overall performance.

[Figure 4. Comparison of Tuning Methods: intensity achieved vs. time (minutes) for manual single peak tuning, SIMPLEX tuning, knowledge based tuning, and normal manual tuning.]

The human expert is able to choose which knob should be adjusted for the maximum desired effect and knows whether adjusting that knob will affect the peak shape or merely the peak height. Since many of the knobs do not affect peak shape under the right circumstances, adding this knowledge into the system leads to acquiring less data (single point peak height data instead of many point peak shape data), and the entire tuning process proceeds more rapidly (Wong and Brand, 1985).

3. Knowledge Representation

Three major classes of knowledge are relevant to tuning the TQMS:
1. Knowledge about the instrument itself.
2. Knowledge about how to evaluate the instrument output and tune the instrument.
3. Knowledge about how to interface to the LSI-11 computer to control the instrument.

The knowledge engineering tool KEE*, running on a Xerox 1109 LISP processor, was used to develop this expert system. KEE provides four basic mechanisms for representing and using knowledge: frames, methods, rules, and active values. Frames permit static knowledge to be represented in a class/sub-class/member inheritance hierarchy. Methods provide a mechanism for representing procedural knowledge in LISP code with an object-oriented interface and control structure.
Rules provide an alternative mechanism for representing procedural knowledge, with KEE providing both forward chaining and backward chaining control structures. Finally, active values provide a mechanism for attaching procedural knowledge in the form of methods or rules to frames. The hybrid knowledge representation environment provided by KEE permits significant flexibility in the representation of knowledge about the instrument and its tuning procedure. Frames provide a very clear and simple representation of the static (or declarative) knowledge about the instrument parts and controls (see Figures 5 and 6). Knowledge about the attributes of the objects or classes represented by the frames is represented in slots within the frames. An inheritance mechanism allows specification by differences, greatly simplifying the job of specifying and maintaining the knowledge in the frames. The class/sub-class/member relationships of the frames provide another mechanism for encoding knowledge with frames.

*Trademark of IntelliCorp

[Figure 5. TQMSParts Hierarchy: the instrument decomposed into parts such as Detector, Filament, Source, SourceLens, the quadrupole plates (Q1PlateA-C, Q2PlateA-C, Q3PlateA-C), RepellerPlate, DrawoutPlate, FocusPlate, the quad lenses (Lens1-Lens3), and the mass-filter quadrupoles Q1-Q3.]

[Figure 6. TQMSControls Hierarchy: the knobs and switches, including SEMVoltage, Drawout, Repeller, the lens plate knobs (Lens1-3, plates A-C), mass and resolution knobs (q1Mass, q3Mass, q1Resolution, q3Resolution), field axis knobs (q1FieldAxis, q2FieldAxis, q3FieldAxis), and switches (Q1DC, Q2LINKAGE, Q3DC).]

Methods are valuable for representing procedural knowledge about the time sequencing of the steps in tuning the TQMS, and for implementing the standard algorithms necessary to interface the expert system to the LSI-11 control computer. A flexible interface between methods and the rule system allows the knowledge of the iterative steps in tuning the TQMS to be simply and clearly represented in a method, while concurrently permitting the use of rules for the complex decision steps. Rules, with a backward chaining control structure, are used to represent the knowledge of which knob to adjust or "tweak". This largely heuristic and poorly defined knowledge was significantly easier to represent in rules for two reasons. First, rules provided a procedural knowledge representation scheme that was readily understood by the experts, which facilitated information transfer from the experts to the knowledge based system. Second, using rules provided for the incremental addition of knowledge about knob selection, since new rules could be added without regard to their placement or order of use. Rules allowed the experts to concentrate on expressing knowledge about the TQMS and not on the decision control structure. A natural representation for the value of an instrument control parameter, or knob, is a "Setting" slot in the frame that represents the knob. Active values permitted this representation scheme by providing a mechanism that associates the procedural knowledge about interfacing to the LSI-11 computer with the "Setting" slot of each knob frame. By placing an active value on the "Setting" slot of each knob frame, methods are invoked at each access to the "Setting" slot. These methods cause the instrument's physical knob settings to track the "Setting" slot in the knob frames.
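The active-value scheme can be mimicked in Python with a property (an illustrative sketch; KEE's actual active values are Lisp attachments, and the `send_to_lsi11` hook here is a hypothetical stand-in for the real instrument interface):

```python
# Sketch of an active value: writing a knob frame's Setting slot triggers
# a method that pushes the new value to the instrument controller.

log = []

def send_to_lsi11(knob, value):
    # Hypothetical stand-in for the LSI-11 interface; just records calls.
    log.append((knob, value))

class Knob:
    def __init__(self, name):
        self.name = name
        self._setting = None

    @property
    def setting(self):
        return self._setting

    @setting.setter
    def setting(self, value):
        self._setting = value
        send_to_lsi11(self.name, value)   # active value fires on every write

repeller = Knob("Repeller")
repeller.setting = 2250                   # physical knob tracks the slot
print(log)                                # one ("Repeller", 2250) entry
```

Because the side effect hangs off the slot itself, the rest of the system reads and writes settings without knowing an instrument exists, which is exactly what allows development when the TQMS is unavailable.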
The advantage of this interfacing scheme is the invisibility of the instrument interface to the rest of the system, thereby permitting continued development and testing of the system during times when the instrument is not available for development work.

3.1 Representation of the Instrument

Knowledge about the physical construction of the TQMS in terms of parts and assemblies is represented with frames as shown in figure 5. Frames are also used to represent the physical controls (knobs and switches) on the TQMS (figure 6). Slots within each hierarchy are used to represent the knowledge about which knob(s) and/or switch(es) control which part(s)/assemblies. In addition, slots within the TQMSParts hierarchy represent the part/assembly relationships. This representation scheme clearly separates the instrument knobs and switches from the instrument parts and assemblies. The first attempt at representing the instrument with frames did not make this distinction, as the experts very often blur the two together because there is often a one-to-one correspondence between the part being controlled and the control(s) of that part. The source of this confusion can be understood by examining figure 7, which was generated by methods within the TQMSParts frames that interpret the part/assembly and part/control links. In this figure bold faced print indicates the knob(s) that control the parts/assemblies shown in normal print. In many cases there is a one-to-one correspondence between TQMSParts and Knobs (e.g. the RepellerPlate part and the Repeller knob), but there are also cases where there is a one-to-many relationship (e.g. the Q3 part controlled by the knobs Q3Mass, Q3Resolution, and Q3FieldAxis). This caused some frames to represent a part/knob pair, while similar frames represented only a single knob. This led to difficulties with the active value interfacing scheme and inheritance.
Separation of this knowledge into two inheritance hierarchies, with accompanying methods for producing the TQMS part/whole graph, improved the transparency of the representation, increased the usefulness of inheritance, and provided a clear depiction of the part/whole breakdown of the instrument, which is the way experts view the TQMS.

[Figure 7. TQMS Part/Whole Graph]

The chosen representation also makes possible the use of "virtual knobs" (see "LinkedKnobs" in figure 6). A "virtual knob" may be defined to control two or more TQMS knobs (and therefore one or more parts) simultaneously, giving them a single setting as though they are controlled by a single physical knob. The mapping from the setting of the virtual knob to the settings of the physical knobs is dependent upon the type of virtual knob implemented. The TQMS experts had determined that the settings of certain physical knobs should be varied together in a fixed way, but they were unable to do this effectively while manually tuning the instrument. Virtual knobs provide a simple and effective mechanism to accomplish this task.

3.2 Representation of the Tuning and Evaluation Procedures

Knowledge about the process of tuning the TQMS is represented using methods and rules. A method is used to represent the high level procedure of tuning the TQMS, which iterates over the following steps:

1. Use the current output of the instrument and the history of what has already been done to select a knob to adjust; if a knob cannot be selected, the tuning process is complete.
2. Determine the parameters necessary to "tweak" (adjust) that knob.
3. Tweak the knob to achieve an increase in instrument performance.
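A minimal sketch of this three-step loop (illustrative Python, not the authors' KEE method; select_knob, tweak_parameters, and tweak are hypothetical stand-ins for the rule sets and methods described in the paper):

```python
# Illustrative sketch of the iterative tuning procedure. The three
# callables stand in for: (1) the knob-selection rule sets, (2) the
# tweak-parameter rule sets, and (3) the tweaking method.

def tune(select_knob, tweak_parameters, tweak, state):
    """Repeat knob selection, parameter determination, and tweaking
    until no knob can be selected (tuning complete)."""
    history = []
    while True:
        knob = select_knob(state, history)      # step 1: pick a knob
        if knob is None:                        # no knob => tuning done
            break
        params = tweak_parameters(knob, state)  # step 2: tweak parameters
        state = tweak(knob, params, state)      # step 3: adjust the knob
        history.append(knob)
    return state, history
```

The history argument mirrors the paper's use of "the history of what has already been done" as an input to knob selection.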
This procedure is represented as a method because it is well specified and not believed to be subject to significant change. While methods are often less transparent to the experts than rules, it was felt that the convoluted rules necessary to represent this iterative, step-wise procedure would only serve to obscure the simplicity of the process. In addition, much of the bookkeeping detail underlying this process would have to be implemented in LISP code within the rules, further reducing the clarity of the rules.

Rules are used to represent the heuristic decision-making knowledge needed by the first two steps of the tuning procedure. The clarity of the rule structure made incremental improvement of the knowledge possible. Backward chaining from the hypothesis "THE KNOB.TO.TWEAK OF CURRENT.TUNE IS ?X" was selected as the control structure for applying the rules for knob selection. The same control structure, starting from similar hypotheses, is used with the rules that determine the tweaking parameters in step two. Backward chaining was chosen because it limited the inferences made to those necessary to make the required decision, and allowed for terminating the chaining process when the decision was made. The forward chaining mechanism in KEE was discarded because it did not provide any mechanism to terminate forward chaining once the required decision was made.

Step two of the tuning procedure requires five separate decisions. These decisions are largely independent, and the rules used to make them are separated into rule sets and invoked sequentially following the choice of the knob to tweak in step one. The coupling between these decisions is handled by permitting any rule set to make a decision that would normally be made later by another rule set. When this happens, the rule set corresponding to that decision is skipped since the decision has already been made.
For example, if the rule set that selects which knob to tweak also specifies the limits of the knob adjustment, the rule set that normally determines the limits is skipped. The advantage to this sequential decision-making process is that any of the subsequent decisions can be made when unusual circumstances are recognized, and the routine decisions can be deferred to the rule sets that handle the usual cases. The method implementing the tuning procedure provides for separate, modifiable, default decisions should the rule sets fail. These defaults are stored in the frame that represents knowledge about the progress and state of the tuning process. These default decisions reduce the rules needed within each rule set to only those rules necessary for the "exceptional" cases. These defaults also permit the system to function, albeit not always optimally, under circumstances not previously considered.

Methods are used to represent the knowledge of how to tweak a selected knob. The methods implement algorithms for one dimensional, noise insensitive optimization. Methods were chosen for three reasons: 1) such algorithms already existed and were relatively simple to implement as methods; 2) the heuristics associated with tweaking a knob were simple and could be incorporated into the existing algorithms as parameters; and 3) should the algorithmic approach to representing knowledge about the tweaking procedure be successful, the algorithms could be easily ported to the LSI-11 control computer, resulting in a significant increase in instrument tuning speed. The parameters to the tweaking algorithm are selected (as mentioned above) during the ordered decision making process immediately preceding the tweaking of a knob. The final knowledge required to tune the TQMS is how to evaluate the signal from the instrument.
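One form such a one-dimensional, noise-insensitive tweaking algorithm might take is a hill climb that averages repeated measurements and halves its step size. This is a hedged sketch, not the authors' implementation; the parameter names, limits, and averaging count are assumptions:

```python
import statistics

# Hypothetical sketch of a noise-insensitive 1-D hill climb. `measure`
# stands in for reading the instrument's performance indicator; `lo` and
# `hi` play the role of the adjustment limits chosen by the rule sets.

def tweak_knob(measure, setting, lo, hi, step, samples=3, min_step=0.01):
    """Climb toward a higher signal, averaging repeated (possibly noisy)
    measurements and halving the step when neither direction improves."""
    def avg(x):
        return statistics.mean(measure(x) for _ in range(samples))
    best = avg(setting)
    while step >= min_step:
        moved = False
        for cand in (setting + step, setting - step):
            if lo <= cand <= hi and avg(cand) > best:
                setting, best, moved = cand, avg(cand), True
                break
        if not moved:
            step /= 2          # refine once no direction helps
    return setting
```

Averaging over several samples is one simple way to get the noise insensitivity the paper mentions; the actual algorithms and heuristic parameters are not given in the source.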
In the knob selection process, output from the TQMS needs to be analyzed to determine which knob, when properly adjusted, will most likely produce the greatest increase in instrument performance. During the tweaking step, constant evaluation of the instrument output is required to properly adjust the knob. Supporting the decision of which knob to tweak, rules are used to evaluate a condensed description of the peak height, width, and shape. Rules were chosen because they provide a flexible mechanism for evaluating the different factors in the TQMS output at a time when complete evaluation is critical. The evaluation rules are placed in the rule sets that select the knob to tweak, allowing them to be tailored to their companion knob selection rules. During the tweaking process, the state of the instrument isn't critical and only a single indicator of instrument performance is required. Accordingly, a method is used to convert the condensed peak description into a single numeric performance measure. Use of a method at this stage was motivated by the desire to eventually port the tweaking procedure to the LSI-11 control computer, thereby achieving shorter tuning times.

3.3 Representation of Interfacing Knowledge

The knowledge of how to interface the TQMS expert system to the LSI-11 control computer is partitioned into two pieces. The first part of the interface is represented using frames and methods, and the hierarchy of these frames is shown in figure 8. The frames and methods represent the knowledge needed to command the LSI-11 computer to manipulate any of the TQMS controls and to solicit any acquired/processed data from the LSI-11 computer. Active values are used to connect the interfacing knowledge with the knowledge about the physical controls of the TQMS.
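The active-value idea of a "Setting" slot whose reads and writes trigger attached interface methods can be sketched in plain Python (a hypothetical analogue, since KEE is frame-based LISP; the Knob class and the send/query hooks are illustrative, not KEE's API):

```python
# Hypothetical analogue of an active value on a knob frame's "Setting"
# slot: every read or write invokes attached methods that would command
# or interrogate the LSI-11 control computer.

class Knob:
    def __init__(self, name, send, query, connected=True):
        self.name = name
        self._send = send        # hook: command the instrument to change
        self._query = query      # hook: interrogate the instrument
        self.connected = connected
        self._setting = None     # cached value used when disconnected

    @property
    def setting(self):
        if self.connected:
            self._setting = self._query(self.name)
        return self._setting

    @setting.setter
    def setting(self, value):
        self._setting = value
        if self.connected:       # disconnected: instrument untouched
            self._send(self.name, value)
```

The connected flag mirrors the paper's connect/disconnect methods: with it off, slot accesses never reach the instrument, which is what permits development against a simulator.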
Each member frame shown in figure 8 (connected by a dashed line) contains methods that respond to read and write accesses to a slot (or set of slots) in the member frames of the TQMSControls hierarchy (figure 6) via the active value mechanism. The methods contain the knowledge of how to translate the read/write request to a slot into the corresponding request/command to the LSI-11 control computer, and how to transmit that request/command. Additional methods are used to connect and disconnect the active values. In the disconnected state, accessing slots of the frames in the TQMSControls hierarchy will not affect the instrument. In the connected state, changing a knob's "Setting" slot results in the instrument being commanded to make the corresponding change, while reading the slot results in the instrument being interrogated for the current value.

The second part of interfacing knowledge consists of LISP code that implements a low-level master/slave protocol to provide an interlocked, reliable communications path. LISP code was chosen because of the need for efficiency dictated by the Xerox 1109's RS-232C interface. The implementation uses hierarchical control and state machine emulation to reliably transfer commands and responses between the Xerox 1109 and the LSI-11 control computers. The RS-232C communication link was chosen because the interface was available on both machines and high bandwidth was not required.

The separation of the interfacing knowledge from the representation of the instrument simplifies experimentation with different instrument representation schemes. The active value separation scheme employed is also ideally suited to the use of an instrument simulator, since a simulator can be connected with active values in an identical fashion. Lastly, the ability to disconnect from the instrument facilitates system development and debugging during times when the instrument is not available or unnecessary. The disadvantage of the active value scheme is that significant bookkeeping is required to provide the correspondence between the instrument representation frames (figure 6) and the instrument interface frames and slots (figure 8). The latter problem has so far been addressed by a combination of methods and an extra slot in the TQMSControls frames which contains knowledge about connecting to the interface frames of figure 8.

[Figure 8. REALTQMS Interface Hierarchy]

4. Results

The use of knowledge based systems techniques to automate the tuning of the TQMS has proven to be very useful. The knowledge based system interfaces to the instrument and exercises intelligent control over the tuning process in real-time. Initial results have demonstrated that the system is able to tune the instrument in MS mode nearly as well as a Simplex optimization procedure (in one half the time) and better than an expert operator does in twice the time (figure 4). If the human expert optimizes on a single peak, manual tuning can attain twice the sensitivity and takes less time than the expert system does. However, experts do not individually tune every peak region in a mass spectrum because it takes too much time, so a more valid comparison of the system performance is shown in figure 9. Optimizing the instrument in four separate mass regions (less than 100, 100-200, 200-350, greater than 350) has enabled us to increase the peak intensity (and instrument sensitivity) in all regions by factors of 2 to 30. As more rules are added to the system and the current rules are optimized, the sensitivity should increase most noticeably in the high mass region (above mass 500) (Wong, Crawford, et al., 1984; Wong, Lanning, et al., 1984; Wong and Brand, 1985).

[Figure 9. Comparison of Normal and Optimized Tuning (normal vs. optimized tuning on mass 502 from PFTBA)]

Having demonstrated the usefulness of knowledge based systems to MS tuning, we turned to the more general and difficult optimization problem, that of MS/MS tuning. MS/MS operation of the TQMS differs from single MS mode in that selected parent ions are further fragmented and mass analyzed before being detected. This collision process introduces new parameters and conditions which don't exist in single MS operations. For example, the energy of the collision in the second quadrupole is a new parameter to optimize. Figure 10 is a plot of intensity (of the daughter ion at mass 219 from the parent ion at mass 502 from perfluorotributylamine, PFTBA) vs. the collision energy. The lower intensity curve shows a typical energy profile which could be obtained by manual optimization of this instrument parameter. By using a rule-based virtual knob to link several of the parameters together, a dramatic increase in the sensitivity was obtained (a factor of 40).
This increased sensitivity was never obtained by manual tuning methods, but is easily accomplished with this automated optimization scheme.

[Figure 10. Comparison of Peak Intensity from Linked and Unlinked Tuning Parameters (PFTBA 502+ --> 219+ intensity vs. Quad 2 field axis offset)]

The current MS/MS optimization system is able to tune the entire instrument to approximately 85% of the sensitivity of a manual tune. As with the single MS tuning, the sensitivity gain of the MS/MS tuning is expected to rise rapidly as rules are added and debugged. Off-loading the tweaking algorithms to the LSI-11 control computer is expected to significantly decrease the tuning time.

An example of the usefulness of the AI technique for tuning the TQMS can be found in a routine analysis of 12 different sulfur compounds in oil shale pyrolysis gas. The expert cannot spend the time it would take to manually tune for each parent/daughter pair. If tuning for each pair took only 10 minutes (an optimistic estimate, as it usually takes about 30 minutes), two hours would be required to optimize the instrument for this analysis. The AI tuning system described in this paper is faster and does a better job of optimizing, as it can use virtual knobs unavailable for manual tuning. This frees the operator of the tuning, allowing him/her time for sample preparation, office politics, or a cup of coffee.

5. Conclusions

The use of a knowledge engineering tool that integrates multiple knowledge representation schemes significantly decreased the system development time compared to tools that offer only rules. By permitting knowledge to be represented and used in multiple ways, it was possible to select a representation scheme that made the knowledge visible, clear, and easily encoded. The result was enhanced feedback from the experts because they were able to see where the knowledge was stored and how it was encoded.
The interfacing of the expert system to the TQMS proved the value of expertise, encoded in the form of rules, to complex optimization problems. The system is able to optimize the output of a complex instrument running chemical compounds in a way not practical with manual methods. Because the expert system approach allows the instrument to be tuned quickly, multiple mass range, or even individual mass pair, tuning is now practical, resulting in large gains in instrument sensitivity.

Separation of the knowledge about knobs and how they are tuned from the knowledge about the TQMS domain makes many aspects of this system transportable to other tuning problems. Many of the knowledge representation techniques applied to the TQMS tuning process could be easily applied to other problem domains. An accelerator, a laser system, and many other complex physical and chemical instruments require that significant time be spent by experts to assure the system is properly tuned to fine working order. Such systems are prime candidates for the application of the same or very similar AI techniques as were used on the TQMS to the automation of their tuning process.

Two significant problems with the application of knowledge based systems to chemical instrumentation have been encountered. First, there is a significant learning curve associated with applying the technology, and second, knowledge based systems software tools and the supporting hardware are expensive. While our experience has shown that the hardware and software tools are cost effective for developing the system, these high costs make fielding multiple copies of the system on the development vehicle economically unattractive. The alternative is to port the developed system to other hardware using conventional languages, but this approach is practical only where the large costs of porting the software can be amortized over many systems.
6. ACKNOWLEDGEMENTS

The authors wish to thank Hugh Gregg and Richard Crawford for their valuable programming efforts on the LSI-11 control computer of the TQMS. We also wish to thank Peter Friedland of Stanford, and Hugh Gregg, for numerous helpful comments and suggestions on this paper. Special thanks also go to Charles Bender, former Chemistry and Materials Science Department Head at LLNL, who provided the initial funding and encouragement to get this project started, and to Art Lewis for support from the Oil Shale program.

DISCLAIMER

This document was prepared as an account of work sponsored by an agency of the United States Government. Neither the United States Government nor the University of California nor any of their employees makes any warranty, express or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial products, process, or service by trade name, trademark, manufacturer, or otherwise does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States Government or the University of California. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Government thereof, and shall not be used for advertising or product endorsement purposes.

REFERENCES

[1] Yost, R.A., Enke, C.G., "Selected Ion Fragmentation with a Tandem Quadrupole Mass Spectrometer", J. Amer. Chem. Soc., 100, pp. 2274-2275, (1978).
[2] Yost, R.A., Enke, C.G., McGilvery, D.C., Smith, D., Morrison, J.D., "High Efficiency Collision-Induced Dissociation in an RF-Only Quadrupole", Int. J. of Mass Spectrom. Ion Phys., 30, pp. 127-136, (1979).
[3] Yost, R.A., Enke, C.G., "Triple Quadrupole Mass Spectrometry for Direct Mixture Analysis and Structure Elucidation", Anal. Chem., 51, pp. 1251A-1264A, (1979).
[4] Wong, C.M., Crawford, R.W., Barton, V.C., Brand, H.R., Neufeld, K.W., Bowman, J.E., "Development of a Totally Computer-Controlled Triple Quadrupole Mass Spectrometer System", Rev. Sci. Instrum., 54, pp. 996-1004, (1983).
[5] Wong, C.M., Kunz, J.C., Kehler, T.P., "Application of Artificial Intelligence to Triple-Quadrupole Mass Spectrometry", IEEE Transactions on Nuclear Science, Vol. NS-31, No. 1, pp. 804-810, (1984).
[6] Wong, C.M., Crawford, R.W., Lanning, S.M., Brand, H.R., "Artificial Intelligence for Optimization of Real-Time Data from Pyrolysis Experiments", 32nd Annual Conference on Mass Spectrometry and Allied Topics, May 27-June 1, 1984, San Antonio, Texas, pp. 372-373.
[7] Wong, C.M., Lanning, S.M., Brand, H.R., Crawford, R.W., "Application of AI Programming Techniques to the Development of an Expert System to Tune a TQMS", 32nd Annual Conference on Mass Spectrometry and Allied Topics, May 27-June 1, 1984, San Antonio, Texas, pp. 642-643.
[8] Wong, C.M., Brand, H.R., "Artificial Intelligence: Expert System for Acquisition and Interpretation of Data in Analytical Chemistry", Pittsburgh Conference on Analytical Chemistry and Applied Spectroscopy, Feb. 25-Mar. 1, 1985, New Orleans, La., p. 127.
PLAN RECOGNITION FOR AIRBORNE TACTICAL DECISION MAKING

Jerome Azarewicz, Glenn Fala, Ralph Fink, Christof Heithecker
Naval Air Development Center, Warminster, PA 18974

ABSTRACT

Airborne tactical decision making is degraded as a result of sophisticated threat capabilities, high data rates and uncertainties, and the necessity for timely response. Under investigation at the Naval Air Development Center is the concept of a plan recognition model to assist the tactical decision maker in interpreting and predicting the activities of enemy platforms.* On-going work in the field of plan recognition was surveyed, knowledge acquisition conducted, and a prototype plan recognition model has emerged. The model is a hierarchical, blackboard-based adaptation of a more general architecture of cognition. The model attempts to overcome some of the perceived shortfalls of other approaches relative to the complexities of the tactical situation. Extensions to accommodate uncertain events and elusive goals in multi-hypothesis situations are the focus of current activities.

I INTRODUCTION

Command and Control decision makers aboard Naval aircraft face difficult tasks in assessing and acting upon the activities of enemy platforms (e.g., aircraft) in a tactical environment. Enemy activity is monitored via on-board and remote sensor systems. In high-threat situations, large volumes of sensor data must be analyzed and correlated in real time in order to construct an accurate representation of the situation as it unfolds. The data arrives quickly and may be incomplete, inaccurate, and ambiguous as a result of sensor limitations, threat deception, and other factors. A tremendous burden is placed on the decision maker, who must absorb and assimilate this data to make time-critical tactical decisions on which the survival of the task force may depend.
A key factor in intelligent tactical decision making is the correct interpretation of the tactical situation. The interpretation process can be cast as a form of plan recognition, which asserts that the tactical observer interprets the activity of enemy platforms by hypothesizing their goals and inferring the plans that are being carried out in order to achieve the goals. An automated, on-line plan recognition model would serve to assist the decision maker in real-time tactical situation interpretation, alerting him to significant events and trends. Feasibility of this concept is under investigation at the Naval Air Development Center through the development of a prototype Plan Recognition Model (PRM).

*The work described in this paper has been supported by Naval Air Development Center Independent Research, the Office of Naval Technology, and the Naval Air Systems Command.

Previous work in plan recognition has focused on a number of problem domains (Litman and Allen, 1984; Wilensky, 1983; Schmidt, 1976; and Carver, Lesser, and McCue, 1984). Although current models provide considerable insight into the plan recognition problem, they have not fully addressed the complexities encountered in domains such as that of tactical situation assessment. If one compares the present status of plan recognition problems against what is required for the tactical problem, several shortfalls can be identified as follows:

Present Plan Recognition       Tactical Problem
Single Agent                   Multiple Agents (Independent or Interacting)
Cooperative Situations         Adversarial Situations
Well Defined Goals             Elusive Goals
Known Events                   Uncertain Events
Limited Hypotheses Set         Large Hypotheses Set
Time Factor is Negligible      Time Factor is Critical

It is the intent of the current work at NADC to overcome some of these limitations in current plan recognition models, especially in the areas of increasing the hypothesis set and analyzing adversarial situations with uncertain events.
The remainder of the paper gives an overview of the design and operation of the single agent/multiple hypotheses PRM architecture.

From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved.

II PRM: COGNITIVE ARCHITECTURE OVERVIEW

The design of the PRM was largely derived from the work done by Anderson on the Architecture of Cognition (Anderson, 1983). Figure 1 is an overview of our interpretation of three major components of this theory: Long Term Memory, Procedural Memory, and Short Term Memory.

[Figure 1. Plan Recognition Model Architecture: Long Term Memory (declarative descriptions of goals, plans, platform tactics and missions), Procedural Memory (knowledge about how goals and plans are inferred, maintained, and revised), and Short Term Memory (storage and control of the current state of the world, containing the set of active hypothesized plans).]

Long term memory (LTM) consists of a declarative description of the set of missions which can be held by an agent under observation. Each mission is represented by a class of plans which can be performed to achieve a particular goal. The structure of these plan descriptions is hierarchical. Plans can be decomposed into a sequence of events, which, in turn, can be decomposed into a set of parameters (Figure 2). Constraints, the necessary conditions for the occurrence of an object in the LTM, are imposed at each level (plan, event, or parameters) of the plan hierarchy. If a constraint has been violated, then the LTM object to which the constraint is attached cannot have occurred.

[Figure 2. Plan Structure Organization]

Procedural Memory (PM) contains knowledge about how to reason about the state of the world.
Using the knowledge resident in both short term memory and long term memory, procedural memory is the vehicle for matching observations of the environment to plan structures, postulating goals, weighing hypotheses, and revising plans. Components of PM include search strategies and utilities for matching objects stored in LTM.

Short term memory (STM) is the storage space for the intermediate results of the plan recognition process. Knowledge about the current state of the world resides here, such as measurements of the environment, the history of observed events, the set of active hypothesized plans, and the measures of belief and disbelief associated with these plans. STM is the medium through which procedural memory interacts with LTM; procedural memory will only act on those plans stored in LTM which have been retrieved and posted on STM.

III PRM COMPONENT DESCRIPTIONS

In the following sections, the PRM is discussed in the domain-specific context of airborne tactical decision making. Knowledge acquisition activities were carried out to extract and represent the declarative and procedural knowledge the decision maker brings to bear in the plan recognition process. Work has focused on achieving a PRM that can generate and maintain multiple hypotheses to interpret and predict the behavior of a single threat platform. Extensions to accommodate multiple platforms are currently under study but will not be discussed below.

A. Long Term Memory

LTM consists of a declarative description of a set of possible missions held by an observed platform. Each mission represents a class of plans whose successful execution results in the achievement of the mission goal. For example, the goal of the mission class ATTACK can be achieved by invoking one of several available plans or variations of plans in the ATTACK class. Several different platforms may have the same mission goal but use different plans to achieve it.
On the other hand, platform limitations restrict the set of plans that may be executed to achieve a particular mission. The plan description must capture the significant features of the ways a mission goal may be achieved. The following is a discussion of the two components of each plan structure in LTM: the plan hierarchy and the Deterministic Finite Automaton (DFA).

The hierarchical component defines a plan as a tiered structure (Figure 2) consisting of a plan name, a set of events, various parameters pertaining to the events, and constraints at each tier in the hierarchy. Constraints in the plan hierarchy form a nested set of necessary conditions which must be satisfied for entry to a tier. The matching of environment measurements against objects in the plan structure involves checking the constraints within the structure of the plan, event, and parameter levels. If a measurement of the environment satisfies the plan constraints, the plan is plausible and the event constraints may be checked; if these are satisfied for a particular event, then that event's parameter constraints may be checked. The constraints serve to reduce the search spaces of both the plans possibly held by the platform and the event being performed within that plan.

The events are temporally ordered features which a tactical decision maker judges to be significant for hypothesizing or inferring a plan held by a platform. The events, in turn, consist of a set of observable parameters and their associated constraints. Parameters that define an event represent measures of the platform's kinematics (platform motion) and emissions (electromagnetic signals) behavior that are expected and allowable within that event. Each parameter that characterizes an event has an expected value or interval of values. To allow for variations in the unfolding of events, the constraints extend the expected parameter value range out to the maximum allowable range for the particular event.
If the event constraints are satisfied by an observed measurement, then it is possible that the platform is executing this event in the hypothesized plan. Otherwise, this event cannot be held by the platform.

Event boundaries in the plan structures are defined in terms of significant transitions in expected parameter values. The events in Table 1 (E1, E2, etc.) are delineated with respect to a set of expected parameter values over an expected range. When there is a significant change in the expected parameter values, a transition from one event to the next occurs. For instance, the dramatic change in the altitude of the bomber in Table 1 indicates a transition from E1 to E2.

[Table 1. Partial Declarative Description, Bomber Attack Plan (expected parameter values per event as a function of range, nm)]

The event constraints serve to extend the expected event boundaries out to the maximum allowable parameter values for each event (Table 2). Because of these extended event parameter boundaries, events may overlap in time. The relaxation of expected parameter values through constraints provides for the gradual transition from one event to the next. Knowledge about which event the platform will transition to is embodied within the DFA. This concept is discussed at the end of this section.

[Table 2. Sample Event Constraints for Bomber Attack Plan (allowable event boundaries as a function of range, nm)]

A measure of belief for the platform's events and state can be obtained by comparing the actual platform measurements against the expected/allowable parameter values. Rules attached to the parameter values in the plan structure are invoked to determine the degree of membership and associated belief in the event. Attached to each kinematic parameter are linear functions which are used to determine the degree-of-match (DOM) between an observed value for a parameter and the expected/allowable values for that parameter (Table 3). Utilizing kinematic information of the measurement, the linear functions return a value which falls into one of three classes: exact, partial, and none. The three results can be interpreted as follows:

Exact: The measurement falls within the expected range of values; the measurement is a member of the event-parameter set.

Partial: The measurement falls within the allowable range of values; the measurement is a plausible member of the event-parameter set.

None: The measurement falls outside of the allowable range of values; the event is removed from consideration.

[Table 3. Sample Kinematic Parameters for Event 1 of Bomber Attack Plan (expected/allowable ranges and degree-of-match functions for altitude, velocity, and Imm)]
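The exact/partial/none classification can be sketched as a piecewise-linear membership function (an assumed form of the paper's linear DOM functions; the exact ramps in Table 3 are not fully recoverable from the source):

```python
# Sketch of a degree-of-match (DOM) computation for one kinematic
# parameter: 1.0 inside the expected range ("exact"), 0.0 outside the
# allowable range ("none", event removed), linear ramp in between
# ("partial"). The ramp shape is an assumption, not the paper's tables.

def degree_of_match(value, expected, allowable):
    """expected and allowable are (low, high) pairs, with the allowable
    range containing the expected range."""
    e_lo, e_hi = expected
    a_lo, a_hi = allowable
    if e_lo <= value <= e_hi:
        return 1.0                              # exact: member of event set
    if value < a_lo or value > a_hi:
        return 0.0                              # none: remove event
    if value < e_lo:                            # partial: plausible member
        return (value - a_lo) / (e_lo - a_lo)
    return (a_hi - value) / (a_hi - e_hi)
```

For instance, with an expected altitude band of 40-50K and an allowable band extending down to 25K, a reading of 45K matches exactly, 32.5K matches partially (0.5), and 20K removes the event from consideration.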
Utilizing kinematic information of the measurement, the linear functions return a value which falls into one of three classes: exact, partial, and none. The three results can be interpreted as follows:

Exact: The measurement falls within the expected range of values; the measurement is a member of the event-parameter set.

Partial: The measurement falls within the allowable range of values; the measurement is a plausible member of the event-parameter set.

None: The measurement falls outside of the allowable range of values; the event is removed from consideration.

TABLE 3. SAMPLE KINEMATIC PARAMETERS FOR EVENT 1 OF BOMBER ATTACK PLAN (interpretation of matcher results: for each of Alt, Vel, and Imm, an exact range, a partial range with a piecewise-linear degree-of-match function, and a none range that removes the event; original layout not fully recoverable)

Knowledge about emission parameters represents the decision maker's heuristics used to support or refute a plan hypothesis (Table 4). This differs from the more computationally intensive kinematic match. A series of actions is initiated when an emission measurement satisfies certain preconditions; e.g., a message may be issued to increase or decrease the belief in the plan, or even remove a plan from consideration. For example, if a bomber ID signal is observed, a message will be posted to remove all plans that cannot be held by a bomber. Values for the emissions parameters represent strong evidence to support or refute a hypothesized plan.

TABLE 4.
EMISSIONS PARAMETERS FOR EVENT 1 OF BOMBER ATTACK PLAN (for each emissions parameter, namely Com, Rad, ID, ECM, and W-L, the actions taken when the signal is present or not present, such as decreasing the belief in the plan, removing the plan, or invoking one of the rules below; original layout not fully recoverable)

RULES:
(1) If ID equals Bomber then remove all non-Bomber plans from the Expectation board and decrease the belief in the plan. If ID does not equal Bomber then remove all non-ID plans.
(2) If W-L equals yes then assert PREMATURE MISSILE LAUNCH.
(3) If RAD equals TAR then assert PREMATURE RADAR ON.

The second major component of the plan structures is captured by the DFA representation. The search algorithm of PM utilizes a description of a state transition diagram, which is the DFA stored in long term memory with each plan (Figure 3). The DFA of each plan specifies the set of legal sequences of events which must be performed in order to complete the plan. The state transition diagram indicates the various states a platform may be in and the significant events that are needed to transition to the next platform state. The diagram is very dependent upon the domain expert's characterization of a typical platform mission.

FIGURE 3. STATE TRANSITION DIAGRAM FOR THE EVENTS IN BOMBER ATTACK PLAN (the expected transition sequence checks for weapon-launch (W-L) in the final event for a successful ATTACK; an optional transition sequence checks for W-L as above while bypassing the low-flight event; graphical content not recoverable)

In summary, LTM contains a set of plans which are represented as hierarchies and as deterministic finite automata. The hierarchies allow the portrayal of the tier structure of plans, events, and parameters. The DFAs are used to specify the sequences of events in which plans may unfold. The combination of these two structures is well-suited for representing knowledge about plans.

B.
Procedural Memory

PM is the mechanism used to hypothesize the plan and sequence of events carried out by the platform given the declarative knowledge in LTM (plan hierarchies and DFAs), contextual information, and the history of observations of platform behavior. This discussion focuses upon the search and match strategies in PM used to estimate the current event within a hypothesized plan and the state of the platform within the DFA. The goal of the search is to identify the current state of the platform. This is achieved through the recognition of a sequence of events which have been observed through measurements of the platform's behavior. This sequence of events is a partial instantiation of a hypothesized plan. When there is strong evidence that an event has occurred, the event is entered into the partial plan instantiation. Uninformed strategies such as depth-first and breadth-first search would be inefficient in this application, since these strategies lack a cost function and thus allow for a large search space. The declarative knowledge about plans resident in LTM provides the capability to instead implement an informed search such as best-first. The best-first algorithm allows us to utilize knowledge about the problem domain, an estimate of the state of the platform, a probable goal of the platform, and the information gathered by the search to determine the plan held by the platform (Pearl, 1984). Since the state of a platform depends upon the event which is currently being performed by the platform, the searcher needs a heuristic evaluation function for determining the current event of the platform. This is the function of the event matcher. The event matcher consists of two subordinate matchers: the constraint matcher and the parameter matcher. The constraint matcher checks if there are any violations of the event constraints; if the constraints are violated, then the event being matched cannot be occurring.
Violations of the constraints narrow the search for a hypothesized plan and the event. If the event constraints are not violated, the parameter matcher is invoked to compare the current measurement of the environment with the allowable values for each parameter slot of the event. The parameter matcher will return a degree of membership of the observed measurements in the set of allowable values for each parameter. These degrees of membership are combined to obtain the overall belief in the occurrence of the event. The event weights supplied by the heuristic evaluation function are used to detect a change in the platform state, i.e., a transition from one event to the next. These weights are stored in a linked tree structure for each plan (Figure 4). For backtracking purposes, attached to each node of the tree is information about the type of decision made by the searcher. At each observation the platform state is determined on the basis of these weights. The search path from the current estimate of the state to the initial guess of the platform state represents the partial instantiation of a plan.

FIGURE 4. BEST-FIRST SEARCH (legend: platform states as nodes, events as edges between states, the best estimate of the current state, and alternate possibilities for that estimate; graphical content not recoverable)

PM uses STM as a means of accessing the hierarchical plan structures and the DFAs stored in LTM. We would not want to access LTM directly, since we are only maintaining a local search of the plan structures. It is for this reason that the knowledge about the current state of the hypothesized plans is maintained in STM. This is discussed in the next section.

C. Short Term Memory

STM is a blackboard used as a workspace for the intermediate results of the PRM process. As can be seen from Figure 5, there are five partitions of the blackboard workspace within STM.
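The event-weighting and best-first expansion described in this section can be sketched as follows. The averaging used to combine degrees of membership is an assumption (the paper does not fix a combination rule here), and all event names and values are hypothetical.

```python
# Hedged sketch of the informed (best-first) ordering of candidate events:
# each candidate event is scored by combining its parameters' degrees of
# membership, and higher-scoring events are expanded first.

import heapq

def event_score(doms):
    """Combine per-parameter degrees of membership into one event weight.
    Any parameter with no match (None) removes the event from consideration."""
    if any(d is None for d in doms):
        return None
    return sum(doms) / len(doms)   # simple average; an assumed combination rule

def best_first(candidates):
    """candidates: {event_name: [dom, ...]}. Return surviving events best-first."""
    heap = []
    for name, doms in candidates.items():
        score = event_score(doms)
        if score is not None:
            heapq.heappush(heap, (-score, name))   # max-heap via negation
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]

print(best_first({"E1": [1.0, 0.8], "E2": [0.5, 0.4], "E3": [1.0, None]}))
# ['E1', 'E2']  (E3 removed: one parameter fell outside its allowable range)
```

In the full system these scores would feed the per-plan weight tree used for backtracking; here they only order the candidates.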
FIGURE 5. SHORT TERM MEMORY (the five blackboard partitions, including the Goal Hypotheses board; graphical content not recoverable)

The Platform Observation Board integrates the measurement of the observed platform behavior with contextual information. This is a mechanism to supplement the information observed by sensors to aid in the recognition of events. As the time into the mission increases, the emphasis on the observed platform behavior as a measure of an event increases, while the emphasis on contextual information decreases. The best fit events are captured on the Plan Instantiation Board. For a given sequence of observations, the event closest to representing the platform's actions is selected. Conceptually, segments of various plans are being assembled to represent the platform's actions. The Plan Hypotheses (PH) Board consists of a set of plans which the search and match processes indicate are likely to be held by the platform. Each hypothesized plan is a template retrieved from LTM. The system strives to maintain at least one plan on the board at all times. The plans on the PH board have the following characteristics: 1. Each hypothesized plan is assigned a weight which represents the accumulated evidence that the template fits the observations; 2. The events of each hypothesized plan are tagged as either "grounded", "current", or "unmarked". A grounded event was observed, matched at least partially with the measurement of the platform behavior, and is no longer occurring. The current event best matches the current observations. An unmarked event has just gotten underway or has yet to be observed. A hypothesized plan contains the set of possible next events which describe future platform behavior and indicate a possible goal. The predicted events consist of a sequence of events or a path determined by the DFA. The sequence of grounded events attached to a plan on the PH Board is a path through the DFA which was supported by measurements: there was sufficient evidence to state that these events occurred.
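The idea that a grounded-event sequence is a path through the plan's DFA can be sketched with a small transition table. The states, events, and transitions below are hypothetical, loosely modeled on the bomber-attack example with its optional low-flight bypass.

```python
# Minimal sketch (assumed, not the paper's representation) of a plan DFA:
# states are platform states; transitions are labeled by the event that
# moves the platform to its next state.

ATTACK_DFA = {
    "start":    {"E1": "ingress"},
    "ingress":  {"E2": "descend", "E3": "attack"},  # E3 here: low-flight bypass
    "descend":  {"E3": "attack"},
    "attack":   {"W-L": "egress"},                  # weapon-launch completes attack
}

def accepts(dfa, events, start="start", final="egress"):
    """Check whether an observed event sequence is a legal unfolding of the plan."""
    state = start
    for e in events:
        if e not in dfa.get(state, {}):
            return False                # no such transition: not a legal path
        state = dfa[state][e]
    return state == final

print(accepts(ATTACK_DFA, ["E1", "E2", "E3", "W-L"]))  # True
print(accepts(ATTACK_DFA, ["E1", "E3", "W-L"]))        # True (bypass path)
print(accepts(ATTACK_DFA, ["E2", "E1"]))               # False
```

A prefix of an accepting path corresponds to a partial plan instantiation; the transitions available from the current state are the predicted next events.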
Associated with each plan is a degree of belief, i.e., a measure of how well the hypothesized plan fits the plan templates in LTM. The current event is the one best supported by evidence from the most recent observations. The Goal Hypotheses Board contains the goals of each of the hypothesized plans, the threat level associated with the goal, and the degree of belief that the platform holds this goal. The threat level is an indication of the importance of maintaining a plan. For example, a plan and goal may have low degrees of belief, yet their high threat capability will merit their maintenance on the board. Given a set of available focusing heuristics (Carver, Lesser, and McCue, 1984), the Best Hypotheses (BH) Board maintains a history of the most likely hypothesized plans and goals for the platform. The plan and goal hypotheses held on this board are the output of PRM. To summarize, STM supplies a cache of memory for the intermediate results of the search and match processes, a workspace for the blackboards, and control information essential to the PRM process (described in the next section).

IV PRM PROCESS DESCRIPTION

The current PRM is being implemented in a frame-based blackboard architecture. In this representation, STM is a blackboard which consists of several contributing blackboards (Figure 6), each of which is associated with various experts which make up the PRM. These experts consist of a STM Manager (i.e., the executive), a Plan Hypotheses (PH) Manager, a Plan Expert, an Event Expert, and a Parameter Expert.

FIGURE 6. STM BLACKBOARD HIERARCHY

The STM Manager will oversee the operation of the PRM process on the STM blackboard. It functions as an executive directing top-level control of the PRM. STM thus is a workspace for the various knowledge sources (experts) which conduct the various facets of plan recognition.
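A plan-hypothesis record of the kind these experts operate on, with its accumulated-evidence weight and grounded/current/unmarked event tags, might look like the following hypothetical sketch.

```python
# Illustrative (hypothetical) shape of an entry on the Plan Hypotheses board:
# an accumulated-evidence weight plus a tag per event.

from dataclasses import dataclass, field

@dataclass
class PlanHypothesis:
    name: str
    weight: float = 0.0                       # accumulated evidence for the template
    tags: dict = field(default_factory=dict)  # event -> grounded | current | unmarked

    def advance(self, next_event, evidence):
        """Ground the current event and promote `next_event` to current."""
        for e, tag in self.tags.items():
            if tag == "current":
                self.tags[e] = "grounded"     # observed, matched, no longer occurring
        self.tags[next_event] = "current"
        self.weight += evidence

h = PlanHypothesis("bomber-attack", tags={"E1": "current", "E2": "unmarked"})
h.advance("E2", evidence=0.9)
print(h.tags)    # {'E1': 'grounded', 'E2': 'current'}
print(h.weight)  # 0.9
```

The grounded events of such a record form the supported DFA path, while the unmarked events are the predicted continuation.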
The STM Manager is responsible for handling messages sent from the user (primarily at start-up time) and messages passed back from the PH Manager that may have been generated by either the PH Manager or one of the other experts at a different level in the hierarchy of the process. In response to these messages, the STM Manager has duties pertaining to initializing the system, getting observations, and system restart/recovery. When all processing is complete at the executive level, control is generally passed to the PH Manager. The Plan, Event, and Parameter Experts deposit and withdraw information from the PH workspace. On this workspace, the set of all possible plans, given the state of the system, is posted. The PH Manager is responsible for managing this workspace by passing messages to both the STM Manager and the Plan Expert. The PH Manager maintains the set of all possible plans that may be held by the platform given the current state of the system. The PH Manager receives messages that refer to modifying the current state of the PH Board. These messages may be from the Plan Expert, such as a "Remove this plan" message, or from the STM Manager, such as "Instantiate plans that correspond to the current observation." The PH Manager also sends messages to the STM Manager and to the Plan Expert. These messages may be "PH Board empty, take appropriate action" or "Match the current observation to this plan." Each plan on the PH board is a partial instantiation of some plan that exists in LTM. Each partially instantiated plan contains information referring to how well it corresponds to the observed platform behavior. Recall that a plan includes information regarding the platform's most likely current state and event, a list representing the history of observed (grounded) events, and the set of expected (extrapolated) states and events, all with respect to the plan in LTM.
The maintenance of the plan on the PH board is the responsibility of the Plan Expert. The Plan Expert receives messages concerning which plans to deal with and in what order, given the current state of the system. The Plan Expert invokes the knowledge about searching, i.e., which events to process for each plan and in what order. This knowledge is represented as the DFAs that reside in LTM. The Plan Expert sends a match message to the Event Expert to process prescribed events. As a result of this processing a DOM is assigned for each plan and a message to remove invalid plans may be sent. The Plan Expert only has knowledge about high-level plan maintenance, such as transitioning to the next event of each plan; the Event and Parameter Experts are the knowledge sources invoked by the Plan Expert to perform the detailed steps of the plan maintenance process. When the Event Expert receives the match message and the current observation from the Plan Expert, the event constraints are first checked. If an event's constraints are violated, the Event Expert passes a message to the Plan Expert to remove that event from consideration as the current event; otherwise, the Event Expert invokes the Parameter Expert to process each of the individual parameters corresponding to that event. The Parameter Expert does the actual matching of observed data against the parameter values that would be expected and are allowable. As a result of the matching phase, the degree to which a parameter matched is returned.

V STATUS AND FUTURE WORK

The PRM described above was developed and implemented on the Symbolics using the Flavors package. The plan library contains structures for nine threat scenarios gleaned from knowledge acquisition activities over a six-month period. The simulation testbed system can provide the scenarios and variations of them to PRM in terms of input files of kinematic and emissions parameters for the threat of interest.
Output to the user consists of a dynamic depiction of the emerging tactical scenario, as well as continual updates of the hypotheses being maintained and their associated belief measures. The PRM is currently a single-threat/multi-hypothesis model. Although it can maintain multiple hypotheses to explain the threat behavior, it is limited to an analysis of a single threat agent. Extensions to accommodate multiple threat agents are under investigation. Such agents could be pursuing multiple goals independently but more likely will be working in concert to effect a single goal. Effective management and pruning of the search space will become paramount in a multi-threat environment. A judicious mix of data-driven and goal-driven processing will need to be invoked. Emphasis must be placed on the key features and indicators that serve to discriminate among the different possible plans. Also under scrutiny are richer representations for the heuristic evaluation function and the handling of uncertain data and knowledge. Extensions to multi-valued logic, fuzzy schemata, and work on the theory of endorsements are potential candidates. Ongoing efforts in machine learning and reasoning by analogy could find suitable application here. Finally there is the issue of time-critical operation. In a fleet air defense setting, correct interpretation of threat actions and appropriate friendly force response formulation must occur in a matter of minutes or even seconds. In addition to opportune search strategies and the use of focusing, it will be necessary to exploit the parallelism inherent in a multi-threat/multi-hypothesis plan recognition environment. The issue of mapping the PRM architecture onto various parallel processing topologies has received considerable attention in ongoing work at NADC.

VI ACKNOWLEDGEMENTS

In the area of knowledge acquisition, Joe Alfano has made significant contributions to the PRM project.
Raymond Kirsch of LaSalle University has been instrumental in addressing the exploitation of parallelism in machine architectures for AI computing in general and the PRM in particular.

REFERENCES

[1] Anderson, J., The Architecture of Cognition, Harvard University Press: Cambridge, MA, 1983.
[2] Carver, N., V. Lesser, and D. McCue, "Focusing in Plan Recognition", The National Conference on Artificial Intelligence, AAAI-84, Austin, Texas, August 1984.
[3] Litman, D., and J. Allen, "A Plan Recognition Model for Subdialogues in Conversations", Technical Report 141, Department of Computer Science, University of Rochester, Rochester, N.Y., November 1984.
[4] Pearl, J., Heuristics: Intelligent Search Strategies for Computer Problem Solving, Addison-Wesley: Reading, MA, 1984.
[5] Schank, R. and R. Abelson, Scripts, Plans, Goals, and Understanding: An Inquiry into Human Knowledge Structures, Lawrence Erlbaum Associates, 1977.
[6] Schmidt, C., N. Sridharan, and J. Goodson, "The Plan Recognition Problem: An Intersection of Psychology and Artificial Intelligence", Artificial Intelligence, 11: 45-83, August 1978.
[7] Stenerson, R., "Integrating AI into the avionics engineering environment," IEEE Computer, February 1986.
[8] Wilensky, R., Planning and Understanding: A Computational Approach to Human Reasoning, Addison-Wesley, 1983.
KNOWLEDGE-BASED SIMULATION OF A GLASS ANNEALING PROCESS: AN AI APPLICATION IN THE GLASS INDUSTRY

Richard A. Herrod, Ph.D.
Jeff Rickel
Texas Instruments Incorporated
P.O. Box 225474 MS 439
Dallas, Texas 75265

ABSTRACT

This paper describes a knowledge-based simulation system for a glass annealing process. The long ovens, known as lehrs, in which annealing takes place are not well understood by their operators. In fact, only a few experts can predict the effects of a change in the lehr controls. Attempts to simulate the behavior of the lehr using conventional methods have not been successful due to the size and complexity of the lehr. Our knowledge-based approach is capable of both simulating the glass temperature curve in an annealing lehr and planning the necessary lehr control settings to achieve a desired curve. It consists of two cooperating expert systems, one rule-based and the other frame-based. The system also includes a high-bandwidth graphics display which allows operators to interactively test control-setting changes and ask for the control settings which meet desired specifications. A description of the domain, a history of the development, and details of the design are all presented, along with lessons learned from the experience.

I INTRODUCTION

The Texas Instruments (TI) Industrial Systems Division, located in Johnson City, Tennessee, is involved in the development of advanced industrial control products. Part of this thrust involves investigating the applicability of AI to a variety of industrial areas. One approach being used is to enter into development projects with key industries which are involved in bringing high technology solutions to bear on their businesses. This mutual interest resulted in an agreement between TI and Corning Glass Works to develop a knowledge-based system to address an important aspect of Corning's production operation.
Corning provided the process (domain) expertise and TI embedded this knowledge in an expert system.

II THE DOMAIN

Corning sees AI technology as a major factor in process control in the future. They want to start training people in the application of this technology to manufacturing problems. In choosing a problem, they looked for one that was representative of their many manufacturing operations, and one that would provide real economic benefits. They also felt that finding a cooperative and enthusiastic expert would raise the chances of the project's success. The application chosen was the capture of expertise in the process step of annealing glass to remove residual stresses that originate during the forming operation. In particular, the project would focus on the annealing of television picture tubes at Corning's State College, Pennsylvania, plant. Since upsets at the State College plant were costing the company money, they had a strong interest in finding a solution. This process is also used in Corning plants in Mexico and South Korea, and by all of the Consumer and Lighting Products plants; therefore, there is a potential for wide use of the system. Improvements to the process typically reduce the product losses of the plant and are immediately translated into increased operating margins. Finally, Corning's experts in designing and trouble-shooting the lehrs in which annealing takes place are nearing retirement age, and capturing their knowledge is very important. Thus, this problem area met all of Corning's criteria for problem selection. The annealing process takes place in a very long oven known as a lehr. The lehr provides a controlled heat treatment cycle that softens the glass sufficiently to remove stresses built into the glass during forming. These residual stresses are the result of rapid cooling of the outer part of the glass when it comes in contact with the forming mold.
The annealing temperature cycle must be cool enough that the glass will not lose its strength and change shape. Once the stresses have been removed, the glass must be cooled slowly to prevent regenerating them. The lehr which accomplishes all this at the State College plant is approximately 180 feet long. The temperature profile in the lehr is produced by hot gas generated by burners just inside the front end of the lehr and guided down the lehr through ducts below a steel mesh belt that carries the glass. Dampers located along the duct control the amount of hot gas introduced into the sections of the lehr. The hot gas is introduced into the lehr chamber by a series of openings designed to prevent the gasses from directly impinging on the glass. The hot gas is then recirculated through the burner system at the front of the lehr. The temperature of the inlet gas stream is directly controlled by two thermocouples in the lehr, one near the front and one about 60 feet inside. Generally, product defects caused by other processing steps become noticeable when the glass leaves the lehr. At that point the glass has cooled, and any defects which give rise to stress concentrations will cause breakage. When this happens, an expert is usually called in to determine the cause of the problem. If he determines that the lehr is at fault, he makes adjustments to the firing and airflow systems until the annealing process is back "in tune". The expert is also frequently needed when a change in product characteristics necessitates adjustments to the lehr, or when the lehr is restarted after a period of inactivity. To achieve a desired temperature curve through the lehr, the expert has several controls he can modify.

From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved.
The thermocouples can be set just like a household thermostat; if the temperature at the probe drops below the set temperature, the temperature of the gas produced by the burners is increased until the temperature reaches the desired setting again. Dampers throughout the lehr control where the hot air is directed. By opening all the dampers, the heat is distributed fairly evenly through the lehr. By closing a certain damper, heat is blocked from all sections past that damper. Dampers can also be partially closed. Ports and louvers throughout the lehr act as air vents, letting hot air escape and thus lowering the temperature in the lehr. The difficult part is using all these controls to raise the temperature in some parts of the lehr while lowering it in others. Usually, an expert can successfully re-tune the process within two or three attempts. Corning wanted the project to focus on transferring the burden of assisting operators in adjusting lehr firing and airflow systems from its human experts to a computer system. A secondary objective of the project was to develop in-house expertise so that AI techniques and procedures could be applied to other Corning processes.

III A HISTORY OF THE DEVELOPMENT

The initial goal of the project was to develop a breakage diagnostic which would determine the cause of glass breakage in the lehr; however, after the first interview with the expert, it was obvious that his real expertise lay in his ability to predict how various adjustments to the control settings of the lehr would affect temperatures at various points within the lehr. Because of the immense size of the lehr and the complex interactions between the various controls, operators at the plant cannot predict the effect of control setting changes, nor do they know what controls to modify in order to produce a desired temperature curve through the lehr.
Corning had tried to simulate the behavior of the lehr using principles of thermodynamics and heat transfer, but the results could not be correlated with the actual lehr control settings. Thus, changes to the lehr required the expert, who had acquired rules of thumb and knowledge of cause and effect in the lehrs through his many years of working with them. Although the breakage diagnostic expert system was to be developed first, it was determined that the real payoff would be in a knowledge-based simulator which could determine the temperature curve through the lehr given the control settings. This simulator would be useful not only for ordinary lehr operators, but also for the expert himself. Each time the expert proposes a change in the lehr control settings, about twelve hours elapse before he can see the result of this action as annealing curve changes. Since it typically takes him two to three tries before he achieves the desired effect, the immediate feedback which the simulator could provide would result in significant time savings. The development philosophy was to get a prototype system up and running as quickly as possible for early evaluation. TI returned to Corning after one month to review the breakage diagnostic and the initial design of the simulator, and again one month later with the first working simulator program. Although this simulator prototype contained a number of deficiencies, it served as a catalyst for uncovering a vast wealth of lehr knowledge previously thought irrelevant by the expert. The entire system was reviewed at that time, with many misconceptions uncovered and a great deal of new knowledge acquired. Having something tangible to critique was enormously beneficial. The system was iteratively enlarged and refined over the next few months. Each review resulted in refinement of the knowledge and the user interface.
Verification of the simulator's accuracy was carried out by comparing its predicted temperature profiles with profiles measured by a thermocouple that was sent through the lehr. In general, the simulator's results were within the repeatability of the measurements. After about four months, the system was demonstrated to Corning process engineers, and, soon after, to Corning executives. It was favorably received, with the consensus being that it would be useful to have at the plants. About halfway through the project, the development of the diagnostic expert system was frozen and the range of the simulation expert system was expanded to include a planning component. While the simulator allows a user to estimate the effects of a control-setting change on the annealing curve, the planner allows a user to input characteristics of a desired curve and receive the necessary control settings. After one month, the planner prototype was demonstrated to the lehr expert. Several further refinements later, the initial prototype was determined to be inadequate and its knowledge was encoded in a new version which more accurately models the expert's thought process. This new version went through several refinements before finally being packaged up with the simulator to form the Lehr Simulation System, which has been installed in one of Corning's plants for evaluation. With this new system, an operator can predict the effects on the temperature profile when the control settings are changed. He can use the planner to determine which control settings will provide the desired annealing parameters. The operator can then use the simulator to modify the temperature profile for other processing concerns. For example, if it is not possible to meet the desired annealing parameters, the operator can use the simulator to decide what trade-offs can be made to ensure an adequate temperature profile.
IV THE SIMULATOR

Since the nature of the simulator project did not neatly fit into any of the usual expert systems paradigms, TI and Corning agreed to develop the system from scratch. Corning operates a number of VAX computers, so it was decided to develop the initial prototype of the Simulator on a VAX 11/780 using a public domain version of Lisp (NIL) available from MIT. The system must provide graphic output to the lehr operators, so Tektronix 4107 terminals were chosen as the output device. To encourage modularity, the program relies heavily on Flavors, an object-oriented programming language embedded in NIL. Prototyping the systems would have been easier and quicker using a Lisp Machine like the TI Explorer, but the availability of the VAX for both groups dictated the choice. In deciding what control-setting changes to make, the expert does a mental simulation of the lehr. He can remember certain temperature curves and their associated control settings, so he uses this as a starting point. He also has knowledge of the general effect a particular control-setting change will have and which sections of the lehr will be affected. This approach directly led us to our simulation strategy and knowledge representation. To encode the knowledge, we created structures much like frames that can represent both the magnitude and range of effect for each relevant lehr control. There is one such cause-and-effect structure for each damper, port, and louver. Each structure has slots which associate the various valid settings for that control with their corresponding effect (magnitude and range) on the slope of the temperature curve. Other knowledge is embedded in the calculation routines. For instance, if a damper is only X% open, then the dampers after it cannot behave as if they were more than X% open.
Thermocouples are accounted for by the knowledge that they represent points through which the temperature curve must pass, since any change in the temperature at these points causes the burners to compensate in order to bring the temperature back in line. Finally, the speed of the conveyor belt is used to determine how closely the glass temperature follows that of the air temperature. To determine the air temperature curve through the lehr, we start with a default curve and default settings, just as the expert does. We then multiplicatively combine the effects of all control-setting deviations from the defaults with regard to the sections of the lehr they affect. The default curve and the resulting curve are expressed in terms of a series of slopes. By now using the temperature at which the air enters the lehr (a given), the temperature at each thermocouple, and knowledge of what the peak temperature will be and where it will occur (based on the control settings), we can propagate these slopes to arrive at the estimated air temperature curve through the lehr. Based on sessions with the expert, it was determined that the glass temperature curve basically follows the air temperature curve with a slight lagging effect. The slower the belt speed, the less the glass temperature lags. However, to further complicate things, the responsiveness of the glass to changes in the air temperature also depends on the section of the lehr through which the glass is passing. Our knowledge representation scheme was expanded to allow the lehr to be sectioned off and the responsiveness of each of these sections recorded. To maintain modularity in the system, the lehr is represented as a flavor object with a large number of instance variables. These instance variables record physical characteristics of the lehr as well as the lehr's associated cause-and-effect structures.
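The slope-based simulation just described can be sketched as follows. This is a hedged sketch in Python rather than the system's NIL/Flavors code; the default slopes, controls, and effect magnitudes are invented for illustration.

```python
# Sketch of the simulator's core idea: start from default slopes per lehr
# section, multiplicatively combine the effect of each control-setting
# deviation over the sections it affects, then propagate slopes from a
# known inlet temperature to get the estimated curve.

DEFAULT_SLOPES = [5.0, 2.0, -1.0, -3.0]          # degrees per section, illustrative

# Cause-and-effect "frames": (control, setting) -> (magnitude, affected sections)
EFFECTS = {
    ("damper-2", "half-closed"): (0.5, [2, 3]),  # halves the slope past damper 2
    ("port-1", "open"):          (0.8, [1]),     # venting cools section 1
}

def simulate(settings, inlet_temp):
    slopes = list(DEFAULT_SLOPES)
    for control in settings:
        magnitude, sections = EFFECTS[control]
        for s in sections:
            slopes[s] *= magnitude               # multiplicative combination
    temps, t = [inlet_temp], inlet_temp
    for slope in slopes:                         # propagate slopes into a curve
        t += slope
        temps.append(t)
    return temps

print(simulate([("damper-2", "half-closed")], inlet_temp=100.0))
# [100.0, 105.0, 107.0, 106.5, 105.0]
```

The real system additionally pins the curve at each thermocouple and at the predicted peak; this sketch only shows the slope combination and propagation.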
It turns out that the particular cause-and-effect knowledge structures remain constant within various classes of lehrs. Thus, by simply defining the physical features of a new lehr, the system is capable of simulating that lehr, even though its physical structure may differ from previous lehrs. This fact was verified when we obtained temperature curve data for a lehr for which we originally had no data. The curves estimated by the simulator matched quite closely those actually measured at the plant.
V THE PLANNER
While the simulator is able to estimate the temperature curve based on the control settings, the planner takes as input the desired curve parameters and produces the control settings which most closely achieve those parameters. Again, the expert's approach to the problem provided a model for the system. The expert has a sort of bag of tricks he uses to achieve a certain response in the lehr. For instance, if he wants to lower the cooling rate in some section, he may raise the back thermocouple temperature. Surprisingly, there are also cases in which lowering the back thermocouple temperature may lower the cooling rate. He therefore uses his bag of tricks by pulling out of it the particular trick which applies to the current situation. By iteratively using these tricks to get closer to his goal, he eventually determines the necessary control settings.
The planner emulates this style of reasoning with forward-chaining rules. Given a goal of, say, lowering the cooling rate, it begins trying rules which are known to accomplish this goal. It tries a rule by asking the simulator what the effect of the proposed change will be on the current temperature curve. If the change is in the right direction, it is made, resulting in a new temperature curve. If the change is in the wrong direction, the idea is abandoned and the next applicable rule is tried.
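The planner's generate-and-test cycle can be sketched as below. This is a minimal illustration under assumed interfaces (a numeric goal parameter, a simulator callback, and rules as functions from state to proposed state); conflict resolution is by trying rules in priority order, as in the paper.

```python
# Sketch of the planner loop: try rules in order, ask the simulator
# what each proposed change does, keep it only if it moves the measured
# parameter toward the goal. All interfaces here are assumptions.
def plan(goal_value, state, rules, simulate, measure, tol=1e-6):
    """Iteratively apply rules until the measured parameter reaches goal."""
    current = measure(simulate(state))
    progress = True
    while abs(current - goal_value) > tol and progress:
        progress = False
        for rule in rules:                    # most promising rules first
            proposed = rule(state)
            value = measure(simulate(proposed))
            if abs(value - goal_value) < abs(current - goal_value):
                state, current = proposed, value   # right direction: keep it
                progress = True
                break                              # restart from the top rule
            # wrong direction: abandon and try the next applicable rule
    return state
```

When no rule improves matters the loop terminates, mirroring the point at which the expert would change strategy.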
Thus, through cooperation, the simulator program and the planner program together arrive at the desired control settings.
Forward chaining was chosen (as opposed to backward chaining) because the rules are not used for logical deduction. Instead, they guide the system through the search space of possible control-setting combinations. Each rule consists of a context in which the rule is applicable (e.g., raising the hold time), a set of actions to perform (e.g., changing a control setting or creating a subgoal), and conditions for success. The actions transform the current state of the control settings into some new state. The new state is then evaluated against the success conditions. If these conditions are met, the new state becomes the current state; otherwise, we proceed to the next applicable rule. Conflict resolution is handled through rule ordering; this allows the system to try first the changes which are most likely to succeed.
Because most rules are very similar, a small, extensible rule language is provided to allow non-programmers to modify the knowledge base. This rule language does not, however, limit the flexibility of the rules; although all current rules are expressed in the language, new rules can be arbitrary Lisp functions. The rule language supports all the power that was found necessary in the current rule set, including forward chaining, the creation of subgoals, the use of Lisp predicates, constraint-posting, the setting of multiple controls, and more. The design of this rule language was driven by the fact that the knowledge base will be maintained by people who are unfamiliar with Lisp.
Besides the separation of the planner's knowledge base and inference engine, the planner includes one more level of modularity: the strategy section. In talking with the expert, we found that achieving some temperature curve parameters is more important than meeting others.
Also, some requirements can be relaxed in order to achieve others. The strategy section houses this knowledge.
The strategy section provides the goals which the inference engine and knowledge base will try to meet. It can also post constraints in order to protect previously achieved goals. These constraints act as implicit conditions for success for all rules. For instance, the strategy section might direct the knowledge base to meet some cooling rate specification without raising the temperature at which the glass exits the lehr by more than 5%. Based on the results of trying to meet this goal, it can then modify its strategy and send the next appropriate goal. Since the specification of each goal (and thus the whole strategy) is fairly simple, we hope that this strategy section will prove to be just as modifiable as the knowledge base. The strategy section is a simple yet effective way of handling multiple interacting goals.
VI THE USER INTERFACE
Since the system will be used by people who are not familiar with computers, care was taken to develop an operator interface that is easy to use. In the simulator, particularly, much time was spent on giving the user as much relevant visual information as possible. Using a color Tektronix terminal, an interface was developed which shows a graph of both the air temperature curve and the glass temperature curve in the lehr (see Figure 1). By using the space bar and the backspace key, the user can position the cursor over a particular control setting, change its value, and watch the resulting change in the two curves.
As shown in Figure 1, the user can change the temperature settings of the front and back thermocouples (denoted tc3 and tc7), open or close dampers (an integer represents the state of a damper, with 1 indicating completely closed and 5 indicating completely open), change the speed of the conveyor belt, or move the position of the first open port or louver (indicated on the graph by P and L respectively). Also, a curve analysis component was added which can give detailed information and statistics about the glass curve.
Figure 1: A typical screen from the heuristic simulator showing the air temperature and glass temperature curves along with the various control settings.
The command line at the top of Figure 1 shows the user his available options. Pressing T allows the user to specify the parameters of a "target" curve; this is how the planner is invoked. A takes the user to an analysis screen, which shows a detailed analysis of the glass temperature curve. Typing C takes the user to a screen which displays both the current control settings and the previous control settings. This is customarily used to determine what changes the planner made. The P option is closely related to this; it displays the previous glass curve (as a dotted line) on top of the current temperature curves so the operator can graphically see the changes. Finally, Q terminates the session.
VII SYSTEM PERFORMANCE AND EVALUATION
System response is fairly prompt. The simulator takes about one second after being invoked to return the estimated temperature curves. The planner is more variable, since it must invoke the simulator for each proposed change. On most problems, it takes less than one minute to run. In extreme cases, where the desired curve is very different from the current one, it may run for several minutes.
Maintainability and modifiability were high priorities in this project. It is easy to add new lehrs to the system if they are fundamentally similar to existing ones; only the physical properties need be recorded. To add a lehr which is drastically different from existing ones, the knowledge structures of cause and effect will also have to be modified. The planner is also easy to expand. New rules can be added with no real programming skills, and the strategy section can also be easily changed to reflect new trade-off considerations. We think that the maintainability of this system is one of its strong points.
VIII LESSONS LEARNED
The field of expert systems is burgeoning. Reflecting on the experience gained from this project, some important lessons can be listed:
• One of the keys to success is rapid prototyping of the system. It is more important that the first attempt be done quickly than that it be complete. Experts are not necessarily aware of their thought processes, and it takes time for them to explain what they know. Having a prototype system to work with is a great help in uncovering the necessary knowledge. Each review of the system will elicit new information.
• The process of finding out what experts know is a very difficult one for knowledge engineers. They must be willing to ask experts to go over something time and again until they understand it. There are times of great discouragement in this process. Conversely, the experts must be understanding enough to be fully cooperative even if they are skeptical about what is going on.
• Strong management commitment to a project of this type is absolutely essential to its success. An expert's time is in short supply, and the project must have a high enough priority to have sufficient access to that expert.
• Early demonstration to potential users is important both in getting feedback on possible deficiencies and in gaining their support.
After all, what good is an expert system if no one uses it?
• Expert systems must continue to grow, so attention should be given to the people who will maintain the system. The method for adding new knowledge should match their capabilities. Don't expect process operators to become Lisp programmers.
IX SUMMARY
This project demonstrated that it is possible to build a model of a manufacturing process based on the knowledge of an expert. The resulting system, which relies on the expert's intuitive feel for heat transfer rates and control variable interactions, is able to predict the measured system response. Consequently, the lehr operator is now able to make better changes to the control variables and to minimize process upsets caused by making the wrong control change. Also, the plant now has a tool that allows it to plan changes to the lehr control variables for different products rather than waiting until a problem arises and the solution is more expensive.
ACKNOWLEDGEMENTS
The authors would like to thank the following people for their contributions: Rick Saenz, one of the original knowledge engineers; Jacky Combs, who helped with the user interface; G. Thomas Holmes and Joseph Hurley, who managed the project for Corning; and Bob Spadinger, our cooperative expert. We are also grateful to all the people who reviewed this paper.
REFERENCES
Adams and Williamson, "Annealing of Glass," Journal of the Franklin Institute, Nov. and Dec. 1920.
Burke, Glenn S., George J. Carrette, and Christopher R. Eliot, "NIL Reference Manual," MIT LCS Publications, Cambridge, Mass., January 1984.
Lillie, H. R., "Stress Release in Glass, A Phenomenon Involving Viscosity as a Variable with Time," J. Am. Ceram. Soc., 19 (1936) 45.
Moon, David, Richard M. Stallman, and Daniel Weinreb, "Objects, Message Passing, and Flavors," Lisp Machine Manual, Sixth Edition, MIT LCS Publications, Cambridge, Mass., June 1984, Chapter 21.
Rajagopalan, Raman, "Qualitative Modeling in the Turbojet Engine Domain," Proc. AAAI-84, Austin, TX, 1984, pp. 283-286.
Qualitative Simulation of Semiconductor Fabrication
John Mohammed
Schlumberger Palo Alto Research, 3340 Hillview Avenue, Palo Alto, CA 94304
ABSTRACT
As part of a larger effort aimed at providing symbolic, computer-aided tools for semiconductor fabrication experts, we have developed qualitative models of the operations performed during semiconductor manufacture. By qualitatively simulating a sequence of these models we generate a description of how a wafer is affected by the operations. This description encodes the entire history of processing for the wafer and causally relates the attributes that describe the structures on the wafer to the processing operations responsible for creating those structures. These causal relationships can be used to support many reasoning tasks in the semiconductor fabrication domain, including synthesis of new recipes and diagnosis of failures in operating fabrication lines.
I Introduction
Semiconductor fabrication is the long and complex process by which wafers of almost pure crystalline silicon are turned into integrated circuits. It is carried out according to a recipe, which is a linear sequence of parameterised operations that defines how to create devices belonging to a particular technological family such as Bipolar, NMOS or CMOS. The work described in this paper is part of a larger effort aimed at providing computer tools to facilitate diagnosis and the design of process recipes. In this paper we focus on the development of qualitative models which are used to reason symbolically about the fabrication process.
The scenario we envision is shown in Figure 1. The "generic knowledge-base" would contain models of the processing operations used in fabrication, such as "etching" and "oxidation." It would also include models of the electronic behaviour of the devices being fabricated, and models of the manufacturing equipment used.
A suite of symbolic reasoning tools would use these models to help the process designer create a recipe for a new process. The result of this design process would be a "recipe-specific knowledge-base" containing all the knowledge gained about the recipe and about the fabrication process it represents. Computer tools utilizing both the general knowledge and the recipe-specific knowledge would aid the production engineer in his tasks of improving the yield of the process and diagnosing failures.
Today, the primary computer tools available to process designers are numerical, incremental-time simulators (e.g. [Ho and Hansen]). These simulators use mathematical models of the physical and chemical processes employed in semiconductor fabrication to determine the results of applying a recipe to a prototypical wafer. Such simulators do provide a very important source of quantitative information that might otherwise be obtained only by performing costly experiments with real wafers.
Reid Simmons
MIT AI Laboratory, 545 Technology Square, Cambridge, MA 02139
Figure 1: General scenario for Semiconductor Fabrication CAD/CAM tools.
However, process designers and production engineers do much causal reasoning about the fabrication process for which numerical simulators provide little or no aid. This reasoning typically involves relating attributes of the wafer to operations of the recipe. For example, when the resistance of some layer on the wafer is found to be too high, an engineer might want to know which operations might have been responsible. Also, the process designer or engineer often needs only a qualitative answer to a partially specified question, such as "will the resistance of layer X increase if the temperature of step 5 is increased?".
In order to automate this type of reasoning, we have constructed qualitative, causal models for each type of fabrication operation. Each model describes how the structure of a wafer is affected by an operation.
We have chosen to model operations at a level that captures the process engineer's "naive" understanding of semiconductor manufacturing. This level is sufficient for many of the causal reasoning tasks an engineer would want to perform, yet it suppresses the unnecessary detail and mathematical sophistication that are required for accurate numerical simulation.
These models constitute a set of "building blocks" that can be strung together to form a recipe. Our simulator takes such a recipe as input and produces a wafer history. A wafer history describes how the structure of a prototypical wafer evolves over time as the fabrication processing proceeds. It also records causal dependencies that relate the structural attributes of the wafer to the operations responsible for generating those structures. This causal dependency information can be used to support diagnosis of failures on a running fab line and can help in the synthesis of new recipes.
From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved.
We discuss our models in the next section. In Section III we then briefly describe the language in which the models are written, and the qualitative simulator. Section IV describes the reasoning tasks we have performed using these models, which include qualitatively simulating the fabrication of several devices according to a recipe for a bipolar process and using the causal dependency information gained to support diagnosis. Finally, we present several research issues related to this work.
II The Models
A. Modeling the Wafer
In semiconductor manufacture, electronic devices are formed on the "upper" surface of a thin wafer of silicon. A device is a three-dimensional structure with a particular geometric configuration of regions of silicon (possibly with controlled amounts of impurities embedded), silicon compounds and metals.
This section describes how we model these structures by explicitly representing physical, topological and geometric attributes of the wafer.
The qualitative reasoning techniques that have been developed in AI apply mainly to reasoning about scalar quantities related by partial orders. In order to employ these techniques we have adopted a simplified representation of wafer structure. Fortunately, much of the reasoning that fabrication experts do requires that only two-dimensional vertical cross sections of the wafer structures be represented. Furthermore, the cross section can be usefully modelled as a series of vertical strips (see Figure 2). Many of the numerical simulation tools (e.g. SUPREM [Ho and Hansen]) simplify the wafer representation in the same way. Thus, we can describe wafer geometry as essentially a one-dimensional horizontal series of one-dimensional vertical layers.
We represent the horizontal axis of the cross-section by a series of horizontal regions, explicitly representing their lateral extents and lateral topology (i.e., left and right regions). Unlike many fabrication simulators, our simulator actually creates this lateral topology using a simple representation of photolithography masks. For each horizontal region, we represent the sequence of vertical layers one would encounter in going down through the wafer at a point in the interior of the horizontal region. Besides describing the vertical topology, a layer has attributes describing the material of the layer, any dopant and its concentration, and the thickness of the layer.
With the exception of mask-exposure, all the processing steps are described as "vertical processes": only their effect on the vertical geometry of the layers within each horizontal region is described. Our models ignore the effect of such operations on the transitions between adjacent horizontal regions.
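The wafer representation just described, a horizontal series of regions each carrying a top-down stack of layers with material, dopant, concentration and thickness attributes, might be sketched as follows. This is a Python stand-in for the system's frame objects, with invented materials and dimensions.

```python
# Illustrative sketch (our notation, not the paper's system) of the
# wafer model: a one-dimensional series of horizontal regions, each
# holding a vertical stack of layers, surface layer first.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Layer:
    material: str
    thickness: float
    dopant: Optional[str] = None     # at most one dopant per layer
    concentration: float = 0.0       # constant within the layer

@dataclass
class Region:
    extent: tuple                    # lateral extent (left, right)
    layers: list = field(default_factory=list)  # index 0 = surface

# A toy two-region cross-section: oxide over silicon on the left,
# bare silicon on the right.
wafer = [
    Region((0, 10), [Layer("SiO2", 0.5), Layer("Si", 500.0)]),
    Region((10, 20), [Layer("Si", 500.0)]),
]
```

The "vertical process" restriction then amounts to each operation updating the `layers` stacks independently, region by region.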
For example, during an etching step material is removed from the layer at the surface of the wafer in those areas of the wafer that are not protected by photoresist. Realistically, in some etching techniques the etchant can remove material in a lateral direction as well as vertically, and thus "encroach" upon an adjacent horizontal region. This lateral effect is not described in our model of the etching operation.
The one operation which cannot be specified as a "vertical process" is mask-exposure. However, mask-exposure is concerned exclusively with how masking affects lateral geometry on a region by region basis. Thus, modeling the wafer as a series of horizontal regions is sufficient to capture the effects of masking.
Figure 2: Representation of wafer cross-section as vertical strips. (a) Vertical cross section through a test structure (CUO). (b) Strips representing vertical topology within horizontal regions.
We have found this to be a reasonable approximation for many of the reasoning tasks we wish to undertake. For example, in Section IV we see how these models support some fairly detailed diagnostic reasoning.
B. Modeling the Operations
Structures are created on a wafer by the application of a recipe, which typically requires between 100 and 200 fabrication steps. However, all these steps are drawn from a comparatively small repertoire of standard parameterized operations. We have modelled a reasonably complete set of these operations. They can be grouped into categories as follows:
Addition of Material: these operations cover the upper surface of the wafer with a "blanket" layer of some material.
1. Chemical-Vapor-Deposition: deposits silicon compounds like silicon nitride and silicon dioxide;
2. Epitaxial-Growth: grows crystalline silicon;
3. Spin-On-Resist: coats the wafer with a positive or negative photoresist; and
4. Sputtering: deposits metal layers.
Removal of Material: these operations remove material from the upper surface of the wafer (selectively, based on material type).
1. Etch: removes materials other than photoresist (we do not distinguish between "wet" acid bath and "dry" plasma etch);
2. Photoresist Clean: removes all photoresist independent of "hardness"; and
3. Photoresist Develop: removes only "soft" photoresist.
Change of Chemical Properties: these operations modify the chemical composition of existing layers.
1. Mask-Expose: changes the "hardness" of a layer of photoresist by using light or X-ray radiation to break or form chemical bonds; the radiation is patterned with a mask; this is the primary method by which the surface of the wafer is differentiated laterally into distinct regions to form devices and wires; and
2. Oxidation: combines silicon and/or silicon compounds with oxygen to form silicon dioxide.
Change in Doping Profile: the controlled introduction of impurities into the silicon crystal lattice is the key to the formation of devices that have interesting electronic behaviour; these operations effect the presence and control the concentration of these impurities.
1. Diffusion: modifies the distribution of impurity ions by permitting them to diffuse through the crystal;
2. Ion Implantation: accelerates ions of an impurity electromagnetically towards the wafer to implant them to a depth determined by the energy imparted to the ions; and
3. Pre-Deposition: introduces impurity ions in very high concentrations at the surface of the wafer.
Each of these operations is parameterised. The parameters may be numeric or non-numeric.
Numeric parameters specify, for example, the temperature at which an operation should occur. An example of a non-numeric parameter is one that specifies the particular etchant used in an etch operation.
A recipe consists of instances of these operations, with particular values specified for the parameters. As an alternative to specifying numbers for numeric parameters, our system permits qualitative constraints on the values of parameters to be specified. For example, rather than stating that the duration of an etch step is twenty minutes, one can state that it is "long enough to completely remove the uppermost layer." This is especially useful during the design of a new recipe, when the designer has in mind what the effect of the operation should be, but has not yet determined what values for the parameters are necessary to achieve that effect.
We represent the effects of an operation as a conjunction of logical implications. The consequents of these implications are the changes that occur to the world, including the creation and destruction of objects. The antecedents of the implications describe the conditions under which these changes occur. As our model of the Etch operation is indicative of the nature and style of our models of processing operations, the rest of this section describes how we model that operation. Descriptions of all the models can be found in [Simmons and Mohammed].
Etching acts to remove material from the uppermost layers on the wafer, thereby reducing a layer's thickness or destroying it altogether. In the event that layers are completely consumed, the topology of the wafer changes and a previously buried layer becomes the new uppermost layer. The total amount of material removed depends on the duration of the operation and the particular etchant used, and may differ from one region of the wafer to the next due to the fact that etching occurs at different rates for different materials.
Thus, in order to determine the effect of an etch operation, our model must determine whether each layer in each horizontal region of the wafer is etched at all, and if so, how much is etched away. A layer is totally etched away if the duration of the operation is longer than the sum of two durations: (i) the time needed to etch through all layers above the layer; and (ii) the time needed to etch through the layer itself. We call this sum the Etch-Destroy-Time of the layer, and represent it as a function that depends on the layer's thickness, the Etch-Rate (another function) at which the etchant etches through the material of the layer, and the Etch-Destroy-Time of the layer above it. A layer is partially etched away if the duration of the operation is shorter than the Etch-Destroy-Time of the layer but longer than the Etch-Destroy-Time of the layer above it. The amount by which the thickness of the layer is reduced is determined from the Etch-Rate of the etchant for the material type of the layer and the difference between the duration of the operation and the Etch-Destroy-Time of the layer above it.
The models of the other processes use techniques similar to those described above, and most are fairly good approximations to the actual fabrication operations. The only real exceptions are the ion-implantation and diffusion operations that deal with the distribution of impurities within the wafer. The models we have written are complex yet not very faithful to reality. This is because of the difficulty of representing concentration profiles in a way that the simulator can reason about. We have chosen a very simple way to model impurity profiles: within each layer there can only be one dopant, and the concentration of that dopant is considered to be constant throughout the layer. Concentration profiles are thus modelled as simple combinations of step functions.
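The Etch-Destroy-Time recursion described above can be rendered concretely as follows. This is a numeric sketch of the qualitative model (the real system reasons symbolically about these quantities); the materials, rates and durations are invented.

```python
# Sketch of the etch model for one horizontal region: a layer's
# Etch-Destroy-Time is the destroy-time of the layer above it plus
# thickness / Etch-Rate. A layer is fully removed if the duration
# exceeds its destroy-time, and partially etched if the duration falls
# between the destroy-time of the layer above and its own.
def etch_region(layers, etch_rate, duration):
    """layers: list of (material, thickness), surface layer first.
    etch_rate: material -> thickness removed per unit time.
    Returns the layer stack after etching."""
    remaining = []
    destroy_time_above = 0.0
    for material, thickness in layers:
        rate = etch_rate.get(material, 0.0)
        if rate == 0.0:                      # etchant does not attack it;
            remaining.append((material, thickness))
            destroy_time_above = float("inf")   # layers below are shielded
            continue
        destroy_time = destroy_time_above + thickness / rate
        if duration >= destroy_time:         # totally etched away
            destroy_time_above = destroy_time
            continue
        if duration > destroy_time_above:    # partially etched
            thickness -= (duration - destroy_time_above) * rate
        remaining.append((material, thickness))
        destroy_time_above = destroy_time
    return remaining
```

For instance, etching a 2-unit oxide over 10 units of silicon for 3 time units at rates of 1 and 2 removes the oxide entirely and thins the silicon by 2 units.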
Two vertically adjacent layers that are made of the same basic material may be distinguished by the concentrations of impurities within them.
III Representation
The nature of our fabrication models has an impact on the features required in the language used to write them. The language must be capable of describing the changes that occur to attributes of the wafer, such as the thickness or existence of layers. These changes are often complex functions of the attributes of the wafer before the operation and the parameters of the operation. The simulator must be able to reason qualitatively about such functions. Finally, the language must make it possible to say that the same effects occur conditionally to all the layers of the wafer.
The language we use to model the operations is an extension of the discrete action languages that have traditionally been used in the planning domain. An action is "discrete" in the sense that it maps the state of the world at the instant before the action occurs to the state of the world at the instant after the action occurs, but says nothing about the state of the world while the action is occurring. This type of model has a rich history in AI [Fikes and Nilsson, Sacerdoti, Stefik]. The language and the qualitative simulator were originally developed for doing geologic interpretation and are described in detail in [Simmons 83]. Briefly, the language extends the traditional precondition/effects representation in that it allows:
• Effects that are expressed in terms that are relative to the input state (e.g. "the thickness of layer L decreases by 5");
• Effects that are universally quantified (e.g. "for all layers, the thickness decreases");
• Effects that are conditionalized (e.g. "if the layer's material is silicon, then the thickness decreases");
• Creation and destruction of objects.
Arithmetic functions can be used in the specification of the effects of an action.
The simulator can reason about the value of a particular function application either from the definition of the function (if it is supplied) or from constraints on the possible values for the function. For example, our models include the definition of the Etch-Destroy-Time function described above, and thus the simulator knows that the value of the function for a layer depends recursively on the value of the same function for the layer above that layer. From this definition and the constraint (provided by the process designer) that the duration of the Etch operation is greater than the Etch-Destroy-Time of a given layer, it can determine that the duration is longer than the Etch-Destroy-Time of all layers above the given layer.
Time is represented explicitly as point-like instants. Time intervals are defined by their end-points. One can assert ordinal relationships between time instants (>, <, =, ≥, ≤, ≠). The simulator maintains a consistent partial order and can deduce new relationships based on the transitivity of existing ones. Basically, these temporal relationships allow one to temporally order operations and to refer to the state of the world at different points in time. Use of a partial order permits one to store and reason about incomplete temporal information.
The "world model" is a set of objects. Like typical frame systems, the objects have a set of attributes and the object types form a simple inheritance hierarchy. The set of attributes for an object of a given type includes those of any superior type. Unlike typical frame systems, our world model includes a temporal dimension. First, objects have a temporal extent. Thus, we can talk about when an object was created or destroyed. Second, the "value" of an attribute is represented as a sequence of intervals called a history. This sequence encodes the complete history of how the attribute's value changes over time. The intervals in the histories are of two types.
"Dynamic" intervals indicate that some change was occurring to the attribute during that interval of time. "Quiescent" intervals indicate that no change was occurring and therefore the value of the attribute remained constant during the interval. The value associated with each interval is either a quantity (such as the thickness of a layer), another object (such as the neighbouring layer) or a set of objects. Quiescent intervals encode a non-monotonic persistence assumption about the world: all attributes of all objects are assumed to be quiescent (unchanging) during every time interval for which there is no evidence that their values are changing. (Our notion of history is derived from, but not identical to, that of [Hayes].)
A. Recipes and the Simulator
A recipe is implemented simply as a list of "events." The first event is an initialization step to create objects representing the initial wafer structure, the materials, such as NITRIC-ACID, to be used in the fabrication, and the various masks to be used in the recipe. Each subsequent event represents a particular manufacturing step.
The description of each event includes the type of the operation and a set of constraints. Typically, the constraints are assertions of qualitative relations between parameters of the operation and attributes of the wafer. In the absence of numerical information, these constraints enable the simulator to infer the changes the operation makes to the wafer.
The simulator works by instantiating each event in the list. The constraints and the definition of the model for the indicated type of operation enable the simulator to infer which changes occur to the world model. The simulator then manipulates the wafer history to reflect these changes. The end result is a set of objects whose attributes describe the complete history of how the object changed over time.
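The attribute histories described above, alternating quiescent and dynamic intervals, might be sketched as follows. This is an illustrative data layout of ours, not the system's actual representation; in particular, the system's instants are symbolic points in a partial order, whereas here they are plain numbers.

```python
# Sketch of an attribute history: a sequence of intervals, each either
# "dynamic" (value changing) or "quiescent" (value constant under the
# persistence assumption). Times and values are invented.
def value_at(history, t):
    """history: list of (start, end, kind, value) tuples. Returns the
    attribute's value at instant t, or None while it is changing."""
    for start, end, kind, value in history:
        if start <= t <= end:
            return value if kind == "quiescent" else None
    raise ValueError("instant outside the object's temporal extent")

# History of a layer's thickness across an etch step.
thickness_history = [
    (0, 3, "quiescent", 5.0),   # constant before the etch
    (3, 4, "dynamic", None),    # changing during the etch step
    (4, 9, "quiescent", 3.0),   # constant afterwards
]
```

Querying the history at any instant recovers the state of the world at that point in time, which is what lets the simulator build a complete wafer history.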
This wafer history, which includes causal dependencies recorded by the simulator, essentially forms a causal explanation in terms of the events in the recipe and their parameters.

IV Experimental Results

We have successfully simulated the fabrication of several typical devices according to a recipe for an oxide-isolated bipolar process. Our representation of the recipe involves forty-eight steps, including six masking steps. This includes all the essential steps in the recipe through the addition of metal contacts. The only steps omitted are those preparatory steps which do not directly affect the topology or geometry of the wafer structure, such as the gettering step, wafer cleaning steps, dehydration steps and photoresist baking steps.

We have implemented a capability that graphically displays the state of the wafer at each processing step in order to provide visual feedback concerning the progress of the simulation. Figure 3 is a sample of this output, showing an NPN bipolar transistor. In order to generate the coordinates needed to draw the display, the system determines symbolic expressions for the geometric attributes of the structures, such as the thickness of a layer. These expressions are obtained by tracing the dependencies through the wafer history and are given in terms of the parameters of the processing operations. They are then evaluated using approximate values for the parameters that are provided by the user.

The dependencies recorded by the simulator make it possible to determine which operations influenced an attribute of the wafer and how the value of the attribute depends functionally on the parameters of those operations. This ability to trace causal dependencies is an important component of several reasoning tasks, such as diagnosis of failures in processing on a production line and synthesis of new recipes. We describe our investigations into the role of this ability in the diagnostic task below.
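The kind of dependency tracing described here can be illustrated with a small sketch. The attribute, operation and parameter names below are hypothetical, and the authors' system records these dependencies during simulation rather than reading them from a static table:

```python
# Sketch of causal-dependency tracing (illustrative only): each recorded
# dependency says which operation produced an attribute's value and which
# other attributes it read.  Tracing backwards from a measured attribute
# yields every operation and input parameter that influenced it.

deps = {
    # attribute: (operation responsible, attributes it depended on)
    "cus_resistance":  ("Measure",   ["collector_depth", "sink_depth"]),
    "collector_depth": ("Diffusion", ["anneal_time"]),
    "sink_depth":      ("Diffusion", ["anneal_time"]),
    "anneal_time":     (None, []),   # leaf: an input parameter of the recipe
}

def trace(attr, found=None):
    """Collect all operations and input parameters upstream of `attr`."""
    if found is None:
        found = set()
    op, parents = deps[attr]
    if op is not None:
        found.add(op)
    else:
        found.add(attr)              # a raw input parameter
    for p in parents:
        trace(p, found)
    return found

print(sorted(trace("cus_resistance")))
# every upstream operation plus the raw input parameter 'anneal_time'
```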
In addition to the product circuits, a small number of devices called "test structures" are created on every wafer in order to facilitate quality control. The electronic properties of these devices are measured and when these measurements lie outside their expected ranges the wafers are rejected. The measurements then provide information helpful in diagnosing the problem.

Perturbations in the input parameters of processing operations form a useful fault model for many semiconductor manufacturing problems. Under this fault model, each of the input parameters that a measured attribute depends on gives rise to a diagnostic hypothesis for explaining an abnormality in the measurement: namely, that the input parameter has an appropriately abnormal value itself. The expressions we obtain that relate the measurable attributes of the wafer to the input parameters of the operations can be used to order these perturbation hypotheses according to the sensitivity of the attribute to changes in each input parameter. Further, by tracing causal paths forward from each suspect input parameter to the attributes that they affect, one can determine what other measurements on the wafer would constitute confirmatory or contradictory evidence for the hypothesis.

APPLICATIONS / 797

Figure 4 represents two test structures placed side by side, called Collector-Under-Silicon (CUS) and Collector-Under-Oxide (CUO). The CUS structure extends between the first two sinks from the left. As its name suggests, in this structure the collector is below the silicon layer formed during an Epitaxial Growth step. The CUO structure extends between the second and third sinks, with most of the buried collector under an isolation oxide layer. For both devices the electronic property measured is the resistance between the sinks.
In each case, the dominant influence on the measurement is the resistance of the buried collector, but the measurements also reflect the resistances of the sinks and (for CUS) the epitaxial layer.

We have implemented a program that uses the results of the simulator to identify all the input parameters that affect the resistance of each of these test structures and to obtain expressions for the functional dependence of the resistance on these parameters. Since most input parameters contribute to the value of several measurable attributes on more than one test structure, the program can combine information from both normal and abnormal measurements to prune the set of fault hypotheses. First, those hypotheses concerning input parameters that contribute to measurements that are within their normal ranges can be eliminated as candidates. Second, those input parameters that would have to be abnormal in one direction to explain one measurement, and simultaneously abnormal in the other direction to explain another abnormal measurement, can be eliminated. For example, the factors that control the resistance of the sinks affect both the CUS and CUO structures equally. Thus, a normal CUS measurement exonerates those factors as contributors to the abnormality of the CUO measurement.

An exception to this simple candidate elimination rule occurs when an input parameter plays a large role in determining the value of one measurement, but has only a negligible effect on the value of another one. For example, in the CUO structure, the narrow regions labelled "B" in Figure 4 undergo processing identical to that for the large part of the CUS structure labelled "A" in the figure. This means that every input parameter influencing the CUS resistance measurement also appears as a factor controlling the CUO measurement. By the simple rule discussed above, a normal CUO measurement would exonerate all the input parameters affecting the CUS measurement.
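The simple elimination rules just described can be sketched as follows. The parameter names and influence signs are invented for the example; the real system derives the influences from the symbolic expressions produced by the simulator rather than from a hand-written table:

```python
# Sketch of the two pruning rules (illustrative data, not the authors'
# code).  `influences` maps each input parameter to the sign of its
# effect on each measurement it feeds.

influences = {
    "sink_dose":  {"CUS": +1, "CUO": +1},   # affects both structures
    "epi_thick":  {"CUS": +1},              # CUS only
    "oxide_time": {"CUO": -1},              # CUO only
    "etch_rate":  {"CUS": +1, "CUO": -1},   # opposite directions
}

def prune(measurements):
    """measurements maps a test-structure name to 'normal', 'high' or
    'low'; returns the surviving single-fault candidates."""
    sign = {"high": +1, "low": -1}
    candidates = set(influences)
    # Rule 1: a parameter feeding any normal measurement is exonerated.
    for p, effect in influences.items():
        if any(measurements.get(m) == "normal" for m in effect):
            candidates.discard(p)
    # Rule 2: a single perturbation must explain every abnormal
    # measurement it feeds with a consistent direction.
    for p in list(candidates):
        dirs = {sign[s] * influences[p][m]
                for m, s in measurements.items()
                if s != "normal" and m in influences[p]}
        if len(dirs) > 1:
            candidates.discard(p)
    return candidates

print(prune({"CUS": "normal", "CUO": "high"}))   # only 'oxide_time' survives
```

With these toy influences, a normal CUS reading exonerates every parameter that feeds CUS, leaving only the CUO-specific parameter as a candidate; a parameter whose two effects would need opposite perturbations is removed by the second rule.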
However, in reality a normal CUO measurement can be consistent with an abnormal CUS measurement, because the factors governing the resistance of "B" have only a relatively small effect on the total CUO resistance.

This underlines the importance of considering the sensitivity of the functional dependencies, and hence the importance of being able to generate symbolic expressions that support quantitative analysis. Our system supports quantitative analysis in two ways. First, it can determine sensitivity by plugging numbers into these symbolic expressions directly. Second, we have implemented a capability to symbolically compute partial derivatives. With this capability, we can determine the relative magnitudes of the partial derivatives with respect to each of the input parameters of the symbolic expression for the measured attribute.

Figure 3: Graphical output of the simulator, showing an NPN bipolar transistor.

Figure 4: CUS and CUO structures, side by side.

V Future Work

This work suggests several research issues worth investigation. Currently, we relate the measurable electronic properties of the structures to the geometry of the structures by explicitly giving the system an expression for that dependence. The system then determines the relationship between the geometry of the structures and the input parameters to the processing operations from the wafer history. We are beginning to address the question of how to obtain expressions for the electronic properties of wafer structures by qualitative analysis of the structures using models of electronic behaviour.

The parameter-perturbation fault model mentioned above implicitly assumes that the problem does not involve gross deviations from the normal structure.
If the true problem involves omitting or repeating a processing step, or if the perturbation in the input parameter is very large, then the topology of the wafer structure may be sufficiently modified to make many of the causal pathways in the wafer history inapplicable. In most cases the wafer history for the normal topology will still be a good indicator of which processing steps to suspect. However, it would be advisable to qualitatively simulate the gross errors that are known to occur and "compile" an associative rule base of causal dependencies from the resulting wafer histories. An expert system (called PIES [Pan]) has already been written that performs the diagnostic task we discuss using associative rules written by production engineers. Currently, each new recipe requires the hand-generation of a new knowledge base, a tedious, time-consuming and error-prone process. The ability to automatically generate a knowledge base for PIES directly from the recipe and the models of the processing operations would greatly enhance its utility.

Finally, we have previously mentioned that qualitative simulation and dependency tracing have a role to play in CAD tools for process designers. The ability to qualitatively simulate semiconductor manufacturing permits the process designer to take a "top-down" approach to the design of new recipes. The designer can experiment with different sequences of operations, see the results of each sequence and concentrate on obtaining an appropriate sequence, without the necessity of specifying precise numerical values for all the input parameters.

Once the sequence of operations has been chosen and simulated, the causal dependency information can be used to help the designer choose appropriate values for the parameters. First, by determining all the attributes affected by the choice of a value for an input parameter, constraints on the range of values that are appropriate can be determined.
As we mentioned earlier, when simulating the recipe qualitatively the designer indicates the desired outcome for each processing step by giving qualitative constraints, such as "the duration of the etch operation is long enough to consume the uppermost layer." These constraints represent design goals to be satisfied by the choice of actual values for the parameters. Second, the expressions for the dependence of attributes on parameters might be used to determine initial estimates for the values of input parameters, by applying constraint propagation and/or numeric root-finding techniques.

VI Conclusion

We have developed qualitative models of the operations performed during semiconductor manufacture. We have represented a recipe for an oxide-isolated bipolar process by a sequence of these models and have simulated the fabrication of several typical devices. The simulation generates a wafer history that describes the complete history of processing for the wafer, from which our system can extract the causal relationships between the attributes that describe the structures on the wafer and the processing operations responsible for creating those structures. Further, the system can determine symbolic expressions for the functional dependence of these attributes on the parameters to the processing operations. Finally, we have investigated how this information can be used to support a diagnostic reasoning task.

We consider that these models and reasoning processes have an important role to play in computer-aided tools to support many kinds of reasoning tasks in the semiconductor manufacturing domain.

Acknowledgments

The authors thank J. Martin Tenenbaum, Randy Davis and Pat Hayes for reading early drafts of this paper and giving well-considered comments and suggestions.

References

[Fikes and Nilsson] R.E. Fikes and N.J. Nilsson. STRIPS: A new approach to the application of theorem proving to problem solving. Artificial Intelligence 2 (1971), pp. 189-208.
[Hayes] Patrick J. Hayes. The Second Naive Physics Manifesto. In J.R. Hobbs and R.C. Moore (eds.), Formal Theories of the Commonsense World, Ablex Publishing Corporation, Norwood, NJ, 1985.
[Ho and Hansen] Charles P. Ho and Stephen E. Hansen. SUPREM III - A Program for Integrated Circuit Process Modeling and Simulation. Technical Report No. SEL 83-001, Integrated Circuits Laboratory, Stanford University, 1983.
[Pan] Y.C. Pan and J.M. Tenenbaum. PIES: An Engineer's "Do It Yourself" Knowledge System for Interpretation of Parametric Test Data. In Proc. AAAI-86, Philadelphia, PA, August, 1986.
[Sacerdoti] E.D. Sacerdoti. A Structure for Plans and Behavior. Technical Note 109, AI Center, SRI International, Menlo Park, CA, 1975.
[Simmons 83] R.G. Simmons. Representing and Reasoning About Change in Geologic Interpretation. Technical Report 749, MIT AI Laboratory, Cambridge, MA, 1983.
[Simmons 86] R.G. Simmons. Commonsense Arithmetic Reasoning. In Proc. AAAI-86, Philadelphia, PA, August, 1986.
[Simmons and Mohammed] R.G. Simmons and J.L. Mohammed. Qualitative Modeling of Semiconductor Fabrication. Technical Report, Schlumberger Palo Alto Research, Palo Alto, CA, in preparation.
[Stefik] M.J. Stefik. An examination of a frame-structured representation system. In Proc. IJCAI-79, Tokyo, Japan, August, 1979, pp. 845-852.
A RULE-BASED SYSTEM FOR DOCUMENT UNDERSTANDING

Debashish Niyogi and Sargur N. Srihari
Department of Computer Science
State University of New York at Buffalo
Buffalo, NY 14260, USA

ABSTRACT

A rule-based system to make inferences about document images is introduced. Given a digitized document image, the system controls the analysis of the document, and identifies all the different printed regions in the document image. Logical "blocks" of information on the document image are interpreted and classified by this system, which then produces as output an editable description of the entire document. The system uses a goal-directed top-down approach, and utilizes a three-level rule hierarchy to implement its control strategy.

1. INTRODUCTION

Document understanding is a task that is analogous to speech understanding and image understanding. A document, more specifically a printed document, has printed text, line drawings, half-tone pictures, graphs, icons, etc. As a domain for serious research, document understanding has been gaining in importance over the years. Early interest in this field was primarily because of the need to store and transmit large volumes of information that are contained in documents. A need was felt to be able to code the information in the documents, and then to store/transmit this code such that the document image can then be reconstructed at another site. Several document encoding techniques have been developed for this purpose. Efforts have also been made to develop new techniques for analyzing individual components of a document with a view to deciding whether a given component is composed of text or graphics. Different segmentation techniques have been proposed, and used, to varying degrees of success.
Relevant work in this area includes the use of a non-linear run-length smoothing algorithm for the segmentation and classification of digitized printed documents into regions of text and images [Wong, Casey and Wahl, 1982]. A survey on document image analysis [Srihari, 1986] gives a comprehensive overview of known techniques in all aspects of document image analysis. A discussion of techniques used in analyzing pieces of letter-mail to locate the destination address can be found in [Srihari et al, 1985].

A more recent interest in document understanding is that of trying to design systems which embody knowledge about the basic structure of different kinds of documents and use this knowledge to analyze and identify the different components of a document. Such a system would tie together various aspects of document image analysis, like edge segmentation, filtering, etc., along with a high-level control structure that interprets the document image with the help of these image processing operations.¹

¹This work was supported in part by the United States Postal Service Contract lO42.N) WM.3349 and by the Xerox Webster Research Center.

A knowledge-based system that can direct the classification of the different entities on a document image and decide when an unambiguous classification of all the relevant entities has been achieved is one of the major goals in the field of document understanding.

Knowledge-based systems have been used in the past in various domains [Barr and Feigenbaum, 1982]. One of the first knowledge-based systems was MYCIN, which illustrated how knowledge gleaned from experts could be represented in the form of production rules that could be used in medical diagnosis [Buchanan and Shortliffe, 1984]. The use of knowledge-based systems for image analysis includes an expert system for low-level image segmentation of visual scenes [Nazif and Levine,
1984] and a rule-based system for aerial imagery [McKeown, Harvey and McDermott, 1985]. Rule-based strategies for image interpretation have been proposed in [Weymouth, Griffin, Hanson and Riseman, 1983], and a knowledge-based computer vision system has been described in [Levine, 1978]. The application of knowledge-based techniques to document image understanding has been discussed in [Kubota, Iwaki and Arakawa, 1984], which describes the application of a production system concept to an experimental document understanding system, and more recently in [Nagy, Seth and Stoddard, 1985], which proposes the use of X-Y trees for the representation of information about a document image.

We propose here a knowledge-based system that is organized as a production system with different levels of production rules that perform an analysis of a document image, and interpret and classify the various regions of printed matter on the document. The input to this system is a digitized document image, and the output is an editable description of the document.

This paper first describes (in Section 2) the overall architecture of the system, and then gives details about the various components of the system. The innards of the knowledge base, the control structure, the inputs to the system and the outputs of the system are explained. Section 3 describes techniques by which the system deals with uncertainty. In Section 4, some actual rules (in Prolog) used in the system are shown and explained. Section 5 describes actual results obtained so far using the rule-based system. A discussion on the applicability of rule-based systems to the document understanding problem follows in Section 6.

2. ARCHITECTURE OF THE SYSTEM

The knowledge-based system that we are developing is composed of two basic parts: the Knowledge Base and the Control Structure. The input to the system consists of the document image data.
The Control Structure (Inference Engine) uses the knowledge contained in the Knowledge Base along with its control strategy to make inferences about the document from the given image data. The output of the system is a descriptive classification of the various identifiable printed regions, or blocks, in the document image. Figure 1 shows a sample document image that is input to the system and the editable description that is output by the system. (As Figure 1 indicates, it is possible to "reconstruct" the document from its editable description; however, the current system provides no facilities for this, but rather concentrates on the problem of analyzing the original image to obtain the description.)

From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved.

The control flow in the overall document understanding system is as follows: the document is first digitized and the resulting digital image is segmented to obtain data about the various printed regions in the document. This data includes the intrinsic properties (e.g., shape, size, aspect ratio, etc.) of each of the identified regions, as well as the spatial relationships between the various identified regions in the document image. The control structure then uses the knowledge base to examine this data, and attempts to arrive at a consistent classification for each of the identified regions, or blocks.

The system consists of three levels of rules: Knowledge Rules, Control Rules and Strategy Rules. The knowledge rules contain knowledge about the intrinsic properties of the various regions of a document image, and also the spatial relationships between these regions. The control rules decide what knowledge rules are to be executed and in what order, and thus act as focus-of-attention mechanisms to guide the search towards a more efficient resolution. The strategy rules supervise the entire search and classification process, and determine what control rules are to be executed at any given time and in what order. Strategy rules also determine whether a consistent interpretation of the image has been obtained.

2.1. INPUTS TO THE SYSTEM

As mentioned above, the initial input to the knowledge-based system consists of data obtained from low-level image segmentation and filtering performed on the original document image. This data is composed of two parts: descriptions of the characteristics of each region of printed matter on the image, and a Relational Data Structure which represents the spatial relationships between each of these regions. The regions identified by the initial segmentation and filtering process may not, however, always represent the logical blocks that the knowledge-based system would attempt to classify. Thus, an intermediate step is necessary to combine/split up these regions as necessary so as to arrive at the logical blocks required by the rule-based system. This intermediate step is known as Region-Merging. This step essentially combines individual letters (connected components identified by the initial segmentation) into words, lines and finally paragraphs (logical "blocks"). The outputs of the region-merging process are the descriptions and relational data structures for the logical printed blocks in the document image.

2.2. THE KNOWLEDGE BASE

The knowledge base for this system consists of a set of rules that embody knowledge about the general characteristics of document images. These rules are expressed in terms of predicates in first-order predicate logic. The rules in the knowledge base are called Knowledge Rules. These rules define the general characteristics expected of the usual components of a document image, and the usual relationships between such components in the image. For example, in a document like the cover page of a journal article (shown in Figure 2 (a)), the various blocks of printed matter correspond to the journal banner, the title of the article, the names of the authors, the abstract, section headings, various paragraphs, perhaps one or more line drawings or figures or tables, footnotes, etc. (These different blocks are shown in Figure 2 (b)). Also, the usual relationships, e.g., the title being above the author names, the abstract being above the first paragraph of text, the footnotes being at the bottom of the page, etc., are generally true of such documents. Intrinsic properties, like the black-to-white pixel ratio for half-tone figures in the image being larger than the corresponding ratio for text, are also true in general for such blocks. From such known facts about these kinds of document images, rules are constructed that can be used by the inference engine to make inferences about the various identified "blocks" on the given document image.

2.3. THE CONTROL STRUCTURE

The control structure for the rule-based system consists of an inference engine which uses the knowledge base to make unambiguous inferences about the classification of various blocks in a given document image. The inference engine is also rule-based, and contains two levels of rules: Control Rules and Strategy Rules. These rules regulate the analysis of the document image, and decide when a consistent interpretation of the image has been obtained. The inference engine uses a top-down approach in arriving at its solution, since the solution space is not very large, and a lot of knowledge exists (in the knowledge base) about the domain. A backward-chaining process is used by the control structure.

If the data from the initial segmentation of the image is not sufficient for an unambiguous interpretation of the document image, then the system decides to obtain more data from the given image. Thus, any further image processing operations that are required are progressively invoked under the supervision of the inference engine. These operations could include further segmentation of the image, color filtering, text reading, etc.

A goal-driven (top-down) approach is used by this system, which uses a hypothesize-and-test strategy for arriving at its conclusions. Thus, the system makes hypotheses about different intermediate conclusions and chains backwards through the rules in order to test the hypotheses. In trying to satisfy a hypothesis, some other hypotheses may be generated which must first be tested before the original hypothesis can be considered to be justified. Thus, an entire set of backward-chaining processes are set up, and the system only reaches a satisfactory conclusion when all these processes have run to completion.

Fig. 2 (a) A sample document. Fig. 2 (b) Logical blocks identified in the sample document.

The rules comprising the inference engine are also coded in terms of predicates in first-order predicate logic. The control structure determines the order in which these rules are executed in order to test various conditions effectively. Control rules can
Pro- log, which is used to implement the rule-based system, has been used here as a sophisticated programming language, and various intricate features of this language have been put to use for moni- toring and controlling the different levels of production rules. 3. REPRESENTATION OF UNCERTAINTY The system has to deal with many situations where a com- bination of rules, rather than a single rule, lends credence to a particular hypothesis. Thus, the success of each of these rules adds evidence towards that hypothesis. If the total evidence obtained from the successful rules is sufficiently high, then the hypothesis is assumed to be true, and the next stage in the Jnalysis process can then be tackled with the assumption that the given hypothesis has been confirmed. To deal with such a scenario, each Knowledge Rule in the system is given a certain ‘confidence value” between 0 and 1. When the knowledge rules 1 or testing the characteristics of a certain type of block are exe- cu ted, the confidence values for all the rules that succeed are .IJded up. The sum thus obtained indicates the “certainty factor” tor the conclusion obtained from the control rule which invoked these knowledge rules. This certainty factor is used for the purpose of ordering the conclusions at any given stage so that the ,more likely conclusions can be examined in further detail before lthe less lihely ones. This has the effect of making the search pro- .ess more efficient, thus reducing execution time in the system. One possible inconvenience in this representation is that lvhen a set of new rules are added to the system, the relative sample document importance of the existing rules may change, and thus the “confidence value” for the existing rules may have to be altered. 1‘0 counter this problem, an even more robust scheme for the representation of uncertainty is currently under consideration. 
This scheme is based to a large extent on the representation used in the scheme proposed in [Wesley and Hanson, 1982]. In this representation, the evidence for a hypothesis is maintained in two parts: the Support (evidence in favor of the hypothesis) and the Plausibility (evidence against the hypothesis). The evidence is thus maintained as a pair [SUPT, PL] where SUPT = Support and PL = Plausibility. The confidence value associated with any rule can therefore be a positive or a negative number. Initially, the SUPT value for a control rule is 0, and the PL value is 1. When the knowledge rules are executed, the positive evidence is added to the SUPT value, and the negative evidence is subtracted from the PL value. The final analysis of the hypothesis is thus done based on both the evidence in favor of the hypothesis as well as the evidence against it. This scheme has the advantage of being able to distinguish between negative evidence and no evidence. Implementation of this uncertainty representation scheme is planned in the next stage of the system development process.

4. THE RULES

The rule-based system has been implemented in Prolog, which has built-in mechanisms for backward chaining through its rules that are expressed as predicates in first-order logic [Clocksin and Mellish, 1981]. As mentioned above, the production rules used in this system are of three kinds: Knowledge Rules, Control Rules, and Strategy Rules. Examples of each of these kinds of rules are given in Figures 3, 4 and 5. Brief explanations of these rules are given below. Detailed explanations of the Prolog code have been avoided for the sake of brevity, but can be obtained by referring to [Clocksin and Mellish, 1981] or any other book on Prolog.

    isblock('dest_address', B) :-
        color([K1, K2, K3]),
        bl_and_wh(K1, K2, K3),
        B is 0.25.

    bl_and_wh([H1|T1], [H2|T2], [H3|T3]) :-
        extract_b_w(T3).

    extract_b_w([H4|T4]) :-
        H4 == 0.

Fig.
3.1 Unary Knowledge Rules

    block('dest_address', A, X1) :-
        block('postage_stamp', B, X2),
        left_of(A, B),
        below(A, B),
        X1 is 0.3 * X2.

Fig. 3.2 Binary Knowledge Rule

    findblock(X, Y) :-
        findall(Z, isblock(X, Z), L),
        addup(L, Y).

    addup([], Y) :- Y = 0.
    addup([H|T], Y) :- addup(T, Z), Y is H + Z.

    findall(X, G, _) :-
        asserta(found(mark)), call(G),
        asserta(found(X)), fail.
    findall(_, _, L) :-
        collect_found([], M), !, L = M.

    collect_found(S, L) :-
        getnext(X), !, collect_found([X|S], L).
    collect_found(L, L).

    getnext(X) :- retract(found(X)), !, X \== mark.

Fig. 4 Control Rules

    begin :-
        pstr("What type of block would you like to identify ? "),
        read(X), X \== end_of_file, decide(X), begin.

    decide(X) :- datablocks(Z), try(X, Z).

    try(_, []) :- fail.
    try(X, [H|_]) :- tryone(X, H).
    try(X, [_|T]) :- try(X, T).

    tryone(X, Y) :- reconsult(Y), identify(X, Y).

    identify(X, Y) :-
        findblock(X, Z), nl,
        pstr("The "), write(X), pstr(" is : "), write(Y),
        pstr(" with certainty = "), write(Z), nl, nl.

Fig. 5 Strategy Rule

Figure 3.1 shows a set of unary knowledge rules, i.e., rules that test the intrinsic characteristics of a block. In this example, the rules state that IF the block is black-and-white in color (i.e., after extracting the 'hue', 'intensity' and 'saturation' of the block by means of color filtering, IF we find that the 'saturation' value is zero), THEN the hypothesis that the block is a destination address gets strengthened by the amount 0.25.

Figure 3.2 shows a binary knowledge rule, i.e., one that tests the spatial relationships between different blocks. In this example, the rule states that IF a block B has been found to be a postage stamp with a certainty factor X2, and if block A is to the left of and below block B, THEN the hypothesis that block A is a destination address gets strengthened by the amount 0.3 times X2.
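The combined effect of these two knowledge rules and the summing control rule can be mimicked in a few lines of Python. This is a sketch only; the rule names, block attributes and the particular confidence values merely echo the examples above and are not the system's actual code:

```python
# Sketch of certainty-factor accumulation: run every knowledge rule for
# a block type and sum the confidence values of the rules that succeed.
# All names and attributes here are illustrative.

def is_black_and_white(block):          # unary rule, confidence 0.25
    return 0.25 if block.get("saturation") == 0 else 0.0

def left_of_and_below_stamp(block):     # binary rule, confidence 0.3 * X2
    stamp = block.get("stamp_certainty", 0.0)
    if block.get("left_of_stamp") and block.get("below_stamp"):
        return 0.3 * stamp
    return 0.0

RULES = {"dest_address": [is_black_and_white, left_of_and_below_stamp]}

def findblock(block_type, block):
    """Certainty factor = sum of confidences of the succeeding rules."""
    return sum(rule(block) for rule in RULES[block_type])

block = {"saturation": 0, "left_of_stamp": True, "below_stamp": True,
         "stamp_certainty": 0.5}
print(findblock("dest_address", block))   # 0.25 + 0.3*0.5, i.e. about 0.4
```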
In this example, the rules state that to find the total certainty factor for the hypothesis that the given block is block type X, the system should execute all the ‘isblock’ rules for the bloch type 1 and then add up the ‘confidence’ values for all the rules that succeed. The sum thus obtained then gives the desired certainty factor. Figure 5 shows a strategy rule. In this example, the rule represents the most straighforward strategy, i.e., to find a bloch of type X from among all the blocks in the image, the system should apply the control rules given in Figure 4 to all the data blochs in the image. The order in which the blocks are tested is determined by the ordered list datablds, which is dynamically modified during the execution of the program and contains the block names in decreasing order of the likelihood of the block being of type S. The strategy rule shown here queries the user on what kind of block is to be identified, and prints out an Intermediate result. This is a simplification of the actual process where inputs are all taken from files, and the outputs are composed and put into files. 5. RESULTS The expert system described is implemented on a i’.A.\- 11’ 780 running UNIX. The rule-based system is written in Pro- log, and uses low-level image analysis routines written in (’ as w.ell as intermediate image processing routines written in Lisp. The rule-based system currently contains ninety three rules. Of the\e, fifty seven are knowledge rules, twenty five are control rules and the remaining eleven are strategy rules. The domain for testing has so far been that of postal mail-pieces. The system was used in trying to locate the destination address block on pieces of letter-mail. The location of all the other blocks on the envelope image was also done in the process. The usual block classifications in this domain include ‘destination address, ‘return address’, ‘postage stamp’, ’ markup label’, ‘other text’, ‘graphics’, etc. 
A variety of standard and non-standard mail-pieces that cannot be handled by the U.S. Postal Service's current OCR machines were digitized and used as inputs to the system. A reasonably high degree of accuracy (over 80%) was achieved in this domain. A sample of the kind of envelope images used in the testing of the program is given in Figure 6. In the example shown, the system successfully classified the destination address, and also identified all the other blocks correctly. For the envelope images used, the rule-based system achieved an "understanding" rate of approximately one image per second; efforts are under way to improve this rate so that the system can be more effectively used as a practical tool for document understanding. Addition of new rules for a larger variety of domains has been done, and more rules are continually being incorporated into the system. Creating rules for a wider set of domains poses no specific problems except that some of the rules have to be made more general (e.g., instead of trying to find an "address block", the same rules now try to locate a "block containing an address", thus allowing for the existence of such a block in various kinds of documents). Further improvements are also being made to the handling of uncertainty, as mentioned above.

Fig. 6 Example of envelope image input for system

6. DISCUSSION

The wisdom of using production rules to represent knowledge has been the topic of discussion among many AI researchers. MYCIN, an early expert system, used production rules to encode knowledge about diseases caused by certain types of bacterial infection. Since then, many rule-based expert systems have been successfully developed for various domains.
In the domain of document understanding, a rule-based system is extremely elegant because, unlike natural scenes, documents are very structured in character, and thus knowledge about features of documents can be very effectively formulated in terms of production rules. There are other advantages to using production rules in document image understanding. First, it is easy to apply either an additional strategy to a region that is hard to interpret with only one strategy, or a retry process having modified parameters. Second, software maintenance becomes easier, since addition/modification of rules is a relatively simple process that does not disrupt the rest of the system. Third, in a production system the processing is not carried out over the entire image uniformly, but only on necessary segments; thus, high efficiency is achieved. All these reasons make production rule-based systems eminently suitable for use in the domain of document understanding.

7. REFERENCES

[1] A. Barr and E.A. Feigenbaum (ed.), The Handbook of Artificial Intelligence, Vol. II, William Kaufman Inc., 1982, 77-294.
[2] B.G. Buchanan and E.H. Shortliffe, Rule-based Expert Systems: The MYCIN Experiments of the Stanford Heuristic Programming Project, Addison-Wesley, Reading, Massachusetts, 1984.
[3] W.F. Clocksin and C.S. Mellish, Programming in Prolog, Springer-Verlag, 1981.
[4] K. Kubota, O. Iwaki and H. Arakawa, "Document Understanding System", 7th International Conference on Pattern Recognition, Montreal, Canada, July 30-Aug 2, 1984, 612-614.
[5] M.D. Levine, "A Knowledge-based Computer Vision System", in Computer Vision Systems (Proceedings of a Workshop, Amherst, Massachusetts, June 1-3, 1977), A.R. Hanson and E.M. Riseman (ed.), Academic Press, New York, 1978, 335-352.
[6] D.M. McKeown, W.A. Harvey and J. McDermott, "Rule-Based Interpretation of Aerial Imagery", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. PAMI-7, No. 5, Sept. 1985, 570-596.
[7] A.M. Nazif and M.D. Levine, "Low Level Image Segmentation: An Expert System", IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-6, No. 5 (September 1984), 555-577.
[8] G. Nagy, S.C. Seth and S.D. Stoddard, "Document Analysis with an Expert System", Proc. of Pattern Recognition in Practice II, Amsterdam, June 19-21, 1985.
[9] S.N. Srihari, J.J. Hull, P.W. Palumbo, D. Niyogi and C-H. Wang, "Address Recognition Techniques in Mail Sorting: Research Directions", Tech. Report 85-09, Dept. of Computer Science, SUNY at Buffalo, August 1985.
[10] S.N. Srihari, "Document Image Analysis", unpublished manuscript, Dept. of Computer Science, SUNY at Buffalo, February 1986.
[11] L. Wesley and A. Hanson, "The Use of an Evidential-Based Model for Representing Knowledge and Reasoning about Images in the Visions System", in Proc. of the Workshop on Computer Vision, Rindge, New Hampshire, Aug. 23-25, 1982, IEEE Computer Society Press.
[12] K.Y. Wong, R.G. Casey and F.M. Wahl, "Document Analysis System", Proceedings of the 6th International Conference on Pattern Recognition, Munich, Germany, Oct. 19-22, 1982.
[13] T. Weymouth, J. Griffin, A. Hanson and E. Riseman, "Rule Based Strategies for Image Interpretation", Proceedings of AAAI-83, August 1983.
AN EXPERT SYSTEM FOR CHORALE HARMONIZATION1

Kemal Ebcioglu2
Department of Computer Science
226 Bell Hall
State University of New York at Buffalo
Buffalo, NY 14260

Abstract

We have designed an expert system called CHORAL, for harmonizing four-part chorales in the style of J.S. Bach. The system contains over 270 rules, expressed in a form of first order predicate calculus, for representing the knowledge required for harmonizing a given melody. The rules observe the chorale from multiple viewpoints, such as the chord skeleton, individual melodic lines of each voice, and the Schenkerian voice leading within the descant and bass. The program harmonizes chorales using a generate-and-test method with intelligent backtracking. A substantial number of heuristics are used for biasing the search toward musical solutions. Examples of program output are given in the paper. BSL, a new and efficient logic programming language which is fundamentally different from Prolog, was designed to implement the CHORAL system.

Introduction

In this paper, we will describe a rule-based expert system called CHORAL, for harmonization3 and Schenkerian analysis4 of chorales in the style of Johann Sebastian Bach. We will first outline a programming language called BSL, that was designed to implement the project, and we will then describe the CHORAL system itself.

BSL: an efficient logic programming language

Lisp, Prolog, and certain elegant software packages built on them, are known to be good languages for writing A.I. programs. However, in many existing computing environments, the inefficiency of these languages has a tendency to limit their domain of applicability to computationally small problems, whereas the problem of generating non-trivial music appears to require gigantic computational resources, and a sizable knowledge base. As a result, we were led to look for an alternative design language for implementing our project.
We decided to use first order predicate calculus for representing musical knowledge, and we designed BSL, an efficient logic programming language. From the execution point of view, BSL is an Algol-class non-deterministic language where variables cannot be assigned more than once except in controlled contexts. It has a Lisp-like syntax and is compiled into C via a Lisp program. We have provided BSL with formal semantics, in a style inspired from [de Bakker 79]. The semantics of a BSL program F is defined via a ternary relation ψ, such that ψ(F, σ, σ′) means program F leads to final state σ′ when started in initial state σ, where a state is a mapping from variable names to elements of a "computer" universe, consisting of integers, arrays, records, and other ancillary individuals. Given an initial state, a BSL program may lead to more than one final state, since it is non-deterministic, or it may lead to none at all, in case it never terminates. What makes BSL different from ordinary non-deterministic languages [e.g. Floyd 67, Smith and Enea 73, Cohen 79], and relates it to logic, is that there is a simple mapping that translates a BSL program to a formula of a first order language, such that if a BSL program terminates in some state σ, then the corresponding first order formula is true in σ (where the truth of a formula in a given state σ is evaluated in a fixed "computer" interpretation involving integers, arrays, records, and operations on these, after replacing any free variables x in the formula by σ(x)). A BSL program is very similar in appearance to the corresponding first order formula, and for this reason, we call BSL programs formulas. A formal and rigorous description of BSL and a proof of its soundness can be found in [Ebcioglu 86]. In this paper, we will only try to give an idea about the language, without attempting to explain all details.

1 This work was supported by NSF grant MCS-8316665.
Here is a BSL program to solve a classic puzzle [Floyd 67], followed by its first order translation: Place eight queens on a chess board, so that no two queens are on the same row, column, or diagonal. Assume that the rows and columns are numbered from 0 to 7, and the array elements p[0],...,p[7] represent the column number of the queen on row 0,...,7, respectively.

(E ((p (array (8) integer)))
  (A n 0 (< n 8) (1+ n)
    (E j 0 (< j 8) (1+ j)
      (and (A k 0 (< k n) (1+ k)
             (and (!= j (p k))
                  (!= (- j (p k)) (- n k))
                  (!= (- j (p k)) (- k n))))
           (:= (p n) j)))))

First order translation:

(∃p | type(p) = "(array (8) integer)")
  (∀n | 0 ≤ n < 8) (∃j | 0 ≤ j < 8)
    [(∀k | 0 ≤ k < n) [j ≠ p[k] & j−p[k] ≠ n−k & j−p[k] ≠ k−n] & p[n] = j]

As a reader familiar with logic can readily see, the first-order translation of the BSL formula shown here asserts that there exists an array p that is a solution for the eight queens problem. It can be seen that the BSL program and the corresponding first order assertion are very similar. In fact the assertion can be obtained from the program, provided that we translate the quantifiers in the program to a conventional notation, and we convert the assignment symbol in the program to an equality symbol. This BSL program compiles into an efficient backtracking program in C that finds and prints instantiations for the array p, that would make the (∃p)-quantified part of the corresponding assertion true in the fixed interpretation.

We can informally describe the non-deterministic semantics of BSL by drawing our examples from this eight-queens program: The existential quantifier (E ((p (array (8) integer))) F1) is like a begin-end block with a local variable p: it is executed by binding p with (in this case) an array of eight elements whose values are initially equal to U (the unassigned object), and then executing the constituent formula F1. The possible type declarations for p in the context of this construct include the integer type, and inductively defined array and record types. The bounded universal quantifier (A n 0 (< n 8) (1+ n) F1) is similar to a C "for" loop with a local variable n; its constituent formula F1 is executed successively with n=0,1,...,7. The bounded existential quantifier (E j 0 (< j 8) (1+ j) F1) is a non-deterministic choice construct; it is executed by setting its local variable j to 0, incrementing j an arbitrary number of times (possibly zero times), and finally executing the constituent formula F1. If j is incremented too many times so that it is no longer less than 8, the program does not terminate. The construct (and F1 F2 ...) is like the Pascal semicolon; it is executed by executing F1, F2, ... one after the other. (or F1 F2 ...), not exemplified in the program, is another non-deterministic choice construct; it is executed by executing one of F1, F2, .... BSL's tests and assignments are called atomic formulas.

2 Author's present address: IBM, Thomas J. Watson Research Center, P.O. Box 218, Yorktown Heights, NY 10598.
3 A chorale is a short musical piece that is sung by a choir consisting of men's and women's voices. There are four parts in a chorale (soprano, alto, tenor, bass) which are sung together; the soprano part is the main melody. Harmonization is the process of composing the alto, tenor and bass parts when the soprano is given. J.S. Bach has produced many chorale harmonizations [Terry 64].
4 Schenkerian analysis refers to a music analysis method developed by Heinrich Schenker [1868-1935], whereby an entire piece of tonal music is reduced to a fixed descending sequence of three, five or eight notes (accompanied by a bass), via a process roughly similar to parsing using a formal grammar. It is often regarded as the deepest way of understanding music.

From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved.
A test such as (!= j (p k)) is executed through ordinary comparison, but if the test does not come out to be true, the program does not terminate. (!= means "not equal" and (p k) is an abbreviation for (sub p k), i.e. p[k]). An assignment such as (:= (p n) j) is executed in the usual destructive manner, but the value of the left hand side p[n] must be U before the assignment, or else the program does not terminate (the purpose of this check of the left hand side is to ensure that formulas such as (E ((x integer)) (and (:= x 0) (:= x 1))) cannot terminate; BSL formulas involving more than one explicit assignment to a variable are considered erroneous). Other erroneous computations (such as attempting to use a variable while its value is still U, or dividing by 0) also cause non-termination.

The translation of a given BSL formula to the first order assertion which is true at its termination states is mostly obvious, as the eight-queens example illustrates; however, both the equality test (==) and the assignment (:=) symbols of BSL are translated to the equality symbol in the logical counterpart. Thus, the program may contain procedural information not present in its logical translation. The first order translation of a BSL program without free variables is a sentence, whose truth does not depend on the value of any variable at the termination state; successful execution of such a BSL program amounts to a constructive proof of the corresponding first-order sentence.

A BSL program of the form (E ((x typ)) ...) compiles into a backtracking program that attempts to simulate essentially all of its possible executions, and prints out the value of x at the end of every execution that turns out to be successful. However, certain intuitively unnecessary executions involving assignment-free formulas are skipped over via a built-in "cut" convention, similar to Prolog's cut.
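For illustration, here is a sketch of ours (not actual compiler output) of the kind of backtracking search the compiled eight-queens program performs: the bounded existential quantifier becomes a loop over candidate values, a failing test abandons the current nondeterministic branch, and the single-assignment array element is undone on backtracking.

```python
# Python sketch of the backtracking execution of the eight-queens formula.

def queens(n=8):
    p = [None] * n                 # the solution array, initially all U
    solutions = []

    def place(row):
        if row == n:
            solutions.append(p[:])         # one successful execution
            return
        for j in range(n):                 # (E j 0 (< j n) (1+ j) ...)
            if all(j != p[k]                       # same column
                   and j - p[k] != row - k         # same diagonal
                   and j - p[k] != k - row         # same anti-diagonal
                   for k in range(row)):
                p[row] = j                 # (:= (p row) j)
                place(row + 1)
                p[row] = None              # undo the assignment

    place(0)
    return solutions

print(len(queens()))   # 92: the well-known number of 8-queens solutions
```

Unlike this sketch, the compiled C program enumerates and prints solutions without materializing them in a list, and with the optimizations described next.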
The compiled code does not implement backtracking blindly; it omits the run-time checks against double assignment, incorporates elaborate optimizations, and is very efficient, rivaling hand-coded C. On the simple integer computations for which BSL is intended, BSL tends to be significantly faster than Prolog and Lisp in the traditional or RISC computing environments. Also, because the single assignment nature of BSL facilitates the detection of parallelism, some further modest speedup for BSL appears to be achievable in the future, via the emerging "very long instruction word" architectures and compilation techniques [Fisher 79, Ellis 86, Nicolau 85, Touzeau 84].

The language subset described up to here is called L*, and constitutes the "pure" subset of BSL. The full language has some more, but not many more, features; we tried to keep BSL small. These features are mainly user-defined predicates that allow Prolog-style backward chaining, user-defined functions, enumeration types, and macro and constant definitions that allow access to the full procedural capabilities of Lisp. A limited form of the "not" connective is defined as a macro, which is expanded by moving the "not"s in front of the tests via DeMorgan-like transformations, and then eliminating the "not"s by changing == to !=, etc. The language is also extended with heuristics, which are BSL formulas themselves, which can guide the backtracking search in order to enumerate the better solutions first.

Representing knowledge from multiple viewpoints

Representing knowledge using multiple views of the solution object is a need that arises during the design of complex expert systems. For example the Hearsay-II speech understanding system [Erman et al. 80] had to view the input utterance as mutually consistent streams of syllables, words and word sequences. Similarly, the "Constraints" system [Sussman and Steele 80] had used equivalent circuits for viewing a given circuit from more than one viewpoint.
A similar need for a multiple viewpoint knowledge representation was felt during the design of the CHORAL system. In a first order logic representation of knowledge, a good way to encode multiple viewpoints is to use different primitive predicates and functions for each viewpoint. For example, to represent the harmonic view of a polyphonic piece of music, two functions p(n,v), a(n,v) and a predicate s(n,v) can be used as primitives that stand for the pitch and accidental of voice v at time unit n, and whether a new note is struck by voice v at time unit n. A different set of primitives would be required for expressing constraints about the melodic lines of the individual voices. Multiple sets of primitives are important, because formulas tend to be unnecessarily long when written with the wrong primitives. However, since BSL incorporates native operations on Pascal-style data structures, it is preferable to use data structure substitutes for the primitive functions and predicates of a viewpoint when this is possible.

We now describe one particular method of representing knowledge from multiple viewpoints in BSL, where we assume that each viewpoint is represented by a different data structure, typically an array of records (called the solution array of that viewpoint), which serves as a rich set of primitive pseudo functions and predicates for that view. This multiple view paradigm, which was used in CHORAL, has the following procedural aspect, which amounts to parallel execution of generate-and-test: It is convenient to visualize a separate process for each viewpoint, which incrementally constructs (assigns to) its solution array, in close interaction with other processes constructing their respective solution arrays. Each process executes a sequence of "generate-and-test step"s.
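As an illustration of a solution array, the harmonic-view primitives p(n,v), a(n,v) and s(n,v) above could be packaged as an array of records along the following lines (a Python sketch with field names of our own invention; CHORAL's actual BSL record layout is not given in the paper):

```python
from dataclasses import dataclass

# Sketch: one viewpoint's solution array as an array of records.

VOICES = ("bass", "tenor", "alto", "soprano")

@dataclass
class TimeUnit:
    pitch: dict        # p(n, v): pitch of voice v at this time unit
    accidental: dict   # a(n, v): accidental of voice v
    struck: dict       # s(n, v): does voice v strike a new note here?

harmonic_view = []     # the solution array, one record per time unit n

def generate_and_test_step(pitches, accidentals, struck):
    """Assign the n'th element of the solution array (n = current length)."""
    harmonic_view.append(TimeUnit(pitches, accidentals, struck))

generate_and_test_step({v: 60 for v in VOICES},
                       {v: 0 for v in VOICES},
                       {v: True for v in VOICES})
print(len(harmonic_view))   # 1
```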
At the n'th generate-and-test step of a process, an acceptable value is selected and assigned to the n'th element of the solution array of the viewpoint, depending on the elements 0,...,n-1 of the same solution array, the currently assigned elements of the solution arrays of other viewpoints, and the program input. The processes, implemented as BSL predicate definitions, are arranged in a round-robin scheduling chain. With the exception of the specially designated process called the clock process, each process first attempts to execute zero or more generate-and-test steps until all of its inputs are exhausted, and then gets blocked, giving way to the next process in the chain. The specially designated clock process attempts to execute exactly one step when it is scheduled; all other processes adjust their timing to this process.5 By adjusting the input-wait predicates of the processes, a variety of known techniques can be implemented, ranging from graceful shift of focus among the different viewpoints [Erman et al. 80], to hierarchical planning by stages [Sacerdoti 74].

5 In certain cases a view may be completely dependent on another, i.e. it may not introduce new choices on its own. In the case of such redundant views, it is possible to maintain several views (solution arrays) in a single process, and share heuristics and constraints, provided that one master view is chosen to execute the process step and comply with the paradigm.

The knowledge base of each viewpoint is expressed in three groups of subformulas, which determine the way in which the n'th generate-and-test step is executed:

Production rules: These are the formal analogs of the production rules that would be found in a production system for a generate-and-test application [Stefik 78]. The informal meaning of a production rule is "IF certain conditions are true about the partial solution (elements 0,...,n-1, and the already assigned attributes of element n), THEN a certain value can be added to the partial solution (assigned to a group of attributes of element n)." Their procedural effect is to generate the possible assignments to element n of the solution array.

Constraints: These side-effect-free subformulas assert absolute rules about elements 0,...,n of the solution array, and external inputs. They have the procedural effect of rejecting certain assignments to element n of the solution array (this effect is also called early pruning).

Heuristics: These side-effect-free subformulas assert desirable properties of the solution elements 0,...,n and external inputs. They have the procedural effect of having certain assignments to element n of the solution array tried before others are. The purpose of the heuristics is to guide the search so that the solution first found is hopefully a good solution. The worth of each candidate assignment to solution element n which complies with the constraints is determined by summing the weights of the heuristics that it makes true. Execution then continues with the best assignment to solution element n (with ties being resolved randomly), and then, if backtracking occurs to this step, with the next best, etc. Heuristics are weighted by decreasing powers of two; this weighting scheme was chosen because it does not involve arbitrary numerical coefficients, and because it is known to yield good results in music generation [Ebcioglu 81].

In case no possibilities can be found at a particular step of a process, control does not necessarily return to the chronologically preceding step in the history of the steps of the processes. Every scalar variable (or scalar part of a variable) has a tag associated with it. When a variable is assigned a value, its tag is assigned a stack level to backtrack to in order to undo that assignment. During the execution of a step, a running maximum of the tags of the variables that occur in the failing tests is maintained; and when the step fails, backtracking occurs to the most recent responsible step (stack level) thus computed. This is a domain independent, compilable intelligent backtracking technique, and has little run-time overhead when it is useless. It does eliminate the typical need for the (somewhat inelegant) explicit intrusion into the control mechanism in the style of Conniver [Sussman and McDermott 72]. Other approaches to this problem were tried by, e.g., [de Kleer 86], [Bruynooghe and Pereira 81], [Stallman and Sussman 77], [Doyle 79].

An expert system is often praised for the esoteric control structures that it introduces. We must therefore explain why we have chosen such a streamlined architecture for designing an expert system, rather than a more complex paradigm such as the multiple demon queues of [Stallman and Sussman 77], where demons are arranged within several scheduling queues, or the opportunistic scheduling of Hearsay II ([Erman et al. 80], also [B. Hayes-Roth 85]), where the production system control is achieved by essentially a separate expert system. We believe that striving to use simpler control structures is a better approach to the design of large systems, provided that an attempt is made to alleviate the non-optimal nature of such control structures through an efficient implementation. Unfortunately, we do not know of an easy way to extend the streamlined design approach to the knowledge base itself: certain application domains appear to resist simplification.

The knowledge models of the CHORAL system

We are finally in a position to discuss the CHORAL system itself. The CHORAL system uses the back-trackable process scheduling technique described above to implement the following viewpoints of the chorale:

The chord skeleton view observes the chorale as a sequence of rhythmless chords and fermatas, with some unconventional symbols underneath them, indicating key and degree within key. This is the clock process, and produces one chord per step. This is the view where we have placed, e.g., the production rules that enumerate the possible ways of modulating to a new key, constraints about the preparation and resolution of a seventh in a seventh chord, and heuristics that prefer Bachian cadences.

The fill-in view observes the chorale as four interacting automata that change states in lockstep, generating the actual notes of the chorale in the form of suspensions, passing tones and similar ornamentations, depending on the underlying chord skeleton. This view reads the chord skeleton output. This is the view where we have placed, e.g., the production rules for enumerating the long list of possible inessential note patterns that enable the desirable bold clashes of passing tones, a constraint about not sounding the resolution of a suspension above the suspension, and a heuristic on following a suspension by another in the same voice (a Bachian cliche).

The time-slice view observes the chorale as a sequence of vertical time-slices each of which has a duration of an eighth note, and imposes the harmonic constraints. This view is redundant with and subordinate to fill-in. We have placed, e.g., the constraint about consecutive octaves and fifths in this view.

The melodic string view observes the sequence of individual notes of the different voices from a purely melodic point of view. The merged melodic string view is similar to the melodic string view, except that the repeated adjacent pitches are merged into a single note. These views are also redundant with, and subordinate to, fill-in. These are the views where we have placed, e.g., a constraint about sevenths or ninths spanned in three notes, and a heuristic about continuing a linear progression.

The Schenkerian analysis view is based on our formal theory of hierarchical voice leading, inspired from [Schenker 79] and [Lerdahl and Jackendoff 83]. The core of this theory consists of a set of rewriting rules [Ebcioglu 85, 86] which are used for parsing the bass and descant (melody) lines of the chorale separately. The Schenkerian analysis view observes the chorale as the sequence of steps of two non-deterministic bottom-up parsers for the descant and bass. These read the fill-in view output. In this view we have placed, e.g., the production rules that enumerate the possible parser actions that can be done in a given state, a constraint about the agreement between the fundamental line accidentals and the key of the chorale, and a heuristic for proper recognition of a Schenkerian D-C-B-C ending pattern.

The chorale program presently incorporates over 270 production rules, constraints and heuristics. The rules were found from empirical observation of the Bach chorales [Terry 64], personal intuitions, and certain anachronistic, but nevertheless useful, traditional music treatises such as [Louis and Thuille 06] and [Koechlin 28]. As a concrete example as to what type of knowledge is embodied in the program, and how such musical knowledge is expressed in BSL's logic-like notation, we take a constraint from the chord skeleton view.

The following subformula asserts a familiar constraint about false relations: "When two notes which have the same pitch name but different accidentals occur in two consecutive chords, but not in the same voice, then the second chord must be a diminished seventh, or the first inversion of a dominant seventh, and the bass of the second chord must sound the sharpened fifth of the first chord, or the soprano of the second chord must sound the flattened third of the first chord." (The exception where the bass sounds the sharpened fifth of the first chord is commonplace; the less usual case where the soprano sounds the flattened third can be seen in the chorale "Herzlich thut mich verlangen," no. 165.6 There exist some further, less frequent exceptions, e.g. false relations between phrase boundaries when the roots of the two chords are equal (no. 46), but we did not attempt to be exhaustive.) The complexity of this rule is representative of the complexity of the production rules, constraints and heuristics of the CHORAL system. We see the BSL code for this rule below:

(A u bass (<= u soprano) (1+ u)
 (A v bass (<= v soprano) (1+ v)
  (imp (and (> n 0)
            (== (mod (p1 u) 7) (mod (p0 v) 7))
            (!= (a1 u) (a0 v))
            (!= u v))
       (and (member chordtype0 (dimseventh domseventh1))
            (or (and (== (a0 v) (1+ (a1 u)))
                     (== v bass)
                     (== (mod (- (p0 v) root1) 7) fifth))
                (and (== (a0 v) (1- (a1 u)))
                     (== v soprano)
                     (== (mod (- (p0 v) root1) 7) third)))))))

Here, n is the sequence number of the current chord, (pi v), i=0,1,... is the pitch of voice v of chord n-i, encoded as 7*octave number + pitch name, (ai v), i=0,1,... is the accidental of voice v in chord n-i, and chordtypei and rooti, i=0,1,... are the pitch configuration and root of chord n-i, respectively. The notation p0, p1, etc. is an abbreviation system, obtained by an enclosing BSL "with" statement, that allows convenient and fast access to the most recent elements of the array of records representing the chord skeleton view. (imp F1 F2), and (member x (y1 y2 ...)) are macros that have the predictable expansions. We repeat the constraint below in a more standard notation for clarity, using the conceptual primitive functions of the chord skeleton view instead of the BSL data structures that implement them:

(∀u | bass ≤ u ≤ soprano) (∀v | bass ≤ v ≤ soprano)
 [n>0 & mod(p(n−1,u),7) = mod(p(n,v),7) & a(n−1,u) ≠ a(n,v) & u ≠ v →
  chordtype(n) ∈ {dimseventh, domseventh1} &
  [a(n,v) = a(n−1,u)+1 & v = bass & mod(p(n,v)−root(n−1),7) = fifth ∨
   a(n,v) = a(n−1,u)−1 & v = soprano & mod(p(n,v)−root(n−1),7) = third]].

To exemplify the BSL code corresponding to a heuristic, we again take the chord skeleton view. The following heuristic asserts that it is undesirable to have all voices move in the same direction unless the target chord is a diminished seventh. Here the construct (Em Q (q1 q2 ...) (F Q)) is a macro which expands into (or (F q1) (F q2) ...), thus producing a useful illusion of second order logic.

(imp (and (> n 0)
          (Em Q (< >)
           (A v bass (<= v soprano) (1+ v)
            (Q (p1 v) (p0 v)))))
     (== chordtype0 dimseventh))

We again provide the heuristic in a more standard notation, for clarification:

[n>0 & (∃Q ∈ {<,>}) (∀v | bass ≤ v ≤ soprano) [Q(p(n−1,v), p(n,v))] → chordtype(n) = dimseventh].

What has been accomplished

Although the CHORAL system is primarily a research project rather than a commercial expert system, we have spent considerable effort to make it do well in its task; it has not been easy, and our success has only been moderate by scholarly standards. While it is certainly up to the music theorist reader to evaluate the harmonizations, we nevertheless wish to make a few remarks here. We are not aware of previous research on computer-generated tonal music that has yielded results of comparable quality. Unfortunately, the style of the program is not Bach's, except for certain cliche patterns; in particular the program is too greedy for modulations. Whether the present algorithm is indeed a natural cognitive model for Bach chorales or for musical composition in general (e.g. whether modifying the constraints and heuristics could yield a significantly better approximation of the Bach style) would be the topic of a much longer research. However, the results appear to demonstrate that tonal music of some competence can indeed be produced through the rule-based approach. The program has also produced good hierarchical voice leading analyses of descant lines, but the Schenkerian analysis knowledge base still reflects a difficult basic research project; we were simply unable to produce a sufficiently large number of rules for Schenkerian analysis. The CHORAL system accepts an alphanumeric encoding of the chorale melody as input, and produces the chorale score in conventional music notation, and the parse trees in Schenkerian slur-and-notehead notation. The output can be directed to a graphics screen, or can be saved in a file for later printing on a laser printer. We present some output examples at the end of this paper. The examples show a harmonization of Chorale no. 48 from [Terry 64], and an analysis of its descant line. In the last three measures of the bass part of the harmonization, the g-a-a-g#-a pattern (x-y-x-y pitch pattern with possible repeats), and the (eighth eighth quarter) rhythmic pattern that falls on a strong beat, may be considered objectionable by a trained musician; however, for computational economy reasons, we had to install some of the rules advising against these patterns as "negative" heuristics, which unfortunately cannot rule them out completely. The figures underneath the descant analysis of no. 48 indicate the internal depth of the parser stack and the state of the parser, after the corresponding note is seen. For those familiar with Schenkerian analysis, the numbers might be taken to mean the lowest level that the note belongs to, where level numbers increase as we go from the background to the foreground. We can follow the fundamental line, a fifth progression preceded by an initial ascent in this case, at those notes whose levels are 1, except for the final note, whose level is 0.

6 All chorale numbers in this paper are from [Terry 64].

Acknowledgements

I wish to thank my advisor Prof. John Myhill for getting me interested in the mechanization of Schenkerian analysis, and his enlightening discussions.

References

Bruynooghe, M. and Pereira, L.M. "Revision of Top-down Logical Reasoning through Intelligent Backtracking" Centro de Informatica da Universidade Nova de Lisboa, Report no. 8/81, March 1981.
Cohen, J. "Non-deterministic Algorithms" Computing Surveys, Vol. 11, No. 2, June 1979.
de Bakker, J. "Mathematical Theory of Program Correctness" North Holland, 1979.
de Kleer, J. "An Assumption-based TMS" Artificial Intelligence 28 (1986), 127-162.
Doyle, J. "A Truth Maintenance System" Artificial Intelligence 12 (1979), 231-272.
Ebcioglu, K. "Computer Counterpoint" Proceedings of the 1980 International Computer Music Conference, Computer Music Association, San Francisco, 1981.
Ebcioglu, K. "An Expert System for Schenkerian Synthesis of Chorales in the Style of J.S. Bach" Proceedings of the 1984 International Computer Music Conference, Computer Music Association, San Francisco, 1985.
Ebcioglu, K. "An Expert System for Harmonization of Chorales in the Style of J.S. Bach" Ph.D. thesis, Department of Computer Science, S.U.N.Y. at Buffalo, February 1986.
Ellis, J.R. "Bulldog: A Compiler for VLIW Architectures" MIT Press, 1986.
Erman, L.D., Hayes-Roth, F., Lesser, V.R., and Reddy, D.R. "The Hearsay-II Speech Understanding System: Integrating Knowledge to Resolve Uncertainty" Computing Surveys, Vol. 12, No. 2, June 1980.
Fisher, J. "The Optimization of Horizontal Microcode within and beyond Basic Blocks: An Application of Processor Scheduling with Resources" Ph.D. Thesis, Dept. of Computer Science, New York University, October 1979.
Floyd, R. "Nondeterministic Algorithms" JACM, Vol. 14, No. 4, October 1967.
Hayes-Roth, B. "A Blackboard Architecture for Control" Artificial Intelligence 26 (1985), 251-321.
Koechlin, Ch. "Traité de l'Harmonie" Volumes I, II, III, Editions Max Eschig, Paris, 1928, 1930, 1928, respectively.
Lerdahl, F. and Jackendoff, R. "A Generative Theory of Tonal Music" MIT Press, 1983.
Louis, R. and Thuille, L. "Harmonielehre" C. Grüninger, Stuttgart, 1906.
Nicolau, A. "Percolation Scheduling: A Parallel Compilation Technique" TR 85-678, Dept. of Computer Science, Cornell University, May 1985.
Sacerdoti, E.D. "Planning in a Hierarchy of Abstraction Spaces" Artificial Intelligence 5 (1974), 115-135.
Schenker, H. "Free Composition (Der freie Satz)" translated and edited by Ernst Oster, Longman, 1979.
Smith, D.C. and Enea, H.J. "Backtracking in Mlisp2" Proceedings of the Third IJCAI, 1973.
Stallman, R.M. and Sussman, G.J. "Forward Reasoning and Dependency-Directed Backtracking in a System for Computer-Aided Circuit Analysis" Artificial Intelligence 9 (1977), 135-196.
Stefik, M. "Inferring DNA Structures from Segmentation Data" Artificial Intelligence 11 (1978), 85-114.
Sussman, G.J. and McDermott, D.V. "From PLANNER to CONNIVER -- A Genetic Approach" Proc. AFIPS 1972 FJCC, AFIPS Press (1972), 1171-1179.
Sussman, G.J. and Steele, G.L. "Constraints - A Language For Expressing Almost-Hierarchical Descriptions" Artificial Intelligence 14 (1980), 1-39.
Terry, C.S. (ed.) "The Four-voice Chorals of J.S. Bach" Oxford University Press, 1964.
Touzeau, R.F.
“A Fortran Compiler for the FPS-164 Scientific Com- puter” Proceedings of the SIGPLAN ‘84 Symposium on Compiler Construction, June 1984. Chorale no. 48 2322212212332122 788 / ENGINEERING
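As an aside on notation: the "all voices in the same direction" heuristic quoted earlier reads naturally as an implication over the pitch array p(n,v). The following is a minimal Python rendering of that reading, with a hypothetical list-of-lists layout for the pitches; it is not the BSL implementation.

```python
# Hypothetical rendering of the CHORAL heuristic discussed above: it is
# undesirable for all voices to move in the same direction unless the
# target chord is a diminished seventh.
# p[n][v] is the pitch of voice v (0 = bass .. 3 = soprano) at chord n.

def all_voices_same_direction(p, n):
    """True iff every voice strictly rises, or every voice strictly
    falls, from chord n-1 to chord n (the antecedent of the heuristic)."""
    if n <= 0:
        return False
    rising = all(p[n][v] > p[n - 1][v] for v in range(4))
    falling = all(p[n][v] < p[n - 1][v] for v in range(4))
    return rising or falling

def heuristic_ok(p, n, chordtype):
    # The implication: same-direction motion => chordtype is dimseventh.
    return (not all_voices_same_direction(p, n)) or chordtype == "dimseventh"
```

For example, if all four voices rise between two chords, `heuristic_ok` holds only when the target chord type is `"dimseventh"`; if even one voice moves the other way, the antecedent fails and any chord type is acceptable.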
Generating Tests by Exploiting Designed Behavior

Mark Harper Shirley
MIT Artificial Intelligence Laboratory
545 Technology Square
Cambridge, MA 02139
([email protected])

Abstract

One of the hardest problems in digital circuit design is test pattern generation for a complex device. This is difficult in part because it requires reasoning about how to control a device whose behavior can be extremely complex. Knowledge of the specific operations that the device was designed to perform can help solve this problem. The key observation is that a device's designed behavior is often far more limited than the device's potential behavior. This limitation translates into a reduction of the search necessary to achieve planning goals. We describe an implemented program based on this idea.

1 Introduction

When reasoning about how to manipulate complex engineered systems (e.g. for test generation or diagnosis), knowing how the system is designed to behave is an important kind of knowledge to apply. This paper demonstrates how this knowledge can be used in several ways to produce an effective problem solver for one of the hardest problems in electronic design: test pattern generation for VLSI devices.

Test generation encounters several traditional AI concerns, including conjunctive planning, and remains an important, unsolved problem: existing programs are notably unsuccessful on large circuits. Human test programmers are more successful, in part because they use knowledge from many sources and a variety of reasoning techniques. Here we focus on one kind of knowledge - designed behavior - which they find important.

At the core of our test generation system is a planner based on one key observation: a device's designed behavior, i.e. the operations that a device is intended to carry out, is often far more limited than its potential behavior. For example, very few sequences of inputs to a disk controller correspond to valid commands.
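The disk-controller observation can be made concrete with a toy count. Assuming, purely for illustration, 8-bit command words of which only four are defined, the designed-behavior space is a vanishing fraction of the potential input space, and the gap widens exponentially with sequence length:

```python
# Toy illustration of how designed behavior shrinks the search space.
# The opcode set here is hypothetical, not taken from any real device.

POTENTIAL_WORDS = 2 ** 8                      # any 8-bit input word
DESIGNED_OPCODES = {0x01, 0x02, 0x10, 0x20}   # four defined commands

def designed_fraction(seq_len):
    """Fraction of length-seq_len input sequences that use only defined
    opcodes (the space a designed-behavior planner would search)."""
    return (len(DESIGNED_OPCODES) / POTENTIAL_WORDS) ** seq_len
```

With these numbers, a single word is designed behavior 1 time in 64, and a three-word sequence roughly 1 time in 260,000.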
When solutions to planning problems can be found within designed behavior, this limitation can translate into a reduction of the search necessary to achieve planning goals.

In testing, the goals involve manipulating individual components. This work relies on what we call the Designed-Behavior Hypothesis: that every component in the device can be fully and efficiently exercised while staying within the device's designed behavior. When this is true, we can reduce the search necessary to generate tests. Our program successfully generates tests in situations where this assumption is valid, and (quickly) fails to generate tests where it is not. An example is shown which illustrates both cases.

This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology and at GenRad Inc. Support for the laboratory's Artificial Intelligence research on digital hardware troubleshooting is provided in part by the Digital Equipment Corporation, and in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-80-C-0505.

AAAI-86 National Conference on Artificial Intelligence

We represent the space of designed behavior with a behavior graph, a structure that describes how data is supposed to flow through the device. We generate these graphs by simulation of the operations defined at the device's interfaces, e.g. the instruction set defined for a processor. The list of operations is assumed to be complete.

The planner operates by matching a testing goal, like loading a register with a certain value, against the behavior graphs and specializing successful matches to create solutions. This planning method is most appropriate for devices whose command interfaces are narrow in the number of distinct operations they admit, since each operation requires the non-trivial effort of creating a behavior graph.
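One plausible minimal data structure for such a behavior graph - an annotated trace whose nodes carry symbolic expressions and whose edges are justification links - is sketched below. The READ-cycle events and node names are illustrative stand-ins, not the paper's actual traces.

```python
# A minimal sketch of a behavior graph as an annotated simulation trace:
# events carry symbolic expressions, and justification links record
# which earlier events caused each event.  All names are illustrative.

class BehaviorGraph:
    def __init__(self, operation):
        self.operation = operation   # e.g. "READ"
        self.events = []             # (time, node, expression)
        self.justifications = {}     # event index -> causing event indices

    def record(self, time, node, expr, because=()):
        self.events.append((time, node, expr))
        self.justifications[len(self.events) - 1] = tuple(because)
        return len(self.events) - 1

# A READ cycle described with a symbolic address and symbolic data:
read = BehaviorGraph("READ")
a = read.record(0, "ADDR-BUS", "?addr")
b = read.record(1, "RAM-OUT", "Contents(RAM, ?addr)", because=(a,))
read.record(2, "DATA-BUS", "Contents(RAM, ?addr)", because=(b,))
```

Because the address is the variable `?addr`, one trace stands for every concrete READ; a matcher can later bind `?addr` to whatever a testing goal requires.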
In the domain of digital circuits, we believe the method is appropriate when generating tests for many of the building blocks of microprocessor systems (e.g. processors, UART's, graphics controllers, and the like).

The next section gives an overview of the test generation problem, and section 3 describes the consequences of several characteristics of this problem for representation and reasoning strategy. Section 4 introduces a simple processor which provides examples throughout the paper, while section 5 contains the details of our method. The method's performance on the processor is shown in section 6. Finally, a simple account of processor design is presented in section 7, which provides justification for the performance achieved.

From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved.

2 The Task

We are interested in the task of test generation for VLSI components. This task is part of the quality control phase of circuit manufacturing where we verify that the physical circuit in fact produces the designed behavior. Test generation takes a design description for a circuit and produces a set of tests. Taken together the tests must specify sets of inputs and predicted outputs such that, when the inputs are applied to the circuit and all outputs match the predicted values (i.e. the circuit passes all tests), we can be confident the circuit was manufactured correctly.

Tests are traditionally created by partitioning the design into components and generating a test for each component by (1) working out how to test the component as if it were alone, and (2) working out how to execute that test within the context of the larger circuit. The second problem is conjunctive planning, since it involves achieving multiple, potentially-interacting goals: we have to work out how to manipulate the circuit's primary inputs to cause a desired pattern of activity around a component. The classical approaches (e.g.
[Roth66]) use a combination of heuristic search and constraint propagation using gate-level representations of the circuit's structure and behavior. Current runtimes are excessive for anything other than combinational circuits and simple state machines, and we think this is a direct consequence of the use of gate-level representations. More recent research (e.g. [Singh86]) applies similar search techniques to hierarchical models of device structure and behavior, yet still does not take advantage of knowing the designed behavior. For complex circuits, the human experts remain much more successful than any existing test generation program, and the next section describes several ways - inspired by what the experts do - of limiting search.

3 Characteristics of the Problem Domain

In this section we describe two characteristics of the problem domain and the effects they have on our choices of representation and reasoning strategy.

3.1 Purposefully Designed Systems

Teleology, or the notion of purpose, has been used in several programs for reasoning about physical systems. In [DeKleer79], for example, the knowledge that a device had some purpose, even though unspecified, was sufficient to aid in determining how an analog circuit worked. We require more information than this; we need to know what that purpose is, i.e. the tasks the system is supposed to accomplish. Our program then uses this information to focus on solutions to testing problems.

Figure 1 illustrates the relative sizes of solution spaces that different test generation algorithms search. The sets contain possible states of a circuit, i.e. the different ways of consistently assigning values to the circuit's nodes. If the components comprising the circuit are disconnected, then there is no constraint between their states, and the set of possible circuit states is the cross product of the sets of individual component states. Connecting the components together makes many potential states inconsistent, resulting in a smaller set of possible states. This is the space that a planner with perfect ability to propagate constraints could search. Since local constraint propagation techniques are incomplete in their ability to detect global conflicts, algorithms using these techniques search a somewhat larger space and backtrack when inconsistencies are found. Within the space of consistent states of the circuit lies the space of behaviors the circuit was designed to perform. This is the space our test generator searches. The ratio of designed behavior to possible behavior depends upon the particular circuit, but the more specialized and complex the circuit, the smaller it tends to be.1

1 In the case of the MARK-2 processor, this ratio is on the order of 2^-200. Naturally this number says nothing about how the spaces are searched nor about the frequency and distribution of solutions within the spaces. We haven't yet done the performance measurements for some typical circuits that would yield these numbers. But, as suggested by the performance of our method on a microprogrammed processor, we expect our method to be a useful addition to the spectrum of techniques helpful for solving test generation problems.

Figure 1: Relative sizes of sets of circuit states

We explicitly represent the space of a circuit's designed behavior with the behavior graphs mentioned above, which essentially are annotated simulation traces. For efficiency, circuit operations are parameterized, and the corresponding behavior graphs contain expressions with variables in them, e.g. a READ cycle is described in terms of a symbolic address and symbolic data. The point of the behavior graph representation is this: contained within the set of graphs are many patterns of activity around every component in the circuit. "All our planner must do" is find the patterns of activity that are useful in testing.
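The parameterized traces mentioned above are built by propagating symbolic expressions through the circuit. A minimal sketch of that propagation, with a toy one-gate netlist and a constant-folding simplifier (both illustrative assumptions, not the paper's simulator), looks like this:

```python
# Sketch of symbolic propagation: gate outputs become expressions over
# symbolic inputs, with simple constant folding.  The netlist format and
# node names are illustrative stand-ins.

def simplify(expr):
    # Constant-fold ("+", 1, 2) -> 3; leave symbolic operands alone.
    if isinstance(expr, tuple) and expr[0] == "+":
        if all(isinstance(x, int) for x in expr[1:]):
            return sum(expr[1:])
    return expr

def simulate(netlist, inputs):
    """netlist: list of (output_node, op, input_nodes).  Returns node
    values plus, for each computed node, the nodes that justified it."""
    values = dict(inputs)
    justifications = {}
    for out, op, ins in netlist:
        expr = simplify((op,) + tuple(values[i] for i in ins))
        values[out] = expr
        justifications[out] = ins    # the events that caused this one
    return values, justifications

# One run with a symbolic program counter stands for every concrete run:
vals, just = simulate([("SUM", "+", ("A", "B"))], {"A": "?PC", "B": 1})
```

Here `vals["SUM"]` is the symbolic expression `("+", "?PC", 1)`, analogous to the paper's observation that the program counter is left holding (+ ?PC 1); feeding in concrete integers instead folds the expression to a number.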
We assume the designer can provide a complete list of operations defined for the circuit's interfaces. Because this list is complete, the behavior graphs generated from them represent all normal circuit behavior. Hence, failure to find a useful pattern of activity indicates that there is no solution within the normal operations of the circuit.

3.2 Extreme Complexity

A second and dominating characteristic of this domain is the extremely complex behavior of modern VLSI components. Algorithms which manage this complexity are essential. One way to do this is to identify useful abstractions.2

The traditional partitioning of the problem into two parts, one that focuses closely on a single component and another that involves the rest of the circuit, suggests that we might usefully build our problem solver in two parts, each part potentially working at a different abstraction level. And this is what we do: a micro-planner is responsible for solving conjunctive goals around individual components, and a macro-planner is responsible for taking the output of the micro-planner and then controlling grosser aspects of the circuit state, namely its data registers.

We must describe the circuit structure and behavior for these two parts of our program. For simplicity's sake, the prototype system models the circuit structure as a flat network of components, roughly at the level of the block diagrams in databooks.3

2 In fact, because the circuits are designed, it is not surprising that there are abstractions readily available; they are the abstractions used by the designer.
3 Note that, although the model is flat, it is considerably above the level of individual gates.

The system uses four levels of behavioral description. The simulation models for the individual components are the foundation. We only require them to predict outputs from inputs, so we can implement them directly as Lisp functions. The next level is the set of behavior graphs mentioned above. The final two are summaries of the behavior graphs. The Component Activation Summary links component operations to the behavior graphs which contain one or more instances of them. For example, if the simulator notices that the ALU performs an add sometime during an instruction, then the ALU's add operation is linked to the behavior graph generated by simulating that instruction. The Register Transfer Summary captures the cumulative effects each behavior graph has on the circuit's internal registers. Figure 2 shows how the various behavior representations are generated.

Figure 2: Behavior Representations (example input sequences and component behaviors yield behavior graphs via simulation; behavior graphs yield register changes and component activations via summarization)

3.3 Summary of the Domain Characteristics

When reasoning about VLSI circuits, two observations stand out: first, that they are engineered artifacts, designed to accomplish a purpose, and second, that their behavior can be extremely complex. The first observation gives us a way to overcome the difficulties resulting from the second. These observations have several implications which together guide us to the strategy of explicitly representing the space of designed behavior with behavior graphs and of searching those graphs for solutions to testing goals.

We recognize that test generation is a very difficult problem: partial solutions are likely to be the norm in this field for many years to come. Consequently, rather than trying to build a test generator which solves all problems, we are aiming for one which solves some of the problems quickly - and just as important - which fails quickly on the rest so that other methods can be applied. Naturally, it is undesirable for a test generator to fail, but it is much worse for it to run for a large and unpredictable amount of time with no result either way.

Our method quickly and exhaustively searches the circuit's designed behavior for solutions to testing goals. If a solution is present, then our program will find it, and it does so quickly because the search space is limited. To the extent that the Designed-Behavior Hypothesis is true, i.e. that efficient tests for components exist within the designed behavior, then our method is an effective test generation strategy. The example in section 6 shows the performance of our program on a simple processor.

The question of what the designer should do when the test generator fails can be answered by stepping back from the test generation problem and considering its place within the larger context of circuit design and manufacturing. Test generation is but one part of the design task. It is possible to trade off ease of testability against performance and reliability considerations. Thus, if generating tests for a given circuit requires too much effort or is impossible, then the designer can consider modifying the circuit to make the problem easier. Any redesign can be accomplished using Design for Testability techniques.

The next section introduces a simple processor which we use as an illustration throughout the rest of the paper, and the section after that describes the details of our method.

4 The MARK-2 Microprocessor

The circuit shown in figure 5 is a simple horizontally microprogrammed processor. It is a slightly modified version of the fictional MAC-1 processor presented as a teaching example in [Tanenbaum84]. The central portion of this circuit is a 16-bit-wide datapath; the righthand portion is a sequencer, including a ROM holding 80 lines of microcode implementing 23 instructions; and the lefthand portion is a RAM. The address and data busses and their associated signal lines are this circuit's primary inputs and outputs.
The internal nodes are not directly accessible and must be controlled indirectly through intermediate components.

5 The Details

Our method has three parts: (1) a preprocessor for creating the behavior graphs, (2) a micro-planner for finding patterns of activity among the behavior graphs which solve testing goals, and (3) a macro-planner for managing global aspects of the circuit's state which aren't captured in the behavior graphs. We discuss each part in turn.

5.1 The Preprocessor

The purpose of the preprocessor is to create a database of behavior graphs which describe how information flows through the circuit. We do this using simulation, although it could be done in other ways (e.g. by observing a working version of the circuit). We use simulation because it can easily create behavior graphs containing symbolic data, thus reducing the number of simulation runs (or observations) we need. For example, suppose we're interested in the ADD instruction of the MARK-2. Rather than simulate the instruction once for each combination of numbers the processor could add, we need only simulate the instruction once, using variables for data.

The inputs to the simulator are the complete set of circuit operations collected from the databook, plus perhaps some interesting sequences of operations provided by test experts. We represent these simulator inputs as programs, because that works well with another approach to test generation discussed in [Shirley85].

We divide the circuit state into two components: control state and data state.4 Control state is initialized the same way for each simulation run, while data state is initialized with variables. For example, the program counter is initialized to ?PC and is left by most instructions holding the expression (+ ?PC 1).

The simulator itself is event-driven, where an event is a circuit node changing its value.
Its essential features are (1) propagating and simplifying symbolic expressions and (2) recording justifications between events. These justification links form a flow graph which is one component of the behavior graph. The expressions describing the values of nodes are functions of the values on circuit inputs during the simulation run and the initial states of the data registers. The variables in these expressions are marked to say where they originated; consequently, the expressions form another, much simplified, flow graph linking the values at each node back to primary inputs. The behavior graph is the union of these two flow graphs.

4 This division depends on the level at which the circuit is described, but for any given level it is fairly clear and generally corresponds to the division of controller and datapath. For the MARK-2, the control state is the microprogram counter and the micro-instruction register, and the data state is everything else (e.g. the accumulator and the program counter).

Figure 3: An event pattern (a composite event with subevents OP = :PLUS AT ?Time, INPUT-1 = ?Value-1 AT ?Time, INPUT-2 = ?Value-2 AT ?Time, OUTPUT = ?Output AT ?Time; constraints (and (controllable? ?Value-1) (controllable? ?Value-2) (observable? ?Output)); plus specific test patterns for INPUT-1 and INPUT-2, e.g. 0/65535, 65535/0, 1/65535, 2/65535, 4/65535, 8/65535, 16/65535, and more)

Figure 4: Portion of ADDD's behavior graph (shown as a table; rows for times 98-102 show the ALU performing :PLUS on ?Addend and ?AC to produce (+ ?AC ?Addend), where ?Addend = Contents(RAM, ?ADDR); "???" indicates an unknown value, "." an unchanged value (i.e. ditto), and "->" the match)

5.2 The Micro-Planner

The micro-planner's task is to take a test for a single component, expressed in terms of the component's I/O pins, and to rewrite it in terms of the circuit's pseudo-inputs and outputs (i.e. primary I/O pins plus data registers). The core of this planner is a matcher which takes component tests and finds instances of them in the behavior graph database. If successful, this planner produces a sequence of inputs for the circuit which will do most of the work needed to cause the test to occur within the circuit. The remainder of the work is done by the macro-planner.

Where do the component tests come from? We get them from two sources: for standard combinational devices and sequential devices, we ask the domain experts. For other combinational devices, we use a conventional test generator. In principle, we could also use a recursive application of our test generation method, though we have not yet tried to do this.

Component tests are expressed in two parts: (a) descriptions of prototypical operations that the component must be able to perform (e.g. a register must be able to load data) and (b) actual test data (e.g. have the register try to load these numbers). We provide a rich language for describing prototypical operations; the actual data are just sequences of numbers.

The pattern in figure 3 describes a prototypical add operation of the MARK-2's ALU. The four subevents describe the relations between place, value and time that define the ALU's add operation. The additional constraints further specialize this pattern to match only instances of add operations that are useful for testing the ALU. In order to help, the expressions on the two input nodes must be controllable, i.e. they must be invertible functions of two distinct primary inputs. This allows us to apply test data to the ALU by transforming the data using the inverse functions, then feeding the inverted data into the two primary inputs. As the inverted test data moves through the circuit to the ALU, it is transformed back to the desired value, thus exercising the ALU properly. Similarly, there must be an output of the circuit which is an invertible function of the ALU's output. With that we can determine what actually came out of the ALU, by applying the inverse function to data that we observe at the circuit output.

The Component Activation Summary from section 3.2 allows the matcher to divide the set of behavior graphs into relevant and irrelevant subsets depending on whether the component "did anything interesting" during a particular behavior. The matcher then tries each potentially relevant behavior graph in turn until it finds a situation which meets all of the constraints. The matching process itself is straightforward unification augmented to check the additional pattern constraints.

The matcher finds numerous places where the MARK-2's behavior graphs match the subevents of the pattern in figure 3, i.e. when the processor is "put through its paces," there are many situations in which the ALU performs an addition. But only one of these potential matches meets the additional restrictions needed for it to help test the ALU. This match occurs in the behavior graph for the ADD-Direct-from-memory instruction (ADDD), which increments the value of the accumulator by an amount stored in main memory. The relevant portion of this graph appears in figure 4, displayed in tabular form. In all of the other potential matches the ALU is either incrementing the program counter or the stack pointer; in all of these cases, one of the ALU's inputs is a constant, and thus is not controllable.

Recall that our purpose here is to apply specific test data to the ALU. An additional result of matching an event pattern against a behavior graph is a variable binding environment. The bindings resulting from matching the pattern in figure 3 against the behavior graph in figure 4 are:

?Time == 75
?Value-1 == Contents(RAM, ?Addr)
?Value-2 == ?AC
?Output == (+ Contents(RAM, ?Addr) ?AC)

The program can then back-solve the equations in this binding environment to determine how it must set up the circuit's registers in order to apply particular values to the ALU's inputs. In this case, one addend (i.e. ?Value-1) must be in the RAM at some address which can be specified later, and the other addend (i.e. ?Value-2) must be in the accumulator.

At this point the micro-planner has identified the ADDD instruction as the one to use to test the ALU's addition operation. Moreover, it has also determined how the data registers must be set up so that the ADDD instruction can do this job. It is now the job of the macro-planner to work out a sequence of circuit operations which will initialize the data registers properly.

5.3 The Macro-Planner

The purpose of the macro-planner is to generate sequences of operations (in this case sequences of instructions) which set up the circuit's state so that the single test operation found by the micro-planner will start with the right data. Since a test operation will sometimes leave important data in one or more registers (e.g. ADDD leaves the ALU's output in the accumulator), a second job of the macro-planner is to work out how to move results to one of the circuit's outputs where it can be observed.

The macro-planner searches through the space of instruction sequences to find one which moves data around properly. The planner is implemented as straightforward best-first search with a metric based on the sequence length and the amount of useful data contained in the registers. The operator descriptions (i.e. descriptions of the effects of instructions) are contained in the Register Transfer Summaries, which are generated by our system from the behavior graphs.
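The best-first search just described can be sketched in a few lines. The three operators below are simplified stand-ins for Register Transfer Summary entries (they only model an accumulator and one RAM cell), and the metric is reduced to sequence length; neither is the paper's actual implementation.

```python
import heapq

# Sketch of the macro-planner's best-first search over instruction
# sequences.  Each operator maps a register state to a new state, in
# the spirit of Register Transfer Summaries; these three operators are
# simplified stand-ins, not the real MARK-2 instruction set.

OPERATORS = {
    "PRELOAD-MEMORY": lambda s: {**s, "RAM": "?VALUE"},
    "LODD":           lambda s: {**s, "AC": s["RAM"]},
    "STOD":           lambda s: {**s, "RAM": s["AC"]},
}

def plan(start, goal, max_len=4):
    """Best-first search with sequence length as the metric; returns the
    first instruction sequence whose final state satisfies goal."""
    frontier = [(0, 0, [], start)]   # (length, tiebreak, sequence, state)
    tie = 0
    while frontier:
        length, _, seq, state = heapq.heappop(frontier)
        if all(state.get(k) == v for k, v in goal.items()):
            return seq
        if length == max_len:
            continue
        for name, op in OPERATORS.items():
            tie += 1
            heapq.heappush(frontier, (length + 1, tie, seq + [name], op(state)))
    return None
```

For the goal of getting `?VALUE` into the accumulator, the shortest sequence is preload memory, then load: `["PRELOAD-MEMORY", "LODD"]`. An unreachable goal exhausts the bounded frontier and returns `None`, mirroring the paper's emphasis on failing quickly.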
An example of the end product of all of this is the sequence of circuit operations below. The micro-planner chose the ADDD operation for testing the ALU and provided information for the macro-planner, which subsequently included three operations to set up circuit state and two operations to observe it.5

Setup inputs ----> 1: (PRELOAD-MEMORY ?AC ?ADDR)
                   2: (LODD ?ADDR)
                   3: (PRELOAD-MEMORY ?VALUE ?ADDR)
Test ------------> 4: (ADDD ?ADDR)
Observe outputs -> 5: (STOD ?ADDR)
                   6: (OBSERVE-MEMORY (+ ?VALUE ?AC) ?ADDR)

5 We include two special operations describing capabilities of the tester hardware. These are the preload-memory and observe-memory operations, which write data into and read data from the RAM.

6 Performance on an Example Circuit

Figure 6 shows the fault coverage that our prototype system achieves with the MARK-2 processor.6 The program was able to generate tests for the highlighted components, while it failed to do so for the others. Note that the program can quickly work out how to test virtually all of the datapath, which includes more than half of the physical circuit, since data paths are typically much wider than control paths. Our program also quickly fails on the controller, an area which the expert says is partially testable with effort, but really should be modified. Thus our program's failure points out areas of the circuit that a redesign program should focus on.

6 This circuit has 16 components and 91 nodes. Each simulation run is approximately 50 clock cycles long, taking about 10 seconds of real time on a Symbolics 3640. Generation of prototype tests, i.e. tests with symbolic data, for this circuit takes 1 minute, including both the time taken for successfully creating tests for some components and failing to do so for the others.

It is also important to realize that the program succeeds in testing the datapath despite the fact that the datapath is controlled almost completely by 80 lines of intricate microcode. Part of the program's success can be attributed to its use of abstract descriptions of the circuit's behavior (e.g. while controlling the register state). The program generates these abstract descriptions from the designed behavior. Also, the use of simulation and pattern matching is critical: reasoning backwards through the microcode sequencer is very difficult, yet this is exactly what goal-directed planners and classical test generation algorithms would try to do. A clear advantage of our method is that it can work forward through extremely complex components using simulation (a well-behaved reasoning method) to solve problems involving the components on the other side.

7 Explaining the Performance

Why is there a radical difference in coverage between the datapath and the controller? We conjecture that the performance of our method depends on the relationship between the circuit operations and the operations of the components that implement them. In particular, the more "direct" the relationship between individual component operations and circuit operations, the better our method performs. For example, the ADD operation of the ALU very directly implements part of the ADDD instruction of the processor, and consequently, our method is easily able to solve testing goals involving that ALU operation.

We believe the directness of the mapping between component and circuit operations varies over different parts of the circuit, and that this variation is a direct result of the design process. Consider the following account of processor design: a designer starts with a specification for an abstract machine which includes the programmer accessible registers and the instruction set. His job is to implement this abstract machine in hardware while meeting myriad performance, reliability, and cost constraints.
The specification can be viewed as a set of dataflow graphs, one for each instruction, describing how data is transformed as it moves from register to register. Naturally the designer can't implement these dataflow graphs directly in hardware: without some sharing there would be a wasteful duplication of functionality. So he looks for ways of merging the graphs together. To do this, he adds to the graphs components which perform identity transformations. For example, he might insert a register in one graph to shift some of its operations later in time, thereby allowing components to be shared with another graph via time-multiplexing. In another situation, he might introduce identity boxes into two graphs and implement them using a single multiplexor. When the flow graphs fit together, the designer collects all the control signals from all the graphs and creates a finite state machine (FSM) to provide these signals at the right times. This FSM is implemented using any one of the well-known methods (e.g. with a PLA).

By this account, the way the datapath is designed (incremental refinement and merging) is very different from the way the controller is designed (stylized implementation of a state machine). The availability of components which directly implement large portions of individual processor operations (e.g. ALU chips), plus the fact that the merging process doesn't normally change existing components (it just adds new identity boxes), means that many datapath component operations tend to "very directly implement" circuit operations. The state machine design process, however, need yield no such simple relationships. The behavior of the whole controller (a state machine) is very different from the behavior of any controller component (e.g. a register, ROM, or MUX).
[Figure 5: The MARK-2 Processor]

[Figure 6: Coverage of processor components]

This account also suggests a difference between our approach to test generation and the classical approaches. The usual result of the design process, a circuit topology, is the superposition of the dataflow graphs, plus the identity boxes added to allow sharing, plus the controlling state machine. This topology doesn't include the list of operations the circuit can perform, nor does it make explicit how data moves through the circuit, e.g. timing and selection relationships, both of which were explicit in the flow graphs before they were merged.

Classical test generation algorithms perform poorly because they attempt to generate tests directly from the circuit components and topology. Testing experts perform well because they know the same things designers know and can reconstruct and use the designer's flow graphs. Finally, our program is successful for the same reason: it reconstructs and uses behavior graphs, which are an approximation of the flow graphs.

8 Conclusion

We have demonstrated how knowledge of a system's designed behavior can help solve test generation problems, or more generally, problems involving manipulation of complex engineered systems. The key to our method is that a device's designed behavior can be far more limited than its potential behavior. This limitation can translate into a reduction of the search necessary to achieve planning goals. A prototype system has been implemented and run on a small number of examples.
This system has 3 components: (1) a preprocessor to create the behavior graphs using symbolic simulation of input sequences out of the databook; (2) a micro-planner to match testing goals gotten from domain experts against the simulation runs (the matcher is guided by the Component Activation Summaries); and (3) a macro-planner to control global aspects of the circuit's state (the behavior graphs also supply the planner's operator descriptions).

The system has generated tests for a simple processor's datapath. It was successful despite the fact that the datapath is controlled by 80 lines of intricate microcode. A clear advantage of our method is that it can, by using simulation, reason forward through complex components, such as the controller, in order to solve problems involving the components on the other side.

9 Acknowledgments

The following people have read drafts or otherwise contributed to the contents of this paper: Randall Davis, Gordon Robinson, Yehudah Freundlich, Jeff Van Baalen, Walter Hamscher, Glenn Kramer, Howard Shrobe, Raúl Valdés-Pérez, Brian Williams, Peng Wu. I would especially like to thank Gordon Robinson, who is the testing expert whose methods inspired much of this work.

The domain experts report an interesting dichotomy. They say that, for humans, test generation problems are either very easy or very hard. The problems are easy when they can see clearly which manipulations of the device's interfaces are useful for exercising components inside the circuit, and hard otherwise. In our work, we have made some effort, albeit informally, to match the performance of our test generation system to the expert's intuitions about which problems are easy for humans and which are hard. And the expert does report that the results on the MARK-2 match his expectations fairly closely.
EVIDENTIAL REASONING WITH TEMPORAL ASPECTS

Dr. Thomas C. Fall
Advanced Decision Systems
201 San Antonio Circle, Suite 286
Mountain View, California 94040-1289

ABSTRACT

In the real world, one usually cannot gather enough evidence to completely determine the activity of a system. Typically, different pieces of evidence tell you about different aspects of the system with different certainties. Correlating these at a given point of time is a much-studied problem. The problem becomes even less tractable when the evidence is acquired at different points in time. The system described in this paper uses frame-like objects called "models" that propagate the effects of a piece of evidence through time, and uses Gordon and Shortliffe's theory to combine the effects of the active models. These models do not require either a great deal of storage or that the evidence be processed in temporal order. Further, they seem to be a construct that the experts in our problems easily relate to. Results with test problems are consistent with the estimates of experts and are produced in reasonable time.

From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved.

I WHAT THE SYSTEM NEEDS TO ACCOMPLISH

A. An Example Problem

1. Problem Background

Several systems, currently under development, address problems which share three characteristics: 1) In each problem, the goal is to determine and/or predict the behaviors of a class of objects. 2) The sources of information available vary in accuracy and descriptive power. 3) Each object can ultimately be classified as being in one of a fixed number of states. Further, these states can be arranged hierarchically.

Unfortunately, the problems these systems are designed to solve cannot be publicly discussed. However, we developed the core processing system to be domain independent, so I have made up an example problem that will show the kinds of issues the processing part of the system addresses while avoiding the real-life knowledge content.

2. The Example

Let's say we're in a town with three competing taxicab companies: the Blue Cabs, the Green Cabs and the Orange Cabs. The owner of the Blue Cab Company feels that if he can set up a system to determine the whereabouts of all the competing taxis, he can get a competitive advantage by dispatching his cabs to areas where the competitors are underrepresented and keeping his cabs out of areas where they are overrepresented. He gets his information from reports from his drivers or from listening in to his competitors' dispatches. His drivers might only be able to identify which company a cab was with, but would be fairly precise as to activity. On the other hand, when overhearing the competitors' dispatches, the identity of the cab might be precise, but the activity might not. The knowledge of the activity of each cab can be diagramed using a hierarchy as illustrated in Figure 1.

[Figure 1. Activity Hierarchy for a Taxi Cab]

Numbers are assigned to each slot of the diagram according to the certainty we have that the cab is engaged in that activity. If we had absolutely no knowledge, then we would put 1.0 in the "not known" slot and 0.0 elsewhere. If we were sure that it was garaged, but could not come to any more definite conclusion than that, then we would put 1.0 in the "garaged" slot and 0.0 everywhere else. If we were 100% sure that the cab was garaged, and we were 50% sure that it was garaged because of shift change, then we would put 0.50 in "garaged", 0.50 in "shift change" and 0.0 everywhere else. The total certainty in the hierarchy is always 1.0, and as we get more and more specific knowledge, that amount will be pushed lower into the hierarchy.
Each piece of evidence yields an assignment to the diagram, and Gordon and Shortliffe's work [1] allows us to combine the assignments from several pieces of evidence.

3. A Naive Method for Prediction

These hierarchy diagrams are, unfortunately, only a snapshot at a particular instant. Our pieces of evidence come in at various times and all need to be combined to get an evaluation for a particular point of time. Further, that point of time will be in the future if we want to use our system to predict activities. One way is to use our knowledge of probable future behavior to project the effects of evidence through time. For instance, say we know that usually a pickup from the airport would be going to a midtown hotel, after which the cab would wait in the hotel's cab queue for another fare. One would predict then that if a cab were in the airport queue, then in 30 minutes there would be a 30% likelihood that the cab would have picked up a fare and a 25% likelihood that it had picked up a fare, had already dropped him off, and was waiting in the hotel queue for another fare. The remaining 45% would be relegated to not known.

One of our drivers reports at 1:30 that he is 80% sure that cab Green-4 was in the airport taxi queue. Using our knowledge, we would say that in 30 minutes there is a 24% (0.8 x 0.3) likelihood that Green-4 will be cruising to the hotel, a 20% (0.8 x 0.25) chance that he will already be in the hotel queue, and 56% uncertainty. We could fill this into the hierarchy diagram for 2:00 as one piece of evidence. If there were two Blue cabs at the airport and each made a separate report, the combination of the two pieces of evidence would yield a 36.11% certainty that the cab would be carrying a fare to the hotel at 2:00, a 29.20% chance that the cab was in the hotel queue, and now only 34.69% uncertainty, which is already questionable.
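The two-report figures above can be reproduced with a small sketch of Dempster-style combination over a three-element frame. This is a flat approximation of the full Gordon and Shortliffe hierarchical scheme (state names here are illustrative, not from the paper's implementation), so the ten-report figures it produces differ slightly from those quoted in the text, but the dominance effect is the same:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dict: frozenset of states -> mass)
    by Dempster's rule, renormalizing away the conflicting mass."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        c = a & b
        if c:
            combined[c] = combined.get(c, 0.0) + wa * wb
        else:
            conflict += wa * wb
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# One driver's report projected to 2:00.
THETA = frozenset({"fare_in", "hotel_queue", "not_known"})
report = {frozenset({"fare_in"}): 0.24,
          frozenset({"hotel_queue"}): 0.20,
          THETA: 0.56}

two = dempster_combine(report, report)     # two "independent" reports
# -> 36.11% fare_in, 29.20% hotel_queue, 34.69% uncertainty, as in the text

many = report
for _ in range(9):                         # ten reports in all
    many = dempster_combine(many, report)  # uncertainty all but vanishes
```

Repeating the combination drives the mass on the full frame (the "not known" slot) toward zero, which is exactly the counterintuitive dominance the text describes.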
If there are 10 Blue cabs who each report separately into the system, then the combination of evidence yields 61.88% for the cab having a fare, 36.32% for the cab being in the hotel queue, and only 1.80% for not-known. In repeated additions of similar pieces of evidence, the slot which is lowest in the hierarchy eventually dominates. If there are two competing slots that do not have an ancestral relationship, the larger will dominate. So, even though there is a difference of only 4% between "fare" and "hotel queue" in the report (24% vs. 20%), after 60 applications there would be 95.60% in the "fare" slot, 4.40% in the "hotel queue" and a negligible quantity in "not-known". This conclusion is clearly counterintuitive.

4. Deficiencies of the Naive Approach

This procedure for propagating the effects of evidence has two main deficiencies:

a. The procedure for projecting effects through time is inflexible. If a taxi was doing one activity now, it would project a probability distribution of activities for each time in the future. Unfortunately, this does not take into account other history for the taxi. For example, though we know it is now in the airport queue, it may be that 5 minutes ago it was still driving the fare to the airport. This would imply that it is still fairly far back in the queue.

b. The Gordon and Shortliffe combination method assumes that the pieces of evidence are independent. The example above shows what can happen when you treat several reports which are all just separate reports of one activity as if they were independent.

These deficiencies are closely related and can be to a large extent solved by the introduction of an additional mechanism, the "model".

B. The "Model" Mechanism

1. The Model Instantiated

The "model" is a frame-like object whose slots are activities that a taxi would engage in, along with the minimum and maximum residency time in each of them.
For instance, a model might have "fare to airport", which would take the cab a minimum of 20 to a maximum of 50 minutes, followed by "wait in airport taxi queue" for 30 to 60 minutes, "fare from airport" for 20 to 50 minutes and finally "wait in hotel queue" for 20 to 40 minutes. If a report showed that the taxi was in a state with some certainty, and that state was one of these four or a child of them, the model would be instantiated with the given certainty. The residency times of the instantiation would be fixed according to the report time. For instance, say at 1:00 we hear the Orange dispatcher tell Orange-7 to go ahead and join the airport queue after dropping off his fare. This data only tells us that Orange-7 is probably driving out to the airport (say 60% sure), not what his ETA is. The following diagram shows how the other states would be predicted.

[Figure 2. A Model as Instantiated by a Report at 1:00]

Assuming our finest time resolution is to 10 minutes, the earliest the taxi could be at the airport is at 1:10. If the taxi had just started, it could take him 50 minutes. If he had a minimum wait at the airport, 30 minutes, and a minimum ride back, he could be at the hotel at 2:00. However, if he had a long ride out and a long wait, he could still be in the airport queue till almost 3:00. The hatched lines show the possible times the taxi could be in each state.

If we were trying to predict what the taxi was doing at 2:30, this model would say that he could be waiting in the airport queue, driving to the hotel, or in the hotel queue. If we divided the certainty of being in this model by the number of possible states at 2:30, we would get a certainty of 0.2 for each.

2. The Model Updated

Now, say a Blue driver reports at 2:10 that he's 70% sure that he sees Orange-7 in the airport taxi queue. Combining the two certainties using the Gordon and Shortliffe theory gives us a new certainty for the model of 0.88 = (0.7 x 0.6) + (0.7 x (1 - 0.6)) + ((1 - 0.7) x 0.6). Also, we now know that he cannot start driving back until 2:20, and even if he has the shortest drive, he won't be in the hotel queue until 2:40. Thus, at 2:30, Orange-7 could only be in one of two states. If we divide the 0.88 between these two states, we get 0.44 apiece, as compared to the 0.2 we had from just the original piece of evidence.

[Figure 3. The Model Updated by a Report at 2:10]

Earlier, we had said there were two problems with our original system. This model mechanism addresses the first in that it clearly is flexible and does account for history from other evidence. For instance, the second report in our example had an effect on the minimum residency for the subsequent states, but had no effect on the maximum, since the first piece was still the constraint for it.

As for the second deficiency, the one due to dependency, this mechanism at least partially addresses it. For instance, repeated reports early in the model would be combined as if they were independent, to give the net effect of increasing the certainty of being in that model. However, when we look into the future, that model may predict several possible activities, so that certainty would be split among them. Only a piece of evidence temporally distinct from the first flurry can have the effect of reducing the number of states. Note that the evidence could have been processed in the reverse order with the same results.

II HOW THE SYSTEM ACCOMPLISHES IT

The systems developed consist of four major parts: 1) the hierarchy of states, 2) the set of models that describe the patterns of behavior, 3) a report conditioner that takes the reports in the form they are received and converts them to the form (state object-id time certainty), and 4) the engine that performs the procedures.
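Before going into the engine, note that the certainty arithmetic of the model-update example in Section I.B can be sketched in a few lines (the state names are illustrative):

```python
def support_combine(c1, c2):
    """Certainty that a model holds given two independent supporting reports:
    1 - (1-c1)(1-c2), which expands to c1*c2 + c1*(1-c2) + (1-c1)*c2."""
    return 1.0 - (1.0 - c1) * (1.0 - c2)

model_cert = support_combine(0.6, 0.7)          # 0.88, as in the text
feasible_at_230 = ["airport_queue", "fare_in"]  # narrowed by the 2:10 report
per_state = model_cert / len(feasible_at_230)   # 0.44 apiece
```

The second report thus both raises the certainty of the model instance and, by tightening the residency windows, reduces the number of states that certainty is spread over.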
The first three are domain dependent and the last is domain independent. This section will go a little more deeply into the architecture of the engine.

A report is processed by the report conditioner to produce the data item (maintenance Orange-7 4:00 0.50). Let's say we have a model where the state "garaged" appears and another where the state "overhaul" appears. Referring back to the state hierarchy, illustrated in Figure 1, we see that "garaged" is a parent of "maintenance". That is, if Orange-7 is in "maintenance" then it is surely "garaged", so we would want to call that model with the full 0.50 certainty. The other model, with the state "overhaul", is more problematic. In the current system we make the ad hoc valuation by calling the model with half the certainty of the parent state (since in this case the parent state has two children). This could be changed to reflect greater knowledge, using countings for instance.

This process is handled by a function (state-relation state-1 state-2) that returns the value 0 if state-1 and state-2 are not related, 1 if state-1 is above state-2, and 1/n if state-1 is one of n children of state-2. If state-1 is further below state-2, it returns the product of the intermediate steps.

Once it is determined that there is a non-zero state-relation between the event state and a state on a model, another module is brought into play that adjusts the timings of that instance of the model. This mechanism is what implements the timing adjustments discussed in the example of Section I.B, "The 'Model' Mechanism".

A couple of additional comments are in order about the time adjustment function. First, if a state occurs several times on a model, the model will be instantiated for each occurrence. Second, there are two phantom states: "begin" and "end". Every instance of a model has the "begin" at its start and "end" at its finish, where both have a zero time duration.
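The (state-relation state-1 state-2) function just described admits a compact sketch. The hierarchy slice below is an assumption for illustration (the text confirms only that "garaged" is a parent of "maintenance" and that "overhaul" sits below it with a sibling):

```python
def children(h, s):
    return h.get(s, [])

def below(h, anc, s):
    """True if s is a strict descendant of anc in the hierarchy."""
    return any(c == s or below(h, c, s) for c in children(h, anc))

def state_relation(h, s1, s2):
    """0 if unrelated; 1 if s1 is at or above s2; 1/n per level if s1
    lies below s2 (the product of the intermediate steps)."""
    if s1 == s2 or below(h, s1, s2):
        return 1.0
    if below(h, s2, s1):
        n = len(children(h, s2))
        for c in children(h, s2):
            if c == s1:
                return 1.0 / n
            if below(h, c, s1):
                return (1.0 / n) * state_relation(h, s1, c)
    return 0.0

# Illustrative slice of the Figure 1 hierarchy (assumed names).
H = {"not_known": ["garaged", "active"],
     "garaged": ["maintenance", "shift_change"],
     "maintenance": ["overhaul", "routine"]}
```

With this hierarchy, a "maintenance" report calls a model containing "garaged" at full strength, a model containing "overhaul" at half strength, and leaves unrelated models untouched.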
If an event state does not match any states on the instance, a match to "begin" or "end" is attempted. In the case of the "maintenance" report at 4:00 from above, it would have the effect on the example model of Section I.B of closing it off at 4:00 and incrementing its certainty. If there were no match on these two phantom states either, then the certainty of the model would be decremented. An incoming piece of evidence will first be used to adjust the timings of instances already on the blackboard. Only if it cannot will it be used to instantiate new models.

Typically, the question the system is supposed to answer is what the accumulated evidence tells us about the object's activities at some particular point in time. Depending on the evidence, there might be many models which are active at that time. This is the realm of the combining function, which uses the methods of Gordon and Shortliffe. Each model active on the blackboard is treated as a piece of evidence. For instance, in Section I.B we discussed how the first piece of data would instantiate a model and how we would use that to make a prediction for 2:30. We use that prediction to assign a valuation of 0.2 to each of "airport queue", "fare in" and "hotel queue", 0.4 to "not known", and 0.0 elsewhere. The combining function uses the Gordon and Shortliffe techniques to combine this valuation with valuations derived from other active models to arrive at the assessment of likely activities of Orange-7 at 2:30.

III DISCUSSION OF THE SYSTEM IN OPERATION

A common activity can be on several models. A report of this activity would call all of them. Of course, when we looked into the future, these models would be predicting activities that conflict with activities predicted by other models also called; when we combined them, they would cancel each other out, so that we wouldn't get too much information out. But that is intuitive: a common activity should not have a lot of predictive power. In the real world,
we can also get contradictory evidence. To incorporate this type of evidence in our implementation, each piece of evidence either combines with and enhances models already on the blackboard if it is compatible, or it decrements the certainty of the existing models and adds new models that it is compatible with (these new ones have their certainty decremented by any evidence already posted and in conflict). Again, the effects of these models will tend to cancel one another out, but a preponderance of complementary evidence will push the prediction in its direction.

This system has been implemented in ZetaLisp on a Symbolics 3670 for a problem which has about 30 states and 10 models. In this problem, we receive a report that could correspond to one of several candidates, and we would like to narrow the field. The system projects the effect of known evidence about each candidate to the time of the report of the unknown vehicle for comparison. The candidates are ranked by their degree of similarity to the unknown. In a typical test problem involving two candidates, the experts felt the evidence indicated one was about 60 to 70 percent likely to be the correct one. Our system required about 5 minutes to analyze the evidence, about 50 reports, to arrive at a 64% likelihood for that candidate. Further, the experts can relate to the system's reasoning process, since they use similar internal constructs.

As has been pointed out by several authors, some systems for handling uncertainty have regions of parameter values where they are unstable. For instance, Lesmo, Saitta and Torasso [2] cite a case in PROSPECTOR where a 10% relative error in parameters causes a 50% relative error in the output certainty (i.e., 0.53 versus 0.29). Our experience with models is that they tend to make the system stable. That is, changes in input parameters tend to make only proportional changes in output certainties.
About the only exception is a change in residency time, which might add or remove states from a prediction.

Another issue is control. Martin-Clouaire and Prade [3] discuss the need for trying to use the inferences which will have the greatest effect, in order to prune the number of inferences that have to be processed. The models effectively replace large numbers of rules, but pruning is still necessary. If too many models get on the blackboard and/or an activity calls too many (certain common activities may be on numerous models), the machinery that determines compatibility can be seriously slowed. Our implementation has a "global rejection value" (GRV). If evidence calls a model or decrements a model to below this global rejection value, that model will be rejected. This test is done at many points in the implementation. For instance, the rejection test is done before the compatibility test, since compatibility chews up CPU cycles. In fact, the evidence is tested right at the front. Usually we run with the GRV set at 0.01, which seems to allow weak evidence to have some effect without slowing the system unreasonably. At this setting, a piece of conditioned evidence will process in about 5 seconds. In contrast, when the GRV is set at 0.0, some pieces of evidence required 10 minutes for processing, since they called up a mammoth number of models.

In sum, using the Gordon and Shortliffe theory in combination with the model concept to temporally propagate the effect of evidence mimics much of the expert's reasoning and achieves results that are intuitive to the expert.

REFERENCES

[1] Gordon, J. & Shortliffe, E.H., "A Method for Managing Evidential Reasoning in a Hierarchical Hypothesis Space," Artificial Intelligence 26 (1985) 323-357.

[2] Lesmo, L., Saitta, L. & Torasso, P., "Evidence Combination in Expert Systems," International Journal of Man-Machine Studies 22:3 (1985) 307-326.

[3] Martin-Clouaire, R.
& Prade, H., "On the Problem of Representation and Propagation of Uncertainty in Expert Systems," International Journal of Man-Machine Studies 22:3 (1985) 251-264.
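The global-rejection-value pruning described above reduces, in essence, to a cheap threshold filter applied before any expensive processing. A minimal sketch (field names are illustrative, not from the ZetaLisp implementation):

```python
GRV = 0.01  # global rejection value; 0.01 is the setting reported above

def prune(instances, grv=GRV):
    """Cheap rejection test applied before the expensive compatibility
    test: any model instance whose certainty has fallen below the GRV
    is discarded from the blackboard."""
    return [m for m in instances if m["certainty"] >= grv]

blackboard = [{"name": "airport_run", "certainty": 0.44},
              {"name": "garage_visit", "certainty": 0.003}]
blackboard = prune(blackboard)   # only airport_run survives
```

Applying the test at many points, and before the compatibility machinery in particular, is what keeps weak evidence from calling up a mammoth number of models.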
A FRAMEWORK FOR EVIDENTIAL-REASONING SYSTEMS*

John D. Lowrance, Thomas D. Garvey, Thomas M. Strat
Artificial Intelligence Center
SRI International
Menlo Park, California

Abstract

Evidential reasoning is a body of techniques that supports automated reasoning from evidence. It is based upon the Dempster-Shafer theory of belief functions. Both the formal basis and a framework for the implementation of automated reasoning systems based upon these techniques are presented. The formal and practical approaches are divided into four parts: (1) specifying a set of distinct propositional spaces, each of which delimits a set of possible world situations; (2) specifying the interrelationships among these propositional spaces; (3) representing bodies of evidence as belief distributions over these propositional spaces; and (4) establishing paths for the bodies of evidence to move through these propositional spaces by means of evidential operations, eventually converging on spaces where the target questions can be answered.

I Introduction

For the past several years, we have been addressing perceptual problems that bridge the gap between low-level sensing and high-level reasoning [Low82,GLF81,LG83b,LG83a,LSG86,Wes86]. Problems that fall into this gap are often characterized by multiple evidential sources of real-time data, which must be properly combined and interpreted. This paper describes a framework for reasoning with perceptual data that forms the basis for evidential-reasoning systems.

The information required to understand the current state of the world comes from multiple sources: real-time sensor data, previously stored general knowledge, and current contextual information. Sensors typically provide evidence in support of certain conclusions. Evidence is characteristically uncertain: it allows for multiple possible explanations; it is incomplete: the source rarely has a full view of the situation; and it may be completely or partially incorrect.
The quality and the ease with which situational information may be extracted from a synthesis of current sensor data and prestored knowledge is a function both of how strongly the characteristics of the sensed data focus on appropriate intermediate conclusions and of the strength and effectiveness of the relations between those conclusions and situation events.

*This research was sponsored in part by the U.S. Navy Space and Naval Warfare Systems Command and the Defense Advanced Research Projects Agency under contract N00039-83-K-0656, and by the U.S. Army Signal Warfare Center under contract DAAL02-85-C-0082.

Given its characteristics, evidence is not readily represented either by logical formalisms or by classical probabilistic estimates. Because of this, developers of automated systems that must reason from evidence have frequently turned to informal, heuristic methods for handling uncertain information. The "probabilities" produced by these informal approaches often cause difficulties in interpretation, and the lack of a formally consistent method can cause problems in extending the capabilities of such systems effectively. Our work in evidential reasoning was motivated by these shortcomings. Our theory is based on the Shafer-Dempster theory of evidence [Dem68,Sha76,Sha86] and aims to overcome some of the difficulties in reasoning from evidence by providing a natural representation for evidential information, a formal basis for drawing conclusions from evidence, and a representation for belief.

In evidential reasoning, a knowledge source (KS) is allowed to express probabilistic opinions about the (partial) truth or falsity of statements composed of subsets of propositions from a space of distinct, exhaustive possibilities (called the frame of discernment). The theory allows a KS to assign belief to the individual propositions in the space, to disjunctions of these propositions, or both.
When it assigns belief to a disjunction, a KS is explicitly stating that it does not have enough information to distribute this belief more precisely. This condition has the attractive feature of enabling a KS to distribute its belief over statements whose granularity is appropriate to its state of knowledge. Also, the statements to which belief is assigned are not required to be distinct from one another. The distribution of beliefs over a frame of discernment is called a body of evidence.

Evidential reasoning provides a formal method, Dempster's Rule of Combination, for fusing (i.e., pooling) two bodies of evidence. The result is a new body of evidence representing the consensus of the two original bodies of evidence, which may in turn be combined with other evidence. Because belief may be associated directly with a disjunction of propositions, the probability in any selected proposition is typically underconstrained. This necessitates an interval measure of belief, because belief associated with a disjunction may, based upon additional information, devolve entirely upon any one of the disjuncts. Thus, an interval associated with a proposition implies that the true probability associated with that proposition must fall somewhere in the interval. A side effect of applying Dempster's rule is a measure of conflict between the two bodies of evidence that provides a means for detecting possible gross errors in the information.

Evidential reasoning is a term coined by SRI International [LG82] to denote the body of techniques specifically designed for manipulating and reasoning from evidential information as characterized in this paper.
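The interval measure of belief and the conflict side effect can both be made concrete with a short sketch over a toy frame (the frame elements and masses are illustrative, not from the paper):

```python
from itertools import product

def combine(m1, m2):
    """Dempster's Rule of Combination; also returns the conflict mass,
    which serves as the gross-error indicator mentioned above."""
    fused, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        c = a & b
        if c:
            fused[c] = fused.get(c, 0.0) + wa * wb
        else:
            conflict += wa * wb
    return {s: w / (1.0 - conflict) for s, w in fused.items()}, conflict

def belief(m, prop):        # mass committed to subsets of prop
    return sum(w for s, w in m.items() if s <= prop)

def plausibility(m, prop):  # mass not committed against prop
    return sum(w for s, w in m.items() if s & prop)

THETA = frozenset({"a", "b", "c"})
m = {frozenset({"a", "b"}): 0.6, THETA: 0.4}   # belief left on a disjunction
# Interval for {a}: [Bel, Pl] = [0.0, 1.0]; for {c}: [0.0, 0.4]
```

The 0.6 assigned to the disjunction {a, b} may later devolve entirely upon either disjunct, which is exactly why {a} carries the interval [0.0, 1.0] rather than a point probability.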
Current expert-systems technology is most effective when domain knowledge can be modeled as a set of loosely interconnected concepts (i.e., propositions) [DK77]; this loose interconnection justifies an incremental approach to updating beliefs. In most of our work, there is the potential for strong interconnectivity among beliefs in propositions. We, therefore, focus on a body of evidence as a primitive, meaningful collection of interrelated (dependent) beliefs; updating the belief in one proposition affects the entire body of evidence (other work has addressed the concept of a body of evidence in a production-rule formalism [Kon79,LB82] by creating special entities).

Evidential reasoning provides options for the representation of information: independent opinions are expressed by multiple (independent) bodies of evidence; dependent opinions (in which belief in one proposition depends on that of another) can either be expressed by a single body of evidence or by a network that describes the interrelationships among several bodies of evidence. These networks of bodies of evidence capture the genealogy of each body (similar in spirit to those of [Coh85]) and are used in a manner similar to data-flow models [WA84] for updating interrelated beliefs (i.e., for belief revision [Doy81]).

In this paper we assume some familiarity with the Dempster-Shafer theory of beliefs, although the appropriate equations from this theory are included. We begin with a discussion of the formal approach to the problem of reasoning from evidence and then progress to a description of the implementation approach, including an example. We close with a short description of the system that we have developed for applying evidential reasoning.
Once this has been accom- plished, logical questions can be posed and resolved in terms of the frame. Given two propositions, A; and Aj, the follow- ing logical operations and relation can be resolved through the associated set operations and relation: -+A; c @A-A; A;AAj _ &nAj Ai V Aj _ Ai U Ai A; + Aj _ Ai C Aj . If other aspects of ships are of interest besides their loca- tion, then additional frames of discernment might be defined. For example, the activities of these ships might be of inter- est. If so, an additional frame OB might be defined to include elements corresponding to refueling, loading cargo, unloading cargo, being enroute, and the like. Propositional statements pertaining to a ship’s activity can then be defined relative to QB = {Wz,. . A} Bj C @B . So far, propositional statements pertaining to a ship’s loca- tion or pertaining to its activity can be addressed separately, but they cannot be jointly considered. To do this, one must 2 Formal Approach first define a compatibility relation between the two frames. A compatibility relation simply describes which elements from the two frames can be true simultaneously. For example, a 2.1 Framing the Problem The first step in applying evidential reasoning to a given prob- lem is to delimit a propositional space of possible situations. Within the theory of belief functions, this propositional space is called the frame of discernment. It is so named because all bodies of evidence are expressed relative to this surrounding framework, and it is through this framework that the inter- action of the evidence is discerned. A frame of discernment delimits a set of possible situations, exactly one of which is true at any one time. For example, the problem to be ad- dressed is that of locating a ship. 
In this case, the frame of discernment consists of the set of all possible locations for that vessel, This might be represented by a set @A in which each element ai corresponds to a possible location: ship located at a loading dock might be loading or unloading cargo, but is not refueling, or enroute. In other words, be- ing located at a loading dock is only compatible with one of two activities, loading or unloading. Thus, the compatibility relation between frames @A and 08 is a subset of the cross product of the two frames. A pair (ai, bj) is included if and only if they can be true simultaneously. There is at least one pair (ai, bj) included for each ai in @A (the analogue is true for each bj): @A,B s @A x @B . Using the compatibility relation F~A,B we can define a com- patibdity mapping CA+,B for translating propositional state- ments expressed relative to eA to statements relative to eB. If a statement Ak is true, then the statement CA++B(A~) is also true: Once a frame of discernment has been established, propo- sitional statements can be represented by disjunctions of el- ements from the frame corresponding to those situations for which the statements are true. For example, the proposition Ai might correspond to the statement that the vessel is located in port, in which case 4 would be represented by the subset of elements from @A that correspond to possible locations within port facilities: Ai C eA e CA-B : 2 0 A ++ 2@B CA-B (Ak) = {bjl(ai,bj) E OA,B,~~ E Ak} . Instead of translating propositional statements between these two frames via CA-B and CB,+A, we might chgose to trans- late these statements to a common frame that captures all of the information. This common frame is identical to the com- patibility relation @A,B. 
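The frame and compatibility-mapping machinery above can be sketched in a few lines of Python (a hypothetical rendering, not part of the formal development; the element names are invented but echo the shipping example):

```python
# Hypothetical sketch: frames as sets, propositions as subsets, and a
# compatibility mapping implementing
#   C_{A->B}(Ak) = { bj | (ai, bj) in Theta_{A,B}, ai in Ak }.
# All element names here are invented for illustration.

THETA_A = {"zone1", "channel", "loading_dock"}     # locations
THETA_B = {"enroute", "tug_escort", "loading"}     # activities

# Compatibility relation Theta_{A,B}: pairs that can be true simultaneously.
THETA_AB = {
    ("zone1", "enroute"),
    ("channel", "tug_escort"),
    ("loading_dock", "loading"),
}

def translate(a_k, relation):
    """C_{A->B}: translate a proposition (a subset of Theta_A) into Theta_B."""
    return {b for (a, b) in relation if a in a_k}

def negate(a_i, frame):
    """Logical negation resolves to set complement within the frame."""
    return frame - a_i

print(translate({"zone1", "channel"}, THETA_AB))
```

Here `translate({"zone1", "channel"}, THETA_AB)` collects every activity compatible with either of the two locations, exactly as the definition of $C_{A \to B}$ prescribes.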
Frame $\Theta_A$ (and analogously $\Theta_B$) is trivially related to frame $\Theta_{A,B}$ via the following compatibility relation and compatibility mappings:

$\Theta_{A,(A,B)} = \{(a_i, (a_i, b_j)) \mid (a_i, b_j) \in \Theta_{A,B}\}$
$C_{A \to (A,B)}(A_k) = \{(a_i, b_j) \mid (a_i, (a_i, b_j)) \in \Theta_{A,(A,B)},\ a_i \in A_k\} = \{(a_i, b_j) \mid (a_i, b_j) \in \Theta_{A,B},\ a_i \in A_k\}$
$C_{(A,B) \to A}(X_k) = \{a_i \mid (a_i, b_j) \in \Theta_{A,B},\ (a_i, b_j) \in X_k\}$.

Clearly, as more aspects of these ships become of interest, the number and complexity of the frames and compatibility mappings increases. However, there is a trade-off between the complexity of individual frames and the complexity of the network of compatibility mappings connecting them. We might define a single (complex) frame that encompasses all aspects of interest or, alternatively, define a (complex) network of frames that includes a distinct frame for each aspect of interest. Of course, these may not be equivalent. For example, consider the following frame:

$\Theta_{A,B,C} = \{(a_1, b_1, c_1),\ (a_2, b_1, c_2),\ (a_2, b_2, c_2)\}$.

If this frame properly captures the relationship among frames $\Theta_A$, $\Theta_B$, and $\Theta_C$, then $c_1$ is the only element from $\Theta_C$ compatible with $a_1$ from $\Theta_A$. However, if we maintain these as three separate frames connected by compatibility mappings, $C_{A \to B}$, $C_{B \to A}$, $C_{B \to C}$, and $C_{C \to B}$, both $c_1$ and $c_2$ are compatible with $a_1$ because $a_1$ is compatible with $b_1$, and $b_1$ is compatible with both $c_1$ and $c_2$; i.e., $C_{B \to C}(C_{A \to B}(\{a_1\})) = \{c_1, c_2\}$. However, if $a_1$ is true, then it follows that either $c_1$ or $c_2$ is true. Thus, the reasoning based on a well-formed gallery of interconnected frames is sound but not necessarily complete. A gallery is well formed if there exists a single all-encompassing frame whose answers are always included in the answers based upon the gallery.

In dynamic environments, compatibility relations can be used to reason over time. If $\Theta_{A1}$ represents the possible states of the world at time one and $\Theta_{A2}$ represents the possible states at time two, then a compatibility relation, $\Theta_{A1,A2}$, can capture the possible state transitions.
For example, $\Theta_{A1}$ and $\Theta_{A2}$ might both represent the possible locations of a ship (i.e., they are identical to $\Theta_A$ as previously defined); then $\Theta_{A1,A2}$ could represent the constraints on that ship's movement. A pair of locations $(a_i, a_j)$ would be included in $\Theta_{A1,A2}$ if a ship located at $a_i$ on Day 1 (i.e., time one) could reach $a_j$ by Day 2. If we assume that the possible movements of a ship are constrained in the same way over any two-day period, then the compatibility mapping associated with this compatibility relation can be reapplied as many times as necessary to constrain the possible locations of a ship across an arbitrary number of days.

2.2 Analyzing the Evidence

Once a gallery has been established, the available evidence can be analyzed. The goal of this analysis is to establish a line of reasoning, based upon both the possibilistic information in the gallery and the probabilistic information from the evidence, that determines the most likely answers to some questions. The gallery delimits the space of possible situations, and the evidential information establishes the likelihoods of these possibilities. Within an analysis, bodies of evidence are expressed relative to frames in the gallery, and paths are established for the bodies of evidence to move through the frames via the compatibility mappings. An analysis also specifies if other evidential operations are to be performed, including whether multiple bodies of evidence are to be combined when they arrive at common frames. Finally, an analysis specifies which frame and ultimate bodies of evidence are to be used to answer each target question. Thus, an analysis specifies a means of arguing from multiple bodies of evidence towards a particular (probabilistic) conclusion. An analysis, in an evidential context, is the analogue of a proof tree in a logical context.

To begin, each body of evidence is expressed relative to a frame in the gallery.
Each is represented as a mass distribution (e.g., $m_A$) over propositional statements discerned by a frame (e.g., $\Theta_A$):

$m_A : 2^{\Theta_A} \mapsto [0, 1]$
$\sum_{A_i \subseteq \Theta_A} m_A(A_i) = 1$
$m_A(\emptyset) = 0$.

Intuitively, mass is attributed to the most precise propositions a body of evidence supports. If a portion of mass is attributed to a proposition $A_i$, it represents a minimal commitment to that proposition and all the propositions it implies. Additional mass attributed to a proposition $A_j$ that is compatible with $A_i$, but does not imply it (i.e., $\emptyset \neq A_i \cap A_j \neq A_j$), represents a potential commitment: mass that neither supports nor denies that proposition at present but might later move either way based upon additional information.

To interpret this body of evidence relative to the question $A_j$, we calculate its support and plausibility to derive its evidential interval as follows:

$Spt(A_j) = \sum_{A_i \subseteq A_j} m_A(A_i)$
$Pls(A_j) = 1 - Spt(\Theta_A - A_j)$
$[Spt(A_j), Pls(A_j)] \subseteq [0, 1]$.

The lower bound of an evidential interval indicates the degree to which the evidence supports the proposition, while the upper bound indicates the degree to which the evidence fails to refute the proposition, i.e., the degree to which it remains plausible. This evidential interval, for the most part, corresponds to bounds on the probability of $A_j$. Thus, complete ignorance is represented by an evidential interval of [0.0, 1.0] and a precise probability assignment is represented by the "interval" collapsed about that point (e.g., [0.7, 0.7]). Other degrees of ignorance are captured by evidential intervals with widths other than 0 or 1 (e.g., [0.6, 0.8], [0.0, 0.5], [0.9, 1.0]).

If a body of evidence is to be interpreted relative to a question expressed over a different frame from the one over which the evidence is expressed, a path of compatibility relations connecting the two frames is required.
The mass distribution expressing the body of evidence is then repeatedly translated from frame to frame, via compatibility mappings, until it reaches the ultimate frame of the question. In translating $m_A$ from frame $\Theta_A$ to frame $\Theta_B$ via compatibility mapping $C_{A \to B}$, the following computation is applied to derive the translated mass distribution $m_B$:

$m_B(B_j) = \sum_{C_{A \to B}(A_i) = B_j} m_A(A_i)$.

Intuitively, if we (partially) believe $A_i$, and $A_i$ implies $B_j$, then we should have the same (partial) belief in $B_j$. This same method is applied to move mass distributions among frames that represent states of the world at different times. However, when this is the case, the operation is called projection.

Once two mass distributions $m_A^1$ and $m_A^2$ representing independent opinions are expressed relative to the same frame of discernment, they can be fused (i.e., combined) using Dempster's Rule of Combination. Dempster's rule pools mass distributions to produce a new mass distribution $m_A^3$ that represents the consensus of the original disparate opinions. That is, Dempster's rule produces a new mass distribution that leans towards points of agreement between the original opinions and away from points of disagreement. Dempster's rule is defined as follows:

$m_A^3(A_k) = (1 - k)^{-1} \sum_{A_i \cap A_j = A_k} m_A^1(A_i)\, m_A^2(A_j)$
$k = \sum_{A_i \cap A_j = \emptyset} m_A^1(A_i)\, m_A^2(A_j) \neq 1$.

Since Dempster's rule is both commutative and associative, multiple (independent) bodies of evidence can be combined in any order without affecting the result. If the initial bodies of evidence are independent, then the derivative bodies of evidence are independent as long as they share no common ancestors. Thus, in the course of constructing an analysis, attention must be paid to the way that evidence is propagated and combined to guarantee the independence of the evidence at each combination.

Other evidential operations can also be included in an analysis. One frequently used operation is discounting.
This operation adjusts a mass distribution to reflect its source's credibility (expressed as a discount rate $r \in [0, 1]$). If a source is completely reliable ($r = 0$), discounting has no effect; if it is completely unreliable ($r = 1$), discounting strips away all apparent information content; otherwise, discounting lowers the apparent information content in proportion to the source's unreliability:

$m'_A(A_i) = (1 - r)\, m_A(A_i)$, if $A_i \neq \Theta_A$;
$m'_A(\Theta_A) = r + (1 - r)\, m_A(\Theta_A)$, otherwise.

Other evidential operations include summarization and gisting (among others). Summarization eliminates extraneous details from a mass distribution by collecting all of the extremely small amounts of mass attributed to propositions and attributing the sum to the disjunction of those propositions. Gisting produces the "central" Boolean-valued statement that captures the essence of a mass distribution. This is particularly useful when explaining lines of reasoning.

3 Implementation Approach

In implementing this formal approach, we have found that the gallery, frames, compatibility relations, and analyses can all be represented straightforwardly as graphs consisting of nodes connected by directed edges. This has led us to use Grasper II [Low86,Low78], a programming language extension to LISP that introduces graphs as a primitive data type. A graph in Grasper II consists of a set of labeled subgraphs. Each subgraph consists of a set of labeled nodes and a set of labeled, directed edges that connect pairs of nodes. Each node, edge, and subgraph has a value that can be used as a general repository for information. Once the graphical representations have been established for the gallery, frames, compatibility relations, and analyses, the remainder of the formal approach is easily implemented.

The first step is to define the gallery. If the problem is to reason about the locations and activities of ships, we might include two frames: a LOCATIONS frame and an ACTIVITIES frame.
These are each represented as nodes in a subgraph called the SHIP-GALLERY (Figure 1). In addition, the gallery might include three compatibility relations represented as edges. One compatibility relation, LOCATIONS-ACTIVITIES, relates locations to activities and is represented by an edge from LOCATIONS to ACTIVITIES. The two other compatibility relations, DELTA-LOCATIONS and DELTA-ACTIVITIES, describe how a ship's location and activity on one day are related to the next day's. Each of these is represented by an edge that begins and ends at the same node.

The next step is to define the frames in the gallery. Each of these is represented by a subgraph sharing the same name as a node from the gallery. Each such subgraph includes a node for each element of the frame and may include additional nodes representing aliases, i.e., named disjunctions of elements. Each of these additional nodes has edges pointing to elements of the frame (or other aliases) that make up the disjunction. The LOCATIONS frame (Figure 2) includes six elements (ZONE1, ZONE2, ZONE3, CHANNEL, LOADING-DOCK, REFUELING-DOCK) and three aliases (IN-PORT, DOCKED, AT-SEA). The ACTIVITIES frame (Figure 3) includes five elements (ENROUTE, TUG-ESCORT, UNLOADING, LOADING, REFUELING).

Each compatibility relation in the gallery is represented as a subgraph that includes the nodes from the frames that they relate, with edges connecting compatible elements. For example, in the LOCATIONS-ACTIVITIES compatibility relation (Figure 4), ZONE1, ZONE2, and ZONE3 are all connected to ENROUTE (because these zones represent areas at sea), CHANNEL is connected to TUG-ESCORT (because a ship entering or leaving the port at the end of this channel would be under tugboat control), LOADING-DOCK is connected to both LOADING and UNLOADING (because either activity is consistent with being at that dock), and REFUELING-DOCK is connected to REFUELING.
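The frame-with-aliases encoding just described can be sketched with plain Python dictionaries standing in for Grasper II subgraphs (a hedged illustration; the particular alias expansions below, DOCKED as the two docks and IN-PORT as the channel plus DOCKED, are our assumption about the figures):

```python
# Hedged sketch (ours, not Grasper II): element nodes plus alias nodes whose
# edges point to the disjuncts they name.  The alias expansions are assumed,
# not taken from the paper's figures.

ELEMENTS = {"zone1", "zone2", "zone3", "channel",
            "loading_dock", "refueling_dock"}

ALIASES = {
    "at_sea": {"zone1", "zone2", "zone3"},
    "docked": {"loading_dock", "refueling_dock"},
    "in_port": {"channel", "docked"},     # aliases may point to other aliases
}

def expand(name):
    """Resolve an alias (or a bare element) to the frame elements it denotes."""
    if name in ELEMENTS:
        return {name}
    return set().union(*(expand(n) for n in ALIASES[name]))

print(expand("in_port"))
```

Because aliases may point to other aliases, `expand` recurses until only frame elements remain, which is how a named disjunction is resolved for use in a mass distribution.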
DELTA-LOCATIONS and DELTA-ACTIVITIES (Figures 5 and 6) relate frames to themselves. They represent possible state transitions in their respective frames over any two-day period. Edges connect compatible elements from one day to the next. DELTA-LOCATIONS indicates that the zones are linearly ordered and that a ship must pass through the channel to get to either the loading or refueling docks. It also indicates that a ship will only remain at the refueling dock or in the channel for one day at a time but may remain anywhere else for any number of days. In DELTA-ACTIVITIES it can be seen that a ship must progress through TUG-ESCORT from ENROUTE before proceeding to REFUELING or UNLOADING and that REFUELING and TUG-ESCORT are one-day activities. Further, a ship must go through LOADING after UNLOADING before returning to TUG-ESCORT.

After the gallery and its supporting frames and compatibility relations have been established, evidential analyses can be constructed. These analyses are represented as data-flow graphs where the data and the operations are evidential. Figure 7 is one such analysis. Here primitive bodies of evidence are represented by elliptical nodes and derivative bodies of evidence are represented by circular nodes. Diamond-shaped nodes represent interpretations of bodies of evidence. The values of these nodes are used as repositories for the information (i.e., data) that they represent (Figure 8). For bodies of evidence this includes a frame of discernment (including the day to which the evidence pertains), a mass distribution, and other supporting information. Edges pointing to a derivative node are labeled with the evidential operation that is applied to the bodies of evidence, at the other ends of the edges, to derive the body of evidence represented by this node.

In the analysis of a ship in Figure 7, there are three primitive bodies of evidence.
REPORT1 locates the ship on Day 1, saying that there is a 70 percent chance that it can be found in the CHANNEL and a 30 percent chance that it is in ZONE1; REPORT2 says that the ship was IN-PORT on Day 2; and REPORT3 indicates that the ship was LOADING cargo on Day 3. REPORT1 is taken at face value, but REPORT2 and REPORT3 have been discounted by 20 percent and 40 percent, respectively, to derive D2 and D3, reflecting doubt in the credibility of these reports. REPORT1 has been projected forward by one day to derive P1 and then has been fused with D2 to derive a consensus for Day 2, F12. D3 has been projected backwards in time by one day to derive P3 and then has been translated from the ACTIVITIES frame to the LOCATIONS frame. Finally, this result, T3, has been fused with F12 to derive a consensus, based on all three reports, about the ship's location on Day 2.

The interpretation nodes in this analysis track the evidential intervals for some key propositions. I1 is based solely on REPORT1 and indicates that there is precisely a 70 percent chance of the ship being IN-PORT [0.7, 0.7] and no chance of it being DOCKED [0.0, 0.0] on Day 1. IP1 indicates that, based solely upon REPORT1, after one day has elapsed, nothing is known about whether the ship is IN-PORT [0.0, 1.0], but that it may now be DOCKED [0.0, 0.7]. If REPORT2 is included after being discounted, IF12 indicates that there is strong reason to believe that the ship is IN-PORT [0.8, 1.0], but there is conflicting information concerning whether or not it is DOCKED [0.56, 0.7]. IT3 indicates, based solely upon REPORT3, after having been discounted, projected backwards a day, and translated to the LOCATIONS frame, that there is 0.6 support and 1.0 plausibility for both IN-PORT and DOCKED.
Finally, when all three reports are considered, IF123 indicates strong belief that the ship is IN-PORT [0.9, 1.0] on Day 2 and a reasonably strong belief, though mixed, that it is also DOCKED [0.78, 0.85]. (Note that the distribution at REPORT1 is a Bayesian distribution, i.e., a distribution over exclusive elements, but application of the projection operation results in a non-Bayesian distribution at P1.)

4 Evidential-Reasoning Systems

To support the construction, modification, and interrogation of evidential analyses, we have developed Gister. Gister supports an interactive, menu-driven, graphical interface that allows these structures to be easily manipulated. The user simply selects from a menu to add an evidential operation to an analysis, to modify operation parameters (e.g., discount rates), or to change any portion of a gallery including its frames and compatibility relations. In response, Gister updates the analyses. All of the figures in this paper are actual screen images from Gister.

Figure 7 includes the menus for working with analyses. On the left side of the screen is a menu of nouns. The user determines with what class of objects he wishes to work and selects the appropriate noun from the menu. Once a noun has been selected, a menu of verbs appears on the right side of the screen. A selection from this menu invokes the operation corresponding to the selected verb on the previously selected noun. The user then designates the appropriate nodes, edges, and the like for the selected operation.

Unlike other expert systems, Gister is designed as a tool for the domain expert. With this tool, an expert can quickly and flexibly develop a line of reasoning specific to a given domain situation. This differs markedly from other expert systems in which a single line of reasoning is developed by an expert and then is instantiated over different situations by nonexperts.
This approach has been successfully applied to Naval intelligence problems. New work is focusing on adapting this technology to multisource data fusion for the Army.

5 Summary

Evidential reasoning has already been successfully applied to problems in several domains. However, the addition of the compatibility relation to the theory of beliefs, the formalization and development of new evidential operators, and the use of graphical representations have greatly improved the overall usefulness and accessibility of these techniques.

References

[Coh85] Paul Cohen. Heuristic Reasoning about Uncertainty: An Artificial Intelligence Approach. Pitman Publishing, Inc., 1985.

[Dem68] Arthur P. Dempster. A generalization of Bayesian inference. Journal of the Royal Statistical Society, 30:205-247, 1968.

[DK77] R. Davis and J. J. King. An overview of production systems. In E. Elcock and D. Michie, editors, Machine Intelligence 8, pages 300-332, Ellis Horwood, Chichester, England, 1977.

[Doy81] Jon Doyle. A truth maintenance system. In Bonnie Lynn Webber and Nils J. Nilsson, editors, Readings in Artificial Intelligence, pages 496-516, Tioga Publishing Company, Palo Alto, California, 1981.

[GLF81] Thomas D. Garvey, John D. Lowrance, and Martin A. Fischler. An inference technique for integrating knowledge from disparate sources. In Proceedings of the Seventh International Joint Conference on Artificial Intelligence, pages 319-325, American Association for Artificial Intelligence, 445 Burgess Drive, Menlo Park, California 94025, August 1981.

[Kon79] Kurt Konolige. Bayesian methods for updating probabilities. In R. O. Duda, P. E. Hart, K. Konolige, and R. Reboh, editors, A Computer-Based Consultant for Mineral Exploration, Final Report, SRI Project 6415, 333 Ravenswood Avenue, Menlo Park, California 94025, 1979.

[LB82] John F. Lemmer and Stephen W. Barth. Efficient minimum information updating for Bayesian inferencing in expert systems.
In Proceedings of the National Conference on Artificial Intelligence, pages 424-427, American Association for Artificial Intelligence, 445 Burgess Drive, Menlo Park, California 94025, August 1982.

[LG82] John D. Lowrance and Thomas D. Garvey. Evidential reasoning: a developing concept. In Proceedings of the International Conference on Cybernetics and Society, pages 6-9, Institute of Electrical and Electronics Engineers, October 1982.

[LG83a] John D. Lowrance and Thomas D. Garvey. Evidential Reasoning: An Approach to the Simulation of a Weapons Operation Center. Technical Report, Artificial Intelligence Center, SRI International, 333 Ravenswood Avenue, Menlo Park, California 94025, September 1983.

[LG83b] John D. Lowrance and Thomas D. Garvey. Evidential Reasoning: An Implementation for Multisensor Integration. Technical Report 307, Artificial Intelligence Center, SRI International, 333 Ravenswood Avenue, Menlo Park, California 94025, December 1983.

Figure 1: SHIP-GALLERY Gallery.

[Low78] John D. Lowrance. Grasper 1.0 Reference Manual. COINS Technical Report 78-20, Department of Computer and Information Science, University of Massachusetts, Amherst, Massachusetts 01003, December 1978.

[Low82] John D. Lowrance. Dependency-Graph Models of Evidential Support. PhD thesis, Department of Computer and Information Science, University of Massachusetts, Amherst, Massachusetts, September 1982.

[Low86] John D. Lowrance. Grasper II Reference Manual. Artificial Intelligence Center, SRI International, 333 Ravenswood Avenue, Menlo Park, California 94025, 1986. In preparation.

[LSG86] John D. Lowrance, Thomas M. Strat, and Thomas D. Garvey. Application of Artificial Intelligence Techniques to Naval Intelligence Analysis. Final Report SRI Contract 6486, Artificial Intelligence Center, SRI International, 333 Ravenswood Avenue, Menlo Park, California 94025, May 1986. In preparation.

[Sha76] Glenn Shafer.
A Mathematical Theory of Evidence. Princeton University Press, Princeton, New Jersey, 1976.

[Sha86] Glenn Shafer. Belief functions and possibility measures. The Analysis of Fuzzy Information, 1, 1986. To appear.

[WA84] W. W. Wadge and E. A. Ashcroft. Lucid, The Dataflow Programming Language. Academic Press U.K., 1984.

[Wes86] Leonard P. Wesley. Evidential Based Control in Knowledge-Based Systems. PhD thesis, Department of Computer and Information Science, University of Massachusetts, Amherst, Massachusetts, 1986. In preparation.

Figure 2: LOCATIONS Frame.
Figure 3: ACTIVITIES Frame.
Figure 4: LOCATIONS-ACTIVITIES Compatibility Relation.
Figure 5: DELTA-LOCATIONS Compatibility Relation.
Figure 6: DELTA-ACTIVITIES Compatibility Relation.
Figure 7: ANALYSIS1 Analysis.
Figure 8: Data from ANALYSIS1.
CIS: A MASSIVELY CONCURRENT RULE-BASED SYSTEM

Guy E. Blelloch
AI Lab, Massachusetts Institute of Technology
Rm. 739, 545 Technology Square
Cambridge, MA 02139
Net Mail: guyb@mit-ai

Abstract

Recently researchers have suggested several computational models in which one programs by specifying large networks of simple devices. Such models are interesting because they go to the roots of concurrency - the circuit level. A problem with the models is that it is unclear how to program large systems and expensive to implement many features that are taken for granted in symbolic programming languages. This paper describes the Concurrent Inference System (CIS), and its implementation on a massively concurrent network model of computation. It shows how much of the functionality of current rule-based systems can be implemented in a straightforward manner within such models. Unlike conventional implementations of rule-based systems in which the inference engine and rule sets are clearly divided at run time, CIS compiles the rules into a large static concurrent network of very simple devices. In this network the rules and inference engine are no longer distinct. The Thinking Machines Corporation Connection Machine - a 65,536 processor SIMD computer - is then used to run the network. On the current implementation, real-time user-system interaction is possible with up to 100,000 rules.

1 Introduction

The Concurrent Inference System (CIS) is an interactive rule-based system similar to Mycin [Davis77]. It asks the user questions and makes inferences according to the answers. The current version is capable of forward and backward chaining, which run concurrently; using meta-rules of the sort described by Davis [1980]; and reasoning with uncertainty, using a variation of Zadeh's [1965] rules. With 100,000 rules on the current implementation of CIS, a global inference step takes less than two seconds.
A global inference step is the time needed for a single change to percolate through all the rules.

CIS was implemented to show that much of the functionality of a rule-based system can be implemented with a simple and implementationally cheap concurrent model of computation, and furthermore that programming the system in the model is relatively straightforward. The model used is the activity flow network (AFN) model [Blelloch86]. Activity flow networks are similar to the connectionist networks of Hinton [1981], Feldman [1982] and Rumelhart [1986].

CIS does much of its work at compile time, leaving at run time a static network of computational devices not significantly more complex than logic gates. It is easy and efficient to run such networks on massively concurrent SIMD computers such as the Connection Machine.

Because the networks are completely static and use very simple devices, it is hard and expensive to implement the general power of logic programming languages such as Prolog. For example, with CIS it is expensive to use high-precision numbers, hard to dynamically bind arbitrary values to a parameter, and not possible to execute general-purpose unification or create an arbitrary number of instances of an object. This paper argues that many practical rule sets do not require these features. For example, the rule sets of Mycin [Davis 77], R1 [McDermott 80] and Prospector [Gaschnig 82] can be implemented cleanly without them.

Section 2 discusses the AFN model. Section 3 gives a brief outline of AFL-1, the language CIS is programmed with. Sections 4 and 5 discuss CIS. Section 6 discusses the implementation of AFNs on the Connection Machine. Section 7 discusses some issues of concurrency.

2 Activity Flow Networks

In the past decade, researchers have proposed many models of concurrent processing, many of which may be described as networks of nodes and links.
As Fahlman [1982] noted, a useful way to categorize these models is by the complexity and content of the messages sent among the nodes. Figure 1 shows a taxonomy of models categorized in this way.

Figure 1: Hierarchy of Network Models of Concurrent Computation.

The taxonomy divides network models of concurrent computation into two sub-classes: static and dynamic networks. In a static network each node communicates with a fixed set of other nodes, while in a dynamic network each node can dynamically choose which other nodes it wants to talk to. Static networks are further categorized by the complexity of their messages. Finite message state (FMS) networks are static networks that only send messages containing one of a fixed finite set of states. Because the messages of an FMS are typically short and simple, FMS networks usually consist of a large number of simple nodes. FMS networks are categorized by the content of their messages. Value passing networks (VPNs) are FMS networks that only send messages containing a finite approximation of the real numbers.

VPNs are categorized by whether they allow global communications. Any path that allows a central controller to inspect the nodes and make a decision according to the results is considered global communications. In Thistle, a VPN suggested by Fahlman [1983], global communications are used heavily.
For example: the host might put a value on all the human nodes and then have those nodes send the value over their hair-color links. The system might then take a global OR of the brown nodes to see if the network knows of any brown-haired humans. In contrast, activity flow networks (AFNs) do not include any global communications. The only global control in AFNs is a clock signal that allows the network to take a step. This clock does not provide a path from the nodes back to the controller. Some connectionist models are AFNs. Many fall in other categories, either because they use global control, such as those discussed by Ackley [1985] or Touretzky [1985], or because not all their messages are finite representations of real numbers, such as in the model of Feldman [1982]. In summary, an AFN is a static network of simple nodes and links, that:
- only passes finite approximations of real numbers over its links;
- is controlled by a global clock;
- can only communicate with the external world through a set of nodes designated as its inputs and outputs.

The particular AFN model used by AFL-1 consists of five types of nodes (input, output, max, min and sum) and two types of links (inhibitory and excitatory). Nodes can have an arbitrary number of inputs and outputs. Each node has four parameters (a threshold, a slope, a rise and a saturation) which are set at compile time. A node combines its inputs using maximum, minimum or sum (according to the node's type) and applies the function shown in Figure 2 to compute its output. In AFL-1, in contrast to the distributed view used by Hinton [1981] and Touretzky [1985], the nodes are each assigned a single name. A link makes a directional connection between two nodes. Each link has a weight. As an activity is sent through a link it gets multiplied by this weight. The clock is used to synchronize all the nodes after each node sends its activity through its output and receives a new activity.
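The node model just described can be sketched in a few lines. The exact shape of the output function in Figure 2 is not fully legible in this copy, so the thresholded linear ramp below (clip at zero below the threshold, rise plus slope above it, clamp at the saturation) is an illustrative reconstruction, not the paper's definitive formula:

```python
# Minimal sketch of one AFN node. `inputs` are link activities that have
# already been multiplied by their link weights (inhibitory links negated).
# The ramp shape is an assumed reading of Figure 2.

def node_output(inputs, combine="sum", threshold=0.0,
                slope=1.0, rise=0.0, saturation=1.0):
    """Combine the inputs with the node-type function, then apply the ramp."""
    funcs = {"sum": sum, "max": max, "min": min}
    net = funcs[combine](inputs)
    if net < threshold:
        return 0.0
    return min(saturation, rise + slope * (net - threshold))
```

With a threshold of 0.2 and saturation of 0.9, two excitatory inputs of 0.5 each on a sum node yield an output of 0.8.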
Figure 2: The Output Function of an AFL-1 Node (output plotted against the excitatory input minus the inhibitory input, with the threshold, slope, rise, and saturation parameters; the plot itself is not reproduced here).

3 AFL-1

AFL-1 is an extension of the Symbolics 3600 programming environment. The purpose of the language is to translate high-level descriptions of tasks into AFNs which can then be simulated. The language consists of functions that add nodes and links to the network and the defgroup form which allows the user to hierarchically define structures (node groups) out of nodes and links. To add nodes and links to the AFN, one instantiates a node group (NG) by appending "make-" to the group name and passing the instance name and any arguments. This structure-defining mechanism is similar to the mechanisms found in constraint languages [Steele 80, Sussman 80] and circuit design languages [Batali 80].

4 CIS

This section introduces the Concurrent Inference System. In this section we will use parameter-value pairs to represent propositions. The next section describes how to assign parameters to specific objects so as to use object-parameter-value triplets to represent propositions. The specification of a rule set in CIS consists of a list of parameter definitions followed by a list of rule definitions. Each parameter definition contains a parameter name and a set of possible values. Each rule contains a list of if-parts, each a parameter-value pair, and a list of then-parts, each a parameter-value pair with a certainty factor. The following set of parameter and rule definitions will be used as an example throughout the description of CIS.
(make-parameter 'covering '(feathers hair))
(make-parameter 'animal-class '(mammal bird reptile))
(make-parameter 'eating-class '(ungulate carnivore))
(make-parameter 'ped-type '(claws hoofs))

(make-rule 'animal-rule-1
  '(if (covering hair))
  '(then (animal-class mammal .95)))

(make-rule 'animal-rule-2
  '(if (animal-class mammal) (ped-type hoofs))
  '(then (eating-class ungulate .9)))

Figure 3: Part of the Network Created by the Animal Example (the diagram itself is not reproduced here).

In the above example, rule and parameter are both names of node groups (NGs) defined by CIS using the defgroup form. Each of the make-rule or make-parameter statements creates an instance of one of these NGs. The reader should keep in mind that each top-level call in AFL-1 adds nodes and links to the AFN at compile time. At run time, the only computation that occurs is the flow of activity values along the precompiled network. Figure 3 shows part of the network that results from compiling the above definitions. In this figure, the nodes of the AFN are shown as circles and the links as small squares. Within a node, the type of node is specified below the node instance name. The weight of each link is shown inside the link's square. Node group instances are circumscribed by rectangles. The node group name is given in the lower left corner and the instance name is given in the upper left corner.

4.1 Forward Chaining

At run time, the activity value of an asserted node gives the certainty that a parameter-value proposition is true. Inferences in CIS are made by forward flow of this activity through the active nodes of the rule NG instances. The active node of each rule NG instance takes the AND of that rule's if-parts. Since the if-parts are activity values, not binary, they are combined with MIN and thresholded. This method for taking the AND of uncertain propositions is the same as the methods used by Mycin's rules and Prospector's logical relations.
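The MIN-and-threshold combination just described can be sketched directly. This is an illustrative sketch, not the paper's AFL-1 code, and the 0.2 firing threshold is our assumption:

```python
# Sketch of CIS-style forward chaining for one rule: the rule's active
# node ANDs its if-part certainties with MIN and a threshold, and each
# then-part is asserted at that value scaled by the rule's certainty
# factor. FIRE_THRESHOLD is an assumed value, not taken from the paper.

FIRE_THRESHOLD = 0.2

def rule_activity(if_certainties):
    """AND of uncertain if-parts: MIN, then threshold."""
    c = min(if_certainties)
    return c if c >= FIRE_THRESHOLD else 0.0

def then_part_certainty(if_certainties, cf):
    """Certainty asserted for a then-part with certainty factor cf."""
    return rule_activity(if_certainties) * cf
```

For animal-rule-2 with (animal-class mammal) believed at .95, (ped-type hoofs) at 1.0, and a .9 certainty factor, this asserts (eating-class ungulate) at min(.95, 1.0) * .9 = .855.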
The asserted node of the value NG instances takes the OR of its inputs. Maximum is used for the OR of uncertain propositions. The rule implementer, using the weight in each then-part of a rule, sets the weights on the links between the active and asserted nodes.

4.2 Backward Chaining

A backward chaining inference system (also called a goal-directed or consequent reasoning system) looks for parameters that can satisfy a goal parameter. By backward chaining, a rule-based system will only ask the user questions relevant to what the system is looking for. CIS backward chains by using the want-to-know nodes. The "want-to-know" activity flows in the opposite direction from the "truth" activity, i.e. from the then-parts to the if-parts. The backward chaining done by CIS has two advantages over many current rule-based systems. Firstly, since the "want-to-know" and "true" activity flow across different links, the antecedent and consequent reasoning happen concurrently. Consider a medical diagnosis program which is searching for disease X, and meanwhile stumbles across all the symptoms for a perhaps much more serious disease Y. Most current systems would ignore this, and perhaps not come to disease Y for a long time. CIS would find Y as it searched for X. Secondly, unlike Prolog and Mycin, the inferences are not restricted by recursion to follow the same path as the control. The backward chaining done by CIS is therefore easier to extend. Three examples of such extension are:
- Backward chaining links can easily be excluded from some rules; this allows a mixing of antecedent and consequent rules.
- With few changes, meta-rules of the sort discussed by Davis [1980] can be implemented; such rules are discussed in section 4.5.
- It is easy to make "fuzzy" backward chaining links and use these links as one of many heuristics that guide the search rather than force it.
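The flow of "want-to-know" activity from then-parts back to if-parts can be sketched as a fixed-point computation over the rule table. In the real system this traversal happens concurrently across the precompiled links; the serial loop below is only a stand-in for that behavior:

```python
# Sketch of backward "want-to-know" propagation. `rules` maps a rule
# name to a pair (if_parameters, then_parameters). Starting from a goal
# parameter, we repeatedly mark the if-parts of every rule whose
# then-parts are already wanted, until nothing changes.

def want_to_know(goal, rules):
    wanted = {goal}
    changed = True
    while changed:
        changed = False
        for if_params, then_params in rules.values():
            if wanted & set(then_params):
                new = set(if_params) - wanted
                if new:
                    wanted |= new
                    changed = True
    return wanted
```

For the animal example, a goal of eating-class pulls in animal-class and ped-type through animal-rule-2, and then covering through animal-rule-1.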
A potential problem with concurrent backward chaining is that the questions the system asks the user might be unfocused. This problem can be solved in CIS by making the want-to-know and ask nodes have analog values and using heuristic rules to judge the current "interest" of a parameter and activate the want-to-know nodes accordingly. These rules are implemented using more network structure. The host will pick the parameter with the highest value on its ask node when it selects a question to ask the user. The heuristic rules for activating the want-to-know nodes might include: a) asking related questions together (the rule set implementor can specify which questions are related), b) asking about the if-parts of almost active rules, and c) asking about parameters that lead to more goals. Methods for implementing these heuristics are discussed in [Blelloch 86].

Figure 4: The Interface Between the User and the AFN of CIS (the user answers questions such as "has hair? (yes/no)" posed by the host computer, which activates the AFN's input nodes according to the answers and reads results from the output nodes; the diagram itself is not reproduced here).

4.3 Input and Output

The only way an AFN can communicate with the outside world is through the input and output nodes. At run time, the CIS system uses a serial host computer to communicate with these nodes. Figure 4 pictures how the communications work. For sensor-based rule-based systems such as PDS [Fox 83] it is possible to have the I/O nodes connected directly to sensors and activators. Such direct connections will not be discussed in this paper.

4.4 Values

Each parameter can have several values, each of which creates an instance of the value NG (see Figure 3). The a-value-asserted node of a parameter is used to recognize if any of its values are asserted above some threshold. A mutually-exclusive group is placed around the asserted nodes of the values so that only one of the values (the one with the highest input) is asserted at a time.
The mutually-exclusive group can be left out if desired. Because there is a separate NG for each value, there can only be a moderate number of values, specified at compile time. Most rule sets do not require many values, and in many current rule-based systems, including Mycin, KEE and Prospector, one defines the possible values of each parameter at compile time. This helps prevent errors. The need for an NG instance for each value also precludes the use of high-precision integers and floating point numbers. Although most other rule-based systems allow such values, many applications don't need them. For example, Mycin uses integer values only for age, body temperature and dates of last examination or immunization. For these parameters, 32-bit integers are not needed; 100 or so values will suffice. The rule sets of Prospector and R1 also require no high-precision numbers.

4.5 Meta Rules

In practice, it is important to have task-specific rules that control the invocation of other rules [Davis 80, Gaschnig 82]. An example of such a rule is: "if the patient has stepped on a rusty nail, then ask questions about tetanus (activate the tetanus rules)." Davis [1980] names such rules "meta-rules", and Prospector [Gaschnig 82] names them "contextual relations".

Figure 5: The Network Created by the Tetanus Meta-Rule. In the diagram, the dashed groups signify that not all the nodes in those groups are shown. (The diagram, showing a meta-rule group connected to a parameter group, is not reproduced here.)

It is easy to add this type of rule to CIS. Figure 5 shows the network required for the tetanus rule. This network causes the system to ask all the questions relevant to tetanus when the parameter "stepped on rusted nail" is asserted.

5 Objects

When a rule set includes several instances of an object that all obey the same rules, it is convenient to create a single set of rules which are valid for all instances.
To allow for this, systems such as Mycin, OPS5 and KEE have generic objects (often part of the object-attribute-value triplet) which are used in the rules. As well as allowing multiple instances, objects allow a clean way to separate sets of rules into modules. The same sort of object abstraction can be used in CIS by creating a separate network for each instance of an object. This is done by defining a node group for an object and instantiating it, at compile time, for each instance. At run time, the host computer assigns names of new instances to the precompiled instance sub-networks. For example, if we wanted CIS to reason about two animals we could use the following definition.

(defgroup animal ()
  (make-parameter 'covering '(feathers hair))
  (make-rule 'animal-rule-1
    '(if (covering hair))
    '(then (animal-class mammal .95)))
  ...)

(make-animal 'first-animal)
(make-animal 'second-animal)

To create the needed network at compile time, CIS must know the maximum number of instances that might be needed. In most applications this is not a problem since this number does not vary greatly from one use of a system to the next; it is easy enough to put an upper bound on the number. For example, in Mycin we know there will only be one patient and the patient will usually have at most three cultures taken, each with possibly three organisms. The above method of creating multiple instances of an object does not address the problem of creating rules that cross between different instances. Such rules might include:

IF (and (father x y) (zebra x)) THEN (zebra y)
IF (= (number-of zebras) 3) THEN (herd-of zebras)

There are relatively simple ways to include such rules in CIS; see [Blelloch 86].

6 Implementation

The network processor is used to run a compiled activity flow network. A single cycle of the network processor is called an afl-step. It consists of all the nodes sending their activities, receiving new ones and computing the node function.
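An afl-step can be simulated serially in a few lines. This is a sketch under simplifying assumptions (sum-type nodes only, and a simple clamp standing in for the full node function of Figure 2), not the Connection Machine implementation:

```python
# Minimal serial simulation of one afl-step. `activity` maps node names
# to their current activity; `links` is a list of (source, target,
# weight) triples. Each link multiplies the source activity by its
# weight; a clamp to [0, 1] stands in for the full node function.

def afl_step(activity, links, f=lambda x: max(0.0, min(1.0, x))):
    incoming = {n: 0.0 for n in activity}
    for src, dst, w in links:
        incoming[dst] += activity[src] * w   # link multiplies by weight
    return {n: f(incoming[n]) for n in activity}
```

Iterating afl_step until the activities stop changing corresponds to letting an assertion percolate through the whole rule network.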
The Thinking Machines Corporation Connection Machine is currently used as the network processor. At compile time, each node is placed on a separate processor. All the output links of a node are placed in processors immediately following the node processor. Processors are also used for each input of a node and are placed immediately preceding the node processor. This means that for l links and n nodes, 2l + n processors are required (by overlapping input and output links, l + n processors can be used). At run time, a copy-prefix operation is used [Kruskal 85] to distribute the output value of a node to all the output links. After the copy-prefix, each link multiplies its input value by its weight and sends the result, using the router, to the other end of the link (a processor preceding a node processor). Max, min and sum prefixes are used to compute the max, min and sum of the inputs for each node. The node processors then compute the node function. Because of the prefix operations, the time taken by this method is independent of the largest fan-in or fan-out. The routing cycle is the most expensive step taken by this method. To include more links than processors, each physical processor can simulate several virtual processors (VPs). Such simulation causes a slightly greater than linear slow-down with the number of virtual processors. Figure 6 shows the time required by an afl-step as a function of the number of links when implemented on a 64K-processor Connection Machine. By making some assumptions about the rules and parameters and imposing a limit on the time the user is willing to wait between questions, an upper bound on the number of rules that CIS may have can be given. With the following assumptions, it is possible to include 100,000 rules in CIS.
- The maximum time a user is willing to wait is 2 seconds.
- The maximum depth of inferences in the system is 20 rules.
- The average rule has three antecedent and two consequent parts.
- The average parameter has five values.
- There are five times as many rules as parameters.

Figure 6: The Time Taken by an Afl-step as a Function of the Number of Links in the Activity Flow Network (on a 64K-processor CM; the plot of afl-step time in milliseconds against the virtual-processor ratio is not reproduced here).

For 100,000 rules, the above assumptions require a network of 2 million links. With 2 million links an afl-step requires .05 seconds, which allows 40 of them to be executed between questions. This is enough time for the answer to propagate the whole depth of the inferences.

7 Concurrency

Researchers have argued that one can achieve at best a constant speedup by implementing a rule-based system on a parallel rather than serial machine [Forgy 84, Oflazer 84]. They make these arguments based on rule-based systems developed for single-processor machines and only consider a limited interpretation of a rule-based system. In particular, they consider a model that forces the selection of rules through a single channel so only one rule can fire at a time (the "conflict resolution" stage). This type of concurrency is pictured in Figure 7a.

Figure 7: Two Types of Forward Chaining Concurrency: a) the type discussed by [Oflazer 84] and [Forgy 84], with concurrent matching but serial rule firing; b) the type used by CIS, with concurrent matching and concurrent rule firing.

The selection of a limited set of rules is necessary for some applications, in particular problem-solving systems, but even for these applications the restriction to a single rule is unnecessary. Usually only certain sets of rules must be prevented from firing simultaneously. In CIS, one can prevent rules from firing simultaneously by placing a mutually exclusive group around the rules of concern. Because much of the work is done at compile time, and because of the concurrency of AFNs, CIS can take advantage of several sources of concurrency at run time.
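The timing argument behind the 100,000-rule bound in section 6 can be checked with a little arithmetic. The 2-second wait, 0.05-second afl-step time, and 20-rule inference depth are the paper's figures; the assumption that each rule in the chain costs about two afl-steps (one for its active node, one for the asserted value node) is ours:

```python
# Back-of-the-envelope check of the capacity claim. The depth factor of
# two afl-steps per rule is an assumption, not stated in the paper.

max_wait = 2.0          # seconds the user is willing to wait (paper)
afl_step_time = 0.05    # seconds per afl-step at 2M links (Figure 6)
max_depth = 20          # maximum inference depth, in rules (paper)
steps_per_rule = 2      # assumed: rule node plus value node

steps_available = round(max_wait / afl_step_time)
steps_needed = steps_per_rule * max_depth
deep_enough = steps_needed <= steps_available
```

Under these assumptions, 40 afl-steps are available and 40 are needed, so the answer just propagates the whole depth of the inferences within the wait time.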
Among these are:
- Subrule and subparameter concurrency: Within the rules and parameters all the parts act concurrently. For example, at the same time that a value activates the rules it is connected to, it deactivates all the other values of its parameter, activates the parameter-known node, and activates its output node.
- Concurrent matching: All the antecedent parts of a rule are matched concurrently. In fact it only takes a single afl-step to match every rule in the system. This is possible because the variable references are compiled out, so the variable slots do not become bottlenecks.
- Concurrent forward propagation: All the rules can propagate their inferences concurrently. There can potentially be a large fan-out, so that a single change could propagate to make thousands of changes in just a few afl-steps. Note that this offers much more concurrency than the model considered by Forgy [1984]; Figure 7 shows the difference between the two types.
- Concurrent backward propagation: A completely concurrent AND/OR search is executed from the "goal" parameter to the parameters which can affect it. Unlike the concurrent implementations of logical inference systems discussed by Douglass [1985] and Murakami [1984], concurrent "AND" searching does not present problems. Again, this is because the variables are compiled out.
- Concurrent forward and backward propagation: As mentioned in section 4.2, forward and backward propagation happen together.
- Concurrent question selection: With the addition of the question-focusing mechanism, the system can do a concurrent search for a single question to ask the user.

Although CIS allows all these sorts of concurrency, how well the system takes advantage of them depends on the rule set being used. Since no large rule sets have been implemented in CIS, no data is available.
8 Conclusions

The initial study of CIS suggests that it is possible to implement practical systems using activity flow networks, and more generally, with connectionist or other circuit-level models. The AFL-1 code needed to implement CIS is quite simple, no more complex than the code used in other rule-based systems, and the running time of CIS is very good. With expected advances in massively concurrent hardware, the times will greatly improve. As mentioned in the paper, the AFN model imposes some restrictions on CIS. Some of these limitations can be avoided by expanding the model slightly. For example, to manipulate high-precision numbers, one could use a model that intermixes data flow and activity flow networks. To dynamically create instances of an object, one could use a system that creates extra network structure as it is needed.

Acknowledgements

I thank Phil Agre and David Waltz for their helpful comments on various parts of this work. This research was supported by the Defense Advanced Research Projects Agency (DOD), ARPA Contract N00014-85-K-0124.

References

Ackley, D.H., Hinton, G.E., Sejnowski, T.J., "A Learning Algorithm for Boltzmann Machines", Cognitive Science, 1985, 9, 147-169.
Batali, J., Hartheimer, A., "The Design Procedure Language Manual", Memo 598, MIT AI Laboratory, September 1980.
Blelloch, G.E., "AFL-1: A Programming Language for Massively Concurrent Computers", MS Thesis, Dept. of Electrical Engineering and Computer Science, MIT, June 1986.
Davis, R., Buchanan, B., Shortliffe, E., "Production Rules as a Representation for Knowledge-Based Consultation Programs", Artificial Intelligence, 1977, 8, 15-45.
Davis, R., "Meta-Rules: Reasoning about Control", Artificial Intelligence, 1980, 15, 179-222.
Douglass, R.J., "A Qualitative Assessment of Parallelism in Expert Systems", IEEE Software, May 1985, 70-81.
Fahlman, S.E., "Three Flavors of Parallelism", Proc.
National Conference of the Canadian Society for Computational Studies of Intelligence, May 1982, Saskatoon, Saskatchewan, 230-235.
Fahlman, S.E., Hinton, G.E., Sejnowski, T.J., "Massively Parallel Architectures for AI: NETL, Thistle and Boltzmann Machines", Proc. AAAI, August 1983, Washington D.C., 109-113.
Feldman, J.A., Ballard, D.H., "Connectionist Models and Their Properties", Cognitive Science, 1982, 6, 205-254.
Forgy, C.L., "The OPS5 User's Manual", TR, Carnegie-Mellon University, Department of Computer Science, 1981.
Forgy, C., Gupta, A., Newell, A., Wedig, R., "Initial Assessment of Architectures for Production Systems", Proc. AAAI, August 1984, Austin, TX, 116-120.
Fox, M.S., Lowenfeld, S., Kleinosky, P., "Techniques for Sensor-Based Diagnostics", Proc. IJCAI, August 1983, Karlsruhe, W. Germany, 158-163.
Gaschnig, J., "Prospector: An Expert System for Mineral Exploration", in Michie (Ed.), Introductory Readings in Expert Systems, New York, Gordon and Breach, 1982.
Hinton, G.E., "Implementing Semantic Networks in Parallel", in G.E. Hinton and J.A. Anderson (Eds.), Parallel Models of Associative Memory, Hillsdale, NJ: Erlbaum, 1981.
Kruskal, C.P., Rudolph, L., Snir, M., "The Power of Parallel Prefix", Proc. Int'l. Conference on Parallel Processing, August 1985, 180-185.
McDermott, J., "R1: an Expert in the Computer Systems Domain", Proc. AAAI, August 1980, Stanford University, 269-271.
Murakami, K., Kakuta, T., Onai, R., "Architectures and Hardware Systems: Parallel Inference Machine and Knowledge Base Machine", Proc. Int'l Conf. Fifth Generation Computer Systems, 1984, Tokyo, 18-36.
Oflazer, K., "Partitioning in Parallel Processing of Production Systems", Proc. Int'l Conf. Parallel Processing, August 1984, 92-100.
Rumelhart, D.E., McClelland, J.L., Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Volume I: Foundations, MIT Press, Cambridge, Mass., 1986.
Steele, G.L.
Jr., "The Definition and Implementation of a Computer Programming Language Based on Constraints", AI-TR 595, MIT AI Laboratory, August 1980.
Sussman, G.J., Steele, G.L. Jr., "Constraints - A Language for Expressing Almost-Hierarchical Descriptions", Artificial Intelligence, 1980, 14, 1-39.
Touretzky, D.S., "Symbols Among the Neurons: Details of a Connectionist Inference Architecture", Proc. IJCAI, August 1985, Los Angeles, 238-243.
Zadeh, L.A., "Fuzzy Sets", Information and Control, 1965, 8, 338-353.
PROTEAN: DERIVING PROTEIN STRUCTURE FROM CONSTRAINTS

Barbara Hayes-Roth*, Bruce Buchanan*, Olivier Lichtarge+, Mike Hewett*, Russ Altman*, James Brinkley*, Craig Cornelius*, Bruce Duncan*, and Oleg Jardetzky+
*Knowledge Systems Laboratory, Computer Science Department, Stanford University
+Stanford Magnetic Resonance Lab., Stanford University Medical Ctr., Stanford, CA 94025

Abstract

PROTEAN is an evolving knowledge-based system that is intended to identify the three-dimensional conformations of proteins in solution. Using a variety of empirically derived constraints, PROTEAN must identify legal positions for each of a protein's constituent structures (e.g., atoms, amino acids, helices) in three-dimensional space. In fact, because protein-structure analysis is an underconstrained problem, PROTEAN must identify the entire family of conformations allowed by available constraints. In this paper, we discuss PROTEAN's approach to the protein-structure analysis problem and its current implementation within the BB1 blackboard architecture.

1. Introduction

PROTEAN [3, 7, 9] is an evolving knowledge-based system, framed within the blackboard architecture, that is intended to derive the three-dimensional conformations of proteins in solution from empirical constraints. PROTEAN's problem belongs to a sub-class of constraint-satisfaction problems in which physical objects must be positioned in n-dimensional space so as to satisfy a set of constraints. Accordingly, in designing PROTEAN, we are developing knowledge and methods that apply to arrangement problems generally. We describe the PROTEAN system, as implemented in the BB1 blackboard architecture [5], and present a trace of PROTEAN's efforts to solve a small protein fragment, the lac-repressor headpiece. Finally, we discuss PROTEAN's current status.

2. Protein Structure Elucidation

Determining the structures of individual proteins is a fundamental problem in biochemistry.
It is the first step toward understanding the physical basis underlying protein functions and, possibly, designing new proteins for medical or industrial use. Biochemists distinguish the primary, secondary, and tertiary structure of a protein. A protein's primary structure is its defining, linear sequence of amino acids. A protein's secondary structure is the sequence of architectural subunits (alpha helices, beta sheets, and random coils) superimposed on successive subsequences of its primary structure. A protein's tertiary structure is the folding of the primary and secondary structures in three-dimensional space. Figure 1 shows the structure of a protein called the lac repressor headpiece.

*This work was funded in part by: NIH Grant #RR-00785; NIH Grant #RR-00711; NSF Grant #DMB84-2038; NASA/Ames Grant #NCC 2-274; Boeing Grant #W266875; DARPA Contract #N00039-83-C-0136. We thank Jeff Harvey, Vaughan Johnson, and Alan Garvey for their work on BB1.

Figure 1: 3-D Structure of the lac Repressor Headpiece Defined by NMR (showing the N-helix and C-helix; the drawing itself is not reproduced here).

Biochemists have developed reliable methods for determining a protein's primary structure and its secondary structure. In addition, they know the atomic structure of each of the twenty different amino acids that can appear in the primary structure and the radius of each different atom (its van der Waals' radius). They know the architectural characteristics of alpha helices, beta sheets, and random coils. They can determine the overall size, shape, and density of the protein molecule with hydrodynamic and light-scattering methods. Protein crystallography currently is the best method for determining tertiary structure, and there has been some success in developing knowledge-based systems for interpreting crystallographic data [12]. However, obtaining crystallized samples of proteins is not always possible. Moreover, it is not known whether the identified crystal structures match the structures of proteins in solution.
The crystal structures almost certainly deviate from the solution structures in one respect: they conceal the potential mobility of a protein's constituent structures. High-resolution nuclear magnetic resonance (NMR) offers an alternative method of obtaining structural information about proteins in solution [11, 14]. NMR experiments yield a set of measurements called nuclear Overhauser effects (NOEs). Each NOE signifies that two of a molecule's constituent atoms are in close spatial proximity (within a range of 2-5 angstroms). Other measures reveal the overall size and shape of the protein and identify atoms located near the surface of the molecule. Taken together, these data substantially constrain the space of plausible tertiary structures.

From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved.

Efforts to develop computer programs for deriving protein structure from NMR data have focused on distance geometry algorithms that minimize the value of some distance error function [14, 13, 1, 15]. These approaches suffer two major limitations. First, since NMR data are sparse, they do not identify a unique solution for a given protein. Existing programs do not thoroughly explore the "conformational space" of solutions that satisfy a given set of constraints. Instead, they explore a local region of solutions around a plausible starting structure. Second, existing programs treat potential mobility in a very limited fashion. They may hypothesize minor mobility of small substructures (such as amino acid sidechains), while failing to consider major mobility of larger substructures (such as helices).

3. Approach

PROTEAN is intended to surmount the limitations of existing methods for elucidating protein structure from NMR data. Thus, PROTEAN must: identify the family
of conformations allowed by available constraints; incorporate all available constraints to restrict the family as much as possible; and characterize the mobility of protein substructures allowed by the constraints. In so doing, it must cope with the large, combinatoric search space entailed in protein-structure analysis. PROTEAN's fundamental operation is to identify and then refine the family of positions in which a structure satisfies a designated set of constraints. Successively applied constraints successively restrict the family hypothesized for a given structure. We have identified a variety of potentially useful constraints on protein structure (see Table 1). Some of these are local constraints, such as NOE data signifying the proximity of a particular pair of atoms. Others are global constraints, such as molecular size. By combining these qualitatively different kinds of constraints, PROTEAN should be able to restrict the space of possible protein conformations.

Table 1. Some of the Available Constraints on Protein Structure
- Primary structure
- Atomic structures of individual amino acids
- van der Waals' radii of individual atoms
- Peptide bond geometry
- Secondary structure
- Architectures of alpha-helices and beta-sheets
- Molecular size
- Molecular shape
- Molecular density
- NOE measurements
- Surface data

PROTEAN must consider two factors in reasoning about structural mobility. First, it must infer mobility whenever it finds that no position for a structural subunit (such as a helix) satisfies a set of applicable constraints. Second, it must incorporate user-specified hypotheses that particular sets of constraints are or are not satisfied simultaneously in a single conformation. In both cases, PROTEAN must reason about alternative families of positions for affected structures under non-simultaneous constraint sets. To reduce the combinatorics of search, PROTEAN adopts a "divide-and-conquer" approach.
It defines partial solutions that incorporate different subsets of a protein's constituent structures and different subsets of its constraints. It focuses first on satisfying constraints within each partial solution, positioning each structure relative to a single fixed structure. After substantially restricting the positions of structures within two overlapping partial solutions, PROTEAN applies constraints between them. Also to reduce search combinatorics, PROTEAN reasons bidirectionally across different levels of abstraction (see Figure 2). At the molecule level, PROTEAN reasons about the overall size, shape, and density of the molecule. At the solid level, it reasons about the protein's constituent alpha-helices, beta-sheets, and random coils, representing these structures as geometric cylinders, prisms, and spheres. At the superatom level, it reasons about the protein's constituent amino acids, in terms of peptide units (represented as prisms) and sidechains (represented as spheres). Finally, at the atom level, PROTEAN reasons about the protein's individual atoms. When PROTEAN reasons top-down, it uses the hypothesized position of a structure at one level to restrict its examination of positions of constituent structures at a lower level. When PROTEAN reasons bottom-up, it uses the hypothesized position of a structure at one level to restrict the position of its superordinate structure. Since most of the current implementation operates at the Solid level, we have not yet explored the full power of bidirectional reasoning.

Figure 2: PROTEAN's Levels of Reasoning (the diagram itself is not reproduced here).

We envision a basic successive-refinement strategy, with some opportunistic deviation. Thus, PROTEAN should reason top-down through the levels of abstraction, with some bottom-up adjustment of results. It should apply this strategy simultaneously to several overlapping partial solutions, integrating them only after it has applied all or most of their internal constraints.
Within this general strategy, PROTEAN still faces an extensive solution space and must reason more specifically about the most efficient order in which to apply individual constraints to individual structures in particular partial solutions. We have implemented a strategy that combines domain-independent computational principles (e.g., choosing partial-solution "anchors" that have many constraints to many other structures; focusing on structures that have been restricted to small families; and preferring strong constraints) with biochemistry knowledge (e.g., defining the space of potentially useful constraints; and characterizing the constraining power of different constraints). Since intelligent control is a critical component of effective problem-solving in PROTEAN, we plan to experiment with these and other control strategies.

4. Current Implementation

We are developing PROTEAN within the BB1 blackboard architecture [5], which defines: (a) functionally independent problem-solving knowledge sources to generate and refine solution elements; (b) a multi-level solution blackboard on which these knowledge sources record evolving solutions; (c) control knowledge sources to reason about problem-solving strategy; (d) the BB1 control blackboard on which control knowledge sources record the evolving control plan; and (e) an adaptive scheduler that uses the current control plan to determine which knowledge source should execute its action on each problem-solving cycle.

AUTOMATED REASONING / 905

How BB1 Works

A BB1 system (see Figure 3) iterates the following steps:

1. An action (called a KSAR or knowledge source activation record) is executed, causing changes to elements on the solution or control blackboard.

2. These blackboard changes trigger one or more problem-solving or control knowledge sources, placing new KSARs on the agenda. Each KSAR instantiates an action definition from a knowledge source in the context of the current problem-solving state.

3.
The scheduler chooses the pending KSAR that best satisfies the current control plan.

Thus, BB1 provides a uniform, integrated blackboard mechanism for reasoning about the solution to the problem at hand as well as about the problem-solving process per se. PROTEAN currently uses a four-level solution blackboard, including the levels in Figure 2. Figure 4 shows a partial solution for the lac-repressor headpiece at the Solid level of the blackboard. The example also illustrates PROTEAN's language of partial solutions. All structures within a partial solution are positioned relative to a uniquely-positioned anchor. In Figure 4, Helix1 is the anchor. When PROTEAN applies constraints between the anchor and another structure, it anchors an anchoree. In Figure 4, Helix1 anchors five anchorees: Coil1, Coil2, Helix2, Helix3, and Coil4. When PROTEAN applies constraints between an anchoree and a structure that has no constraints with the anchor, it appends an appendage. In Figure 4, Helix2 appends an appendage, Coil3. When PROTEAN applies constraints between two anchorees or appendages, it yokes them. In Figure 4, for example, Helix2 and Helix3 yoke one another.

Figure 4. Blackboard representation of a partial solution for the lac-repressor headpiece.

PROTEAN's current problem-solving knowledge sources (see Table 2) define partial solutions at the Solid level and position alpha-helices and random coils relative to one another within those partial solutions. Although the current implementation of PROTEAN has only eleven problem-solving knowledge sources, it instantiates most of them many times for a single protein. For example, the knowledge source Anchor-Helix generates different KSARs for different anchor-anchoree pairs and for different constraints between a given pair. Table 2.
PROTEAN's Eleven Problem-Solving Knowledge Sources

Post-the-Problem: Retrieves the description of a test protein and associated constraints from a data file and posts them on the blackboard in a form that is interpretable by other PROTEAN knowledge sources.

Post-Solid-Anchors: Creates objects that represent and describe the details of all of the test protein's secondary structures (alpha-helices and random coils). Each one is a potential anchor for a partial solution.

Activate-Anchor-Space: Chooses a particular solid-anchor to be the anchor of a partial solution.

Add-Anchoree-to-Anchor-Space: Chooses a particular solid-anchor (representing it as a token object that copies the chosen solid-anchor) to be an anchoree in a previously established anchor-space.

Express-NOE-Constraint: Identifies the family of positions in which the NOE contact site of a structure can lie while satisfying an NOE with another structure.

Express-Covalent-Constraint: Identifies the family of positions in which the site of a covalent bond connecting a structure to another structure can lie.

Express-Tether-Constraint: Identifies the family of positions in which the site of a covalent bond connecting a structure to another structure via a short random coil can lie.

Anchor-Helix: Identifies the family of positions in which a helix can lie while satisfying one or more constraints with an anchor, along with all constraints previously applied to it.

Anchor-Coil: Identifies the family of positions in which a coil anchoree can lie while satisfying one or more tether constraints with an anchor, along with all constraints previously applied to it.

Append-Helix: Identifies the family of positions in which a helix appendage can lie while satisfying one or more constraints with an anchoree, along with all constraints previously applied to it.

Yoke-Helices:
Restricts the established families of positions for two helix anchorees to satisfy one or more constraints between them.

Three knowledge sources, Anchor-Helix, Anchor-Coil, and Yoke-Structures, rely upon a set of numerical functions called the geometry system or GS [2]. The GS represents the position of each structure as a set of six parameters. Three parameters place the structure at a particular location in the three-dimensional coordinate space and three parameters orient the structure about its own axes. The GS explores all values of the parameters at some level of resolution, determining whether a designated structure positioned with those values can satisfy the designated constraints. PROTEAN currently operates under the following problem-solving strategy:

1. Establish the longest, most constraining helix as the anchor.

2. Position all other secondary structures in the protein relative to the chosen anchor, giving priority to actions that apply strong constraints to structures that are helices, that are long, that constrain many other structures, and that have many constraints with the anchor.

Table 3. PROTEAN's Sixteen Control Knowledge Sources

Generic BB1 Control Knowledge Sources

Initialize-Focus: Identifies the initial focus prescribed by a newly recorded strategy.

Update-Focus: Identifies each subsequent focus prescribed by a strategy.

Terminate-Focus: Changes the status of a focus to "inoperative" when the focus's goal is satisfied.

Terminate-Strategy: Changes the status of a strategy to "inoperative" when the strategy's goal is satisfied.
Domain-Specific Control Knowledge Sources

Develop-PS-of-Best-Anchor: Records the develop-ps-of-best-anchor strategy.

Create-Best-Anchor-Space: Records the create-best-anchor-space focus.

Position-All-Structures: Records the position-all-structures focus.

Prefer-Helix>Sheet>Coil-Anchors: Records a heuristic that gives high ratings to KSARs that operate on helix anchors, intermediate ratings to KSARs that operate on beta-sheet anchors, and low ratings to KSARs that operate on random coil anchors.

Prefer-Long-Anchors: Records a heuristic that gives higher ratings to KSARs that operate on long anchors.

Prefer-Strongly-Constraining-Anchors: Records a heuristic that gives higher ratings to KSARs whose anchors have many constraints with many other structures.

Prefer-Strategically-Selected-Anchor: Records a heuristic that gives higher ratings to KSARs that operate on the strategically-selected anchor.

Prefer-Helix>Sheet>Coil-Anchorees: Records a heuristic that gives high ratings to KSARs that operate on helix anchorees, intermediate ratings to KSARs that operate on beta-sheet anchorees, and low ratings to KSARs that operate on random coil anchorees.

Prefer-Long-Anchorees: Records a heuristic that gives higher ratings to KSARs that operate on long anchorees.

Prefer-Strongly-Constrained-Anchorees: Records a heuristic that gives higher ratings to KSARs that operate on anchorees that have many constraints with the anchor.

Prefer-Mutually-Constraining-Anchorees: Records a heuristic that gives higher ratings to KSARs that operate on anchorees that have many constraints with other anchorees.

Prefer-Strong-Constraints: Records a heuristic that gives higher ratings to KSARs that apply strong constraints.

PROTEAN generates its strategy incrementally, one decision at a time, with the sixteen control knowledge sources described in Table 3. Four of these are generic BB1 control knowledge sources: Initialize-Focus, Update-Focus, Terminate-Focus, and Terminate-Strategy. The other twelve control knowledge sources are domain-specific. Working within the BB1 architecture, PROTEAN represents its strategy as a hierarchy of decisions on the control blackboard (see Figure 5). At the strategy level, PROTEAN records a decision to use this particular strategy, along with the information it needs to generate the prescribed sequence of steps at the appropriate time. PROTEAN records the steps as individual decisions at the focus level, encompassing sequential problem-solving time intervals. Each focus decision also records the information PROTEAN needs to generate the associated heuristics, which it records as decisions at the heuristic level. Each heuristic encompasses roughly the same time interval as its superordinate focus decision. The next section illustrates PROTEAN's use of the knowledge sources to control its efforts to solve a small protein, the lac-repressor headpiece.

5. Example: PROTEAN's Partial Solution of the Lac-Repressor Headpiece

The lac-repressor headpiece is a protein with fifty-one amino acids. Its true structure is unknown, but NMR data are available for it and several research groups have partially identified its structure [10, 8]. Interpreted data for the lac-repressor are shown in Table 4. This section describes the first 25 cycles of a program trace of PROTEAN's efforts to solve the lac-repressor headpiece. Amino acids are numbered sequentially in the primary structure and named according to biochemical conventions. LYS2, for example, is the second amino acid in the sequence and is a lysine. NOEs identify particular atoms within particular amino acids that are within 2-5 angstroms of one another. For example, NOE 1 specifies that atom 3 of valine 4 is within 2-5 angstroms of atom 5 of tyrosine 17.

Table 4. Interpreted Data for the Lac-Repressor Headpiece
Data Type: PROTEIN-NAME
Data Value: LAC-REPRESSOR-HEADPIECE

PRIMARY-STRUCTURE
MET1 LYS2 PRO3 VAL4 THR5 LEU6 TYR7 ASP8 VAL9 ALA10 GLU11 TYR12 ALA13 GLY14 VAL15 SER16 TYR17 GLN18 THR19 VAL20 SER21 ARG22 VAL23 VAL24 ASN25 GLN26 ALA27 SER28 HIS29 VAL30 SER31 ALA32 LYS33 THR34 ARG35 GLU36 LYS37 VAL38 GLU39 ALA40 ALA41 MET42 ALA43 GLU44 LEU45 ASN46 TYR47 ILE48 PRO49 ASN50 ARG51

SECONDARY-STRUCTURE
(Coil1 MET1 THR5) (Helix1 LEU6 GLY14) (Coil2 VAL15 SER16) (Helix2 TYR17 ASN25) (Coil3 GLN26 ARG35) (Helix3 GLU36 LEU45) (Coil4 ASN46 ARG51)

NOES
(1 VAL4 3 TYR17 5) (2 VAL4 3 LEU45 4) (3 VAL4 3 TYR47 5) (4 THR5 3 TYR47 5) (5 LEU6 4 MET42 5) (6 LEU6 4 VAL24 3) (7 LEU6 4 TYR17 5) (8 LEU6 4 TYR47 5) (9 TYR7 5 TYR17 5) (10 ASP8 3 LEU45 4) (11 VAL9 3 MET42 5) (12 VAL9 3 LEU45 4) (13 VAL9 3 TYR47 5) (14 ALA10 2 TYR17 5) (15 ALA10 2 VAL20 3) (16 TYR12 5 ALA32 2) (17 TYR12 5 ALA40 2) (18 TYR12 5 ALA41 2) (19 TYR12 5 MET42 5) (20 TYR12 5 GLU44 4) (21 TYR12 5 LEU45 4) (22 ALA13 2 VAL38 3) (23 ALA13 2 ALA41 2) (24 VAL15 3 TYR47 5) (25 TYR17 5 MET42 5) (26 VAL20 3 VAL38 3) (27 VAL24 3 TYR47 5) (28 VAL30 3 MET42 5) (29 MET42 5 TYR47 5)

Post-the-Problem initiates PROTEAN activity at the Molecular level by recording a new protein-analysis problem and all available constraints. This event triggers two knowledge sources: Post-Solid-Anchors and Develop-PS-of-Best-Anchor. Since there are no control heuristics on the control blackboard yet, the scheduler uses the default scheduling heuristic: Prefer-Control-KSs. It schedules and executes Develop-PS-of-Best-Anchor, which records PROTEAN's strategy (see Figure 5). This event triggers Terminate-Strategy, which will not be executable until the strategy's goal (explained below) is satisfied. It also triggers Initialize-Focus. The scheduler chooses Initialize-Focus, which uses the strategy's generator to identify the first focus it prescribes. It records the name of that focus, "Create-Best-Anchor-Space," as the strategy's current-focus.
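An NOE entry like those in Table 4 names two atoms that must lie 2-5 angstroms apart. Checking one such entry against a candidate conformation can be sketched as below; the coordinates and the helper name `satisfies_noe` are invented for illustration.

```python
# Sketch: testing an NOE constraint (2-5 angstrom separation between two
# named atoms) against hypothetical atom coordinates.
import math

def satisfies_noe(coords, noe, lower=2.0, upper=5.0):
    """noe = (residue1, atom1, residue2, atom2); coords maps (residue, atom) -> xyz."""
    a = coords[(noe[0], noe[1])]
    b = coords[(noe[2], noe[3])]
    return lower <= math.dist(a, b) <= upper

# Invented positions for the two atoms named in NOE 1 (VAL4 atom 3, TYR17 atom 5).
coords = {
    ("VAL4", 3): (0.0, 0.0, 0.0),
    ("TYR17", 5): (3.0, 0.0, 0.0),
}
print(satisfies_noe(coords, ("VAL4", 3, "TYR17", 5)))
```

A knowledge source like Express-NOE-Constraint effectively inverts this test: rather than checking one position, it characterizes the whole family of positions for which the test would succeed.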
This event triggers Create-Best-Anchor-Space. The scheduler chooses Create-Best-Anchor-Space, which records the corresponding focus (see Figure 5). This event triggers three control knowledge sources whose names are listed as the new focus decision's heuristics: Prefer-Helix>Sheet>Coil-Anchors, Prefer-Long-Anchors, and Prefer-Strongly-Constraining-Anchors. It also triggers Terminate-Focus, which will not become executable until the new focus's goal is satisfied. On the next three cycles, the scheduler chooses three pending KSARs, each of which records a heuristic (see Figure 5). These events do not trigger any new knowledge sources. The scheduler chooses the only pending KSAR, Post-Solid-Anchors, which creates a potential anchor representing each secondary structure in the protein. Each of these events triggers a corresponding KSAR involving Create-Anchor-Space. Now the scheduler uses the three heuristics posted on the control blackboard to determine which of the Create-Anchor-Space KSARs to execute. Since Helix1 is the longest, most constraining helix, it chooses the KSAR that creates an anchor space for Helix1 (see Figure 4). This event satisfies the goal of the Create-Best-Anchor-Space focus (the best anchor space has been created), thereby making the corresponding KSAR for Terminate-Focus executable. The event also triggers the knowledge source Add-Anchoree-to-Anchor-Space once for each other secondary structure in the protein. The scheduler chooses Terminate-Focus, which changes the status of the existing focus and its subordinate heuristics to "inoperative." It also records the focus name, "Create-Best-Anchor-Space," as the strategy's expired-focus. This event triggers the control knowledge source Update-Focus. The scheduler chooses Update-Focus, which uses the strategy's generator to identify the next focus it prescribes and records the name of that focus, "Position-All-Structures," as the strategy's current-focus.
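The way several operative heuristics jointly rate competing KSARs, as in the anchor choice just described, can be sketched as follows. The KSAR fields, the rating scales, and the additive combination are invented; BB1's actual rating scheme may differ.

```python
# Sketch of combining control heuristics like those in Table 3: each
# operative heuristic rates every pending KSAR, and the scheduler
# executes the highest-scoring one. All ratings here are invented.

KIND_RATING = {"helix": 3, "sheet": 2, "coil": 1}   # Prefer-Helix>Sheet>Coil

def prefer_kind(ksar):
    return KIND_RATING[ksar["kind"]]

def prefer_long(ksar):
    return ksar["length"]

def prefer_strongly_constraining(ksar):
    return ksar["constraints"]

def best_ksar(agenda, heuristics):
    return max(agenda, key=lambda k: sum(h(k) for h in heuristics))

agenda = [
    {"name": "anchor-Coil2",  "kind": "coil",  "length": 2,  "constraints": 1},
    {"name": "anchor-Helix1", "kind": "helix", "length": 9,  "constraints": 8},
    {"name": "anchor-Helix2", "kind": "helix", "length": 9,  "constraints": 3},
]

chosen = best_ksar(agenda, [prefer_kind, prefer_long, prefer_strongly_constraining])
print(chosen["name"])
```

As in the trace, the longest, most constraining helix wins the competition.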
This event triggers the knowledge source Position-All-Structures. The scheduler chooses Position-All-Structures, which records the corresponding focus (see Figure 5). This event triggers the knowledge source Terminate-Focus, which will not become executable until its goal is satisfied. The event also triggers the six control knowledge sources named in the new focus decision's heuristics: Prefer-Strategically-Selected-Anchor, Prefer-Helix>Sheet>Coil-Anchorees, Prefer-Long-Anchorees, Prefer-Strongly-Constrained-Anchorees, Prefer-Mutually-Constraining-Anchorees, and Prefer-Strong-Constraints. On the next six cycles, the scheduler chooses KSARs that record heuristics for the new focus. These events do not trigger any new knowledge sources. Now the scheduler uses the six new control heuristics to choose pending KSARs. At this point, the agenda contains only KSARs involving the knowledge source Add-Anchoree-To-Anchor-Space. The scheduler chooses the KSAR that adds Helix3 (see Figure 4). This event triggers several KSARs for Express-NOE-Constraint, one for each of the NOEs between Helix1 and Helix3. The scheduler chooses a series of Express-NOE-Constraint KSARs. Each one records the family of positions in which the NOE contact site on Helix3 can lie, relative to Helix1. Each of these events triggers a corresponding KSAR for the knowledge source Anchor-Helix. The scheduler continues using the six control heuristics to choose pending problem-solving knowledge sources, including many different triggerings of the knowledge sources: Add-Anchoree-to-Anchor-Space, Express-NOE-Constraint, Express-Tether-Constraint, Anchor-Helix, Anchor-Coil, and Yoke-Structures. Each such action triggers new KSARs, which are added to the agenda and compete for scheduling priority. All of these KSARs together position all secondary structures relative to Helix1 with all applicable constraints (see Figure 4).
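The execute-trigger-schedule cycle traced above can be condensed into a toy loop. Everything here (the KSAR tuples, the single triggering rule, the rating function) is invented to show the shape of the cycle, not BB1's implementation.

```python
# Minimal sketch of the BB1 cycle: execute a KSAR, let the resulting
# blackboard changes trigger knowledge sources onto the agenda, then let
# the scheduler pick the pending KSAR the control plan rates best.

def bb1_run(initial_agenda, knowledge_sources, rate, max_cycles=10):
    agenda = list(initial_agenda)
    executed = []
    for _ in range(max_cycles):
        if not agenda:
            break
        # Scheduler: choose the pending KSAR that best satisfies the plan.
        ksar = max(agenda, key=rate)
        agenda.remove(ksar)
        executed.append(ksar)
        # Executing the action triggers knowledge sources, adding KSARs.
        for ks in knowledge_sources:
            agenda.extend(ks(ksar))
    return executed

# One toy knowledge source: anchoring a helix triggers yoking it.
def yoke_after_anchor(ksar):
    return [("yoke", ksar[1])] if ksar[0] == "anchor" else []

# Toy control plan: prefer anchoring over yoking.
rate = lambda ksar: {"anchor": 2, "yoke": 1}[ksar[0]]

trace = bb1_run([("anchor", "Helix1"), ("anchor", "Helix2")],
                [yoke_after_anchor], rate)
print(trace)
```

Both anchoring actions run before either yoking action, because the control plan rates them higher, mirroring how heuristics sequence PROTEAN's constraint applications.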
Because the results of these actions satisfy the goal of the Position-All-Structures focus (all structures have been positioned), Terminate-Focus becomes executable and the scheduler chooses it. Terminate-Focus changes the status of the current focus and its associated heuristics to "inoperative." It also records the focus name as the strategy's expired-focus. This event triggers Update-Focus. The scheduler chooses Update-Focus, which uses the strategy's generator to identify the next focus it prescribes, which in this case is "None," and records it as the strategy's current-focus. This event satisfies the strategy's goal and makes the pending Terminate-Strategy KSAR executable. The scheduler chooses Terminate-Strategy, which changes the strategy's status to "inoperative." In performing the actions summarized above, PROTEAN produces a solid-level solution for the lac-repressor headpiece, specifying the positional families within which each of the protein's secondary structures can lie while satisfying the applicable constraints. PROTEAN's solution closely matches the manually identified solution described in [8].

6. Current Status of PROTEAN

The current PROTEAN system demonstrates the appropriateness of the blackboard architecture for protein-structure analysis. Although PROTEAN currently reasons only about helices and random coils, we anticipate that its representational conventions and geometric reasoning methods will apply to other protein structures as well. The current system incorporates reasoning about a variety of constraints: the known architectures of helices, covalent bonds, NOEs, the known architectures of amino-acid sidechains, and van der Waals' radii. However, we anticipate a need to introduce qualitatively different representational conventions and geometric reasoning methods to handle the global constraints on the overall size, shape, and density of the molecule.
The blackboard architecture easily incorporates different solution representations at different blackboard levels and incorporates different methods in its functionally independent knowledge sources. The current system also suggests that the BB1 blackboard control architecture will support the critical control reasoning PROTEAN must perform. PROTEAN currently uses a single control strategy that is well captured in control knowledge sources and produces a perspicuous control plan during problem solving. This strategy works well enough for reasoning about the secondary structures of a small protein with a subset of the available constraints. However, when reasoning about all constituent structures in larger proteins with all available constraints, PROTEAN will need a new strategy. It will have to reason about multiple partial solutions and their relationships to one another. It will have to sequence its constraint-satisfaction operations intelligently to avoid a computationally intractable explosion of hypothesized structures. It will have to reason about alternative protein conformations corresponding to constraints that are not satisfied simultaneously. Since we do not know an optimal general control algorithm for this problem, we must experimentally evaluate alternative control strategies. To support this investigation, we are developing learning mechanisms to acquire control knowledge from experts automatically [6] and to comparatively evaluate different control strategies. We are also developing explanation mechanisms that explicate the relationships between problem-solving actions and the underlying control strategy [4].

References

[1] Braun, W., Bosch, C., Brown, L.R., and Wüthrich, K. Biochemistry and Biophysics 667, 1981.

[2] Brinkley, J., Cornelius, C., Altman, R., Hayes-Roth, B., Lichtarge, O., Buchanan, B., and Jardetzky, O. Application of constraint satisfaction techniques to the determination of protein tertiary structure. 1986.
[3] Buchanan, B., Hayes-Roth, B., Lichtarge, O., Altman, R., Brinkley, J., Hewett, M., Cornelius, C., Duncan, B., and Jardetzky, O. The heuristic refinement method for deriving solution structures of proteins. Technical Report, Stanford, Ca.: Stanford University, 1985.

[4] Hayes-Roth, B. BB1: An architecture for blackboard systems that control, explain, and learn about their own behavior. Technical Report HPP-84-16, Stanford, Ca.: Stanford University, 1984.

[5] Hayes-Roth, B. A blackboard architecture for control. Artificial Intelligence Journal 26:251-321, 1985.

[6] Hayes-Roth, B., and Hewett, M. Learning control heuristics in a blackboard environment. Technical Report HPP-85-2, Stanford, Ca.: Stanford University, 1985.

[7] Hayes-Roth, B., Buchanan, B., Lichtarge, O., Hewett, M., Altman, R., Brinkley, J., Cornelius, C., Duncan, B., and Jardetzky, O. Elucidating protein structure from constraints in PROTEAN. Technical Report KSL-85-35, Stanford, Ca.: Stanford University, 1985.

[8] Jardetzky, O. Definition of the tertiary structure of proteins by NMR: The DNA binding domain of the lac-repressor. Technical Report, Stanford, Ca.: Stanford University, 1984.

[9] Jardetzky, O., Lane, A., Lefevre, J-F., Lichtarge, O., Hayes-Roth, B., and Buchanan, B. Determination of macromolecular structure and dynamics by NMR. Proceedings of the NATO Advanced Study Institute: NMR in the Life Sciences, 1985.

[10] Kaptein, R., Zuiderweg, E.R.P., Scheek, R.M., and Boelens, R. Journal of Molecular Biology 182:179-182, 1985.

[11] Roberts, G.C.K., and Jardetzky, O. Advances in Protein Chemistry, 1970.

[12] Terry, A. Hierarchical control of production systems. PhD thesis, UC Irvine, 1983.

[13] Wagner, G., and Wüthrich, K. Journal of Magnetic Resonance 33:675-679, 1979.

[14] Wüthrich, K. Advances in Protein Chemistry 24:447-545, 1976.

[15] Zuiderweg, E.R.P., Kaptein, R., and Wüthrich, K. Proceedings of the National Academy of Sciences 80:5837-5841, 1983.
BACK TO BACKTRACKING: CONTROLLING THE ATMS

Johan de Kleer
Intelligent Systems Laboratory
XEROX Palo Alto Research Center
3333 Coyote Hill Road
Palo Alto, California 94304

and

Brian C. Williams
M.I.T. Artificial Intelligence Laboratory
545 Technology Square
Cambridge, Massachusetts, 02139

From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved.

ABSTRACT

The ATMS (Assumption-Based Truth Maintenance System) provides a very general facility for all types of default reasoning. One of the principal advantages of the ATMS is that all of the possible (usually mutually inconsistent) solutions or partial solutions are directly available to the problem solver. By exploiting this capability of the ATMS, the problem solver can efficiently work on all solutions simultaneously and avoid the computational expense of backtracking. However, for some applications this ATMS capability is more of a hindrance than a help and some form of backtracking is necessary. This paper first outlines some of the reasons why backtracking is still necessary, and presents a powerful backtracking algorithm which we have implemented which backtracks more efficiently than other approaches.

1. Introduction

The complexity of a problem-solving task is a function of both the number of rules executed and number of contexts considered in the search. Many techniques have been developed to minimize this complexity for various types of tasks. In this paper we show how two such approaches, the ATMS and conventional dependency-directed backtracking (DDB), can be combined to produce a control strategy more efficient than either. An ATMS-controlled problem-solver tends to be more efficient for problems where all the solutions are needed. It achieves this efficiency by organizing the search to find the most general inferences first. Thus, the number of rules executed and contexts examined are significantly reduced. A DDB-controlled problem-solver tends to be more efficient for problems where only one of a number of the possible solutions are required. It achieves this efficiency by organizing the search to find a single specific solution first. By combining them we get the advantages of both.

2. The ATMS

An ATMS-based problem solver consists of two components: an inference engine and the TMS. The inference engine deduces new data from old (usually by the application of rules, or consumers in ATMS terminology). Associated with each consumer is a set of data, referred to as the consumer's antecedents. A consumer is invoked on its antecedents when all of them are believed in the current context. Every inference resulting from a consumer invocation is recorded as a justification using the TMS and must include the antecedents of the consumer. The TMS's task is to determine what data is believed in each context given the justifications produced thus far in the problem-solving effort.

Justification-based TMS's force the problem solver to focus on a single consistent data base at a time. This has many disadvantages (see [4,7]). The ATMS permits the problem solver to operate simultaneously in several mutually inconsistent contexts. To achieve this, the ATMS augments the conventional TMS data structures in several ways. The ATMS introduces the notion of a primitive assumption. Unlike other data, which are believed only if belief in them can be justified, assumptions are believed unless there is evidence to the contrary. By tracing backwards through supporting justifications for a datum the ATMS identifies the set(s) of assumptions upon which the datum ultimately depends. Such a set of assumptions is called an environment. Typically a datum can be derived in a variety of ways and so a datum can follow from a variety of environments.
The complete set of environments from which a datum can be derived is called its label. The set of data derivable from an environment is called its context. Any environment from which false (⊥) can be derived is inconsistent and is called a nogood. The ATMS data base achieves its efficiency by (a) optimizing the representation of nogoods, (b) exploiting the fact that if a datum follows from an environment it follows in all the environment's supersets and thus it is only necessary to explicitly store the minimal environments of a label, and (c) recognizing that all ATMS operations reduce to set operations for which good algorithms are available.

3. Three Reasons for Backtracking

ATMS-based problem solvers suffer from three kinds of difficulties related to backtracking: (a) the task may require that only a small fraction of the search space be explored, (b) even for problems where all solutions are desired, they often search more than necessary, and (c) they are inherently more difficult to debug.

First, conventional backtracking searches depth-first, producing solutions one at a time. Without exercising external control, the ATMS-based problem solver identifies all solutions at once. If the overall goal of the problem solver is to find only one solution which satisfies the goal, then the ATMS-based problem solver can be hopelessly inefficient. Consider an example (adapted from [6] and [10]) where the task is to construct an n-bit number with odd parity (i.e., a string of bits with an odd number of ones). For each bit position two variables are created. Let b_i indicate the ith bit's value, and p_i the parity for all the bits up to and including b_i. Each b_i can be zero or one, and each choice is represented by an assumption. Parity is defined recursively by justifications such as (→ indicates a justification):

p_0 = 0
p_{i-1} = 0, b_i = 1 → p_i = 1
p_{i-1} = 1, b_i = 0 → p_i = 1
p_{i-1} = 0, b_i = 0 → p_i = 0
p_{i-1} = 1, b_i = 1 → p_i = 0

The ATMS, which finds all solutions, discovers all 2^i bit strings having odd (and even) parity.
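The label bookkeeping just described can be sketched in a few lines. This is a drastic simplification of the real ATMS algorithms: the helpers `combine` and `minimize`, and the particular nogood, are invented, but the block shows how a label collapses to its minimal, consistent environments.

```python
# Sketch of ATMS label computation: a datum's label is the set of minimal,
# consistent environments (assumption sets) from which it can be derived.

def minimize(envs, nogoods):
    """Drop inconsistent environments (nogood supersets) and non-minimal ones."""
    envs = [e for e in envs if not any(n <= e for n in nogoods)]
    return [e for e in envs if not any(f < e for f in envs)]

def combine(labels, nogoods):
    """Label contributed by one justification: unions over antecedent labels."""
    result = [frozenset()]
    for label in labels:
        result = [e | f for e in result for f in label]
    return minimize(result, nogoods)

A, B, C = frozenset("A"), frozenset("B"), frozenset("C")
nogoods = [A | B]                 # suppose assuming A and B together is inconsistent

# A datum justified by two antecedents with labels {{A},{C}} and {{B},{C}}.
label = combine([[A, C], [B, C]], nogoods)
print(sorted(sorted(e) for e in label))
```

The candidate environment {A, B} is pruned as a nogood, and {A, C} and {B, C} are pruned as supersets of {C}, leaving the single minimal environment {C}.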
If the goal was to find only one odd-parity bit string, then the simple ATMS-based problem solver has done far more work than necessary.

Second, even if the goal of the problem solving is to identify all solutions, the ATMS-based problem solver may still search more than necessary. The problem solver may examine regions of the search space which are never reached by dependency-directed backtracking. Consider the familiar n-queens problem. (For simplicity assume that there are 3 queens and that queen Q_i is placed on row i.) A set of rules is constructed which checks that no two queens are placed in attacking positions. The problem is then to find all placements of the three queens which are consistent with these rules. The ATMS-based problem solver would try the placement of all singletons, pairs and then triples of queens in search of a consistent solution. For example, it would check whether placing Q3 in column 3 was consistent with placing Q2 in column 2 and find that they lay on the same diagonal. Within a backtracking problem solver one would first attempt to place Q1, then Q2, and only then Q3. As there is no way of placing a queen in row 1 consistent with Q2 in column 2, the above situation (Q3 in column 3 and Q2 in column 2) never arises.

Fig. 1: Unreachable situation for DDB.

Third, debugging an ATMS-based problem solver can be difficult. Most of the effort of constructing a problem solver involves debugging the knowledge base and inference rules. In our experience, by far the most common error of ATMS users is failing to specify all the ways a context can be inconsistent. Thus, during debugging far more solutions are found than expected. Commonly, so many solutions exist that problem solving cannot terminate within a reasonable time. With the ATMS, all solutions are explored at once.
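The queens example above can be made concrete with a simple chronological backtracking search, a stand-in for full dependency-directed backtracking (this code is illustrative, not from the paper). Because Q1 is placed before Q2 and Q3, partial placements ruled out by earlier rows are never extended.

```python
# Sketch: backtracking placement of n queens, one row at a time.
# placed[r-1] holds the column chosen for the queen on row r.

def safe(placed, row, col):
    """True if a queen at (row, col) attacks none of the placed queens."""
    return all(c != col and abs(c - col) != abs(r - row)
               for r, c in enumerate(placed, start=1))

def solve(n, placed=()):
    row = len(placed) + 1
    if row > n:
        return [placed]
    solutions = []
    for col in range(1, n + 1):
        if safe(placed, row, col):
            solutions.extend(solve(n, placed + (col,)))
    return solutions

print(solve(3))       # 3 queens admit no solution
print(len(solve(4)))  # 4 queens admit some
```

When Q2 sits in column 2 of a 3x3 board, it attacks every square in row 1, so the search never reaches a state pairing it with Q3; the ATMS, by contrast, examines that pair explicitly.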
If the solver is halted prematurely, no solution is found, leaving the user with little information about what knowledge he failed to incorporate into his knowledge base or inference engine. A common and related problem is that the user has framed a problem such that one of the solutions contains an infinite amount of data. If so, the problem solver will never terminate.

These debugging difficulties pose far less of a problem for a solver based on conventional backtracking where the problem solver finds one solution at a time. When the first solution appears inadequate, the implementor immediately notices the missing knowledge. In addition, when a particular solution seems to take excess computational resources, the computation can be interrupted and the current state examined. This is difficult in an ATMS-based problem solver because the intermediate states always represent pieces of many solutions, any of which could be causing a problem.

Dependency-directed backtracking is typically applied to problems satisfying the following constraints: 1) given a finite set of choices, each solution must select exactly one choice from each set, 2) a solution is confirmed by checking for inconsistencies, 3) testing for inconsistencies must take a finite amount of time, and 4) the solution does not depend on the order in which choices are made. In the next two sections we discuss ATMS-guided and DDB-guided problem solving, respectively, for this class of problems. In Section 6 we compare the approaches, and then propose a hybrid approach combining the features of both (without the limitations) in Section 7.

4. ATMS-guided Problem Solving

For most tasks, the majority of the problem solving resources are involved with executing the consumers.¹ Therefore, a central goal is to minimize the number of consumers that must be executed to solve a particular problem.
The ATMS is typically used for tasks where multiple solutions are required. For such tasks, the best approach is to ensure that no consumer is ever executed unnecessarily. There are two types of situations where a consumer could be uselessly invoked. First, a consumer can be executed in an environment which is later discovered to be contradictory, making the consumer's execution useless (unless it applied to another environment as well). Second, a consumer can deduce some datum in some environment, but some other consumer may later deduce the same datum in a more general environment; therefore, the first consumer's execution was superfluous. To avoid such inefficiencies the consumers are scheduled such that consumers in smaller environments are executed first and, within each set of consumers for a particular environment, consumers directed towards detecting inconsistencies are executed first.

Within this framework the consumer scheduler repeatedly picks the smallest consistent environment with non-executed consumers and runs one of its consumers until all consumers of all consistent environments are executed. For example, suppose the problem solver must search through a space of possibilities where it must pick one from each of the sets: {A, B}, {α, β}, and {1, 2}. Fig. 2 illustrates the lattice of possibly consistent environments. The consumer scheduler first executes any pending consumers of the first row ({A}, {B}, {α}, {β}, {1}, and {2}), then the second row ({A, α}, {A, β}, ...), and then the third row ({A, α, 1}, ...). If the ATMS finds an inconsistent environment, then the problem solver stops exploring that environment and any superset of it.

¹ An ATMS-based problem solver only examines contexts which result in the execution of consumers. Thus the number of consumers executed must be at least as large as the number of contexts examined. As a result we only need to evaluate the system's performance with respect to rule invocations, not contexts examined.
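A minimal sketch of this smallest-environments-first discipline follows. The encoding is ours: α and β are written as the strings "a" and "b", and the "consumer" of an environment is modeled as nothing more than a membership test against a hypothetical set of contradictory environments.

```python
from itertools import product

# Our encoding of the paper's example choice sets {A,B}, {alpha,beta}, {1,2}.
CHOICE_SETS = [{"A", "B"}, {"a", "b"}, {"1", "2"}]

def environments(choice_sets):
    """All environments picking at most one assumption per choice set,
    smallest first: the order the ATMS consumer scheduler works through."""
    options = [sorted(s) + [None] for s in choice_sets]
    envs = {frozenset(x for x in pick if x is not None)
            for pick in product(*options)}
    return sorted(envs, key=lambda e: (len(e), sorted(e)))

def atms_schedule(choice_sets, nogoods):
    """Run the (hypothetical) consumer of each environment unless the
    environment is a superset of an already-discovered minimal nogood."""
    executed, known = [], set()
    for env in environments(choice_sets):
        if not env:
            continue
        if any(ng <= env for ng in known):
            continue                 # below the line: never executed
        executed.append(env)
        if env in nogoods:           # consumer detects a contradiction
            known.add(env)
    return executed, known
```

With hypothetical nogoods {B} and {α, 1}, every strict superset of a discovered nogood is skipped, while consistent environments such as {A, β, 1} still have their consumers run.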
Thus the ATMS may explore some of the consequences of inconsistent environments with a minimal set of assumptions; however, it will not explore an inconsistent environment which is not minimal (i.e., has a subset environment which is also inconsistent). The set of minimally inconsistent environments can be pictured as a line dividing the search space into two parts; environments above the line are consistent and those below are inconsistent. If we associate each consumer with the smallest environment it runs in, then 1) all consumers associated with environments above the line will be executed, 2) some consumers associated with minimally inconsistent environments will be run (possibly all), and 3) any consumer associated with environments below the line is guaranteed not to be run. Returning to the example, assume that among all the rules three inconsistencies are ultimately detected (↦ indicates a consumer): B ↦ ⊥; α, 1 ↦ ⊥; .... The resulting lattice is shown in Fig. 2:

Fig. 2 : ATMS-controlled search.

5. DDB-guided Problem Solving

As illustrated in Section 3, for tasks requiring only a small number of the set of possible solutions the simple ATMS-controlled approach can be extremely inefficient. Instead, some form of backtracking control could be exploited. Even in those cases where all solutions are being explored, the ATMS may have to explore portions of the search space which DDB would avoid.

912 / ENGINEERING

There is a wide variety of backtracking techniques. For the sake of discussion we shall use one of the best: the general dependency-directed backtracking embodied in a conventional justification-based TMS (such as [9]). The crucial characteristic of this backtracker upon which our argument depends is that the backtracker can test a particular context for any inconsistencies previously encountered. Consumers are executed only after a context has been determined not to contain any known inconsistency.

Fig. 3 : DDB-controlled search.

Backtracking exploits the fact that any solution must pick exactly one element from each set of choices. In our example, any solution must contain one assumption from each of {A, B}, {α, β}, and {1, 2}. Backtracking enumerates the space through depth-first search, selecting one assumption from each set of choices in the order given and executing the consumers only when a terminal environment is reached (i.e., environments which contain exactly one assumption from every set of choices). When this occurs the consumers of that environment, as well as those in any subset of that environment, are invoked until a contradiction occurs or no consumer remains. No ordering is placed on the order of consumer invocation. The TMS ensures that no consumer will ever be executed in a terminal environment which contains a previously discovered inconsistency. For example, if some consumer determines that {α, 1} is inconsistent while analyzing the terminal environment {A, α, 1}, no consumer will ever be executed in the terminal environment {B, α, 1}. (Note that here as in Section 4 we presume that every inference performed is recorded as a justification and thus no consumer need ever be executed twice.)

Given the same consumers as Section 4, DDB explores the search space as shown in Fig. 3. For presentation we associate consumers with their minimal environments although without the ATMS these cannot be explicitly computed. The DDB-controlled search runs consumers only in terminal environments, which is equivalent to running all consumers in all subsets of the terminal environment. Fig. 3 illustrates the DDB-guided search. Rules have been run in every environment above this line.²

6. A Comparison of the Approaches

Each of the approaches explored above has its own advantages and disadvantages. These are described below.
The primary advantage of the ATMS is that it is guaranteed not to explore any inconsistent environment which is not minimally inconsistent. To accomplish this the ATMS organizes the search to find the most general inferences first. The ATMS has the additional feature that it only examines environments which contain at least one pending consumer. Thus for those problems with a sparse number of pending consumers and a very large set of environments the ATMS will have a significant performance advantage over those approaches where every environment is examined.

The primary disadvantage of the ATMS is that it essentially works in a breadth-first manner, working on all solutions simultaneously. As pointed out in Section 3, for those problems where only a few of a large number of solutions is desired the ATMS can have significant drawbacks. In addition, the ATMS' unfocused behavior makes it difficult to debug.

These, however, are exactly the advantages of dependency-directed backtracking. DDB focuses on one solution at a time, making its inferences easy to follow (by the implementor). In those cases where one or a few solutions are desired, DDB does not waste effort exploring additional solutions never used. The primary disadvantage of DDB is that, unlike the ATMS, it can explore inconsistent environments which are not minimal. This happens because, given a terminal environment, there is no ordering placed on the subset environments being explored. Thus an environment may be explored before its subset is shown to be inconsistent. If these environments contain a number of consumers using enormous computational resources then the consumers are invoked needlessly and the performance of DDB will be clearly inferior. For example, {B, β} is an inconsistent environment which was uselessly explored by DDB in the example of Section 5 but not explored by the ATMS in Section 4. DDB has another advantage over the ATMS approach.
Even when looking for all solutions there are environments explored by the ATMS which are not explored using DDB. The reason for this is subtle, and depends on two key observations: First, each solution must select one assumption from every set of choices; any incomplete set of choices is not a solution. Second, the order in which these choices are made is irrelevant. For any problem with more than two sets of choices, there are several orders in which the choices could be made. Each sequence of choices corresponds to a path moving from the root of the environment lattice to one of the solutions. DDB makes the observation that ordering is irrelevant and thus explores only one path to a particular solution (based on the ordering in which choices are supplied). The ATMS, on the other hand, explores all paths to the same solution in parallel. This is clearly wasteful. Thus it is not surprising that the ATMS stumbles across environments which need not be explored using dependency-directed backtracking. {β, 2} is an environment which was explored by the ATMS in the example of Section 4 but not explored by DDB in Section 5.

² As with the ATMS-guided search, depending on the order in which the rules are executed, few or many of the rules of an inconsistent environment may be run.

7. Assumption-based DDB

Given the analysis above, our goal is to combine the features of each approach without inheriting any of their disadvantages. Figs. 2 and 3 highlight the differences between the two approaches. Note that each approach ignores certain environments explored by the other. Thus ideally we would like to construct an approach which explores only the intersection of the environments explored by the two approaches taken alone. This is depicted in Fig. 4.
More precisely, we would like an approach which explores an environment if 1) it is a subset of a terminal environment explored by DDB, and 2) it is either a consistent environment or a minimally inconsistent environment. To accomplish this task we construct an algorithm called assumption-based DDB which takes the DDB algorithm and embeds the ATMS within it. When DDB decides to explore a terminal environment, we use the ATMS scheduler to explore the subsets of the terminal environment, smallest first (again only examining environments which have pending consumers). Thus DDB provides the search strategy with a coarse focus, while the ATMS provides an additional level of discrimination.

Fig. 4 : Assumption-based DDB controlled search.

Because this approach explores the intersection of the environments explored by DDB and the ATMS, we are guaranteed that its worst case performance with respect to the number of consumer invocations will be at least as good if not better than the two approaches taken separately. This statement is true independent of the number of solutions being explored, even if the problem solver is only interested in one solution!³

A more detailed description of the algorithm follows. The consumer scheduler for assumption-based DDB maintains an ordered set of choices, each referred to as a control disjunction:

control{C1, C2, ...}

A control disjunction consists of an ordered set of assumptions, called control assumptions, which are pairwise inconsistent. The system also supports assumptions which are not part of any control disjunction; these are handled by the traditional ATMS mechanism. The scheduler also maintains a single current environment consisting of a set of assumptions, one from each control disjunction.
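The hybrid control loop just described (a DDB outer loop over choice sets, with an ATMS-style subset scheduler inside each terminal environment) can be sketched as follows. This is our own simplified rendering: consumers are again modeled as contradiction tests, and the re-check of the current environment performed by the paper's full procedure is omitted.

```python
from itertools import combinations

def hybrid_search(control_disjunctions, contradiction_rules):
    """Assumption-based DDB sketch: DDB supplies the coarse focus (one
    terminal environment at a time); the ATMS scheduler refines it
    (subsets smallest-first, skipping known-inconsistent environments)."""
    nogoods = set()                     # minimal nogoods discovered so far
    solutions, executed, done = [], [], set()

    def known_bad(env):
        return any(ng <= env for ng in nogoods)

    def schedule(terminal):
        # ATMS refinement: subsets of the terminal environment, smallest first.
        members = sorted(terminal)
        for size in range(1, len(members) + 1):
            for subset in map(frozenset, combinations(members, size)):
                if subset in done or known_bad(subset):
                    continue            # consumer already run, or below the line
                done.add(subset)
                executed.append(subset)
                if subset in contradiction_rules:
                    nogoods.add(subset)
                    return False        # contradiction: abandon this terminal
        return True

    def backtrack(env, stack):
        # DDB coarse focus: chronological enumeration of terminal environments.
        if known_bad(env):
            return
        if not stack:
            if schedule(env):
                solutions.append(env)
            return
        for assumption in stack[0]:
            backtrack(env | {assumption}, stack[1:])

    backtrack(frozenset(), list(control_disjunctions))
    return solutions, executed
```

Because `done` records every consumer run and `nogoods` records every contradiction, no consumer is ever executed twice and no strict superset of a known nogood is ever scheduled, mirroring the guarantee argued for above.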
A consumer is not run unless the union of one of its antecedent environments and the current environment is consistent.⁴ When a contradiction is encountered, the scheduler finds the next (in chronological backtracking order) environment free from any known contradiction. Initially we start with the empty current environment E and a stack of control disjunctions S. The backtracker can be implemented by the procedure backtrack(E, S):

1. If S is empty, schedule(E), and return.
2. Let D be the first control disjunction of S, S the remainder.
3. If no remaining assumptions in D, return.
4. Let a be the first control assumption of D, D the remainder.
5. E' := {a} ∪ E.
6. If E' is consistent, backtrack(E', S).
7. If E is now inconsistent, return.
8. Go to 3.

Schedule(E) executes consumers in the smallest environments first. For example, schedule({A, α, 1}) executes the consumers in the following order: {A}, {α}, {A, α}, {1}, {A, 1}, {α, 1}, {A, α, 1}.

We must place two conditions on the problem solver to ensure the correct operation of this backtracking scheme. First, all problem-solving operations are performed by the consumers. In particular, the user may not add new consumers, assumptions, justifications, data, etc. during the search. The reason for this is that the new information could create new contexts which have already been implicitly examined by the backtracking mechanism. Thus,

³ In addition, the same claim holds for the number of environments explored, since environments are only explored if they have consumers associated with them.

⁴ The set of assumptions in an environment can be broken into two sets: control assumptions and non-control assumptions. The environment will be consistent with the current environment only if 1) the set of control assumptions is a subset of the current environment, and 2) the addition of the non-control assumptions does not cause the current environment to become inconsistent.
if new knowledge is added externally during the problem-solving activity, the backtracker must start searching the space from the beginning. In the ATMS framework this is relatively inexpensive (but not free) as the consumer scheduler guarantees no work is ever done twice and remembers all contradictions ever encountered. Second, every action of a consumer must itself depend on all its antecedents (a stipulation already imposed by the ATMS scheduler itself). If a consumer were permitted to perform arbitrary actions, then this would be the same as adding external knowledge.

8. Minimizing Consistency Checks

Prior to running any consumers in an environment, the backtracker first checks it for previously discovered inconsistencies. Although limiting the number of consumers executed is of primary importance, it is also important to limit the number of contexts the backtracking procedure is forced to examine for possible inconsistencies. Intrinsic to the ATMS are two capabilities which reduce the number of contexts tested for consistency. Although these two capabilities do not reduce the number of consumers executed, they do reduce the number of contexts the backtracker must examine.

Unlike a conventional justification-based TMS, the ATMS is guaranteed to explicitly identify all nogoods which follow from the current set of justifications. A conventional TMS will only identify the first nogood it comes across (which necessarily identifies the current environment as inconsistent). As a result, a conventional TMS may have to consider environments which would have been excluded using nogoods explicitly identified by the ATMS. For example, suppose we already have the justifications:

A, x ⇒ ⊥,   1, x ⇒ ⊥,   β ⇒ ⊥

and in the current environment {A, α, 1} a consumer produces:

α ⇒ x.

In a conventional TMS only one of the two contradictions may be explicitly noted, say {α, 1}, while an ATMS detects {A, α} immediately as well.
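The nogood bookkeeping this relies on can be sketched as a small database that keeps only set-minimal nogoods and answers "is this environment known inconsistent?" by subsumption. This is our own toy data structure, not the ATMS implementation.

```python
class NogoodDB:
    """Minimal-nogood database: stores only set-minimal nogoods and
    answers consistency queries by subsumption."""

    def __init__(self):
        self.nogoods = []

    def add(self, ng):
        ng = frozenset(ng)
        if self.inconsistent(ng):
            return                      # already subsumed by a smaller nogood
        # A new, smaller nogood makes any recorded superset redundant.
        self.nogoods = [g for g in self.nogoods if not ng <= g]
        self.nogoods.append(ng)

    def inconsistent(self, env):
        return any(g <= set(env) for g in self.nogoods)
```

Once a nogood such as {β} is recorded, every environment containing β is excluded without ever switching into it, which is exactly the saving the text describes for {A, β, 1} and {A, β, 2}.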
As a result a conventional TMS tests {A, α, 2} while the ATMS never considers it. In addition, {β} is marked nogood immediately, so {A, β, 1} and {A, β, 2} will never be considered by the assumption-based backtracker either.

The ATMS also takes advantage of the fact that the set of choices in each control disjunction is exhaustive. Consider the example (↦ indicates a consumer):

control{A, B}, control{C, D, E, ...}, control{X, Y},
A ↦ x = 2,
X ↦ x = 1,
Y ↦ x = 1.

The backtracker first explores the environment {A, C, X}. It is found that {A, X} is inconsistent and the corresponding nogood is recorded. Next it explores the context {A, C, Y}, noticing {A, Y} is nogood as well. Traditional dependency-directed backtracking would then try exploring the environments {A, D, X}, {A, D, Y}, {A, E, X}, {A, E, Y}, .... In each case the backtracker will realize that the environment is a superset of one of the explicit nogoods before running any consumers. Nevertheless, time is wasted switching to each context. Using the ATMS the backtracker is able to infer from the two nogoods, plus the exhaustivity of the third control disjunction, that {A} is also a nogood. Thus no superset environment of {A} is considered further.

To accomplish this the ATMS contains the following hyperresolution rule for disjunctions (of which control disjunctions are an instance):

control{A1, A2, ...}
nogood αi   where Ai ∈ αi and Aj ∉ αi for all j ≠ i
----------------------------------------------------
nogood ∪i [αi - {Ai}]

In this case, the ATMS infers:

control{X, Y}
nogood {A, X}
nogood {A, Y}
----------------
nogood {A}

Therefore, the ATMS-controlled backtracker considers the environment {B, C, X} next. It is important to note that the backtracking algorithm is unchanged: all the necessary nogoods are detected by the ATMS itself in its normal operation and are indistinguishable from the nogoods detected explicitly by consumers.

9.
Generalized Assumption-based DDB

The ATMS backtracking scheme outlined in the previous section presumes the set of control disjunctions is fixed at the beginning of problem solving and that each set of choices is completely independent. Neither is the case in practice. During problem solving a new set of choices may become of interest in addition to the ones already known. Furthermore, some choice sets are logically dependent on others. Together these make it difficult for the problem solver to reason about its own control. We follow the ideas of [8] and allow control decisions to have justifications as well. For example, we write:

x ⇒ control{A, B}

to state that if x holds, then control{A, B} is an active control disjunction and has the full force of a normal control disjunction. If a control disjunction has no valid justification, then it is passive and is ignored completely. When a control disjunction is passive, its assumptions (unless they appear in other control disjunctions) will never be part of the current environment and hence no consumer which solely depends on them will be executed. The execution condition for consumers is the same as that for the assumption-based DDB of Section 7.

The control disjunction is designed to be used within a schema which ensures that any inferences following from one of its control assumptions also depend on the control disjunction's antecedents. For example, the proposition (if a1, a2, ... hold, explore c1 first, then c2, ...),

a1 ∧ a2 ∧ ... → c1 ∨ c2 ∨ ...

is encoded by the control disjunction,

a1, a2, ... ⇒ control{C1, C2, ...}

and justifications,

a1, a2, ..., Ci ⇒ ci.

This encoding ensures that some ci will hold only in a context in which a1, a2, ... hold as well as the specified control assumption being active. A control disjunction may be believed in some context, but in order to be active it must be consistent with the current environment.
This formulation of dependency-directed backtracking is extremely powerful and allows the problem solver to dynamically manipulate the shape of its search space without requiring any change of terminology or representation. The backtracking algorithm is, however, rather complicated and we briefly outline it here. The generalized backtracker follows the simpler version by depending heavily on the nogood database of the ATMS. The backtracker maintains a single stack of the currently active control disjunctions. There are three significant events: a passive control disjunction becomes active, an active control disjunction becomes passive, and the current environment becomes inconsistent.

When a passive control disjunction becomes active, add it to the active stack, select the first control assumption from it that can be consistently added to the current environment, and make the resulting environment the new current environment. If no such assumption exists, the hyperresolution rule will have determined that the current environment is inconsistent. As long as the active stack does not change, search continues in chronological order. More than one control disjunction can become active simultaneously. In such cases, if desired, the control disjunctions can be sorted, oldest first, and an assumption chosen from each in turn. The sorting guarantees that the search space is explored in the order intended by the problem solver.

The case of an active control disjunction becoming passive is more complex. Although far more complex strategies are possible, the simplest technique is to temporarily unwind the control disjunctions on the active stack up to and including the affected control disjunction. As each control disjunction is removed, the active control assumption for that disjunction is removed from the current context.
After the affected disjunction is removed, the remaining temporarily removed control disjunctions still active are pushed back on the control stack (selecting their first consistent control assumption for the current context). A more complex strategy would only reexamine those environments whose exploration was blocked (i.e., the environment was inconsistent with the current environment) by nogoods containing a control assumption of the newly passive control disjunction. Such a scheme would avoid examining some environments twice. Fortunately, the ATMS scheduler explicitly records all pending consumers with environments, thus it is almost free to reexamine an environment.

When the current context becomes inconsistent, but the active stack is unchanged, the backtracking proceeds as in the simple assumption-based DDB scheme. However, when more than one of these three conditions occurs simultaneously, these operations must be interleaved to avoid needless thrashing.

10. Related Work

Our approach to backtracking is dependent on the ATMS [4,5,6] but exploits the ideas of explicit control of reasoning [8] and previous approaches to backtracking [9,12]. Our approach is also strongly related to the use of intelligent backtracking in PROLOG [1,2,3]. The backtracking scheme of [1] is similar to the case where the control disjunctions are fixed (our control disjunctions correspond to [1]'s value generators). When a contradiction is encountered, the stack is unwound to the first generator contributing to the contradiction. When a generator is exhausted, the reasons for each eliminated value are combined to form a contradiction which is used to guide the backtracking. This corresponds to the ATMS' use of hyperresolution.
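The hyperresolution step described in Section 8 (and echoed in the generator analogy just above) can be sketched directly. This is our own rendering of the rule: given an exhaustive control disjunction and, for each of its assumptions, a nogood containing that assumption and no other assumption of the disjunction, the union of the nogoods minus those assumptions is itself a nogood.

```python
def hyperresolve(disjunction, nogoods):
    """Apply the hyperresolution rule for an exhaustive control disjunction.
    Returns the inferred nogood, or None when the rule does not fire."""
    resolved = []
    for a in disjunction:
        # Find a nogood containing `a` but no other member of the disjunction.
        ng = next((g for g in nogoods
                   if a in g and not (set(disjunction) - {a}) & g), None)
        if ng is None:
            return None            # some branch is not yet ruled out
        resolved.append(ng - {a})
    return frozenset().union(*resolved)
```

On the paper's example, control{X, Y} with nogoods {A, X} and {A, Y} yields the nogood {A}, so no superset of {A} need ever be visited again.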
The basic difference (as far as backtracking is concerned) is that the assumption-based backtracker records all nogoods permanently while the PROLOG intelligent backtrackers throw away contradiction records when the stack is unwound. As a consequence some contradictions must be continually rediscovered. [1] argues that the cost of recording and checking for possible contradictions is not worth the computational overhead incurred.

11. Conclusions

Problem solvers built on the ATMS are often very difficult to control. Some problems are inherently ill-suited to the ATMS, but for many others the addition of a simple backtracking control structure resolves many of the difficulties. This paper has presented a simple backtracking scheme which exploits some of the unique properties of both the ATMS and DDB, resulting in a hybrid algorithm whose worst case performance based on rule invocations is superior to the two approaches individually. This performance holds for problems demanding anywhere from a single solution to all solutions.

Non-control assumptions are treated as conventional ATMS assumptions, while control assumptions are treated much like DDB in a justification-based TMS. In addition, the system supports conditional control disjunctions used to model the interactions between "not quite independent" choices. The result is an overall problem solver with the advantages of both an ATMS and dependency-directed backtracking.

ACKNOWLEDGMENTS

Ken Forbus, David McAllester, Phil McBride, Paul Morris, Bob Nado and Leah Williams provided lively discussion, insights, and comments on the topic.

BIBLIOGRAPHY

1. Bruynooghe, M., Solving combinatorial search problems by intelligent backtracking, Information Processing Letters 12 (1981) 36-39.
2. Bruynooghe, M. and Pereira, L.M., Deduction revision by intelligent backtracking, in: J.A. Campbell (Ed.), Current Issues in Prolog Implementation, (Wiley, New York, 1984) 194-215.
3.
Clocksin, W.F. and Mellish, C.S., Programming in Prolog, (Springer-Verlag, New York, 1981).
4. de Kleer, J., An assumption-based truth maintenance system, Artificial Intelligence 28 (1986) 127-162.
5. de Kleer, J., Extending the ATMS, Artificial Intelligence 28 (1986) 163-196.
6. de Kleer, J., Problem solving with the ATMS, Artificial Intelligence 28 (1986) 197-224.
7. de Kleer, J., Choices without backtracking, Proceedings of the National Conference on Artificial Intelligence, Austin, TX (August 1984), 79-85.
8. de Kleer, J., Doyle, J., Steele, G.L. and Sussman, G.J., Explicit control of reasoning, in: Artificial Intelligence: An MIT Perspective, edited by P.H. Winston and R.H. Brown, 1979. Also in: Proceedings of the Symposium on Artificial Intelligence and Programming Languages, 1977. Also in: Readings in Knowledge Representation, edited by R.J. Brachman and H.J. Levesque, (Morgan Kaufmann, 1985).
9. Doyle, J., A truth maintenance system, Artificial Intelligence 12 (1979).
10. McAllester, D., A widely used truth maintenance system, unpublished, 1985.
11. Stallman, R. and Sussman, G.J., Forward reasoning and dependency-directed backtracking in a system for computer-aided circuit analysis, Artificial Intelligence 9 (1977) 135-196.
12. Steele, G.L., The definition and implementation of a computer programming language based on constraints, AI Technical Report 595, MIT, Cambridge, MA, 1979.
KNOWLEDGE-BASED VALIDITY MAINTENANCE FOR PRODUCTION SYSTEMS†

Philip R. Schaefer
Martin Marietta Denver Aerospace
P.O. Box 179, M.S. 0427, Denver, CO 80201

Isil H. Bozma
Yale University, New Haven, CT 06520

Randall D. Beer
Center for Automation and Intelligent Systems Research
Case Western Reserve University, Cleveland, OH 44106

ABSTRACT

In many problem domains, an action may be taken by an expert which, due to new inferences or a changing domain situation, should be retracted. To this end, an effective problem solver will need to use some kind of validity-maintenance system, so that it can gracefully recover from invalid previous decisions. Unfortunately, the standard IF/THEN paradigm often used to encode expert behavior does not readily allow the expression and processing of this validity knowledge. We present a new extension to that rule paradigm which can be used to augment production-rule-based systems with validity maintenance capabilities, and demonstrate a straightforward algorithm for its interpretation.

I INTRODUCTION

Through effectiveness for Knowledge Engineering and modularity of Expert Systems construction, Production Rules have become a very popular AI paradigm (Barr and Feigenbaum, 1981). Their advantages as a formalism in Artificial Intelligence are due to several reasons. First, small chunks of knowledge can be incrementally assembled to augment the behavior of the intelligent system. As new knowledge is acquired, the system performance is upgraded incrementally (Winston, 1984). Second, an effective control strategy can handle complex domains using only simple rules. Although the control strategy will need to resolve problems arising from interacting or conflicting rules, achieving this is usually easier than dealing with the corresponding complications that would arise from designing and modifying a single, complex program encoding the same expertise.
†This work was performed at the Center for Automation and Intelligent Systems Research, Case Western Reserve University.

Unfortunately, several problems can arise in systems where a large number of rules may sequentially fire during the problem-solving process:
-Previous inferences can become inaccurate
-Old inferences can become inconsistent with the new
-Intermediate solutions can be non-optimal when compared with new knowledge.

To overcome these problems, a non-monotonic validity maintenance scheme is desirable (Rich, 1983). With such a scheme, the results of rules which have previously fired can be retracted when necessary. In this way, inferences which become inaccurate or inconsistent as a result of new observations or new inferences can be gracefully removed from the system, allowing newer, correct inferences to be made. Additionally, assumptions which were originally made in an effort to achieve a feasible solution can be retracted when contradictions arise or when superior assumptions are found.

Human experts, of course, are proficient at dealing with this kind of validity maintenance in our dynamic everyday world. People seem quite willing to make assumptions or conclusions before complete evidence is available. The important point, however, is that they also are able to correct or reject these inferences as more information becomes available or as the domain situation changes over time.

Several systems (called "Truth Maintenance Systems," or TMSs) for introducing similar validity maintenance abilities to computer implementations have been described in the literature. Systems such as Doyle's (1979) require the AI system to maintain a single consistent set of inferences. A newer kind of validity maintenance, an "assumption-based" approach, keeps track of assumption sets supporting the various inferences, and thus allows multiple, possibly contradictory, inference sets to be developed simultaneously (de Kleer, 1984).
In each of these systems, a network of reasons or assumptions behind the inferences is constructed, which is manipulated to maintain validity.

From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved.

However, it is not clear exactly how the validity-maintenance abilities of the human expert can be naturally expressed in terms of such networks. It may be possible for the expert to indicate how the individual working-memory elements in the network are dependent on each other. However, the implementation-level network is at a considerably lower level of representation than the "knowledge level" expert production rules. It would be unsatisfactory to require the expert to think about the domain at both levels. It is clearly important, therefore, to be able to maintain and process validity knowledge in a Production System in a TMS-like fashion. Just as important, however, is the need to express the human expert's validity knowledge in a natural and straightforward way, to preserve the Knowledge Engineering advantages of the production-rule paradigm.

II EXPRESSING VALIDITY MAINTENANCE KNOWLEDGE IN PRODUCTION RULES

Although a validity-maintenance mechanism is clearly important in many expert systems, it is not immediately apparent how an IF/THEN rule approach can capture the necessary additional knowledge. The usual reading of a rule in a production system is something like

IF <this condition is seen> THEN <make this conclusion>

(Barr and Feigenbaum, 1981; Weiss and Kulikowski, 1984). During the reasoning process, if the condition is observed even for an instant, the conclusion is asserted. Although a separate rule might be able to remove the conclusion, the semantics of this rule assumes that it will apply for all time. This kind of rule will unfortunately cause problems if the expert system is reasoning in a world of changing knowledge or data.
If the following rule were fired under such an interpretation:

IF <the weather appears bad> THEN <do not have the picnic>

the system would not have the knowledge that the conclusion was no longer valid if the weather later became good.

One alternate IF/THEN rule interpretation to handle this kind of problem could be

IF <this condition is true> THEN <this conclusion is true>

In that case, when <the weather appears bad> became false, the <do not have the picnic> conclusion would become false as well. Such an interpretation would also lead to problems in a dynamic knowledge environment. Consider the effects of the conclusion of the rule

IF <the ignition switch is on> and <the starter makes no sound> THEN <there is an electrical problem>

When the ignition switch were turned off to perform some other test, the <electrical problem> conclusion would be forgotten.

Occasionally, knowledge seemingly of a rule-based nature may defy expression in either of these interpretations. For example, a meeting scheduler might have a rule such as

IF <everyone can attend the meeting at time X> THEN <schedule the meeting at time X>

with the condition "this is valid as long as conflicts arise for no more than 10% of attendees." In this case, the validity condition is not even one of the antecedents of the original decision.

Using either of the above two rule interpretations, it is probably possible to get the desired behavior in the IF/THEN approach by including clever "patches" of several additional rules. From a Knowledge-Engineering perspective, however, this is an unsatisfactory solution. The modularity of the rule-based knowledge representation would be compromised, and rules which are quite straightforward for a human to understand would be stated to the system in a complex, barely comprehensible fashion. In each case, it is evident that a human expert stating such rules has a clear intention of the validity conditions implied. We propose that the IF/THEN rule paradigm be extended to allow a natural and conceptually "clean" expression of this knowledge.

The proposed extension is to use, rather than IF/THEN constructs, an IF/THEN/AS-LONG-AS construct. The interpretation of a rule in this new form is

IF <this condition is seen> THEN <make this conclusion> AS-LONG-AS <this validity condition remains true>.

The three above examples could be written as:

IF <the weather appears bad> THEN <do not have the picnic> AS-LONG-AS <the weather remains bad or threatening>

IF <the ignition switch is on> and <the starter makes no sound> THEN <there is an electrical problem>

IF <everyone can attend the meeting at time X> THEN <schedule the meeting at time X> AS-LONG-AS <conflicts arise for <10% of attendees>

With this construct, the Knowledge Engineer states an IF/THEN rule in the usual form, then adds the validity conditions under which the result remains valid. In this way, the reasoning and validity maintenance of the expert system will more closely resemble that as perceived by the expert when stating the rules. There is no longer a need to use elaborate "rule programming" to achieve the desired validity-maintenance behavior.

III PROCESSING IF/THEN/AS-LONG-AS RULES

Processing rules containing AS-LONG-AS parts will, of course, require some additional steps beyond the usual matching and chaining of standard IF/THEN rules. Here, we discuss algorithms that can be used for such processing. Fortunately, techniques exist today which, although not designed with AS-LONG-AS rules in mind, do provide the dependency processing such rules will require. Dependency networks, such as found in Truth Maintenance Systems, are one way to internally store the dependence of conclusions on the validity conditions associated with the inference rules.
With such storage of dependency information, an efficient rejection of invalidated decisions can take place as soon as any validity condition is violated. Although the full implications of the integration of IF/THEN/AS-LONG-AS rules with a complete TMS, including the associated assumption selection, have not yet been studied, it has been established that at least the retraction portions of TMS mechanisms are effective for these rules.

Our validity-maintenance algorithm, most similar to (Doyle, 1979), comprises three steps:

- note the dependency relationships implied by an AS-LONG-AS part when a rule fires to find a value for an element of the working memory;
- check for validity of other memory elements when a new value is entered into memory, invalidating those elements whose AS-LONG-AS conditions are violated;
- replace the old dependency information in the system when the value of an element is altered.

The first step can be implemented with the following algorithm. When a data element E is inserted into memory by rule R:

1. Examine the AS-LONG-AS part of R.
2. For each working memory element E' associated with that AS-LONG-AS evaluation:
   1. store E as a dependent of E', along with R and a list of the variable bindings associated with R at that time;
   2. store E' as A-CAUSE-OF E.

Further, if the element had a value previously, the system must check to see if any of its dependents are affected, and invalidate as necessary:

1. For each dependent D associated with memory element E:
   1. Examine the AS-LONG-AS part of the rule stored with D, in the context of the variable bindings stored, to see if D's validity conditions remain true.
   2. If it evaluates to "false," invalidate the value of D.
2. For each invalidated dependent D, recursively perform this invalidation algorithm until no more invalid working memory elements result.
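The recording and invalidation steps above can be sketched as a small program. This is our own minimal illustration, not the DIFF implementation: the `WorkingMemory` class, its method names, and the weather/picnic elements are all hypothetical, validity conditions are modelled as Python predicates over the memory, and variable bindings are omitted.

```python
class WorkingMemory:
    """Working memory with AS-LONG-AS dependency bookkeeping."""

    def __init__(self):
        self.values = {}      # element -> current value
        self.dependents = {}  # E' -> [(E, rule, condition), ...]
        self.causes = {}      # E  -> [E', ...]  (the A-CAUSE-OF links)

    def set(self, element, value, rule=None, as_long_as=None, reads=()):
        # Step 3: clean out dependency links left by the previous value.
        if element in self.values:
            for cause in self.causes.pop(element, []):
                self.dependents[cause] = [
                    d for d in self.dependents[cause] if d[0] != element]
        self.values[element] = value
        # Step 1: note the dependencies implied by the AS-LONG-AS part.
        if as_long_as is not None:
            for e_prime in reads:
                self.dependents.setdefault(e_prime, []).append(
                    (element, rule, as_long_as))
                self.causes.setdefault(element, []).append(e_prime)
        # Step 2: revalidate everything that depends on the changed element.
        self._revalidate(element)

    def _revalidate(self, element):
        for dep, _rule, condition in list(self.dependents.get(element, [])):
            if dep in self.values and not condition(self):
                del self.values[dep]       # invalidate ...
                self._revalidate(dep)      # ... and recurse on its dependents


wm = WorkingMemory()
wm.set("weather", "bad")
wm.set("picnic", "cancelled", rule="picnic-rule",
       as_long_as=lambda m: m.values.get("weather") == "bad",
       reads=["weather"])
wm.set("weather", "good")   # AS-LONG-AS violated: "picnic" is retracted
```

Because the invalidation recurses over dependents, a chain of conclusions built on a retracted value is removed in one pass, as in the TMS-style retraction described above.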
Additionally, it is necessary to "clean out" any old dependency relations that the previous value of E had. When a previously-existing element E is given a new value: for each A that is A-CAUSE-OF E, remove E from the dependents list of A.

Using this validity-maintenance algorithm, effective expression and processing of validity knowledge can be performed, as we next demonstrate.

IV AN EXAMPLE OF AS-LONG-AS PROCESSING

To illustrate the effectiveness of rules in the IF/THEN/AS-LONG-AS construct, the following meeting-scheduler system is presented, which makes use of the kinds of validity conditions previously discussed. The following rules are used for initially scheduling a meeting, determining when a person has a conflict, and scheduling a meeting around conflicts:

rule-1: IF no attendee has a conflict at proposed time T THEN schedule the meeting at time T AS-LONG-AS <= 25% of attendees get conflicts

rule-2: IF person P has multiple meetings scheduled for time T THEN person P has a conflict of value T AS-LONG-AS those multiple meetings remain at time T

rule-3: IF meeting requested time = T and meeting scheduled time cannot = T THEN propose meeting for time T+1

Figure 1 shows the initial knowledge base, with "forms" to be filled with the meeting and attendee information, just one of many possible knowledge representations. Suppose that the expert system first considers MEETING-1. It is desired to assign it a scheduled time, so rule-1 and rule-3 are matched as providing scheduling information. In evaluating rule-3, the system tries to schedule the meeting at its requested time, 10:00, and, because of rule-1, succeeds. The "scheduled time" element is therefore filled with the value 10:00. Next, the AS-LONG-AS part of rule-1 is examined. In so doing, the system looks at the list of MEETING-1 attendees and looks for conflicts of each one.
Validity-maintenance links are stored to record the dependency of the scheduled time on each of the working memory elements accessed during this AS-LONG-AS evaluation. In this case, links would be stored between the "attendees" element of MEETING-1 and the "scheduled time" element. For the same reason, links would be stored between the MARY, ELLEN, and STAN "conflicts" elements and the "scheduled time" element. Example dependency links can be seen in Figure 2. Next, the system considers MEETING-2, and similarly stores 10:00 in its "scheduled time" element, dependent on the conflicts of JOE, SUE, and BOB, and the "attendees" element of MEETING-2. Additionally, because rule-2 was used to find the possible conflicts of the attendees, all attendees' "conflicts" elements will be stored as dependencies of the respective meeting "scheduled time" elements, due to the rule-2 AS-LONG-AS part.

[Figure 1. The initial knowledge base for the scheduler example: person forms for Mary, Stan, and Ellen (meetings: meeting-1) and for Joe, Sue, and Bob (meetings: meeting-2), each with an empty "conflicts" entry; meeting forms for meeting-1 (attendees: Mary, Stan, Ellen) and meeting-2 (attendees: Joe, Sue, Bob), each with requested time 10:00 and empty "proposed time" and "scheduled time" entries.]

[Figure 2. After the initial scheduling, JOE is placed on the meeting-1 attendees list (meeting-1 attendees: Joe, Mary, Stan, Ellen; proposed and scheduled time 10:00; meeting-2 attendees: Joe, Sue, Bob; requested time 10:00), with rule-1 and rule-2 AS-LONG-AS dependency links connecting the attendees' "conflicts" entries to the meetings' "scheduled time" entries. This causes validity maintenance, which will reschedule meeting-2, as described in the text.]

Now, suppose that someone decides that JOE should attend MEETING-1 as well. Correspondingly, MEETING-1 is put on his "meetings" list and JOE is put on the meeting "attendees" list. Figure 2 shows how this new knowledge affects the validity-maintenance network.
Validity maintenance processing is invoked whenever a working-memory element upon which other elements depend is modified. Therefore, because the "scheduled time" of MEETING-1 is a dependent of the modified "attendees" entry, its validity conditions, the AS-LONG-AS part of rule-1, are checked. The "conflicts" slots of all the attendees are evaluated, and in the process, JOE receives a "10:00" in his "conflicts" element. Despite the conflict, the AS-LONG-AS part of rule-1 remains true for MEETING-1, and its scheduled time remains valid. However, the MEETING-2 scheduled time is a dependent of the now-modified JOE "conflicts" entry, so its validity condition must be checked as well. For MEETING-2, though, 33% of the attendees now have conflicts, violating the AS-LONG-AS part. Therefore, the validity-maintenance mechanism removes the scheduled time from MEETING-2 and its dependent, JOE's conflict. A new value for the scheduled time must be found, and in rescheduling, rule-3 fires, giving MEETING-2 the scheduled time of 11:00. At this point, everyone is happily scheduled without any conflicts.

From this example, it becomes clear that even with simple AS-LONG-AS parts in the rules, quite complex validity maintenance can result. To encode this explicitly without IF/THEN/AS-LONG-AS rules would have required considerably more rule engineering.

V A STUDY CASE FOR THE EXTENDED PRODUCTION RULES

We have implemented a form-filling production-rules-based expert system called DIFF, the Domain Independent Form Filler (Beer, 1986). The knowledge in DIFF is stored in user-defined forms, which contain as their values either knowledge provided to the system a priori or knowledge inferred by the system as it proceeds.
The system is goal-driven to fill requested entries of specified forms, using production rules and the other form entry values. Many form-filling tasks, such as an implemented Course Scheduling task similar to the example above, require validity maintenance on form entries. DIFF uses IF/THEN/AS-LONG-AS rules to accommodate this requirement. Within the form-filling paradigm of DIFF, it was quite straightforward to implement the validity maintenance algorithm described above. The various form entries in the system correspond to elements of Working Memory. Therefore, each time a form entry is filled, all of the form accesses in the AS-LONG-AS part of the rule are given the new entry as a dependent. A future change in the values of any of these form accesses will cause validity checking of the dependent form. In addition to the completed course-scheduler example, current work includes using DIFF for cardio-vascular diagnosis and tax-form consulting.

VI CONCLUSION

Validity maintenance in production systems used in domains of changing knowledge is crucial if the system is to avoid the inaccuracy, inconsistency, and non-optimality problems of the basic IF/THEN formulation. To this end, in addition to the usual knowledge about what domain inferences to make, knowledge of the context under which these inferences will remain true in the future is necessary. For Knowledge Engineering purposes, the validity knowledge should be expressible in a natural and compact way. The new approach of using rules in the IF/THEN/AS-LONG-AS form meets these requirements. Because of the straightforward algorithms available for its implementation, the IF/THEN/AS-LONG-AS paradigm should provide inroads to more practical rule-based design for complex domains.

ACKNOWLEDGEMENTS

We would like to express our thanks to the Director of CAISR, Professor Yoh-Han Pao, for support of this work, and to Professor Leon Sterling, for his comments on an earlier version of this paper.
REFERENCES

[1] Barr, A. and Feigenbaum, E.A., The Handbook of Artificial Intelligence, Vol. I, pp. 190-199, William Kaufmann, 1981.
[2] Beer, Randall, "Interim DIFF User's Manual," Technical Report TR-101-86, Center for Automation and Intelligent Systems Research, Case Western Reserve University, 1986.
[3] de Kleer, Johan, "Choices without Backtracking," Proceedings of the National Conference on Artificial Intelligence, pp. 79-85, 1984.
[4] Doyle, Jon, "A Truth Maintenance System," Artificial Intelligence, Vol. 12, pp. 231-272, North-Holland, 1979.
[5] Rich, Elaine, "Knowledge Representation Using Other Logics," in Artificial Intelligence, McGraw-Hill, New York, 1983.
[6] Hayes-Roth, F., Waterman, D.A., and Lenat, D.B., Building Expert Systems, Addison-Wesley, Reading, MA, 1983.
[7] Weiss, S.M., and Kulikowski, C.A., Designing Expert Systems, Rowman and Allanheld, Totowa, NJ, 1984.
[8] Winston, P.H., "Rule-based Systems for Analysis," Chapter 3 in Artificial Intelligence, Addison-Wesley, 1984.
A PARALLEL SELF-MODIFYING DEFAULT REASONING SYSTEM

Jack Minker, Donald Perlis, Krishnan Subramanian
Department of Computer Science and Institute for Advanced Computer Studies, University of Maryland, College Park, MD 20742

ABSTRACT

As a step in our efforts toward the study of real-time monitoring of the inferential process in reasoning systems, we have devised a method of representing knowledge for the purpose of default reasoning. A meta-level implementation that permits effective monitoring of the deductive process as it proceeds, providing information on the state of the answer procurement process, has been developed on the Parallel Inference System (PRISM) at the University of Maryland. Also described is an implementation in PROLOG (and to be incorporated in the above) of a learning feature used to calculate, for purposes of issuing default answers, the current depth of inference for a query from that obtained from similar queries posed earlier.

Keywords: automated (default) reasoning, learning, cognitive modelling, user interface technology

1. Introduction

In continuing the study of default reasoning in real-time systems [13] we have encountered the phenomenon of tentative answers to queries, which may alter as the system continues to search and to perform deductions. The idea underlying this is that for queries having natural default responses when no other response is available, it may be the case that the failure of the reasoning system to respond with a positive answer quickly is an indication that no such answer is likely to be forthcoming; in such a case the default answer may be provided, even though the system has not finished all possible lines of reasoning. To carry out such reasoning, the deductive engine must be monitored so that at any time it is known whether an answer has been returned, allowing a decision as to whether to issue a default conclusion in the absence of an answer.
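The engine/monitor split just described can be sketched as follows. This is a hedged illustration, not PRISM code: the engine is modelled as a generator that yields `None` for each inference step until it yields an answer, and `monitored_query`, `slow_engine`, the step `budget`, and the report labels are all our own assumed names.

```python
def monitored_query(engine, budget):
    """Relay the engine's progress, issuing a tentative default answer.

    `engine` yields None once per inference step until it yields an
    answer.  If `budget` steps pass with no answer, a tentative 'no'
    is issued; a later success amends it."""
    steps, defaulted = 0, False
    for result in engine:
        if result is not None:
            yield ("amended-yes" if defaulted else "yes", result, steps)
            return
        steps += 1
        if steps == budget and not defaulted:
            defaulted = True
            yield ("tentative-no", None, steps)
    if not defaulted:
        yield ("no", None, steps)


def slow_engine(answer_at, answer):
    """A toy engine that finds `answer` after `answer_at` inference steps."""
    for _ in range(answer_at):
        yield None
    yield answer


reports = list(monitored_query(slow_engine(5, "346-9344"), budget=3))
# reports == [("tentative-no", None, 3), ("amended-yes", "346-9344", 5)]
```

The generator formulation keeps the interface conceptually separate from the engine while still reacting step by step to the engine's real-time behaviour, which is the arrangement argued for below.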
We have implemented a mechanism for this purpose, both in PRISM (a parallel inference system) and in PROLOG. We state the problem below. Then we describe the approach adopted, and briefly the PRISM system. The implementation is described in section 2. We give examples of application of our methods in section 3, and in section 4 we discuss related future work.

The problem addressed can be stated abstractly as follows: Given an inference engine, we wish to monitor its behavior so that while deductive efforts are in progress, another mechanism can decide when (and whether) to issue default answers based on the (so-far) failure of the original engine to find an answer. That is, our new mechanism will be an interface between the user and the deductive engine. However, the interface is to react in real time to the real-time behavior of the engine, this being the key to its default conclusions.

This also has ties with human cognitive behavior. When asked a question, such as 'what is Tom's phone number?' we may respond by cogitating, then saying 'I don't know,' only later to amend this with 'Wait! Yes, I do know, it's 346-9344.' The possibility of error is explicitly present in such reasoning. For certain queries it may be inappropriate to conclude falsity simply because an answer has not come quickly, while for others it may not be.

As a practical matter, the interface that is to make these decisions can be part of the deductive engine itself; but conceptually it is perhaps more easily regarded as separate. In the next section we describe the operation of the particular mechanisms we have developed. At the present time, we do not have a mechanical procedure in the main system to decide when to employ a default; i.e., defaults are employed at all times if no answer is (so-far) provided by the engine.

Much of the work in default reasoning has been of a theoretic and formal nature, e.g. [7,8,9,10,11,14,16].
We are here concerned with issues involving the practical aspects. The primary motivation is the study of intelligent and parallel question-answering capabilities in computers. Our initial attempt was a simple parallel meta-interpreter, with a desire to examine and study its functioning at modelling human answering behavior. An inference step count exhibits, in some sense, the 'depth' or 'intensity' of the reasoning involved. A dynamic feedback capability keeps the user informed of the status of the inference process, in real time. A simple learning feature has been implemented in PROLOG using depth information from previous queries in the inference for the current query. An exclusively object-level implementation would have yielded a much less flexible system.

PRISM (PaRallel Inference SysteM), developed at the University of Maryland, is the inference engine that we used to exploit parallelism. It employs logic programming techniques and affords explicit control of goals, in an evolving logic programming environment. It is designed to run on ZMOB, the Department's experimental parallel computing system [2,5,15]. Currently, PRISM runs with a software belt that simulates the ZMOB hardware belt. The PRISM system is an integration of four major subsystems: the Problem Solving Machines (PSMs) that manage the tree of goals, the Intensional Database Machines (IDBs) that contain the general axioms, the Extensional Database Machines (EDBs) that contain fully ground function-free atoms, and the Constraint Machine (CSM) that contains integrity constraints. A host subsystem serves as the interface between the user and PRISM, receiving queries and relaying back answers. PRISM supports goal types by a notation that consists of angle brackets (for sequential, left-to-right execution) and braces (for parallel execution) [15].

A query posed to the PRISM system can belong to one of the following categories: (1) A single goal, e.g.
G1 or <G1> or {G1}; (2) A list of goals that have to be solved strictly sequentially, e.g. <G1,G2,G3>; (3) A list of goals, all of which may be solved in parallel, e.g. {G1,G2,G3}; (4) A goal list to be solved basically sequentially, but containing sublists solvable in parallel, e.g. <G1,{G2,G3}>, <G1,{G2,<G3,G4>}> etc.; and (5) A list of goals that can be solved basically in parallel, but containing sublists that have to be solved sequentially, e.g. {G1,<G2,G3>}, {G1,<G2,{G3,G4}>} etc.

2. System Description

The basic system centers around parallelism. The key notion is a mechanism that provides default reasoning and comes to decisions rapidly even at the expense of making mistakes, and can revise when to make a default decision based on past performance. Initially defaults are made as follows. Given that a predicate letter is fully extensional, the system should conclude on lookup either that it has the answer or no answer is possible. However, given a predicate letter that is fully intensional, it should conclude, after the system has gone along the shortest possible path to a solution, either failure or success. As the system progresses it may learn that the (final) solution on the average takes longer (or shorter) than anticipated by the current default specification, obtained as the depth (AID, or actual inference depth) of the and-or tree that corresponds to the inference. After each query, the depth is reestimated for a subsequent query that would involve the same predicate letter. In the case of a predicate letter that is both extensional and intensional, we ignore the extensional possibility and calculate the depth as if it were fully intensional. In fact, we can arbitrarily specify default depth conditions and let the system learn the appropriate value to use.
Indeed, this will be seen in some of the experiments that we describe in section 3. One should, however, start with reasonable values rather than arbitrary values, since the system will then converge more rapidly to the correct default depth values. A simple formula is used for the purpose of 'learning' new default depth values:

New depth = (Old depth * N + Latest depth)/(N + 1)

where N is the number of queries (before the current one) that involved the same predicate and Latest depth is the current AID. Thus at any instant, a predicate has tagged to it a depth estimate and the number of queries thus far posed involving it. Clearly, the greater the number of queries, the better the estimate (supposedly indicating better learning). As PRISM does not yet have an assert/retract capability, a sequential version has been implemented in PROLOG. When a depth is reached an answer is returned; however, if the answer is negative (nothing found) the system continues to search for an answer in order to find any possible greater depth it 'should' have gone to; this latter is the 'Latest depth' in the formula.

We employ two binary predicates to represent all information that a user wishes to supply. One identifies all facts that go into the EDBs, while the other identifies the clauses (facts and rules) placed in the IDBs. These correspond to the object-level placement of knowledge, and encompass all information in the user's programs. This is used by the meta-interpreter (discussed next) that issues the meta-answers, while monitoring the answer-deduction process.

2.1 Design Aspects

The basic structure of the system is shown in Fig. (1). The knowledge base module is exclusively for all user-supplied information, and consists of atomic clauses each of which is identified as belonging either to the EDB or the IDB with the two predicates. This is the only section that is directly relevant to the user, as all meta-level activity is transparent to him.
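The depth-update formula of section 2 is a running average of the actual inference depths (AIDs) observed for a predicate. A quick numerical sketch of its convergence behaviour (our own; treating the initial depth tag as one prior observation, i.e. seeding N = 1, is an assumption):

```python
def new_depth(old_depth, n, latest_depth):
    """n = number of earlier queries involving the same predicate."""
    return (old_depth * n + latest_depth) / (n + 1)

# Start from a too-low tag of 1 (as in plot (a) of section 3) and let the
# actual inference depth (AID) repeatedly turn out to be 4.
depth, n = 1.0, 1   # assumption: the initial tag counts as one observation
for aid in [4, 4, 4, 4, 4]:
    depth = new_depth(depth, n, aid)
    n += 1
# depth: 2.5, 3.0, 3.25, 3.4, 3.5 -- climbing asymptotically toward 4
```

The tag rises quickly at first and then ever more slowly, which matches the asymptotic stabilization reported for the plots in section 3.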
The second module comprises the inference machinery, responsible for the meta-interpretation activity upon the knowledge base. In this module, we have maintained a clear distinction between a kernel that exclusively handles all interaction with the user-supplied knowledge, and a layered structure that encompasses the kernel. The latter serves to reduce any demand for an interaction with the knowledge base to elementary level interactions, which the kernel routines can directly act upon. Such a methodology proves to be advantageous for experiments that need a dynamic and evolving inferential structure, particularly in a parallel logic programming environment.

[Fig. (1): the layered structure of the inference module, with the kernel innermost.]

A single layer is responsible for a class of queries that would comprise one or more categories. A partition within a layer is responsible for all queries that fall in the same category. A layer may only make use of the facilities available in any layer ensconced within and those at its own level. Such a discipline enables one to construct the entire assemblage (inside-out) in an incremental, structured fashion.

2.2 A Modular, Multi-layered Implementation

Solving a single goal is fundamental to solving any query, and this is the function of the kernel in the inference module. This may be regarded as the elementary level at which any inferencing has to begin. Solving any query, irrespective of the number of goals involved or their nature (i.e. sequential/parallel), can be reduced to solving single goals. This isolates the kernel from interactions between the inference assembly and the knowledge base, and proves to be beneficial when one constantly needs to adjust the inference mechanism (to model the answering process of a human reasoner), i.e. it can be iteratively refined till its operation begins to exhibit the intended characteristics. Also, this helps focus attention on a small and compact section that contains but a few routines, needed to meta-interpret a single goal.
Just as important would be the way a query is reduced to elementary level interactions. The layers that encompass the kernel effect this reduction, which would then require solving single goals only, i.e. exclusive kernel activity. The layer that immediately surrounds the kernel handles categories 2 and 3. That half of this layer that handles a sequential list hands in a goal to the kernel, and only when the kernel is through with it does it hand in the next goal in the sequence. The other half of the layer that is responsible for the parallel ones, i.e., category 3, causes PRISM to spawn the requisite number of kernels to handle all the goals in the list in parallel. The complete system is shown in Fig. (2).

The kernel is used for solving a single goal, be it a query by itself, or part of one. We view a goal as an ordered pair of the predicate and the argument list. The skeleton kernel as tailored for the answer-behavior model is presented here. The underlying notion of default reasoning that we are exploiting is implemented in part by the kernel, as follows: We consider that EDB data is readily accessible for immediate retrieval, and that IDB data may take longer to utilize. Thus if a query is such that the system's database would normally be expected to contain an answer to a given query in EDB, such an answer should be forthcoming quickly. If none so appears, the appropriate default assumption is that the query is false. However, this does not rule out the possibility of a later answer being found, that is, as IDB is searched and inferences are made. These ideas are (partially) illustrated in the following sample axioms from the kernel:

(K1) *Solve(Pred,Arglist) <- EDB(Pred,Arglist), Report-Success().
(K2) *Solve(Pred,Arglist) <- Report-Tentative-Failure(), IDB([Pred,Arglist],Matching-Body), Analyze(Arglist,Matching-Body).
(K3) Analyze(Arglist, NIL).
(K4) Analyze(Arglist, Subgoal-List) <- Recurse(Subgoal-List).
(K5) Recurse(NIL).
(K6) Recurse([[Pred1,Arglist1]|Rest-Subgoals]) <- Solve-Aux(Pred1,Arglist1), Recurse(Rest-Subgoals).
(K7) *Solve-Aux(Pred,Arglist) <- EDB(Pred,Arglist).
(K8) *Solve-Aux(Pred,Arglist) <- IDB([Pred,Arglist],Matching-Body), Analyze(Arglist,Matching-Body).

[Fig. (2) The complete system. Legend: 1. User; 2. Preprocessor; 3. Meta-interpreter; 4. Knowledge Base; 5. Dispatcher; 6. Kernel; 7. Inner Layer; 8. Outer Layer - Inner Tier; 9. Outer Layer - Outer Tier; 10. The PRISM System; A. The half handling sequential queries; B. The half handling parallel queries; a. Data/Programs; b. Queries; c. Response; d. Internal Format.]

In the above clauses, 'EDB' and 'IDB' are the two predicates used for providing all the user-supplied information. Since the EDB models the short-term memory, it is searched first and only if it fails is the IDB search taken up. The use of the asterisk is imperative in this instance, since in our model we do not want any attempt at the IDB until after the EDB has failed to produce a viable answer. Further, we intend to have a success/failure report from the EDB and other status information only for the top-level goals in the immediate query posed, and not for any subgoals. This necessitates the auxiliary 'Solve-Aux' procedure above. (The kernel handles category (1) of the queries discussed earlier.) Thus a query P(X) will be solved if it matches an EDB fact, or can be inferred from the IDB. In the latter case, a tentative failure as the default assumption is issued (i.e. if the EDB failed, then in all likelihood the query is false), followed by reports on the IDB search.

3. Examples

We present here two illustrations - one in PRISM and one in PROLOG - in order to give the reader an idea of the answering process capabilities as performed by our model. It seems to us that, within the limits of the answering behavior modelled by the meta-interpreter, most real-life questions to a human reasoner would fall, in the majority of cases, in the first category.
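Before turning to the examples, the kernel's answering order in axioms K1-K8, and the layers' reduction of goal lists to single goals, can be approximated for ground (variable-free) goals as below. This is a simplified sketch under our own naming: unification, PRISM's machines, and true parallelism are omitted (a {par} list is only simulated by independent calls), and the report strings merely echo the K1/K2 behaviour.

```python
def solve_single(goal, edb, idb, reports, top=True):
    """Answer one ground goal: EDB lookup first (as in K1), then, after a
    tentative-failure report, IDB inference (as in K2).  Subgoals, like
    Solve-Aux in K6-K8, produce no reports."""
    if goal in edb:
        if top:
            reports.append("success: " + goal)
        return True
    if top:
        reports.append("tentative failure: " + goal)
    for body in idb.get(goal, []):
        if all(solve_single(g, edb, idb, reports, top=False) for g in body):
            if top:
                reports.append("late success: " + goal)
            return True
    return False


def solve(query, edb, idb, reports):
    """Reduce a <seq>/{par} goal structure (categories 2-5) to single
    goals; 'par' is only simulated by independent sequential calls."""
    if isinstance(query, str):
        return solve_single(query, edb, idb, reports)
    _kind, goals = query      # ("seq", [...]) or ("par", [...])
    return all(solve(g, edb, idb, reports) for g in goals)


# A fragment of the Example 1 database, as ground atoms.
edb = {"Tent(Pup)", "Light-weight(Pup)", "Airy(Pup)"}
idb = {"Shelter(Pup)": [["Portable(Pup)", "Light-weight(Pup)"]],
       "Portable(Pup)": [["Tent(Pup)"]]}
reports = []
ok = solve(("seq", ["Shelter(Pup)", ("par", ["Airy(Pup)"])]), edb, idb, reports)
# Shelter(Pup) first fails tentatively, then succeeds via the IDB;
# Airy(Pup) succeeds immediately from the EDB.
```

The tentative-failure report followed by a late success is exactly the change-of-mind pattern the meta-answers in Example 1 display.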
Only on a few occasions would they fall in the second (the third is different from the second only in the nature of execution, and not in the nature of the query per se), and rarely in the others. As most queries are single goals (i.e. category 1), the kernel captures fairly adequately the answering process simulation, with much less activity from the other layers. (The complete testing was done using a genealogy database.)

Example 1: The meta-answers in the following example should exhibit a flavor of the monitoring activity performed by the system in the process of answering the query.

Short term memory: 'A plane is an air-vehicle. Pup is a light-weight, compact, airy tent. Tarpaulin is tough and compact.'

Air-vehicle(Plane). Tent(Pup). Light-weight(Pup). Airy(Pup). Compact(Pup). Compact(Tarpaulin). Tough(Tarpaulin).

Long term memory: 'Air-vehicles and shelters are water-proof. Anything that is portable and light in weight is a shelter. A tent is portable. And so is something that is tough and compact.'

Water-proof(X) <- Air-vehicle(X).
Water-proof(X) <- Shelter(X).
Shelter(X) <- <Portable(X), Light-weight(X)>.
Portable(X) <- Tent(X).
Portable(X) <- <Tough(X), Compact(X)>.

Query: 'Give me something that's a water-proof and airy shelter.'

<-<Water-proof(X),{Shelter(X), Airy(X)}>.

[This is a sequential query with embedded parallelism. This choice of control has been made for purposes of illustration, but can be motivated in terms of the example by supposing we are especially eager for something that is waterproof, after which we are equally concerned about its being a shelter and its airiness.]

The Meta-answers:

I can't affirm "Water-proof(X)"; it is false, tentatively. (A1)
Well, here is some answer... Water-proof(Plane). (A2)
I can't affirm "Shelter(Plane)"; it is false, tentatively. (A3)
I can't affirm "Airy(Plane)"; it is false, tentatively. (A4)
Well, here is some answer... Water-proof(Pup).
( A5 1 I can’t affirm “Shelter(Pup); it is false, tentatively. ( A6 1 I can definitely say that “Shelter(Plane) is false now. ( A7 1 I can definitely say that “Airy(Plane)” is false now. ( A8 1 Sure, I can answer this... ( A9 1 here goes... Airy(Pup). I AlO) I got some answer you want... Shelter(Pup) ( AllI and that’s it...! Here you go!! ( Al21 AUTOMATED REASONING / 925 The Answers : ! Answer obtained in 10 inference steps ! x = Pup Query Succeeded. Explanation: First, the query has overall sequential control, with an embedded sublist which has goals amenable to parallel solving. The first goal is taken up, and passed over to the kernel. The Short Term Memory (STM) reports failure (Al). Augmented with the Long Term Memory (LTM), the goal succeeds, and X is bound to “Plane” (A2). Work is begun on the sublist, with X instan- tiated to “Plane” in both the goals, and at the same time an attempt is made to find an alternative solution for ” Waterqroof(X)” . Two kernels are spawned simultaneously, and the two take up solving “Shelter(Plane)” and “Airy(Plane)“, in parallel. Both fail in the STM, and have the failures reported (A3, A4). How- ever, they continue to work in the LTM. Note that this would not have been possible in a sequential environment, where a goal is not even attempted until the preceding one succeeds. The benefit of the parallel environment would be particularly striking when the user has several mutually non-interdependent goals running, since he can get the status of each and every goal independently of their ordering in the query. The first goal succeeds again with “Pup” for X (A5), and two more kernels are created for the two goals in the sublist. “Shelter(Pup)” fails in STM, and this is reported (A6). By this time, the two original goals fail in LTM as well (A7, A8), while “Airy(Pup)” succeeds in STM (A9). Accordingly, immediate confirmation of the latter is issued (AlO). 
Eventually, the other kernel also succeeds and reports success, affirming "Shelter(Pup)" (A11). When all the goals are thus solved (A12), the number of inference steps for an answer (if success) (A13) and the final answer (if any) are issued (A14), and the system reports success/failure (A15).

Example 2: The plots in Fig. (3) show the system asymptotically stabilizing at a certain value of depth for a given predicate. That is, the more the number of queries encountered earlier (with the same predicate), the more representative is its estimate of the inference required for the next query. Recall the formula for calculating the new default depth (at which the effective search is cut off and a negative "closed-world" default is invoked):

New depth = (Old depth * N + Latest depth) / (N + 1)

In effect, a deductive database can over time become more familiar with itself, i.e., with its own particular configuration of data in regard to the likelihood of determining an answer to a particular kind of query within a certain number of inferences. In order to illustrate our idea here, we have chosen three simple examples in which default depths approach an average value. That is, as more queries are entered, the level of inference that is allowed before a default answer is invoked is modified to better represent the average level at which the actual search ended. The database is a variant of that in Example 1, in which 'Compact(Pup)' and 'Compact(Tarpaulin)' in the EDB are replaced with 'Light-weight(Tarpaulin)', and the last IDB axiom is replaced with the following three: 'Portable(X) <- Tough(X).', 'Tent(X) <- Tough(X).' and 'WSA(X) <- <Waterproof(X), Shelter(X), Airy(X)>.'. Note that WSA is a new predicate symbol. In plot (a) we perform repeated queries of 'Waterproof' with an initial depth tag of 1. The plot shows that as queries are answered, the depth tag is repeatedly reset at increased levels, approaching an asymptotic value.
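The running-average update can be sketched in a few lines. The stream of observed search depths below is hypothetical, chosen only to mimic plot (a), where the tag starts too low and climbs toward an asymptote.

```python
def update_depth(old_depth, n, latest_depth):
    # New depth = (Old depth * N + Latest depth) / (N + 1), the formula above
    return (old_depth * n + latest_depth) / (n + 1)

# Hypothetical observed search depths for repeated 'Waterproof' queries.
observed = [2, 3, 2, 3, 2, 3]
depth, n = 1.0, 1          # initial depth tag of 1, as in plot (a)
history = []
for d in observed:
    depth = update_depth(depth, n, d)
    n += 1
    history.append(depth)
# The tag rises from 1 toward the average observed depth of 2.5.
```

Because each new observation carries weight 1/(N+1), early queries move the tag sharply and later ones refine it, which is the asymptotic stabilization the plots show.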
This indicates that the initial tag was too low, in that "experience" with the database has shown the system there are answers beyond the point at which defaults are invoked. The system gradually corrects this (in the average case) as time goes on. In particular, the user will notice increased cautiousness of the system (and fewer changes of mind). For instance, a query of Waterproof(Plane) will initially be answered (negatively) by default after one inference step, only to be corrected later. But after three further queries the depth tag becomes > 2, and so the very same query will now be answered positively. In plot (b) the query is WSA(X). The initial depth tag for 'WSA' is 1. We see here that performance changes over time much as in (a).

[Fig. (3): depth-tag plots for cases (a), (b), and (c)]

In plot (c) the query is Airy(X), which is a purely EDB query. The initial depth tag is set at 10, which is far too cautious. This illustrates the case of it not being known in advance that all the data is EDB and that therefore all queries will terminate rapidly. However, over time, the system adjusts the depth tag to be near 1. Note that if new axioms were later to be added which involved IDB for Airy, then defaults would come into play as a result of this prior alteration of the depth tag.

4. Conclusions and Future Work

We have shown that real-time monitoring of query-answering to provide default answers is feasible, at least in a limited context. We have illustrated this with an implementation on the PRISM parallel inference system and partially in PROLOG, using a meta-level approach. This has focused on the existing structure of PRISM, which has a natural division between look-ups (EDB) and inference steps (IDB); the meta-level allows explicit declarative statements to be made and proven concerning these two notions.
Meta-interpretation is the main technique that we have used to implement a flexible system that can evolve easily with changes, to exhibit some rudimentary 'cognitive' or self-modifying behavior. Future work includes several extensions of the current work. For one thing, we want to pursue the idea of placing a query in background mode once a new query comes in and the original one has not yet finished executing. This would then be enhanced by the inclusion of dynamic proof-tree generation in interactive mode, so that the user can direct the system's behavior as it executes. Additional extensions will include tackling the problem of interacting defaults, providing informative answers, and deciding automatically when a given query is to be treated in default mode or in normal mode.

ACKNOWLEDGEMENTS

This research was supported in part by grants from the following organizations: AFOSR-82-0303, ARO-DAAG-29-85-K-0177, and the Martin Marietta Corporation.
Towards Explicit Integration of Knowledge in Expert Systems: An Analysis of MYCIN's Therapy Selection Algorithm

Jack Mostow*
Computer Science Department
Rutgers University
Hill Center - Busch Campus
New Brunswick, New Jersey 08903

Abstract

The knowledge integration problem arises in rule-based expert systems when two or more recommendations made by right-hand sides of rules must be combined. Current expert systems address this problem either by engineering the rule set to avoid it, or by using a single integration technique built into the interpreter, e.g., certainty factor combination. We argue that multiple techniques are needed and that their use -- and underlying assumptions -- should be made explicit. We identify some of the techniques used in MYCIN's therapy selection algorithm to integrate the diverse goals it attempts to satisfy, and suggest how knowledge of such techniques could be used to support construction, explanation, and maintenance of expert systems.

1. Introduction

As expert systems develop and proliferate, researchers have increasingly noticed the serious problems caused by confounding various different kinds of knowledge in an expert system's knowledge base. In particular, [Clancey 83a] focuses on control knowledge, showing how problem-solving strategies are (at best) clumsily encoded in rules, making the rule base difficult to understand and extend. In a sense, the control problem has to do with the relations among the left hand sides of different "if <condition> then <action>" rules -- that is, with deciding which of several apparently relevant rules to fire in a given situation. In contrast, the integration problem has to do with the relations among the right hand sides of rules -- that is, with combining multiple recommendations made by different rules in the same situation. This problem pervades expert systems but tends to get swept under the rug. In this paper we shall try to bring it out in the open and shed some light on it.
Most expert system designers use two basic approaches to integrate the recommendations made by the right hand sides of rules: either they try to finesse the problem by manually compiling it out of the rule set, or they rely on a single uniform integration mechanism. As we shall see, both approaches leave implicit the knowledge and assumptions underlying the integration process. This opens the door to various abuses: leaving knowledge and assumptions implicit makes them easier to violate without noticing.

* Much of the work described here was performed at the University of Southern California's Information Sciences Institute, where it was supported in part under DARPA Grant #MDA 903-81-C-0335.

Bill Swartout
USC Information Sciences Institute
4676 Admiralty Way
Marina del Rey, California 90292

The compiling out approach engineers the rule set so as to avoid the need for integrating recommendations at runtime. For example, consider the problem of assessing the likelihood that a patient has some disease D, for which there are two symptoms, A and B. Suppose that either symptom by itself suggests the disease with moderate likelihood (40% or 60%), but the two together are very strong evidence (95%). To compile out the problem of integrating the evidence, we might define three rules:

If A and not B then conclude(D, 0.4).
If B and not A then conclude(D, 0.6).
If A and B then conclude(D, 0.95).

While this approach could get the estimates right, it has the unfortunate side effect of introducing dependencies among the rules and of making them less understandable. For example, if just symptom A appears and the system is asked to explain its level of belief in disease D, it might give an explanation like the following: "Since A is present and B is not present, there is moderate evidence that the disease is D." This explanation misleadingly suggests that the absence of B is actually evidence in favor of D.
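A compiled-out rule set like this amounts to a lookup table keyed by the full symptom pattern. The following sketch (using the numbers from the example; everything else is hypothetical) makes the cost visible: the author has pre-integrated the evidence by hand, and adding a third symptom C would force every entry to be split.

```python
# One entry per symptom combination: the rule author has pre-integrated
# the evidence, so no combination happens at runtime.
compiled_rules = {
    (True,  False): 0.4,   # A and not B
    (False, True):  0.6,   # B and not A
    (True,  True):  0.95,  # A and B
}

def belief_in_D(has_A, has_B):
    # Unlisted patterns (here, neither symptom) give no evidence for D.
    return compiled_rules.get((has_A, has_B), 0.0)

# Adding symptom C doubles the table: each (A, B) entry must be split into
# (A, B, C present) and (A, B, C absent), each with its own hand-set number.
```

The table also illustrates why the explanations mislead: the entry for (A present, B absent) really encodes "A alone", but its key mentions B, so B's absence looks like evidence.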
The fact that this rule was carefully constructed along with the other two so as to compile out the integration problem is not available to the system for explanation purposes, since it was never more than an intention in the mind of the rule base author. Engineering implicit knowledge into the rule base makes it difficult to understand and extend. For example, suppose that a third symptom, C, provides additional evidence for disease D; adding this knowledge would require splitting each rule listed above into two rules, one for when C is present and one for when it isn't.

The usual alternative to compiling out the integration problem into the rule base is to build a uniform integration mechanism into the rule interpreter for combining recommendations at runtime. For example, MYCIN dynamically computes a "certainty factor" for the recommendation made by the right hand side of a rule, based on the rule's inferential strength, the certainty factors associated with the conditions satisfying its left hand side, and the connectives (like AND and OR) relating those conditions. When two or more rules produce the same recommendation with different certainty factors, MYCIN integrates them by means of a numerical formula. The appropriateness of any such formula depends on certain assumptions. For example, suppose both of the following rules are satisfied:

If A then conclude(D, 0.4).
If B then conclude(D, 0.6).

From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved.

If A and B represent independent evidence for D, then the result of combining these two measures using the certainty factor mechanism may be correct, but if B is a special case of A, then that result will be wrong and it may be necessary to revert to the ad hoc "compiling out" approach described earlier.
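For two positive certainty factors supporting the same conclusion, MYCIN's combining rule is CF = CF1 + CF2*(1 - CF1). A minimal sketch (the function name is ours):

```python
def combine_cf(cf1, cf2):
    """Combine two positive certainty factors for the same conclusion,
    MYCIN-style; sound only if the pieces of evidence are independent."""
    assert 0.0 <= cf1 <= 1.0 and 0.0 <= cf2 <= 1.0
    return cf1 + cf2 * (1.0 - cf1)

combined = combine_cf(0.4, 0.6)   # 0.4 + 0.6 * (1 - 0.4) = 0.76
```

The result is order-independent and never exceeds 1. But, as the text observes, if B is merely a special case of A, the independence assumption fails and 0.76 overstates the evidence; the formula itself cannot detect that.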
As this example illustrates, a uniform integration mechanism is based on certain implicit assumptions; in cases where these assumptions are violated, it becomes necessary to program around the mechanism, producing artifacts in the rule base that degrade the quality of the explanations generated by the system. In summary, most expert systems represent their integration mechanisms procedurally, whether in the rule interpreter or compiled into the rules. In either case, the reasoning that goes into integration is not perspicuously represented and hence is unavailable to the system for explanation purposes.

Rather than hide knowledge integration in the design of the rule base or its interpreter, we argue that it should be made explicit so that it can be reasoned about and explained to the user. One strategy toward this end is to encode integration mechanisms as meta-rules, i.e., rules for interpreting and combining other rules. This strategy has received some attention [Davis 77, Clancey 83b]. However, it is not clear that standard rule-based architectures are well-suited to representing and applying integration mechanisms. We are not saying that rule-based architectures (which would be more explicit) could not in principle support integration, just that in practice, system builders have not seemed to find it convenient to use rules to reason about integration.

An alternative approach to expert system construction that makes integration knowledge explicit was first adopted in the XPLAIN system [Swartout 83] and is undergoing further development in the Explainable Expert Systems project [Neches et al 85]. The approach is based on the observation that usually (and sometimes only) the person who created an expert system can give a very good explanation of how it works. Our approach for capturing the knowledge normally lost during expert system construction is to use an automatic program writer to create the expert system from an initial knowledge base.
This knowledge base contains both abstract knowledge about the domain and general knowledge about expert system construction, including integration knowledge. As the writer creates the system, it records its reasoning in a development history. Explanation routines use this history to produce explanations of the design rationale incorporated in the constructed system. These explanations are valuable both to end-users interested in understanding the system and to system builders interested in modifying or extending it. This approach promises other benefits in addition to improved explanations. Because integration knowledge is explicitly separated out, it does not become confounded with problem-solving knowledge as occurred in the example with MYCIN certainty factors above. It allows greater flexibility, because several different techniques for integration can be represented in the system's knowledge base, with each one used only when it is appropriate. Finally, the explicit separation makes the system more modular and easier to extend. For more details on this approach, see [Neches et al 85].

This paper presents some general knowledge integration techniques we have identified and shows how they were incorporated in the therapy selection algorithm used by MYCIN [Buchanan & Shortliffe 84] to prescribe drugs after it had diagnosed which organisms were likely infecting the patient. We found this algorithm to be a rich source of such techniques, since it integrates a set of diverse and conflicting criteria in selecting the best therapy. In fact, the interactions among these criteria made it impractical to implement therapy selection using MYCIN-style rules alone, so a somewhat ad hoc procedure [Shortliffe 84] was used. Because the knowledge for therapy was implicitly encoded in a procedure, explanations of therapy decisions could not be given and the code proved difficult to maintain.
A subsequent reimplementation by William Clancey [Clancey 84] factored out most of the medical knowledge embedded in the procedure into sets of rules pertaining to various therapeutic factors like drug sensitivity and contraindications. The reimplemented therapy selection algorithm was considerably more general and invoked these rules to evaluate individual therapeutic factors based on knowledge about specific drugs, organisms, or patients; however, the algorithm itself was responsible for integrating these factors into therapy recommendations. The algorithm makes a good case study because its design is dominated by knowledge integration concerns, rather than medical or computational details.

2. Specification of the therapy selection problem

The therapy selection problem is easy to specify informally -- given a diagnosis (one or more organisms suspected of infecting the patient), choose the therapy (set of drugs) that best satisfies the following medical goals:

1. Maximize drug sensitivity.
2. Maximize drug efficacy.
3. Continue prior therapy.
4. Minimize number of drugs.
5. Give priority to covering likelier organisms.
6. Maximize number of suspected organisms covered.
7. Don't give two drugs from the same general class.
8. Avoid contraindications for the patient.

Suppose we implemented these goals as rules, e.g.:

Rule for goal 4: If therapy x uses fewer drugs than therapy y, then prefer therapy x over therapy y.
Rule for goal 6: If therapy x covers fewer suspected organisms than therapy y, then prefer therapy y over therapy x.

Therapy selection would require integrating the recommendations made by the rules' right-hand sides. For instance, if therapy A has fewer drugs than therapy B, but covers fewer organisms, the rules would conflict.

Integrating goals requires knowledge about their relative importance.
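The conflict between goals 4 and 6 can be made concrete by encoding each rule as a pairwise preference function; the therapies A and B below are hypothetical.

```python
from collections import namedtuple

Therapy = namedtuple("Therapy", ["drugs", "covered"])

def prefer_fewer_drugs(x, y):        # rule for goal 4
    if len(x.drugs) != len(y.drugs):
        return x if len(x.drugs) < len(y.drugs) else y
    return None                      # the rule expresses no preference

def prefer_more_coverage(x, y):      # rule for goal 6
    if len(x.covered) != len(y.covered):
        return x if len(x.covered) > len(y.covered) else y
    return None

A = Therapy(drugs=("d1",), covered=("org1",))
B = Therapy(drugs=("d1", "d2"), covered=("org1", "org2"))
# Goal 4 prefers A, goal 6 prefers B: the right-hand sides conflict, and
# nothing in the two rules themselves says how to resolve it.
```

Each function is individually sensible; the missing ingredient, as the text says next, is knowledge about the goals' relative importance.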
The informal specification above is ill-defined because it omits this knowledge, i.e., it doesn't specify which therapy is "best" where the goals conflict. What's medically "best" (i.e., for the patient, the physician, the hospital, and society at large) depends on information not available with certainty at therapy selection time, e.g., the actual effectiveness of the therapy, whether the benefits of the therapy will turn out to compensate for its side effects, and so forth. That is, this ideal sense of "best" is non-operational [Mostow 81], and we must settle for a heuristic approximation. Indeed, the operational definition of "best" is determined by the decisions made in the course of designing the therapy selection algorithm [Swartout & Balzer 82]. The design of MYCIN's therapy selection algorithm was also influenced by computational concerns and restrictions on the design process itself. A comprehensive analysis of the algorithm would explicitly model these aspects as well, but is outside the scope of this paper.

3. MYCIN's therapy selection algorithm

We now describe in brief how MYCIN performs the therapy selection task informally specified in the previous section. MYCIN's (revised) therapy algorithm begins by considering in turn each of the organisms classified by the diagnosis component as most likely. For each organism, it uses rules to assess each known drug as a first, second, or third choice based on the organism's apparent sensitivity to that drug. For example, a typical rule is:

If the organism growing from the culture appears resistant to the drug, then classify the drug as a third choice.

Since MYCIN uses rules to handle part of the sensitivity criterion, the reasons why a drug is classified as a first, second or third choice are accessible and MYCIN can explain them.
However, the reasons for partitioning the drugs into three categories (and why three is an appropriate number of partitions) are implicitly built into the algorithm and MYCIN doesn't explain them. Once the drugs have been classified, MYCIN proposes various combinations of them as possible recommendations. This is done by a series of fixed "instructions" that express how many of each category of drug to select (see Figure 3-1).

    Instruction   1st choice drugs   2nd choice drugs   3rd choice drugs
    1.                  1                  0                  0
    2.                  2                  0                  0
    3.                  1                  1                  0
    4.                  1                  0                  1
    5.                  0                  1                  0

Figure 3-1: MYCIN's Table of Therapy "Instructions"

MYCIN goes through the instructions in order until an acceptable therapy is found. For example, the third instruction specifies that one first choice drug and one second choice drug should be selected. Thus the table of instructions integrates the goals of minimizing the number of drugs to administer and selecting the most effective drugs.

The proposed set of drugs is then subjected to three tests to determine whether or not it is acceptable. First, coverage is tested to see whether the proposed drugs cover for all of the most likely organisms. Then the set of proposed drugs is examined to ensure that all the drugs prescribed are in different drug classes. Drugs in the same class work via the same mechanisms, so prescribing a second drug from the same class will not increase the overall effectiveness of therapy. Finally, MYCIN checks for patient-specific contraindications. Since all three tests are performed by sets of rules, MYCIN can explain them, e.g., explain the rejection of a given therapy by describing which test it failed and why.

While some of MYCIN's therapeutic expertise is explicit, its overall therapy selection strategy and its knowledge about how to integrate the various therapy goals are encoded procedurally (and implicitly). In Section 5 we will show how such knowledge could be made explicit.
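The generate-and-test loop just described can be sketched as follows. Only the instruction table comes from Figure 3-1; the drug data, class names, and organisms are invented for illustration, and the sketch omits the within-category ordering by score.

```python
from itertools import combinations, product

# Hypothetical drug knowledge base: drug class and organisms covered.
DRUGS = {
    "drugA": {"class": "classI",  "covers": {"org1", "org2"}},
    "drugB": {"class": "classII", "covers": {"org3"}},
    "drugC": {"class": "classI",  "covers": {"org3"}},
}
INSTRUCTIONS = [(1, 0, 0), (2, 0, 0), (1, 1, 0), (1, 0, 1), (0, 1, 0)]  # Fig. 3-1

def acceptable(therapy, likely_organisms, contraindicated=()):
    """The three tests: coverage, distinct drug classes, no contraindications."""
    covered = set().union(*(DRUGS[d]["covers"] for d in therapy)) if therapy else set()
    distinct = len({DRUGS[d]["class"] for d in therapy}) == len(therapy)
    return (likely_organisms <= covered and distinct
            and not any(d in contraindicated for d in therapy))

def select_therapy(likely_organisms, drugs_by_choice, contraindicated=()):
    """Walk the instruction table in order; return the first acceptable proposal."""
    for counts in INSTRUCTIONS:
        pools = [combinations(drugs_by_choice.get(c, ()), n)
                 for c, n in zip((1, 2, 3), counts)]
        for picks in product(*pools):
            therapy = [d for group in picks for d in group]
            if acceptable(therapy, likely_organisms, contraindicated):
                return therapy
    return None
```

With drugs_by_choice = {1: ["drugA", "drugC"], 2: ["drugB"]} and all three organisms most likely, instruction 1 fails (insufficient coverage), instruction 2 proposes a same-class pair and is rejected, and instruction 3 yields ["drugA", "drugB"].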
But first we discuss how the therapy goals themselves are formulated.

4. Representations of goals

Our analysis of MYCIN's therapy selection algorithm suggests that many of the crucial decisions in its design dealt with how to formulate (or reformulate) the therapy goals listed in Section 2. A goal like "maximize drug effectiveness" can be represented in several ways. The choice of representation determines what kind of information about the goal can be encoded, how much space it takes to do so, what inferences can be drawn from the representation, and how much time they take. For example, information about drug effectiveness might be expressed as:

- A set of axioms describing which drugs are more effective than others and by how much. This representation can encode arbitrary kinds of information, but drawing inferences from it requires a theorem-prover.

- A partial ordering represented as a boolean connection matrix or as an acyclic graph with a drug at each node. This representation encodes no information about the magnitude of differences in drug effectiveness. The matrix representation allows two drugs to be compared in constant time but requires quadratic space; the graph representation is smaller but slower.

- A "preference" ordering, which is like a partial ordering but allows distinct elements to be considered equivalent.** A preference ordering on drugs can be represented as a graph with an equivalence class of drugs at each node, rather than a single drug.

- A linear ordering represented as an array of drugs in decreasing order of effectiveness. This representation takes linear space and allows two drugs to be compared in constant time. However, it imposes a preference between any two drugs.

** Formally, a preference ordering is a reflexive, transitive binary relation; unlike a partial ordering, it need not be anti-symmetric.
- A metric represented as a table showing a numerical effectiveness score for each drug, or as a procedure or set of rules for computing a drug effectiveness score from other data (e.g. sensitivity and efficacy). Comparing two scores takes constant time. This representation assigns numerical magnitudes to all differences in effectiveness.

- A partition of the set of drugs into symbolic categories corresponding to different levels of effectiveness. This representation suppresses differences among elements of the same category. Relationships among the categories themselves can be encoded using any of the representations described here.

- A yes-or-no predicate that tells whether a given drug is effective. This representation converts the optimization problem of choosing the most effective drug to the satisficing problem of choosing a drug that is good enough.

These representations are not equivalent: they differ in the kind of information they can express. For example, a partial ordering or preference can represent incomplete knowledge about which of two drugs is more effective, while a metric cannot; on the other hand, they represent no information about the magnitude of the difference in effectiveness, while a metric does. The different representations also vary in their computational costs; generally speaking, a simple representation like a metric is cheaper to store and use than a more precise representation like a preference graph.

In general there is a tradeoff between the expressive power and computational efficiency of knowledge representations when it comes to integrating knowledge. Consider the problem of integrating two therapy goals. Representing each one as a set of axioms allows us to express arbitrary knowledge about it, but requires a correspondingly sophisticated inference mechanism to combine the goals, or to compare how well various therapies satisfy each goal.
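The expressiveness difference noted above (a partial ordering can represent incomplete knowledge about which of two drugs is more effective; a metric cannot) can be made concrete. The drug names and numbers here are hypothetical.

```python
# A partial ordering can leave two drugs honestly incomparable...
BETTER = {("d1", "d2"), ("d1", "d3")}        # nothing known about d2 vs d3

def compare_partial(x, y):
    if (x, y) in BETTER:
        return "first"
    if (y, x) in BETTER:
        return "second"
    return "incomparable"                     # an explicit "don't know"

# ...while a metric forces a verdict on every pair (and, unlike the
# ordering, also records the magnitude of each difference).
SCORE = {"d1": 900, "d2": 650, "d3": 640}

def compare_metric(x, y):
    return "first" if SCORE[x] > SCORE[y] else "second"
```

The metric's verdict on d2 vs d3 rests on a 10-point gap the partial ordering never claimed to know; that is the distortion the tradeoff buys.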
It is much simpler to represent each goal as a metric, even though this representation has less expressive power. The metrics can then be combined by a weighted sum or other numerical formula. It is even simpler to represent each goal as a constraint -- a predicate that a therapy must satisfy to be considered acceptable. The constraints can be combined simply by conjoining them; that is, a therapy achieves the combined goals if it satisfies both constraints. Faced with the task of integrating diverse goals like the therapeutic criteria listed in Section 2, expert system designers often formulate (or reformulate) them into simple representations such as linear orderings, metrics, or constraints.

5. Partial derivation of the algorithm

We now rederive portions of the therapy selection algorithm described in Section 3 by (re-)formulating and integrating the eight medical goals listed in Section 2. Reformulation and integration techniques are highlighted, e.g., EXTEND, and defined as they are introduced. A forthcoming extended version of this paper will present a more complete derivation and list of techniques.

5.1. Combine effectiveness with number of drugs

MYCIN's table of "instructions," described in Section 3, integrates the goals of fewer and more effective drugs. How can this mechanism be explained?

1. Decide how to represent each goal.
2. Extend the preference for effective individual drugs into a preference among therapies (sets of drugs).
3. Combine the two preferences on therapies into a single ordering.

First we must represent the goals to be integrated. The number of drugs defines a linear ordering, call it <fewer, on therapies. Individual drug effectiveness is represented as a metric on a scale of 100-1000.
To make the table of instructions small enough to construct and store, MYCIN's designers elected to PARTITION the drug effectiveness metric into three categories: "first choice," "second choice," and "third choice."

Definition: A preference ordering is CONDENSEd by defining a many-to-one function F such that F(x) < F(y) implies x < y. F(x) can be thought of as an abstraction of x. In particular, a metric M(x) is PARTITIONed into the linearly ordered categories 1, ..., N+1 by splitting the range of the metric into the N+1 intervals defined by N breakpoints given as parameters:

PARTITION(t1, ..., tN): M(x) → F(x), where F(x) is the category i whose interval between t(i) and t(i-1) contains M(x) (taking t0 and t(N+1) to be the endpoints of M's range), and → means "is reformulated as."

In the case at hand, M is MYCIN's drug effectiveness metric, which ranges from t0 = 1000 (most effective) down to t3 = 100 (least effective); there are N = 2 breakpoints, t1 = 700 and t2 = 700, and F(x) is the classification of drug x as a 1st, 2nd, or 3rd choice. The PARTITIONed ordering 1 < 2 < 3 (note that < means "is more effective than") is small enough to combine with the preference for fewer drugs by using an ordered table of "instructions"; at runtime MYCIN assigns each drug to one of these categories based on its score, and uses the table to generate therapies in (roughly) best-first order. If the effectiveness metric were not PARTITIONed, the table would be much too big to store, and would have imposed an unreasonable informational burden on the table's designers by requiring them to make distinctions between therapies based on minute differences in drug effectiveness.

In general, CONDENSE assumes that the function F captures all the information required to compare x and y with respect to <. In practice, this assumption is typically violated to some extent, i.e., the original preference is distorted in order to condense it. Also, notice that F(x) < F(y) usually -- but not always --
Also, notice that F(x) < F(y) usually -- but not always -- AUTOMATED REASONING / 93 1 implies a significant difference between M(x) and M(y). Exceptions occur when x and y lie close to the breakpoint separating two categories. For example, the difference between drugs in categories 1 and 2 is usually significant, but not when both are rated close to 700. Next we EXTEND the ordering on preference ordering on therapies. drug categories into a Definition: EXTENDed in to An ordering on individual items can an ordering on bags of items as follows: 1. {x} < {y} iff x < y. 2. If X < Y and X’ I Y’, then X+X’ < Y+Y’, be example, :‘:, <better it specifies { 1,l) cbetter { 2) and {2,2}, but leaves implicit the supporting assumption that ‘a first choice drug is worth a lot more than a second choice drug” [Clancey, personal communication, January 2, 19851. This assumption is nowhere represented in MYCIN, and as we saw in the discussion of PARTITION, it is not always true. When it is violated, anomalies can arise. For example, MYCIN would propose a combination of two drugs rated 701 before proposing a single drug rated 699, even though the former are really not “worth a lot more.” While such anomalies may be infrequent and unimportant enough for the expert system designer to tolerate them, a robust expert system ought to recognize when they occur. where + denotes bag union. For example 1<2 implies 0) < (21 and {l,l~ < WI. We combine the preference cfewer for fewer drugs with the preference ceffective for more effective therapy*** by CONJOINing them. It should be noted that MYC’IN preserves some distinctions among drugs in the same category by generating them in order of decreasing effectiveness scores, so as to help generate therapies in best-first order. This feature doesn’t prevent the anomaly just described, but would make MYCIN propose the drug rated 699 before another “second choice” drug that wasn’t rated as highly. 
Definition: The result of CONJOINing two preferences, <P and cQ , is the preference <P&q ’ where: 1. If P and Q both say that x is at least as good as y, so does P&Q: x <,,q y iff x 5, y and xc -Q Y- 2. If in addition one of them says that x is preferable, so does P&Q: x <psrq y iff (x 5, y and x < Q y, or (x <p y and x sQ y) The table of instructions shown in Figure 3-l is largely specified by the preference ordering cfewerLeilective . For examp1eV {l) <fewer&effective {lT1) <fewer&effective {lT2) < fewer&effective 0,3). In general, CONJOINing preferences doesn’t specify what to do when they conflict, since it makes no assumptions about their relative importance. For instanceY <fewer&effective imposes no ordering between {l,l} and {2}, or between {2,2} and {1,3}. This partial ordering is next LINEARrZEd into a total ordering which we’ll call cbetter . Definition: Any partial ordering cp can be embedded into a linear ordering <L . (Unless <p is total, it will have more than one possible linear embedding.) The LINEARIZEd ordering has the property that x cp y => x CL Y- The converse does not always hold, i.e., x <L y In general, a LINEARIZEd ordering is ambiguous as a representation of the original preference, since X<Y doesn’t tell whether x is really preferable to y, or if x just happens to precede y as an artifact of how the preference was LINEARIZEd. Suggestion: An attractive way to LINEARIZE a partial ordering would be to explicitly specify the assumptions that the resulting linear ordering should satisfy. A theorem-proving engine would use these assumptions to fill in the ordering relation and identify cases where the assumptions fail to imply an ordering. The designer could provide additional rules to cover such cases, and the process could continue interactively until the ordering was complete. 
If feasible, this approach would be better than constructing a table by hand because it would make explicit the assumptions left implicit in such tables, making it possible to distinguish preferences based on genuine domain knowledge from those based only on general principles or computational expedience.

(Note that x <L y need not imply x <P y; <P might specify no ordering relation between x and y, in which case x <L y is an artifact of the particular embedding <L. That is, an alternative linear embedding <L' might not satisfy x <L' y.)

5.2. Combine coverage preferences

The therapy goals listed in Section 2 include maximizing the number of organisms covered and giving priority to those the patient is likelier to have. Let's see how these two goals are integrated:
1. Classify organisms as "most likely" or "less likely."
2. Relax the coverage goal by ignoring "less likely" organisms.
3. Reformulate the coverage goal as the constraint that all the "most likely" organisms be covered.

MYCIN's LINEARIZEd sequence of "instructions" incorporates some implicit assumptions about the relative importance of therapy effectiveness and number of drugs.

*** Note that A <effective B means therapy A is more effective than therapy B, i.e., preferable with respect to effectiveness.

Organism likelihood is PARTITIONed into only two categories, "most likely" and "less likely." Assuming that the "most likely" organisms are much more likely than the "less likely" organisms, the importance of treating the "most likely" organisms DOMINATES the importance of treating the "less likely" organisms.

Definition: Letting one preference -- call it <primary -- DOMINATE another preference -- call it <secondary -- means using <secondary only to resolve ties with respect to <primary. The resulting preference <primary;secondary is defined by x <primary;secondary y iff x <primary y or (x =primary y and x <secondary y).
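The DOMINATE definition can be sketched as a combinator over the same kind of three-valued preference ('<' meaning "x preferred", '=', or None); the coverage preferences and data below are hypothetical:

```python
def dominate(primary, secondary):
    """DOMINATE: consult the secondary preference only to break ties
    under the primary one."""
    def combined(x, y):
        r = primary(x, y)
        return secondary(x, y) if r == "=" else r
    return combined

# Hypothetical data: x = (most_likely_covered, less_likely_covered).
def more_most_likely(x, y):
    if x[0] > y[0]:
        return "<"                 # '<' means "x preferred"
    return "=" if x[0] == y[0] else None

def more_less_likely(x, y):
    if x[1] > y[1]:
        return "<"
    return "=" if x[1] == y[1] else None

prefer = dominate(more_most_likely, more_less_likely)
assert prefer((3, 1), (2, 5)) == "<"  # most-likely coverage decides outright
assert prefer((3, 2), (3, 1)) == "<"  # a tie is broken by less-likely coverage
```

The first assertion shows the intended asymmetry: no amount of less-likely coverage can outweigh the primary preference.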
As Section 4 pointed out, reformulating a goal as a constraint makes it possible to test whether a given choice satisfies the goal without having to compare it against alternative choices. Thus converting a preference into a constraint reduces an optimization problem (choose the most preferred element) to a satisficing problem (choose an acceptable element). The latter problem can be solved more efficiently, since it is easier to generate candidates one by one, test each one separately, and accept the first one that passes, than it is to generate all the (possibly infinitely many) candidates, compare them, and pick the best one.

Definition: A preference can simply be IGNOREd. For example, ignoring <secondary reduces <primary;secondary to <primary. This particular case of IGNORE is appropriate if ties with respect to <primary are too rare to worry about, or if violating <secondary in the event of such a tie wouldn't do much harm.

It is unlikely for two therapies to be equally effective on the likeliest organisms but different on the less likely ones, so it is reasonable to ignore the less likely organisms altogether. A possible rationale for this compromise is a tradeoff between breadth and effectiveness of coverage, based on the assumption that broad-spectrum drugs are less effective than highly specific drugs. "You could generate all of the recommendations in the equivalence class and pick the one covering the most less likely organisms, but this will probably result in choosing drugs that are lower for most likely organisms (within the rankings). For example, choosing a 950 drug for an organism is preferable to choosing a 750 drug (both are first rank), even if you pick up a less likely organism" [Clancey, personal communication, January 2, 1985].

6. Applications

Explicit knowledge about the integration techniques used to construct an expert system could be exploited in several ways, which we illustrate by means of hypothetical examples of behavior.
Notice violated assumptions. If the system can test the assumptions on which an integration technique is based, it may be able to detect flaws in its own reasoning:

    Therapy A, which consists of two 1st-choice drugs, is rated higher than therapy B, which consists of one 1st-choice and one 2nd-choice drug, based on the assumption that 1st-choice drugs are much better than 2nd-choice drugs. However, one of the 1st-choice drugs in therapy A is rated very close to the 2nd-choice drug in therapy B, so this assumption is questionable here.

Detect artifacts. If the system can distinguish genuine domain knowledge from the accidental artifacts of reformulation techniques like METRICIZE or LINEARIZE, it may be able to alert the user to spurious preferences in its recommendations:

    Therapy X is rated higher than therapy Y because the combination of one 1st-choice drug and one 3rd-choice drug comes before the combination of two 2nd-choice drugs in the table of instructions. However, that might be an accident of how the table was constructed, rather than a genuine medical preference.

Support maintenance by inferring constraints and goals. To some extent it is possible to guess rationales for how knowledge has been integrated in an expert system. In the absence of explicit rationales, these guesses may still serve to expose constraints and goals left implicit.

Clancey's rationale above is a good example of a design decision rationale of the form "Errors of type X are unlikely to occur and wouldn't do much harm anyway." Here an "error" would consist of proposing one therapy before another one that covers the more likely organisms just as well and also covers less likely organisms. The net effect of the PARTITION, DOMINATE, and IGNORE steps is to CONDENSE the preference for maximal coverage by ignoring the less likely organisms. The CONDENSEd preference compares therapies based on the number of "most likely" organisms covered. This preference is now reformulated into a constraint by THRESHOLDing.
Definition: A metric M(x) can be converted to a constraint by the THRESHOLD transformation THRESHOLD(tmin): M(x) --> lambda(x)(M(x) >= tmin), where the threshold value tmin is a parameter of the THRESHOLD transformation. (Variations on this transformation use >, <, or <= in place of >=, and tmax in place of tmin.) In the case at hand, x is a candidate therapy, the metric M is the number of "most likely" organisms covered by therapy x, and tmin is defined to be the total number of organisms considered "most likely." That is, an acceptable therapy must cover all the most likely organisms.

This reformulation incorporates the assumption that all the most likely organisms can and must be covered. The implicit rationale for this assumption has to do with the risks of failing to treat for a likely condition.****

**** Such as being sued for malpractice.

For example, the following inferences might be made based on the knowledge that MYCIN's table of instructions is a LINEARIZEd form of the preference formed by CONJOINing the preference for fewer drugs with the CONDENSEd preference for more effective therapy:

    From the fact that an explicit table is used to decide tradeoffs between maximizing therapy effectiveness and minimizing the number of drugs, it appears that the simpler approach of computing a weighted sum was not considered accurate enough to do the job.

    It appears that 3 effectiveness ranks are considered sufficient to discriminate among different drugs, at least for the purpose of deciding tradeoffs between therapy effectiveness and number of drugs. Perhaps 3 ranks were not enough.

    Maximizing therapy effectiveness appears more important than minimizing the number of drugs, in the sense that increasing therapy effectiveness by 1 rank is considered more desirable than reducing the number of drugs by 1.

This sort of information might be useful to an expert system maintainer who needed to revise the knowledge base.
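The THRESHOLD transformation, together with the satisficing reduction from Section 4, can be sketched as follows (the organism names and therapy data are hypothetical):

```python
def threshold_min(metric, t_min):
    """THRESHOLD(t_min): turn a metric M(x) into the constraint
    lambda x: M(x) >= t_min (the paper's >= variant)."""
    return lambda x: metric(x) >= t_min

def first_acceptable(candidates, acceptable):
    """Satisficing: accept the first candidate passing the constraint,
    with no comparison against the alternatives."""
    return next((c for c in candidates if acceptable(c)), None)

# In the case at hand: M = number of 'most likely' organisms covered,
# t_min = total number of 'most likely' organisms.
MOST_LIKELY = {"e.coli", "klebsiella", "proteus"}
coverage = lambda therapy: len(MOST_LIKELY & therapy["covers"])
acceptable = threshold_min(coverage, len(MOST_LIKELY))

therapies = [                        # assumed to arrive roughly best-first
    {"drugs": ("A",), "covers": {"e.coli", "klebsiella"}},
    {"drugs": ("A", "B"), "covers": {"e.coli", "klebsiella", "proteus"}},
]
choice = first_acceptable(therapies, acceptable)
assert choice["drugs"] == ("A", "B")
```

Because the constraint is tested per candidate, the generator of therapies never needs to enumerate and rank the whole candidate space.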
Support expert system construction. If reformulation and integration techniques like those described in Section 5 are mechanized, they might eventually be used to help automate expert system construction and documentation:

    I detect a conflict between the goals of maximizing therapy effectiveness and minimizing the number of drugs. Which of the following relationships holds between the two goals?
    1. One goal is absolutely less important than the other. Only use it to resolve ties with respect to the more important one. [=> use DOMINATE]
    2. One goal is absolutely less important than the other, and no ties are expected. [=> use IGNORE]
    3. The relative importance of the two goals can be adequately expressed as coefficients in a weighted sum. [=> use WEIGHTED SUM]
    4. The importance of the two goals is relative and cannot be adequately expressed as coefficients in a weighted sum. [=> use TABULARIZE]

    [User selects option 4; system tries integrating preferences in a table.]

    There are too many combinations of drug effectiveness scores to list them in a table. I would like to use a coarser measure of effectiveness [i.e., CONDENSE it]. An ideal therapy would consist of one drug rated 1000. How much lower could one drug be rated and still be better than any therapy consisting of two drugs?

    [User says 700; system PARTITIONs effectiveness into 300-point subranges.]

Automating the knowledge integration process would make it easy to record the design choices, techniques, and assumptions used. Once captured, this information would be available for generating explanations to future users and system maintainers.

7. Conclusion

Rational integration of conflicting preferences involves normalizing them relative to a common supergoal. This process requires identifying an appropriate supergoal and using it to analyze the tradeoff among the preferences.
For example, in prescribing therapy, it is preferable to minimize the number of drugs and maximize their effectiveness, but the relative importance of these two preferences depends on their ultimate impact on some higher level criterion. Presumably the implicit topmost criterion in medicine is patient welfare.*****

***** Cynics think it is physician income.

Determining the tradeoff between the number of drugs and their effectiveness involves balancing the likelihood and urgency of curing the illness against the likelihood and seriousness of unforeseen drug interactions. However, information about these factors is imprecise at best. When the knowledge required to integrate preferences on a mathematically rational basis is unavailable, domain experts and expert system designers generally integrate them instead on whatever ad hoc basis is cognitively or computationally expedient. In the absence of compelling medical reasons one way or the other, a physician might choose between a one- and two-drug therapy arbitrarily, out of habit, or based on a medically unjustified rule of thumb. While domain experts make such decisions on a case-by-case basis, expert system designers must anticipate the entire class of situations in which such decisions will be needed, and provide general mechanisms for making them.

MYCIN's designers chose to PARTITION drug effectiveness into three categories, which enabled them to store the LINEARIZEd set of "instructions" in a precompiled table. Presumably, a simpler design alternative would have been to rate each proposed therapy by computing a WEIGHTED SUM of, say, the effectiveness of each drug in the therapy, with a negative term for the number of drugs. Because MYCIN does not explicitly represent the reasons for using a table of instructions, it is not easy to determine why a weighted-sum approach was considered inappropriate, or even whether it was considered at all.
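The weighted-sum alternative just mentioned might look like the following sketch. The penalty weight is invented for illustration (calibrated so that a single drug rated 700 or better beats any two-drug therapy, per the hypothetical dialogue in Section 6); nothing here reflects MYCIN's actual design:

```python
def weighted_sum_score(ratings, drug_penalty=1350.0):
    """Score a therapy as the sum of its drugs' effectiveness ratings
    (0-1000) minus a penalty per drug. The penalty is a made-up weight,
    chosen so one drug rated >= 700 outscores any two drugs."""
    return sum(ratings) - drug_penalty * len(ratings)

# One drug rated 700 beats even two drugs rated 1000 each:
assert weighted_sum_score([700]) > weighted_sum_score([1000, 1000])
# And the Section 5 anomaly disappears: a single 699 outscores two 701s.
assert weighted_sum_score([699]) > weighted_sum_score([701, 701])
```

Whether such a linear tradeoff is medically defensible is exactly the kind of question the paper argues should be recorded explicitly.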
Lest MYCIN's designers regret their generous assistance to us, or the readers of this paper get the incorrect impression that we are attacking MYCIN, we would like to emphasize that MYCIN is not a particularly egregious example of ad hoc integration; the problem of distinguishing arbitrary choices from justified ones is endemic among current expert systems. In fact we chose MYCIN's therapy selection algorithm precisely because the task is too complex to fit the single integration method (certainty factors) used in the rest of MYCIN and in many subsequent systems. We found the algorithm to be a rich source of techniques for integrating knowledge, and we expect case studies of other expert systems and problem-solving programs to help identify, clarify, and formalize such techniques.

If the rationale, or lack thereof, for integrating preferences in a particular fashion is left implicit in the design of an expert system, the artifacts of arbitrary design choices cannot be distinguished from bona fide domain knowledge. That is, when the expert system recommends one alternative over another -- for example, when MYCIN prefers a therapy consisting of one 1st-choice and one 3rd-choice drug over a therapy consisting of two 2nd-choice drugs -- we cannot always tell if the recommendation is based on real domain knowledge or is simply the result of some arbitrarily chosen integration scheme.

It is important to distinguish between justifying and explaining a conclusion made by an expert system. Justification is based on knowledge (or assumptions) about the domain, e.g., "therapy A is rated over therapy B because it's medically more effective." This kind of information is important to the user.
In contrast, explanation can refer to computational or design expediency, e.g., "therapy A is rated over therapy B as an artifact of condensing metrics for computational efficiency, and the designers figured it wasn't important enough to bother fixing." This kind of information can be important to the expert system maintainer.

In building an expert system it is expedient to use various knowledge integration techniques, some more justified by domain knowledge than others. The ultimate goal of this research is to create a framework for building expert systems that would support the representation of such integration techniques and the assumptions and tradeoffs involved in using them. Before that goal can be reached, much remains to be done. We must better understand how to formalize the techniques and represent the situations they apply to. We must also develop mechanisms for applying them and for reasoning about which technique to use in a given situation. Finally, the entire knowledge integration process must be recorded in a machine-understandable fashion for subsequent use in generating explanations. Such formalization will impose considerable overhead on the design process (though it should be somewhat offset by automating some of the techniques). However, we argue that an expert system ought to be able to explain its knowledge integration techniques and their underlying assumptions, both to help the user evaluate its recommendations, and to guide the expert system maintainer in adding new knowledge. In the long run, these enhanced capabilities should justify the overhead required to support them.

Acknowledgements

We thank Bill Clancey and Ted Shortliffe for their patient explanations of MYCIN, and Tom Dietterich, Jim Bennett, and Rich Keller for their comments on earlier drafts. Of course any errors are our own.

References

[Buchanan & Shortliffe 84] B. G. Buchanan and E. H. Shortliffe (editors). Rule-Based Expert Systems. Addison-Wesley, 1984.

[Clancey 83a] W.
J. Clancey. The epistemology of a rule-based expert system: A framework for explanation. Artificial Intelligence 20(3):215-251, 1983.

[Clancey 83b] William J. Clancey. The advantages of abstract control knowledge in expert system design. In AAAI-83, pages 74-78. Washington, DC, 1983.

[Clancey 84] William J. Clancey. Details of the Revised Therapy Algorithm. In B. G. Buchanan and E. H. Shortliffe (editors), Rule-Based Expert Systems. Addison-Wesley, 1984.

[Davis 77] R. Davis. Interactive transfer of expertise: Acquisition of new inference rules. In IJCAI-5, pages 321-328. Cambridge, MA, 1977.

[Mostow 81] D. J. Mostow. Mechanical Transformation of Task Heuristics into Operational Procedures. PhD thesis, Carnegie-Mellon University, 1981. Technical Report CMU-CS-81-113.

[Neches et al 85] R. Neches, W. Swartout, and J. Moore. Enhanced maintenance and explanation of expert systems through explicit models of their development. IEEE Transactions on Software Engineering SE-11(11):1337-1351, November, 1985.

[Shortliffe 84] Edward H. Shortliffe. Details of the Consultation System. In B. G. Buchanan and E. H. Shortliffe (editors), Rule-Based Expert Systems. Addison-Wesley, 1984.

[Swartout 83] W. Swartout. XPLAIN: A system for creating and explaining expert consulting systems. Artificial Intelligence 21(3):285-325, September, 1983. Also available from USC Information Sciences Institute as ISI/RS-83-4.

[Swartout & Balzer 82] W. Swartout and R. Balzer. On the inevitable intertwining of specification and implementation. CACM 25(7):438-440, July, 1982.

AUTOMATED REASONING / 935
The Shifting Terminological Space: An Impediment to Evolvability

William Swartout
Robert Neches
USC/Information Sciences Institute
4676 Admiralty Way
Marina del Rey, CA 90292

Abstract

In an expert system, rules or methods interact by creating situations to which other rules or methods respond. We call the language in which these situations are represented the terminological space. In most expert systems, terms in this language often lack an independent definition, in which case they are implicitly defined by the way the rules or methods react to them. We argue that this hampers evolution, and argue for a separate, independently defined terminological space that is automatically maintained.

1. Introduction

Due to the experiential, incremental nature of expert system development, it is critical that an expert system shell support the addition, modification and deletion of knowledge with minimal perturbation of the rest of the knowledge base, and that a developer be able to determine how a proposed change will affect the behavior of the system. What's needed for evolvability? There seem to be two major factors. First, the expert system organization should be modular in the sense that it should be possible to change one part of the system independently of the rest of the system. Second, the expert system organization should be explicit so that the effect of a change can be readily understood.

Advocates of rule-based expert system shells argue that this desired modularity arises inherently from the use of rules [6]. However, practical experience indicates that merely adopting a rule-based framework does not guarantee modularity and in some ways can impede it [3]. In this paper we will outline some of the more subtle factors that seem to affect evolvability. Focusing on one that we call the shifting terminological space, we will describe how we have addressed these issues in the Explainable Expert Systems (EES) framework [11].
The EES project is exploring a new paradigm of expert system development in which the role of knowledge engineers and domain experts is to develop a rich and detailed knowledge base that captures the factual knowledge and problem solving methods in a domain. Executable code for the expert system is then derived from the knowledge representation. Systems built in this fashion are expected to have a richer knowledge base from which to support machine-generated English explanations and justifications, as well as a more modular structure and principled organization that will facilitate the development and maintenance process.

2. Impediments to Evolvability

We begin by reviewing characteristics of current expert system frameworks that limit the modularity of expert systems.

An overly specific representation for knowledge. Knowledge is stated at a low level and is very specific to a particular task [11]. For example, MYCIN is sometimes able to determine the genus of the micro-organism infecting a patient but unable to determine its species. When this occurs, MYCIN just assumes that the species of the organism is the most likely one for the particular genus. This is a reasonable default rule, but unfortunately the general heuristic is not represented at all in MYCIN. Instead, it is captured by a set of rules, each one specific to one of the genera MYCIN knows about. For system evolvability this overly specific representation of knowledge is a major problem. From the standpoint of modularity, if one wanted to modify the general heuristic MYCIN employed, there is no single rule to modify. Instead, the system builder would have to locate and modify manually each of the rules that instantiated the general heuristic, with all the attendant possibilities for making a mistake. MYCIN also reduces explicitness, by forcing the system builder to express knowledge at an overly specific level that does not say what the system builder really intended to represent.
Confounding of different kinds of knowledge. As Clancey [3] has pointed out, a single rule may combine a variety of different kinds of knowledge, such as domain facts, problem-solving knowledge, and terminology. This reduces the modularity of an expert system because the different kinds of knowledge cannot be modified independently. This compilation of knowledge also occurs, unrecorded, in the head of the system builder, reducing explicitness and tending to make machine-provided explanations difficult to understand.

Impoverished control structures. Rules do not cleanly support some higher-level control structures, such as iteration [4,13]. On those occasions when it is necessary to represent an iterative control structure within a rule-based architecture, the results are usually clumsy and introduce serious interdependencies among the rules. Modularity is reduced because of the interdependencies. Explicitness is also reduced because the interdependencies are artifacts of the architecture and not what the system builder really wanted to say.

From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved.

Limited (or no) representation of intent. In most expert systems, rules are written at what Davis calls the "object level" [5], that is, at the level of domain phenomena. For example, as Clancey points out [3], MYCIN's higher-level strategies are implicitly encoded in the rules. There is no representation of how particular rules are involved in achieving these strategies, and in fact, the strategies aren't even represented. Instead, strategies are carefully (and implicitly) encoded by mapping intentions to object level terms. For instance, MYCIN lacks any explicit representation of the diagnostic strategy it employs; instead, that strategy has been carefully encoded by the system builder in MYCIN's rules and expressed at the object level. This mapping
hampers both explicitness and modularity because the mapping usually does not remain valid as the system grows.

To the four problems above, we add a fifth:

An ill-defined terminological space. Laird and Newell define the problem state as "the current state of knowledge of the task situation [that] exists in the agent in some representation" [9]. Rules interact by creating situations or problem states that will trigger other rules. Thus, the language in which the problem state is expressed (which we call the terminological space) is the "glue" that holds the whole expert system together. Unfortunately, some (if not most) expert systems lack any independent definition of terminology. Instead, the terms asserted by a rule acquire their meaning based solely on how other rules in the system will react to them. This is undesirable because the definition of terms is implicit; it can also seriously affect a system's modularity, since it means that changing a rule can affect not only the behavior of that rule, but also the meaning of any terms that it employs. This, in turn, can implicitly affect the behavior of any rules that use or assert that term.

We have described elsewhere how we have dealt with the first four problems in the EES framework (see [11] for more details). This paper will focus on our solution to the fifth problem: how we have chosen to represent the terminological space and how we express knowledge within that space. To deal with the problem of an ill-defined terminological space, we decided to adopt NIKL [10], a descendant of KL-ONE [2], as our basic knowledge representation language. NIKL is a semantic network-based formalism and was designed to provide a semantics for concepts that is independent of how those concepts are used in a particular system. Thus, the way a concept is defined within NIKL itself gives it its meaning, rather than having the concept acquire its meaning based on how it is used within the expert system.
Because NIKL provides an independent semantics for concepts, its designers were able to provide an automatic classifier [12] for NIKL. This classifier can determine subsumption relations among NIKL concepts based solely on their definitions. In the remainder of this report we will describe how we have represented intent and the terminological space within EES, and how we have made use of NIKL: what seems to be good about this sort of principled knowledge representation language, where the problems are, and what opportunities and challenges remain.

3. Representing Intent in a Terminological Space

Representing intent explicitly means providing an explicit representation for the goals the system is trying to achieve and the means for achieving those goals. As we have argued above, many expert systems are written at an object level and lack an explicit representation of the goals they are attacking. The lack of an explicit representation for goals reduces a system's understandability and its modularity. However, it is insufficient just to provide a representation for goals: how the goals acquire their meaning is also critical. Because this problem is generally not recognized, many systems let the goals acquire their meanings based on how those goals are achieved. This means, for example, that a plan for performing a diagnosis implicitly defines what performing a diagnosis means. The problem with this approach is that it confuses how the goal is achieved (that is, the plan) with what it means to achieve the goal. Following this approach can lead to real problems in evolvability because changing the way a goal is achieved implicitly changes the meaning of that goal.

A possible objection to requiring that goals be represented explicitly is that it means that the system reacts to goals that have been posted rather than to object-level data.
Thus, the system is not free to opportunistically react to novel situations, but rather is more tightly controlled and in a sense has to know what it's doing. The ability of a system to opportunistically react to novel situations in ways unanticipated by the system builder is, of course, one of the oft-touted advantages of a rule-based expert system architecture, but it seems to work more in theory than in practice. The problem is that it is difficult to specify the condition part of a rule so that the action part is performed in just the right circumstances. The result is that the reaction of the system in unanticipated situations is likely to be wrong, and in fact, a large part of developing a rule-based expert system involves fixing the rule base so the system does the right thing in unanticipated situations. Our approach is more conservative in that it gives us tighter control over both the situations our systems will handle and the results they will produce.

In EES, we have partially addressed the need for independent and explicit goal definitions by representing goals as NIKL concepts. The goals thus acquire their meaning from the semantics of NIKL, rather than from the methods that are used to achieve them. Although we feel this is the right approach to take, we feel we have only partially addressed the problem because NIKL does not allow us to represent everything we would like to be able to represent about a goal.

For our purposes, NIKL can be summarized as having a small set of basic capabilities. (The true set of capabilities, although still manageably small from the standpoint of maintaining fully-defined semantics, is somewhat broader than the simplified picture presented here. For more details, see [12].) New NIKL concepts are created from other concepts through modification of or addition to the roles associated with a concept. Roles in NIKL correspond roughly to slots in frame-based representation languages.
When a role is associated with a concept, a value restriction may also be given that specifies possible fillers for that role at that concept. Number restrictions are also associated with roles at concepts and specify how many objects may fill that role at that concept. Specializations of a concept are created either by further restricting an existing role on a concept, or by adding an additional role. Roughly, a concept A subsumes a concept B if all the roles on A are present on B and the number and value restrictions on B are at least as tight as on A.¹

¹ This simplified view ignores complications having to do with primitive concepts.

Each goal represented in an EES knowledge base has three major NIKL roles: a requirements description, inputs, and outputs. We can represent this as follows:

GOAL = [OBJECT: requirement-description ACTION, input (arbitrary) OBJECT, output (arbitrary) OBJECT].²

The filler of the requirement description role is a NIKL concept which is subsumed by the NIKL concept action. It represents the intention of the goal, that is, what the goal is intended to accomplish. For example, English paraphrases of the requirements of some of the goals we have modelled (from several different domains) are: "compensate digitalis dose for digitalis sensitivities", "locate cause of fault within ground-system" and "scan program for opportunity to apply readability-enhancing transformations". An important point is that these requirements are not atomic, but are themselves structured NIKL concepts composed of more primitive concepts. The structured nature of the requirements is important because it eases the process of producing natural language paraphrases and because it allows the classifier to infer subsumption relations among the concepts, something that would have to be manually inserted if the concepts were atomic. The requirements description can be thought of as specifying the problem to be solved.
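The simplified subsumption rule stated above (every role on A also present on B, with B's value and number restrictions at least as tight) can be sketched in Python. The concept names and the tiny isa table are hypothetical, and real NIKL handles much more (primitive concepts, role hierarchies, etc.):

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Role:
    value: str                    # value restriction: a concept name
    min_n: int = 0                # number restriction: at least min_n fillers
    max_n: float = float("inf")   # number restriction: at most max_n fillers

@dataclass
class Concept:
    name: str
    roles: dict = field(default_factory=dict)   # role name -> Role

# Hypothetical isa links for comparing value restrictions:
ISA = {"CAUSE": "OBJECT", "FAULT": "OBJECT", "OBJECT": None}

def isa(child, parent):
    while child is not None:
        if child == parent:
            return True
        child = ISA.get(child)
    return False

def subsumes(a, b):
    """A subsumes B if every role on A also appears on B, with B's value
    and number restrictions at least as tight as A's (simplified rule)."""
    return all(
        r in b.roles
        and isa(b.roles[r].value, role_a.value)
        and b.roles[r].min_n >= role_a.min_n
        and b.roles[r].max_n <= role_a.max_n
        for r, role_a in a.roles.items()
    )

act = Concept("ACT", {"obj": Role("OBJECT")})
locate_1 = Concept("LOCATE-1", {"obj": Role("CAUSE"), "within": Role("OBJECT")})
assert subsumes(act, locate_1)      # LOCATE-1 restricts and extends ACT's roles
assert not subsumes(locate_1, act)
```

A real classifier would additionally place each new concept in the hierarchy; this sketch only answers the pairwise subsumption question.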
The inputs and outputs of a goal specify respectively the data that is available in solving the problem (inputs) and the data that is expected to be returned as part of the solution (outputs). Plans in EES are also represented as NIKL concepts, in a similar way to goals:

PLAN = [OBJECT: capability-description ACTION, inputs (arbitrary) OBJECT, outputs (arbitrary) OBJECT, method <sequence of goals>].

The capability description of a plan describes what it can do. The inputs and outputs describe the data it expects to receive and provide respectively. The method is a sequence of subgoals that accomplish the action referred to in the capability description. Automatic classification is important in the context of EES because we use subsumption relations in finding plans that can achieve goals. When a new concept is specified, information is given about restrictions on its roles. The NIKL classifier can reason about those restrictions to decide about the actual placement of the concept in the hierarchy. It may find that the user-supplied initial description of the object places it higher than it actually belongs. It may find that the concept is subsumed by additional concepts that were not stated in the initial description. It may also find that the new concept has restrictions that make it more general than existing concepts, and interpose the new concept between those concepts and their former parents. For an example designed to illustrate many of the capabilities of the classifier, see [11]. The NIKL classifier automatically maintains

² Our notation for concepts is as follows: concepts appear in upper-case. Roles on concepts appear in lower-case. Value restrictions on the possible fillers of a role appear as concepts following the role. Number restrictions on the number of objects that may fill a role appear in parentheses preceding the value restriction. "arbitrary" indicates 0 to infinity fillers.
If no number restriction is explicitly stated, the default is 1, or (number 1). A specialization of a concept is formed by further restricting or adding additional roles to an existing concept. In our notation, this is denoted by placing the concept to be specialized within brackets, followed by the modified or added roles. If the concept is to be given a name, that is denoted by an equal sign preceding the left bracket. Thus, in the example of the definition of GOAL above, we see that a GOAL is a specialization of OBJECT with a requirements-description role filled by an ACTION, and input and output roles filled by OBJECTs. subsumption relations, so it is possible for the system to "discover" plans that can be applied to a particular goal without the need for the system builder to anticipate that case in advance. The classifier provides an independent criterion for the meaning of goals and plans. The process by which the program writer finds candidate plans for achieving a goal is as follows. When a goal is posted, the system retrieves the action represented as the goal's requirement-description. It then examines all actions which subsume that action, to see if any of them are the capability description of some plan. All such plans are checked to see if their inputs and outputs are compatible with the goal. Those which satisfy that constraint become candidate plans for achieving the goal. If several plans are found, they are tried in order, most specific first, until one succeeds. We feel there are two ways this approach aids the evolvability of expert systems. First, intention is explicitly represented. We do not rely on the intention-to-object level mapping referred to earlier that is implicit in many expert systems. As a result, we don't have to worry about that mapping becoming invalid as the system expands. This makes the system more modular, and hence evolvable.
Second, since intention is expressed in terminology that has an independent semantics, the meanings of terms remain constant as the system's problem-solving knowledge expands or is modified. The meaning of the terms only changes if we choose to explicitly modify the terminological space itself. If it is necessary to do so, we can at least see what will be affected, because the NIKL knowledge base is cross-referenced so that all uses of a term can be found. This organization also increases modularity and evolvability. 3.1. Describing Actions: A Closer Look Actions are the "glue" in our system that link goals and plans. In this section we look at them in detail. As we use actions in goals and plans, they are essentially descriptors of a transition the system builder intends to take place. We represent the action concepts as verb clauses. The main verb (e.g. "compensate", "locate", or "scan" in the above examples) has roles associated with it that correspond to the slots one would find in a case-frame representation of a verb clause in a case grammar. For example, the internal representation for the requirement "locate cause of fault within ground-system" is: LOCATE-1 = [LOCATE: obj [CAUSE of [FAULT within GROUND-SYSTEM]]]. This requirement is a specialization of the LOCATE concept, where the "obj" role (corresponding to an object slot in a case frame) is filled by "cause of fault within ground-system". We also represent prepositions that modify concepts as roles, so "of" and "within" are used to form specializations of the concepts CAUSE and FAULT in the above example. 3.2. An Example Let's consider a simple example of the addition of new problem-solving knowledge from the domain of diagnosis of a space telemetry system. In this domain, we represent as domain descriptive knowledge³ the structure of the telemetry system (i.e. the systems and subsystems and how things are interconnected). ³Domain descriptive knowledge is the knowledge of the structure of the domain.
Problem-solving knowledge is a set of plans for diagnosing potential problems with the system. The terminological knowledge characterizes and (through the classifier) induces a classification hierarchy on the different types of systems, and provides a means for linking goals and plans. As an example, consider a very simple diagnostic strategy, which locates a fault within a component by considering each of its subcomponents in turn.⁴ Naturally, such a plan will only work if the component actually has subcomponents. The capability description for such a plan would be a NIKL concept such as [LOCATE: obj [CAUSE of [FAULT within DECOMPOSABLE-SYSTEM]]], where the concept DECOMPOSABLE-SYSTEM was defined as: DECOMPOSABLE-SYSTEM = [SYSTEM: contains1 (minimum 1) SYSTEM]. That is, a DECOMPOSABLE-SYSTEM is one that contains a system.⁵ Now, let's say that in the domain descriptive knowledge we describe one of the components of the system: SPACECRAFT = [SYSTEM: contains12 TRANSMITTER, contains13 SPACECRAFT-RECEIVER]. That is, a spacecraft is a system that contains two sub-components: a transmitter and a spacecraft-receiver. The classifier will recognize that a spacecraft is a kind of decomposable-system, because it contains at least one system. Suppose a goal is posted whose requirements description is: [LOCATE: obj [CAUSE of [FAULT within SPACECRAFT]]]. The system will find the plan above because the classifier recognizes that a spacecraft is a kind of decomposable-system. Now, suppose we wanted to add a new problem-solving plan to the system that had specialized knowledge about how to locate faults in systems that contained transmitters. The capability description for this plan would be: [LOCATE: obj [CAUSE of [FAULT within [SYSTEM contains47 TRANSMITTER]]]]. The terminological reasoning done by the classifier enables the system to determine that this plan is more specific.
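The retrieval-and-selection behaviour described earlier (collect the plans whose capability descriptions subsume the goal's requirement, then try them most specific first, falling back to more general plans on failure) can be sketched as follows. The concept names and the ANCESTORS table are illustrative stand-ins for the NIKL classifier, not the actual EES code:

```python
# Stand-in for the classifier: each action concept maps to its subsuming
# concepts, nearest (most specific) first.
ANCESTORS = {
    "locate-fault-in-spacecraft": ["locate-fault-in-transmitter-system",
                                   "locate-fault-in-decomposable-system"],
}

def candidate_plans(goal_action, plans):
    """Plans whose capability description subsumes the goal's requirement,
    ordered most specific first."""
    return [plans[c] for c in ANCESTORS.get(goal_action, []) if c in plans]

def achieve(goal_action, plans):
    """Try candidate plans most specific first; fall back on failure."""
    for plan in candidate_plans(goal_action, plans):
        result = plan()
        if result is not None:
            return result
    return None

plans = {
    "locate-fault-in-transmitter-system": lambda: None,  # specialized plan fails
    "locate-fault-in-decomposable-system": lambda: "fault located by tree search",
}
print(achieve("locate-fault-in-spacecraft", plans))
```

In this toy run the specialized transmitter plan fails, so the more general decomposable-system plan is applied as a backup, mirroring the behaviour described for the spacecraft example.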
Therefore, the system will choose this plan in preference to the more general one for diagnosing decomposable systems, in those cases where it has to locate a fault in a system that contains a transmitter, such as "locate cause of fault within spacecraft". Thus, we can add new problem-solving knowledge and have the system apply it automatically in appropriate circumstances. This provides us with a very clean mechanism for embedding special-case knowledge within a system. Furthermore, such special-case knowledge does not replace the old problem-solving knowledge when added to the system. Whenever possible, the system will try to apply special-case knowledge first because it is more specific, but the general knowledge is still available as a backup in case the special-case knowledge fails. Now, let's consider what happens when we have to modify the terminological space to add a new kind of system, and a new kind of problem-solving knowledge. Suppose our informant describes certain kinds of systems where the inputs and outputs among the ⁴We have constructed a demonstration system for this example, along with some of the other problem-solving strategies in the domain. This system, while still small by expert system standards, begins to demonstrate the feasibility of this approach. ⁵In our implemented system, there is a number restriction placed on the contains role that indicates that there must be at least one system filling it. Also, "contains1" is a specialization of the "contains" role, necessary for reasons we won't go into here. subcomponents are so tightly-coupled that a different diagnosis strategy is more appropriate: e.g., signal-tracing within the system rather than the tree search method we use on decomposable systems. Unfortunately, as so often happens in expert system construction, our informant can't state precisely the defining characteristics of a tightly-coupled system, but can identify empirically which systems are tightly-coupled.
Because we can't give a precise definition for tightly-coupled systems, we can't use the classifier to recognize which systems are tightly-coupled. Instead, we have to explicitly define systems as being tightly-coupled. Thus, we define the concept "tightly-coupled-system" as a primitive specialization of system and, for example, explicitly define deep-space-receiver as a specialization of that: TIGHTLY-COUPLED-SYSTEM = [SYSTEM], and DEEP-SPACE-RECEIVER = [TIGHTLY-COUPLED-SYSTEM: contains23 ANTENNA-SYSTEM, contains24 GROUND-RECEIVER, contains25 SIGNAL-CONDITIONER]. As one would expect, the capability description for the new plan for diagnosis by signal-tracing would be: [LOCATE: obj [CAUSE of [FAULT within TIGHTLY-COUPLED-SYSTEM]]]. The interesting thing is that when the goal to "locate cause of fault within deep-space-receiver" is posted, one of the plans that is found is "locate cause of fault within decomposable-system". This is because the classifier recognizes that a deep-space-receiver is a kind of decomposable-system. The system would try to apply the plan for tightly-coupled-systems first because it is more specific, but the plan for decomposable-systems would nevertheless still be available as a backup if the other plan failed. We have been pleasantly surprised more than once by this sort of behavior. This example illustrates how the classifier, by automatically maintaining subsumption relations in the terminological space, allows the system to make appropriate use of its knowledge in ways that may not have been anticipated by the system builder. 3.3. Reformulation An additional capability of the EES framework that makes use of NIKL and the terminological space is its ability to reformulate a goal into a new goal or set of goals when searching up the subsumption hierarchy fails to yield an acceptable plan for achieving the original goal.
This capability enhances evolvability by further loosening the coupling between plans and goals, thus making the system more modular. Explicitness is also enhanced, since the system performs -- and records -- the reformulation, rather than having a human system builder perform it and fail to note having done so. The reformulation capability is discussed in detail elsewhere [11, 14]. 4. Limitations Unfortunately, this approach is by no means without problems. For example, there is no way to construct a concept that denotes "the thing that is the value restriction of the b role of A". Unfortunately, the need to do this arises frequently in our representation of method knowledge in plans. For example, if we decide to represent systems as having a role "component", and we have a particular system, "system1", that has a component "system2", there is no way in NIKL to represent something like "diagnose component of system1" and have NIKL recognize that that is equivalent to "diagnose system2". This would be very useful in representing abstract steps for a plan. Another problem is that NIKL does not support transitive relations. This has made it difficult to represent some terms that involve causality, for example, and has forced us to handle some things with problem-solving knowledge that ideally should be handled by the classifier. A third problem is that there are some conceptualizations where inherent ambiguities in the language make it impossible for the classifier to decide if one concept is subsumed under another without explicit guidance. For example, our diagnosis system has a concept of a "start of a component chain": a sub-component whose input is the same as the input of the component containing it. We might like the classifier to recognize when a new concept is subsumed under this concept.
However, it is not enough simply to define that new concept so that its input happens to be the same as that of its containing component. Doing so only tells NIKL that the inputs of the concept and its containing component are subsumed under a common parent; for the classifier to recognize that the new concept is a specialization of "start of a component chain", we must explicitly indicate that the values of the two input roles are intended to be identical. This is an important limitation on the ability of the classifier to find relationships that the system builder failed to anticipate. While limitations such as these have hampered our efforts to completely separate terminological knowledge from other kinds of knowledge, nevertheless we feel we have gained significant leverage from NIKL. Furthermore, NIKL is under continued development [7]. 5. Status The system has operated on reasonably large NIKL networks (approximately 500 concepts). The plan-finding and reformulation capabilities described in the preceding sections, as well as the NIKL knowledge base, are all operational and support the examples given above. 6. Aids for Constructing Models We have argued for a well-defined and independent terminological space and tried to show how that could ease the evolution of an expert system. However, the increased development time due to the demands of building the initial knowledge base is a severe potential problem. We are developing a knowledge acquisition aid to address this issue, which will help knowledge base builders plan activities and keep track of status while developing knowledge bases. It does so by maintaining an agenda of unfinished business that the knowledge base builder must deal with before the specification can be considered complete.
The aid seeks to ensure that consistent conventions are followed throughout a knowledge base in terms of both what kinds of information are represented and what form is followed in representing them. Our knowledge acquisition problem is partly defined by our representational concerns. We have to be concerned with eliciting not just a sufficient set of some type of concept, but with forming an abstraction hierarchy and placing concepts properly within it. Furthermore, unlike acquisition aids like MORE [8] and ROGET [1], which essentially have a schema for a particular kind of system that they are trying to instantiate for a new domain, we expect the problem-solving knowledge to be part of what is acquired. Thus, one cannot know in advance what kind of system is to be acquired, and therefore cannot make the same assumptions about what kinds of concepts need to be elicited. The fundamental insight represented by the design of this knowledge acquisition aid is that the increased reliance on detailed representation of terminological knowledge creates an opportunity, and not just a problem. Because more concepts are explicitly represented, it becomes easier to communicate about how one intends to make use of them. In particular, we will allow the knowledge base builder to express intentions for the form and content of the evolving knowledge base. Those intentions can be stated in terms of relationships or predicates that are expected to hold for concepts that classify in some way with respect to existing concepts. We will then be able to check, as the knowledge base evolves, whether concepts have been specified that violate these expectations. These concepts then become the target for knowledge acquisition, as we try to bring them into line with the statements of intentions. An example of a problem in knowledge base construction that this could help with has to do with two distinct uses of inheritance of roles in a taxonomic knowledge base.
One use is to represent some information at the most abstract concept for which it holds; we expect that instances subsumed beneath that concept will simply inherit the information. Another use is to represent an abstraction of the information itself; in that case, we expect that instances of the concept will not inherit the associated abstract information but will instead indicate a more specific version of the information related by that role. Knowledge base errors can arise because there is ordinarily no information to indicate which of these two uses is intended. Thus a new item can appear to be complete because it has some role it was intended to have, when in fact the value of the role is an inherited value that is actually less specific than is needed. Take, for example, the notion of program transformations in our Program Enhancement Advisor. Building this system requires that the EES program writer reformulate a general goal to enhance programs into specializations like ENHANCE READABILITY OF PROGRAM and ENHANCE EFFICIENCY OF PROGRAM. Implementing these goals, in turn, requires the program writer to instantiate a plan that calls (among other things) for a goal to SCAN PROGRAM FOR TRANSFORMATIONS THAT ENHANCE CHARACTERISTIC OF PROGRAM. Thus, a keystone concept in the model is: ENHANCE-1 = [ENHANCE: agent TRANSFORMATION, obj [CHARACTERISTIC of PROGRAM]]. In order to instantiate plans in the manner intended by the system-builders, there is an implicit requirement that the domain knowledge about transformations specifies, for each transformation, some particular CHARACTERISTIC of PROGRAMS which that transformation ENHANCES. That is, to be useful to the Program Enhancement Advisor, a transformation has to be described as enhancing something. This is the knowledge that, in the end, determines what transformations the resulting expert system program will make use of in trying to suggest program enhancements.
Thus, although many specializations of ENHANCE-1 are possible, certain specializations are crucial; the knowledge acquisition process must ensure that they are supplied. In particular, we expect to see specializations in which both the agent and the obj roles are further restricted, because we do not want inheritance to produce the spurious implication that all transformations enhance all characteristics of programs. In other words, the intention in specifying domain knowledge for this domain is to model which transformations enhance which characteristics. The knowledge acquisition aid we are now designing is centered around a language for stating intentions such as these in terms of how various concepts are expected to classify. The aid will be able to look at the results of classification to determine whether or not such intentions have been violated. We can use these intentions to guide us in deciding whether further information is needed pertaining to a concept, and also when the knowledge base is or is not consistent with respect to its intended use (which is a different question from that of whether it is internally consistent). 7. Summary In this paper we have argued that evolution is a critical part of the expert system lifecycle and that expert system frameworks must support evolvability. We have identified several factors that limit the evolvability of many expert systems and focussed on two: limited representation of intent and an ill-defined terminological space. Additionally, we have shown how such an independent terminological space could be used to define goals and plans. Finally, we have argued that a mechanism such as the NIKL classifier can be a significant benefit by finding and maintaining subsumption relations in the terminological space automatically, allowing the system to make use of knowledge in ways that may not have been foreseen by the system builder. Acknowledgements We would like to thank R. Balzer, R. Bates, L.
Friedman, L. Johnson, T. Kaczmarek, T. Lipkis, W. Mark, J. Moore, J. Mostow, and S. Smoliar for interesting discussions and comments that greatly facilitated the work described here. The EES project was supported under DARPA Grant #MDA 903-81-C-0335. References 1. Bennett, J., "ROGET: acquiring the conceptual structure of a diagnostic expert system," in Proceedings of the IEEE Workshop on Principles of Knowledge-based Systems, 1984. 2. Brachman, R. J., and Schmolze, J. G., "An Overview of the KL-ONE Knowledge Representation System," Cognitive Science 9, 1985, 171-216. 3. Clancey, W., "The Epistemology of a Rule-Based Expert System: A Framework for Explanation," Artificial Intelligence 20, (3), 1983, 215-251. 4. Davis, R., Applications of meta-level knowledge to the construction, maintenance, and use of large knowledge bases, Ph.D. thesis, Stanford University, 1976. Also available as SAIL AIM-283. 5. Davis, R. and Lenat, D. B., Knowledge-based systems in artificial intelligence, McGraw-Hill, 1982. 6. Davis, R., King, J., The Origin of Rule-Based Systems in AI, Addison-Wesley, 1984. 7. Kaczmarek, T., Bates, R., and Robins, G., "Recent Developments in NIKL," in Proceedings of the National Conference on Artificial Intelligence, American Association for Artificial Intelligence, 1986. 8. Kahn, G., Nowlan, S., McDermott, J., "A foundation for knowledge acquisition," in Proceedings of the IEEE Workshop on Principles of Knowledge-based Systems, 1984. 9. Laird, J. and Newell, A., A Universal Weak Method, Carnegie-Mellon University Department of Computer Science, Pittsburgh, PA, Technical Report CMU-CS-83-141, June 1983. 10. Moser, M.G., "An Overview of NIKL, the New Implementation of KL-ONE," in Research in Natural Language Understanding, Bolt, Beranek, & Newman, Inc., Cambridge, MA, 1983. BBN Technical Report 5421. 11. Neches, R., W. Swartout, J.
Moore, "Enhanced Maintenance and Explanation of Expert Systems through Explicit Models of Their Development," Transactions on Software Engineering, November 1985. Revised version of article in Proceedings of the IEEE Workshop on Principles of Knowledge-Based Systems, December 1984. 12. Schmolze, J.G. & T.A. Lipkis, "Classification in the KL-ONE Knowledge Representation System," in Proceedings of the Eighth International Joint Conference on Artificial Intelligence, IJCAI, 1983. 13. Swartout, W., "A digitalis therapy advisor with explanations," in Proceedings of the Fifth International Joint Conference on Artificial Intelligence, pp. 819-825, Cambridge, MA, 1977. 14. Swartout, W., "Beyond XPLAIN: toward more explainable expert systems," in Proceedings of the Congress of the American Association of Medical Systems and Informatics, 1986.
USING QUALITATIVE REASONING TO UNDERSTAND FINANCIAL ARITHMETIC Chidanand Apté and Se June Hong IBM Thomas J. Watson Research Center Yorktown Heights, New York 10598 Abstract This paper describes a general mechanism for the qualitative interpretation of simple arithmetic relations. This mechanism is useful for the understanding and reasoning about domains that can be modeled by systems of simple arithmetic equations. Our representation attempts to model the underlying arithmetic in its complete detail. Reasoning from these forms provides the completeness and consistency that cannot be always guaranteed by a pure production-rule based system. We describe an experimental architecture for Equation Reasoning (ER), and illustrate its applicability using examples from the financial domain. 1 Introduction One popular form of representation for building expert systems is production rules. This representation has been found to be highly successful for encoding empirical knowledge directly elicited from human experts. This empirical knowledge is normally based on the human expert's experience at solving specialized problems. Many times the underlying basis for this knowledge is hard to formulate, and rules are then the most expedient way to encode the problem solving knowledge. A class of problems exist for which it is possible to develop an underlying model of problem solving. Expert problem solving rules in such domains usually turn out to be specialized pre-compiled statements that are, in fact, derivable from the underlying theory. For building computer based problem solvers, it would then seem more advantageous for these systems to directly represent the underlying theory and reason directly from this representation just as a human expert would in the absence of pre-compiled rules.
Intuitively, we can see the advantages of circumventing the problems of completeness and consistency which arise when one attempts to directly transfer a human expert's situation-specific compiled rule knowledge into a computer program. However, on a more practical level, underlying theories, possibly well defined, often prove to be computationally expensive, and the trade-off between expediency and accuracy usually leans towards the use of production rules, even if it implies painstaking knowledge engineering efforts to ensure maximal coverage by the knowledge base. There do exist some problem domains for which the underlying model is no more complex than simple arithmetic equations. It is an interesting research issue as to whether we can draw upon principles and mechanisms of causal and qualitative modeling for representing these simple underlying models, and use them as augmented problem solvers or support tools for knowledge acquisition and explanation generation. Human experts rarely reason from underlying models, however simple, only because they have a truly vast store of applicable surface rules. On the other hand, it is not impossible for one to use a "reasoning from first principles" approach in the absence of such rules, when given a problem that can be abstracted down to its underlying mathematical form. When we try to examine, understand, and solve problems that are governed by numerical formulations, it is possible for us to use our knowledge about numerical expressions to understand how we can make the correct assumptions and approximations to get a quick feeling for the expected behavior of the unknowns in the solution. We may then progressively refine or restrict our approach to obtain more precise and accurate solutions. Many times, exact values of variables required for solving the problem are hard to come by, and inexact or estimated values need to be used for obtaining qualitatively reasonable solutions.
For a computer program to be able to use this approach, we need an explicit representation for the generic knowledge that can reason about numerical relations. Mechanisms are required for applying this knowledge to a situation that is modeled by equations. Strategies need to be developed for combining this domain independent equation based reasoning with the domain environment, which may include heuristic reasoning for solving problems that are not entirely based on pure numerical considerations. 1.1 Motivation Our interest in this mechanism stems from a desire to build knowledge based automatic reasoners in the domain of finance. Financial planning and analysis is centered around a couple of dozen arithmetic equations. Expert finance specialists seem to use a combination of numerical computations and heuristic rules. Upon closer examination, it turns out that a major portion of these heuristics are in fact derivable from exactly the same equation set (that is used in the computations), using a qualitative reasoning approach. It then seems natural to use a core representation for this basic equation set, and implement both quantitative and qualitative problem solvers that work off the same single representation. The financial application is an extremely appealing problem from the viewpoint of building AI systems. There are sufficiently complex yet not impossible problems in this domain that merit closer inspection. Other than problem solving mechanisms, there is potential for applications of advanced knowledge representation techniques, problem solving control mechanisms, natural language processing, and many other AI notions. Knowledge based systems for business and finance may prove to be a rich arena for testing the fusion of many diverse AI mechanisms and concepts. From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved.
1.2 Approach Assume an artifact whose behavior is governed by the equation a + b² − c = 0. What can we say about the behavior of the variables a, b, and c? Very little, given no further particulars. However, if we are told that typically a is extremely small compared to b, and that both a and b are typically greater than 1, and then asked how a small change in a would affect c, it would be relatively easy to answer. Because of the relation between a and b, a significant change in c will be only noticed if there is a change in b, and not otherwise. What we just did was apply our qualitative knowledge about arithmetic relations to the above equation, and made a statement of effect based on certain domain specific knowledge about the variables in the equation. How should a computer based system solve a similar problem? The quick way is to build a rule based system that will incorporate rules like "If 'a' is extremely small compared to 'b', and 'a' is changed by a small amount, then 'c' will not change significantly", or some variation thereof. We could write countless rules just for the equation c = a + b² for covering different combinations of conditions. If our system is modeled by some dozens of different equations then we need to repeat these countless rules for each of the equations. The folly in the rule based approach is obvious; the clean way to do this is to separate the equations and their domain specific knowledge into a distinct representation, and use domain independent knowledge about arithmetic relations to reason about what is represented. Our architecture is based on what we view as the clean way. All equations that are available to model a problem or parts of it are stored in an Equation Base (EB).
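The qualitative judgement above can be illustrated numerically under the stated assumption that a is extremely small compared to b. The function names and chosen values here are our own illustration, not part of ER:

```python
def c(a, b):
    # the artifact's governing relation, a + b^2 - c = 0, focused on c
    return a + b**2

def relative_effect(f, state, var, delta=0.01):
    """Relative change in f when `var` is perturbed by fraction `delta`."""
    base = f(**state)
    bumped = dict(state, **{var: state[var] * (1 + delta)})
    return abs(f(**bumped) - base) / base

state = {"a": 2.0, "b": 1000.0}        # a extremely small compared to b, both > 1
print(relative_effect(c, state, "a"))  # negligible: c barely moves
print(relative_effect(c, state, "b"))  # roughly 2 * delta: a significant change
```

Perturbing a by 1% shifts c by a few parts in 10⁸, while the same perturbation of b shifts c by about 2%, which is exactly the qualitative conclusion reached above without any numerical computation.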
A special purpose problem solver for Equation Reasoning (ER) can inspect, manipulate, and reason about the equations in EB using its expert rules about arithmetic relations and expressions, in conjunction with domain specific Situational Knowledge (SK) about the current states of the variables involved. ER can be asked to determine the consequences on a variable(s) in a specified state, via a query. As a side effect, ER may be also queried for a symbolic solution for a specified variable (that can be used in computing the value(s) of that variable). ER is itself completely domain independent; all it knows about is simple arithmetic operations and their effects. It is a domain dependent problem solver that has to transform problems from the domain to specialized queries in the equational domain, and transform answers in the equational domain to inferences in the application domain. 1.3 Related Work The use of a central "base" of equations to perform intelligent problem solving can also be seen in the work of [Kosy and Wise 84], where an equation base is used as a self-explanatory financial planning model. The use of equations in this work is mainly to give the system's spreadsheet-like computation facility an a posteriori reasoning capability to generate explanations for computed values from the underlying equations. Our effort is more in the development of an a priori reasoning mechanism to solve for and generate problem solving steps or "rules" as and when required. More recent attempts in the use of qualitative reasoning for solving financially related applications can be seen in [Hart et al. 86]. The major difference here is that this approach is considering more abstract level representations of financial activities as a basis to reason with, and not attempting to represent the underlying "arithmetic", as we wish to. 2 Representation and Reasoning about Equations Consider a system to be reasoned about that is modeled by a set of equations.
In the course of problem solving, we have to perform both qualitative inferences and numerical computations. Then given a problem specification, one way to solve that problem is to use an available production rule statement or a computable function to produce a required answer. What if the rule or the function is not available? Since we know that the system under consideration is modeled by a set of equations, we transform the domain situation problem into a query that consists of a goal accompanied by a constraint set on the variables. We then perform some symbolic manipulation and qualitative reasoning on the equations, to derive an answer to the query. The derived answer is then transformed back into a domain inference or value. This mode of operation is illustrated in Figure 1. [Figure 1. Reasoning from equations: in the absence of available rules or functions, reasoning or solving may be done at the level of the underlying equations.] We represent all equations in their unfocused prefix notation in an Equation Base. For example, the equation x = y ÷ z can be rewritten in its unfocused form as x × z − y = 0. Corresponding to this equation, the form stored in the equation base would be (- (× x z) y). To reason about or compute a variable that belongs to this equation set may require the solving of these equations to first obtain a focused symbolic solution for the variable. Classical computer based systems for symbolic manipulation of algebraic forms (e.g. MACSYMA, SMP, SCRATCHPAD II [Wolfram 85]) incorporate advanced general purpose algorithms for operations such as polynomial manipulation and symbolic integration.
For the simple arithmetic world, there are more restricted but efficient approaches [Derman and Van Wyk 84], [Hansen and Hansen 85]. We use a simple Solve facility, based on the latter, for the purpose of solving for a variable of interest from a given set of equations in the equation base. This capability is required for two purposes: for solving for a variable so that its value may be computed, and for solving for a variable so that its behavior may be reasoned about, in a focused manner, under weakly specified conditions. The major components of our problem solving mechanism are:

- An equation base (EB), for storing all the available equations in their prefix unfocused notation.
- An equation reasoner (ER), that uses specialized and qualitative knowledge about arithmetic for solving and reasoning about the contents of EB.

Figure 2 illustrates the architectural organization of the overall equation reasoner. Various components will be appropriately described, although the focus in the remainder of this paper will be upon the inference capabilities of the qualitative equation reasoner (ER) and the knowledge it employs.

Figure 2. Overall organization of an Equation Reasoner: a domain problem solver, drawing on domain situational knowledge (SK), transforms problems into queries for the equation reasoner (ER) and transforms ER's answers back; a single equation base may be used for both numerical computation and qualitative reasoning activities.

Typically, a domain problem solver will send off a request to ER for a computable function for a certain variable, or for finding out a qualitative set of consequences on variables under certain situation specific constraints. These requirements are specified as part of a query statement that is composed by the domain problem solver. Some distinct types of queries that ER can produce answers for are:

- What is the function for computing the value of the domain variable V?
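A toy version of such a Solve facility can be sketched as follows (an illustrative sketch only, not the cited algorithm; it isolates a single occurrence of a variable in a binary-operator prefix tree by peeling off top operators with their inverses):

```python
# Solve an unfocused prefix form ('-', lhs, rhs), meaning lhs - rhs = 0,
# for one variable. Handles binary +, -, *, / and a single occurrence
# of the variable; a real solver would do much more.

def contains(expr, var):
    if expr == var:
        return True
    return isinstance(expr, tuple) and any(contains(a, var) for a in expr[1:])

def solve(unfocused, var):
    """Return expr such that var = expr."""
    _, lhs, rhs = unfocused          # work with the equation lhs = rhs
    if contains(rhs, var):           # keep the variable on the left
        lhs, rhs = rhs, lhs
    while lhs != var:
        op, a, b = lhs
        if contains(a, var):         # move b to the right with the inverse op
            inv = {'+': '-', '-': '+', '*': '/', '/': '*'}[op]
            lhs, rhs = a, (inv, rhs, b)
        elif op in ('+', '*'):       # commutative: move a to the right
            inv = {'+': '-', '*': '/'}[op]
            lhs, rhs = b, (inv, rhs, a)
        else:                        # a - b = r  =>  b = a - r ; a / b = r => b = a / r
            lhs, rhs = b, (op, a, rhs)
    return rhs

# (- (* x z) y) = 0, i.e. x*z - y = 0, solved for x gives y / z.
print(solve(('-', ('*', 'x', 'z'), 'y'), 'x'))  # ('/', 'y', 'z')
```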
- Given a variable V and a constraint set on it c(V), what are the implied constraints on one or more of the variables (x1, ..., xn)?
- Given variables (x1, ..., xn), and constraints on them c(x1), ..., c(xn), what are the implied constraints on the variable V?

Producing an answer for the first query is just a matter of simple symbolic solving to produce a computable expression for a focused variable V. A domain problem solver would typically request ER to produce this if it didn't already have available to it a function for computing a value for the variable in question. Of more interest is the way ER handles the other types of queries. In a sense, producing answers for them also requires symbolic solving for a focused variable V. In addition, ER can apply its general knowledge about arithmetic functions to situation specific and query specific domain knowledge about the variables involved to come up with some qualitative solutions. Thus, ER can solve for a variable V to obtain a form V = f(x1, x2, ..., xn), and use weakly specified constraints on x1 ... xn to derive some qualitative descriptions about V. ER can also apply this reasoning in the reverse direction, i.e., given some weakly specified constraint requirements on V, it can derive some qualitative statements about the states of the variables x1 ... xn.

A query posed to ER includes two set specifications: an in-terms-of variable set, and a subset of this, a controllable variable set. Exactly one member of the controllable set is identified as a focused variable, and one or more members of the in-terms-of set are associated with a constraint list. Determining the relevant members of this set is done by the domain problem solver using the situational knowledge base (SK). ER will solve for the specified focused variable, and in the process, will only inspect and keep track of qualitative effects on the controllable set variables.
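The shape of such a query might be captured as follows (the class and its field names are our assumptions; the paper does not give a concrete query format):

```python
from dataclasses import dataclass, field

# A query to ER, as described in the text: an in-terms-of variable set,
# a controllable subset of it, exactly one focused variable drawn from
# the controllable set, and constraints on some in-terms-of variables.
@dataclass
class Query:
    in_terms_of: frozenset
    controllable: frozenset
    focused: str
    constraints: dict = field(default_factory=dict)  # var -> qualitative constraint

    def __post_init__(self):
        assert self.controllable <= self.in_terms_of
        assert self.focused in self.controllable

# The example query used later in the paper.
q = Query(in_terms_of=frozenset({'x', 'y', 'z', 'V'}),
          controllable=frozenset({'x', 'y', 'V'}),
          focused='V',
          constraints={'V': 'decrease'})
```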
Any other variables that may come up in the symbolic solution will be considered invariant in the current situation. The in-terms-of variable set is a specification of the variables in terms of which the focused symbolic solution is to be computed. Often, EB may contain multiple independent solutions for a variable, in which case specifying an in-terms-of set helps to restrict the number of solutions that are to be inspected. In some other cases, though EB may have a unique solution for a variable, an in-terms-of variable set specification is used to prevent the substitution of sub-expressions in a symbolic solution to finer resolutions. This is indeed the case in many situations when sub-expressions directly correspond to some domain concepts of current interest and the reasoning is to be done in terms of these concepts and not anything finer.

ER's general knowledge includes what we view as the qualitative knowledge about arithmetic relations. Currently, ER's knowledge is limited to the more common arithmetic notions, including Addition, Subtraction, Multiplication, Division, Exponentiation, Summation expressions, Binomial expressions, Rounding, Integer parts, Fractional parts, Sign, Magnitude. ER knows properties about these more common arithmetic relations. For example, given a relation x^y, ER can use its knowledge about exponentiation to make various inferences. The exponentiation knowledge includes rules that combine notions of even powers, odd powers, sign and magnitude of base, etc. An example of a qualitative rule about exponentiation is: If the base is a positive real fraction less than 1, then the result will decrease as the exponent increases. A related rule would be: If the base is a positive real number greater than 1, then the result will increase as the exponent increases. Which of these rules to apply to a given situation is very much dependent upon the assumptions that can be made.
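The two exponentiation rules quoted above can be written down directly (the qualitative vocabulary used here is an assumption):

```python
# The two exponentiation rules from the text, as a single function:
# for a positive base b, how does b**e respond to an increase in e?
def result_trend_as_exponent_increases(base_descr):
    """base_descr is a qualitative description of the (positive) base."""
    if base_descr == 'fraction-less-than-1':   # 0 < base < 1
        return 'decrease'
    if base_descr == 'greater-than-1':
        return 'increase'
    return 'unchanged'                          # base exactly 1

print(result_trend_as_exponent_increases('fraction-less-than-1'))  # decrease
print(result_trend_as_exponent_increases('greater-than-1'))        # increase
```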
An important additional ingredient that is therefore used by ER is simplifications and approximations based on assumptions that can be made about the operands involved in a relation. The ability to make these assumptions relies very much on domain and environment specific facts that can be gathered about these operands. These facts may be thought of as a combination of domain knowledge that is partly query dependent and partly query independent situational knowledge (SK) that can be accessed by ER during its reasoning process. The types of facts that ER will attempt to assess for a query using the domain situational knowledge base (SK) include: Is the operand a domain concept? Is the operand's domain/situation value available? Is the operand's domain/environment value available? Does the operand have a domain/situation typical/default value? Do the operands have typical/default relative magnitudes? Available data in SK regarding these questions, in combination with ER's general knowledge about arithmetic, make it possible to infer and propagate qualitative assessments about an arithmetic relation.

Many frequently occurring patterns in arithmetic expressions possess special properties that can be used for making quick qualitative interpretations. For example, the quadratic form a×x^2 + b×x is constrained in its behavior by the special role x plays in the relation (think of the quadratic curve). However, if the terms a×x^2 and b×x are inspected independently, we may possibly make locally correct but globally incorrect interpretations. One way to avoid this problem is to look for patterns matching a library of known special relations, before breaking into sub-expressions. To make this pattern match problem easy, the equation base (EB) keeps some such special relations (that are expected to occur frequently) in a pre-parsed form, so that a special relation can be detected just as any standard arithmetic operator.
For example, the domain that we wish to apply ER in has one very frequently occurring pattern, (1 + y)^(-z). When an equation containing this form is entered into EB, a pre-processor replaces the form by (binomial1 y z). From the viewpoint of numerical computation, all required information is preserved too (e.g. a function binomial1 can easily compute (1 + y)^(-z) from its two arguments y, z).

Assume that EB contains three equations: (- V (× x w)) (corresponding to V − x×w = 0), (- (× w (- 1 (binomial1 y z))) y) (corresponding to w × (1 − (1 + y)^(-z)) − y = 0), and (- x (× s t)) (corresponding to x − s×t = 0). Note that EB represents a binomial term not in its fine details but rather as (binomial1 y z). Assume ER receives a query that requires reasoning about the focused variable V, with an in-terms-of variable set specification of (x, y, z, V) and a controllable set specification of (x, y, V). ER's solution for V would be (x×y) ÷ (1 − (1 + y)^(-z)) (note that x is not further solved for because it is specified as an in-terms-of variable). The representation ER uses for building this solution is a variation of the standard prefix notation for storing expressions. For one thing, certain recognizable arithmetic concepts (e.g., a binomial term, in this case) are preserved as such, and not expanded into finer sub-trees. Also, appended with each operation and its operands is a property list that is gathered from an active domain specific situation environment. In its most general form, the symbolic solution for a focused variable is of the type (concept operand1 &optional operand2 ...) where concept is an arithmetic operator or special form and each operand is itself an internal focused variable or a domain variable.
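The pre-processing step that replaces (1 + y)^(-z) by (binomial1 y z) can be sketched as a pattern rewrite over prefix trees (the tuple encoding, the operator names 'expt' and 'neg', and the function name are our assumptions):

```python
# Sketch of the EB pre-processor: recognize the frequent pattern
# (1 + y)**(-z) in a prefix tree and replace it with ('binomial1', y, z)
# so that later pattern matching can treat it like a single operator.
def preparse(expr):
    if not isinstance(expr, tuple):
        return expr
    expr = tuple(preparse(a) for a in expr)   # rewrite sub-trees first
    # match ('expt', ('+', 1, y), ('neg', z))
    if (expr[0] == 'expt' and isinstance(expr[1], tuple)
            and expr[1][0] == '+' and expr[1][1] == 1
            and isinstance(expr[2], tuple) and expr[2][0] == 'neg'):
        return ('binomial1', expr[1][2], expr[2][1])
    return expr

# The second equation from the text: w × (1 − (1 + y)**(-z)) − y = 0.
e = ('-', ('*', 'w', ('-', 1, ('expt', ('+', 1, 'y'), ('neg', 'z')))), 'y')
print(preparse(e))
# ('-', ('*', 'w', ('-', 1, ('binomial1', 'y', 'z'))), 'y')
```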
ER makes two passes through this solution: once top-down for propagating qualitative assessments to each of the controllable variables, and once bottom-up for aggregating the individual controllable variables' qualitative constraints to their parent focused variable. For example, if the above query comes in with a constraint on V, "decrease", then ER would use its two-pass propagate-aggregate mechanism to come up with consequent constraints on x, "decrease", and on y, "decrease".

ER can also detect conflicts in assessments for a variable and perform local backtracks to make consistency corrections. For example, consider a focused expression for a variable p to be (q − t) ÷ (r − t). Suppose we wanted to propagate a constraint on p, "increase", to its domain variables. Using its knowledge about "÷" and "−", ER would see initially conflicting assessments for t: "decrease" and "increase". One way to handle such problems is to include a form like the above in EB/ER's notion of special arithmetic expressions. Then such a pattern would be detected before propagation takes place, ensuring correct actions are taken. However, there are innumerable such patterns with special properties, all of which cannot possibly be cataloged in advance. In the case of the above form, assuming it is not pre-compiled in advance, ER first posts a tag for the numerator, "increase", and a tag for the denominator, "decrease". It then posts a tag for t, "decrease", when propagating constraints to the sub-expressions in the numerator. It then pursues the denominator, attempting to post a tag for t, "increase". A conflict is detected, and ER will backtrack to the immediate parent concept that subsumes the roots of the conflict, which in this case is (q − t) ÷ (r − t). ER first attempts to resolve this conflict by considering the relative magnitudes of q and r (based upon data in SK).
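The top-down propagation pass and its conflict detection can be sketched as follows; we read the focused expression for p as the quotient (q − t) ÷ (r − t), consistent with the numerator/denominator tags described, and we assume positive operand values so the monotonicity rules below hold (a sketch, not ER's actual mechanism):

```python
# Top-down propagation of a qualitative constraint through a binary
# prefix expression. Each variable collects tags; a variable tagged both
# 'increase' and 'decrease' is exactly the kind of conflict that ER
# resolves by backtracking to the subsuming parent concept.
def flip(c):
    return {'increase': 'decrease', 'decrease': 'increase'}[c]

def propagate(expr, constraint, tags):
    if not isinstance(expr, tuple):            # a variable
        tags.setdefault(expr, set()).add(constraint)
        return
    op, a, b = expr
    if op in ('+', '*'):                       # result moves with both operands
        propagate(a, constraint, tags)         # (assuming positive operands)
        propagate(b, constraint, tags)
    elif op in ('-', '/'):                     # result moves against 2nd operand
        propagate(a, constraint, tags)
        propagate(b, flip(constraint), tags)

tags = {}
# p = (q - t) / (r - t), constraint on p: "increase"
propagate(('/', ('-', 'q', 't'), ('-', 'r', 't')), 'increase', tags)
print(tags['t'])   # tagged both 'increase' and 'decrease': a detected conflict
```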
However, in the absence of any special knowledge about this concept, ER resorts to simulation techniques to propagate constraints on q, r, t. Within the bounds of situational constraints provided by SK, ER will actually plot the behavior of this term for some numerical value assignments to the variables involved, to come up with some reasonable qualitative assessments for those variables.

How does this mechanism apply to a real example? The intent is to use ER to solve a class of problems that arise in financial planning and analysis. The following section will describe the application of ER to qualitative reasoning about the arithmetic of finance.

2.1 Understanding the financial impact of a capital acquisition

When a corporation acquires a major item such as a large computing system, it has available to it various financing options which can be used for the acquisition. Which option to use is governed by certain well defined concerns about how the transaction could potentially affect the corporation's financial ratios and books, and by some related but not so well defined qualitative issues (e.g. a corporation may always choose a particular bank to finance its acquisitions because it "likes" the way the bank treats its corporate customers). ER's usefulness is limited to the first category of concerns. The financial ratios and book entries are governed by equations, some of which are shown in Figure 3.

The two major types of financing available to a corporation are leasing and purchasing. Of course, there are several sub-types in each category. Many times, a corporation may choose one method in one time interval and another in a different time interval. It may also choose different financing methods in the same time interval for different capital acquisitions.
To determine a corporation's preferred financing method for a specific capital acquisition, its financial profile needs to be assessed.

Figure 3. Sample of financial equations (a complete set is stored in its unfocused form in ER's equation base):

  CurrentRatio_t = CurrentAssets_t / CurrentLiabilities_t
  PercentageDebt_t = Debt_t / TotalAssets_t
  EarningsPerShare_t = AfterTaxProfits_t / TotalShares_t
  CashFlow_t = Sum(i=1..n) CashFlowItems_t,i
  TaxBenefits_t = TaxRate × Sum(i=1..p) TaxExpenseItems_t,i + TaxCredits_t
  CashFlowAfterTax_t = CashFlow_t − TaxBenefits_t
  CumulativeDiscountedCashFlow_t = Sum(i=1..t) CashFlowAfterTax_i / (1 + DiscountRate)^i
  ProfitLossExpenses_t = Sum(i=1..g) ProfitLossExpenseItems_t,i
  ProfitLossBenefits_t = TaxRate × Sum(i=1..g) ProfitLossExpenseItems_t,i
  ProfitLossImpact_t = (1 − TaxRate) × (Interest_t + Depreciation_t)
  AfterTaxProfits_t = OperatingIncome_t − Taxes_t − ProfitLossImpact_t

The reasons why a company chooses a certain financing method are based on the company's concerns about how the acquisition may potentially affect its balance sheet and income statement. The company's concerns may be expressed as requirements on financial ratios, which along with the balance sheet and income statement may be expressed as a set of equations. The impact of the acquisition on the ratios and books may then be determined by propagating qualitative constraints across these equations.

When a financial expert deals with problems in planning and analyzing financing methods, he typically uses heuristics like: If the acquisition decision maker wishes to maintain a high profit margin, then use the ProfitLossImpact as a comparison basis, or If the corporation is cash-poor, has a low effective tax rate, and a high borrowing rate, then it will be strongly inclined to lease. The human expert has a countless number of such rules that he uses over and over again.
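Because EB stores the equations in their true form, each Figure 3 equation doubles as a computable function; for instance (the numerical values below are made up for illustration):

```python
# Numerically evaluating one of the Figure 3 equations:
# ProfitLossImpact_t = (1 - TaxRate) * (Interest_t + Depreciation_t)
def profit_loss_impact(tax_rate, interest, depreciation):
    return (1 - tax_rate) * (interest + depreciation)

# Hypothetical figures: 40% tax rate, $1M interest, $1M depreciation.
print(profit_loss_impact(0.4, 1.0, 1.0))  # roughly 1.2 (in $ millions)
```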
These rules may be thought of as qualitative solutions to situation specific problems, and, for a large part, may be derived from equations of the type illustrated in Figure 3. For an expert system to be able to reason about financial problems, it will either require all such rules to be encoded in advance, or have the ability to reason directly from the underlying equations when there are no readily applicable surface rules. The advantages of the latter are obvious, and we are fortunate to have an underlying theory consisting, in large part, of simple arithmetic equations.

Qualitative reasoning is very useful for answering questions that come up while performing the financial planning and analysis required for choosing an "attractive" financing method for acquiring a capital intensive item. For example, during the planning phase, typical questions may be of the type:

1. What financing method will best suit a corporation?
2. How does the use of a certain financing method for acquiring some equipment affect the corporation's EarningsPerShare ratio?
3. What financial criteria should be used for ranking the outcomes of analyses of a series of financing alternatives?

To answer such questions, we essentially have to inspect the financial equations and determine the implications and constraints imposed upon certain variables under domain specific situations. The answers lie in the derived implications and constraints. The situational knowledge that is available is not always precisely specified (as numerical values). We have to make use of whatever specifications are available for deriving a reasonable and rational qualitative solution.

Let us consider some examples of qualitative derivations of situation specific interpretations. Consider question 3.
Suppose the domain specific financial planner requires an answer to the problem: If a corporation's concern is its EarningsPerShare ratio, what criteria should be used for ranking analyses of several financing alternatives? The query passed on to ER by the domain problem solver will include the controllable set <EarningsPerShare, CashFlow, CashFlowAfterTax, CumulativeDiscountedCashFlow, ProfitLossImpact>. The variable identified as the focused variable in this set is EarningsPerShare, with a constraint on it, "increase". ER will attempt to solve for EarningsPerShare, coming up with the solution:

  EarningsPerShare = (OperatingIncome − Taxes − ProfitLossImpact) / TotalShares   (E1)

Using the strategy described previously, ER will deduce a constraint on ProfitLossImpact to be "decrease". When this is passed back, the domain problem solver can infer that If EarningsPerShare is of concern, rank alternatives by ProfitLossImpact.

Taking another example, consider question 2. Suppose the domain problem solver requires an answer to the query: What is the impact in the first year on the EarningsPerShare ratio if outright purchase (using bank financing) is used for acquiring a $5 million computing system? The query passed on to ER will consist of the controllable set <EarningsPerShare, Interest, Depreciation>. The variable identified as the focused variable in this set is EarningsPerShare, with constraints on Interest of "about a $1 million increase" and on Depreciation of "about a $1 million increase". ER will attempt to solve for EarningsPerShare, coming up with the solution:

  EarningsPerShare = (OperatingIncome − Taxes − (1 − TaxRate) × (Interest + Depreciation)) / TotalShares   (E2)

ER will assess (using a fact from SK that OperatingIncome is not extremely large compared to Interest + Depreciation) a constraint on EarningsPerShare to be "significant decrease".
Upon passing back, the domain problem solver may make an inference of the type: An outright purchase of a capital asset may adversely affect a firm's EarningsPerShare ratio in the first year.

3 Discussion

The production rule formalism is not well suited for many quantitative problem solving mechanisms. Attempts at addressing this shortcoming have resulted in systems like SOPHIE [Brown et al. 82] and ELAS [Apte and Weiss 85] that integrate numerical models of a problem with appropriate heuristic models. What eluded most of these systems was the ability to inspect and reason about the quantitative/numerical models at a qualitative level. This was mainly due to the absence of a model of general knowledge about quantities and their relations, and of an appropriate representation for the quantitative problem solving models. The recent surge of activity in the areas of causal modeling and qualitative reasoning may be partially viewed as an attempt to remedy this lack of representational detail.

Although the use of causal models is not new [Weiss et al. 78], the more recent approaches have begun to address the qualitative and causal modeling of specific quantitative problem solving methods. Much of the hallmark work in this area appeared in a special issue of Artificial Intelligence [Kuipers 84], [DeKleer and Brown 84], [Forbus 84]. Our approach to a specific domain problem is very much inspired by this recent work in qualitative reasoning and simulation.

What is it we are doing that is different? It is primarily in the representational detail we wish to use. In much of the work on qualitative simulation and understanding of physical systems, researchers have attempted to use some form of abstract representation to model mathematical entities like differential equations for the purpose of reasoning about variables that are constrained by such equations.
This shift in the representation is a forced requirement when dealing with complex forms like differential equations. A major disadvantage of abstracting the representation of mathematical forms is that certain information is lost, and that may exclude reasoning steps that require the comparison of magnitudes or the computation of numerical values. The domain of mathematics that we deal with is simple arithmetic, and thus it is possible to achieve our more exacting requirement with less computational overhead than that associated with typical causal models of complex mathematical systems.

We have formulated an architecture for equation reasoning and studied its application to the financial domain. The examples illustrate the potential role of ER as a special purpose reasoner in a financial/business expert system. The main advantage of using ER is its ability to understand and reason about constrained variables in a generalized way. Because ER uses a system's underlying equational model, it ensures completeness and consistency in its answers. Another important advantage is the use of EB for storing equations in their true form, so that they may be used for performing both numerical and qualitative computations. ER's inference strategy works on a prefix symbolic solution for a focused variable. The inference is a two-pass propagate-aggregate mechanism that detects and resolves inconsistent assessments by local backtrack actions. The power of ER lies in its own general knowledge about arithmetic, and it is the "knowledge engineering" of this base that is most critical to the working of our architecture. It is interesting to note that from the viewpoint of a domain, ER solves problems using a "first principles" approach, although the knowledge base of ER itself is quite akin to "expert rules" about arithmetic equations. ER is currently not able to provide all that is needed for use in an expert system.
Our final goal is to strengthen ER's knowledge base by building a comprehensive catalog of general knowledge about arithmetic expressions. We are also enriching the constraint language used by ER for propagating and storing derived qualitative assessments. In particular, we would like to see an external problem solver be able to compare and contrast ER's solutions for problems of a comparable nature. For example, a question of practical and theoretical interest to financial and business analysts is the corporate assessment problem of determining in advance whether a firm should lease or purchase capital assets. It requires posing multiple queries to ER (e.g. one for evaluating the impact of a lease on the firm's "xyz" ratio, one for evaluating the impact of a purchase on the firm's "xyz" ratio, etc.). The external problem solver should then be able to compare the returned qualitative assessments of the impacts on this ratio to determine which financing method is preferred. This requires ER to have the capability of computing qualitative assessments using a sufficiently rich constraint language.

Another potential area where ER could be further strengthened is its role in the creation of queries. Currently, the domain problem solver uses the situational knowledge base (SK) to form these queries. However, there do exist causal connections between domain concepts and the variables in ER's equations. One way to find out about the relevant in-terms-of and controllable sets is to actually interrogate ER about the variable in question while forming a query for it.

3.1 Current Research

Many interesting extensions can be made to ER. Some of these are essential before ER can become a practical real world tool. Others are interesting research extensions. We present some open problems that we are currently investigating.
Strategies for integrating domain dependent heuristics: As we pointed out earlier, the possibility of mapping all domain specific problems into constraints on arithmetic relations is too ideal a situation. While we intend to use a generic mechanism to reason with the numerical relations themselves, there do exist domain specific heuristics that just can't be mapped into our structure yet are useful to the reasoning process. We therefore require mechanisms that can separately represent such heuristics and make use of them where feasible. Many times, very specific heuristics produce solutions that are in complete disagreement with what ER may produce. In other cases, ER may fail to produce a solution while specific heuristics might. It is cases such as these that will require ER to be integrated with domain dependent heuristics. Integrating ER with a domain problem solver brings up an interesting issue: how does a domain problem solver know that it does not have a domain heuristic and therefore needs ER? Or vice versa? We would like to resolve these issues at least partially in the course of our on-going investigation.

Knowledge Acquisition: Many concepts that are derived from a formula based representation in the course of solving a problem may be useful over and over again, during the same problem solving process as well as in new ones. There is an interesting possibility of caching the query-answer pairs as rules for future use within the domain problem solver. We illustrated how some typical questions (2 and 3) are solved by ER (E1 and E2). The answers returned from ER, once transformed back into domain terms, can be viewed as domain inferences. For example, question 2 coupled with the answer of E2 may constitute a rule for a specific situation. The advantages of caching are twofold. One, we do not need to pre-compile all such concepts in advance.
Also, often used lines of reasoning become available as surface rules, rather than having to do something akin to theorem proving, or reasoning from first principles, each time the same problem arises. Automatic derivation and caching is one kind of knowledge acquisition, in which useful derivations are dynamically compiled, along with supporting knowledge structures. Work in the area of learning apprentice systems like LEAP [Mitchell et al. 85] and LAS [Smith et al. 85] demonstrates this capability. The problem of generalizing a line of reasoning for the purpose of caching a useful rule is extremely hard, especially if the reasoning process employs qualitative concepts. We are currently investigating this problem of generalization as it applies to financial planning and analysis.

Explanation: One of the desired aspects of intelligent problem solvers is that they be able to explain or present their solutions in a way that allows a human user to understand not only the solution reached, but also how and why it was reached. This usually requires the problem solver to maintain some kind of trace of the bodies of knowledge used during the problem solving process. When the underlying behavior is governed by numerical relations, composing an explainable solution from a large body of quantitative data can be quite complex, unless explicit knowledge is encoded in advance for composing such explanations from numeric solutions. We would like ER to preserve traces of its reasoning process so as to compose intelligible explanations from them.

3.2 Concluding Remarks

ER is a stand-alone experiment in testing mechanisms for applying qualitative reasoning to systems of arithmetic equations. Our mechanisms are currently confined to the class of simple arithmetic relations and concepts.
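The caching idea amounts to memoizing query-answer pairs so that a repeated problem is answered by a cached "surface rule" instead of being re-derived; a minimal sketch (the class and names are our assumptions):

```python
# Sketch of query-answer caching: remember each derived answer so that
# a repeated query is served from a cached "surface rule" rather than
# re-derived from the equations.
class RuleCache:
    def __init__(self, derive):
        self.derive = derive          # the expensive ER derivation
        self.rules = {}               # cached query -> answer "surface rules"

    def answer(self, query):
        if query not in self.rules:
            self.rules[query] = self.derive(query)
        return self.rules[query]

calls = []
cache = RuleCache(lambda q: (calls.append(q), f'answer({q})')[1])
cache.answer('question-2')
cache.answer('question-2')
print(len(calls))  # only 1 derivation for 2 identical queries
```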
The efforts of many others to develop powerful qualitative reasoners for a more complex class of mathematical models have given us good insight into developing this approach for a simpler subset. Many practical problems may be based on this subset, and developing more efficient mechanisms for this special class of problems would be very useful. ER is being developed within the scope of a wider project that is investigating the application of AI to building a system that will serve as a powerful interactive consultant for financial marketing decisions [Kastner et al. 86]. We continue to work towards the implementation and integration of ER with this system.

Acknowledgments

We would like to thank members of the Knowledge Systems Group at IBM Research and the anonymous referees for their useful comments, criticisms, and suggestions.

References

[Apte and Weiss 85] C.V. Apte and S.M. Weiss, An approach to expert control of interactive software systems, IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-7(5):586-591, Sept. 1985.
[Brown et al. 82] J.S. Brown, R. Burton, and J. de Kleer, Pedagogical, natural language and knowledge engineering techniques in SOPHIE I, II, and III, in D. Sleeman and J.S. Brown (editors), Intelligent Tutoring Systems, pages 227-282, Academic Press Inc., 1982.
[DeKleer and Brown 84] J. de Kleer and J.S. Brown, Qualitative physics based on confluences, Artificial Intelligence, 24(1-3):7-83, Dec. 1984.
[Derman and Van Wyk 84] E. Derman and C.J. Van Wyk, A simple equation solver and its application to financial modelling, Software Practice and Experience, 14(12):1169-1181, Dec. 1984.
[Forbus 84] K.D. Forbus, Qualitative process theory, Artificial Intelligence, 24(1-3):85-168, Dec. 1984.
[Hansen and Hansen 85] B.S. Hansen and M.R. Hansen, Simple symbolic and numeric computations based on equations and inequalities, IBM Research Report RJ 4754, June 1985.
[Hart et al. 86] P.E. Hart, A. Barzilay, and R.O. Duda, Qualitative reasoning for financial assessments: a prospectus, AI Magazine, 7(1):62-68, Spring 1986.
[Kastner et al. 86] J. Kastner, C. Apte, J. Griesmer, S.J. Hong, M. Karnaugh, and E. Mays, A knowledge based consultant for financial marketing, IBM Research Report RC 11904, May 1986.
[Kosy and Wise 84] D.W. Kosy and B.P. Wise, Self-explanatory financial planning models, Proceedings of AAAI-84, 176-181, August 1984.
[Kuipers 84] B. Kuipers, Commonsense reasoning about causality: deriving behavior from structure, Artificial Intelligence, 24(1-3):169-203, Dec. 1984.
[Mitchell et al. 85] T.M. Mitchell, S. Mahadevan, and L. Steinberg, LEAP: a learning apprentice for VLSI design, Proceedings of the Ninth IJCAI, 1:573-580, August 1985.
[Smith et al. 85] R.G. Smith, H. Winston, T.M. Mitchell, and B.G. Buchanan, Representation and use of explicit justifications for knowledge base refinement, Proceedings of the Ninth IJCAI, 1:673-680, August 1985.
[Weiss et al. 78] S.M. Weiss, C. Kulikowski, S. Amarel, and A. Safir, A model-based method for computer-aided medical decision making, Artificial Intelligence, 11:145-172, 1978.
[Wolfram 85] S. Wolfram, Symbolic mathematical computation, Communications of the ACM, 28(4):390-394, April 1985.
MOLE: A Knowledge Acquisition Tool That Uses its Head

Larry Eshelman and John McDermott
Department of Computer Science
Carnegie-Mellon University
Pittsburgh, PA 15213

Abstract

MOLE can help domain experts build a heuristic classification problem-solver by working with them to generate an initial knowledge base and then detect and remedy deficiencies in it. By exploiting several heuristic assumptions about the world, MOLE is able to minimize the information it needs to elicit from the domain expert. In particular, by using static techniques of analysis, MOLE is able to infer support values and fill in gaps when a knowledge base is under-specified. And by using dynamic techniques of analysis, MOLE is able to interactively refine the knowledge base.

1. Introduction

MOLE assists domain experts in building expert systems that do heuristic classification [Clancey 84, Clancey 85, Buchanan 84]. MOLE is useful in domains in which the expert can pre-enumerate a set of candidate hypotheses (e.g., faults, diseases, components) and in which hypotheses can be evaluated on the basis of weighted evidential considerations (e.g., symptoms, requirements). MOLE is the successor to MORE [Kahn 85a, Kahn 85b] and, more generally, follows in the footsteps of systems like TEIRESIAS [Davis 82] and ETS [Boose 84]. Like these other knowledge acquisition tools, MOLE elicits knowledge from the domain expert and builds a knowledge base. The knowledge base can then be interpreted by an inference engine to perform some heuristic classification task. In all such knowledge acquisition tools the inference engines make certain assumptions about the nature of the world. MOLE differs from these other systems in that its heuristic assumptions are made explicit and are exploited in the knowledge acquisition process.
We are trying to make MOLE smart -- which in this case means asking as few questions of the expert as possible while still being able to build a reasonable knowledge base for performing a task. MOLE's approach to knowledge acquisition is to use its heuristic assumptions about the world and assumptions about how domain experts express themselves to disambiguate the knowledge elicited from the expert. In Section 2 we describe MOLE's inference engine and how it depends upon MOLE's heuristic assumptions about the world. Unlike most other knowledge acquisition tools, MOLE is both a knowledge acquisition system and a performance system. The knowledge base built by MOLE's knowledge acquisition tool is interpreted by MOLE's inference engine to perform the given task. In Section 3 we show how MOLE's heuristic assumptions guide its knowledge acquisition process. This section is divided into two subsections which reflect the two modes of analysis used by MOLE when guiding the knowledge acquisition process: static and dynamic. Static analysis looks at the structure of the dormant knowledge base. Dynamic analysis focuses on certain parts of the knowledge base in the context of feedback provided by the expert during test diagnoses.

2. The Inference Engine

MOLE's power as a knowledge acquisition tool comes from its understanding of its problem-solving method. In MOLE's case this means selecting or classifying hypotheses on the basis of evidential considerations. To the extent that a problem-solving method makes weak presuppositions about the world, the method may give only the most limited leverage to a knowledge acquisition tool. MYCIN, for example, makes very weak presuppositions; it views its rules essentially as arbitrary implications among arbitrary facts about the world [Szolovits 78]. Other classification systems such as INTERNIST [Miller 82, Pople 82] and CASNET [Weiss 78] provide a much more specific interpretation --
a causal interpretation -- of the network of rules or links connecting its "facts". MOLE is more like INTERNIST and CASNET in this respect. MOLE's current strength is principally in the area of assisting in the development of diagnostic systems (as opposed to other types of classification systems). For MOLE a hypothesis is the cause or explanation of the problem being diagnosed. There are three types of associations supporting hypotheses:

1. symptoms
2. prior-conditions
3. qualifying conditions

A symptom is any event or state that is a causal manifestation of a hypothesis. A prior-condition is any event or state that occurs prior to or simultaneous with the hypothesis and makes the hypothesis more or less likely to be true. A qualifying condition is any background or distinguishing condition that qualifies the support of a symptom or prior-condition for a hypothesis. We will illustrate these various types of associations with an example from a knowledge base that allows MOLE to diagnose steel rolling mill problems. One problem that can arise in a rolling mill is that the sheet of steel being rolled is too narrow coming out of the mill. This symptom has three potential causes: (1) a roll is worn out; (2) there is excessive tension between the various rolls; (3) the sheet of steel was too narrow going into the rolling mill. These are the hypotheses which could explain the symptom. The hypothesis that the roll is worn out has several other symptoms -- for example, an oscillating looper roll. In addition, the worn out roll hypothesis has several prior-conditions which might affect the likelihood that it is worn out -- for example, its installation date. Note that the symptoms of the hypothesis, unlike the prior-conditions, are explained by the hypothesis.
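The paper does not give MOLE's internal representation, but the three association types above can be sketched as a small network data structure. The following Python is a minimal, hypothetical illustration (the class and variable names are ours, not MOLE's), populated with the rolling-mill events just described:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """An 'event' in the network: a hypothesis, symptom, or condition."""
    name: str

@dataclass
class Link:
    """An evidential association from a hypothesis to an event."""
    hypothesis: Node
    event: Node
    kind: str                                    # "symptom" or "prior-condition"
    qualifiers: list = field(default_factory=list)  # qualifying conditions, if any

# Rolling-mill example nodes from the text:
worn_roll = Node("roll is worn out")
narrow_exit = Node("steel too narrow on exit")
oscillating = Node("looper roll oscillating")
install_date = Node("roll installation date")

links = [
    Link(worn_roll, narrow_exit, "symptom"),       # causal manifestation
    Link(worn_roll, oscillating, "symptom"),       # another manifestation
    Link(worn_roll, install_date, "prior-condition"),  # affects likelihood only
]

def symptoms_of(h):
    """Symptoms are the events a hypothesis explains."""
    return [l.event.name for l in links if l.hypothesis is h and l.kind == "symptom"]
```

Note that only the symptom links are "explained by" the hypothesis; the prior-condition merely adjusts its likelihood, which is why the two kinds are kept distinct.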
The association between a hypothesis and a symptom or prior-condition may need to be qualified; for example, if the looper roll fails to oscillate, this tends to rule out the hypothesis that the roll is worn out unless the steel being rolled is a soft alloy. MOLE's predecessor, MORE, evaluated candidate hypotheses by combining support values and comparing the resulting value to a threshold. Hypotheses whose combined support was above the accept threshold were accepted, and hypotheses whose combined support was below the reject threshold were rejected. Any hypothesis whose combined support was in between the reject and accept thresholds was classified as indeterminate. However, indeterminate candidates were rejected if they were not needed to explain any symptoms. This latter criterion for rejecting candidates meant that MORE had some rudimentary capability to reason about evidence. But for the most part, MORE's performance was dependent upon the expert assigning reasonable numeric support values to its evidential associations. This meant that the adequacy of the knowledge acquisition process depended upon the expert's ability to assign reliable support values. Since experts have trouble assigning these support values and often do so in a rather ad hoc fashion, this became the weakest link in MORE's knowledge acquisition process. Although experts could use MORE to build diagnostic knowledge bases, MORE was little more than a knowledge acceptor. With MOLE, on the other hand, less emphasis has been placed on the numeric support values and more on reasoning about evidence. The user no longer has to supply support values. MOLE is able to assign reasonable support values because of certain heuristic assumptions it makes about the world.

From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved.
These assumptions also facilitate MOLE's ability to reason about evidence which, in turn, enables MOLE to be less reliant on its support values. We discuss how support values are determined in the next section. In the remainder of this section we discuss MOLE's heuristic assumptions and how they affect its ability to reason about evidence. MOLE's heuristic assumptions about the world are similar to those made by INTERNIST. MOLE makes two basic assumptions about the world:

1. Exhaustivity: every abnormal finding has an explanation -- i.e., some candidate hypothesis will account for it.

2. Exclusivity: explanations should not be multiplied beyond necessity -- i.e., do not accept two hypotheses if one will do.

The exhaustivity heuristic enables MOLE to interpret the evidential links in its domain model causally. Every symptom is assumed to have a cause. If a symptom is not explained by one hypothesis, it must be explained by another. The exclusivity heuristic is based on Occam's razor. All other things being equal, parsimonious explanations should be favored. In addition, it captures the assumption that the types of events represented by hypotheses are fairly rare, so it is unlikely that several occur simultaneously. (Of course, two such events might be interrelated, but then this should be represented in the network.) An important corollary follows from the exhaustivity and exclusivity heuristics: accept the best candidate relative to its competitors -- i.e., a candidate may "win" by ruling out competing candidates. Because symptoms must be explained by some hypothesis (exhaustivity), one of the hypotheses must be true. And because only one hypothesis is likely to be true (exclusivity), we can drive up the support of one hypothesis by driving down the support of its competitors or vice versa. For instance, the fact that the looper roll is not oscillating tends to rule out the hypothesis that a worn out roll is the cause of the steel being too narrow on exit.
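The rule-out corollary can be illustrated numerically. A minimal sketch (our own construction, not MOLE's code): a symptom's support is spread over the hypotheses that could explain it, and because some hypothesis must be true (exhaustivity) while only one is expected (exclusivity), zeroing one competitor's share renormalizes the rest upward:

```python
def renormalize(support):
    """Redistribute a symptom's support among the hypotheses that can
    still explain it, so the shares again sum to 1.0."""
    total = sum(support.values())
    return {h: s / total for h, s in support.items()}

# One symptom ("too narrow on exit"), three competing explanations:
support = {"worn roll": 1/3, "excessive tension": 1/3, "narrow on entry": 1/3}

# The looper roll is NOT oscillating, which tends to rule out a worn roll:
support["worn roll"] = 0.0
support = renormalize(support)
# Driving one competitor down drives the remaining hypotheses up:
# support["excessive tension"] is now 0.5, up from 1/3.
```

This is the sense in which a candidate may "win" merely by its competitors being ruled out.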
If we have already ruled out that the steel was too narrow on entry, then we are led to conclude that the only remaining hypothesis must be the cause -- i.e., there is excessive tension between the rolls. However, if we find that there is greater evidence against this hypothesis than the other two, the other two again become contenders. Even though there is evidence that would normally rule them out, they are still better than the only other alternative. In order to show the important role MOLE's heuristic assumptions play in the evaluation process, we will briefly summarize its method of evaluation. The evaluation process begins by asking the user about a set of core symptoms. Depending upon the starting point within a given network of hypotheses and evidential associations, the inference engine can do either backward or forward chaining. The evaluation method consists of the following steps:

1. Ask about the core symptoms.

2. Activate those hypotheses that are needed to explain the symptoms that are known to be present.

3. Differentiate active hypotheses:
- Rule out: Raise support for one hypothesis by lowering support for competing hypotheses by establishing that negative prior-conditions are satisfied.
- Raise prior probability: Raise support for one hypothesis relative to its competitors by establishing that positive prior-conditions are satisfied.
- Symptom differentiation: Establish that there are symptoms which support one hypothesis more than its competitors; go to 2.

4. Combine the support provided by the evidence for each hypothesis using the Bernoulli combination.

5. Accept those hypotheses whose evaluation is above some threshold.
- Accept those hypotheses which explain a single symptom better than any of their competitors.
- Accept those hypotheses whose combined support from symptoms is greater than any of their competitors.

6.
If there are some symptoms which are not explained by an accepted hypothesis and there are potential queries which might be relevant, go to 3.

7. Reject those hypotheses that are not needed to explain the known symptoms.
- Reject those hypotheses that are not accepted and which are not needed to explain known symptoms.
- Reject those hypotheses that are accepted because they explain a particular symptom, provided this symptom is very likely to follow from a hypothesis that is needed to explain other symptoms.

MOLE's heuristic assumptions are the basis for steps 3 and 7 -- the differentiation and rejection steps. The exhaustivity heuristic implies that a hypothesis can be rejected only if it is not needed to explain any of the symptoms. The exclusivity heuristic also is relevant for determining when to reject a hypothesis. A tentatively accepted hypothesis H1 is rejected if some other independently accepted hypothesis H2 will explain those symptoms Si which H1 explains, and the Si are more likely to follow from H2 than H1. The corollary which follows from the two heuristics is the basis for the differentiation process by which MOLE distinguishes the relative merits of the active hypotheses. To return to our rolling mill example, if MOLE knows that the steel is too narrow on exit, three hypotheses are activated: (1) the roll is worn out, (2) there is excessive tension between the rolls, and (3) the steel was too narrow on entry. In order to determine which of these three hypotheses is the cause of the steel being too narrow on exit, MOLE looks for symptoms which only one of the hypotheses will explain and for circumstances which will rule out the other hypotheses. In this example, there are three symptoms which are explained by a worn out roll that are not explained by either of the other hypotheses.
If one of them holds -- e.g., the looper roll is oscillating -- then MOLE concludes that since the worn out roll explains all known symptoms, it is the cause of the steel being too narrow on exit. The other two hypotheses are not needed and so are rejected. However, if there were another symptom which only the excessive tension hypothesis explained, MOLE would accept this hypothesis as well as the worn out roll hypothesis. Suppose, on the other hand, that none of the three symptoms which are only explained by the worn out roll hypothesis were present, but that some circumstance held which tended to rule out the worn out roll hypothesis -- e.g., there are no uneven surface problems. In this case MOLE would conclude that it is unlikely that there is a worn out roll, and would focus on the other two hypotheses. If MOLE could also rule out that the steel was too narrow on entry, then, by elimination, MOLE would conclude that the cause must be excessive tension between rolls. MOLE's method of evaluation can be usefully compared to that of INTERNIST and CASNET. Like both these systems, MOLE attempts to select the hypothesis which accounts for the most data. And like both these systems more than one may be selected. There may not be a single hypothesis which covers all symptoms, so several hypotheses may need to be accepted. Although it is assumed that only a single hypothesis is needed to explain a particular symptom, another hypothesis may better explain some other symptom. MOLE's way of selecting the best hypothesis is similar to INTERNIST's. Like INTERNIST, MOLE picks the best hypothesis relative to its competitors instead of accepting a hypothesis only if its absolute score (numeric measure of belief) is above some fixed threshold. This method of selecting a hypothesis is a natural consequence of MOLE's heuristic assumptions about the world which, as we have noted, are similar to those made by INTERNIST.
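Step 4 of the evaluation method combines per-evidence supports with "the Bernoulli combination." The paper does not spell the formula out; assuming the standard reading for combining independent supports (the noisy-OR form, 1 − Π(1 − s_i)), a sketch would be:

```python
def bernoulli_combine(supports):
    """Combine independent supports s_i in [0, 1] for one hypothesis.

    Assumed noisy-OR form: the combined support is the probability
    that at least one piece of evidence supports the hypothesis.
    (The paper names the Bernoulli combination but does not define it;
    this form is our assumption.)"""
    none_fire = 1.0
    for s in supports:
        none_fire *= (1.0 - s)
    return 1.0 - none_fire

# Two symptoms each giving 0.5 support combine to 0.75,
# more than either alone but still short of certainty:
combined = bernoulli_combine([0.5, 0.5])
```

Under this reading, each additional piece of supporting evidence raises a hypothesis's combined support, but no finite set of partial supports reaches 1.0.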
However, MOLE handles differentiation somewhat differently in that support is dynamically shifted from one hypothesis to another. When one hypothesis is ruled out, the support values for other hypotheses explaining the same symptom increase. MOLE sides with INTERNIST, and against CASNET, on one other important issue: observations and intermediate states are lumped together as manifestations of hypotheses. For MOLE this means that confidence in observations can be easily integrated into the differentiation process. If its confidence in a symptom is less than certain, MOLE treats the possibility that the observation might be mistaken as another hypothesis explaining the symptom. As evidence against the hypothesis explaining the symptom mounts, the likelihood that the observation is mistaken increases. Finally, MOLE has one very important property in common with CASNET: MOLE can reason both backwards and forwards within its network just as CASNET can in its network of pathophysiological states. At the heart of MOLE's evaluation process is the distinction between evidence that needs to be explained or covered by some hypothesis and evidence that is circumstantial -- which is merely correlated with some hypothesis. By allowing MOLE's inference engine to be driven by "covering" evidence as opposed to circumstantial evidence, the emphasis is shifted from numeric support values to how well the covering evidence is explained. Only those hypotheses which are potentially needed to explain the symptoms are activated. Circumstantial evidence is used to differentiate the active candidates relative to some piece of evidence that must be covered. A hypothesis is accepted if it covers a piece of evidence better than its competitors. In so far as the underlying heuristic assumptions can be given a suitable interpretation, MOLE's method can be applied to non-diagnostic domains.
The exhaustivity heuristic simply says that there is information associated with some of the hypotheses which, when it holds, must be covered by one of these hypotheses. If the domain involves component selection, for example, then the hypotheses would be components and the relevant information might be requirements that must be met. Exhaustivity is then interpreted to mean that given some requirement, some member from a set of components which meets this requirement must be selected. The exclusivity assumption is interpreted to mean that only one component should be selected from this set. If no single hypothesis will cover all the requirements, then there must either be a missing hypothesis (component) that would cover all the requirements, or some of the requirements must be relaxed. It should be noted that the relaxing of requirements in a selection task parallels lowering the confidence in some of the symptoms in a diagnostic task. We are not claiming that a suitable interpretation of these heuristics can be found for all heuristic classification tasks. Some classification tasks seem to be based primarily on circumstantial knowledge, with little or no role for covering knowledge. An example would be Grundy, which recommends books based on the reader's personality [Rich 79]; the relevant knowledge is correlations between book traits and personality traits. No doubt there are many other classification tasks which provide little, if any, role for covering knowledge. Early heuristic classification systems did not distinguish between covering and circumstantial knowledge [Shortliffe 76, Weiss 79]. In effect, they treated all evidential knowledge as circumstantial. This does not mean that their performance is inferior to MOLE's. If the expert can provide correct support values, they should perform as well as MOLE. The main advantage of MOLE lies elsewhere.
As will be shown in the next two sections, by distinguishing covering knowledge from other types of associations, MOLE can provide more guidance to the knowledge acquisition process than would otherwise be possible.

3. Knowledge Acquisition

MOLE's knowledge acquisition process consists of two phases: (1) the gathering of information for constructing the initial knowledge base and (2) the iterative refinement of this knowledge base. In order to generate the initial knowledge base, MOLE asks the expert to list hypotheses and evidence that are commonly relevant in the expert's domain and to draw associations between the evidence and the hypotheses. The expert is encouraged to be as specific as possible. However, the expert is not required to specify anything more than the names of "events" and to indicate which events are associated. The resulting knowledge base can be viewed as an under-specified network of nodes and links. For the network to be fully specified three additional kinds of information are needed: (1) the type of each node, (2) the type of each link, and (3) each link's support value. A node's type indicates whether the method for determining its value is by directly asking the user or by inferring its value from other nodes. A link's type indicates the type of evidential association it represents -- a covering association, a circumstantial association, or an association which qualifies the support of a covering or circumstantial association. The support value indicates how much positive or negative support a piece of evidence provides for a hypothesis. MOLE understands that experts cannot always provide such information. This is a major difference between MOLE and its predecessor, MORE. MORE required the expert to specify the information in a form that reflected the knowledge structure presupposed by its knowledge base interpreter.
The burden was on the expert to fit his knowledge into MORE rather than MORE being intelligent enough to make sense of whatever information the expert was willing to provide. MOLE, on the other hand, recognizes that experts often have difficulty coming up with a consistent set of support values, that they sometimes are uncertain about the type of evidential link, and that they occasionally are even unsure whether an event is observed or inferred. MOLE can tolerate such indeterminateness. MOLE is opportunistic and relies on its heuristics to mold the under-specified information provided by the expert into a consistent and unambiguous network and to discover missing or incorrect knowledge. Our research effort has been directed toward making MOLE smarter and less tedious to use. MOLE now asks less and infers more. During the second phase of knowledge acquisition, MOLE and the expert interact in order to refine the knowledge base. The nature of this interaction is another major difference between MOLE and MORE. MORE used static analysis to try to discover weaknesses in the knowledge base. MORE had certain expectations about the structure of diagnostic networks, and prompted the user when the network did not meet these expectations. MOLE also uses static analysis, but it plays less of a role in discovering weaknesses in the knowledge base and more of a role in disambiguating an under-specified network. Of MORE's eight strategies for improving diagnostic performance, only differentiation plays an important role during static analysis. Most of the burden of refining the knowledge base has been shifted to dynamic analysis. The expert supplies MOLE with feedback on how accurate its diagnosis is for some test case. If the diagnosis is incorrect, MOLE tries to determine the likely cause of the mistake and recommends possible remedies. The following two subsections discuss how static and dynamic analysis aid in the knowledge acquisition process.

3.1.
Static Analysis

Static analysis concentrates on the structure of the dormant knowledge base. MOLE uses static analysis (1) to disambiguate an under-specified network, (2) to assign support values, and (3) to recognize structural inadequacies in the network. The expert may specify the initial knowledge base at any of several levels of abstraction. If the expert is not able to say whether an association is a covering or a circumstantial link, for example, he can specify the temporal relation of the association. This will create some ambiguity for MOLE. For instance, event E1 could be prior to event E2 because either E1 is a prior-condition for hypothesis E2 or E1 is a hypothesis explaining symptom E2. If the expert is unable to specify the temporal direction of a link, then he can minimally specify that two events are associated with no indication of the type of association or the temporal direction. In this case, there is even more ambiguity in the network. Because the network can be layered, with some hypotheses serving as symptoms for other hypotheses, there are often many possible interpretations of an under-specified network. MOLE currently has a number of heuristics for helping it interpret such a network. Some of these heuristics rely on the nature of the types of associations understood by MOLE's evaluation method. Others make assumptions about how an expert's style of specifying the network should be interpreted. The following is an example of a heuristic based on the nature of associations:

If event E1 leads to event E2, and event E1 (when false) rules out event E2, then E1 is a symptom for E2 rather than a prior-condition.

MOLE assumes that although symptoms may provide negative as well as positive support, prior-conditions tend to be either positive or negative but not both.
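This link-typing heuristic reduces to a simple decision rule: an association that carries both positive support (when the event holds) and negative support (when it fails) is classified as a symptom link, since prior-conditions are assumed to support in one direction only. A hypothetical sketch of that rule (our construction, not MOLE's code):

```python
def classify_link(gives_positive_support, rules_out_when_false):
    """Sketch of the static-analysis heuristic described in the text.

    gives_positive_support: the event, when true, supports the hypothesis.
    rules_out_when_false:   the event, when false, rules the hypothesis out.

    An association carrying both positive and negative support is taken
    to be a symptom link; one-directional support suggests a prior-condition.
    """
    if gives_positive_support and rules_out_when_false:
        return "symptom"
    return "prior-condition"

# Rolling-mill reading: an oscillating looper roll supports a worn roll,
# and its absence tends to rule the worn roll out -> a symptom link.
oscillation_link = classify_link(True, True)

# An early installation date raises the likelihood of a worn roll but its
# absence does not rule it out -> treated as a prior-condition.
install_date_link = classify_link(True, False)
```

Being a heuristic, the classification is defeasible; as the text notes, MOLE may later reinterpret such links during dynamic analysis.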
The following is an example of a heuristic based on how experts express themselves:

If event E1 is inferred to be a symptom of event E2, and event E3 is then input as a sibling of event E1, E3 is inferred to be a symptom of E2.

If the specification of the network is so under-determined that MOLE is not able to make any reasonable guesses about its shape, then MOLE asks the expert for additional information. Of course, even here MOLE does not simply ask for undirected guidance. MOLE asks for information which it expects will be the most effective in helping it determine the structure of the network. For example, asking about the role of an association with many siblings usually provides more information than asking about the role of an association with only a few siblings. So far, nothing has been said about qualifying conditions. This is because MOLE initially assumes that each piece of information is either a symptom or a prior-condition and not some background qualifying condition. Symptoms and prior-conditions are assumed to provide independent evidence for hypotheses. This is a default assumption which expresses a lack of knowledge on MOLE's part. Once MOLE gets some feedback about the network's performance, MOLE can adjust this assumption during dynamic analysis if it needs to. This is done by adding conditions that qualify the support of a symptom or prior-condition for the hypothesis. Although qualifying conditions are typically extraneous background conditions, the interdependence of two symptoms can be represented by treating them as qualifying conditions for each other. The rolling mill example illustrates some of these heuristics for disambiguating an under-specified network. MOLE was told that a worn out roll leads to a number of events such as the steel being too narrow on exit and the looper roll oscillating.
Because these events were leaf nodes that follow from a worn out roll, MOLE assumed that the worn out roll was a hypothesis explaining these leaf nodes. For the same reason it concluded that excessive tension between rolls was a hypothesis. The excessive tension hypothesis, in turn, can be explained by one of two second level hypotheses -- i.e., either there is an overload or the looper is not working. MOLE assumed these were hypotheses because it was told that they lead to excessive tension, and that there were other events that lead to them. On the other hand, MOLE was told that the roll being installed before a certain date was linked to the worn out roll. Because this association was less specific than the other types of specifications, MOLE assumed that it was probably a different type of an association -- i.e., a prior-condition. One of the events that MOLE was told leads to the looper not working is that there is a regulator malfunction. MOLE was uncertain whether this was a third level hypothesis explaining, or a positive prior-condition affecting the likelihood of, the looper not working. When it learned that a regulator malfunction leads to the looper meter resting on zero and that this is a leaf node, it concluded that the regulator malfunction must be a third level hypothesis. Static analysis is also used to assign default support values. The method for assigning support values for covering evidence follows directly from MOLE's heuristic assumptions about the world. The exhaustivity heuristic, which assumes that every symptom can be explained by some hypothesis, in conjunction with the rule out corollary, which assumes that best is relative, insures that the positive support provided by a piece of evidence must be distributed among the hypotheses. And these two assumptions, along with the exclusivity heuristic, insure that the positive support from some piece of evidence to various candidates must sum to 1.0.
MOLE makes the default assumption that the support values for any symptom are equally divided among the hypotheses that explain it. The method for assigning support values for circumstantial evidence relies on a heuristic concerning how experts express themselves. MOLE assumes that experts initially mention a positive or negative prior-condition only if it has a significant impact; thus a fairly high support value is assigned in all cases. These values, like the support values for covering knowledge, can subsequently be changed by MOLE during dynamic analysis. So far we have focused on semantic inadequacies of the initial network. Another source of problems is structural inadequacies. The expert typically forgets to add certain basic associations. Sometimes, the resulting structure makes little sense from a diagnostic point of view. MOLE is able to recognize certain structural inadequacies and prompt the expert for likely remedies. For example, there may be no way to differentiate two hypotheses on the basis of the evidential associations provided by the expert. The expert may have forgotten to specify that there is some positive piece of evidence which supports one hypothesis but not the other, or that when a positive piece of evidence which supports both hypotheses fails to hold, it tends to rule out one of the hypotheses. In the case of the rolling mill, the expert indicated that both excessive top speed and a wrong speed set up could explain an overload. MOLE reasoned that there is no point in specifying alternative explanations of an event unless these explanations can somehow be differentiated. MOLE asked the expert if there was any event that followed from one of the hypotheses and not the other. In this case, there was one such event for excessive top speed and two for the wrong speed set up.
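The default assignment of covering-support values described above (each symptom's positive support divided equally among the hypotheses that explain it, summing to 1.0 per symptom) is straightforward to sketch. The function and the dictionary layout are ours, used only for illustration:

```python
def default_symptom_supports(network):
    """Assign default covering-support values: each symptom's support is
    split equally among the hypotheses that explain it, so the values
    for any one symptom sum to 1.0 (exhaustivity + exclusivity)."""
    supports = {}
    for symptom, hypotheses in network.items():
        share = 1.0 / len(hypotheses)
        for h in hypotheses:
            supports[(symptom, h)] = share
    return supports

# Rolling-mill network: symptom -> hypotheses that can explain it.
net = {
    "too narrow on exit": ["worn roll", "excessive tension", "narrow on entry"],
    "oscillating looper roll": ["worn roll"],
}
supports = default_symptom_supports(net)
# Each of the three explanations of narrow-on-exit gets 1/3;
# the oscillating looper roll, with a single explanation, gives it 1.0.
```

These defaults are only a starting point; as the text notes, dynamic analysis may later revise them in the light of test diagnoses.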
Although static analysis plays an important role in locating structural inadequacies, its greatest value is in disambiguating and completing an under-specified network. Because MOLE does not need to elicit a lot of information from the expert in order to build a reasonable knowledge base, the expert is able to use MOLE to quickly generate a prototype that performs the diagnostic task. The expert can then experiment with this prototype and use MOLE's dynamic analysis capabilities to iteratively refine the knowledge base.

3.2. Dynamic Analysis

Dynamic analysis is done in conjunction with test diagnoses. The expert gives MOLE a test case and tells MOLE the correct diagnosis. If MOLE the performance program comes to an incorrect conclusion, MOLE the knowledge acquisition tool tries to determine the source of the error and recommends possible remedies. MOLE's predecessor, MORE, only did static analysis of its knowledge base. MORE analyzed the network statically and suggested what types of knowledge might be missing. For instance, if MORE discovered that a hypothesis had no symptoms providing strong positive support, it would ask whether there were any features of the symptom which, when true, increased the support for the hypothesis. The problem is that there are potentially too many places where knowledge may be missing. In the rolling mill example, MORE discovered eighteen cases where distinguishing features might be needed, but only in one case could the expert provide any such features. This may be because the expert cannot think of the missing knowledge or because there is none. In either case, with the static approach, analysis of the network for missing knowledge was often cumbersome and not very helpful. As was indicated in the previous subsection, MOLE does use static analysis. However, MOLE limits it to a few special cases. Generally, what is needed is some way to focus the analysis on the relevant parts of the network.
MOLE uses feedback from diagnostic sessions to help it focus its attention on parts of the network with missing or incorrect knowledge. After MOLE has provided its diagnosis for some test case, the expert has the option of telling MOLE what he thinks is the correct diagnosis. This enables MOLE to focus on the part of the network where there is likely to be missing knowledge, and to do so in a context in which the expert is more likely to notice that some knowledge is missing. If, for example, MOLE cannot distinguish between the hypotheses that would explain the looper not working, but the expert has told it that it should be able to, then it will occur to MOLE that it may be missing some distinguishing condition. In other words, MOLE does not ask for a specific type of knowledge until it makes an incorrect diagnosis where that type of knowledge could make a difference. MOLE uses dynamic analysis to help (1) discover missing knowledge, (2) guide in the revision of support values, and (3) further disambiguate the network. The conditions for these actions are closely intertwined. Given MOLE's diagnosis and a target diagnosis supplied by the expert, MOLE first determines whether the targeted diagnosis is reachable by shifting support within the existing network of symptoms and hypotheses. If this is possible, MOLE does one of the following:

- If a hypothesis's support needs to be driven down, and it does not have strong negative support, MOLE asks for information that would tend to rule it out.
- If a hypothesis's support needs to be driven up, and it has strong negative support, MOLE asks for background conditions that would mask negative support.
- If a hypothesis's support needs to be driven up, and it does not have strong positive prior-conditions, MOLE asks for positive prior-conditions.
- If a symptom's support needs to be shifted from one hypothesis to another, MOLE asks for distinguishing conditions.
- If the user provides no additional information, MOLE either revises support values or reinterprets parts of the network, depending on its confidence in its interpretation and in its support values.

On the other hand, if the targeted diagnosis is not reachable by shifting support within the current network of symptoms and hypotheses, MOLE tries to determine what part of the required structure might be missing:

- If a hypothesis cannot be rejected because it is needed to explain given symptoms, or if a hypothesis is accepted because it is the only explanation of a symptom, MOLE asks for alternative explanations. If no such hypothesis is provided, MOLE assumes that the observation of this symptom is not always reliable and adjusts the default confidence (initially 1.0) in the symptom downward.
- If a hypothesis was rejected but should not have been, then MOLE asks if there is some symptom which the hypothesis would explain, but which is not currently associated with it in the network.

When faced with a choice between revising support values and re-interpreting the network, MOLE bases its decision on its confidence in past decisions. In order to avoid thrashing, MOLE keeps a record of any revisions in support values that it makes. This enables it to know whether it has revised a support value in the opposite direction in the past. The source of a support value and its degree of stability are used to determine a weight which represents MOLE's confidence in the support value. Similarly, during static analysis MOLE records its confidence in any interpretations of the network that it makes. MOLE remembers whether its interpretation of a link or node was specified by the user or determined by its heuristics. If the interpretation is a reasoned guess based on its heuristics, MOLE assigns this guess a degree of confidence reflecting the strength of the heuristic used. MOLE changes those parts of the network in which it is the least confident.

It should be stressed that the static mode of analysis does not remove all ambiguities in the network. When statically disambiguating the network, certain associations are represented by several types of links. Some of these extra links need to be pruned. By examining which associations are needed in the context of diagnostic cases, MOLE is able to determine when it is possible to prune some of these associations. However, MOLE's performance system does not require that all ambiguity be resolved. Sometimes ambiguity is inherent to the problem and the associations can only be disambiguated in the context of actual examples. For example, a node which in some instances may serve as a hypothesis explaining a second node may in other instances serve as circumstantial evidence for this second node. The interpretation will depend upon which node's value is discovered first.

An example from the rolling mill system will illustrate how MOLE uses dynamic analysis. Suppose the user has indicated that the steel is too narrow and that the looper roll is oscillating. Based on this information, MOLE would conclude that there is a worn out roll. This is the only hypothesis which would explain the oscillating. There are two other hypotheses -- i.e., excessive tension between rolls and too narrow on entry into the mill -- which would explain why the steel on exit from the mill is too narrow, but since the narrowness on exit can be explained by a hypothesis which is needed for independent reasons, these two alternative hypotheses are rejected.

Now suppose the expert indicates that MOLE should have accepted one of these two alternative hypotheses and rejected the worn out roll hypothesis. MOLE will ask the user to give an alternative explanation for why the looper roll is oscillating. Since every symptom must have an explanation, and the only explanation for the oscillation that MOLE knows about is a hypothesis that it is told to reject, MOLE concludes that there must be an alternative explanation. If the expert says that there is no such alternative hypothesis, MOLE asks the expert how certain he is that the roll really is oscillating. If the expert says that he is certain, then MOLE will provide a "dummy" hypothesis for explaining the symptom. MOLE assumes that this dummy explanation is uninteresting because either it explains an event that occurs often in non-problematic situations or it explains an event the expert does not understand. There is one other alternative. If MOLE is not very certain that the oscillating roll observation is a symptom, MOLE will tentatively try treating it as a prior-condition so that it does not have to be explained by any hypothesis.

Suppose, on the other hand, MOLE is told that the steel is too narrow on exit, that it was not too narrow on entry, and that there is no oscillation problem. In this case, it would conclude that there must be excessive tension between rolls. If the expert indicates that he is undecided between this hypothesis and the worn out roll hypothesis, MOLE will first focus on why it ruled out the worn out roll hypothesis. It will discover that the reason is that the oscillation symptom failed to occur. MOLE will ask the expert whether there is any background condition which masks the negative effect of the failure of this symptom. It might be that MOLE does not yet know that worn out rolls do not typically lead to oscillation if the alloy is soft. If the expert fails to indicate that there is such a masking condition, MOLE will ask for positive prior-conditions that increase the likelihood of a worn out roll and offset the negative effects of the oscillation failing to occur.
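The thrashing-avoidance bookkeeping described above (a revision history, plus a confidence weight derived from a value's source and its stability) might look roughly like this; the base weights and the reversal penalty are invented for illustration:

```python
# Hypothetical sketch of MOLE-style support-value bookkeeping; the
# numeric weights are invented, not MOLE's actual values.
class SupportValue:
    BASE = {"expert": 0.9, "heuristic": 0.4}  # source-derived confidence

    def __init__(self, value, source="heuristic"):
        self.value = value
        self.source = source
        self.history = []  # directions of past revisions

    def revise(self, new_value):
        self.history.append("up" if new_value > self.value else "down")
        self.value = new_value

    def confidence(self):
        """Source gives a base weight; each direction reversal (a sign
        of thrashing) makes the value less trustworthy."""
        reversals = sum(1 for a, b in zip(self.history, self.history[1:])
                        if a != b)
        return max(0.0, self.BASE[self.source] - 0.1 * reversals)


def least_confident(values):
    """MOLE changes the parts of the network it is least confident in."""
    return min(values, key=lambda s: s.confidence())
```

An expert-supplied value starts with a higher base weight than a heuristic guess, so heuristic guesses that have oscillated become the first candidates for reinterpretation.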
Ultimately, if the expert does not indicate additional information, MOLE will try revising the default support values by shifting them from the accepted hypothesis to the worn out roll hypothesis so that neither will be above the accept threshold. As MOLE has evolved, dynamic analysis has become more critical. In the earlier versions, in which the expert was required to describe the knowledge base in terms precisely understood by MOLE, dynamic analysis was only useful for finding missing knowledge and adjusting support values. Now dynamic analysis is also needed for correcting wrong guesses made during static analysis. In the earlier versions wrong guesses were made as well, but they were made by the expert, who did not understand how to map his knowledge into the types of associations understood by MOLE. When doing dynamic analysis MOLE had little basis for distinguishing between those instances where the expert knew what he was doing and those where he was guessing. By allowing the expert to be unspecific about association types when he is unsure, MOLE has some basis during dynamic analysis for knowing which relations in the network are guesses and thus reasonable candidates for reinterpretation.

4. Conclusion

MORE, MOLE's predecessor, was used to build knowledge-based systems that diagnosed computer disk faults, computer network problems, and circuit board manufacturing problems. Experts were able to use MORE to build these systems only after they had acquired an understanding of how MORE worked. In each case, the initial sessions with MORE had to be treated as training sessions. The expert had to learn to "think" like MORE. Our subsequent efforts have been directed toward not bothering the expert with unnecessary questions and enabling MOLE to treat the expert's responses in a more tentative fashion. As a result, less time is needed for the expert to familiarize himself or herself with the system.
The current version of MOLE has been used to build systems that diagnose rolling mill problems and help with Micro-Vax tuning. MOLE is currently being used to build a system for doing power plant diagnosis. In addition, we are exploring its use in non-diagnostic domains. We are planning to use MOLE to build a system that selects computer components based on a set of generic specifications.

Acknowledgements

We want to thank Damien Ehret, Gary Kahn, Sandra Marcus, and Ming Tan for helpful suggestions in the development of MOLE.

References

[Boose 84] Boose, J. Personal construct theory and the transfer of human expertise. In Proceedings of the National Conference on Artificial Intelligence. Austin, Texas, 1984.
[Buchanan 84] Buchanan, B. and E. Shortliffe. Rule-Based Expert Systems: The MYCIN Experiments of the Stanford Heuristic Programming Project. Addison-Wesley, 1984.
[Clancey 84] Clancey, W. Classification problem solving. In Proceedings of the National Conference on Artificial Intelligence. Austin, Texas, 1984.
[Clancey 85] Clancey, W. Heuristic classification. Artificial Intelligence 27, 1985.
[Davis 82] Davis, R. and D. Lenat. Knowledge-Based Systems in Artificial Intelligence. McGraw-Hill, 1982.
[Kahn 85a] Kahn, G., S. Nowlan, and J. McDermott. Strategies for knowledge acquisition. IEEE Transactions on Pattern Analysis and Machine Intelligence 7(5), 1985.
[Kahn 85b] Kahn, G., S. Nowlan, and J. McDermott. MORE: an intelligent knowledge acquisition tool. In Proceedings of the Ninth International Joint Conference on Artificial Intelligence. Los Angeles, California, 1985.
[Miller 82] Miller, R., H. Pople, and J. Myers. INTERNIST-1, an experimental computer-based diagnostic consultant for general internal medicine. New England Journal of Medicine 307, 1982.
[Pople 82] Pople, H. Heuristic methods for imposing structure on ill-structured problems. In Szolovits, P. (editor), Artificial Intelligence in Medicine.
Westview Press, 1982.
[Rich 79] Rich, E. User modeling via stereotypes. Cognitive Science 3, 1979.
[Shortliffe 76] Shortliffe, E. Computer-Based Medical Consultation: MYCIN. Elsevier, 1976.
[Szolovits 78] Szolovits, P. and R. Patil. Categorical and probabilistic reasoning in medical diagnosis. Artificial Intelligence 11, 1978.
[Weiss 78] Weiss, S., C.A. Kulikowski, S. Amarel, and A. Safir. A model-based method for computer-aided medical decision-making. Artificial Intelligence 11, 1978.
[Weiss 79] Weiss, S. and C. Kulikowski. EXPERT: a system for developing consultation models. In Proceedings of the Sixth International Joint Conference on Artificial Intelligence. Tokyo, Japan, 1979.
PROBLEM FEATURES THAT INFLUENCE THE DESIGN OF EXPERT SYSTEMS*

Paul J. Kline and Steven B. Dolins
Artificial Intelligence Laboratory
Computer Science Center
Texas Instruments Incorporated
Dallas, TX 75266

ABSTRACT

An analysis was made of a set of design guidelines for expert systems. These guidelines relate problem characteristics to appropriate AI implementation techniques. The analysis indicates there are five general problem features that are important for the proper use of a wide variety of AI implementation techniques. By being aware of these problem features, knowledge engineers improve their chances of coming up with the right design for expert systems. Awareness of these problem features should also help knowledge engineers take full advantage of new AI techniques as they emerge.

I Introduction

The designer of an expert system has to make a number of choices about implementation techniques: What knowledge representation technique should be used? What problem-solving strategy should be employed? How should uncertainty be handled? Making the right decisions greatly simplifies the development of an expert system and helps ensure its lasting usefulness. However, making the right decisions means choosing the AI implementation techniques that are most appropriate for the problem at hand. This can be difficult, as there is no readily available source of guidance about the appropriate use of AI techniques. In an effort to address this problem, Kline & Dolins (1985) present a list of 47 Probing Questions that relate problem features to AI implementation techniques. Given access to an expert in a particular problem area, a knowledge engineer should be able to use the Probing Questions to determine which of nearly 100 AI techniques and implementation strategies are best suited to the problem.

*This research was supported by the Air Force Systems Command, Rome Air Development Center, Griffiss AFB, New York 13441 under contract no. F30602-83-C-0123.
The Probing Questions were developed by 1) collecting claims about the appropriate use of AI techniques from the published literature on expert systems, 2) circulating draft questions among experienced expert systems builders for their comments and additions, and 3) revising the draft questions in response to the nine sets of comments received. In level of detail and completeness, the questions go a step beyond what can be found in Stefik et al. (1983). Knowledge engineers are encouraged to use the resulting Probing Questions to help design their expert systems. (Copies of the questions with supporting documentation are available from the authors.) However, two problems can arise in using the current set of Probing Questions to design expert systems:

- Since there are 47 Probing Questions, there is a danger of "losing sight of the forest amid all of the trees." During the course of discussions between expert and knowledge engineer, problem features will emerge that should have substantial impact on the design of the expert system. Knowledge engineers need to recognize the significance of those features when they are mentioned, and they need to explore them thoroughly with the expert to avoid misunderstandings.
- The Probing Questions are designed to verify the appropriateness of a target set of AI implementation techniques. If the problem in question cannot be solved using these techniques, then there is no obvious way to use the Probing Questions to help evaluate proposed solutions relying on techniques that are not part of the target set.

From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved.

Analysis of the Probing Questions indicates there are five issues that are important for the proper use of a wide variety of AI implementation techniques:

- When does the information needed to solve problems become available?
Information can be available at program-design time, data-input time, or problem-solution time.

- What kind of connection is there between evidence and hypotheses? There are many kinds of evidence, and different program designs are required to obtain the maximum amount of leverage from each kind.
- What counts as a solution, and how many are there likely to be? An expert system is entitled to stop and declare the problem solved under a variety of different circumstances.
- How accommodating is the program's environment? That is, how much assistance in the form of information or guidance can the program rely on without compromising its usefulness?
- How can we help ensure that the program will expend its effort wisely? It can be difficult to solve problems within the constraints imposed by computational resources.

If knowledge engineers have these problem features in mind during their discussions with experts, they improve their chances of getting the information they need to make good implementation decisions. These problem features should also be useful in evaluating solution proposals that depend on the use of new AI techniques.

II Analyzing a Problem Domain

An examination of the Probing Questions shows 23% of those questions inquire about the expected arrival time of information, 34% inquire about the connection between evidence and hypotheses, 19% inquire about the definition of solutions, 19% inquire about the degree of accommodation provided by the program's environment, and 15% inquire about good use of resources. (The same question can be counted in more than one category.) While a substantial fraction, 18%, of the Probing Questions do not raise any of the five issues, we were unable to discern significant commonalities among the remaining questions. These statistics suggest that if knowledge engineers have thoroughly explored the five issues in their problem domains, then they should be able to answer most of the Probing Questions.
In the process, they will be able to determine the potential applicability of many AI implementation techniques. The following section discusses the Probing Question that best illustrates the importance of the expected arrival time of information. This is followed by brief summaries of a few of the other Probing Questions that are part of the 23% inquiring about that general issue. In all cases, Probing Questions are labeled with the numbers they have in Kline & Dolins (1985). Subsequent sections use the same format to illustrate the other four issues.

A. When Does Information Become Available?

If the knowledge that makes it possible to solve problems is available at program-design time, then it is generally possible to build that knowledge directly into the design of an expert system. In other cases, important information only becomes available at data-input time or after some progress has been made toward finding a solution. In these cases, a design must be found to take advantage of the information as it emerges. Knowledge engineers should appreciate that whenever they identify a crucial item of information that makes it possible to solve problems in a domain, they also need to establish the expected arrival time of that information. This issue comes up in the Probing Question shown in Fig. 1, which determines whether constraint propagation techniques will be required to find the values of variables. If it is known at program-design time that the values of certain variables will always be provided as part of the input data, then forward chaining can be used to determine the values of the other variables. On the other hand, if it is necessary to wait until data-input time to discover which variables have known values, then more general constraint propagation techniques will have to be employed.
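The contrast can be made concrete with a toy local-propagation solver. The constraint network below (Ohm's law) is an invented stand-in for a real net: forward chaining would hard-wire which variables are inputs, whereas propagation solves for whichever variables happen to be unknown at data-input time.

```python
# Toy constraint propagation: repeatedly fire any constraint that has
# exactly one unknown variable, until nothing new can be derived.
def propagate(constraints, known):
    """constraints: list of (variables, solvers); solvers[v](known) -> value of v."""
    known = dict(known)
    changed = True
    while changed:
        changed = False
        for variables, solvers in constraints:
            unknown = [v for v in variables if v not in known]
            if len(unknown) == 1 and unknown[0] in solvers:
                known[unknown[0]] = solvers[unknown[0]](known)
                changed = True
    return known


# One constraint relating V, I, and R (Ohm's law), solvable for any one:
ohm = (("V", "I", "R"), {
    "V": lambda k: k["I"] * k["R"],
    "I": lambda k: k["V"] / k["R"],
    "R": lambda k: k["V"] / k["I"],
})

print(propagate([ohm], {"I": 2.0, "R": 5.0})["V"])   # solves for V -> 10.0
print(propagate([ohm], {"V": 10.0, "I": 2.0})["R"])  # same net solves for R -> 5.0
```

The same network answers both problems; a forward-chaining rule set would need a separate rule for each fixed pattern of known inputs.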
The choice between forward chaining and constraint propagation hinges on the question of exactly when information will be available about the identities of the variables with known values. Other implementation strategies for expert systems are appropriate when certain kinds of information are available at program-design time. For example (Probing Question 2.3.1), if at design time it is possible to anticipate the major areas of uncertainty and unreliability the program will face, then building redundancy into the design might help deal with the uncertainty (Buchanan & Shortliffe 1984, p. 684f). Besides program-design time and data-input time, there are cases in which crucial information does not appear until a partial solution to a problem has been obtained. For example (Probing Question 2.6.2), opportunistic search strategies wait for "islands" of certainty to emerge and then use those islands to help interpret neighboring regions of greater uncertainty (Nii, Feigenbaum, Anton, & Rockmore 1982). While the islands of certainty are crucial to solving a problem, it is impossible to say where those islands will be found until a certain amount of progress has been made toward developing a solution.

    Is it necessary to find values for a number of variables that take on numeric or boolean values?
    and Are there constraints among the variables that make it possible to use the known values of some of the variables to solve for other variables?
    and Does the identity of the variables whose values are known at the outset differ from problem to problem?

    Yes, differ from problem to problem -> Constraint Propagation
    No, same variables known at outset -> Forward-Chaining Rules

    Figure 1: Probing Question 2.2.1

B. What Kind of Connection is There Between Evidence and Hypotheses?

A wide variety of implementation strategies have been employed in expert systems in order to extract the maximum amount of leverage from evidence.
The Probing Question in Fig. 2 is looking for several different kinds of connections between evidence and hypotheses. As this question suggests, a test for detecting that a candidate is not a genuine solution can produce a positive conclusion by eliminating all but one of a set of candidates, i.e., confirmation by exclusion as in (Pople 1982, p. 130f). The negative connection between evidence and hypotheses leads to a different expert system design than is obtained when there is total reliance on positive connections. In other cases (Probing Question 2.4.9), a particular piece of evidence restricts the solution to a range of possibilities without saying anything about which of those possibilities is actually the right one; for example, Pople's "constrictors" (1977, p. 1033). This kind of connection between evidence and hypotheses leads naturally to expert systems that organize hypotheses into a hierarchy and proceed from general hypotheses (e.g., lung disease) to more specific hypotheses (e.g., emphysema). As a final example, it has been observed (Clancey 1984; Kahn 1984, p. 25f) that the nature of the connection between evidence and hypotheses influences the choice of "shallow" versus "deep" reasoning in expert systems (Probing Question 2.1.1). In some diagnosis problems, the evidence is linked directly to bottom-line conclusions. Shallow reasoning employing heuristic associations is appropriate for these problems. In other diagnosis problems, evidence can be found that also confirms intermediate steps along a causal path connecting ultimate causes to symptoms. Deep reasoning employing a model of the operative causal relationships may be appropriate for these sorts of problems.

C. What Counts as a Solution, and How Many are There Likely to Be?

Knowledge engineers and experts need to achieve a clear understanding about the circumstances that will entitle the program to stop and declare the problem is solved.
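The rule-out branch of Probing Question 2.7.3 (Fig. 2) is easy to sketch: a purely negative test still yields a positive conclusion when it eliminates every candidate but one. The disease names below are illustrative only.

```python
# Generate-and-test with a rule-out test (confirmation by exclusion).
def confirm_by_exclusion(candidates, ruled_out):
    """ruled_out(c) == True means c provably is NOT the solution."""
    survivors = [c for c in candidates if not ruled_out(c)]
    # A unique survivor is positively confirmed; otherwise undecided.
    return survivors[0] if len(survivors) == 1 else None


diseases = ["emphysema", "bronchitis", "asthma"]
# Hypothetical evidence happens to rule out everything but one candidate:
print(confirm_by_exclusion(diseases, lambda d: d != "emphysema"))  # -> emphysema
```

If two or more candidates survive, the negative test alone settles nothing, which is why the question also asks about efficient generators and pruning.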
Different stopping criteria will be appropriate in different problem domains. For example, the output of the R1 expert system (McDermott 1982) is a functionally acceptable VAX system configuration. There is no guarantee the optimal configuration is found, but R1 is quite useful nonetheless. In other problem domains a program would need to keep working until an optimal solution is found. A closely related question is the number of solutions that are expected. For example, some medical expert systems can safely assume patients will only have one of the diseases the program is capable of diagnosing (e.g., the expert system discussed in Reggia, Nau, and Wang, 1984, for determining the cause of the wasting of the muscles of the lower legs). Other medical expert systems must be prepared to deal with patients with multiple diseases. The Probing Question in Fig. 3 makes recommendations for the case where it is not reasonable to stop the diagnostic process as soon as one solution is found. Discussions of the subtractive method recommended in Probing Question 2.2.5 and examples of heuristics for judging parsimony and detecting competitors can be found in Pople (1977, p. 1032), Reggia, Nau, and Wang (1984), and Patil, Szolovits & Schwartz (1981).

    Is it possible to construct a test that can be applied to each candidate solution, so that passing the test proves the candidate is a genuine solution? (e.g., A combination is clearly the right one if it opens the safe.)
    Or Is it possible to construct a test, so that failing the test proves the candidate is not a genuine solution? (e.g., Blood tests can rule out paternity, but not establish it.)
    Or Is there only a large "gray area" of better and worse candidates to choose from?

    Rule candidates in, and there is an efficient generator -> Generate-And-Test
    Rule candidates out -> Generate-And-Test, Pruning, or Confirmation By Exclusion
    Gray area -> Scoring Functions, Group and Differentiate, Opportunistic Search, etc.

    Figure 2: Probing Question 2.7.3

There is a continuum of expert system problems that ranges from multiple solutions at one extreme, passes through a point where there is exactly one legitimate solution, and finally reaches a point at the other extreme where there are no solutions to the problem as originally stated. ISIS (Fox, 1983) provides an illustration of the "no solutions" end of this continuum. ISIS attempts to construct job-shop schedules that satisfy a number of constraints. It often turns out that there are implicit conflicts between the constraints, making it impossible to find any schedule that satisfies them all. ISIS employs a constraint relaxation scheme to define new problems that have a better chance of being successfully solved (Probing Question 2.9.2). Sensor interpretation problems will often be examples of the "one solution" point on this continuum. If we can assume there is some true state of the world that gives rise to the sensor data, then, in principle, there is only one legitimate solution. If the sensor data is sufficiently rich to uniquely determine the underlying state of the world (Probing Question 2.3.5), then an interpretation expert system can be satisfied with finding one solution (Feigenbaum 1977, p. 1025).

    Is this a diagnosis problem?
    and Would it be unwise to assume that there is only a single underlying fault because multiple faults are either too common or too serious to run the risk of a mis-diagnosis?

    Yes -> Solve a sequence of problems that "Subtract Off" previously accounted for manifestations. The system will need to use heuristic criteria to determine the most parsimonious explanation and also to distinguish between competing and complementary hypotheses.

    Figure 3: Probing Question 2.2.5

D. How Accommodating is the Program's Environment?

Expert systems provide assistance to their users, but in order to do so, the programs themselves generally need assistance in the form of information or guidance. The amount of assistance that a program can rely on without compromising its usefulness is another problem feature that influences the design of expert systems. The Probing Question in Fig. 4 is concerned with this issue.

    What is the nature of the environment that provides inputs to the program?
    1. Cooperative and knowledgeable users will provide inputs.
    2. Users are cooperative, but not always knowledgeable. That is, certain users are likely to give unreliable answers to questions posed by the system.
    3. The environment is hostile and might therefore try to mislead the program with false inputs. (e.g., Enemy ships might try to hide by emitting misleading signals or no signals at all.)
    4. Neutral environment that is a source of data, but does not try to influence the program one way or another.

    Both cooperative and knowledgeable -> Accept the information that is input as accurate and complete.
    Not always knowledgeable -> Tailor information gathering to the knowledge level of the individual user, or allow the users to indicate how certain they are that their answers are correct, or apply more consistency checks when users are less knowledgeable.
    Hostile -> Expend much effort in Consistency Checking, set up demons to look for evidence of deception, Reason Explicitly About Uncertainty, use Endorsement-Based Approaches to try to resolve uncertainties, etc.
    Neutral -> Expend moderate effort on Consistency Checking.

    Figure 4: Probing Question 2.2.6

This question suggests that one extreme of accommodation is the misleading information arising from deception in military settings, and the other extreme of accommodation is the reliable information provided by cooperative and knowledgeable users.
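The "Subtract Off" strategy of Probing Question 2.2.5 can be sketched as a greedy cover of the unexplained manifestations. The hypothesis table below is invented, and a real system would add the parsimony and competitor-detection heuristics the question calls for:

```python
# Greedy "subtract off" multi-fault diagnosis (illustrative sketch only).
def subtract_off(manifestations, explains):
    """explains: dict mapping hypothesis -> set of manifestations it covers."""
    remaining = set(manifestations)
    diagnosis = []
    while remaining:
        # Pick the hypothesis accounting for the most unexplained findings.
        best = max(explains, key=lambda h: len(explains[h] & remaining))
        covered = explains[best] & remaining
        if not covered:
            break  # nothing explains what is left
        diagnosis.append(best)
        remaining -= covered  # "subtract off" what is now accounted for
    return diagnosis, remaining

explains = {
    "worn-out-roll": {"oscillation", "too-narrow"},
    "excess-tension": {"too-narrow"},
    "sensor-fault": {"bad-reading"},
}
print(subtract_off({"oscillation", "too-narrow", "bad-reading"}, explains))
```

Each pass solves a smaller residual problem, so the loop terminates with either an empty set of manifestations or a set nothing can explain.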
However, if guidance rather than information is at issue (Probing Question 2.9.1), then an extreme case of accommodation is a division of labor between man and machine where the user makes all the critical decisions. The program might display the decision options and then trace the consequences of the user's decisions. With this kind of arrangement, the program is not capable of solving the entire problem by itself and is dependent on guidance from a very accommodating environment, i.e., a competent user to whom it can defer decisions.

E. How Can We Ensure that Effort is Expended Wisely?

Many expert systems operate in domains where a combinatorial explosion of possibilities will defeat them if they do not expend their efforts intelligently. The Probing Question in Fig. 5 illustrates a variety of approaches to deciding what to do next so problems get solved within the constraints imposed by resource limitations. This question contrasts various global strategies for directing the problem solving process. However, it is possible to make finer distinctions within some of the broad categories that Probing Question 2.9.4 treats as equivalent. For example, the choice of a general purpose search strategy also has implications for the program's ability to make good use of its resources. The data-driven reasoning provided by forward-chaining search allows the program to immediately recognize the implications of evidence that strongly suggests a particular hypothesis (Probing Question 2.6.4). This rapid appreciation of the consequences of new information is important in some problems. In other problems, the goal-driven reasoning provided by backward-chaining search leads to a better expenditure of resources. This will be the case if it is important to be sure that all inferences made could help achieve the program's current goals.
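The "Intelligent Scheduler" branch of Probing Question 2.9.4 amounts to an agenda ordered by estimated net benefit. A minimal sketch follows; the task names and the benefit/cost numbers are invented:

```python
# Agenda ordered by expected benefit minus cost (illustrative sketch).
import heapq

def run_agenda(tasks):
    """tasks: iterable of (name, est_benefit, est_cost) triples."""
    agenda = [(-(benefit - cost), name) for name, benefit, cost in tasks]
    heapq.heapify(agenda)  # highest net value pops first
    order = []
    while agenda:
        _, name = heapq.heappop(agenda)
        order.append(name)  # a real system would execute the task here,
                            # possibly pushing newly spawned subtasks
    return order

print(run_agenda([("check-sensor", 3, 1), ("run-deep-model", 9, 4),
                  ("ask-user", 5, 1)]))
# -> ['run-deep-model', 'ask-user', 'check-sensor']
```

The meta-knowledge the question mentions lives in the benefit and cost estimates; control rules, by contrast, would reorder the agenda with domain-specific predicates instead of numeric scores.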
III Analyzing an AI Technique

The Probing Questions provide guidance about the appropriate use of nearly 100 AI techniques and implementation strategies. However, there are other AI techniques not included in that set, and new techniques are being developed fairly rapidly. Since descriptions of an AI technique often do not say what problem features make that technique appropriate, it would be helpful to identify questions knowledge engineers could ask in order to make a decision for themselves. The five issues discussed in this paper provide some candidate questions.

    Is there a fixed order of subtasks that solves most problems in this domain?
    Or Are the potential lines of reasoning few enough that the program can afford to investigate them in the order that is most convenient for the reasoning strategy?
    Or Do experts routinely use their knowledge of the domain to make good choices of subproblems to work on next?
    Or Will the program need to estimate the costs and benefits expected from invoking a line of reasoning so as to best allocate computational resources among a wide range of possible lines of reasoning?

    Fixed order of tasks -> Hard-wire the flow of control, e.g., conventional programming or the Match strategy.
    At the convenience of the reasoning strategy -> General-purpose search strategies, i.e., forward-chaining, backward-chaining, etc.
    Domain knowledge determines next task -> Use Control-Rules that embody domain knowledge to reorder the Agenda, set focus tasks, or invoke rule sets.
    Estimate costs and benefits -> Devise an Intelligent Scheduler to order tasks on an Agenda according to their expected benefits. Meta-Knowledge is required about the costs and benefits associated with potential lines of reasoning.

    Figure 5: Probing Question 2.9.4
Given a new AI technique, a knowledge engineer should ask: 1) what assumptions the new technique makes about the arrival time of information, 2) what kind of connection between evidence and hypotheses that technique assumes, 3) what assumptions this technique makes about the nature of solutions, 4) how accommodating the program's environment would have to be for the technique to be useful, and 5) whether that technique helps the program expend its effort wisely. One way to estimate how useful these questions are likely to be in characterizing the use of a new AI technique is to determine how often they help characterize techniques in the current collection. It was found that these questions helped characterize the appropriate use of approximately two thirds of those techniques. As a concrete example, a case was discovered where more precision was needed in asking about the expected arrival time of information. A Probing Question that determines if means-ends analysis is an appropriate search strategy asks: Is it relatively easy to guess that a certain crucial step will be required to solve the problem? Given the previous discussion of the need to distinguish program-design time from data-input time and problem-solution time, it is now clear this question should have been phrased: Is it relatively easy to guess at program-design time that a certain crucial step will be required to solve the problem? By paying attention to the issues discussed in this paper, it should be possible to avoid this kind of mistake in future analyses of AI techniques.

IV Conclusions

Analysis of the 47 Probing Questions in Kline & Dolins (1985) indicates there are five issues that are important for the proper use of a wide variety of AI implementation techniques. By being aware of these issues, knowledge engineers improve their chances of coming up with the right design for expert systems.
Awareness of these issues should also help knowledge engineers take advantage of new AI techniques as they emerge.

ACKNOWLEDGEMENTS

We would like to thank Dr. Northrup Fowler III of Rome Air Development Center for his valuable suggestions during the course of this research. We are very grateful to the expert systems builders who were kind enough to comment on draft versions of the Probing Questions: Bruce Buchanan, Ruven Brooks, John Kunz, Penny Nii, Michael Genesereth, Bruce Porter, Robert Drazovich, Robert Neches, Tim Finin, Barbara Hayes-Roth, Casimir Kulikowski, and Jim Kornell.

REFERENCES

Buchanan, B.G. & Shortliffe, E.H. (Eds.), Rule-Based Expert Systems: The MYCIN Experiments of the Stanford Heuristic Programming Project, Reading, MA: Addison-Wesley, 1984.

Clancey, W.J. Extensions to rules for explanation and tutoring. Artificial Intelligence, 1983, 20, pp. 215-251.

Fox, M.S. Constraint-directed search: A case study of job-shop scheduling. Ph.D. dissertation, Computer Science Department, CMU, Pittsburgh, PA, 1983.

Feigenbaum, E.A. The art of artificial intelligence: I. Themes and case studies of knowledge engineering. Proceedings of IJCAI-77, pp. 1014-1049.

Kahn, G. On when diagnostic systems want to do without causal knowledge. In ECAI-84: Advances in Artificial Intelligence. T. O'Shea (Ed.), Elsevier, 1984, pp. 21-30.

Kline, P.J. & Dolins, S.B. Choosing Architectures for Expert Systems. Final Technical Report RADC-TR-85-192, October 1985, Rome Air Development Center, Griffiss AFB, NY, 13441. (Available through the National Technical Information Service or the authors.)

McDermott, J. R1: A Rule-Based Configurer of Computer Systems. Artificial Intelligence, 1982, 19, pp. 39-88.

Nii, H.P., Feigenbaum, E.A., Anton, J.J. & Rockmore, A.J. Signal-to-symbol transformation: HASP/SIAP case study. The AI Magazine, Volume III, No. 2, Spring 1982.

Patil, R.S., Szolovits, P., Schwartz, W.B. Causal understanding of patient illness in medical diagnosis.
Proceedings of IJCAI-81, pp. 893-899.

Pople, H.E., Jr. The formation of composite hypotheses in diagnostic problem solving: An exercise in synthetic reasoning. Proceedings of IJCAI-77, pp. 1030-1037.

Pople, H.E., Jr. Heuristic methods for imposing structure on ill-structured problems: The structuring of medical diagnostics. In Artificial Intelligence in Medicine. P. Szolovits (Ed.), Boulder, CO: Westview Press, 1982.

Reggia, J.A., Nau, D.S., and Wang, P.Y. Diagnostic expert systems based on a set covering method. In M.J. Coombs (Ed.), Developments in Expert Systems, New York: Academic Press, 1984.

Stefik, M., Aikins, J., Balzer, R., Benoit, J., Birnbaum, L., Hayes-Roth, F. & Sacerdoti, E. The architecture of expert systems. In F. Hayes-Roth, D.A. Waterman, & D.B. Lenat (Eds.), Building Expert Systems, Reading, MA: Addison-Wesley, 1983.
Knowledge Level Engineering: Ontological Analysis

James H. Alexander, Michael J. Freiling, Sheryl J. Shulman, Jeffrey L. Staley, Steven Rehfuss and Steven L. Messick
Computer Research Laboratory, Tektronix Laboratories

ABSTRACT

Knowledge engineering suffers from a lack of formal tools for understanding domains of interest. Current practice relies on an intuitive, informal approach for collecting expert knowledge and formulating it into a representation scheme adequate for symbolic processing. Implicit in this process, the knowledge engineer formulates a model of the domain, and creates formal data structures (knowledge base) and procedures (inference engine) to solve the task at hand. Newell (1982) has proposed that there should be a knowledge level analysis to aid the development of AI systems in general and knowledge-based expert systems in particular. This paper describes a methodology, called ontological analysis, which provides this level of analysis. The methodology consists of an analysis tool and its principles of use that result in a formal specification of the knowledge elements in a task domain.

1. Knowledge Engineering needs a methodology.

Traditionally, knowledge engineering has been a difficult process. Neophyte knowledge engineers often "don't know where to start." The difficulty in getting started is related to confusions over how to encode or classify relevant knowledge items from the task domain. Clancey (1985) provides a typical example from MYCIN:

Perhaps one of the most perplexing difficulties we encounter is distinguishing between subtype and cause, and between state and process... For example, a physician might speak of a brain-tumor as a kind of brain-mass lesion. It is certainly a kind of brain-mass, but it causes a lesion (cut); it is not a kind of lesion. Thus, the concept bundles cause with effect and location: a lesion in the brain caused by a mass of some kind is a brain-mass-lesion. (pg. 311)

This experience is familiar to any knowledge engineer. Misunderstandings about the knowledge elements in a system often pervade mature systems and cause endless problems. In response to this problem Newell (1982) has suggested that there should be a knowledge level analysis of domains which would guide knowledge-based systems development. In this paper we discuss a methodology for analyzing problem domains we call ontological analysis.

Most problems encountered in knowledge-based systems derive from ad hoc design of the knowledge structures. Often, knowledge is collected by writing rules or frames in a language-specific syntax, without a systematic consideration of the underlying structure of knowledge elements. Ontological analysis focuses attention on the elements of knowledge in their own right, independent of implementation techniques. An ontological analysis is distinctly different from knowledge representation languages in that it presents only a high-level description of a problem's knowledge structure. Ontological analysis is used to identify and construct an adequate knowledge representation for a problem.

2. Ontological Analysis.

To philosophers, ontology is the branch of metaphysics concerned with the nature of existence, and the cataloguing of existent entities (Quine, 1980). The role of ontology in AI has been noted previously (Hayes, 1985; Hobbs, 1985; McCarthy, 1980). We use the term to emphasize that a knowledge-based system is best designed by careful attention to the step-by-step composition of knowledge structures.
An ontology is a collection of abstract objects, relationships and transformations that represent the physical and cognitive entities necessary for accomplishing some task. Our experience indicates that complex ontologies are most easily constructed in a three-step process that concentrates first on the (static) physical objects and relationships, then on the (dynamic) operations that can change the task world, and finally on the (epistemic) knowledge structures that guide the selection and use of these operations.

Any useful methodology must contain both formal tools for constructing an analysis, and informal principles of practice to guide application of the formal tools. Our research has indicated that several different formal tools are useful for extracting and defining ontologies. We are developing a family of languages collectively called SPOONS (Specification Of ONtological Structure) to encompass tools based respectively on domain equations, equational logic, and semantic grammars.

2.1 Domain Equations in Ontological Analysis

The most useful and concise of these languages is SUPE-SPOONS (SUPErstructure SPOONS), which is based on the domain equations of denotational semantics (Gordon, 1979; Stoy, 1977) and algebraic specification (Guttag and Horning, 1980). Because of the rich ontologies found in most knowledge engineering problems, domain equations provide a concise and reasonably abstract characterization of the necessary knowledge structures. Furthermore, they do not encourage the knowledge engineer to get prematurely involved in details.

2.1.1 SUPE-SPOONS Syntax and Semantics

SUPE-SPOONS consists of two basic statement types:

- Domain equations: Site = Building x Campus. These statements define domains, or types of knowledge structures.*
- Domain element declarations: add-meeting : Meeting → [Meetings → Meetings]. These statements declare the type of specific domain elements.
The right-hand side of statements can be composed of one or more domains or constant elements with operators relating these elements. Four primitive domains, STRING, BOOLEAN, INTEGER and NUMBER, are always assumed to be defined. Other primitive domains can be defined by explicit enumeration of their elements, or by open assignment to some collection of atomic elements:

- Explicit enumeration: Meeting-Room-Accessory = {blackboard, screen, ...}
- Open assignment: Meeting = <atomic>

*For most purposes, it suffices to think of domains as sets. A more complex semantics is needed if domains are defined recursively (Stoy, 1977) or with multiple equations.

From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved.

Table 1: Static Ontology Fragment for IEC

Meeting = <atomic>
Project = <atomic>
Department = <atomic>
Person = <atomic>
Scheduled-Meeting = (Meeting x Person)
Meeting-Purpose = One-Time-Meeting-Purpose + Repetitive-Meeting-Purpose
One-Time-Meeting-Purpose = {discuss, plan} x Project
Repetitive-Meeting-Purpose = {staff, project} x Department
Location-Of-Meeting = [Meeting → Location-Description]
Time-Of-Meeting = [Meeting → Time-Description]
Purpose-Of-Meeting = [Meeting → Meeting-Description]
Participants-In-Meeting = [Meeting → Person-Description]
Owner-Of-Meeting = [Scheduled-Meeting → Person]

The operators in the domain equations are of five types:

- Discriminated Union: D + E. Discriminated union of two domains defines that domain composed of each member of D and E, where original domain identity is preserved.
- Cross Product: D x E. Cross product of two domains describes a domain composed of all ordered pairs whose first element is a member of domain D and second element is a member of domain E.
- Domain Mapping: D → E. Mapping of one domain onto another creates a domain consisting of all functions which map domain D onto domain E.
- Sets: 2**D. Defines the domain consisting of all collections of subsets of D.
- Ordered Sets: D*. Defines the domain of all ordered sequences of D.

2.2 Building an Ontology

We will illustrate our method by using SUPE-SPOONS to build an ontology for the task of scheduling meetings in an Intelligent Electronic Calendar (IEC; Staley, in press). The task of the IEC is to schedule meetings in a semi-automated fashion, and to support the negotiations necessary for determining who will meet when and where. A partial ontology for this task is found in Appendix A. It should be noted that the IEC is in the early stages of design, and our analysis here is for expository purposes only. A complete ontology for a knowledge-based system that has been implemented and runs can be found in Freiling et al. (1986).

2.2.1 A Static Ontology for the IEC

Ontological Analysis begins by enumerating the physical objects in the problem domain and identifying their inherent properties and relationships. At the level of the static ontology, the analysis performed is quite similar to the entity-relationship model of Chen (1976). Table 1 presents a subset of the IEC static ontology. Only those equations relating to the domain Meeting are presented. This domain consists of abstract tokens, or surrogates (Codd, 1979), each of which represents a single meeting. Another domain, Scheduled-Meeting, contains elements which indicate that a meeting has been entered on someone's calendar.
A number of mappings are also defined that identify the salient properties of meetings, such as Time-Description, Meeting-Description and Person-Description. The domain Meeting-Purpose represents various reasons why a meeting should be held, and embodies the analytical decision that one-time meetings are project oriented, while repetitive meetings revolve around organizational units or departments. This is a simplification of course, but what is important is the ease with which such decisions can be expressed. The IEC static ontology also addresses time scales, meeting room characteristics, and individual membership in projects and departments (see Appendix A for the details). In total the static ontology gives us a picture of all important elements within the domain.

Table 2: Dynamic Ontology for IEC

State = Meetings x Purposes x Required-Participations x [Meeting → Arbitrator] x [Meeting → Reviewer] x [Meeting → Meeting-Plan] x [Person → Schedule] x [Room → Schedule]
Operation = Heuristic-Operation + Algorithmic-Operation + Autonomous-Operation + {schedule-new-meeting}
Heuristic-Operation = {select-arbitrator, select-reviewer}
Algorithmic-Operation = {create-new-meeting, assimilate}
Autonomous-Operation = {signoff-or-propose, assent, arbitrate, initial-proposal}
schedule-new-meeting : (Purpose x Required-Participation) → [State → State]
create-new-meeting : (Purpose x Required-Participation) → Meeting
select-arbitrator : Meeting x Purpose → Arbitrator
select-meeting-to-act-on : State → Meeting
select-reviewer : Meeting → [State → Reviewer]
initial-proposal : [Arbitrator → Meeting-Plan]
signoff-or-propose : [Reviewer → [Old-Meeting-Plan → New-Meeting-Plan]]
arbitrate : (Old-Meeting-Plan x New-Meeting-Plan) → (Meeting-Plan x Continue)
Continue = BOOLEAN
assent : Meeting-Plan → [(Person + Room) → Schedule]
assimilate : ((Meeting x Meeting-Plan) + ((Person + Room) x Schedule) + (Meeting x Arbitrator) + (Meeting x Reviewer)) → [State → State]
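Declarations such as schedule-new-meeting in Table 2, whose result type is [State → State], read naturally as curried functions that return state transformers. A hypothetical Python sketch (the State fields are heavily simplified and the surrogate-naming scheme is invented):

```python
# Hypothetical sketch of an operator of type
#   (Purpose x Required-Participation) -> [State -> State]
from typing import Callable, FrozenSet, NamedTuple, Tuple

class State(NamedTuple):
    meetings: Tuple[str, ...]                          # pending meeting surrogates
    purposes: Tuple[Tuple[str, str], ...]              # (meeting, purpose) pairs
    required: Tuple[Tuple[str, FrozenSet[str]], ...]   # (meeting, participants)

def schedule_new_meeting(purpose: str, participants: FrozenSet[str]
                         ) -> Callable[[State], State]:
    """Given its arguments, return a transformer from State to State."""
    def transform(s: State) -> State:
        m = f"meeting-{len(s.meetings)}"   # fresh surrogate (invented scheme)
        return State(s.meetings + (m,),
                     s.purposes + ((m, purpose),),
                     s.required + ((m, participants),))
    return transform
```

Because the transformer returns a new State rather than mutating the old one, the state-space reading of problem solving carries over directly: applying an operator yields a successor state and leaves the predecessor available.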
2.2.2 A Dynamic Ontology for the IEC.

Problem solving is often characterized as search through a state space (Simon, 1981; Newell and Simon, 1972). Solution of a problem consists of selecting operators whose application transforms the current state into another. The dynamic ontology defines a problem space in terms of configurations of elements from the static ontology, and then defines problem operators as transformations built on the domain of problem states. The dynamic ontology defines which knowledge is unchanged throughout the problem solving process (i.e., organizational charts, see Appendix A) and which knowledge changes as the problem is solved (i.e., schedules and meeting plans). In the IEC, the problem state includes pending meetings, their purposes and required participations, room schedules, individual schedules, and state information recording the negotiation process. Table 2 shows a sample dynamic ontology for the IEC.

To define the dynamic ontology, we must commit to a particular model for the negotiation process. Negotiation is an extremely subtle and complex problem (Davis and Smith, 1981); for this example, we will use a very simplistic model. The task of scheduling a meeting is called schedule-meeting. Significant subtasks include choosing an arbitrator, choosing reviewers, producing proposals, reviewing them, and reserving space on calendars. These operations are divided up into algorithmic operations (no rules or heuristics required), heuristic operations (driven from a rule base to be specified in the epistemic ontology), and autonomous processes (which do not run in the IEC at all, but represent the behavior of independent actors).

Table 3: Epistemic Ontology for the IEC

Arbitrator-Selection-Rules = 2**(Purpose x Person-Description)
Meeting-Selection-Rules = 2**Meeting-Plan-Pattern
Reviewer-Selection-Rules = 2**(Meeting-Plan-Pattern x Person-Description)

Our negotiation model proceeds as follows.
First, the IEC selects an arbitrator (select-arbitrator), who proposes an initial meeting plan (initial-proposal). The meeting plan consists of proposals about time, location, and participants of the meeting, coupled with signoffs from participants. A plan is considered complete when the signoff list for each part of the plan is the same as the list of participants. Once a plan for the meeting is on the table, a reviewer is selected to look at the plan (select-reviewer) and either approves it or makes a counter-proposal (signoff-or-propose). If a counter-proposal is made the arbitrator selects either the old proposal or the counter-proposal as the standing proposal (arbitrate), then selects a new reviewer (select-reviewer). After each decision, the arbitrator also decides whether to continue the negotiations (arbitrate). If negotiations are halted, all participants are required to accept the current version of the plan (assent). The IEC divides its time between coordinating the negotiation processes for several meetings. It makes use of a heuristic process (select-meeting-to-act-on) to decide which meeting to work on at any point in time.

2.2.3 The Epistemic Ontology.

The dynamic ontology defines the operations available to the IEC for performing its task. The epistemic ontology defines knowledge structures to guide the selection and use of these operations. Table 3 shows the epistemic ontology for the IEC. The epistemic ontology usually contains two different types of knowledge structures. One knowledge type is used to select which operation should be performed. The other knowledge type controls the actual performance of operations. Because our simple negotiation model is so rigid, none of the former appear. In fact, the only knowledge structures that do appear are those needed to guide the operations classified as heuristic operations in the dynamic ontology: select-meeting-to-act-on, select-arbitrator, and select-reviewer.
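The negotiation model above is essentially a control loop, and it can be simulated compactly. In this hypothetical Python sketch only the control flow follows the text (propose, review-or-counter, arbitrate, assent); the concrete plans, reviewers, and arbitration rule are invented for illustration:

```python
# Hypothetical simulation of the paper's simple negotiation model.
def negotiate(initial_proposal, reviewers, prefer, max_rounds=10):
    """Run propose/review/arbitrate rounds and return the accepted plan.

    initial_proposal -- plan produced by the arbitrator (initial-proposal)
    reviewers        -- queue of functions plan -> counter-plan or None
    prefer           -- arbitrator's choice between old and counter plan
    """
    plan = initial_proposal
    for _ in range(max_rounds):              # arbitrator may halt negotiation
        if not reviewers:
            break
        reviewer = reviewers.pop(0)          # select-reviewer
        counter = reviewer(plan)             # signoff-or-propose
        if counter is not None:
            plan = prefer(plan, counter)     # arbitrate: pick standing plan
    return plan                              # all participants assent

# Invented example: one reviewer pushes the meeting an hour later, the
# arbitrator keeps whichever proposal is earlier.
push_later = lambda plan: {**plan, "hour": plan["hour"] + 1}
keep_earlier = lambda old, new: old if old["hour"] <= new["hour"] else new
final = negotiate({"hour": 10}, [push_later, lambda p: None], keep_earlier)
```

Here the counter-proposal of 11:00 loses to the standing 10:00 plan, the second reviewer signs off, and negotiation terminates with the original plan.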
For select-arbitrator, we assume that the purpose of the meeting dictates who a likely arbitrator should be (e.g. the ranking manager of the group calling the meeting), so that arbitrator selection rules need only associate certain purpose patterns with descriptions of likely persons. Rules to select meetings to work on and reviewers to continue negotiation depend on the current version of the meeting plan. Thus, these rules require the definition of a pattern that can successfully match meeting plans. For meeting selection rules, all that is required is that the meeting plan match the pattern -- that will make it a candidate for selection. For reviewer selection rules, the meeting pattern must be associated with some suggestion regarding a reviewer. To complete the epistemic ontology, the details of patterns and the matching process need to be defined for meeting plans and purposes. The interested reader is referred to Appendix A for some sample definitions.

3. Principles of practice

We have performed ontological analyses on a wide range of domains including a system troubleshooting Tektronix oscilloscopes (Alexander et al., 1985; Rehfuss et al., 1985; Freiling et al., 1986), MYCIN's medical knowledge (Shortliffe, 1984), design rule checking for nMOS circuitry (Lob, 1984), oscilloscope operation, and parts of the IEC (Staley, in press). Each analysis has improved our understanding of the problem domain, and the use of SUPE-SPOONS to sketch out designs has helped us clear up many confusing situations. Through this experience we have built up a series of principles for constructing ontologies with SUPE-SPOONS. A methodology is more than just a formal notation; it also requires guidelines of proper practice. We have identified seven guidelines so far.

1. Begin with physical entities, proceed to their properties and relationships from there. The most accessible elements of any task domain are usually the physical objects and relationships that must be manipulated.
Formalizing these provides an easy way to get started. The recommended procedure for extracting an ontology is to begin by analyzing a paper knowledge base (Freiling et al., 1985) that describes the task domain in English. The paper knowledge base may come from verbal protocols (Ericsson and Simon, 1984), a textbook, or a training manual. The technical vocabulary used in the paper knowledge base provides the initial elements of the static ontology. After these vocabulary elements have been defined is the time to examine the more esoteric dynamic and epistemic realms.

2. The static, dynamic and epistemic ontologies are not strict boundaries; use them loosely. The placement of ontological elements into categories has no formal effect on the semantics of the ontology; the levels only provide a conceptual framework. Arguing about whether a knowledge structure is actually dynamic or epistemic is of little value.

3. Clearly establish the distinction between objects and what they are intended to represent. Two types of object appear in an ontology: first-class and second-class objects. First-class objects (surrogates in database terminology; Codd, 1979) cannot be individuated by their properties. Instead they are individuated by identifying tokens (<atomic> in SUPE-SPOONS). In the IEC, meetings are first-class objects:

Meeting = <atomic>
Time-Of-Meeting = [Meeting → Time-Description]

Properties of first-class objects are expressed via functions that map the objects into their property values. Second-class objects refer to those elements of an ontology that represent aggregations of other elements. Second-class objects are individuated solely on the basis of common components. For example, consider the following second-class object:

Gregorian-Time-Point = Year x Month x Day x Hour x {00, 15, 30, 45}

Any two calendar dates are equal if they consist of the same year, month, day, hour and quarter-hour.
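The first-class/second-class distinction has a direct analogue in most programming languages: identity versus structural equality. A hypothetical Python sketch (the property-map style is our illustration, not a prescribed implementation):

```python
# First-class objects: bare surrogates, individuated by token identity,
# with properties held in external mappings (Time-Of-Meeting style).
class Meeting:          # Meeting = <atomic>
    pass

time_of_meeting = {}    # models [Meeting -> Time-Description]

m1, m2 = Meeting(), Meeting()
time_of_meeting[m1] = "Tuesday 10:00"
time_of_meeting[m2] = "Tuesday 10:00"
distinct = m1 is not m2          # same properties, still two meetings

# Second-class objects: individuated solely by their components, like
# Gregorian-Time-Point = Year x Month x Day x Hour x {00,15,30,45}.
p1 = (1986, 6, 11, 14, 30)
p2 = (1986, 6, 11, 14, 30)
same_point = p1 == p2            # equal components => same time point
```

Two meetings scheduled at the same time remain different meetings, but two Gregorian time points with identical components are one and the same point.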
Only when their composite attributes are identical are the elements themselves identical. The usefulness of this distinction usually does not appear until implementation, and has to do with issues of representing identity and partial knowledge about elements that are beyond the scope of this paper.

4. Understand and separate intensional and extensional entities. There are many cases in knowledge engineering where it is important to distinguish between representatives for the physical objects, and for descriptions or viewpoints of those objects. For the IEC, it is necessary to define (extensional) units of absolute time, and relate them to (intensional) descriptions of time units with respect to one calendar or another. Only then is it possible to represent the fact that descriptions like 1986 (Gregorian) and Showa 60 (Japanese) refer to the same time interval. A common way to achieve the distinction between extensional representatives of real world objects and intensional representatives of descriptions or classes of such objects is to define representatives for the extensional objects with only the bare minimum of structure. In the IEC, for instance,

Real-Time-Point = INTEGER
Real-Time-Interval = (Real-Time-Point x Real-Time-Point)

Here we define the primitive points in time as integers. We associate point 0 with 12 midnight on 1/1/1901, by the Gregorian calendar. The points are 15 minutes apart. Intervals of time can then be represented simply as a pair of points. (There is actually a bit more complexity for dealing with unbounded intervals; see Appendix A.) Intensional descriptions with respect to various calendars can be constructed as necessary from different parts of the description.
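The extensional encoding just described is concrete enough to compute with: a Real-Time-Point is the number of 15-minute intervals since midnight, 1/1/1901. A hypothetical interpretation function for Gregorian time points, sketched in Python:

```python
# Hypothetical interpretation from an intensional Gregorian time point
# to the paper's extensional Real-Time-Point: an integer counting
# 15-minute intervals from midnight on 1/1/1901 (point 0).
from datetime import datetime

ORIGIN = datetime(1901, 1, 1, 0, 0)

def interpret(year: int, month: int, day: int, hour: int, quarter: int) -> int:
    """Map (Year x Month x Day x Hour x {0,15,30,45}) to a Real-Time-Point."""
    assert quarter in (0, 15, 30, 45)
    delta = datetime(year, month, day, hour, quarter) - ORIGIN
    return int(delta.total_seconds()) // (15 * 60)
```

An interpretation function for the Japanese Imperial Reign calendar would map into the same integers, so extensional identity of descriptions reduces to equality of their images, exactly as the text suggests.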
Intensional-Time-Point = Gregorian-Time-Point + Japanese-Imperial-Reign-Time-Point
Gregorian-Time-Point = Year x Month x Day x Hour x {00, 15, 30, 45}
Japanese-Imperial-Reign-Time-Point = Era x Year x Month x Day x Hour x {00, 15, 30, 45}

Finally, intensional descriptions can be related to extensional descriptions by the use of various interpretation functions:

interpret : [Intensional-Time-Point → Real-Time-Point]

Extensional identity of descriptions of varying sorts can then be defined as equality of the image under the relevant interpretation functions.

5. Build relevant abstractions through the use of generalization and aggregation. Generalization and aggregation (Smith and Smith, 1977) are common techniques for building large knowledge structures. It is interesting to note that generalization and aggregation steps have a direct manifestation as discriminated unions and Cartesian products:

GENERALIZATION: Car = (Compact + Luxury-Car + Truck)
AGGREGATION: Car-Assembly = (Engine x Chassis x Body x Drive-Train)

An ontology may also contain many implicit generalization and aggregation relations. Even the properties of a first-class object are implicitly aggregated through the fact that some particular car defines values for each:

Car = <atomic>
Type = [Car → {Compact, Luxury-Car, Truck}]
Has-Engine = [Car → Engine]
Has-Chassis = [Car → Chassis]
Has-Body = [Car → Body]
Has-Drive-Train = [Car → Drive-Train]

Note also that generalizations are implicit in the properties of objects as well. Instances of the domain Car, for example, can be decomposed into Compacts, etc. on the basis of common images under the Type function.

6. Encode rules as simple associations, and heuristic steps as mappings between domains.
Novices at Ontological Analysis are tempted to define rules in a form like:

Gate-Recognition-Rules = [Circuit → Gate-Type]

This mapping (from an Ontological Analysis of RUBICC (Lob, 1984)) to choose a gate type from a circuit fragment really describes the heuristic task that uses rules, rather than the rules themselves. We prefer to analyze rules as simple aggregations, because this makes it easier to spot multiple uses for the same rule structure:

Gate-Recognition-Rules = 2**(Transistor-Pattern x Gate-Type)
recognize : Gate-Recognition-Rules → [Circuit → Gate-Type]
synthesize : Gate-Recognition-Rules → [Gate-Type → Circuit]

Another advantage of separating the rules from the heuristic task is that it focuses explicitly on the need to define classes of patterns and matching criteria.

Table 4: GLIB Fragment

<signal value> ::= 'HIGH' | 'LOW'
<signal> ::= 'SIGNAL-' <integer>
<atomic signal predicate> ::= <signal> IS <signal value>
<signal predicate> ::= <atomic signal predicate> | <atomic signal predicate> 'when' <atomic signal predicate>

7. Ensure the compositionality of elements. This case illustrates vividly the usefulness of our methodology. We first encountered this problem in the process of building a semantic grammar (GLIB) in order to extract the ontology of electronic instrument behavior (Freiling et al., 1984). Table 4 shows a fragment of GLIB that can generate the following atomic signal predicate:

SIGNAL-3 IS HIGH

Initially we assumed that this signal predicate would map signals into Boolean values. However, the semantics of two such statements combined with the connective when was not at all clear. If when was assumed to produce a Boolean itself, then the result would be returned by one of the 16 truth functions of two Boolean values, clearly not what we had intended.
SIGNAL-3 IS HIGH when SIGNAL-4 IS LOW

Using domain equations to analyze the problem,

Signal = [Time → Value]
Signal-Predicate = [(Signal x Signal-Value) → BOOLEAN]
                 = [([Time → Value] x Signal-Value) → BOOLEAN]

we discovered that our signal predicate as defined was dropping the temporal information and performing a global comparison with the threshold value. This problem was solved by creating a more appropriate definition for signal-predicate, which follows:

Signal-Predicate = [(Signal x Signal-Value) → [Time → BOOLEAN]]
                 = [([Time → Value] x Signal-Value) → [Time → BOOLEAN]]
when : [[[Time → BOOLEAN] x [Time → BOOLEAN]] → [Time → BOOLEAN]]

Thus, the comparison made by the signal-predicate is made at each instant of time, so that the result is not a single truth value computed from the whole signal, but a truth value for every time unit of the signal. This makes it possible for when to preserve its functional character, since the truth function (logical and) is now applied on a point by point basis. The compositional analysis of this type of problem is common to researchers familiar with the techniques of semantics and model theory (Allen, 1981). Our hope is that a language like SUPE-SPOONS can make such techniques available to practitioners as well.

4. Future Work

There are a number of weaknesses with the ontological analysis technique as currently defined. Even so, we have found the methodology useful for conceptualizing a knowledge engineering problem, and creating a forum for cogent discussion. Consequently, we actively use the ontological technique on a day to day basis. Simultaneously, we are defining the theoretical foundations of the methodology. Our goal is to create a formal mathematical system for ontological analysis of problem solving domains. Formal systems allow the creation of tools for automatically checking and organizing the resulting analysis, automating the creation of some components of the ontological systems.
We feel that SUPE-SPOONS provides a valuable tool that enables knowledge engineers to sketch out solutions to knowledge engineering problems at a fairly high level of abstraction. The limitations of domain equations prevent a premature attention to the low-level details of a domain. Eventually, however, those details do need to be addressed. We feel that this is best accomplished with a separate language, and are actively working on another member of the SPOONS family (T-SPOONS) that uses equational logic to define and constrain actual domain elements beyond simply naming their types.

In practice, we have found it hard to get consistent analyses from different knowledge engineers. Only experience will show us the proper formulation of the methodology. Presently, there is no standard concept of a virtual machine to be assumed when the analysis is being performed. Denotational semantics, for example, has implicitly assumed concepts such as stack that form the virtual machine for programming language analysis. We are working to establish a standard to serve as a basis for the methodology.

Finally, we are working to connect our work with other theoretical work on the nature of the knowledge level. Specifically, we see two connections with Clancey's (1985) recent analysis of classification problem solving. First, his notions of generalization, aggregation and heuristics have a more formal description in our formalism. Second, Clancey suggests that problem solving techniques compose to form larger knowledge-based systems. Ontological analysis can provide a means to highlight this composition process. For both of these concepts, we hope eventually to be able to build demonstrations that connect these higher level tasks with the primitive ontological elements of the problem domain.

5. Summary

We have presented a technique, ontological analysis, that has much promise as a knowledge engineering methodology.
Methodologies of this type will release the discipline from ad hoc descriptions of knowledge and provide a principled means for a knowledge engineer and expert to analyze the elements of a problem domain and communicate the analysis to others. The abstract level at which domain equations characterize the semantics of structures and procedures, not specifying too much detail, helps in this regard. The effectiveness of a technique depends critically on the formulation of more and better principles to guide its use. Such principles only come painfully with much practice. We invite other knowledge engineers to try this approach, and relate their experiences.

6. References

Alexander, J.H., M.J. Freiling, S.L. Messick & S. Rehfuss. Efficient Expert System Development through Domain-Specific Tools. Fifth International Workshop on Expert Systems and their Applications, Agence de l'Informatique, Etablissement Public National, Avignon, France, May, 1985.

Alexander, J.H. & M.J. Freiling. Smalltalk-80 Aids Troubleshooting System Development. Systems and Software, 4, 4, April, 1985.

Allen, J.F. An Interval-Based Representation of Temporal Knowledge. In Proc. IJCAI-1981, Vancouver, British Columbia, Canada, August, 1981.

Chen, P.P. The Entity-Relationship Model -- Toward a Unified View of Data. ACM Transactions on Database Systems, 1, 1, March, 1976.

Clancey, W.J. Heuristic Classification. Artificial Intelligence, 27, 289-350, 1985.

Codd, E.F. Extending the Database Relational Model to Capture More Meaning. ACM TODS, 4, 4, December 1979, 397-434.

Davis, R. & R.G. Smith. Negotiation as a Metaphor for Distributed Problem Solving. Artificial Intelligence Laboratory Memo 624, MIT, May 1981.

Ericsson, K.A. & H.A. Simon. Protocol Analysis. MIT Press; Cambridge, MA, 1984.

Freiling, M.J., J.H. Alexander, S.L. Messick, S. Rehfuss & S. Shulman. Starting a Knowledge Engineering Project - A Step-by-Step Approach. A.I. Magazine, 6, 3, Fall, 1985.
Smalltalk-80 is a registered trademark of Xerox Corporation.

Freiling, M.J. & J.H. Alexander. Diagrams and Grammars: Tools for the Mass Production of Expert Systems. First Conference on Artificial Intelligence Applications, IEEE Computer Society, Denver, Colorado, December, 1984.

Freiling, M.J., J.H. Alexander, D. Feucht & D. Stubbs. GLIB - A Language for Describing the Behavior of Electronic Devices. Applied Research Technical Report CR-84-12, April 6, 1984; Tektronix, Inc., Beaverton, OR.

Freiling, M.J., S. Rehfuss, J.H. Alexander, S.L. Messick & S. Shulman. The Ontological Structure of a Troubleshooting System for Electronic Instruments. First International Conference on Applications of Artificial Intelligence to Engineering Problems, Southampton University, U.K., April, 1986.

Gordon, M.J.C. The Denotational Description of Programming Languages. Springer Verlag; New York, NY, 1979.

Guttag, J. & J.J. Horning. Formal Specification as a Design Tool. Xerox PARC Technical Report CSL-80-1, January, 1980.

Hayes, P.J. Naive Physics I: Ontology for Liquids. In J.R. Hobbs & R.C. Moore (Eds.), Formal Theories of the Commonsense World. Ablex Publishing; Norwood, NJ, 1985.

Hobbs, J.R. Ontological Promiscuity. 23rd Annual Meeting of the ACL, Chicago, July, 1985.

Lob, C. RUBICC: A Rule-Based Expert System for VLSI Integrated Circuit Critique. Electronic Research Laboratory Memo UCB/ERL M84/80, University of California, Berkeley, 1984.

McCarthy, J. Circumscription - A Form of Non-Monotonic Reasoning. Artificial Intelligence, 13, 1980, 27-39.

Newell, A. The Knowledge Level. Artificial Intelligence, 18, 87-127, 1982.

Newell, A. & H.A. Simon. Human Problem Solving. Prentice-Hall; Englewood Cliffs, NJ, 1972.

Rehfuss, S., J.H. Alexander, M.J. Freiling, S.L. Messick & S.J. Shulman. A Troubleshooting Assistant for the Tektronix 2236 Oscilloscope. Applied Research Technical Report CR-85-34; Tektronix, Inc.; Beaverton, OR; September 25, 1985.

Simon, H.A.
The Sciences of the Artificial. The MIT Press; Cambridge, MA; 1981.

Quine, W.V.O. From a Logical Point of View. Harvard University Press; Cambridge, MA; 1980.

Shortliffe, E.H. Details of the Consultation System. In Rule-Based Expert Systems: The MYCIN Experiments of the Stanford Heuristic Programming Project, Addison-Wesley; Reading, MA, 1984.

Smith, J.M. & D.C.P. Smith. Database Abstractions: Aggregation and Generalization. ACM Transactions on Database Systems, 2:2, June, 1977.

Staley, J.L. An Intelligent Electronic Calendar: A Smalltalk-80 Application. Tekniques, in press, Information Display Group, Tektronix, Wilsonville, OR.

7. Appendix A: Ontology for IEC

Static Ontology

Person = <atomic>
Persons = 2**Person
Project = <atomic>
Department = <atomic>
Scheduled-Meeting = ( Meeting x Person )
Meeting-Room = <atomic>
Name = <string>
Group = <atomic>
Meeting-Room-Accessory = { blackboard, screen, projector }
Chair-Arrangement-Type = { conference, classroom, auditorium }
Meeting = <atomic>
Meeting-Purpose = One-Time-Meeting-Purpose + Repetitive-Meeting-Purpose
One-Time-Meeting-Purpose = { discuss, plan, review } x Project
Repetitive-Meeting-Purpose = { staff, project } x Department
Meeting-Proposal = Time-Proposal + Location-Proposal + Participant-Proposal
Time-Proposal = Time-Description
Location-Proposal = Location-Description
Participant-Proposal = Persons
Reflection = [Scheduled-Meeting + Meeting]
Person-Name = [Person + Name] = [Name + Person]
Person-Attribute = Name + [Name x Hierarchical-Link] + [{rep-of} x Group] + [{resp-rep-of} x Group] + [{head-of} x Group]
Person-Description = 2**Person-Attribute
Hierarchical-Link = { boss-of, subordinate-of }
Organization-Relation-of-Person = Hierarchical-Link
Concession-Type = { time, location, ... }
Owes-Concession-To = [(Person x Person) + Concession-Type*]
negotiating-points : [(Person x Person x Concession-Type* x Organization-Relation-of-Person) + INTEGER]
Group-Contained-By = [Group + Group]
Member-Of-Group = [Person + Group]
Project-Name = [Project + <string>]
Location-Description = (Room-Capacity) x INTEGER x { blackboard, no-board }
Room-Has = Meeting-Room + 2**Meeting-Room-Accessory
Room-Capacity = Meeting-Room + INTEGER
Chair-Arrangement-In-Room = Meeting-Room + Chair-Arrangement-Type
Building = <atomic>
Campus = <atomic>
Site = Building x Campus
At = [Meeting-Room + Site]
Quarter = { 00, 15, 30, 45 }
Hour = { 0..24 }
Date = { 1..31 }
Month = { 1..12 }
Year = { -BB .. +BB }
Cycle = { -BB' .. +BB' }
Year' = { 000, 100, ..., 900 }
Ap = { 1..13 }
Day = { 1..28 }
Identified-Time-Interval = [Real-Time-Point + INTEGER]
Calendar = [Real-Time-Interval + Calendar-Interval]
Real-Time-Interval = [Real-Time-Point x Real-Time-Point] + [Real-Time-Point x {unbounded}] + [{unbounded} x Real-Time-Point] + [{unbounded} x {unbounded}]
Event-Description = Interval-Description x Meeting-Description
Interval-Description = [{between} x Calendar-Point x Calendar-Point] + [{before} x Interval-Description] + [{after} x Interval-Description] + [{before, after, during} x Event-Description]
Calendar-Region = <atomic>
Calendar-Point = Gregorian-Point + Japanese-Point
Calendar-Interval = Calendar-Point x Calendar-Point
Point-Description = Calendar-Point + [{after} x Calendar-Point] + [{before} x Calendar-Point] + [{within} x Interval-Description]
Gregorian-Point = Year x Month x Day x Hour x Quarter
Japanese-Point = Era x Year x Month x Day x Hour x Quarter
express-as : [Calendar + [Real-Time-Point + Calendar-Point]]
interpret-as : [Calendar + [Calendar-Point + Real-Time-Point]]
Event = Scheduled-Meeting + Block-Schedule
Events = 2**Event
Assignments = [Event + Real-Time-Interval]
Schedule = Events x Assignments
Block-Schedule = { read, errand, fill-out-form } x Time-Quantum

Dynamic Ontology

Meeting-Plan = [ Meeting-Proposal + Signoffs ]
Signoffs = Persons
Arbitrator = Person
Reviewer = Person
Participant = Person
Old-Meeting-Plan = Meeting-Plan
New-Meeting-Plan = Meeting-Plan
State = Meetings x Purposes x Required-Participations x [ Meeting + Arbitrator ] x [ Meeting + Reviewer ] x [ Meeting + Meeting-Plan ] x [ Person + Schedule ] x [ Room + Schedule ]
Operation = Heuristic-Operation + Algorithmic-Operation + Autonomous-Operation + { schedule-new-meeting }
Heuristic-Operation = { select-arbitrator, select-reviewer, select-meeting-to-act-on }
Algorithmic-Operation = { create-new-meeting, reserve }
Autonomous-Operation = { signoff-or-propose, assent, arbitrate, initial-proposal }
schedule-meeting : (Purpose x Required-Participation) + [ State + State ]
create-new-meeting : (Purpose x Required-Participation) + Meeting
select-arbitrator : Meeting x Purpose + Arbitrator
select-meeting-to-act-on : State + Meeting
select-reviewer : Meeting + [ State + Reviewer ]
initial-proposal : [ Arbitrator + Meeting-Plan ]
signoff-or-propose : [ Reviewer + [ Old-Meeting-Plan + New-Meeting-Plan ] ]
arbitrate : (Old-Meeting-Plan x New-Meeting-Plan)
Continue = BOOLEAN
assent : Meeting-Plan + [ (Person + Room) + Schedule ]
assimilate : ((Meeting x Meeting-Plan) + ((Person + Room) x Schedule) + (Meeting x Arbitrator) + (Meeting x Reviewer)) + [ State + State ]

Epistemic Ontology

Arbitrator-Selection-Rules = 2**(Purpose x Person-Description)
Meeting-Selection-Rules = 2**Meeting-Plan-Pattern
Reviewer-Selection-Rules = 2**(Meeting-Plan-Pattern x Person-Description)
Meeting-Plan-Pattern = ((Time-Pattern x Signoff-Pattern) (Location-Pattern x Signoff-Pattern) (Participant-Pattern x Signoff-Pattern))
Time-Pattern = Time-Description + { anytime }
Location-Pattern = 2**Location-Description + { anywhere }
Participant-Pattern = 2**Person-Description + { anybody }
Signoff-Pattern = 2**Person-Description + { anybody } + { nobody-but-proposer }
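The domain equations above are purely set-theoretic: products, power sets and enumerations are named without committing to any data structure. As a purely illustrative aside (not part of the SPOONS formalism), a product such as Scheduled-Meeting = Meeting x Person and a power set such as Persons = 2**Person could be rendered in C as a struct and a characteristic vector:

```c
#include <string.h>

/* Illustrative only -- not part of the SPOONS formalism.
 * Three domain equations from Appendix A, rendered as C types:
 *   Person            = <atomic>          (an opaque identifier)
 *   Scheduled-Meeting = Meeting x Person  (a product)
 *   Persons           = 2**Person         (a power set)
 */
typedef int Person;      /* atomic: carries nothing but identity */
typedef int Meeting;

typedef struct {         /* product type: one component per factor */
    Meeting meeting;
    Person  person;
} ScheduledMeeting;

#define MAX_PERSONS 64
typedef struct {         /* 2**Person as a characteristic vector */
    unsigned char member[MAX_PERSONS];
} Persons;

static void persons_init(Persons *s)                { memset(s, 0, sizeof *s); }
static void persons_add(Persons *s, Person p)       { s->member[p] = 1; }
static int  persons_has(const Persons *s, Person p) { return s->member[p]; }
```

The point of the analogy is only that each equation constrains representations without choosing one; a Lisp rendering would be equally faithful.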
A Software and Hardware Environment for Developing AI Applications on Parallel Processors

R. Bisiani
Computer Science Dept.
Carnegie Mellon University
Pittsburgh, PA 15213
(412) 268-3072

Abstract

This paper describes and reports on the use of an environment, called Agora, that supports the construction of large, computationally expensive and loosely-structured systems, e.g. knowledge-based systems for speech and vision understanding. Agora can be customized to support the programming model that is most suitable for a given application. Agora has been designed explicitly to support multiple languages and highly parallel computations. Systems built with Agora can be executed on a number of general purpose and custom multiprocessor architectures.

1. Introduction

Our long-term goal is to develop a software environment that meets the need of application specialists to build and evaluate certain kinds of heterogeneous AI applications quickly and efficiently. To this effect we are developing a set of tools, methodologies and architectures called Agora (marketplace) that can be used to implement custom programming environments. The kinds of systems for which Agora is useful have these characteristics:

- they are heterogeneous: no single programming model, language or machine architecture can be used;
- they are in rapid evolution: the algorithms change often while part of the system remains constant, e.g. research systems;
- they are computationally expensive: no single processor is enough to obtain the desired performance.

Speech and vision systems are typical of this kind of AI application. In these systems, knowledge-intensive and conventional programming techniques must be integrated while observing real time constraints and preserving ease of programming. State-of-the-art AI environments solve some but not all of the problems raised by the systems we are interested in.
For example, these environments provide multiple programming models but fall short of supporting "non-AI" languages and multiprocessing. Some of these environments are also based on Lisp and are therefore more suitable (although not necessarily limited) to shared memory architectures. For example, some programming environments provide abstractions tailored to the incremental design and implementation of large systems (e.g. LOOPS [14], STROBE [16]) but have little support for parallelism. Other environments support general purpose parallel processing (e.g. QLAMBDA [11], Multilisp [13], LINDA [7]) but do not tackle incremental design (Linda) or non-shared memory computer architectures (QLAMBDA, Multilisp). ABE [10] and AF [12] are the only environments we are aware of that have goals similar to Agora's goals. ABE has, in fact, broader goals than Agora since it also supports knowledge engineering.

Agora supports heterogeneous systems by providing a virtual machine that is independent of any language, allows a number of different programming models and can be efficiently mapped into a number of different computer architectures. Rapid evolution is supported by providing incremental programming capabilities similar to those of Lisp environments.

Many individuals have contributed to the research presented in this paper; please refer to the Acknowledgements section for a list of each contribution. This research is sponsored by the Defense Advanced Research Projects Agency, DOD, through ARPA Order 5167, and monitored by the Space and Naval Warfare Systems Command under contract N00039-65-C-9163. Views and conclusions contained in this document are those of the authors and should not be interpreted as representing official policies, either expressed or implied, of the Defense Advanced Research Projects Agency or of the United States Government.
Programs that run on the parallel virtual machine can be added to the environment and share the same data with programs that were designed independently. This makes it possible to provide an unlimited set of custom environments that are tailored to the needs of a user, including environments in which parallel processing has been hidden from the end user. Finally, parallelism is strongly encouraged since systems are always specified as parallel computations even if they will be run on a single processor.

Agora is not an "environment in search of an application" but is "driven" by the requirements coming from the design and implementation of the CMU distributed speech recognition system [5]. During the past year, we designed and implemented an initial version of Agora and successfully used it to build two prototype speech-recognition systems. Our experience with this initial version of Agora convinced us that, when building parallel systems, the effort invested in obtaining a quality software environment pays off manyfold in productivity. Agora has reduced the time to assemble a complex parallel system and run it on a multiprocessor from more than six man-months to about one man-month. The main reason for this lies in the fact that the details of communication and control have been taken care of by Agora. Application research, however, calls for still greater improvement. Significant progress in evaluating parallel task decompositions in CMU's continuous speech project, for example, will ultimately require that a single person assemble and run a complete system within one day.

This paper is an introduction to some of the ideas underlying Agora and a description of the results of using Agora to build a large speech recognition system. The current structure of Agora is the outcome of the experience acquired with two designs and implementations carried out during 1985.
One of these implementations is currently used for a prototype speech recognition system that runs on a network of Perqs and MicroVAXes. This implementation will be extended to support a shared memory multiprocessor, Suns and IBM RT-PCs by the end of the second quarter of 1986.

From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved.

2. Agora's Structure

Agora's structure can be explained by using a "layered" model (see Figure 2-1), starting from the bottom.

Figure 2-1: Layered Model of Agora and its Interfaces. The figure shows speech tools (acoustic-phonetic, word hypothesizer, sentence parser) built on computational frameworks (data flow, blackboard).

- The first layer is a network of heterogeneous processors: single processors, shared memory multiprocessors, loosely-connected multiprocessors and custom hardware accelerators. The Mach operating system provides the basic software to execute computations on all these machines and Agora provides tools to map Mach abstractions into real machines. Mach is a Unix-compatible operating system (Unix is a trademark of AT&T) that runs on multiprocessors; see [2].

- The Mach layer provides three major abstractions: message passing, shared memory and threads. Message passing is the main communication mechanism: all Agora implementations can run on machines that provide message passing as the only communication mechanism. Shared memory (when available in the underlying computer system) is used to improve performance. Threads (processes that share the address space with other processes) are used to support the fast creation of new computations (a useful but not always vital characteristic).

- The parallel virtual machine layer represents the "assembly language level" of Agora. Computations are expressed as independent procedures that exchange data by using Agora's primitives and are activated by means of a pattern matching mechanism.
Computations can be programmed in either C or Common Lisp. It is in this layer that the most suitable Mach primitives are selected, the code is compiled and linked, tasks are assigned to machines, etc. Computations expressed at this level are machine independent. Although systems can be fully described at this level, the virtual machine level is best used to describe frameworks rather than to program user computations.

- The framework layer is the level at which most of the application researchers program. A framework is like a specialized environment built to interact with the user in familiar terms. The description, assembly, debugging and production run of an application system are all performed through its associated framework(s). Frameworks, as used in Agora, are very similar to ABE's frameworks; see [10].

First, application engineers program one or more "frameworks" that implement the programming environments that an application requires. A framework provides all the tools to generate and maintain a given kind of program. For example, a framework could contain virtual machine code to implement data-flow and remote-procedure-call communication mechanisms and a tool to merge user-supplied code with the existing virtual machine code. Such a framework could also contain a structured graphical editor that allows the user to deal with programs in terms of data-flow graphs. Researchers can then use frameworks to create framework instantiations, i.e. frameworks that contain user-provided code and data. Components of a framework instantiation can themselves be instantiations of some other framework. A framework instantiation can then be integrated with other framework instantiations to generate more complex frameworks.
In the speech system, for example, the word hypothesizer is described by using a framework that embodies the asynchronous control necessary to run the word hypothesizer in parallel and code to display the data processed: a user need only be familiar with the algorithms and the language in which they are written in order to be able to experiment with different word hypothesization algorithms. A word hypothesizer generated in this way can be merged with an acoustic-phonetic framework instantiation by using, for example, a data-flow framework. Tools from all the original frameworks are made available in the combined framework.

We cannot describe frameworks in detail in this paper. Currently, we have implemented a "Unix-like" framework which lets users build parallel programs that communicate by using streams. We are implementing a data-flow framework that will let a user program through a graphic editor, and a number of very specialized frameworks like the word hypothesizer framework described later.

3. The Agora Virtual Machine

The Agora virtual machine has been designed with two goals in mind: first, to be able to efficiently execute different programming models; second, to avoid restricting the possible implementations to certain computer architectures.

Agora is centered around representing data as sets of elements of the same type (elements can be regarded as variable-size records). Elements are stored in global structures called cliques. Each clique has a name that completely identifies it and a type (from a set of globally-defined types). Agora forces the user to split a computation into separate components, called Knowledge Sources (KSs), that execute concurrently. Knowledge Sources exchange data through cliques and are activated when certain patterns of elements are generated. Any KS that knows the name of a clique can perform operations on it, since Agora "registers" the name of each clique when it is created.
Since "names" are global, the only requirement for sharing a clique between KSs is that the clique be first "created" by one KS and then declared "shared" by another KS. In a speech recognition system, for example, an element could be a phoneme, word, sentence or some other meaningful intermediate representation of speech; a clique could contain all the phonemes generated by a KS; and a KS could be the function that scores phonetic hypotheses.

Element types are described within the KS code by using the syntax of the language that is used to program the KSs, with some additional information. The additional information is stripped from the source code by Agora before the code is handed to the compiler or interpreter. This means that users need not learn a different language. This is in contrast with other language-independent data transport mechanisms, like the mechanism described in [4], that use a separate language to define the data. The type declarations can contain extra information for scheduling and debugging purposes, e.g. the expected number of accesses per second, the legal values that elements can assume, display procedures, etc.

KSs can refer to sets of elements by using capabilities. Capabilities are manipulated by Agora functions and can be used to "copy" from a clique into the address space of a KS and vice versa (often no real copy will be necessary). There are two "modes" of access: Read-only and Add-element. Elements cannot be modified or deleted after they are written, but Knowledge Sources can signal that they are no longer interested in a given element clique.

Each KS contains one or more user functions. KS functions are completely independent of the system they are used in and must only be able to deal with the types of element they use. KSs are created by calling an Agora primitive; each call to this function can generate multiple instances of the same KS.
When a KS instance is created, a pattern can be specified: once the pattern is satisfied, the KS function is activated. The pattern is expressed in terms of "arrival events" (the fact that an element has entered a clique) and in terms of the values of the data stored in the elements. For example, one can specify a pattern that is matched every time a new element enters the clique, or one that only matches if a field in the element has a specific value. More than one clique can be mentioned in the same pattern, but no variables are permitted in the pattern (i.e. there is no binding). It is also possible to specify whether an event must be considered "consumed" by a successful match or whether it can be used by other patterns (this can be very useful to "demultiplex" the contents of a clique into different KSs or to guarantee mutual exclusion when needed).

A KS can contain any statement of the language that is being used and any of the Agora primitives, expressed in a way that is compatible with the language used. The Agora primitives are similar to the typical functions that an operating system would provide to start new processes (create new KSs in Agora), manipulate files (create, share and copy cliques) and schedule processes (i.e. change the amount of computation allocated to a KS).

KSs are mapped into Mach [2] primitives. In this mapping a KS can be clustered with other KSs that can benefit from sharing computer resources. For example, a cluster could contain KSs that access the same clique or KSs that should be scheduled (i.e. executed) together. Although clusters could also be implemented as Mach tasks, sharing the address space between "random" tasks can be very dangerous. Clusters can be implemented (in decreasing order of efficiency) as multiple processes that share memory or as multiple processes communicating by messages.
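As a toy illustration of cliques and arrival-event patterns, the sketch below models a single clique whose elements are integers and whose pattern fires a KS function when an element with a specific value arrives. Every name in it (Clique, clique_create, clique_add, ...) is invented for this sketch; the paper does not specify Agora's actual primitives.

```c
#include <string.h>

/* Toy model of a clique and an arrival-event pattern. All names are
 * invented for this sketch; they are not Agora's real primitives. */
#define MAX_ELEMS 128

typedef struct {
    char name[32];           /* global name, registered at creation    */
    int  elems[MAX_ELEMS];   /* elements are add-only, never modified  */
    int  count;
    void (*ks)(int);         /* KS function fired on a pattern match   */
    int  match_value;        /* pattern: element value must equal this */
} Clique;

static void clique_create(Clique *c, const char *name,
                          void (*ks)(int), int match_value) {
    memset(c, 0, sizeof *c);
    strncpy(c->name, name, sizeof c->name - 1);
    c->ks = ks;
    c->match_value = match_value;
}

/* Adding an element generates an "arrival event"; if the pattern is
 * satisfied, the KS function is activated with the new element. */
static void clique_add(Clique *c, int elem) {
    if (c->count < MAX_ELEMS)
        c->elems[c->count++] = elem;
    if (c->ks && elem == c->match_value)
        c->ks(elem);
}

/* Demonstration: count activations over two arrivals, only one of
 * which matches the pattern (value == 7). */
static int fired;
static void count_ks(int elem) { (void)elem; fired++; }

static int demo_activations(void) {
    Clique c;
    fired = 0;
    clique_create(&c, "phonemes", count_ks, 7);
    clique_add(&c, 3);   /* arrival, but no match: KS stays idle */
    clique_add(&c, 7);   /* arrival and match: KS activated once */
    return fired;
}
```

The real mechanism is richer (multiple cliques per pattern, consumed vs. shareable events, capabilities for bulk access), but the activation shape is the same: writes generate events, and events satisfying a pattern invoke KS functions.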
Multiple instances of the same KS have a very effective and simple implementation on the Mach operating system [2] as a single process (a task, in Mach terminology) in which multiple "threads" of computation implement the KSs. Currently, the composition of clusters must be fully specified by the user, but Agora maintains information on which KSs are runnable and on how much of the cluster computation power each KS should be receiving. The computation power associated with a KS can be controlled by any KS in a cluster by using Agora primitives.

In conclusion, the Agora virtual machine provides mechanisms to statically and dynamically control multiprocessing: KSs can be clustered in different ways and executed on different processor configurations. Clusters can be used to dynamically control the allocation of processors. Therefore, Agora provides all the components necessary to implement focus-of-attention policies within a system, but the responsibility of designing the control procedures remains with the user.

4. Example of a System Built with Agora

We will illustrate how Agora can be used by describing the design of the CMU speech recognition system, ANGEL [5]. ANGEL uses more computation than a single processor could provide (more than 1,000 MIPS), is programmed in two languages (currently, the system comprises more than 100,000 lines of C and Common Lisp code), uses many different styles of computation, and is in continuous evolution since more than 15 researchers are working on it.

Figure 4-1 shows the top level organization of the system. Arrows indicate transfer of both data and control. At the top level most of the components communicate using a data-flow paradigm, with the exception of a few modules that use a remote-procedure-call paradigm. Eventually, a blackboard model will be used at the top level.

Figure 4-1: Structure of the ANGEL Speech Recognition System: Top-level and Internal Structure of the Acoustic-phonetic and Word-hypothesizer Modules

In Figure 4-1, two components contain a sketch of the structure of their subcomponents. For example, the acoustic-phonetic component uses both data-flow and blackboard paradigms. It is important to note that Figure 4-1 shows the current structure and that part of the work being done on the system concerns expanding the number of modules and evaluating new ways of interconnecting them.

Frameworks are used to provide each component with the environment that is best suited to its development. At the top level there is a framework that provides a graphic editor to program data-flow and remote-procedure-call computations. Each subcomponent of the top level framework is developed using a different framework. We will use the word hypothesizer subcomponent as an example.

The word hypothesizer generates word hypotheses by applying a beam search algorithm at selected times in the utterance. The inputs of the search are the phonetic hypotheses and the vocabulary. The times are indicated by a marker, called an anchor, that is computed elsewhere. The word hypothesizer must be able to receive anchors and phonemes in any order and perform a search around each anchor after having checked that all the acoustic-phonetic hypotheses within delta time units from the anchor are available. Phonetic hypotheses arrive at unpredictable times and in any order.

The word hypothesizer requires two functions: the matching function (match()) that hypothesizes words from phonemes, and the condition function (enough_phonemes()) that checks if there are enough phonemes within a time interval from the anchor. The "editor" of the word-hypothesizer framework lets a researcher specify these two functions and binds them with the virtual machine level description.
This description, which has been programmed by an application engineer, provides the parallel implementation. The framework also contains a display function that can be altered by a user. The stylized code in Figure 4-2 describes the virtual machine level description used within the word hypothesizer framework. A speech researcher does not need to be aware of this description but only of the external specification of the two functions match() and enough_phonemes(). This description could be written in any language supported by Agora (currently C and Common Lisp).

Type declarations for cliques

KS setup
  Initialization:
    Create word and phoneme lattice cliques
    Instantiate a few copies of KS word-hypothesize,
      to be activated at the arrival of each new anchor

KS word-hypothesize
  Initialization:
    Declare the word, phoneme lattice and anchor cliques as shared
  Entry point:
    if enough_phonemes() then execute match()
    else instantiate KS wait, to be activated at each new phoneme

KS wait
  Initialization:
    Declare the word, phoneme lattice and anchor cliques as shared
  Entry point:
    if there are enough phonemes then match()

Figure 4-2: The Virtual Machine Level Implementation of the Word Hypothesizer

There are three Knowledge Sources: KS setup creates instantiations of KS word-hypothesize that are activated when "anchors" arrive. When any of these KSs receives an "anchor", it checks if there are enough phonetic hypotheses and, if so, executes match(). If not enough hypotheses are available, it creates an instantiation of KS wait that waits for all the necessary phonemes before executing match(). The KS word-hypothesize can be compiled into a task if the target machine does not have shared memory. A parameter of the KS creation procedure indicates to Agora how many copies of the KS the framework designer believes can be efficiently used.
If the machine has shared memory, then threads can be used and the parameter becomes irrelevant, since new threads can be generated without incurring too much cost. Agora can also be instructed to generate the wait KSs as threads of the same task. This is possible if KS wait and the functions it calls do not use any global data.

5. Custom Hardware for the Agora Virtual Machine

Figure 5-1: A custom hardware architecture for search

In a parallel system, the duration of an atomic computation (its granularity) must be substantially bigger than the overhead required to start and terminate it. This overhead can be very large when using (conventional) general purpose architectures. Agora shares this characteristic with many other languages and environments. For example, Multilisp [13] cannot be used to efficiently implement computations at a very small granularity level unless some special hardware is provided to speed up the implementation of futures. The effect of the overhead can be seen in the performance curves for the quicksort program presented in [13], Figure 5: the parallel version of the algorithm requires three times more processing power than the sequential version in order to run as fast (although it can run faster if more processing power is available). Therefore, hardware support tailored to the style of parallelism provided by a language or an environment is necessary. A proposal for an architecture that supports small granularity in concurrent Smalltalk can be found in [9].

The Agora virtual machine efficiently supports computations with a granularity larger than 500 ms when implemented on general purpose machines connected by a local area network. In the case of shared memory architectures, the limiting factor is the operating system overhead.
Dedicated links take care of data flow, while shared memories serve two functions: input/output and synchronized access to shared data to resolve data dependencies.

[Figure 5-2: Speedup vs. processors]

[Figure 5-3: Speedup vs. transistors]

Each processor contains: a simple hardwired control unit that executes a fixed program, functional units as required by the KS function, and register storage. An instruction can specify which data to read from local storage, what operations to perform on them, and where to store the result. A typical instruction can perform an addition or multiplication or access the shared memory. The architecture can be implemented as a fully-dedicated VLSI device or device set that communicates with the rest of the system through shared memory. Therefore, each processor can have different functional units and be wired directly to the processor(s) that follow it. A more expensive implementation with off-the-shelf components is also possible. We evaluated the architecture by decomposing the match procedure used in the word hypothesizer described in the previous section. The match function uses a "best match, beam search" algorithm, though other, more sophisticated search algorithms are already being planned. The current algorithm requires about 20 to 40 million instructions per second of speech with a 200-word vocabulary when executed in C on a VAX-11/780, depending on how much knowledge can be applied to constrain the search. A 5000-word vocabulary will require 500 to 1000 million instructions per second of speech. We have simulated the custom architecture instruction-by-instruction while it executes the beam search algorithm with real data.
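The "best match, beam search" strategy itself can be sketched generically (our code, not the match procedure: the real algorithm scores word hypotheses against speech, whereas the lattice and scores here are a toy):

```python
# Sketch of "best match, beam search": at every step keep only the
# beam_width highest-scoring partial hypotheses and discard the rest.
def beam_search(start, expand, beam_width, steps):
    """expand(hyp) -> iterable of (next_hyp, score_delta) successors."""
    beam = [(0.0, start)]
    for _ in range(steps):
        candidates = [(score + delta, nxt)
                      for score, hyp in beam
                      for nxt, delta in expand(hyp)]
        beam = sorted(candidates, reverse=True)[:beam_width]  # prune
    return beam

# Toy lattice: each step may append 'a' (+2.0) or 'b' (+1.0).
best = beam_search("", lambda h: [(h + "a", 2.0), (h + "b", 1.0)],
                   beam_width=3, steps=4)
print(best[0])  # (8.0, 'aaaa')
```

The pruning step is what makes the work per frame bounded (and pipelinable), at the cost of possibly discarding the globally best hypothesis.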
The simulation assumed performance figures typical of a CMOS VLSI design that has not been heavily optimized (and therefore might be generated semiautomatically using standard cell design techniques). See [1] for details. The presence of 1, 2, 4, or 7 physical memory blocks was simulated to evaluate memory access bottlenecks. Figure 5-2 shows our simulation results as speedup relative to the performance of a VAX-11/780. With five KSs and one physical memory we see a speedup of 170. This configuration could be implemented on a single custom chip by using a conservative fabrication technology (memories would still be implemented with off-the-shelf RAMs). With 28 KSs and seven memories, we can obtain speedups of three orders of magnitude, enough to cope with a 5,000-word vocabulary in real time. Moreover, each of the 28 processors is much smaller (in terms of hardware) than the original single processor, and all 28 processors could share the same VLSI device by using the best available fabrication technology. This fact is illustrated graphically in Figure 5-3, which plots the speedup against the transistor count of the design. The transistor count was obtained by adding the number of transistors in actual layouts of the various functional units and registers, and is a crude estimate of the amount of silicon area required in a VLSI design.

6. Conclusions

Agora has a number of characteristics that make it particularly suitable for the development of complex systems in a multiprocessor environment.
These include:

- the complexity of parallel processing can be hidden by building "reusable" custom environments that guide a user in describing, debugging and running an application without getting involved in parallel programming;
- computations can be expressed in different languages;
- the structure of a system can be modified while the system is running;
- KSs are activated by patterns computed on the data generated by other KSs;
- KSs are described in a way that allows Agora to match the available architecture and its resources with the requirements of the computation;
- custom architectures can easily be integrated with components running on general purpose systems.

Acknowledgements

The parallel virtual machine has been designed with Alessandro Forin [6, 8]. Fil Alleva, Rick Lerner and Mike Bauer have participated in the design and implementation of Agora. The custom architecture has been designed with Thomas Anantharaman and is described in detail in [1]. The project has also benefited from the constructive criticism and support of Raj Reddy, Duane Adams and Renato De Mori.

References

1. Anantharaman, T. and Bisiani, R. Custom Search Accelerators for Speech Recognition. Proceedings of the 13th International Symposium on Computer Architecture, IEEE, June 1986.
2. Baron, R., Rashid, R., Siegel, E., Tevanian, A., and Young, M. "Mach-1: An Operating System Environment for Large Scale Multiprocessor Applications". IEEE Software Special Issue (July 1985).
3. anon. The Uniform System Approach to Programming the Butterfly Parallel Processor. BBN Laboratories Inc., November 1985.
4. Birrell, A.D. and Nelson, B.J. "Implementing Remote Procedure Calls". Trans. Computer Systems 2, 1 (February 1984), 39-59.
5. Adams, D.A. and Bisiani, R. The CMU Distributed Speech Recognition System. Eleventh DARPA Strategic Systems Symposium, Naval Postgraduate School, Monterey, CA, October 1985.
6. Bisiani, R. et al. Building Parallel Speech Recognition Systems with the Agora Environment. DARPA Strategic Computing Speech Workshop, Palo Alto, CA, February 1986.
7. Carriero, N. and Gelernter, D. The S/Net's Linda Kernel. Proceedings of the Tenth ACM Symposium on Operating Systems Principles, December 1985.
8. Bisiani, R. et al. Agora, An Environment for Building Problem Solvers on Distributed Computer Systems. Proceedings of the 1985 Distributed Artificial Intelligence Workshop, Sea Ranch, California.
9. Dally, W.J. A VLSI Architecture for Concurrent Data Structures. Ph.D. Thesis, California Institute of Technology, March 1986.
10. Erman, L. et al. ABE: Architectural Overview. Proceedings of the 1985 Distributed Artificial Intelligence Workshop, Sea Ranch, California.
11. Gabriel, R.P. and McCarthy, J. Queue-based Multiprocessing Lisp. Symposium on Lisp and Functional Programming, August 1984.
12. Green, P.E. AF: A Framework for Real-time Distributed Cooperative Problem Solving. Proceedings of the 1985 Distributed Artificial Intelligence Workshop, Sea Ranch, California.
13. Halstead, R. "Multilisp: A Language for Concurrent Symbolic Computation". ACM Trans. on Programming Languages and Systems 7, 4 (October 1985), 501-538.
14. Bobrow, D.G. and Stefik, M.J. A Virtual Machine for Experiments in Knowledge Representation. Xerox Palo Alto Research Center, April 1982.
15. Seitz, C.L. "The Cosmic Cube". Comm. ACM 28, 1 (January 1985), 22-33.
16. Smith, R.G. Strobe: Support for Structured Object Knowledge Representation. Proceedings of the Eighth International Joint Conference on Artificial Intelligence, August 1983.
FRAMEWORK FOR PROTOTYPING EXPERT SYSTEMS FOR FINANCIAL APPLICATIONS

Jacob Y. Friedman and Atul Jain
Decision Support Group, Management Consulting Services
Coopers & Lybrand, New York, NY 10020

ABSTRACT

Analysis of difficulties in transferring expert systems technology into financial industry applications suggests that speed-up of the prototyping phase can significantly reduce the cost and length of the entire development process. We suggest a prototype concept that is generic for certain types of financial applications and can serve as both a catalyst for the knowledge engineering process and a laboratory for knowledge gathering, validation and maintenance. We developed software tools to provide a framework for rapid prototyping by financial professionals with basic computer training.

I INTRODUCTION

High costs, a long development cycle and the lack of experienced knowledge engineers are the main obstacles to the broad commercialization of expert system technology, particularly in the "bottom line" oriented financial industry. To understand the difficulties in building an expert system (ES), one has to consider all stages of ES development.

1. Knowledge Engineering: analysis of the problem domain; selection of knowledge representation, computational approach, and user interface; building of the first prototype.
2. Knowledge Acquisition: selection and training of experts; knowledge gathering and validation using the prototype; refinement of the first prototype or building a second one; prototype field testing.
3. Delivery System Development: preparation of specification and design documentation; software coding; system integration; testing.
4. System Deployment: preparation of user documentation; user training; installation and support; knowledge base maintenance.

The first two stages are referred to as ES prototyping and the last two as implementation. ES implementation is a process similar to conventional DP system development.
Therefore, well formulated and accepted principles and methods of software engineering can be adopted. The prototyping phase, on the other hand, often becomes a major research project, consumes more than 70% of the time and resources allocated to ES development and requires highly skilled knowledge engineers (KEs). Transformation of the ES prototyping process from a research effort into an engineering task is an important factor in reducing the length of ES development and associated costs.

II PURPOSE OF PROTOTYPE

A. Knowledge Engineering Catalyst

The difficulties in ES development start from the very first interaction between the KE and the domain expert (DE). Typical scenario: the KE is a computer scientist with knowledge of AI principles and techniques but a limited understanding of the problem domain; the DE is very knowledgeable in the application domain but has difficulty comprehending ES technology. Knowledge engineering sessions become unstructured crash courses in both AI and the application. They are complicated by differences in the individuals' backgrounds, ways of thinking and communicating, and by the inability to visualize abstract rules, notions, and ideas. The knowledge engineering process functions much more smoothly when an ES for a similar problem is available and can be used as a means for exchanging knowledge, demonstrating principles and testing ideas, and sometimes as a first prototype. Even though an existing ES is not a perfect solution for the new problem and cannot be used for complete knowledge acquisition (KA), it helps to keep the knowledge engineering process focused and significantly speeds up ES development.

From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved.

If a similar ES is not available, the building of the first prototype in the very early stages of ES development is extremely important.
Such a prototype, even with a limited useful life, can serve as a catalyst for the knowledge engineering process by providing the DE with structure and concrete objects with which to describe his reasoning process to the KE.

B. Knowledge Base Laboratory

Some developers believe that it is important to have a prototype as close to the final system as possible at very early stages of a project. Such a prototype is supposed to speed up development of the delivery system. In our opinion, at this stage of development it is difficult to select the optimal design of the final system. Furthermore, we do not believe that the final system intended for production is the most suitable for KA. The system used during the KA process needs certain capabilities that the final system will not, and these features are essential tools for the most effective development of the Knowledge Base (KB). For example, the prototype should provide the DE easy access to intermediate results to permit the necessary refinements in the problem-solving process. It also needs a greater degree of user control over steps of inference to allow testing of the KB by the DE. Testing the KB is often complicated by the fact that processing a test case requires the ES to contain knowledge about all aspects of the problem. When, in the prototype version, the problem is broken down into a series of relatively independent steps (at least as a first approximation), and human intervention is provided for steps missing in the KB, the entire KA process becomes better structured, earlier tests of completed sections of the KB can be performed, and several DEs can work on different sections simultaneously. Some advanced ES software tools provide facilities to examine and modify the KB during the test run of a final system. However, these are typically designed more for the use of the KE in debugging the system code.
The DE needs an interface that is specifically tailored for a particular application and does not significantly slow him down when routinely processing real cases.

C. Knowledge Validation Device

A good prototype can be carried into the phase of KB validation by independent experts who have otherwise not participated in the process of KA. In prototype field testing, the problem of acceptance by new users becomes very important. Users have a problem accepting a system whose operations are not completely transparent. Because of the inherent limitations of the initial KB, the user does not entrust real cases to the system and performs parallel testing only with selected cases. This slows the KB validation and limits the variety of test cases. Another limitation of on-line testing of the prototype is that it rarely handles multiple cases. In financial applications, not all input information is readily available at one time, and the user often switches back and forth between cases. To allow on-line processing, the system used for the KB field test should be absolutely transparent, imitate the problem-solving process used by human experts and provide the user with full control of this process. These features are not important, and often are undesirable, for the production system, but are essential for the prototype used in KA.

D. Knowledge Base Maintenance System

The process of KB maintenance is analogous to the process of KA and requires thorough testing of new pieces of knowledge. A good prototype can be used for testing of new knowledge even if it does not exactly correspond to the delivery system. The speedup of KB testing using the debugging facilities of the prototype easily compensates for possible delays caused by the conversion from the format used in the prototype to the format of the delivery system.

E. Methodology Carrier

The major problem in ES development is the lack of an established methodology and the absence of examples for different types of problems in various applications. By continuing to use a prototype through the entire process of ES development, and later KB maintenance, we extend its role from knowledge engineering catalyst to one of a carrier of ES development methodology. A good prototype built on the base of similar problems can be a valuable guide during ES development, especially for KEs who are not very experienced.

III PROTOTYPE FEATURES

For a prototype to be extended to cover all aspects of knowledge engineering (acquisition, testing, validation and maintenance of the KB), it must incorporate a series of features not usually associated with a prototype. The following is a description of these features, embedded in the example of a generic ES prototype used by Coopers & Lybrand to develop ESs for its financial clients. The prototype is intended for decision-making problems, which comprise a large segment of the potential ES applications in finance.

A. Domain Tailored Data Representation

From our experience, the process of knowledge engineering proceeds faster if the knowledge representation in the first version of the ES closely follows the representation used by human experts in solving the problem. This makes it easier to extract rules from experts and test their validity on real cases. Decision-making processes in financial applications are typically accompanied by and organized around massive paperwork: input information collected in forms, summary data, results of intermediate analyses, underlying assumptions and the final report. Every step of the problem solution is documented on paper. If several people participate in the process, they use paper media (folders with forms and reports) as a main channel of communication.
In the prototype, data is organized on the screen in a way that imitates the paper forms used by the human experts. The on-screen form is composed of several fields that are placeholders for data (Fig. 1). Each field has a label and a value, number or string, and is mouse-sensitive. Values of "display only" fields, or the space allocated for them, are underlined. Values of user-modifiable fields are boxed. Users can enter or modify field values by clicking on the box and then, depending on the field type, entering values from the keyboard, cycling through a pre-specified list of values, or selecting values from a pop-up menu. A form can have a bar diagram to illustrate some of the numeric field values. Bars corresponding to modifiable fields are mouse-sensitive and provide another way of entering or overwriting field values by the user. Each form can have several "schedules" - instances of the same form for several instances of an object. For example, the form in Fig. 1 is scheduled by vessels and there is an instance of this form for each instance of a vessel. The schedule menu is used to select specific schedules or to create new ones.

B. Problem Decomposition

Decomposition of the problem is implemented both by grouping contextually similar data and by breaking the decision process into a series of steps. Forms in an application are organized in folders. Mouse clicking on a folder icon in the application window (Fig. 2) is used to select a folder.

[Figure 1. Interface Screen with Form Window, Context Window, and Schedule Menu]

Clicking on the form icon in the folder window (Fig. 3) invokes display of the corresponding form. Any folder can also be selected from the context window on the form or folder screen (Fig. 1). The way forms are grouped in folders is intended to accomplish two objectives: to have forms with context-related fields in the same folder and, more important, to have forms containing the results of a major step of the problem solution in the same folder. Thus, the solution process can be represented as a step-by-step propagation of information (field values) from one group of folders to another. Visually, it is represented in the application window (Fig. 2) and context window (Fig. 1) as lines with arrowheads (connectors) connecting two folders, the origination folder and the termination folder.

C. KB Debugging Facilities

Such features as explanation facilities and indicators of data availability, source and quality are important in a production ES, but they are even more important in the prototype during KA and KB validation. An explanation related to specific data can be obtained by clicking on a field. The user can invoke an explanation of why its value is needed to solve the problem, or a justification of how the field value was inferred.
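One way to model the form/field representation just described (our Python sketch; the actual framework was built in Lisp and ART, and the class and attribute names here are our own):

```python
# Sketch of the form/field representation: a field carries a label, value,
# source and quality; a form groups fields and may hold per-object schedules.
from dataclasses import dataclass, field

@dataclass
class Field:
    label: str
    value: object = None
    modifiable: bool = True      # boxed vs. display-only
    source: str = "user"         # "user" or "inferred"
    quality: float = 1.0         # 0..1, shown as guessed/probable/certain

@dataclass
class Form:
    name: str
    fields: dict = field(default_factory=dict)
    schedules: dict = field(default_factory=dict)  # instance name -> Form

vessel = Form("Vessel Information")
vessel.fields["Vessel Type"] = Field("Vessel Type", "Sailboat/Yacht")
vessel.fields["Value"] = Field("Value", 2_500_000.00)
# One schedule per vessel instance, as in Fig. 1:
vessel.schedules["Marblehead Princess"] = Form("Vessel Information")
print(vessel.fields["Value"].value)  # 2500000.0
```

Keeping source and quality alongside each value is what later lets the interface shade boxes, fill data meters, and justify inferences per field.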
In addition to a label and a value, each field has a value source and a value quality associated with it. The shading intensity of a box displayed on the form next to the value (Fig. 1) is used to indicate one of the three levels of value quality: "guessed", "probable" or "certain". The quality of data represented by a bar diagram is indicated by the shading of the bar. The quality of the field value is internally represented by a number from 0 to 1. It is calculated using one of the methods described in [1, 2, 3], depending on the rule specification. The entire range from 0 to 1 is broken into three sub-ranges corresponding to "guessed", "probable" and "certain" for external presentation. Each form and folder icon has a data meter (Fig. 2 and Fig. 3) that indicates how many fields are filled with values and the quality composition of these values. The total height of the meter's shaded area indicates the weighted number of filled fields, while the nonshaded area represents unfilled fields. A weight factor assigned to each field is used to determine the field's contribution to the data meter read-out. The shaded area is broken into three sub-areas, with different shading indicating how many filled fields are "guessed", "probable", and "certain". The validation mechanism allows the user to easily trace changes in field values resulting from inference.

[Figure 2. Interface Screen with Application Window and Case Menu]

When a field value is asserted by a rule, it is not verified and is displayed in reverse video. The user can verify all fields of a schedule, a form or an entire folder by clicking on the corresponding verify icon. If as a result of later inference the field value is changed, the new value is displayed in reverse video.
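The quality banding and the weighted data-meter read-out described above can be sketched as follows (our reconstruction; the paper gives neither the band cut-offs nor the combination method, so the thresholds below are assumptions):

```python
# Sketch: map the internal 0..1 quality onto the three displayed bands,
# and compute the weighted data-meter read-out. Cut-offs are assumed.
def quality_band(q, cutoffs=(1/3, 2/3)):
    if q < cutoffs[0]:
        return "guessed"
    return "probable" if q < cutoffs[1] else "certain"

def data_meter(fields):
    """fields: list of (weight, filled, quality). Returns the weighted
    fraction filled and each band's weighted share of the meter."""
    total = sum(w for w, _, _ in fields)
    filled = [(w, q) for w, f, q in fields if f]
    bands = {"guessed": 0.0, "probable": 0.0, "certain": 0.0}
    for w, q in filled:
        bands[quality_band(q)] += w / total
    return sum(w for w, _ in filled) / total, bands

frac, bands = data_meter([(1, True, 0.9), (1, True, 0.5), (2, False, 0.0)])
print(frac)   # 0.5
print(bands)  # {'guessed': 0.0, 'probable': 0.25, 'certain': 0.25}
```

The per-field weights let an important field (say, a premium) move the meter more than a cosmetic one.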
By clicking on the new value, the user can examine the old value and even restore it. The user can detect changes at any level of the field hierarchy: schedule, form or folder. The corresponding verify icon is not filled if the folder, form or schedule contains nonverified values. The icon is filled if all field values are verified. The hierarchy of folders, forms and schedules provides the user with easy access to all input data and intermediate and final results. The user has the option to enter data that is supposed to be inferred, or to overwrite already-inferred data. If the value of a field that is intended to be inferred was actually entered by the user, it is displayed in italic.

D. Control and Status of Solution Process

Two components are important for step-by-step execution of a decision process:

1. Clear visual representation of the problem structure, the steps in the solution and their interaction.
2. User control of inference, with an indication of the solution steps that provide enough information to infer new data.

A flow chart composed of folder icons and connectors on the application window (Fig. 2), combined with folder data meters, provides the visual representation mentioned above. Each connector has a group of production rules associated with it. These inter-folder rules, in their premises, refer to field values from the origination folder and assert the field values of the termination folder. Each rule can be associated with several connectors terminating at the same folder. A connector is shaded in the application window if at least one rule associated with it is ready to fire. By clicking on the connector, the user can view a list of actions that will occur if these rules are fired. The user has to click on the data meter of a folder icon to allow firing of the inter-folder rules. A tree with the activated folder as a root, connectors as branches and other folders as nodes illustrates the scheme used to control inter-folder inference.
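A minimal sketch of this control scheme (ours, not the ART implementation; the folder names, rules and figures are invented): folders form a tree rooted at the activated folder, and connector rules fire generation by generation, farthest from the root first, so field values propagate toward the root.

```python
# Sketch: user-triggered inter-folder inference. Connectors deepest in the
# tree rooted at the activated folder fire first.
def infer(target, connectors, data):
    """connectors: (origin, dest, rule) triples; rule(origin_fields) returns
    a dict of new field values for dest, or None if not ready to fire."""
    # Depth of each folder in the tree rooted at the activated folder.
    depth, frontier = {target: 0}, [target]
    while frontier:
        nxt = []
        for o, d, _ in connectors:
            if d in frontier and o not in depth:
                depth[o] = depth[d] + 1
                nxt.append(o)
        frontier = nxt
    # Fire generations from the deepest connectors toward the root.
    for gen in sorted({depth[o] for o, _, _ in connectors if o in depth},
                      reverse=True):
        for o, d, rule in connectors:
            if depth.get(o) == gen:
                new = rule(data.get(o, {}))
                if new:                      # this rule is ready to fire
                    data.setdefault(d, {}).update(new)
    return data

data = {"vessels": {"boat": 16800, "crew": 10000}}
connectors = [
    ("vessels", "coverage",
     lambda f: {"premium": f["boat"] + f["crew"]} if f else None),
    ("coverage", "quote",
     lambda f: {"total": f["premium"] * 1.1} if "premium" in f else None),
]
print(round(infer("quote", connectors, data)["quote"]["total"], 2))  # 29480.0
```

Clicking a folder near the final results corresponds to calling `infer` with that folder as `target`; clicking intermediate folders walks the same propagation one stage at a time.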
The generation of connectors most removed from the root is activated first, then the next generation, and so forth, ending with the generation of connectors terminating at the activated folder. Generations with no rules ready to fire are skipped. When there are no more rules to fire at the root level, inter-folder inference is completed. Connectors that are activated at each step through the tree are highlighted in the application window.

[Figure 3. Folder Window]

The data meters of folders are also updated. They give the user a visual indication of the propagation of information through the system. The user can choose to walk through the solution process by sequentially invoking one generation of connectors at a time and reviewing intermediate results after each step. On the other hand, by clicking on the folder with the final results, the user can accomplish the entire process in one shot. In addition to the user-controlled inter-folder rules, the system has intra-folder rules that fire automatically as soon as their premises are satisfied. Intra-folder rules are local to a particular folder and usually represent a "lower level" of knowledge. These rules are invoked as a result of user entry of field values or of inter-folder inference.

E. Analysis of Alternatives

The ability to analyze a variety of alternatives in the process of selecting an optimal solution is one of the most important features of an ES. To help develop and test rules for selection of optimal alternatives, the prototype should allow the user to generate and analyze arbitrary alternatives. The alternative window (Fig. 4) is used to generate new alternatives and to switch between alternatives. Alternatives are shown as nodes of the alternative tree; the node corresponding to the current alternative is highlighted.

[Figure 4. Alternative Window]

A new alternative is generated by clicking on the node that becomes the parent alternative. The parent alternative becomes frozen and cannot be modified. The user can change any data in the new alternative and in other alternatives without children. An alternative can be selected by clicking on the corresponding leaf of the alternative tree. The user can prune the alternative tree using the mouse. In this way he can temporarily or permanently disregard (poison) certain branches, or select a subtree as the only promising set of alternatives (believe). The alternative comparison window is used to compare alternative values of any field under different assumptions. It is invoked by clicking on the field value and has the value of the field corresponding to each alternative displayed in each node. Alternative selection and pruning is also possible in the alternative comparison window.

F. Case Management System

To be able to use the prototype for on-line validation of the KB, it should handle multiple cases. The case menu in the application window (Fig. 2) is used to switch between cases and to initiate new ones. When a different case is selected, the current case, including the field values, the statuses, and the justifications, is stored on disk and can be restored later for continuation.

IV PROTOTYPE DEVELOPMENT

A. Difficulties of Prototype Development

Building a prototype, especially one with the battery of features described above, entails a serious programming effort, even with the help of commercially available ES software tools. These tools provide a knowledge representation language, an inference engine, and graphic and development utilities, but they lack a very important feature successfully used in PC spreadsheet and database packages: a complete application structure with built-in user interface and data handling facilities.
As a result, ES developers spend a lot of time programming specific features and designing the prototype structure in addition to developing the KB. A majority of ES software tools (at least those with enough representational and computational power to solve financial applications) require extensive training and a strong programming background from a KE. That limits the selection of KEs to programmers and precludes utilization of the large group of financial specialists who are successfully using PC software packages. Even experienced KEs have a problem properly incorporating the wide variety of existing ES techniques. They would benefit from a software-development environment with a library of modules tailored to specific problems, enabling them to select those most relevant to the particular application.

B. Framework for Financial Applications Prototyping

The following is a description of a software development environment (framework) for the rapid building of prototypes for financial applications that have the features presented in the previous section. The framework was implemented on a Symbolics 3640 computer, by Symbolics, Inc., using both LISP and the Automated Reasoning Tool (ART) from Inference, Inc. The Symbolics computer provides the necessary computational power, high resolution graphics, rich software library and development environment. ART provides the knowledge representation language, rule compiler, inference engine, and important features such as logical dependencies and a viewpoint mechanism for exploration of hypothetical alternatives. The framework consists of three modules: interactive application editor, preprocessor and run module. The interactive application editor is an icon editor that provides the KE with an easy way (using the mouse) to specify the entire application structure: fields, forms, schedules, folders and connectors. As a result, this module produces a file with the application specification.
The preprocessor module uses the file with the application specification and the file with the rules specification to generate application-specific ART schemata, facts and rules, and to store them as an application file. The run module is a set of ART rules and LISP functions generic to all applications. When loaded together with an application file, it generates the prototype for a given application that supports all the generic features described in the previous section.

V RESULTS

This framework has been used for prototyping three financial expert systems: Risk Manager, Marine Umbrella Liability Insurance Underwriter, and Merger and Acquisition Analyst Assistant. In all three cases, the framework structure was efficient and flexible enough to produce a working prototype in less than four weeks. All interactions with domain experts were performed by KEs with a background in the problem domain after short training in the use of the framework. The main tasks of the software engineers were to check rule syntax and consistency and to select the method for data quality propagation. The experts' acceptance of the "forms" concept was very encouraging. As a result, knowledge engineering sessions were very focused from the beginning. In a matter of days, the experts learned to use the prototype and started processing real cases to generate, refine and test rules. Several experts were used as sources of rules for each application. The clear and visible structure of the prototype made it easy to achieve consistency in rules derived from different experts. Use of multiple experts was also aided by the fact that the inference process was broken into several steps.
VI CONCLUSIONS AND RECOMMENDATIONS FOR FUTURE WORK

Successful use of the framework in prototyping several ESs for financial applications confirmed our statement that the proposed financial application structure, and tools for prototype development with built-in features facilitating gathering, validation and testing of the KB, can significantly reduce the cost and time involved in ES development. Even in the somewhat more specialized field of financial applications there are different types of problems requiring different approaches: questionnaire-driven systems, modeling in time, etc. To accommodate these problems, different frameworks should be created or new features should be added to the existing one.

ACKNOWLEDGMENTS

We would like to thank Dr. David Shpilberg from Coopers & Lybrand for his support and advice on the project and help in shaping this paper.

REFERENCES

(1) Shortliffe, E.H. and B.G. Buchanan. "A Model of Inexact Reasoning in Medicine", Mathematical Biosciences, Vol. 23, 1975, 355-356.
(2) Duda, R.O., P.E. Hart and Nils Nilsson. "Subjective Bayesian Methods for Rule-Based Inference Systems", AFIPS Conf. Proc., National Computer Conf., Vol. 45, New York, 1976, 1075-1082.
(3) Zadeh, L.A. "Syllogistic Reasoning as a Basis for Combination of Evidence in Expert Systems", Proc. IJCAI-85, Los Angeles, 1985, 417-419.
A KNOWLEDGE REPRESENTATION TECHNIQUE FOR SYSTEMS DEALING WITH HARDWARE CONFIGURATION*

Jeff Pierick
ROLM Corporation, 4900 Old Ironsides Dr., Santa Clara, CA 95054
Massachusetts Institute of Technology, Laboratory for Computer Science, 77 Massachusetts Ave., Cambridge, MA 02139

ABSTRACT

A representation language combining the attributes of both rule-based systems and frame-based systems is discussed within the context of developing systems for computer hardware configuration. It is believed that the combination of these two common approaches to knowledge representation provides many advantages over the strict use of either of the two approaches alone.

I INTRODUCTION

The domain which shall be considered in this paper is that of order processing for computer hardware. For years, this task was handled almost exclusively by teams of experts who would meticulously review each customer's order. The expert would check the customer's order by mentally reviewing a set of rules, learned over time. For the most part, the task was tedious and time consuming. However, this all changed with the advent of XCON (McDermott, 1980), the first commercially successful knowledge-based system to effectively deal with this domain, developed by John McDermott and his colleagues for use by the Digital Equipment Corporation. Since the development of this system, many similar systems have been developed for use by other companies. One such system is the BEACON system (Freeman, 1985), developed for use at Burroughs. These two systems use two very different approaches toward representing the knowledge necessary for their domain. The XCON system uses simple production rules to represent its knowledge. The BEACON system uses a semantic network augmented with the addition of simple constraints.
However, I suggest that neither of these two techniques adequately handles the complexities that arise in this domain. This paper will discuss a representation language that is tailored to the representational needs of the domain by combining the benefits of production rules and the benefits of semantic networks. While this language is demonstrated within the domain of computer hardware configuration, it should be remembered that it may be equally useful in other domains as well.

*The work described in this paper is based in part on research done at ROLM Corporation, through the coordination of the MIT VI-A Internship Program, in partial fulfillment of my SM Thesis at the Massachusetts Institute of Technology.

II PRODUCTION RULES AS A KNOWLEDGE REPRESENTATION FORMALISM

The typical approach to knowledge representation in knowledge-based systems is the use of production rules (Barr and Feigenbaum, 1981). This tendency is so prevalent that the term rule-based systems is used almost synonymously with knowledge-based systems. Certainly, the majority of the commercial expert system building tools available today make extensive use of this knowledge representation technique.

There are many reasons that the use of production rules is so desirable. The domain knowledge is represented explicitly within the rules. The knowledge is encoded in such a way that a casual observer can easily understand the intent of the knowledge. This is made possible by the fact that the rules seem natural and the fact that control structures are not freely mixed with the domain knowledge. That is, the information in the rule is declarative rather than procedural. This all leads to the fact that production rules can be used to explicitly represent the important domain knowledge. Another advantage of production rules is the fact that the knowledge is represented in a uniform manner. This makes for efficient handling of the knowledge.
A very simple, generic inference engine can be built to parse the knowledge base. Furthermore, a uniform representation language makes it easy to translate the knowledge into a stylized form of English for use in describing the reasoning of the system.

From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved.

Furthermore, production rules have a nice way of dealing with empirical knowledge; in fact, it could be said that all of the knowledge in these systems is simply empirical. This is very useful for domains for which a model cannot easily be formulated. A good example of such a domain is the diagnosis of infectious bacterial diseases, the domain within which the MYCIN (Shortliffe, 1984) system worked. Within MYCIN's domain, the expert could rarely be certain about the knowledge he was giving the system. Certainty factors were used in an attempt to deal with this problem of reasoning with uncertainty. Moreover, it would have been very difficult for the expert to give MYCIN a reliable model of the domain. The expert could only offer empirical knowledge about the domain, knowledge that he had gathered through past experiences. Thus, a rule-based knowledge representation language was well suited to this domain.

Lastly, production rules tend to be very modular (Davis et al., 1977). Each production rule represents a distinct chunk of knowledge. Each production rule is relatively independent of the other rules in the system.
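A generic engine of the kind described above can be sketched in a few lines. The rule encoding and the component names here are hypothetical illustrations, not XCON's actual working-memory format.

```python
# Minimal forward-chaining engine over a uniform rule representation:
# each rule is (conditions, conclusion); the engine fires any rule whose
# conditions all hold, until no new facts can be derived.

def forward_chain(rules, facts):
    """Repeatedly fire applicable rules until the fact set is quiescent."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in facts and all(c in facts for c in conditions):
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical configuration rules, not taken from XCON:
RULES = [
    ((("order-has", "cpu"), ("order-has", "disk-controller")),
     ("needs", "backplane-slot")),
    ((("needs", "backplane-slot"),),
     ("add-component", "expansion-box")),
]
FACTS = {("order-has", "cpu"), ("order-has", "disk-controller")}

result = forward_chain(RULES, FACTS)
```

Because every rule has the same shape, the engine never needs to know what the rules mean, which is exactly the uniformity advantage described above.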
Thus, for a moderately sized knowledge base, the knowledge engineer should be able to modify an existing rule or add an additional rule to the system without having to worry about adversely affecting any of the other rules in the system.*

However, herein lies one of the major limitations of using production rules alone as a knowledge representation technique within the domain of computer hardware configuration. This domain has a great deal of structure which is not properly represented in the form of production rules. A lot of the structure of the domain is lost when it is compiled in the form of production rules. The system is unable to exploit the structure of the domain; rather, it can only use the empirical rules which it is given. Moreover, compiling the structure of the domain in the form of rules may actually make the system difficult to modify and maintain. This is pointed out in (Koton, 1985), where she states, "One piece of knowledge may be contained in any number of rules in the system, so in order to change that knowledge it must be changed in every rule that uses it." As the size of the knowledge base increases, it becomes even more difficult to determine all of the interrelationships between the rules (Freeman, 1985).

*As the size of the knowledge base increases, this may not necessarily hold.

III FRAMES AS A KNOWLEDGE REPRESENTATION FORMALISM

The BEACON system uses a semantic network system to represent its domain knowledge. This paper will consider frame-based systems instead of semantic networks; however, the points discussed are pertinent to both formalisms.

Frames can be used to build a model of the system to be configured. A hardware component is represented as a single frame within the model. If the component is modified, the effect of the modification is isolated to the single frame which represents the component. Thus, the frame-based approach exhibits a form of modularity not inherently found in the rule-based approach. It displays a form of object oriented modularity. Using a rule-based approach, knowledge about a component may be located in several different rules. When the knowledge of that component must be modified, each relevant rule must be found and changed as appropriate. Using a frame-based approach, all of the information about a component, including its relationship to other components, is located within a single frame; thus, since the knowledge about a component is centralized, the knowledge base is much easier to modify and maintain.

Moreover, the relationships between different components in the system can be represented naturally using a frame-based approach to knowledge representation. For example, if a system has four distinct components, then the frame representing the system would have a slot listing the four components. If two systems within the domain are simply variations of an encompassing system, then there would be a slot in each of the two representative frames representing this fact. Thus, the relationships between different components in the domain can be represented explicitly using frames.

This suggests another advantage of using a frame-based approach to knowledge representation within the domain of hardware configuration. Frame-based systems typically have some form of inheritance between frames (Brachman, 1983). This is provided for through the IS-A or AKO (standing for A Kind Of) slot alluded to in the previous example. Inheritance results in an efficient means of hierarchical knowledge representation which makes the knowledge easier to modify and maintain. Since knowledge can be inherited by one frame from another, knowledge does not have to be duplicated within the knowledge base. The knowledge is located at the most logical position within the network; it is not spread throughout the knowledge base. Thus, using a model-based approach to knowledge representation can overcome many of the problems which a rule-based approach introduces.

However, just as it has been shown that the exclusive use of the rule-based approach to knowledge representation within this domain may not be appropriate, it can also be shown that the exclusive use of the frame-based approach may prove to be a hindrance. While a model can represent the structures and the legal combinations of structures within the domain, the user is left to decide which configuration is most appropriate for his needs. If the user is already an expert in the domain, this will be an easy task for him. However, there is no reason to expect that the user of the system is going to be an expert in the domain; in fact, if he were an expert, he probably would not be using the knowledge-based system in the first place. The more likely situation is that the user will know very little about the domain; thus, it is unlikely that he will be able to decide which configuration is appropriate for his needs. A useful knowledge-based system should aid the user in making this decision.

Thus, frames are a good formalism for representing the structure in a domain such as computer hardware configuration. However, it is very difficult to add any form of judgemental reasoning to such a system. The result is that we have a system which will hold a user to a set of outlined constraints, but the user must be smart enough to decide which constraints apply to his situation.

IV MAKING THE TWO WORK AS ONE

If the domain exhibits a great deal of structure, while still requiring a certain amount of judgemental reasoning (as is the case with hardware configuration), then a solution to this problem presents itself as the combination of these two paradigms.
Frames can represent the structure of the domain, while rules can represent the needed judgemental reasoning. The idea is to hold on to the structure which is naturally provided by the frame-based approach. System components are to be represented as frames with slots, the slots representing the component's relationships to other components in the domain. However, if there is a relationship in the system which is dependent on the user's needs, then production rules are used. That is, if the user has a choice as to whether a particular component is included in his configuration, then the value of that slot would be filled with an if-needed production rule demon. The production rule system would then aid the user during the consultation in deciding whether that component is appropriate for his needs.

This representation language represents the structure of the domain while still being able to deal with empirical knowledge and judgemental reasoning. It represents the natural dependencies within the domain while maintaining a modular style. Knowledge is centralized and logically sectioned. Instead of having a single, large rule base dealing with every facet of the domain, this knowledge is divided into smaller, more manageable rule bases, each with a very particular purpose.

Thus, combining the frame-based approach and the rule-based approach takes advantage of the benefits of both paradigms while overcoming their limitations. The frame-based section divides the knowledge base into logical components, while the rule bases provide the needed judgemental reasoning. Moreover, the important aspects of the domain are made explicit, which is the ultimate goal in the selection of any good representation language (Winston, 1984).

V IMPLEMENTATION DETAILS

As noted in a previous section, the language used in this project incorporates both the structure of a frame-based system and the judgemental reasoning power of a rule-based system. This is accomplished in the following manner.
A. The Frame Based System

The frames in the system are implemented in the fashion of FRL frames (Roberts and Goldstein, 1977), with the exception that every datum has a certainty factor associated with it:

(frame1
  (slot1
    (facet1 (datum1 certainty)
            (datum2 certainty))
    (facet2 (...)))
  (slot2 (...))
  (...))

The name of the frame corresponds to the name of a concept or object in the domain. A slot represents an attribute of the object. A facet signals the way in which the data associated with the facet fill the slot. A datum is the actual slot filler. The certainty factor associated with the datum (of which there is always exactly one) represents the strength with which the association between the slot and the slot filler is believed.

In the domain of hardware configuration, the frames represent typical components and typical systems. There is a frame for each distinct component within the domain. There are also frames which represent systems composed of combinations of distinct components and other systems. These frames are linked together through special slots. For example, a system frame has a slot containing the names of the frames representing its constituents.

As with traditional frame-based systems, slots can be filled with explicit values, default values, or demons. The components of a system are represented as a list of explicit values. The purpose of a particular system is given as a default value which may be overridden if the user has a different purpose in mind. The price of a component is given as a demon which would look up the price of the component in a separate, loosely-linked database.

Inheritance is also an integral part of the representation language. A frame is able to inherit values for its slots from all of its ancestors. This makes the representation efficient in terms of space requirements. It also makes intuitive sense.
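The slot layout described above, with explicit values, defaults, IF-NEEDED demons and AKO inheritance, can be sketched as follows. The frame contents (prices, purposes) are hypothetical, and certainty-factor handling is reduced to carrying (datum, certainty) pairs.

```python
# Sketch of the frame layout above: frame -> slot -> facet -> [(datum, cf)].
# Lookup tries an explicit VALUE facet, then a DEFAULT, then runs an
# IF-NEEDED demon, and finally inherits through the AKO link.

FRAMES = {
    "personal-computer": {
        "has-display": {"value": [(True, 1.0)]},
    },
    "ibm-xt": {
        "ako":     {"value": [("personal-computer", 1.0)]},
        "purpose": {"default": [("business-use", 0.8)]},
        "price":   {"if-needed": lambda: [(4995, 1.0)]},  # stand-in for a database lookup
    },
}

def lookup(frame, slot):
    facets = FRAMES[frame].get(slot, {})
    for facet in ("value", "default"):
        if facet in facets:
            return facets[facet]              # a list of (datum, certainty) pairs
    if "if-needed" in facets:
        return facets["if-needed"]()          # run the production rule demon
    ako = FRAMES[frame].get("ako")
    if ako:
        return lookup(ako["value"][0][0], slot)  # inherit from the ancestor
    return None
```

Here `lookup("ibm-xt", "has-display")` succeeds by inheritance through the AKO chain, while `lookup("ibm-xt", "price")` triggers the demon.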
For example, it seems natural to say that an IBM PC* has a display because it is a kind of personal computer; we know that all personal computers have displays.

B. Adding Rules to the Frame Based System

The production rules are found in the system in the form of production rule demons. They are identical to their procedural counterparts, except for the fact that they are declarative instead of procedural. There are IF-NEEDED demons, which are activated if a value is needed for a slot which does not have an explicit value; IF-ADDED demons, which are triggered if a new value is added to a slot; IF-REMOVED demons, which are evaluated if a value is removed from a slot; and IF-MODIFIED demons, which are processed if the certainty factor associated with a datum is modified.

A production rule demon is essentially a small rule base containing a number of rules intended to solve a very focused problem. The way in which the demon is evaluated depends on the type of demon it is. If it is an IF-NEEDED demon, the rules are evaluated in a backward chaining manner, in which only those rules which may provide a solution to the current goal are triggered. If the demon is an IF-ADDED, IF-REMOVED, or IF-MODIFIED demon, then it is evaluated in a forward chaining manner, in which every rule is triggered.

The rules themselves are based on attribute, object, value tuples. In this case, the objects are frames, the attributes are slots, and the values are slot fillers. Thus, a typical rule may appear as follows:

(RULE1
  ((SAME FRAME1 SLOT1 VALUE DATUM1)
   (KNOWN FRAME2 SLOT2 VALUE))
  ((RETURN FRAME3 SLOT3 VALUE DATUM3 95)))

These rules are very similar to the rules that were used in the MYCIN system.

VI HYPOTHETICAL SCENARIO

As an example of the use of the new representation language, I will present a system which would be used by a sales representative to aid in the configuration of an IBM personal computer for a customer. A portion of the knowledge base is shown in Figure 1.
We could imagine that a sales representative, who may have very little computer experience, could sit down with the system and place an order in the following manner. The inference engine would begin with the PC System frame, and it would apply Rule Base #1 when the IBM PC frame is activated.*

Rule Base #1 (RB1):

Rule 1.1: If the customer knows the type of system he wants, then that is definitely the proper system (1.0).
Rule 1.2: If speed is important to the customer, then an IBM AT may be the proper system (0.7).
Rule 1.3: If expandability is important to the customer, then an IBM XT may be the proper system (0.7), and an IBM AT may be the proper system (0.8).
Rule 1.4: If the customer does not want to spend a lot of money, then an IBM PC may be the proper system (0.8), and an IBM XT may be the proper system (0.6).

Figure 1. Determining the Proper Order Type: This is the portion of the knowledge base which helps the user decide which of the three system types (i.e. PC, XT, or AT) is most appropriate for the customer.

Thus, the consultation would proceed as follows:

What type of PC system would the customer like, or would you like me to help you decide which would be appropriate for him (PC, XT, AT, or Assist)?
>Assist

Since the user has asked for the assistance of the knowledge-based system in determining the proper PC system for the user, the evaluation of the rule base continues:

Is it critical that the customer's applications run as fast as possible?
>No

Does the customer plan on expanding his PC system (e.g. extra memory, a modem, communication interfaces, etc.)?
>Yes

Is price an important factor for the customer?
>Yes

The evaluation of Rule Base #1 has determined that the most appropriate system for the customer is an IBM XT.

*IBM PC, XT, and AT are trademarks of International Business Machines Corporation.
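Under the answers given in this dialogue (speed: no, expandability: yes, price: yes), the evaluation of Rule Base #1 can be sketched as below. The paper does not spell out how multiple certainty factors for the same conclusion are combined; the MYCIN-style formula cf = cf1 + cf2(1 - cf1) is assumed here, and it does reproduce the XT outcome.

```python
# Evaluating Rule Base #1 for the consultation above, combining certainty
# factors MYCIN-style. The combination formula is an assumption on our part.

def combine(cf1, cf2):
    """MYCIN-style combination of two positive certainty factors."""
    return cf1 + cf2 * (1 - cf1)

answers = {"speed": False, "expand": True, "cheap": True}

cf = {"PC": 0.0, "XT": 0.0, "AT": 0.0}
if answers["speed"]:                      # Rule 1.2
    cf["AT"] = combine(cf["AT"], 0.7)
if answers["expand"]:                     # Rule 1.3
    cf["XT"] = combine(cf["XT"], 0.7)
    cf["AT"] = combine(cf["AT"], 0.8)
if answers["cheap"]:                      # Rule 1.4
    cf["PC"] = combine(cf["PC"], 0.8)
    cf["XT"] = combine(cf["XT"], 0.6)

best = max(cf, key=cf.get)                # "XT": 0.7 + 0.6 * 0.3 = 0.88
```

The XT accumulates evidence from both Rule 1.3 (0.7) and Rule 1.4 (0.6), combining to 0.88, which beats the single 0.8 each for the PC and AT.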
Thus, it notes the fact that there should be one IBM XT frame, but no IBM PC frame nor IBM AT frame, instantiated in the consultation database. The inference engine therefore halts its evaluation of the IBM PC branch of the knowledge base and begins to evaluate the IBM XT branch. The rest of the knowledge base would be represented in a similar fashion to that suggested in Figure 1, and the consultation would continue as shown in this example.

VII CONCLUSIONS

The problem of hardware configuration and order processing has proven to be a difficult problem to deal with in any sort of automated way, for many reasons. Firstly, the amount of knowledge necessary to make an automated system work effectively in this area is usually quite large. Secondly, it is typically the case that the domain knowledge changes significantly during the life of the product. Thus, no real progress was made in this area until knowledge-based system technology was applied to the problem.

Since the first knowledge-based system was developed for this domain, many other systems have followed. Some of these systems used production rules as a knowledge representation language, while other systems used frames or semantic networks. It has been argued that the representation language used in these systems may not be appropriate for their domain. Important information about the domain, namely the structure of the target system, is lost when production rules are used. Frame-based systems deal nicely with the structure of the domain; however, they cannot adequately represent the empirical knowledge and judgemental reasoning which is often necessary for a complete system.

For these reasons, a representation language which effectively combines the benefits of a frame-based system and a rule-based system has been proposed. The frames in the language effectively represent the inherent structure in the domain of computer hardware configuration while maintaining a sense of modularity.
The rule bases describe the judgemental reasoning which is involved in the process of hardware configuration and order processing. These ideas are currently being tested through the development of two prototype systems for hardware configuration. The first system is being developed using a commercial expert system building shell. The knowledge for this system is encoded in the form of production rules. The second system is being developed concurrently using an inference engine based on the new representation language. It is believed that the second system will prove to be easier to develop and maintain than the first system. Since the knowledge will be partitioned into logical sections, it should be easier to enter the original core of knowledge and easier to maintain the knowledge base thereafter.

ACKNOWLEDGMENTS

I would like to thank Dr. Wing Kai Cheng, of ROLM Corporation, and Prof. Ramesh Patil, of the Massachusetts Institute of Technology, for their comments and suggestions on earlier drafts of this paper and their continued support during my research.

REFERENCES

Barr, A., and Feigenbaum, E. A. (Eds.) "Production Systems". In The Handbook of Artificial Intelligence, Volume 1. Los Altos, CA: William Kaufmann, Inc., 1981, pp. 190-199.

Brachman, Ronald J. "What IS-A Is and Isn't: An Analysis of Taxonomic Links in Semantic Networks". Computer, Volume 16(10), 1983, pp. 30-36.

Davis, R., Buchanan, B., and Shortliffe, E. "Production Rules as a Representation for a Knowledge-Based Consultation Program". Artificial Intelligence 8, 1977, pp. 15-45.

Freeman, Michael W. "Case Study of the BEACON Project: The Burroughs Browser/Editor and Automated Configurator". Logic-Based Systems Group, SDC, A Burroughs Co., Paoli, PA, 1985.

Koton, Phyllis. "Towards a Problem Solving System for Molecular Genetics". Technical Report MIT/LCS/TR-338, Massachusetts Institute of Technology, Cambridge, MA, 1985.

McDermott, John. "R1: An Expert in the Computer Systems Domain".
In Proceedings of the First National Conference on Artificial Intelligence, Palo Alto, California, 1980, pp. 269-271.

Roberts, B., and Goldstein, I. "The FRL Manual". MIT AI Memo 409, Massachusetts Institute of Technology, Cambridge, MA, 1977.

Shortliffe, Edward H. "Details of the Consultation System". In Rule-Based Expert Systems. Reading, MA: Addison-Wesley, 1984, pp. 78-132.

Winston, P. H. "Representing Commonsense Knowledge". In Artificial Intelligence. Reading, MA: Addison-Wesley, 1984, pp. 251-289.
AGNESS: A GENERALIZED NETWORK-BASED EXPERT SYSTEM SHELL*

James R. Slagle, Michael R. Wick, Marius O. Poliac
Computer Science Department, 136 Lind Hall, 207 Church Street S.E., University of Minnesota, Minneapolis, MN 55455

ABSTRACT

AGNESS is an expert system shell developed at the University of Minnesota. AGNESS is more general than other shells. It uses a computation network to represent expert defined rules, and can handle any well-defined inference method. The system works with non-numeric as well as numeric data, and shares constructs whenever possible to achieve increased storage efficiency. AGNESS uses a menu-driven user interface, and has several features that make the system friendly and convenient to use. The system includes eight explanation queries designed to increase the amount of information available to the user, the expert, and the knowledge engineer while remaining simple enough to be included in most of today's expert system shells. AGNESS has been tested on several domains ranging from simplified problems to real world medical analysis.

I. INTRODUCTION

The design of expert consultation systems has been a topic of growing interest in Artificial Intelligence (AI) research during the past decade. Numerous expert systems have been constructed to give consultations in a variety of application areas. Two prominent examples of this are MYCIN [1], a program for the diagnosis of infectious diseases, and PROSPECTOR [2], a mineral exploration system. The common aim of expert system technology is to represent and apply knowledge obtained from a specialist in the problem domain. Early in the history of this technology, people realized that rewriting the entire system for a new domain was both wasteful and unnecessary. Since most of the operational code can be separated from the domain specific knowledge, one program can be written to handle rule bases from several domains.
Using this idea, a system can be developed for a new domain by simply changing the rules that the operational system handles. This operational system is called a skeletal system or an expert system shell. Many expert system shells have been implemented recently with varying degrees of success. The best known of these are KEE (Knowledge Engineering Environment) from Intellicorp, LOOPS, developed at the Xerox Palo Alto Research Center, and ART (Automated Reasoning Tool) from Inference Corporation [3].

We have developed an expert system shell called AGNESS, standing for A Generalized Network-based Expert System Shell. AGNESS uses a computation network to represent the domain knowledge, as opposed to a production rule base. The network is restricted to be a directed acyclic graph. There are several advantages to using a network-based shell as opposed to a simple rule-based shell. For example, in a network-based system, there is no need for searching for the rules to be fired, as all rules are directly connected to the current node. The AGNESS network increases storage efficiency by sharing common constructs whenever possible. The network also allows for visually pleasing graphical representations of the domain knowledge, and lends itself well to data flow analysis.

PROSPECTOR is perhaps the best known network-based expert system [2]. AGNESS has been implemented as a generalization of the network scheme introduced in PROSPECTOR. In AGNESS, constructs are shared to achieve increased storage efficiency. AGNESS also has the ability to manipulate any well-defined data type, not just probabilities. For example, a value in the AGNESS system can be a string, or a frame. Also, AGNESS allows for expert-defined inference methods.

*This material is based partly on work supported by the National Science Foundation, grant no. DCR8512857, and by the Microelectronics and Information Sciences Center of the University of Minnesota.
This gives the system the ability to handle any value propagation method that the domain expert desires. AGNESS is a shell aimed at a wide variety of domain applications; however, as with all shells, some application areas are better than others. AGNESS is particularly useful in domains that involve matching entities. The matching problem is really a generalization of the classification problem, and as such, is a widely occurring problem.

[Figure 1. The dating service network. Nodes include SOCIALLY-COMPATIBLE, PHYSICALLY-COMPATIBLE, COMP-HOBBIES, COMP-JOBS, and COMP-WEIGHTS.]

We will illustrate the AGNESS constructs by means of an example taken from a simplified problem domain of a dating service. The "expert system" we will develop is intended to be used to find the probability that two people make a good match for dating each other. The computation network for this problem domain is presented in Figure 1. Each general proposition is represented by a node in the network. A value is associated with each node in a given context. For example, in Figure 1, the node AGE1 may represent the general proposition of the age of a person. Given a context such as (Steve), the node, together with the context, have an associated value, say 24. This is the specific instance of the general proposition AGE1 representing the fact that Steve's age is 24. In AGNESS, the value associated with a node and context is not required to be a numerical value, but may come from any well-defined data type. For instance, the value associated with the node HOBBY1 and the context (Steve) may be the string "computers", representing the fact that Steve's favorite hobby is computers. The triple made up of the node, the context and the value is called a datum. The nodes in the network are connected by links called edges, representing the possible dependency of one datum's value on that of another.
For example, the nodes AGE1 and AGE2 are linked (connected by an edge) to the node COMP-AGES, representing the relationship between the ages of two people and the probability that the two people have compatible ages. The nodes connected by the edges are called the antecedents (AGE1, AGE2) and the consequent (COMP-AGES) to emphasize their inferential relationship. Associated with each node is a function that takes the value of the antecedent nodes and generates the value of the consequent node. This function is called an inference method, corresponding to its function of inferring the consequent value from the antecedent values. For example, the node COMP-AGES uses an expert defined inference method that takes the values of the two antecedent nodes AGE1 and AGE2 and computes a value for COMP-AGES. Given the context of (Steve) for AGE1 and (Cindy) for AGE2, this would correspond to taking Steve's age and Cindy's age, computing the probability that their ages are compatible, and assigning this probability to the node COMP-AGES in the context (Steve,Cindy). A node can be linked to an arbitrary number of other nodes, and it may have an arbitrary number of nodes linked to it.

It is important to realize that a node and the value associated with that node (in a given context) are separate entities. The node represents a general proposition, whereas the value, stored in a separate database, represents a given instance of a general proposition that occurs during the problem solving process. Domain specific knowledge is relatively fixed, and thus is represented directly in the computation network. User supplied and problem specific knowledge is more volatile, and thus is stored in a separate database.

AGNESS has been implemented on a LISP workstation and uses a menu-driven interface. The system operates in several modes and provides a variety of facilities, including explanation.
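A fragment of such a computation network, with node values held in a separate database keyed by (node, context), might be sketched as follows. The age compatibility formula is a hypothetical stand-in for an expert-defined inference method, not AGNESS's actual one.

```python
# Sketch of a computation network fragment: each node names its antecedents
# and an inference method that derives its value from theirs. Node values
# live in a separate database keyed by (node, context), mirroring the
# node/datum separation described above.

NETWORK = {
    "AGE1":      {"antecedents": [], "method": None},
    "AGE2":      {"antecedents": [], "method": None},
    "COMP-AGES": {"antecedents": ["AGE1", "AGE2"],
                  # Hypothetical compatibility measure: 1.0 for equal ages,
                  # falling off linearly over a 20-year difference.
                  "method": lambda a1, a2: max(0.0, 1.0 - abs(a1 - a2) / 20.0)},
}

def evaluate(node, database, context):
    """Return the datum value for (node, context), deriving it if necessary."""
    if (node, context) in database:
        return database[(node, context)]
    spec = NETWORK[node]
    args = [evaluate(a, database, context) for a in spec["antecedents"]]
    value = spec["method"](*args)
    database[(node, context)] = value     # cache the derived datum
    return value

db = {("AGE1", ("Steve", "Cindy")): 24, ("AGE2", ("Steve", "Cindy")): 22}
p = evaluate("COMP-AGES", db, ("Steve", "Cindy"))   # 1 - 2/20 = 0.9
```

For simplicity this sketch uses one shared context per evaluation, whereas AGNESS associates each antecedent with its own context, e.g. (Steve) for AGE1.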
AGNESS has also been tested on various problem domains, ranging from a simplified expert system on wine to the serial evaluation of ECG exercise tests [4], and has proven to be elegant and powerful.

II. BASIC TERMINOLOGY

A. An Object

The basic element that AGNESS manipulates is called an object. An object represents a primitive element to which information may apply. Typically, an object represents a single real-world entity or a group of entities that work together. For example, in our dating service, an object is a given person, such as Steve or Cindy.

B. An Object Type

Objects are grouped into sets referred to as object types, such as male and female. AGNESS supports basic object types, defined by enumerating their member objects, and derived object types, defined by applying the set-theoretic union, intersection, and difference operators to other object types.

The basic object types are organized into a structure called the object type lattice, representing a partial ordering based on set inclusion. Figure 2 shows the object type lattice for the dating service world.

[Figure 2. The object type lattice]

Placement of objects on an object type lattice naturally allows for the representation of fragmentary knowledge. For example, AGNESS will reason about Steve as both a person and as a male. Rules and elementary propositions in a computation network can be made as general as possible to avoid duplication, without diluting the power of the system to reason about specifics.

C. A Datum Function

A datum function is a mapping from objects to information about those objects. The English meaning of a datum function is described using a list of words and integers. For example, the datum function INTERESTED-DF has the phrase (The probability that <1> is interested in <2>). Each bracketed number in this list, called a parameter, corresponds to an element of an ordered list of objects called a context.
The datum function may be instantiated by substituting the elements of the context for the corresponding parameters, resulting in a concrete phrase about specific objects. Instantiating the above datum function in the context (Steve,Cindy) yields the phrase "The probability that Steve is interested in Cindy". The use of a datum function enables AGNESS to represent the general antecedent-consequent relationship between un-instantiated concepts instead of the specific relationship between concrete phrases.

D. A Domain Constraint

We have seen how a datum function may be instantiated to yield a phrase. It is important to prevent instantiations that yield meaningless phrases such as "The probability that Steve is interested in Jeff" (assuming a heterosexual dating service). The set of permissible contexts of a datum function is specified using domain constraints. A domain constraint is a list of object types that represents the Cartesian product of those types. For example, the above datum function may have the domain constraint (male,female) designating that the datum function may be instantiated in any context that contains a male in the first position and a female in the second. Thus a context is a member of the set represented by a domain constraint when each object in the context belongs to the corresponding type in the domain constraint. We say in this case that the context matches the domain constraint. A datum function will be instantiated only in contexts that match one of its domain constraints. A datum function may have several domain constraints, allowing (The probability that <1> is interested in <2>) to be instantiated in any context matching either the domain constraint (male,female) or the domain constraint (female,male). Multiple domain constraints may be used with the same datum function to divide the domain of the datum function into disjoint parts.
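Instantiation and constraint matching as described above can be sketched as follows. The helper names (`instantiate`, `matches`) and the object-type sets are invented for illustration; only the phrase and constraint formats follow the paper's examples.

```python
import re

# Hypothetical sketch of datum-function instantiation and domain
# constraint matching, following the paper's dating-service example.
def instantiate(phrase, context):
    """Substitute context objects for <n> parameters in a phrase."""
    return re.sub(r"<(\d+)>", lambda m: context[int(m.group(1)) - 1], phrase)

def matches(context, constraint, object_types):
    """A context matches a domain constraint when each object belongs
    to the corresponding object type."""
    return (len(context) == len(constraint) and
            all(obj in object_types[t] for obj, t in zip(context, constraint)))

object_types = {"male": {"Steve", "Jeff"}, "female": {"Cindy", "Ann"}}
phrase = "The probability that <1> is interested in <2>"
constraints = [("male", "female"), ("female", "male")]

ctx = ("Steve", "Cindy")
if any(matches(ctx, c, object_types) for c in constraints):
    print(instantiate(phrase, ctx))
# -> The probability that Steve is interested in Cindy
```

A context like (Steve, Jeff) matches neither constraint, so the meaningless phrase is never generated.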
This division is useful when the value of the datum is computed differently when instantiated in contexts from different parts of the domain. For instance, the datum function (The probability that <1> is interested in <2>), specifying the probability that one person is interested in dating another person, may be instantiated for any male/female or female/male pair. However, if we allowed the instantiation for any person/course, we would get a completely different idea, namely the probability of a person being interested in some particular course. Obviously, this datum would be derived in a completely different manner than would the earlier datum. Domain constraints also enable AGNESS to generate the possible contexts for a node, an important consideration when inference is performed (see section 4).

III. KNOWLEDGE REPRESENTATION - THE NODE

In AGNESS, knowledge is represented in the form of a computation network and a database. The network is built from the rules supplied by the domain expert and the database is built from knowledge obtained during the run of the expert system and from the default information. This section describes, in detail, the design and implementation of the AGNESS network and illustrates the ideas with the dating service example. The main element of the network is the node. A node in AGNESS corresponds to a datum function with domain constraints. That is, a node represents a proposition and the domain in which that proposition is valid. A node is defined as the following 5-tuple: <datum function, constraint-default list, antecedent edges, consequent edges, inference method>. There are two types of nodes that are of special interest. First, a node with no consequent is called a top node and represents a high level topic that is of interest to the system. For example, the top node in the dating service example is GOOD-MATCH representing the probability that two people are a good match for dating.
A second special type of node is a node with no antecedents, called a bottom node. A bottom node represents a topic that the system has no way of inferring from other information, and thus has no associated inference method. An example of a bottom node is AGE1 or HOBBY2. The value of such a node will either be the default value or a value supplied by the user.

A. The Datum Function of a Node

A datum function is associated with each node and is a mapping from objects to information about those objects. It is defined as the following 5-tuple: <arity, phrase, askable, codomain-constraint, self-merit>. An argument list for a datum function is a list of objects called a context, and the result of instantiating the datum function in a context is called the value. Together a node (with its datum function), a context, and a value are called a datum. In this paper we will use data as the plural of datum. Each entry in the AGNESS database is stored as a datum, and retrieved using the node and context as keys. The arity of the datum function is the number of formal parameters. This number is used during the generation phase (called phase I) of the propagation process. The phrase of a datum function is a list of bracketed numbers and text such as (probability that <1> is interested in <2>). The phrase represents the English meaning of the datum function. The instantiated phrase is what the system uses to request or report the value of a datum. This text gives the system some of the advantages of a natural language interface, while retaining the advantages of strictly canned text. The askable flag of the datum function tells the system whether the user may be requested to supply a value for this datum. Use of this field allows the expert to prevent questions that a typical user cannot answer. The codomain-constraint of the datum function is used to verify that a value of this datum is reasonable.
That is, the value of any datum that uses this datum function must satisfy the constraint. For example, if the value of the datum is meant to be a probability, the system will use the codomain constraint called probp which will return true if the value is between zero and one. This provides the system with a way to screen data that cannot possibly be correct. The self-merit of the datum function is a number that is used to calculate Merit [5], a measure of the utility of requesting information from the user. The self-merit associated with each datum function is an expert-defined approximation of the ratio of the expected change in the value of a datum to the expected cost of determining the value. The concept of Merit will be discussed later in relation to the questioning process of the AGNESS shell (see section 5).

B. The Constraint-default List of a Node

The constraint-default list is the second element of a node. Each element of this list is an ordered pair that consists of a domain constraint and a default value. For example, the constraint-default list ( (person) computers ) for a datum function with the phrase (The favorite hobby of <1>) defines that computers are the default hobby of every person. The domain constraints in the constraint-default list need not represent disjoint sets of contexts. If a context matches more than one domain constraint in the constraint-default list, the first such constraint and its associated default value apply. For example, a constraint-default list containing ( (male) operating-systems ) and ( (person) artificial-intelligence ) defines that the default hobby for any man is operating systems, while the default hobby for any other person (simply woman in this example) is artificial intelligence.

C. The Edges of a Node

The next two elements of a node are the edges to the antecedents and the consequents.
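The first-match rule for constraint-default lists lends itself to a short sketch. The function names and object-type sets below are invented for illustration; the lookup order is the point.

```python
# Hypothetical sketch of constraint-default list lookup: the FIRST
# matching domain constraint supplies the default value.
def matches(context, constraint, object_types):
    return (len(context) == len(constraint) and
            all(o in object_types[t] for o, t in zip(context, constraint)))

def default_value(constraint_default_list, context, object_types):
    for constraint, default in constraint_default_list:
        if matches(context, constraint, object_types):
            return default
    return None  # no constraint matched: no default applies

object_types = {"male": {"Steve"}, "person": {"Steve", "Cindy"}}
hobby_defaults = [(("male",), "operating-systems"),
                  (("person",), "artificial-intelligence")]

print(default_value(hobby_defaults, ("Steve",), object_types))
# -> operating-systems  (Steve matches (male) before (person))
print(default_value(hobby_defaults, ("Cindy",), object_types))
# -> artificial-intelligence
```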
In AGNESS, an edge is explicitly represented as the following 4-tuple: <antecedent, consequent, transformation template, auxiliary information>. The antecedent of an edge is the node that is used as the "source". For example, the antecedents of the node COMP-AGES are the nodes AGE1 and AGE2. It is the data of these nodes that are used in the computation of the consequent datum. The consequent of an edge is the node that is used as the "destination". It is the datum of this node that is computed using the data of the antecedent nodes. The consequent of COMP-AGES is the node COMPATIBLE. A node can have an arbitrary number of antecedents and consequents. The names "antecedent" and "consequent" are chosen from their role in the typical IF - THEN rule. The transformation template of an edge is a list of bracketed numbers that specifies the correspondence between parameters of the antecedent and consequent datum functions. Each bracketed number in a transformation template specifies a single pair of corresponding parameters. The consequent parameter is given by the number's value, while the antecedent parameter is given by the number's position in the transformation template. Every element of the antecedent context must occur in the consequent context. Thus reasoning is constrained to proceed from the general to the specific.

Transformation Template : ( <2> )
Consequent Context : ( 1 2 )
Antecedent Context : ( 1 )

Figure 3. Operation of a transformation template

The operation of a transformation template is illustrated in Figure 3. In this example the edge links the antecedent node HOBBY2 (with a one parameter datum function) to the consequent node COMP-HOBBIES (with a two parameter datum function). The first (and only) element of the antecedent context corresponds to the second element of the consequent context because the first element of the template contains <2>. Thus the two parameters must be the same.
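The positional correspondence encoded by a transformation template can be sketched as two small mapping functions, one per traversal direction. These function names are assumptions; the template convention (position = antecedent parameter, value = consequent parameter) is the paper's.

```python
# Hypothetical sketch of transformation-template traversal. The template
# (2,) stands for the paper's ( <2> ): antecedent parameter 1 corresponds
# to consequent parameter 2.
def to_consequent(template, ant_context, consequent_arity):
    """Antecedent -> consequent: build a partially specified context,
    with None playing the role of the '?' placeholder."""
    ctx = [None] * consequent_arity
    for ant_pos, cons_pos in enumerate(template):
        ctx[cons_pos - 1] = ant_context[ant_pos]
    return tuple(ctx)

def to_antecedent(template, cons_context):
    """Consequent -> antecedent: pick out the corresponding parameters."""
    return tuple(cons_context[cons_pos - 1] for cons_pos in template)

template = (2,)
print(to_consequent(template, ("Steve",), 2))       # -> (None, 'Steve')
print(to_antecedent(template, ("Cindy", "Steve")))  # -> ('Steve',)
```

The `None` slot is exactly the partially specified context that the constraint-default list later fills in with all legal objects.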
The first element of the consequent context does not correspond to any element of the antecedent context.

Edge = <HOBBY2, COMP-HOBBIES, ( <2> ), nil>
[a] Consequent Context : ( ? , Steve )   Antecedent Context : ( Steve )
[b] Consequent Context : ( Cindy , Steve )   Antecedent Context : ( Steve )

Figure 4. Context mapping during edge traversal

AGNESS uses a transformation template in two ways, corresponding to the two directions in which the edge can be traversed. In proceeding from antecedent to consequent, AGNESS constructs a set of consequent contexts based on the antecedent context. Figure 4a illustrates this traversal. In this example, the transformation template is interpreted from the antecedents' point of view. That is, the template tells the system that the first parameter in the antecedent context (Steve) is mapped to the second parameter of the consequent context. Thus the context (Steve) for HOBBY2 is mapped to the context ( ?, Steve ) for COMP-HOBBIES. Notice, this context is only partially specified. The system will fill in the question mark with all legal objects by using the constraint-default list for COMP-HOBBIES. This process will be discussed in section 4. Traversing the edge from antecedent to consequent occurs during the propagation process. In proceeding from consequent to antecedent, the transformation template tells the system that the second parameter in the consequent context (Steve) is mapped to the first parameter of the antecedent. This is illustrated in figure 4b. Thus the context (Cindy, Steve) for COMP-HOBBIES maps to the context (Steve) for HOBBY2. Traversing the edge from consequent to antecedent occurs during the questioning process. The auxiliary information element of an edge holds any additional information a particular inference method might need. For example, the subjective Bayesian inference method requires conditional probabilities. This information can be extracted from the edge and used during the propagation process.

D.
The Inference Method of a Node

The last element of a node is the inference method. An inference method specifies the relation that holds between the value of a consequent datum and the values of its antecedents. An inference method is defined as the following 3-tuple: <assignment function, antecedent value function, edge merit function>. Each inference method has an assignment function which is a procedure for deriving the value of a consequent datum from the values of its antecedent data. The assignment function is called with one argument for each antecedent, and returns the value of the consequent. The arguments of the assignment function are usually the values of the antecedents. For example, the inference method *AND* (probabilistic "and") takes the value of each of the antecedent data, multiplies them together and assigns the resulting value to the consequent datum. In more complicated situations, another function called the antecedent value function constructs the arguments of the assignment function from information present in the computation network. Thus the second element of an inference method, the antecedent value function, is used when the assignment function requires information other than the value of the antecedents. For instance, subjective Bayesian inference methods use conditional and prior probabilities to apply Bayes' formula [6]. This information is extracted from the computation network by the antecedent value function. For instance, the conditional probabilities for subjective Bayesian inference are stored in the auxiliary position of the edge. The antecedent value function returns a value suitable as an argument to the assignment function. The third function making up an inference method is the edge-merit function. Merit calculations are used to direct the acquisition of information by identifying questions that are likely to have a large effect on the results of the computation network at a relatively low cost.
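An inference method such as *AND* can be sketched as the 3-tuple the paper describes. The dictionary structure and function names below are assumptions for illustration; the *AND* semantics (multiply independent antecedent probabilities) follow the paper.

```python
from functools import reduce

# Hypothetical sketch of an inference method as a 3-tuple of functions.
# *AND*: multiply independent antecedent probabilities together.
def and_assignment(*probs):
    return reduce(lambda a, b: a * b, probs, 1.0)

# Simplest antecedent value function: pass antecedent values through
# unchanged (a subjective Bayesian method would instead pull conditional
# probabilities from the edges' auxiliary information).
def identity_values(antecedent_data, edges):
    return antecedent_data

and_method = {"assignment": and_assignment,
              "antecedent_value": identity_values,
              "edge_merit": None}  # only needed if questioning is used

vals = and_method["antecedent_value"]([0.9, 0.8], edges=None)
print(and_method["assignment"](*vals))  # ~0.72
```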
Since user interaction is frequently the most time-consuming part of expert system use, the intelligent direction of questioning can significantly improve the system's performance. If a questioning mechanism is desired for a computation network built with AGNESS, an edge-merit function must be specified for each inference method. The current implementation of AGNESS contains inference methods that deduce consequent probabilities from independent antecedent probabilities. Basic logical connectives ("and", "or", and "not") and subjective Bayesian inference have been implemented, as well as Mycin style confidence functions. AGNESS also allows expert-defined inference methods for both general and problem-specific purposes. This feature allows the domain expert to use any inferential relationship that is found to be desirable.

IV. PROPAGATION

Propagation is a procedure invoked each time a new value is added to the data base. The propagation process updates the data base so that values of the consequents of the modified data are consistent with the new values of their antecedents. This means that the value of each consequent datum has been deduced from the values of the antecedent data by applying its inference method. Modifying the values of the consequents thus implies a recursive invocation of the propagation procedure. The recursion is terminated by reaching nodes that have no consequents. The termination condition is insured by requiring that the network be acyclic. The propagation process consists of two phases.

Partially Specified Context : ( ? , Steve )
Constraint-Default List : ( ( (male,female) 0.8 ) ( (female,male) 0.8 ) )
Matched Constraint : (female,male)
Consequent Contexts : { (Cindy,Steve) (Ann,Steve) (Candy,Steve) }

Figure 5. Context propagation

A. Phase I

The first phase of the propagation process is illustrated in figure 5. First, a partially specified context is generated from the antecedent context using the transformation template associated with the edge. Next, the unspecified parts of the consequent context are filled in using the domain-constraint list of the consequent node. In this example, the partially specified context ( ? , Steve ) matches only one domain constraint, namely (female,male). This tells the system that the first parameter of the partially specified context can be filled in with any female object. Doing so gives a set of all the contexts for the consequent that will be affected by the change in the antecedent datum.

B. Phase II

In the second phase of the propagation process, the consequent node is evaluated in each of the contexts produced by phase I. That is, the value for each datum involving the consequent node and one of the given contexts is re-computed using the new value of the antecedent datum. For example, in figure 5, a new value would be computed in each of the three resulting contexts. Once this has been done, the propagation process is re-started once for each node/context pair. Thus all data in the data base that are affected by the change in the initial antecedent datum will be updated. An important effect of this propagation process is the downward inconsistency that might arise. If the initial datum that is changed corresponds to any node other than a bottom node, the database will be inconsistent downward. That is, this modified datum has no longer been inferred from its antecedents. The propagation process does insure upward consistency in the database.

V. QUESTIONING

The propagation process discussed earlier can be thought of as the forward chaining mechanism of the AGNESS system. AGNESS also provides a backward chaining mechanism, namely the questioning process. To initialize this process, the user gives the system the initial focus (a node and context). This focus is used as the goal that the system is working towards.
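The two-phase, recursive propagation over an acyclic network can be sketched as follows. The data structures here (an edge dictionary, a datum database keyed by node and context, and the toy `gen_contexts`/`evaluate` callbacks) are all invented for illustration.

```python
# Minimal sketch of two-phase propagation over an acyclic network.
def propagate(node, context, db, edges, evaluate, gen_contexts):
    """edges maps a node to its consequent nodes; gen_contexts performs
    phase I (context generation); evaluate performs phase II."""
    for cons in edges.get(node, []):
        for cons_ctx in gen_contexts(node, cons, context):       # phase I
            db[(cons, cons_ctx)] = evaluate(cons, cons_ctx, db)  # phase II
            # Re-start propagation for each updated node/context pair;
            # recursion stops at nodes with no consequents (acyclic net).
            propagate(cons, cons_ctx, db, edges, evaluate, gen_contexts)

edges = {"AGE1": ["COMP-AGES"]}
db = {("AGE1", ("Steve",)): 30,
      ("AGE2", ("Cindy",)): 26}

def gen_contexts(ant, cons, ctx):
    # Toy phase I: (Steve) maps to the single legal (female, male)
    # context in this one-woman example.
    return [("Steve", "Cindy")]

def evaluate(cons, ctx, db):
    a1 = db[("AGE1", (ctx[0],))]
    a2 = db[("AGE2", (ctx[1],))]
    return max(0.0, 1.0 - abs(a1 - a2) / 20.0)  # invented compatibility rule

propagate("AGE1", ("Steve",), db, edges, evaluate, gen_contexts)
print(db[("COMP-AGES", ("Steve", "Cindy"))])  # -> 0.8
```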
If the focus datum is marked as askable, the system will ask for its value. If the user supplies the value, the value is recorded, the propagation process initiated, and the questioning process stops. If the user does not supply a new value, the system generates the antecedent data of the focus datum. The antecedent data act as the initial set of candidate questions. The questioning process then proceeds in three phases: Merit calculation, value retrieval, and candidate updating.

A. Merit Calculation

In this phase, the system calculates Merit values for each datum in the candidate set that is marked as askable. These Merit values represent the ratio of the expected change in the focus datum over the expected cost of supplying the candidate datum. The calculation is based on the partial derivatives of the assignment functions on the path from the candidate's node to the initial focus node. The theoretical foundation of Merit has been presented in previous papers and will not be presented here [5].

B. Value Retrieval

The system now chooses the candidate datum with the highest Merit value and asks the user for the value of this datum. If the user supplies a value, the system will initiate the propagation process to update the data base to be consistent with the new value. The user may, however, not know the answer to the question. In this case, no propagation is performed.

C. Candidate Updating

In this phase of the questioning process, the system updates the set of candidate data. The updating is done as follows. If the user answered the question, that datum is simply removed from the candidate set. However, if the user did not answer the question, the antecedent data are generated and added into the set of candidates. By doing this, the system has added to the possible questions the data that will allow a value for the skipped datum to be computed. At this point, the system returns to the Merit calculation phase.
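The three-phase loop above can be sketched in a few lines. The function names, the hard-coded Merit scores, and the canned user answers below are all assumptions for demonstration; only the loop structure (pick highest Merit, ask, replace a skipped candidate with its antecedents) follows the paper.

```python
# Hypothetical sketch of the Merit-directed questioning loop.
def question(focus, candidates, merit, ask, antecedents, threshold=0.05):
    while candidates:
        # Merit calculation: rank the askable candidates.
        best = max(candidates, key=lambda d: merit(d, focus))
        if merit(best, focus) < threshold:
            break                       # remaining questions are not worth asking
        value = ask(best)               # value retrieval
        candidates.remove(best)
        if value is None:
            # Candidate updating: user skipped, so ask about its
            # antecedents instead (letting the value be computed).
            candidates.extend(antecedents(best))
        # (on an answer, propagation would run here)

# Toy run: invented Merit scores; the user skips question A.
merits = {"A": 0.9, "B": 0.4, "C": 0.7}
answers = {"A": None, "B": 0.5, "C": 0.2}
order = []
question("GOAL", ["A", "B"],
         merit=lambda d, f: merits.get(d, 0.0),
         ask=lambda d: order.append(d) or answers[d],
         antecedents=lambda d: ["C"] if d == "A" else [])
print(order)  # -> ['A', 'C', 'B']
```

Skipping A promotes its antecedent C, whose Merit (0.7) then outranks B (0.4), so C is asked next.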
This questioning process halts when either the user requests the system to stop, or when the Merit of the best available candidate datum falls below a predetermined threshold. The use of the Merit scheme directs the system towards asking next an optimal question. Thus, if the questioning process must be terminated before all the questions are asked, the time has been used to optimal efficiency.

VI. USER INTERFACE

The AGNESS system is capable of using two different user interfaces. The system can run in a batch interface, reading and executing commands from a file. This user interface is useful for applications that require many independent runs of the AGNESS system. The second and more interesting interface is a menu-driven user interface. Through a series of menus, the user can pick activities, nodes and contexts with a minimum of typing. AGNESS provides a variety of facilities including construction and explanation.

A. Construction

This facility provides the expert with a user-friendly interface for building the computation network. Through this facility, the expert can add, delete, and modify nodes, datum functions, and edges. Also, the expert can examine the structure of the network through a graphical representation, examine the values of a given set of nodes and contexts, and essentially access and manipulate all elements of the network and data base. This activity allows the expert to experiment with slight additions to the network, test the need for some nodes by temporarily removing them from the network, and even experiment with new and different inference methods.

B. Explanation

One of the most important features of an expert system, and thus an expert system shell, is the explanation facility. In early systems, the explanation usually took the form of answering "why" a question was being asked.
This form of explanation gave the user a way to follow the reasoning of the system by viewing the series of rules the system used to reach its conclusions. We use the term "user" to refer to the end user, the domain expert, and the knowledge engineer. By exposing the user to this information, the designers increased the confidence in the final system. Many early systems also included a second explanation query, namely "how". This query was designed to allow the user to ask questions about the conclusions of the system. As the system listed the conclusions, the user was allowed to ask "how" each conclusion was reached. Designing an improved set of explanation queries has become an increasingly important area of research. Most of this research has moved away from the simple notion of presenting the rules used by the system towards more sophisticated explanation systems. Some researchers are concentrating on the causal relationships that exist in the domain knowledge [7]. Another branch of research on explanation concentrates on the natural language feature of the user interface. Although this names only two of the numerous areas of research on explanation, it does serve to illustrate that the explanation of tomorrow's systems will be sophisticated, depending on more than the simple rules used by the expert system in reaching its conclusions. In an expert system shell, the value of the explanation facility is increased significantly. A shell is designed to be used over and over again in various domains, and as such should include friendly and useful interface facilities. Most of the expert system shells available today host an impressive graphic and menu-driven user interface. However, these shells have seemingly forsaken explanation as part of their elaborate interface. For example, some of the most visually sophisticated and useful expert system shells such as ART, KEE, and LOOPS do not include explanation as a feature [3].
These systems do provide a means of programming the explanation function into the final system, however it is not provided as part of the actual shell. Other shells that do provide explanation facilities such as INSIGHT, M.1, and Personal Consultant only provide the basic "why" and "how" queries that were found in the earliest systems [3]. Although the state-of-the-art explanation facilities are far too complicated and domain sensitive to be reasonably included in today's expert system shells, the set of permissible explanation query types should be much larger than the simple "why" and "how" that is found today. AGNESS provides eight types of explanation that give the user a more complete set of queries. These query types also give the system designer rule tracing and debugging facilities. Each query type uses only slightly more knowledge than the standard "why" and "how" queries, and yet significantly increases the information available to the user. Each has been designed to improve the explanation available to the user while using only technology that is already in use in most of today's expert system shells. Thus, these query types are not meant to challenge the state-of-the-art explanation technology, but instead, to act as an intermediate set that can be included in commercially available systems with little or no increase in the cost or complexity. Also, each of these query types can be added to existing expert systems directly without significant effort. In the AGNESS system, the explanation queries fall into three categories: queries about the past, queries about the present, and queries about the network structure. Each of the eight queries is recursive. That is, as the system answers the query, the user is allowed to ask for explanation of the answer. Queries about the past allow the user to ask the system (1) why a datum was derived, (2) where a datum was used, and (3) how a datum was computed.
These three queries give the user the ability to move through the database, examining the features that led to certain data. With respect to the present, AGNESS provides the user with the ability to ask (1) why a question is being asked, (2) where a datum will be used, and (3) how a datum will be computed if left to the system. By using these queries, the user can follow the reasoning process of the system as it happens. They also provide the user with information about the effect of answering a question, or leaving the computation up to the system. Thus the user can always ask for an explanation of the system's actions, and an explanation of the results of the user's actions. AGNESS also provides explanation about the structure of the network. The user or the expert is allowed to ask for (1) the antecedents and (2) the consequents of any node. This information is displayed graphically, and gives the user or expert an explanation of the structure of the network at the node level. This explanation facility can be valuable when the expert wants to verify part of the network. It is also valuable to the user as it gives an explanation in terms of general propositions as opposed to specific instances. This set of explanation facilities allows the system to be easily understood and followed, thus increasing the user's confidence in the system's conclusions. A more detailed description of the query types and their importance will be presented in a forthcoming paper.

VII. CONCLUSIONS

The AGNESS system has been tested on domains ranging from an expert system on wine to the real world problem of analyzing treadmill exercise ECG test results. In both domains, the system proved to be elegant and simple to use. The expert system written to analyze ECG test results has achieved a level of performance higher than that of the human doctors that were being used to analyze the data [4].
AGNESS represents a significant step forward in generalized expert system shells. AGNESS can reason both forward and backward, can use any combination of numeric and non-numeric data, and can use any well defined inference method required by the user. The system provides an excellent range of explanation queries far and above other expert system shells. The explanation query types give a full and rich explanation of the relationships that exist in the knowledge base. By including these query types as a basic feature, expert system shells can patiently wait for the technology of tomorrow while remaining useful today. The AGNESS architecture provides efficient implementations of expert systems by sharing constructs such as nodes, edges, and datum functions whenever possible. The computation network used in AGNESS allows only relevant rules to be considered during propagation, thus reducing the work needed in finding the rules that can be fired. Also, AGNESS uses the Merit scheme to handle the questioning of the user to insure that the most important questions are asked first in case the questioning period must be prematurely terminated.

VIII. PLANS

Even though AGNESS has been shown to be extremely useful as an expert system shell, we are still working on more features and improvements to make the system even better. Some of the things we are investigating include new explanation facilities that contain more knowledge than the current system, new network configurations to help make the propagation process even faster, and improvements to the Merit scheme used during questioning. The interface is also being revised to include more graphical representations, and better help facilities.

ACKNOWLEDGMENTS

We would like to express our deep gratitude to the members of the Monday Night Expert Systems Workshop. Without their guidance and encouragement, the AGNESS system might never have been implemented.

REFERENCES

[1] Shortliffe, E.H., Computer Based Medical Consultations: MYCIN.
New York: Elsevier, 1976.
[2] Duda, R.O., Hart, P.E., Konolige, K., and Reboh, R., "A Computer-Based Consultant for Mineral Exploration," Technical Report; Final Report, SRI Project 6475, SRI International, September, 1979.
[3] Harmon, P., and King, D., "Expert Systems: Artificial Intelligence in Business," John Wiley & Sons, Inc., 1985.
[4] Slagle, J.R., Long, J.M., Wick, M.R., Matts, J.P., and Leon, A.S., "Expert Systems in Medical Studies - A New Twist," Proceedings of the Conference on Applications of Artificial Intelligence, SPIE, 1986.
[5] Slagle, J.R., and Hamburger, H., "An Expert System for a Resource Allocation Problem," Comm. of the ACM, September, 1985.
[6] Duda, R.O., Hart, P.E., and Nilsson, N.J., "Subjective Bayesian Methods for Rule-based Inference Systems," National Computer Conference, 1976.
[7] Swartout, W.R., "XPLAIN: a System for Creating and Explaining Expert Consulting Programs," Artificial Intelligence 21, 1983.
SYNTEL™: KNOWLEDGE PROGRAMMING USING FUNCTIONAL REPRESENTATIONS

Rene Reboh and Tore Risch
Syntelligence, Inc.
1000 Hamlin Court, PO Box 3620
Sunnyvale, CA 94088

ABSTRACT

SYNTEL™ is a novel knowledge representation language that provides traditional features of expert system shells within a pure functional programming paradigm. However, it differs sharply from existing functional languages in many ways, ranging from its ability to deal with uncertainty to its evaluation procedures. A very flexible user-interface definition facility is tightly integrated with the SYNTEL interpreter, giving the knowledge engineer full control over both form and content of the end-user system. SYNTEL is fully implemented and has been successfully used to develop large knowledge bases dealing with problems of risk assessment.

I INTRODUCTION

A large class of expert system applications concerns judging risks under conditions of uncertainty. Problems of financial risk analysis--with which we have been particularly concerned--have this characteristic, but it arises in many other fields, such as public health and safety. An effective knowledge representation language addressing such domains must satisfy a number of requirements. The assessment of financial risk involves a combination of quantitative analysis and qualitative judgment. Both quantitative and qualitative factors might have either high or low confidence associated with them, imposing a need to manage uncertainty uniformly across different types of variables. Financial risk analysis often requires consideration of sets of similar objects (such as geographical sites or years), suggesting that the representation and manipulation of large sets requires efficient support. For the end user, the analysis of financial risk may require identifying and analyzing a large amount of (possibly conflicting) data.
Accordingly, data must be presented (and requested) in a compact, familiar form; equally important, the user must be spared the need even to look at system displays that are not immediately relevant. For the knowledge engineer, the knowledge representation must afford a natural means to structure the assessment criteria used by experts, and to do so on a large scale. This goes beyond the availability of a well-developed knowledge-acquisition tool set and the design of a convenient syntax, although surely both are important. It requires the architecture of the language to mirror the structure of explicit domain expertise and to hide all inessentials from the knowledge engineer. These and other requirements were sufficiently demanding that we were persuaded to design and implement a new language. Our philosophical point of view is that a purely non-procedural representation of knowledge most naturally reflects the way experts appear to think in our domains of interest: Experts are very much concerned with relations among factors and subfactors, but appear far less concerned with the order in which factors are considered--provided, of course, that all potentially relevant items are considered before a recommendation is made. SYNTEL is thus at one end of the "what-to-how" spectrum and bears certain similarities to other non-procedural languages such as LUCID [Wadge & Ashcroft, 1985], BATTLE [Slagle & Hamburger, 1985] and that described by [Lucas & Risch, 1982]. Among the more widely used expert system languages, SYNTEL might be compared with a production language like OPS5 [Brownston, et al., 1985]. However, SYNTEL differs from OPS5 in using an alternative to a recognize/act architecture, in the high level of primitives it supports, in its treatment of inexact reasoning, its runtime features, and its integration with a user interface-definition facility.
The following sections discuss SYNTEL's use of a functional representation of knowledge, its inference procedure, the user interface facility, and some run-time features of the language. A final section comments on some lessons learned.

II KNOWLEDGE REPRESENTATION

The basic entities in SYNTEL are variables and functions. Variables can represent input data, intermediate levels of assessments, or final (i.e., output) assessments. Functions define mappings between variables. A SYNTEL program (i.e., a knowledge base) consists in large part of a collection of variable and function definitions.

A. Variables

A SYNTEL variable does not hold a single value but instead represents a set of probability distributions indexed by formal parameters. For example, a variable EconomicOutlook, indexed by the single parameter Region, might contain the following information:

    Region   Weak   Avg   Strong
    NE        .3     .6     .1
    SE        .4     .5     .1
    NW        .1     .4     .5
    SW        .1     .2     .7

The column headings indicate that EconomicOutlook can take on the values Weak, Avg, and Strong. Each row of the table holds a probability distribution over these values for an instance of the variable, each instance corresponding to a value of the Region parameter. In general, variables can be indexed by any number of formal parameters. The variable EconomicOutlook is ordinal-valued and the probability distributions describing its instances are discrete. Variables in SYNTEL can also have logical, nominal (e.g., a city name), string or real values. The probability distribution of a real-valued variable is represented by a mean and a variance for each variable instance.

KNOWLEDGE REPRESENTATION / 1003
From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved.

Because uncertainty, in various guises, plays such an important role in financial assessment problems, we have provided additional mechanisms for its representation.
For example, variables and parameters can both have Unknown as an explicit value, with consequences that are context dependent. An unknown parameter causes the creation of a partially-indexed set, an unknown logical variable is treated using a 3-valued logic, and so forth. Other mechanisms for managing uncertainty include support of prior and default probability distributions over variables.

Our interest in the efficiency of the knowledge engineering process, together with our desire for extensive error checking facilities, has led us to develop a powerful type system. The language has a handful of built-in variable types, but a typical knowledge base contains many times that number of additional variable types defined by the knowledge engineer. This extensive use of typing leads to code-sharing (further enhanced by an inheritance hierarchy of types), and to improved consistency of large knowledge bases. Control of errors is facilitated by compile-time and run-time type-checking.

B. Functional Representations

We have made a strong commitment to knowledge representation based on functional mappings between variables. To describe this approach by example, suppose a real-valued variable Population, indexed by Region, is to be combined with our previously-defined EconomicOutlook to estimate the size of a target market of interest. This would be represented by

    TargetMarket(Region) <= f[EconomicOutlook(Region), Population(Region)].

The knowledge engineer is not free to define the function f arbitrarily. Rather, he or she selects the computational form of f from among the several dozen supported in SYNTEL. Accordingly, this selection process--and, for some functions, specification of additional information like weighting factors--plays a central role in the knowledge engineering process. The functional forms supported by SYNTEL, called ComputationTypes, fall naturally into several families.
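To make the machinery just described concrete, the following Python sketch (hypothetical code, not SYNTEL syntax) represents a variable as a set of discrete distributions indexed by the formal parameter Region, and a table-driven function combining it with a second variable in the spirit of TargetMarket(Region) <= f[EconomicOutlook(Region), Population(Region)]. The population figures and bracket cutoffs are invented for illustration.

```python
# Hypothetical sketch of a SYNTEL-style variable: a set of discrete
# probability distributions, one per value of the Region parameter.
economic_outlook = {
    "NE": {"Weak": 0.3, "Avg": 0.6, "Strong": 0.1},
    "SE": {"Weak": 0.4, "Avg": 0.5, "Strong": 0.1},
    "SW": {"Weak": 0.1, "Avg": 0.2, "Strong": 0.7},
}

# A real-valued variable; here just a point estimate per region
# (invented figures -- SYNTEL would carry a mean and a variance).
population = {"NE": 45.0, "SE": 38.0, "SW": 55.0}

# A Table-style mapping: rows index population brackets, columns
# index outlook values (entries chosen for illustration).
TABLE = {
    "0-40":  {"Weak": 10, "Avg": 20, "Strong": 25},
    "40-50": {"Weak": 15, "Avg": 25, "Strong": 30},
    "50+":   {"Weak": 15, "Avg": 35, "Strong": 45},
}

def target_market(region, table):
    """Expected table entry under the region's outlook distribution."""
    pop = population[region]
    bracket = "0-40" if pop < 40 else ("40-50" if pop < 50 else "50+")
    dist = economic_outlook[region]
    return sum(p * table[bracket][outlook] for outlook, p in dist.items())

print(round(target_market("SW", TABLE), 1))  # → 40.0
```

A real SYNTEL Table would also propagate the variance of Population by integrating its normal distribution over each bracket; the expectation above is just the deterministic limit of that computation.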
Beyond the expected arithmetic and logical functions, there are important families of functions for performing a variety of tests and manipulations on sets, for combining or weighting variables in several ways, for testing the current state of the computation, and for accessing databases.

Certain ComputationTypes have the important ability to combine arguments of different modalities probabilistically. The simplest such, Table, illustrates the principle. To use it to compute TargetMarket, a table such as the following would be defined, which would apply to each Region.

    TargetMarket        EconomicOutlook
    Population        Weak    Avg    Strong
    0 - 40             10      20      25
    40 - 50            15      25      30
    50 - +INF          15      35      45

The table extensionally represents the expert's judgment of how an ordinal and a real-valued variable are combined to estimate the real variable TargetMarket. Suppose there were uncertainty in the arguments because each is an estimate of the value, say, in the year 1995. Then each argument would have an associated probability distribution, discrete for EconomicOutlook and normal for Population. Integration of the normal over the indicated ranges, together with an assumption of independence, allows the mean and variance of TargetMarket to be computed.

While simple tables are useful, judgments of this sort are usually combined using a more complex ComputationType, called Weight, that implements a non-linear, probabilistic voting scheme. Weight uses a mechanism analogous to Table, in combination with summation and a "soft" threshold, to map between variables of arbitrary modalities. With the intense current interest in methods for managing inexactness in expert systems [e.g., Pearl, 1985; Heckerman, 1985; Zadeh, 1985; Gordon & Shortliffe, 1985], it may seem foolhardy to have designed and implemented an alternative representation for experts' subjective beliefs.
Nonetheless, the "rate and weight" style of reasoning we found prevalent in financial institutions led us to develop the Weight mechanism which, extending earlier work [Duda and Reboh, 1984; Reboh, Reiter and Gashnig, 1982], eases the problem of representing the "apples to oranges" comparisons that are an essential part of financial risk assessment.

Even a cursory description of the remaining ComputationTypes would far exceed space limitations, but we make a few general comments. Every function works with probability distributions as inputs; independence of arguments is assumed as needed. Distributions are typed; e.g., normal, discrete, exact, or undefined. However, unlike the strong typing of variables, distribution types can be determined only at run-time. All probabilistic calculations converge smoothly to deterministic calculations when a variable is known with certainty. Each argument to a function can be a set (of distributions), so it is important to support set manipulations within argument definitions. We have therefore provided means for selecting, aggregating, disaggregating, and defaulting variable instances. A simple example of this is the selection of the MaximumInstance of a variable. In the illustration above, this could be used to identify the Region having the largest TargetMarket.

Continuing the illustration, we can easily imagine an expert specifying EconomicOutlook to depend on a variety of additional quantitative and qualitative variables. One or more additional equations would therefore have to be written to define EconomicOutlook and its arguments until a primitive input level was reached. Mathematically, the principal operation for articulating a knowledge base is thus function composition. The computational correlate of this is function evaluation, which will be further described below. Before moving on, though, we should re-emphasize the purely declarative nature of this representation.
The "<=" signifies function definition, not assignment; the right hand side of the equation contains no assignments, deletions, or other side effects. Thus, the order of function evaluation is controlled entirely by the SYNTEL interpreter, not by the knowledge engineer. This separation is more than a theoretical nicety. In practice, the knowledge engineer is not concerned with control issues. (See [Schor, 1986] for a further discussion of this point.)

III INFERENCE

A single inference step in SYNTEL consists of the side-effect-free evaluation of a function, given the value of its arguments. This step, called propagation, takes place whenever the value of an argument to any function is changed. If the function being evaluated is itself an argument of a second function, a subsequent evaluation will be performed. The basic evaluation strategy is thus forward chaining, and the result of an inference cycle is to maintain the values of functions consistent with the (new) values of their arguments.* Propagation is the central operation of SYNTEL's inference engine.

A. Inference Networks

The nesting of function compositions is compiled into an explicit data structure called the static knowledge base. The static knowledge base can be represented as a directed acyclic graph in which nodes correspond to functions and arcs point to a function from its arguments. We call these graphs inference nets, following the terminology established in the PROSPECTOR system for analogous but simpler structures. Nodes with no incoming arcs represent special elementary functions that accept input data from the user or from external data sources. Nodes with no outgoing arcs represent "top-level" functions whose values are among those displayed to the user.

A typical cycle of operation begins with the user providing a new value for any node in the inference net to which he or she has access. (Access mechanisms are described in the following section.)
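This propagation step can be sketched in a few lines of Python (a hypothetical toy, not SYNTEL's implementation; node names are invented, and a real interpreter would additionally order evaluations and limit recomputation):

```python
from collections import defaultdict, deque

# Toy inference net: nodes hold values, node functions are
# side-effect-free, and changing an input forward-propagates
# along the dependency arcs of the net.
class InferenceNet:
    def __init__(self):
        self.funcs = {}                      # node -> (function, argument nodes)
        self.values = {}                     # node -> current value
        self.dependents = defaultdict(list)  # node -> nodes that consume it

    def define(self, node, func, args):
        self.funcs[node] = (func, args)
        for a in args:
            self.dependents[a].append(node)

    def set_input(self, node, value):
        self.values[node] = value
        self._propagate(node)

    def _propagate(self, changed):
        queue = deque(self.dependents[changed])
        while queue:
            node = queue.popleft()
            func, args = self.funcs[node]
            if not all(a in self.values for a in args):
                continue                     # an argument is still unknown
            self.values[node] = func(*(self.values[a] for a in args))
            queue.extend(self.dependents[node])

net = InferenceNet()
net.define("TargetMarket", lambda share, pop: share * pop,
           ["MarketShare", "Population"])
net.set_input("Population", 50.0)
net.set_input("MarketShare", 0.5)
print(net.values["TargetMarket"])  # → 25.0
```

In SYNTEL the queue discipline is replaced by a compile-time partial ordering of evaluations, as the efficiency discussion below describes.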
Subsequent function evaluations can be visualized as a dataflow originating at the changed node and propagating upwards along the arcs of the inference net. Changing the value of a node cannot lead to the creation of new variables--i.e., of new nodes. However, it can lead to the creation or deletion of instances of existing variables. When this occurs, a truth maintenance procedure (adapted to our functional representation) is invoked to assure global consistency over all instances of all variables.

B. Efficiency Issues

A typical knowledge base written in SYNTEL contains thousands of nodes and, since functions in general compute sets of distributions, many thousands of instances. We have therefore been motivated to develop several techniques to enhance propagation efficiency. Together, they implement the strategy "Compute only what you must, and that only once."

Updated instances: Function values are computed only for those instances of its arguments that have changed significantly. Referring to our earlier example, if our economist tells us that the EconomicOutlook for the NorthEast region has improved, we would compute a new value for only the variable instance TargetMarket(NE).

Limited propagation: New function values are computed only if they affect the display seen by the user. A change in the value of any node may, in general, affect the values of many output nodes, only some of which can be displayed at any one time. By limiting propagation to these visible nodes, execution time is proportional to the number of outputs displayed rather than to the total size of the knowledge base. Potential changes to other nodes are recorded in a queue if needed.

*But see Section V for a description of a backward chaining sub-process.
Non-redundant computation: Because an inference net is a graph rather than a tree, because instances of a node can be selected by other nodes, and because propagation can be initiated from several nodes simultaneously, a straightforward propagation algorithm will perform unnecessary recomputation of functions. Limited propagation and non-redundant computation are supported by an extensive compile-time analysis of the static knowledge base. The analysis identifies the connections between each node in the inference net and each screen that the user may see. It also produces an optimal partial ordering for function evaluation. In these and similar optimizations we have favored run-time efficiency at the expense of compilation cost.

IV USER INTERFACE

It is widely recognized that the user interface consumes a large proportion of application development resources, yet its design is so critical to system acceptance that its development cannot be slighted. Our approach to this ubiquitous problem is to separate the description of the interface cleanly into two parts: one governing the appearance and interactive properties of the display itself, and the other defining the logical relations between display objects and the underlying inference net.

A. Display Description

Our application domains make it natural to adopt the business form as the underlying display metaphor, much as ONCOCIN [Hickam, et al., 1985] uses medical forms for its domain. Examples of business forms are insurance applications or sets of financial statements. Figure 1 shows a form concerning the assessment of a building's fire risk.
Figure 1: Fire risk at Location 1

Forms are represented as structured objects constructed of rectangular primitive regions called boxes and non-primitives called groups. Boxes (which need not have visible outlines) are used to accept input data, display several kinds of output assessments, and allocate regions for displaying text and annotations of various kinds. Groups, which can be nested to any depth, are typically used to define larger regions of the display, to define a single full display screen (a "form") and to define sets of forms.

A high-level form description language provides the means for defining and positioning boxes, for composing them into higher-level groups, and for associating various properties with them. These descriptions can be compiled to run on desired target displays.

B. Inference Net Integration

An important design feature of SYNTEL is the close relation between the user interface and the control of propagation in the inference net. Control is accomplished through explicit links between variables held in nodes of the inference net and display objects. Among the more important links are the following:

Input data: A link can specify a node which is to receive a user-supplied value. When a user enters a value for a variable (typically by menu-selection), the forward-propagation process is initiated. Since any variable can be selected, the underlying flow of control is effectively in the hands of the user*.
Output and limited propagation: A node value can be displayed (in any of several formats) in a linked object, and is updated whenever a change occurs. As mentioned earlier, propagation does not proceed "upward" beyond currently-displayed objects.

Conditional display: In our domains of interest, the complete assessment of a single case can in principle require the user to supply a very large number of individual data items. In practice, fortunately, most cases can be resolved after supplying only a small subset of the possibly-relevant data. A forward-chaining system, however, cannot easily take advantage of this opportunity to spare the user from the need to supply values for, or even to scan, unnecessary items. Our design approach to this difficulty has been to hide from the user all information or requests for data whose present relevance is problematic. We accomplish this by allowing the display of any object to be conditional on the value of any logical variable. Figure 2 shows the Fire Risk form as it applies to Location 2. The input data requested for this building is rather different from that shown in Figure 1, because the two buildings differ in key aspects such as age, height and construction-type. Thus, while the user controls the flow of inferences, this control can be exercised only within currently-relevant portions of the knowledge base.

The user-interface definition facility has allowed us to factor knowledge base design into two independent parts.
The inference net is based on decision criteria supplied by experts, while the interface is governed by the need to provide a smooth, natural and efficient environment for the end user.

V RUN-TIME CAPABILITIES

The SYNTEL run-time environment contains a number of features to increase the efficiency and confidence of the end user. For reasons of space, we describe just two of them: the backward chaining facility and the explanation facility.

A. Backward Chaining

While we believe that the user should control the interaction, we have nonetheless found it important to indicate which missing data items are especially relevant in the current context. To this end, we have designed a best-first search algorithm based on sensitivity analysis. Working backwards from an identified goal node, the algorithm uses the probability distribution at each daughter node to estimate the amount by which that daughter is likely to change the distribution of the parent. The process continues until input nodes are reached. The input node having the largest likelihood of changing the goal node is indicated to the user, who is free to accept the advice or to supply some other piece of data.

B. Explanations

The ability to explain reasoning is an important feature of expert systems, improving both the efficiency of knowledge engineering and the acceptability by users. However, as representational power has grown beyond that of the earliest systems, it has become increasingly difficult to generate cogent explanations. Because we wish to minimize the need for knowledge engineers to define significant auxiliary structures, we have had to rule out approaches like those described by [Neches, et al., 1985] and [Smith, et al., 1985]. Instead, we adopted a simpler approach based on refining the standard notion of the support for an inference. In a functional representation such as ours, the support for a variable is simply the arguments of the function that computes its value.
We extend this slightly and define explainable support to be a subset of the support so designated by the knowledge engineer. Explanations can then be restricted to include only those variables whose meanings are judged to be intuitive and useful to the end user.** A recursive procedure allows the user to explore explainable supports to any desired depth.

VI STATUS AND CONCLUSIONS

SYNTEL has been implemented on a Xerox 11xx in Interlisp-D and in a distributed IBM mainframe/workstation environment in PL/I and C. Several large knowledge bases have been built, with the aid of a well-developed knowledge engineering environment, to address risk assessment problems in commercial insurance and banking. What has been learned?

Figure 2: Fire risk at Location 2

*This is analogous to the use of active values in access-oriented programming systems [Stefik, et al., 1986]. In SYNTEL, however, function evaluation is invoked uniformly by the system, rather than by programmer-provided explicit triggers.

**Even though a SYNTEL knowledge base contains no control information--a standard problem for explanation systems--the expert may judge some intermediate variables to represent unintuitive concepts.

The salient feature of SYNTEL is its functional representation of knowledge. Even experienced computer scientists and software engineers require a little time before they are fully comfortable with this purely non-procedural representation. Significantly, however, computer-naive domain experts find it very natural to describe their expertise in these terms. This suggests that we have at least partially achieved our goal of matching the representation to the structure of domain knowledge. This conclusion is further supported by the fact that no knowledge base built to date has had to escape to the underlying implementation language. We have come to appreciate the benefits to the user of a "soft" mixed-initiative system.
Forward chaining provides the basis for high-bandwidth, user-controlled interactions. The features for computing advice and conditionally controlling the display prevent the interactions from becoming unfocused and time-wasting. A final point: We have tenaciously maintained the purity of the non-procedural architecture, with consequences extending beyond philosophical gratification. It has allowed the system to undergo major enhancements with minimum re-coding.

ACKNOWLEDGEMENTS

SYNTEL has benefited greatly from the work and ideas of many people. We especially wish to acknowledge the contributions of A. Barzilay, R. O. Duda, S. Gower, J. Harris, P. E. Hart, M. Ljungberg, J. Reiter, J. F. Rulifson, and A. White. We also thank D. Bobrow, A. Schiffman and J. Aikens for their constructive comments on earlier versions of this paper.

REFERENCES

Brownston, L., R. Farrell, E. Kant, and N. Martin, Programming Expert Systems in OPS5, Addison-Wesley, 1985.

Duda, R. O. and R. Reboh, "AI and Decision Making: The PROSPECTOR Experience," in W. Reitman, Ed., Artificial Intelligence Applications for Business, pp 111-147 (Ablex Publishing, Norwood, NJ, 1984).

Gordon, J., and E. H. Shortliffe, "A Method of Managing Evidential Reasoning in a Hierarchical Hypothesis Space," Artificial Intelligence, 26 (3), pp 323-358, (July, 1985).

Heckerman, D., "Probabilistic Interpretations of MYCIN's Certainty Factors," Proc. Workshop on Uncertainty and Probability in Artificial Intelligence, (UCLA, 14-16 August, 1985).

Hickam, D. H., E. H. Shortliffe, M. B. Bischoff, A. Carlisle Scott, and C. D. Jacobs, "The Treatment Advice of a Computer-Based Cancer Chemotherapy Protocol Advisor," Annals of Internal Medicine 103 (6), (December 1985).

Lucas, P. and T. Risch, "Representation of Factual Information by Equations and Their Evaluation," Sixth International Conference on Software Engineering, (Tokyo, 13-16 September 1982).

Neches, R., W. R.
Swartout, and J. Moore, "Explainable (and Maintainable) Expert Systems," Proc. Ninth International Joint Conference on Artificial Intelligence, (Los Angeles, 18-23 August 1985).

Pearl, J., "How to Do with Probabilities What People Say You Can't," Proc. Second Conference on Artificial Intelligence Applications, IEEE Computer Society, (Miami Beach, 11-13 December, 1985).

Reboh, R., J. Reiter and J. Gashnig, "Development of a Knowledge-Based Interface to a Hydrological Simulation Program," Final Report, SRI Project 3477, (May 1982).

Schor, M. I., "Declarative Knowledge Programming: Better than Procedural?," IEEE Expert, 1 (1), (Spring 1986).

Slagle, J. R., and H. Hamburger, "An Expert System for a Resource Allocation Problem," Comm. ACM, 28 (9), (September, 1985).

Smith, R., H. A. Winston, T. M. Mitchell, and B. G. Buchanan, "Representation and Use of Explicit Justifications for Knowledge Base Refinement," Proc. Ninth International Joint Conference on Artificial Intelligence, (Los Angeles, 18-23 August 1985).

Stefik, M. J., D. G. Bobrow, and K. M. Kahn, "Integrating Access-Oriented Programming into a Multiparadigm Environment," IEEE Software, 3 (1), (January, 1986).

Wadge, W. W. and E. A. Ashcroft, Lucid, the Dataflow Programming Language, Academic Press, 1985.

Zadeh, L., "Syllogistic Reasoning as a Basis for Combination of Evidence in Expert Systems," Proc. Ninth International Joint Conference on Artificial Intelligence, (Los Angeles, 18-23 August 1985).
GBB: A Generic Blackboard Development System

Daniel D. Corkill, Kevin Q. Gallagher, and Kelly E. Murray
Department of Computer and Information Science
University of Massachusetts
Amherst, Massachusetts 01003

Abstract

This paper describes a generic blackboard development system (GBB) that unifies many characteristics of the blackboard systems constructed to date. The goal of GBB is to provide flexibility, ease of implementation, and efficient execution of the resulting application system. Efficient insertion/retrieval of blackboard objects is achieved using a language for specifying the detailed structure of the blackboard as well as how that structure is to be implemented for a specific application. These specifications are used to generate a blackboard database kernel tailored to the application. GBB consists of two distinct subsystems: a blackboard database development subsystem and a control shell. This paper focuses on the database support and pattern matching capabilities of GBB, and presents the concepts and functionality used in providing an efficient blackboard database development subsystem.

I Introduction

Historically, blackboard-based AI systems have been implemented from scratch, often by layering a blackboard architecture on top of other support systems. This has fostered a notion that blackboard-based architectures are difficult to build and slow in execution. Despite this notion, AI system implementers are increasingly considering blackboard architectures for their applications. Unlike rule-based and frame-based AI architectures where a variety of commercial and academic system development shells are now available, an application developer considering a blackboard approach remains largely unassisted.

A microcosm of this situation existed at the University of Massachusetts. Several large blackboard-based AI systems had been implemented [1,2], and a number of additional blackboard-based applications were being considered.
We decided to pool our experience in implementing blackboard systems into a common development system. We felt that by consolidating our implementation resources we could construct a generic system that would be more efficient than any of the individual systems, if they all were constructed from scratch. The goal for the blackboard development system was to reduce the time required to implement a specific application and to increase the execution efficiency of the resulting implementation.

This research was sponsored in part by the National Science Foundation under CER Grant DCR-8500332, by the National Science Foundation under Support and Maintenance Grant DCR-8318776, and by the Defense Advanced Research Projects Agency, monitored by the Office of Naval Research under Contract NRO49-041.

This paper describes the resulting generic blackboard development system, termed GBB (Generic Blackboard). The GBB approach is unique in several aspects:

1. A strong emphasis was made on efficient insertion and retrieval (pattern matching) of blackboard objects. GBB was designed to efficiently implement large blackboard systems containing thousands of blackboard objects.

2. A non-procedural specification of the blackboard and blackboard objects is kept separate from a non-procedural specification of the insertion/retrieval storage structure (Figure 1). This allows a "blackboard administrator" to easily redefine the blackboard database implementation without changing the basic blackboard/object specification or any application code. Such flexibility is not only important during the initial development of the application system, but also to maintain efficient database operation as the scale and characteristics of the application evolve during its use. Both specifications are used by the GBB database code generator to produce an efficient blackboard kernel tailored for the specific application.

3.
We have defined a general composite blackboard object for representing objects composed of discrete elements (such as a phrase of words or a track of vehicle sightings).

4. We have defined a pattern language for retrieving simple and composite objects from the blackboard. The application programmer has the ability to insert additional procedural filtering functions into the basic retrieval process. This can be significantly more efficient than applying the filters to the results of the retrieval.

5. A clean separation was made between the database support subsystem of GBB and the control level.

Figure 1: The GBB Database Subsystem

This allows different control shells to be implemented using the common database support subsystem. (We feel that it is premature to force a particular control architecture on all blackboard applications.) The interface between the two subsystems is a set of blackboard events, signals indicating the creation, modification, or deletion of blackboard objects. We are implementing several control shells as part of GBB, however an application implementer is free to develop a different control shell using the GBB database subsystem.

The emphasis on database efficiency separates GBB from the generic blackboard architectures of Hearsay-III [3] and BB1 [4]. Although both Hearsay-III and BB1 are domain independent blackboard architectures, their focus is on generalizing control capabilities. The major contribution of GBB is not in any extension of the technology of blackboard architectures, but in the unification of existing blackboard technologies into a development system for high-performance applications.
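The event-based interface between the two subsystems can be pictured with a small sketch (hypothetical Python; GBB itself is implemented in Common Lisp, and all names below are invented): the database signals creation, modification, and deletion events, and any control shell simply subscribes to them.

```python
# Toy model of the GBB database/control split: the database knows
# nothing about control; it only signals blackboard events.
class BlackboardDatabase:
    def __init__(self):
        self.objects = {}
        self.listeners = []            # control shells register here

    def subscribe(self, listener):
        self.listeners.append(listener)

    def _signal(self, event, name):
        for listener in self.listeners:
            listener(event, name)

    def create(self, name, obj):
        self.objects[name] = obj
        self._signal("creation", name)

    def modify(self, name, obj):
        self.objects[name] = obj
        self._signal("modification", name)

    def delete(self, name):
        del self.objects[name]
        self._signal("deletion", name)

events = []
db = BlackboardDatabase()
db.subscribe(lambda event, name: events.append((event, name)))
db.create("hyp-1", {"level": "word"})
db.modify("hyp-1", {"level": "word", "belief": 0.8})
db.delete("hyp-1")
print(events)
# [('creation', 'hyp-1'), ('modification', 'hyp-1'), ('deletion', 'hyp-1')]
```

A control shell would react to each event by rating and scheduling knowledge source activations; nothing in the database layer commits an application to a particular control architecture.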
The remainder of the paper describes GBB in more detail, focusing on its database support subsystem and pattern matching capabilities. The implementation of control shells will not be described. GBB is implemented in Common Lisp and is now being tested in a Common Lisp reimplementation of the Distributed Vehicle Monitoring Testbed [1].

II Specifying the Blackboard Structure

An application implementer using the GBB system must specify the structure of the blackboard and the objects that will reside on it. In GBB, the blackboard is a hierarchical structure composed of atomic blackboard pieces called spaces.¹ For example, the blackboard abstraction levels (phrase, word, syllable, etc.) of the Hearsay-II speech understanding system would be implemented as spaces in GBB. In addition to being composed of spaces, a blackboard can also be composed of other blackboards (themselves eventually composed of spaces). Finally, blackboards and spaces can be replicated at blackboard initialization time (discussed in Section V). Spaces are defined first, using define-spaces:

    define-spaces spaces [documentation] &KEY dimensions    [Macro]

Spaces is a list of space names. Dimensions is a list of specifiers defining the dimensionality of the spaces.

The concept of space dimensionality is crucial to efficient insertion/retrieval of blackboard objects, and is best introduced by examples from existing systems. In addition to having the blackboard subdivided into multiple information levels, the levels of the Hearsay-II speech understanding system (HS-II) [5] and the Distributed Vehicle Monitoring Testbed (DVMT) [1] are structured. That is, the blackboard objects are placed onto appropriate areas within each level based on their attributes. In HS-II, each level has one dimension, time.

¹In designing GBB, we used names that did not evoke preconceived notions from previous blackboard systems. Hence the term "space" rather than "level" and the term "unit" rather than "hypothesis" or "object".
In the DVMT, each level has three dimensions: time and x, y position. In both systems, each level in the system has the same dimensionality. This may not be the case for other application areas, and GBB allows individual spaces to have different dimensionality. (Spaces with differing dimensionality are declared using multiple calls to define-spaces.)

An important aspect of the level dimensionality in HS-II and the DVMT is that each dimension is ordered. This means that there is a notion of objects being "nearby" other objects. In HS-II this idea of neighborhood allows retrieval of words to extend a phrase whose begin time is "close" to the phrase's end time. In the DVMT, a vehicle classification is made from a component frequency track by looking on the blackboard for other component frequency tracks that are positioned close to the original track throughout its length.

In addition to ordered dimensions, GBB supports enumerated dimensionality. An enumerated dimension contains a fixed set of labeled categories. For example, in the DVMT, a hypothesis classifying a vehicle could be placed on a space containing a "classification" dimension, where the dimension's label set consists of vehicle types. GBB allows a space to have both ordered and enumerated dimensions.

The dimensionality of each space is an important part of system design. Although GBB provides flexibility in specifying space dimensionality, the application implementer must determine what is appropriate for the particular application. It should be stressed that specifying the dimensionality of spaces is primarily an issue of representation, not of database efficiency. Efficiency decisions will be discussed in Section IV.

Returning to the dimension specification in define-spaces, each dimension is specified as a list where the first element is the name of the dimension and the remainder is a list of keyword/value pairs describing the dimension.
The two defined keywords are :RANGE, corresponding to ordered dimensions, and :ENUMERATED, corresponding to dimensions of enumerated classes. The argument to :RANGE is a list of (lower-bound upper-bound) or :INFINITE, indicating a range of (−∞, +∞). The argument to :ENUMERATED is the label set for the enumerated dimension. For example:

    (define-spaces (vehicle-location vehicle-track)
      :DIMENSIONS ((time :RANGE (0 30))
                   (x :RANGE (-1000 1000))
                   (y :RANGE (-1000 1000))
                   (classification :ENUMERATED
                     (Chevy Porsche Toyota VW-beetle unknown))))

defines two spaces with identical dimensionality. It is an error to attempt placement of a blackboard object outside the range of an ordered dimension or outside the label set of an enumerated dimension. If dimensions is omitted or nil, the space has no dimensionality. Such spaces are unstructured.

Once an application's spaces have been defined, the blackboard hierarchy is defined using define-blackboards:

    define-blackboards blackboards components [documentation]    [Macro]

Blackboards is a list of blackboard names. Components is a list of symbols naming those spaces and/or blackboards that will be the children of blackboards. For example:

    (define-blackboards (hyp-bb goal-bb)
      (signal-location vehicle-location vehicle-track))

defines two blackboards, each having three spaces.

III Specifying Blackboard Objects

Once the blackboards and spaces have been specified, the blackboard objects are defined. In GBB, all blackboard objects are termed units. Hypotheses, goals, and knowledge source activation records are typical examples of units. A unit is an aggregate data type similar to those created using the Common Lisp defstruct macro, but only units can be placed onto blackboard spaces. Units are defined using define-unit:

    define-unit name-and-options [documentation]
      &KEY slots links indexes    [Macro]

The name-and-options argument is exactly the same as defstruct with several extensions.
First, a function to generate a name for each unit instance can be specified (using :NAME-FUNCTION). This function is called with the newly created unit instance after all slots in the unit have been initialized, but before the unit is placed onto the blackboard. The function returns a string that is used in a special read-only slot, name, that is implicitly defined if the :NAME-FUNCTION option is used.

Second, it is often useful when interacting with a blackboard system to retrieve a unit by name rather than through a pattern match on its attributes. GBB provides this capability through a separate hash table of units (indexed by name) that can be dynamically created/destroyed as needed. Code for performing these activities is generated for a unit if the :HASH-UNIT option value is non-nil.

Finally, the signaling of blackboard events associated with creating and deleting the unit can be controlled using the :EVENTS option. The :EVENTS argument is a list of :CREATION and/or :DELETION, indicating which events are to be signaled to the control shell for instances of this unit. The :EVENTS argument can also be nil, indicating that no unit events are to be signaled.

The slots argument contains a list of slot-descriptions that are also identical to defstruct with one addition. Any slot can have a slot option :EVENTS that is a list of event-name and event-predicate-function pairs. Each event-predicate-function is evaluated each time the value of the slot is modified. If the event-predicate-function returns true, the corresponding event-name is signaled.

The links argument defines additional slots that hold interunit links. The name of the link is used as the new slot name.² By default, GBB forces all links to be bidirectional; each outgoing unit link must be defined with an accompanying inverse incoming link.
GBB generates special modification functions that maintain consistent link bidirectionality: linkf for adding a single link, linkf-list for adding a list of links, and unlinkf and unlinkf-list for deleting links. For example:

    (linkf (hyp$creating-ksi this-hyp) current-ksi)

adds current-ksi as a new creating-ksi of this-hyp.

²Note that the slot-names defined by links are implicitly defined as slots and are not included in define-unit's slots argument.

Each link-description in links has the form:

    (link-name [:SINGULAR]
      {:REFLEXIVE | (other-unit other-link [:SINGULAR])}
      [:EVENTS event-descriptions])

For example, here is a bidirectional link between hypothesis units:

    (supported-hyps (hyp supporting-hyps))
    (supporting-hyps (hyp supported-hyps))

The keyword :SINGULAR is used to implement one-to-one, one-to-many, and many-to-one links (the default is many-to-many). The keyword :REFLEXIVE is simply a shorthand for:

    (link-name (this-unit-name link-name))

The optional :EVENTS argument is identical to the event-predicate-function specification discussed for slots.

The indexes argument specifies how the unit is mapped onto spaces (termed indexing). There must be a space dimension corresponding to each unit index (additional space dimensions are acceptable). Slots containing unit indexing information must be described by an index-description of the form:

    (index-name slot-name)

where index-name is an indexing-structure specification that describes how to extract the dimensional indexes from slot-name. Indexing-structures are defined using define-index-structure, discussed below. In its simplest form, an index is just the name of a define-unit slot defined in slots. For example, if a unit had a slot named time containing a numeric value, GBB would have no problem placing that object on the time dimension of a space. Handling a slot value containing a range (such as the time span of a phrasal hypothesis) is also straightforward.
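The bidirectional-link invariant that linkf and unlinkf maintain can be sketched in a few lines of Python. This is purely illustrative (GBB generates the analogous Lisp functions from the link declarations); the class and link names mirror the supported-hyps/supporting-hyps example but are otherwise invented:

```python
# Sketch of bidirectional link maintenance in the spirit of linkf/unlinkf.
# Every outgoing link is paired with its inverse incoming link, so the two
# sides can never get out of sync.

class Hyp:
    def __init__(self, name):
        self.name = name
        self.supported_hyps = []    # inverse of supporting_hyps
        self.supporting_hyps = []   # inverse of supported_hyps

def linkf(a, b):
    """Add b as a supported hyp of a, keeping the inverse link consistent."""
    a.supported_hyps.append(b)
    b.supporting_hyps.append(a)

def unlinkf(a, b):
    """Delete the link and its inverse together."""
    a.supported_hyps.remove(b)
    b.supporting_hyps.remove(a)

h1, h2 = Hyp("h1"), Hyp("h2")
linkf(h1, h2)
print([h.name for h in h2.supporting_hyps])   # h1 appears on the inverse side
```

The point of generating such functions rather than letting application code set link slots directly is that consistency is enforced at the only place links can change.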
Unfortunately, things are not always simple. One problem is that the indexes may be only a portion of a structured slot value, and GBB must be told how to extract the index information from the overall structure. A much more complex situation stems from the need to support composite units. A composite unit is a unit that has multiple elements along one or more of its dimensions. An example of a composite unit is a track of vehicle sightings. Each sighting is an x,y point at a particular moment in time. One way to represent such a track is a time-location-list:

    ((time1 (x1 y1)) (time2 (x2 y2)) ... (timeN (xN yN)))

Such a unit does not occupy a single large volume of the blackboard, but rather a series of points connected along the time dimension. To indicate this, time-location-list must be declared as a composite index-structure.

The information needed to decode a datatype into its dimensional indexes is specified using define-index-structure:

    define-index-structure name [documentation]
      &KEY type composite-type composite-index
           element-type indexes    [Macro]

The name argument is a symbol that is defined as a new Lisp datatype. Type is used when the datatype to be decoded is a simple (non-composite) datatype and simply defines the new datatype name as a synonym for the existing datatype type. For a composite datatype, composite-type, composite-index, and element-type must be specified in place of type. The composite-type argument specifies the type of sequence that contains the individual index elements. Composite-index specifies the dimension connecting the composite elements (for example, time). Element-type specifies the datatype of the composed elements. Finally, indexes defines how to extract the dimensional indexes from each element. The format for each index-dimension specifier is:

    (dimension {:POINT field {(type field)}* |
                :RANGE (:MAX field {(type field)}*)
                       (:MIN field {(type field)}*)})
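To make the idea of index extraction from a composite structure concrete, here is a minimal Python sketch. It decodes the same kind of time-location-list shown above into one (time, x, y) index triple per element; the accessor is hand-written here, whereas GBB derives it from the define-index-structure declaration:

```python
# Sketch: extracting per-dimension indexes from a composite time-location-list.
# Each element of the track yields one index triple, so the composite unit
# occupies a series of points along the time dimension, not one large volume.

track = [(3, (8, 11)), (4, (6, 10)), (5, (5, 8))]   # (time, (x, y)) elements

def extract_indexes(element):
    time, (x, y) = element
    return {"time": time, "x": x, "y": y}

points = [extract_indexes(e) for e in track]
print(points[0])
```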
For example:

    (define-index-structure TIME-LOCATION-LIST
      :COMPOSITE-TYPE list
      :COMPOSITE-INDEX time
      :ELEMENT-TYPE time-location
      :INDEXES ((time :POINT time)
                (x :POINT location (location x))
                (y :POINT location (location y))))

In the above example, GBB would know how to access the x index from the first element of the composite datatype time-location-list as:

    (location$x (time-location$location (first time-location-list)))

Note that all types and fields must be defined using defstruct or define-unit. Returning to the indexes argument of define-unit, slot-name is the name of a slot (from the :SLOTS argument). The slot must have a :TYPE slot-option whose value is the name of an index-structure. Index-name must be an index in that index-structure. Here is a highly abridged version of the hypothesis unit specification in the DVMT:

    (define-unit (HYP (:CONC-NAME "HYP$")
                      (:NAME-FUNCTION generate-hyp-name)
                      (:HASH-UNIT nil))
      "HYP (Hypothesis)"
      :SLOTS ((belief 0 :TYPE belief)
              (classification)
              (sensor-id 0 :TYPE sensor-index)
              (time-location-list () :TYPE time-location-list))
      :LINKS ((consistency-hyp :SINGULAR (hyp consistent-hyps))
              (consistent-hyps (hyp consistency-hyp :SINGULAR))
              (supported-hyps (hyp supporting-hyps))
              (supporting-hyps (hyp supported-hyps))
              (creating-ksis (ksi created-hyps)))
      :INDEXES ((time time-location-list)
                (x time-location-list)
                (y time-location-list)
                (classification classification)))

IV Implementing the Database

The previous sections presented the blackboard and unit specifications that must be supplied by the application implementer. To this point, the specifications defined representational aspects of the application. This section describes how particular implementations of the blackboard database are specified. We concentrate on ordered dimensions; enumerated dimensions are typically implemented as sets or hash tables.
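A rough Python sketch shows why enumerated dimensions need no special machinery: a fixed label set maps directly onto a hash table, with a membership check enforcing the label-set constraint. The labels echo the classification example above; everything else (class and unit names) is invented for illustration:

```python
from collections import defaultdict

# Sketch: an enumerated dimension maps each label in its fixed label set to
# the units stored under that label; retrieval is a direct hash lookup.

class EnumeratedDimension:
    def __init__(self, labels):
        self.labels = set(labels)
        self.table = defaultdict(list)

    def insert(self, label, unit):
        if label not in self.labels:          # placement outside the label set
            raise ValueError("unknown label: " + label)
        self.table[label].append(unit)

    def retrieve(self, label):
        return list(self.table[label])

dim = EnumeratedDimension(["Chevy", "Porsche", "Toyota", "VW-beetle", "unknown"])
dim.insert("Toyota", "hyp-1")
dim.insert("Toyota", "hyp-2")
print(dim.retrieve("Toyota"))
```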
The implementation machinery for storing units on spaces is specified using define-unit-mapping:

    define-unit-mapping units spaces [documentation]
      &KEY indexes index-structure    [Macro]

Units is a list of unit names, where each unit has identical index dimensions (as defined by the define-unit indexes argument). Spaces is the list of spaces whose implementation machinery is being defined. Note that the same unit type can be stored differently on different spaces, and that different unit types can be stored differently on the same space. Indexes is the list of indexes whose implementation machinery is being defined. Index-structure defines the implementation machinery.

Simple hashing techniques do not work for ordered dimensions due to the neighborhood relationship among units. The storage structure must be able to quickly locate units within any specified range of a dimension. A standard solution is to divide the range of the dimension into a series of buckets. Each bucket contains those units falling within the bounds of the bucket. The number of buckets and their sizes provide a time/space tradeoff for unit insertion/retrieval. The bucket approach requires that a pattern range be converted into bucket indexes and that units retrieved from the first and last bucket be checked to insure that they indeed are within the pattern range.

In a three-dimensional blackboard (x, y, and time) the bucket approach becomes more complicated. One approach would be to define a three-dimensional array of buckets. A second approach would be to define three one-dimensional bucket vectors and have the retrieval process intersect the result of retrieving in each dimension. To indicate that several dimensions should be stored together in one array, they are grouped together with an extra level of parentheses.
For example, ((time x y)) would specify a three-dimensional array, and (time (x y)) would specify a vector for time and a two-dimensional array for (x, y). Here is a three one-dimensional-vector example:

    (define-unit-mapping (unit1 unit2) (space1)
      :INDEXES (time x y)
      :INDEX-STRUCTURE
        ((time :SUBRANGES (:START 5)
                          (5 15 (:WIDTH 5))
                          (15 25 (:WIDTH 2))
                          (25 :END))
         (x :SUBRANGES (:START :END (:WIDTH 5)))
         (y :SUBRANGES (:START :END (:WIDTH 2)))))

V Instantiating the Blackboard

Once the structure of the blackboard database has been specified with the functions presented above, it may be instantiated. This creates all the internal structures needed by GBB to actually store unit instances. Sometimes it is useful to be able to create several copies of the entire blackboard database or copies of parts of it. For example, to simulate a multiprocessor blackboard system one could instantiate a copy of the blackboard database for each processor. Instantiation is done via instantiate-bb-database:

    instantiate-bb-database replication-desc    [Function]

Replication-desc describes the blackboard hierarchy to be created. In the simplest case, it is a symbol that names the root of the tree to be instantiated.³ This would instantiate one copy of each of the nodes in the tree (all the leaves would be space instances and the interior nodes would be blackboard instances). The general form of replication-desc is:

    {name | (name [replication-count] [description ...])}

Name is the name of a blackboard or a space; replication-count is an integer specifying how many copies of the subtree to create; and description is a replication-desc for one of the components of the specified blackboard (or space). For example:

    (instantiate-bb-database
      '(top-level 3 (goal-bb (level-one 2))
                    (hyp-bb (level-three 3))))

would create three copies of the blackboard database rooted at the blackboard top-level. Each copy would have two copies of level-one and three copies of level-three.
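The recursive expansion of a replication description can be sketched in Python. This is a simplification, not GBB's internals: the sketch requires an explicit count in each tuple (the real syntax makes the count optional), and the hierarchy table stands in for the define-blackboards declarations. Components not mentioned in the description get one copy, as the text above states:

```python
# Sketch of replication-desc expansion: a desc is a name or
# (name, count, *component-descs); unmentioned components default to one copy.

HIERARCHY = {"top-level": ["goal-bb", "hyp-bb"],
             "goal-bb": ["level-one"],
             "hyp-bb": ["level-three"],
             "level-one": [], "level-three": []}

def instantiate(desc):
    if isinstance(desc, str):
        name, count, subs = desc, 1, []
    else:
        name, count, *subs = desc
    # Map each mentioned component name to its overriding description.
    overrides = {s if isinstance(s, str) else s[0]: s for s in subs}
    copies = []
    for _ in range(count):
        children = []
        for child in HIERARCHY[name]:
            children.extend(instantiate(overrides.get(child, child)))
        copies.append({"name": name, "children": children})
    return copies

roots = instantiate(("top-level", 3,
                     ("goal-bb", 1, ("level-one", 2)),
                     ("hyp-bb", 1, ("level-three", 3))))
print(len(roots))   # three copies of the whole database
```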
Any defined blackboards or spaces not mentioned in the replication-desc would have one copy created.

³Note that this need not be the root of the entire blackboard hierarchy but can be any node in the tree. This would allow, for example, different parts of the blackboard database to be distributed (and possibly replicated) across a network of processors.

⁴The unit creation function is automatically generated by define-unit.

VI Creating Units

A unit is created and placed onto a space using the function make-unit-type:⁴

    make-unit-type {blackboard-path-element}+
      {slot-keyword slot-value}*    [Function]

The blackboard-path-element arguments uniquely name the space that is to receive the unit. The simplest blackboard path is a space name. If a space name is not unique, it must be qualified by its parent blackboards' names until it is unique. In addition, replicated blackboards and/or spaces must be appropriately indexed. Values for the newly created unit can be specified by slot-keyword slot-value pairs. Link slots can also be specified for the newly created unit, and GBB insures that inverse links are also created.

In addition to creating the unit, make-unit-type constructs the indexing information needed to retrieve the unit from the blackboard, invokes the name generation function (if specified in define-unit), and inserts the unit into the unit hash table (if unit hashing is enabled).

VII Unit Retrieval (Pattern Matching)

Blackboard systems spend a significant amount of time searching the database. Because retrieval is so important, we have given the application programmer the means to make it as efficient as possible by eliminating candidate units early in the retrieval process. This is done in two ways. First, the user can specify specialized filter functions that are applied between the initial retrieval of units (such as from a set of buckets) and the subsequent checking of pattern inclusion.
Second, the pattern language is rich enough to allow the application programmer to specify complex retrieval patterns that can be analyzed and optimized by GBB. The result is a reduction in retrieval time and, equally important, a reduction in the amount of temporary storage and consing required for unit retrieval. The primitive function for retrieving units from spaces is find-units:

    find-units units {blackboard-path-element}+
      &KEY pattern filter-before filter-after    [Macro]

The units argument identifies which unit types are to be retrieved. The blackboard-path-element arguments uniquely name the space that is to be searched. The simplest blackboard path is a space name. If a space name is not unique, it must be qualified by its parent blackboards' names until it is unique. In addition, replicated blackboards and/or spaces must be appropriately indexed.

The two keyword arguments filter-before and filter-after specify predicates to perform application-specific filtering of the candidate units. The filter-before predicates are applied to the initially retrieved units before the pattern matching tests and are intended as a quick first test to shrink the search space. The filter-after predicates are run after the pattern matching tests and can perform additional acceptance testing.

The other keyword argument is the retrieval pattern that describes the criteria that must be met by the units retrieved. The simplest pattern is the keyword :ALL that matches all of the specified units on the specified space. A pattern can also be quite complex, represented by a list of pattern specifiers. Much of the richness in the pattern specifier language supports the retrieval of composite units, and many of the options are meaningless unless the pattern's index structure is a composite structure.
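The retrieval pipeline, from the bucket storage of Section IV through the filter-before predicates and the pattern test, can be sketched in Python. This is an illustration of the control flow only; the bucket layout, predicate, and unit data are invented, and GBB's Lisp implementation differs:

```python
from bisect import bisect_right

# Sketch of the retrieval pipeline: map a pattern range onto index buckets,
# recheck units from the boundary buckets, apply filter-before predicates
# before the (costlier) pattern test, and filter-after predicates afterwards.

def retrieve_candidates(boundaries, buckets, lo, hi):
    first = max(bisect_right(boundaries, lo) - 1, 0)
    last = max(bisect_right(boundaries, hi) - 1, 0)
    for i in range(first, last + 1):
        for value, unit in buckets[i]:
            if lo <= value <= hi:            # boundary-bucket recheck
                yield unit

def find_units(candidates, pattern_test, filter_before=(), filter_after=()):
    results = []
    for unit in candidates:
        if not all(f(unit) for f in filter_before):
            continue                         # cheap pruning before the match
        if pattern_test(unit) and all(f(unit) for f in filter_after):
            results.append(unit)
    return results

boundaries = [0, 10, 20, 30]                 # bucket lower bounds on "time"
buckets = [[(4, {"belief": 0.9, "time": 4})],
           [(12, {"belief": 0.2, "time": 12})],
           [(22, {"belief": 0.8, "time": 22})],
           []]
hits = find_units(retrieve_candidates(boundaries, buckets, 0, 15),
                  pattern_test=lambda u: u["time"] <= 15,
                  filter_before=[lambda u: u["belief"] > 0.5])
print(hits)
```

Applying the belief filter before the pattern test, rather than to the retrieval's results, is what makes filter-before cheaper: pruned units never reach the pattern machinery at all.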
A non-trivial pattern is based on a pattern-object that may be either an index element, a composite structure, or a concatenation of index elements or composite structures. Index structures that are concatenated together need not all be the same, nor does the index structure of the pattern need to be the same as the index structure of the unit. GBB is able to efficiently map from one index structure representation to another. When a pattern needs to be constructed by splicing together components of different index structures, GBB decomposes all patterns/objects into sequences of simple dimensional ranges to avoid expensive type conversions.

The pattern-object specifies a region of the blackboard in which to look for the units. It is either a list of options or, to concatenate several index structures, a list whose first element is the symbol :CONCATENATE and whose remaining elements are lists of options. The keywords used to specify the pattern-object are:

index-object: This is an index structure, for example, a time-location-list.

index-type: This is the type of the index-object. It is the name of an index structure.

select: This allows extraction of a subsequence of a composite structure based on the value of the composite index (for example, time).

subseq: This allows extraction of a subsequence of a composite structure based on the position in the sequence (the same as selection of a subsequence of a vector).

delta: This expands or contracts a range or expands a point into a range.

displace: This allows the index-object to be translated along one or more of its dimensions.

The other pattern specifier keywords are:

element-match: This specifies how each index element from the unit is compared with the index element from the pattern-object. It may be one of :EXACT, :OVERLAPS, :INCLUDES, or :WITHIN. :EXACT means that the unit's index element must exactly match the pattern's. :INCLUDES means that the unit's index element must include the pattern's.
:WITHIN means that the unit’s index element must be within the pattern’s, :OVERLAPS means that the unit’s index element must overlap with the pattern’s. before-extras and after-extras: The argument is a range that specifies the minimum and maximum number of index elements that the unit may have before (or after) the index elements mentioned in the puttern- object. The argument can also be :DONT-CARE that is short for the range (o MOST-POSITIVE-FIXNUM). match: This is an inclusive lower bound on the number of index elements that must match. This can either be expressed as a percentage of the length of the pattern-obl’ect, by saying ( : PERCENTAGE 50) or an absolute count by saying (:COUNT 51, or as a difference from the length of the pattern-object by saying (:ALL-BUT 2). mismatch: This is an inclusive upper bound on the number of index elements that are allowed to not match. “Not matching” means that the unit has an index element for that composite index (for example, time) that does not match (according to the : ELEMENT-MATCH criterion) with the index element in the puttern- object. This does not include index elements that appear in the p&tern-object but do not have a corresponding index element in the unit (call these skipped). (See Figure 2.) contiguous: If this is true, then the index elements that match must be contiguous along the composite index dimension. Pattern: Unit: -t Eztra Match Match Mismatch Skipped Figure 2: Composite Unit Matching Conditions For example: (find-units '(ghyp hyp) 'goal-bb 'vehicle-track :PATTERN T:PATTERN-OBJECT (:CONCATENATE (:INDEX-TYPE time-region-list :INDEX-OBJECT (#<TIME-REGION 3 (8 11) (4 6)> #<TIME-REGION 4 (6 10) (6 8)> #<TIME-REGION 5 (5 8) (8 9>>> :DISPLACE ((x 4) (y 2))) (:INDEX-TYPE time-location :INDEX-OBJECT #<TIME-LOCATION 6 4 lO> :DELTA ((x 2) (y 2)))) :ELEMENT-MATCH :INCLUDES :MATCH (ZPERCENTAGE 75) :MISMATCH 2 :BEFORE-EXTRAS (0 5) :AFTER-EXTRAS (0 0) :CONTIGUOUS T) :FILTER-BEFORE '(sufficient-belief)). 
Another useful form of unit retrieval is provided by map-space:

    map-space function units {blackboard-path-element}+    [Function]

Function specifies a function that is to be applied to each type of unit (specified in units) that resides on the space specified by {blackboard-path-element}+. Map-space insures that function is not applied more than once to any unit.

VIII Summary and Future Developments

High-performance blackboard-based AI systems demand much more than a multilevel shared database. An application's blackboard may have thousands of instances of a few classes of blackboard objects scattered within its database. GBB provides an efficient blackboard development system by exploiting the detailed structure of the blackboard in implementing primitives for inserting/retrieving blackboard objects. A control shell (implemented using GBB's blackboard database support) is used to generate a complete application system.

We have presented a brief description of the database subsystem of GBB. Length limitations have prevented a thorough discussion of all the details and rationale for particular decisions. We have tried to convey both the capabilities of GBB database support and some of the issues that must be faced in implementing a high-performance blackboard development system. Although GBB has been implemented and is in use, its development continues. Much effort is being applied to performing compile-time optimizations of insertion/retrieval operations. The next phase of GBB development will be to extend the space specification and initialization aspects of GBB to support a blackboard database that is distributed among a network of processing nodes.

References

[1] Victor R. Lesser and Daniel D. Corkill. The Distributed Vehicle Monitoring Testbed: A tool for investigating distributed problem solving networks. AI Magazine, 4(3):15-33, Fall 1983.

[2] Allen R. Hanson and Edward M. Riseman.
VISIONS: A computer system for interpreting scenes. In Allen R. Hanson and Edward M. Riseman, editors, Computer Vision Systems, pages 303-333, Academic Press, 1978.

[3] Lee D. Erman, Philip E. London, and Stephen F. Fickas. The design and an example use of Hearsay-III. In Proceedings of the Seventh International Joint Conference on Artificial Intelligence, pages 409-415, Tokyo, Japan, August 1981.

[4] Barbara Hayes-Roth. A blackboard architecture for control. Artificial Intelligence, 26(2):251-321, March 1985.

[5] Lee D. Erman, Frederick Hayes-Roth, Victor R. Lesser, and D. Raj Reddy. The Hearsay-II speech-understanding system: Integrating knowledge to resolve uncertainty. Computing Surveys, 12(2):213-253, June 1980.
ISCS - A TOOL KIT FOR CONSTRUCTING KNOWLEDGE-BASED SYSTEM CONFIGURATORS

Harry Wu, Hon Wai Chun, Alejandro Mimo
Honeywell Information Systems
300 Concord Rd., MS895A, Billerica, MA 01821. 617-671-3663

I. INTRODUCTION

System configuration is the process by which a formalized description of a computer system and its interconnected components is produced based on an initial user account of the individual components. In recent years, expert system technology has been applied successfully to the construction of automatic system configurators such as R1/XCON (McDermott, 1982), OCEAN (Szolovits, 1985), ISC (Wu, 1985), and SYSCON (Rolston, 1985). Progress has been made in understanding the process of configuration and in identifying the general representational and functional requirements of these systems. This paper describes an intelligent system configuration shell (ISCS) which is specially designed as a tool kit to assist knowledge engineers in the construction and maintenance of system configurators. The power of the tool kit is derived from AI and software engineering techniques, leveraging the accumulated knowledge on configuration.

In recent years, many major advances in knowledge-based or expert system technology have been made (Hayes-Roth, 1983). In particular, certain features of knowledge representations (e.g. frame, rule, demon, etc.) and inference mechanisms (e.g. agenda, forward and backward chaining, etc.) have become the building blocks of many commercially available tools (shells) for developing knowledge-based systems (Richer, 1985). These commercial AI tools provide excellent development environments for experienced AI programmers but not domain experts; they are general purpose in nature but not problem oriented. The goal of our project is to adapt the existing INTERLISP-D and LOOPS (a multi-paradigm language -- Bobrow, 1981) environment on the XEROX 1108 workstation into one which is specially tailored for the configuration problem.
By specialization to a particular problem domain, we created a shell which is much more convenient to use and provides better integration of the various concepts and control mechanisms required for this problem domain. While the basic theories are interesting on their own and fundamental to AI, it is often the programming aspects that make the AI tools attractive to practitioners. Some have even argued that a significant part of the applicability of AI derives not from the AI techniques per se but from the underlying software technology (Sheil, 1983). Therefore, in addition to knowledge engineering techniques, the ISCS shell also contains several additional software features which facilitate the process of constructing system configurators.

The ISCS system has three main objectives: (1) to provide an integrated representational framework for the various knowledge sources relevant to the configuration problem, (2) to assist a knowledge engineer in the development and administration of the knowledge base, and (3) to aid a knowledge engineer in the construction of a system configurator. A prototype of the system is implemented in INTERLISP-D and LOOPS on a XEROX 1108. The shell is being used to construct configuration expert systems at Honeywell.

In the following sections, we describe how knowledge acquisition is eased through the use of a configuration language which is specially designed to represent the various knowledge sources for configuration, how knowledge encoding and modification may be aided by the knowledge engineer assistant module, and how the development of the user interface may be aided by a generic user interface generator. Implementation details are also given in the discussion.

II. CONFIGURATION

The task of configuration is one which selects and organizes objects into some system so that, functioning as a whole, it produces certain desired system behavior.
In this paper, we restrict the task to computer system configuration; that is, we are only interested in selecting and organizing devices and components to form a computer system. A configuration task may choose to address only software components, only hardware components, or both; we describe them as software configuration, hardware configuration, and integrated system configuration respectively.

Configuration activities are traditionally carried out by different organizations within a computer company for different purposes. For instance, sales representatives "configure" computer systems to fit the needs of customers while field engineers "configure" computer systems to fit physical installation requirements. R1 (McDermott, 1982) was developed before XSEL (McDermott, 1982), and thus the two are sometimes viewed as separate systems. In reality, the two activities complement each other. The difficulty of one activity may be reduced if knowledge relevant in the other activity can be made available; usually the sales representatives and field engineers maintain contact with each other in order to consummate an order. The design of ISCS is such that it may be used to construct a sales configurator, an engineer configurator, or one which encompasses both activities.

In order to come up with a shell for the system configuration problem domain, one must first identify the tasks and knowledge common to system configuration problems in general and then decide how they might be represented conveniently. This problem was first studied by McDermott in his seminal paper (McDermott, 1982), where he demonstrated that it can be solved by a rule-based system using only forward chaining. In a later system, OCEAN (Szolovits, 1985), the developers used a richer environment involving hybrid representations. Our approach is similar to that of OCEAN.
While a knowledge-based configurator system can be implemented by forward rules alone, we feel that by using a richer environment and representation, knowledge encoding is made conceptually closer to actual expert knowledge and hence easier to construct. In particular, maintenance of a knowledge base is made simpler by having modular sources of configuration knowledge.

One may view the configuration process as consisting of a data entry and value validation phase, a completion phase, and an assignment phase. The process usually starts with the specification of the components to be included in the target system and site-specific information. A configuration expert sorts through the mass of data to validate the values and to see if the information is complete. If components are missing, they are added to the system. Finally, the expert attempts to build a viable configuration which satisfies the site-specific requirements. The output from the configuration is a highly detailed and formalized document recording the spatial or functional layout of the components.

The configuration task is usually tedious and error-prone due to the large amount of information involved. Moreover, the information changes with time because old components and systems become obsolete while new ones are introduced. The task is also complicated by the fact that the initial specification is seldom accurate or complete; many iterations of the configuration process actually take place between the time a sales proposal is first initiated and the time the system is delivered and installed.

Several sources of knowledge are required in the system configuration process: knowledge about the individual components, knowledge about the interrelationships among components and between components and systems, knowledge about the configuration procedures, and knowledge about the format of the output description. In ISCS, separate knowledge representations are used to capture these knowledge sources.
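The three phases described above (validation, completion, assignment) can be sketched as a simple pipeline. This is purely illustrative Python; the catalog, the completion rule, and the trivial slot-assignment "layout" are invented, not ISCS's actual knowledge:

```python
# Sketch of the configuration phases: validate entered values, complete the
# component list, then assign components to positions.

CATALOG = {"cpu", "memory", "disk-drive", "disk-controller"}

def validate(components):
    """Data entry and value validation: drop entries not in the catalog."""
    return [c for c in components if c in CATALOG]

def complete(components):
    """Completion: add missing required components (one invented rule)."""
    out = list(components)
    if "disk-drive" in out and "disk-controller" not in out:
        out.append("disk-controller")
    return out

def assign(components):
    """Assignment: a trivial layout giving one cabinet slot per component."""
    return {c: slot for slot, c in enumerate(components)}

order = ["cpu", "disk-drive", "floppy-tape"]   # "floppy-tape" is invalid
layout = assign(complete(validate(order)))
print(layout)
```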
Although some of these ideas have been studied and advocated elsewhere (Hayes-Roth, 1983), our goal is to illustrate how they may be packaged and utilized in the domain of system configuration. In this section, we provide a brief conceptual overview of three of the more interesting knowledge sources - the component model, the constraint demons, and the configuration knowledge. Details on each knowledge source are provided in the next section. Information about components and system types is described by the component model, which defines the attributes and options of the components and systems. The component model constitutes a taxonomic hierarchy of classes and sub-classes and a part-of hierarchy to depict the relationship between parts and sub-parts. Properties may be inherited along the descendant path within a hierarchy. Constraints and dependencies may arise when different types of components are assembled together into a system. Constraint demons are provided to capture this type of dependency knowledge. Constraint demons have two basic constituents: descriptive predicates and sets of imperative actions. The predicates define the constraints and the context under which the demons should be triggered, while the actions indicate the activities (usually remedial corrections) to follow. The configuration knowledge consists of a hierarchy of procedural knowledge involved in configuration. At the bottom level are operations, where each operation acts on a set of objects of the same component type. For example, an operation may be selecting an object from a set, or modifying every object in a set. At the next level are tasks, which are composed of sequences of operations (hence tasks may act on objects of different component types). For instance, a task may act first on an aggregate and then on its sub-parts by a sequence of operations along a "part-of" path. Above the tasks are plans, which decide which tasks to execute.

III.
ISCS

Although there are a large variety of tools and knowledge representations embedded in ISCS, integration and coherence are achieved by relying on an object-oriented programming paradigm. A similar strategy has been adopted in another tool kit (Lafue, 1985). Figure 1 provides a schematic diagram of ISCS. In the middle of the figure are the modules of the shell, consisting of a knowledge engineer assistant module, a control and inference module, a user interface generator, and a site database manager. The knowledge engineer assistant, KEA, provides "structure-based" editors for the various types of knowledge representations. It is the main vehicle for knowledge entry and maintenance. The KEA also interprets knowledge structures created through the ISCS configuration language into the internal representations. The control and inference module, CT, decides how a configuration session proceeds by looking up site-specific data from the data base and system-specific configuration knowledge from the knowledge base. The user interface generator, WSI, is a tool that allows a knowledge engineer to create and customize an end-user interface. The database manager handles the storage and retrieval of site-specific data.

A. ISCS knowledge sources

This subsection presents the major knowledge sources in ISCS. These knowledge sources correspond to the various types of knowledge required in the configuration task.

1. The Component Model

The component model captures information about individual systems and components. Each type of system or component is defined by a class, i.e. "prototype record", which contains variables, i.e. "attributes", related to that object type. Instances, i.e. "objects", of the same class will have identical record structure. Two kinds of relationships may be specified between classes. The taxonomic hierarchy: In ISCS, this class-subclass hierarchy permits the knowledge engineer to define paths for class inheritance.
For example, in figure 2, the "7301" class is a subclass of the "synchronous terminal" class, which in turn is a subclass of the "terminal" class. All properties of a terminal, such as the standard 1200 baud rate, will also be inherited by a 7301 terminal unless indicated otherwise. The part-of hierarchy: Variables of a class may possess the "contain-parts" property, which is a list of class names. Instances of classes in the list are considered sub-parts. For instance, the example in figure 2 illustrates that terminals of type 7301, 7303, or 7305 may be hung on a synchronous communication line. The "part-of" and "contain-parts" are inverse relationships; as soon as one direction is defined, the other direction will be automatically added. The variable name in the other direction is given by adding the suffix "-INV" to the variable name. The value for either the part-of or contain-parts relation is always a list, to indicate a one-to-many relationship. Values may be inherited along the part-of hierarchy as well. In the example of figure 2, the channel number of a 7301 terminal is obtained from the synchronous communication line it is attached to. Unlike the taxonomic case where inheritance is at the class level, the part-of inheritance is at the instance level; values are automatically copied when two objects are linked together by an "AddTo" operation, e.g. (AddTo <an instance of SynchronousCommunicationLine> 'TerminalAttached <an instance of T7303>).

(CLASS Terminal ... (BaudRate 1200) ... )

(CLASS SynchronousTerminal ... (Super Terminal) ... )

(CLASS T7301
  (Super SynchronousTerminal)
  (TerminalAttached-INV NIL Part-Of (SynchronousCommunicationLine))
  (ChannelNumber (Inherit TerminalAttached-INV))
  ... )

(CLASS SynchronousCommunicationLine ...
  (TerminalAttached NIL Contain-Parts (T7301 T7305 T7307))
  (ChannelNumber NIL)
  ... )

Figure 2. Example of taxonomic and part-of hierarchy.

2.
The Constraint Demons

Constraint demons are the means by which a knowledge engineer may specify restricted conditions and remedial corrections if these conditions are violated. A constraint demon has three properties: scope, predicate, and action.

Figure 1. Schematic diagram of ISCS and its environment.

Currently ISCS supports two types of demons, which are "triggered" under different circumstances. For a value demon, the corresponding predicate is tested at the time when a value is being stored into an instance variable of an object. For a class demon, the corresponding predicate is tested every time a new instance of the (immediate) class is created. If the predicate evaluates to false, then the actions are executed. The scope property of a demon indicates whether it is a value or class demon. The action property contains a list of actions that may be performed if the predicate is not satisfied. The conditional portion of the action specification allows a knowledge engineer to select appropriate actions by context. When a demon's predicate is found to be false, each action is then carried out sequentially. The variable "Predicate" is bound to the result of the evaluation of the demon's predicate and is made available to all the actions. Constraint propagation may be explicitly turned on or off for each demon action. The first demon in figure 3 validates baud rates. The demon will check values that are to be stored into either a 7307 or 7309 terminal. The legitimate values are 1200, 4800, and 9600. There are two actions in the example. One of them will be executed during the input phase, when a user enters data into the system interactively; the other will be executed during the completion phase, when the system automatically completes all necessary information.
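In sketch form (hypothetical Python, not the actual INTERLISP-D/LOOPS implementation), a value demon amounts to a predicate tested on every store, with phase-dependent remedial actions fired when it fails, loosely mirroring the CheckBaudRate demon of figure 3:

```python
# Hypothetical sketch of an ISCS value demon: the predicate is tested
# when a value is stored into an instance variable; if it fails, a
# remedial action keyed by the current phase runs (mirroring the
# InputPhase / CompletionPhase actions in figure 3).

class Demon:
    def __init__(self, var, predicate, actions):
        self.var = var              # variable the demon watches
        self.predicate = predicate  # returns True if the value is legal
        self.actions = actions      # phase name -> remedial function

class Obj:
    def __init__(self, demons):
        self.demons = demons
        self.vars = {}

    def store(self, var, value, phase):
        for d in self.demons:
            if d.var == var and not d.predicate(value):
                value = d.actions[phase](value)  # remedial correction
        self.vars[var] = value

check_baud = Demon(
    "BaudRate",
    lambda v: v in (1200, 4800, 9600),
    {"input": lambda v: None,        # reject: force re-entry
     "completion": lambda v: 1200})  # silently default

t = Obj([check_baud])
t.store("BaudRate", 2400, phase="completion")
print(t.vars["BaudRate"])
```

A class demon would differ only in when its predicate is tested (on instance creation rather than on each store).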
The second demon makes sure that there are enough synchronous communication lines for 7307 terminals, given the fact that at most five 7307's can be hung from one line. If there are not enough lines, then a new one is added. In this case, constraint propagation is turned on because there might be other demons monitoring the creation of synchronous communication lines.

(DEMON CheckBaudRate (* 7307 & 7309 baud rates)
  (Scope Value (T7307 BaudRate) (T7309 BaudRate))
  (Predicate (MEMBER (0 BaudRate) (1200 4800 9600)))
  (Actions
    (IF InputPhase THEN (<-a BaudRate NIL) (PROMPTPRINT <message>))
    (IF CompletionPhase THEN (<-a BaudRate 1200))))

(DEMON LineForTerminalExists?
  (* max. of five 7307 terminals per syn comm line)
  (Scope Class (T7307))
  (Predicate (LESS (NumberOf SynchronousCommunicationLine)
                   (/ (NumberOf T7307) 5)))
  (Actions
    (IF InputPhase THEN (PROMPTPRINT <message>))
    (IF CompletionPhase THEN (DemonPropagation 'On)
        (CreateInstance 'SynchronousCommunicationLine))))

Figure 3. Example of constraint demons

3. The configuration procedural knowledge

ISCS provides a means to decompose the procedural aspects of configuration into more manageable units. These units are organized in a hierarchy of three layers:

Operation: An operation applies the same procedure to all the objects of the same class; e.g. send the same message to every object of the same class. In addition, there is a preconditional predicate which is evaluated before an object is operated on. If the predicate fails, then that particular object is skipped. The number of objects that are to be tested and worked on depends on the "Repeat" attribute (see figure 4). There are two automatically generated variables associated with each operation. After the execution of an operation, these two variables will contain respectively the objects that have been worked on and those that have not. Figure 4 shows an operation which iterates through all 7307 terminals and assigns each one to a communication line.
In the example the variable "self" is bound to a different object during each iteration. After the operation, the variables "T7307.AssignTerminalToCommLine.Done" and "T7307.AssignTerminalToCommLine.NotDone" will contain respectively the terminals that have been attached and those that are still loose.

(OPERATION AssignTerminalToCommLine
  (* attach each 7307 terminal to a comm line)
  (Class T7307)
  (Predicate T)
  (Repeat UntilExhausted)
  (Code (AddTo (GetAnInstance SynchronousCommunicationLine)
               'TerminalAttached self)))

(HIERARCHICAL-TASK Documentation
  (* Output each comm line and its terminals)
  (Root SynchronousCommunicationLine)
  (Paths (SynchronousCommunicationLine TerminalAttached))
  (Code ((Class SynchronousCommunicationLine) (IF <predicate> THEN (Document)))
        ((Class T7307) (IF T THEN (Document)))))

(ITERATIVE-TASK PutCardsOnBoards
  (* assign all cards to boards)
  (Type ALL)
  (Predicate (AND (NULL Type-1-Card.NotDone) (NULL Type-2-Card.NotDone)))
  (Epilog (IF (OR Type-1-Card.NotDone Type-2-Card.NotDone)
              THEN (CreateInstance Board)))
  (Variables (NewBoard))
  (Rules (R1: (SETQ NewBoard (Operation Board GetABoard)))
         (R2: (Operation Type-1-Card AllocateSlot NewBoard))
         (R3: (Operation Type-2-Card AllocateSlot NewBoard))))

(OPERATION GetABoard
  (* always returns the first board still in the "NotDone" list)
  (Class Board)
  (Predicate T)
  (Repeat 1)
  (Code (Return self)))

(OPERATION AllocateSlot
  (* Assign as many cards to a board as possible)
  (Class Type-2-Card)
  (Repeat UntilExhausted)
  (Predicate (Type-1-Card-Fit? NewBoard))
  (Code (Decrement-slot NewBoard) (Fix-Slot self NewBoard)))

(PLAN Card-type-1-Or-Card-type-2
  (* interchange R2 and R3 in task PutCardsOnBoards)
  (Rules (MyRule: (IF <predicate> THEN (XChange PutCardsOnBoards R2: R3:)))))

Figure 4. Example of configuration knowledge

Task: A task enables a knowledge engineer to perform "aggregate" work by invoking individual operations.
It is especially useful when operations, acting on different classes of objects, jointly achieve a common objective. There are two types of tasks: hierarchical and iterative. A hierarchical task is used to traverse a "part-of" hierarchy and work on each object in this hierarchy. An iterative task is used to carry out a sequence of operations repeatedly. The second example in figure 4 is a hierarchical task which traverses the communication lines and the terminals attached to each line in pre-order; i.e. it visits a communication line and then all its terminals before visiting the next communication line. As shown in the example, a hierarchical task is similar to an operation except for the number of classes involved and for the order in which the objects are fetched. "Pruning", i.e. the decision whether to follow a branch, is decided by the IF conditions. If an object fails its own predicate test, then none of its descendants will be tested. An iterative task is similar in concept to a LOOPS ruleset in that rules are partitioned according to the objects that they affect. An iterative task is composed of a list of IF-THEN rules and operations. There are two modes of iteration, which dictate whether all rules are executed or only one rule is executed during each pass of the iteration. A knowledge engineer supplies a condition for terminating the loop; the default is to stop the task when all the rules and operations have failed. The rules and operations in an iterative task are scanned in their sequential ordering in the task. An optional piece of code, the "Epilog", may be executed after each iteration before the start of the next. This Epilog can look at the "trace" of the previous iteration and make corrections and preparations for the next pass. Each iterative task only looks at certain classes of objects. For each operation, there are two values to indicate the objects that have been worked on and those that have not.
These two values are not reset after an iteration (unless explicitly stated in the Epilog); in fact, when an operation appears in an iterative task, it will only look at the objects that have not yet been worked on. If new objects are created, within or outside an iterative task, they are automatically appended to the appropriate lists so that the operations can work on them in a subsequent iteration. Figure 4 contains an example where cards are being assigned into slots on a board. If there are not enough boards to fit all type-1 and type-2 cards, then a new one is added. The procedure repeats until all cards have been assigned to boards. Note that the Epilog checks to see if more boards are needed and creates one on demand so as to ensure some progress in the following iteration.

Plan: Plans decide which tasks are to be executed, as well as manipulate the tasks themselves. Each rule in a task is treated as a named object and thus can be manipulated. Plans can remove, replace, or relocate rules within a task. This is useful in cases where the relative locations of the rules are important. In figure 4, a plan may interchange the two rules in task "PutCardsOnBoards" so as to assign card-type-2 before card-type-1. Note that a plan is composed of rules with unique names, so a plan may be modified by another plan, i.e. a meta-plan.

B. Knowledge Engineer Assistant

The "Knowledge Engineer Assistant" (KEA) is a module within ISCS which provides the development environment for a knowledge engineer. KEA supplies three major features: "structure-based editors" to guide the knowledge engineer in constructing the knowledge base, "development tools" to facilitate knowledge encoding and maintenance, and an "interpreter" to translate the ISCS knowledge sources into internal representation.
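Returning to the PutCardsOnBoards task of figure 4: the overall effect of the iteration plus its board-creating Epilog can be sketched as follows (hypothetical Python; the slot count and all names are invented, and the per-operation "NotDone" list is modeled as a plain Python list):

```python
# Hypothetical sketch of the PutCardsOnBoards iteration: each pass
# allocates cards to one board; a new board is created whenever cards
# remain (the Epilog's job), guaranteeing progress on the next pass.

SLOTS_PER_BOARD = 4  # invented capacity for illustration

def put_cards_on_boards(cards):
    boards = []             # each board is a list of card names
    not_done = list(cards)  # the operation's "NotDone" list
    while not_done:         # terminate when every card is placed
        board = []          # Epilog effect: create a fresh board
        boards.append(board)
        while not_done and len(board) < SLOTS_PER_BOARD:
            board.append(not_done.pop(0))  # AllocateSlot
    return boards

boards = put_cards_on_boards([f"card-{i}" for i in range(9)])
print(len(boards))
```

The sketch preserves the key property of the real task: no card is ever revisited (it moves from "NotDone" to a board exactly once), and the loop always makes progress.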
The main purpose of KEA is to make available to a knowledge engineer enough tools and built-in mechanisms to reduce the total time and effort needed in creating a configuration knowledge base. KEA is built upon the DEDIT facility in the Xerox Lisp machines, which is itself a "structure-based editor" for the INTERLISP-D and LOOPS environment. KEA commands are conveniently built into the DEDIT menu system, and the KEA operations are tightly integrated within the DEDIT framework.

1. Structure-based Editors

One of the main focuses of ISCS is to provide an appropriate set of knowledge representations or structures to reflect conceptually the different sources of configuration knowledge found in experts. By providing the knowledge engineer with a more natural and convenient means of encoding expert knowledge, the process of knowledge acquisition is made easier. These knowledge structures are expressed using the ISCS configuration language. Each knowledge structure has its own syntax, internal structure and control mechanism. The "structure-based editors" guide the knowledge engineer by providing syntax templates for the various knowledge structures. Syntax and semantics are checked before a structure is entered into the knowledge base, preventing errors in the knowledge base at the earliest possible stage. For example, the KEA will immediately create a new variable or class for a knowledge engineer if a value demon is entered when the variable or class that it affects is not yet defined. A knowledge engineer is not allowed to exit from the KEA editors until all necessary information has been furnished. A future improvement would be for KEA to automatically keep track of all such "loose" pieces and prompt the knowledge engineer to complete the knowledge at appropriate times.

2.
Development Tools

The "development tools" provide an integrated set of knowledge access facilities which allows the knowledge engineer to inspect the current status of the knowledge base, to list the library functions appropriate for the current context, and to integrate new knowledge structures. For example, simply by marking a variable of a class and selecting the appropriate "development tool" menu item, the system can list all the current constraint demons attached to this slot, or all other classes which also include this variable in their definition. Each new piece of expert knowledge might have to be integrated with existing knowledge structures already in the knowledge base. This integration is made easy and error-free through the use of access functions and automatic encoding. For example, browsers may be used to display the relationships among pieces of knowledge, e.g. the "part-of" relationship. There are also knowledge-source-dependent tools which reduce the amount of encoding required by the knowledge engineer. For example, in the "part-of" hierarchy, when a contain-parts pointer is set from an object to sub-part objects, the inverse pointer is automatically added into the parent object structure.

3. Interpreter

The "interpreter" translates knowledge sources developed using the ISCS configuration language into internal representations. The internal representation consists of INTERLISP-D and LOOPS structures. The main unifying mechanism is the object-oriented programming paradigm in LOOPS. In particular, we use the active value feature of LOOPS to implement the value demons, and the "New" method of a metaclass to implement class demons.

C. The User Interface Generator

The ISCS system provides a user interface generating facility called WSI (Mimo, 1986). The facility assists knowledge engineers in the development of user interfaces.
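The property-driven data sheets that WSI provides might be sketched like this (hypothetical Python; the property names `show` and `readonly` are invented stand-ins for WSI's per-attribute display properties, which are kept separate from the computational values):

```python
# Hypothetical sketch of a WSI-style data sheet: per-attribute display
# properties decide what the sheet shows and whether a field is
# editable, without touching the underlying object's values.

class DataSheet:
    def __init__(self, obj, display_props):
        self.obj = obj              # internal object: attribute -> value
        self.props = display_props  # attribute -> display properties

    def render(self):
        """Return the visible rows of the sheet, like a business form."""
        rows = []
        for name, value in self.obj.items():
            p = self.props.get(name, {"show": True, "readonly": False})
            if p["show"]:
                mark = " (read-only)" if p["readonly"] else ""
                rows.append(f"{name}: {value}{mark}")
        return rows

site = {"ModelNumber": "DPS-6", "MemorySize": 512, "InternalId": 42}
props = {"InternalId": {"show": False, "readonly": True},
         "ModelNumber": {"show": True, "readonly": True}}
sheet = DataSheet(site, props)
print(sheet.render())
```

Because the display properties sit beside, not inside, the object's data, the interface can be refined attribute by attribute without changing any computational code, which is the incremental-development point the paper makes about WSI.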
Our design is based on an analysis of the environment that a human expert would work in. Consider the case of the sales or field representative who is configuring a computer system for a customer. The volume of data that one must manage is plentiful. After gathering the information, the representative stores the data on a collection of business data forms. The data entry process itself is intermittently spread out across many interactive sessions. The purchase order is usually reworked many times before it is finally sealed. When one builds a knowledge-based system for such users, one must design the system so as to fit the user's working habits. The WSI facility attempts to simulate the field representatives' working behavior by providing data sheets for them to enter data and a file cabinet metaphor for storage and organization of data sheets. A knowledge engineer may develop customized data sheets and cabinets with the aid of the WSI facility.

The WSI facility is based on a single integrating concept, the data sheet. A data sheet is a business data entry form in concept and a screen image of an internal object in operation. Both end users and knowledge engineers perceive a WSI-derived user interface as one that provides capability for the organization and manipulation of data sheets. Through this integrating idea, consistent user interface behavior is achieved. A data sheet may be filed in a folder, which is stored in a drawer that, in turn, is contained in a cabinet. A data sheet in the WSI facility is a scrollable window which displays the attribute names and values of an object. The data sheet window is mouse-sensitive, and both the attribute names and values may be selected by a mouse device. The behavior of the WSI facility may depend on the button pressed, the attribute selected, and the context of the system configurator at the time of the mouse selection. For instance, a user interface may be designed such that when a user "clicks" at the attribute name "Memory Size" in a "Site Information data sheet", menus with different items will appear depending on the model number of the site system. This feature is useful because different models of computers have different memory configurations.

The WSI facility supplies generic mechanisms to support the mapping between internal objects and external screen data sheets, operations on the data sheets, and storage and retrieval of data sheets. All the generic mechanisms are implemented by methods in "mixins". Moreover, each mechanism has a full set of default behaviors. A knowledge engineer, by simply including the WSI mixins as supers in the class definitions of the components and systems, may immediately obtain a user interface with standard behavior. Customization may be obtained by adding special properties to instance variables in class definitions. WSI, by examining these properties of an object, determines whether a particular variable should be displayed or not in the corresponding data sheet, whether it is modifiable or read-only, whether menus are attached to the variable, what menu (dynamically or statically created) to show, etc. WSI is non-obtrusive because the properties that it relies on are separated from those used in the actual computational tasks. On the other hand, it is possible for a knowledge engineer to indicate to WSI to examine properties involved in computations so as to minimize redundant and inconsistent information between the user interface and the computation tasks.

The WSI facility is integrated into the ISCS programming environment through the inheritance and specialization techniques of object-oriented programming. A knowledge engineer may gradually enhance and refine the user interface by the addition of more and more properties to each instance variable in a class definition over a period of time. This incremental approach is very important in projects like knowledge-based systems, where the initial phase of the project is focused mainly on knowledge acquisition and leaves little time for the user interface. By linking to the WSI facility via the "superclass" specification, a standard user interface is immediately obtained. Starting on day one, a knowledge engineer may use the WSI facility for demonstration and testing. As the knowledge base grows and is validated, a knowledge engineer can then pay attention to interface issues. Through the single concept of data sheets and object-oriented programming, a consistent user interface is achieved across the development cycle of a knowledge-based system. Figure 5 shows the interface of a system configurator that utilizes the WSI facility.

IV. SUMMARY

We have described a tool kit, ISCS, for constructing knowledge-based system configurators. ISCS is designed to assist a knowledge engineer in the encoding and maintenance of configuration knowledge, and to facilitate user interface development. The various knowledge representations and system modules are integrated through an object-oriented foundation. The design and implementation of ISCS is leveraged upon our past experience in the development of a knowledge-based system configurator (Wu, 1985). Many of the features in ISCS had been hand-coded into a configurator that we developed recently; ISCS is the outcome of a post-mortem analysis of that former configurator. It is still under improvement and will eventually be used as a standard development vehicle by Honeywell knowledge engineers. As ISCS is deployed, new requirements and functionality will emerge and be incorporated into the system. At present, we have identified two areas that we will study after our current project.
The knowledge engineer assistant is currently a "passive" module; a knowledge engineer must decide which knowledge representation to use and then invoke the appropriate editor or tool. We intend to add a "user dialog" feature which will assist and guide a knowledge engineer in the selection of knowledge representations through interactive sessions. The other area that we will enhance is the provision of a facility that enables knowledge engineers to design form layouts; at the moment a WSI data sheet has only a very simple layout consisting of two columns, one for names and one for values.

REFERENCES

Bobrow D.G. and Stefik M., "The Loops Manual", Tech. Rep. KB-VLSI-81-13, Knowledge Systems Area, Xerox Palo Alto Research Center, 1981.
Chun H.W., "The ISCS Knowledge Engineer Assistant", Honeywell SCOS/AST Technical Report, AST8603, 1986.
Hayes-Roth F., Waterman D., and Lenat D., "Building Expert Systems", Addison-Wesley, Reading, MA, 1983.
Lafue G. and Smith R., "A Modular Tool Kit For Knowledge Management", IJCAI, 1985.
McDermott J., "R1: A Rule-Based Configurer of Computer Systems", Artificial Intelligence 19, 1982.
McDermott J., "XSEL: A Computer Sales Person's Assistant", Machine Intelligence 10, 1982.
Mimo A., Chun H.W., and Wu H., "WSI - A Facility for Organizing and Manipulating Data Sheets", Honeywell SCOS/AST Technical Report, AST8601, 1986.
Mimo A., "WSI: A Guide to its Implementation and Use", Honeywell SCOS/AST Technical Report, AST8602, 1986.
Richer M., "Evaluating the Existing Tools for Developing Knowledge-Based Systems", Stanford Knowledge Systems Laboratory, Report KSL85-19, Stanford University, 1985.
Rolston D., "An Expert System for DPS 90 Configuration", 9th Annual Honeywell International Computer Sciences Conference, 1985.
Sheil B., "The Artificial Intelligence Tool Box", Proceedings of the NYU Symposium on Artificial Intelligence and Business, edited by Reitman W., ABLEX Publishing Corp., 1983.
Szolovits P. and Clancey W., "Case Study: OCEAN", Tutorial 8, IJCAI, 1985.
Wu H., Virdhagriswaran S., Chun H.W., and Mimo A., "ISC - An Expert System for the Configuration of DPS-6 Software Systems", 9th Annual Honeywell International Computer Sciences Conference, 1985.

Figure 5. Sample session of a configurator system implemented on ISCS.
RECENT DEVELOPMENTS IN NIKL

Thomas S. Kaczmarek
Raymond Bates
Gabriel Robins

USC/Information Sciences Institute
4676 Admiralty Way
Marina Del Rey, CA 90292

ABSTRACT

NIKL (a New Implementation of KL-ONE) is one of the members of the KL-ONE family of knowledge representation languages. NIKL has been in use for several years, and our experiences have led us to define and implement various extensions to the language, its support environment and the implementation. Our experiences are particular to the use of NIKL. However, the requirements that we have discovered are relevant to any intelligent system that must reason about terminology. This article reports on the extensions that we have found necessary based on experiences in several different testbeds. The motivations for the extensions and future plans are also presented.

1. INTRODUCTION

Our work on NIKL is motivated by a desire to build a principled knowledge representation system that can be used to provide terminological competence in a variety of applications. To this end, we have solicited use of the system in the following applications: natural language processing, expert systems, and knowledge-based software. Our research methodology is to allow application needs, rather than theoretical interests, to drive the continued development of the language. This methodology has allowed us to perform an empirical evaluation of the strengths and weaknesses of NIKL. It has also helped us identify general requirements for terminological reasoning in intelligent systems. We classify the improvements that we have made or plan to make into three broad categories:

1. Expressiveness - enhancements to the terminological competence represented in NIKL and the inferences NIKL can make regarding the subsumption relationship,

2. Environment - enhancements to the tools that accompany NIKL for both maintaining knowledge bases (knowledge acquisition) and reasoning about the terminology defined in the knowledge base, and

3.
Support - enhancements to user documentation, the reliability and the availability of the implementation.

* This research is supported by the Defense Advanced Research Projects Agency under Contract No. MDA903 81 C 0335. Views and conclusions contained in this paper are the authors' and should not be interpreted as representing the official opinion of DARPA, the U.S. Government, or any person or agency connected with them.

This paper will concentrate on enhancements made to the expressiveness of NIKL but will also describe some improvements and additions made to the NIKL environment. An introduction to NIKL is included as background material, and enhancements to the support of NIKL are mentioned for the sake of completeness.

2. BACKGROUND

KL-ONE was designed by [Brachman 78] to "circumvent common expressiveness shortcomings**." It was designed to embody the principles that concepts are formal representational objects and that epistemological relationships between formal objects must be kept distinct from conceptual relations between the things that the formal objects represent. KL-ONE defined an "epistemologically explicit representation language to account for this distinction." A KL-ONE concept is described by "a set of functional roles tied together by a structuring gestalt." Concept definitions "capture information about the functional role, number, criteriality and nature of potential role fillers; and 'structural conditions', which express explicit relationships between the potential role fillers and give functional roles their meaning." An overview of the KL-ONE system has been published by [Brachman & Schmolze 86].

2.1. The classifier

An important consequence of the well-defined semantics of KL-ONE is that it is possible to define a classification procedure to determine the subsumption relationship for concepts in a KL-ONE network. A detailed description of the semantics of the KL-ONE classifier has been published by [Schmolze & Lipkis 83].
The classifier for KL-ONE deduces "that the set denoted by some concept necessarily includes the set denoted by a second concept but where no subsumption relation between the concepts was explicitly entered." Classifiers for KL-ONE and NIKL have been developed at ISI. The desirable properties for the classification algorithm are soundness (no incorrect inference is made), completeness (all correct inferences are made), and totality (the algorithm always halts). Theoretical analysis by [Brachman & Levesque 84] has determined the limits on expressiveness if completeness of the classification algorithm is to be maintained.

** A more complete discussion of the kinds of shortcomings Brachman is concerned with can be found in [Brachman 85].

Work on NIKL has concentrated on the issue of soundness, forgoing completeness in favor of increased expressiveness. An efficient implementation has also been a goal of the NIKL effort, and the NIKL classifier is in fact nearly two orders of magnitude faster for large networks than the KL-ONE classifier.

2.2. Classification-based reasoning

The NIKL classifier provides a general weak method for categorizing descriptions of objects. It is insufficient as the sole inference mechanism for an intelligent system, but it can be used very effectively (and efficiently) in what we have termed classification-based reasoning. Most uses of KL-ONE and NIKL rely heavily on this kind of reasoning. It consists of a classification-reasoning cycle. The application first creates a new description of some partial result and then classifies this into a static network describing knowledge of the problem domain. Based on the result of classification, additional inferences are drawn about the partial result and a new description is constructed.
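As a sketch, this classify-redescribe cycle might look like the following (hypothetical Python; the real NIKL classifier computes subsumption over concept definitions, whereas this toy version merely tests ad hoc predicates, and the concept names echo the Consul example discussed below):

```python
# Hypothetical sketch of classification-based reasoning: classify a
# description into a taxonomy, fire the redescription rule attached to
# a matching concept, and repeat until no rule applies.

# concept -> (membership predicate, redescription rule or None)
TAXONOMY = {
    "tell-about-action":       (lambda d: d["act"] == "tell", None),
    "tell-the-logged-in-user": (lambda d: d["act"] == "tell"
                                          and d["agent"] == "logged-in-user",
                                lambda d: {**d, "act": "display"}),
    "display-action":          (lambda d: d["act"] == "display", None),
}

def classify(d):
    """Return all concepts whose predicate the description satisfies
    (no real subsumption machinery here)."""
    return [c for c, (pred, _) in TAXONOMY.items() if pred(d)]

def reason(d, max_cycles=10):
    for _ in range(max_cycles):
        rules = [rule for c in classify(d)
                 for _, rule in [TAXONOMY[c]] if rule]
        if not rules:
            return d      # nothing more to redescribe: done
        d = rules[0](d)   # redescribe and re-enter the cycle
    return d

request = {"act": "tell", "agent": "logged-in-user"}
print(reason(request)["act"])
```

The loop terminates when classification lands only on concepts with no attached rule, which corresponds to reaching a description the application can execute directly.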
These inferences are the result of some rule or procedure that examines the network looking for inferences that it is capable of making. The new description that results may achieve the goal of the reasoning cycle, in which case reasoning terminates. More typically, further classification and redescription are required and the reasoning cycle continues. One way of thinking about this reasoning cycle is to think of the classifier as selecting applicable rules based on the terminology that is used to describe the task domain and the problem at hand. The selection of the rules is within the terminological system, i.e., based on the definitions of terms. However, the rules are outside the terminological component and expressed in some other language.

An example of classification-based reasoning can be found in the Consul application of NIKL. Consul used classification extensively in the process of interpreting natural language requests for interactive computing services. Suppose the user typed the request, "Tell me about my afternoon meeting." Consul's natural language frontend described this request (in NIKL) as calling for a "tell-about action" where the agent to be told was the logged-in-user and the object to be described was a meeting that was further described as being in the afternoon and in the possession of the same logged-in-user. Tell-about-action, meeting, afternoon, and logged-in-user were all defined as NIKL concepts. Object and agent-to-be-told were defined as roles of the tell-about-action concept. Owner and time were roles on the concept meeting. The description of the request was classified into the NIKL knowledge base, which represented user expectations and system capabilities. Since Consul did not immediately understand how to execute this request, it looked for redescription rules. It sought an applicable rule that was most specific, as defined by the inheritance taxonomy.
In this example, the description of the request classified under the concept tell-the-logged-in-user, which was defined as a specialization of a tell-about-action whose agent must be the logged-in-user. Tell-the-logged-in-user had a rule attached to it that caused redescription of the request as a display-action. Note that had the request classified between tell-a-user, which subsumes tell-the-logged-in-user, and tell-the-logged-in-user, it would have been redescribed as a prepare-mail-action. Having done this redescription, Consul was not finished. This particular request continued to be refined until it was redescribed as a composite operation. The final result requested the system to display the result obtained by retrieving a meeting with a time in a specific range from a particular file in the logged-in-user's directory.

2.3. NIKL's evolution from KL-ONE

NIKL's name is evidence of the fact that it is thought of as a New Implementation of KL-ONE. Despite this, there are major differences between NIKL and KL-ONE.*** These are in addition to the emphasis on the efficiency of the classification algorithm already mentioned. Many of the differences are a direct result of the influence of work on KRYPTON by [Brachman et al. 83]. Close cooperation between the NIKL design team and the KRYPTON designers resulted in many system similarities despite a strong distinction on the issue of completeness. The major difference between NIKL and KL-ONE involves the representation and use of roles. At the time NIKL was designed, use of KL-ONE had uncovered a need for revisions of the ideas about roles. For example, explicit structural conditions were no longer used to define the meaning of roles because of the inadequacy of the original formalization and the lack of useful consequences of these conditions. In addition, the notation required in KL-ONE for relating roles in concepts (which included relations such as modifies, differentiates, and individuates) was cumbersome.
The idea of thinking of roles as two-place relations and concepts as one-place relations emerged, and roles took on a new significance. Roles were defined as having a domain and a range, organized in a separate taxonomy, thought of as representations of relations, and assumed to be used consistently. This "enlightened" view of roles was one of the lessons learned through the application of KL-ONE. It represents a case where something learned through the use of a specific tool has general applicability. Attributes of concepts ought to be formalized and have well-defined semantics also. This discovery is not unique to the KL-ONE community--it is an adaptation of the ideas found in systems built around first-order logic. The "re-discovery" of this idea simply underscores its importance.

3. THE STATUS OF NIKL

A NIKL implementation was first developed approximately two years ago. Since then it has been in use principally at ISI and at Bolt, Beranek, and Newman Inc., which contributed to the design of the system. Several "browsing" tools, syntactic support, and graphing tools have been developed and used to construct and maintain knowledge bases. A natural language paraphraser to assist users in understanding networks was also developed but has not been heavily used. Various inference mechanisms driven by the classifier have also been implemented.

*** Actually there are significant differences beyond those having to do with roles if one takes KL-ONE to be defined by the original formalization rather than the then-current implementation, which did not support much of the formalism.

Applications of KL-ONE and NIKL have been in the areas of natural language processing (see the publications of [Bobrow & Webber 80, Sondheimer 84, Sidner 85, Mark 81]), expert systems (see the work of [Neches et al. 85]), and software description (see the publications of [Kaczmarek et al. ??, Wilczynski 84]).
Large networks, in excess of 1500 concepts, have been developed in these environments. This experience with NIKL has led us to consider certain extensions to the language, its environment, and the implementation. The following sections will describe the extensions we consider important and explain the motivation and status of each. The extensions have been divided into roughly three categories: terminological competence, environment, and implementation.

3.1. Terminological Competence

By terminological competence we mean the ability of the system to represent and reason about various distinctions that a modeler might need to capture in defining concepts. For example, the ability to restrict the range and number of role fillers for a particular functional role adds to the terminological competence. Inferring that if a person has at least one son, then the person has at least one child (based on the fact that son is a specialization of child) is another example of terminological competence. The benefits derived by enhancing terminological expressiveness are analogous to the benefits derived from the data typing mechanisms found in modern programming languages. Support for various abstractions, such as lists, ranges, and enumerations, makes it easier for the programmer to produce correct and more compact programs. Support for all of these data types could be built in assembly language, but the programmer would be responsible for choosing a suitable representation and supporting it (e.g., defining a constructor function and doing error detection on modification). In a similar way, support for the reasoning that NIKL does could be built with frames, flavors, or well-formed formulae, but applications would have to supply and usually duplicate reasoning to support them. This leads to a view of NIKL as a better data typing mechanism for intelligent systems. This is certainly one of the roles of NIKL in the applications we have seen.
There is another, however--the classification-based reasoning cycle--which was described earlier. The following sections will describe our efforts in this area.

3.1.1. Disjointness and covers

One addition to NIKL that was absent in KL-ONE is support for disjoint and covering sets. A collection of concepts can be declared as being disjoint, i.e., as having no common extensions in the world. A collection can also be declared as a cover of another concept, i.e., all extensions of the covered concept must be described by at least one of the members of the covering. These two declarations can be combined to form partitions. Having the ability to define disjointness, covers, and partitions has led to a more streamlined design of systems using NIKL. There are many cases where generic problem solving techniques require this kind of information. By making it an abstraction supported by the language, applications are freed from the responsibility of representing and supporting it. Furthermore, NIKL provides limited inferences based on these notions. This frees the application from the responsibility of supplying this generic reasoning. A description of the limited, though useful, reasoning performed by NIKL follows:

- As a result of disjoint classes, NIKL can determine whether a concept is coherent or not. For example, a person all of whose children are both males and females would be marked as being incoherent if male and female were declared as being disjoint. An incoherent description is admissible in NIKL but is assumed not to have any extension in the world. The discovery of incoherence almost always indicates an error in the model. As a result, discovery of incoherence has proven to be valuable during knowledge acquisition. It is one example of how well-defined semantics can assist in the construction and use of terminological knowledge bases.

- With respect to covers, a simple inference procedure is available to deduce the existence of other covers.
For example, suppose male and female cover sex, spouses have a sex role that is restricted to sex, and husband and wife are specializations of spouse. Further assume the only difference between husband and wife is a restriction of the sex role to male and female respectively; then NIKL can infer that husband and wife cover spouse. Needs for this kind of reasoning have come about in using NIKL for expert systems where certain methods of problem solving are applicable only when some covering exists. NIKL's current inferential capabilities for covers are limited to simple cases such as the one presented in this example. Plans call for expanding these capabilities as needed by applications.

3.1.2. Reasoning about role restrictions

The inclusion of an explicit role hierarchy in NIKL allows the system to infer certain properties of concepts. The example of calculating minimum number restrictions for the son and child roles presented above illustrates one kind of inference. In that example, we have propagated a minimum number restriction up to a more general relation. Obviously, we can also propagate a maximum down to a more specialized relation. These are two inferences that we have recently added to NIKL. Another inference involves value restrictions for roles. It is illustrated by the network definition seen in Figure 3-1. The NIKL specification for this example can be paraphrased as follows:

- doctors, famous, and rich are primitive concepts,****
- surgeons are a primitive specialization of doctors,
- very famous is a primitive specialization of famous,
- relative is a primitive relation,
- cousin is a primitive specialization of relative,
- any concept that fills the role of famous cousin must fill the role of cousin and be famous,
- any concept that fills the role of rich relative must fill the role of relative and be rich, and
- any concept that fills the role of rich cousin must fill the roles of rich relative and cousin,
- all the rich cousins of an "A" must be doctors,
- all the famous cousins of any "B" must be surgeons, and all the rich relatives of any "B" must be very famous.

**** A primitive concept or relation corresponds to the notion of a "natural kind", i.e., a predication that can only be determined by an oracle. To NIKL this means that no concept may be placed beneath this one in the hierarchy unless the concept specification explicitly says to do so.

From this specification, NIKL infers the following:

- all of A's rich cousins are rich doctors,
- all of B's rich cousins are rich, very famous surgeons,
- all of B's famous cousins are famous surgeons, and
- all of B's rich relatives are rich and very famous.

(DEFCONCEPT Doctor primitive)
(DEFCONCEPT Famous primitive)
(DEFCONCEPT Rich primitive)
(DEFCONCEPT Surgeon primitive (specializes Doctor))
(DEFCONCEPT Very-Famous (specializes Famous))
(DEFRELATION Relative primitive)
(DEFRELATION Cousin primitive (specializes Relative))
(DEFRELATION Famous-Cousin (specializes Cousin) (range Famous))
(DEFRELATION Rich-Relative (specializes Relative) (range Rich))
(DEFRELATION Rich-Cousin (specializes Rich-Relative Cousin))
(DEFCONCEPT A (restrict Rich-Cousin (VR Doctor)))
(DEFCONCEPT B (restrict Rich-Relative (VR Very-Famous))
              (restrict Famous-Cousin (VR Surgeon)))

Figure 3-1: Example of role reasoning

Figure 3-2 graphically depicts the network after classification has been performed. The conclusions illustrated in the figure are derived from the following line of reasoning. All of B's rich cousins are rich relatives and therefore very famous, so they are all also surgeons (since all the famous cousins of B are surgeons), making them doctors as well. It follows then that B specializes A since all of its rich cousins are rich and very famous surgeons, which is a specialization of rich doctors.
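The first step of this line of reasoning, that restrictions inherited through the role hierarchy conjoin, can be sketched as follows. This is a toy model with data mirroring Figure 3-1; `effective_vr` is an invented helper, not a NIKL function, and it performs only one propagation step, whereas NIKL iterates such steps to reach the full conclusions above (e.g. that B's rich cousins are also surgeons).

```python
# Toy role hierarchy and declared ranges, mirroring Figure 3-1.
ROLE_PARENTS = {
    "cousin": ["relative"],
    "famous-cousin": ["cousin"],
    "rich-relative": ["relative"],
    "rich-cousin": ["rich-relative", "cousin"],
}
ROLE_RANGE = {"famous-cousin": {"famous"}, "rich-relative": {"rich"}}

def ancestors(role):
    """All roles at or above `role` in the role hierarchy."""
    seen, todo = set(), [role]
    while todo:
        r = todo.pop()
        if r not in seen:
            seen.add(r)
            todo.extend(ROLE_PARENTS.get(r, []))
    return seen

def effective_vr(role, local_restrictions):
    """Conjoin the declared range of every ancestor role with any
    concept-local value restriction placed on an ancestor role."""
    vr = set()
    for r in ancestors(role):
        vr |= ROLE_RANGE.get(r, set())
        vr |= local_restrictions.get(r, set())
    return vr

# Concept A: (restrict Rich-Cousin (VR Doctor)) -> rich doctors
print(effective_vr("rich-cousin", {"rich-cousin": {"doctor"}}))
# Concept B: (restrict Rich-Relative (VR Very-Famous)) -> rich and very famous
print(effective_vr("rich-cousin", {"rich-relative": {"very-famous"}}))
```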
The current classifier for NIKL supports this kind of reasoning based on the role hierarchy. Our plans for enhancing reasoning about role restrictions include adding logic to account for coverings and disjointness in the role hierarchy. For example, if we knew that the roles son and daughter are disjoint and that they cover the role child (i.e., form a partition), then we can determine the maximum and minimum number restrictions for child based on the number restrictions for son and daughter. Similar kinds of inferences can be made involving the value restrictions.

3.1.3. Roles and relations

One of the criticisms of KL-ONE and NIKL was an incomplete treatment of roles. In KL-ONE the semantics for roles was determined only by other constructs that were described for concepts. In previous versions of NIKL, all roles were primitive. Work in natural language text generation has pointed out the need for a more uniform treatment because sentences may need to describe the relationships that exist between concepts. This requires giving relations the same status as concepts in the network and establishing a correspondence between restrictions of roles at a concept and the relations those restrictions refer to. We have thus adopted a position where roles are thought of as two-place relations that are defined in the concept hierarchy. We have implemented this strategy by allowing the user to define relations that may then be used as roles. The example above in Figure 3-1 illustrates this capability. Under this new implementation, relations are represented as concepts in the same hierarchy with all other concepts. All relations have at least two roles, a range and a domain.
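The roles-as-relations view can be pictured with a minimal sketch. The representation below is invented for illustration (NIKL's actual implementation differs): each relation carries a domain and a range drawn from the concept hierarchy, and a specialized relation must narrow, or at least keep, both places.

```python
from dataclasses import dataclass, field

# Hypothetical concept hierarchy: child concept -> parent concept.
CONCEPT_PARENTS = {"flat-tire": "tire", "tire": "thing",
                   "car": "thing", "thing": None}

def concept_subsumes(general, specific):
    """Walk up the concept hierarchy from `specific` looking for `general`."""
    while specific is not None:
        if specific == general:
            return True
        specific = CONCEPT_PARENTS[specific]
    return False

@dataclass
class Relation:
    """A role modeled as a two-place relation with a domain and a range,
    placed in the same specialization hierarchy as concepts."""
    name: str
    domain: str
    range: str
    parents: list = field(default_factory=list)

def consistent_specialization(child, parent):
    """A more specialized relation may only narrow its domain and range."""
    return (concept_subsumes(parent.domain, child.domain) and
            concept_subsumes(parent.range, child.range))

tire_of = Relation("tire-of", domain="car", range="tire")
flat_tire_of = Relation("flat-tire-of", domain="car", range="flat-tire",
                        parents=[tire_of])
print(consistent_specialization(flat_tire_of, tire_of))  # True
```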
One implication of this support is that it has allowed the user a simple way to say things such as "a car, one of whose tires is flat." In the previous implementation, the user would have to specify and name a primitive role that specialized the tire role for a car and then restrict the value of that role. A more significant improvement results from removing an unfortunate consequence of this old procedure (which resulted from the primitiveness of all roles). The result was that nothing would classify as a kind of the concept being defined unless the user added the same role (presumably by referring to it by name) and restricted it to the same range (or some specialization of it). In the current implementation, we "gensym" a relation that specializes tire and restrict its range to flat. Any other similar or more specialized relation resulting from a restriction, for example, the one generated by "a car with a blown-out tire," will either merge with the gensymed relation or classify as a specialization of it. Thus, classification of a car with a blown-out tire under a car with a flat tire can happen without having to refer to a specific (and primitive) flat-tire role in the specification of the car with a blow-out.

Figure 3-2: Graph of taxonomy defined in Figure 3-1

Since relations are now part of the concept hierarchy, we can define other properties for roles and declare disjointness and coverings. One consequence of this is that we have simplified the development of support for reasoning about number and value restrictions for roles based on these notions. Another is that we can specify more completely the meaning of a relation.

3.1.4. Cycles in the network

The current NIKL classifier cannot reason effectively about cycles in the network.
A cycle occurs whenever one classification depends on another. The classifier has two user-selectable strategies that it follows when a cycle is encountered. In the first, the classifier stops trying to draw inferences about any of the concepts in a cycle when one is encountered. Typically a large collection of static concept specifications is presented to the classifier. It recursively descends the known hierarchy to find and classify those new concepts that have no dependencies on any other new concepts. It then unwinds the recursion and forms the newly classified hierarchy as a result. In this first mode, if the classifier discovers a cycle, it simply declares the concepts classified and warns the user about the existence of the cycle. The second mode involves trying to proceed with classification after sorting the items of the cycle based on the number of dependencies for the items. The classifier operates on each of these in the order determined by the sort. The concept with the fewest dependencies on the others is first in this ordering. If there are concepts with equal numbers of dependencies, the order depends on the time of the introduction of the concepts into the network. This has the unfortunate consequence that the result may depend on the order in which concepts were mentioned. In the past, cycles were generally considered errors in modeling, so the inadequacies of either of these modes were considered inconsequential. However, with the current support for roles as relations, cycles now become more prevalent. For example, if the child relation is used to define a person, then person cannot be classified until the relation child has been classified. But if the domain of child is person, then it cannot be classified until person is classified. Obviously, a cycle results. In this situation, the second mode of classification described above performs reasonably, though incompletely (in the formal sense).
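The second mode's ordering heuristic can be sketched briefly. This is a hypothetical rendering, assuming the dependencies among new concepts are known; ties fall back to order of introduction, which is exactly what makes the result order-dependent.

```python
def classification_order(deps):
    """deps maps each new concept to the set of new concepts it depends on.
    Concepts with fewer dependencies come first; ties are broken by the
    order in which the concepts were introduced (dict insertion order)."""
    introduced = list(deps)
    return sorted(deps, key=lambda c: (len(deps[c] & deps.keys()),
                                       introduced.index(c)))

# person is defined via the child relation, whose domain is person: a cycle.
deps = {"person": {"child"}, "child": {"person"}, "thing": set()}
print(classification_order(deps))  # ['thing', 'person', 'child']
```

A genuine cycle is not resolved here; its members are simply processed in dependency-count order, which is why the result can vary with the order of mention.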
A more sophisticated classification control strategy could obviously result in a more complete classification. We have designed, and are in the process of implementing and testing, what we call the incremental classification control strategy. Under this regime, the classifier will maintain dependency links for all concepts and use an iterative approach to classification. When a cycle is encountered, the classifier will do the best it can with the concept with the fewest dependencies. It will then classify all those concepts that depend on that one and eventually (because of the cycle) try to reclassify the original concept after having done its best on the dependent concepts. This approach obviously cycles and needs a termination condition. The incremental classifier will stop classification when the network has reached a quiet state, i.e., no new inferences can be drawn, or some user-settable number of dependency cycles has been completed. This strategy will allow more inferences to be made by the classifier and will also provide the basis for a much improved knowledge acquisition environment. Details of the implications for acquisition will be presented later in Section 3.2.4.

3.1.5. Partial orderings

One glaring shortcoming of KL-ONE and NIKL has been an inability to define sequences. Requests for this capability have come from nearly all applications.***** We have examined the requirements and designed a more general capability that supports partial orderings on roles. The partial orderings in NIKL represent relations that exist between role fillers. Support includes knowledge (in the classifier) about the reflexive, antisymmetric, and transitive nature of partial orderings. One partial ordering may be a specialization of another, and they are defined in the concept hierarchy like all other relations. The NIKL user can make several different kinds of statements about the partial orderings of the role fillers.
One states that all the fillers of a particular role must be ordered by a particular relation. For example, the statements of a computer program are ordered by the lexically-before relation. A second kind of statement is that all the fillers of one role are related to all the fillers of another role by a particular ordering. An example is a statement that the initialization steps of a while loop come before the termination tests, which in turn come before the steps in the body. The final kind of statement declares that the fillers of one role are the immediate predecessors (or successors) of the fillers of another. An example is the statement that one statement of a program is immediately lexically-before another. Classification will involve the determination of subsumption between partially ordered sets (posets), which is a fairly expensive operation. The expense includes the construction of the representation of posets as graphs and the determination of whether one graph is a subgraph of another. The design of the implementation is such that overhead caused by this enhancement will be minimal for concepts that do not involve use of this feature.

3.1.6. Necessary and sufficient conditions

The NIKL classifier represents a particular kind of classification, one that depends on certain logical properties. There are other kinds of classification that depend on domain-specific knowledge. One such kind of classification involves the definition of sufficient conditions. The idea is that the presence of certain evidence is sufficient to draw a conclusion if there is no contradictory evidence. For example, one might be willing to say that any mammal with a human DNA structure must be a kind of human unless there is evidence to the contrary, even though we do not have evidence for upright posture, opposing thumbs, and so forth.

***** Various extra-NIKL schemes have been adopted in past work to handle this problem.
In past applications, it was not necessary for the classifier to deal with sequences, so a special purpose sequence reasoner could be used.

Such reasoning has heretofore been unavailable in NIKL and KL-ONE. In light of this, one can characterize the definitions of current NIKL concepts as stating necessary conditions (since no part of the description could be missing) and sufficient conditions (since the presence of them is sufficient evidence for the classifier to draw specialization conclusions). The exception to this is for concepts marked as primitive, which indicates that no set of sufficient conditions can be found. One proposal for adding sufficient conditions would allow the user to state that some collection or collections of roles were sufficient. For example, if you know that an animal has four legs and a trunk or a finger on the end of its nose (and there is no contradictory evidence, such as that it lives in a tree), then it is an elephant. Another would allow the user to state that one concept implies another. In this scheme, each set of sufficient conditions would be represented as a separate concept that implies the concept in question. The advantage of this approach is that it easily accounts for structural descriptions and partial orderings. We are in the process of evaluating these two proposals. The important lesson is that sufficiency reasoning about terminology seems to be useful in several domains. We have experienced the need for it.

3.1.7. Negation

Negation is a problem for the classification algorithm, as has been shown by the work of [Brachman & Levesque 84]. Nevertheless, it is a notion that nearly all applications find useful. Since we cannot admit negation and maintain tractability for the classifier, we have provided other mechanisms and conventions that seem to satisfy most users. One convention is the use of zero as the minimum and maximum number restriction for a role restriction.
For example, a verb phrase with no time modifier can be modeled this way. The ability to define partitions as disjoint covers provides a way to talk about complements, which are akin to negation. This is another addition to NIKL that was the result of expressed desires for negation. The strategy exemplified in these two capabilities, namely, providing something different from what the user asked for but which meets the requirements of the application, is very much a part of our methodology for continuing the evolution of NIKL.

3.2. The Environment

The NIKL environment consists of tools that aid in knowledge acquisition and reasoning. Our experience has led to the generation of tools in both of these areas.

3.2.1. Assertions

Recording and reasoning about extensions of the terminological knowledge represented in NIKL is considered to be outside of NIKL itself and part of the environment. An ad hoc assertional mechanism****** was developed for use with the CUE and Consul applications (see [Kaczmarek et al. 83]). A more systematic approach has led to the development of a major tool for reasoning about assertions by [Vilain 84] of Bolt Beranek and Newman. This tool, KL-TWO, combined the RUP package of [McAllester 82] with NIKL. KL-TWO provides a truth maintenance package that is very useful in some applications. However, it is inappropriate for large data bases and for certain kinds of applications where efficient implementations of the assertions are required. To correct these deficiencies (for certain applications) we have planned two other hybrid systems. The first involves coordination between the conceptual hierarchy defined in NIKL and the schemata for a commercial relational data base. With this scheme we plan to use NIKL in applications requiring the kinds of semantic browsing techniques found in the work of [Patel-Schneider et al. 84] and [Tou et al. 82].

****** This scheme was built around the KL-ONE notion of a nexus.
The second involves using NIKL in coordination with the knowledge representation aspects of a knowledge-based software development paradigm. Here we are actively involved in using NIKL to define a type hierarchy and relations for the AP5 language of [Cohen & Goldman 85].

3.2.2. Reformulation

As was previously mentioned, classification-based reasoning is a common mode of use of NIKL. The terms reformulation and mapping have been used in KL-ONE applications to refer to this kind of activity. Currently there is a reformulation facility available that is used in the expert system research of [Neches et al. 85]. This mechanism is used to satisfy goals by expanding plans. Within the paradigm of their project, reformulation is used to generate an expert system based on a knowledge of the domain and expert problem solving knowledge. In this methodology, goals, methods, and plans are all expressed in NIKL, and the expert system shell uses these to generate the expert system for a particular domain and set of goals and methods. While the facility provided was designed for a particular use, the mechanism is generic and can be applied to any number of other applications.

3.2.3. Graphic-based editing

The KL-ONE community has a rich tradition of drawing pictures with "circles and arrows." A graphical representation of concepts and networks has always been a part of the language. As the expressiveness of NIKL has increased, the cleanliness of the graphs has diminished, but nevertheless, the graphs remain useful. We have developed an integrated set of acquisition tools in a window-based workstation environment. The tools include a graph of the concept hierarchy, an EMACS editing window, and a LISP interaction window. Within the LISP interaction window, the environment can produce highly formatted ("pretty-printed") descriptions of concepts.
The atoms in these formatted displays, which refer to concepts and relations, as well as the nodes of the graph and the text in the edit buffer, are all mouse sensitive and known to be NIKL constructs by the environment. This allows the user to move from one window to another in a coordinated way. It also allows the user to refer to a NIKL object simply by pointing at it in any of the various views of the network. A natural language paraphraser has also been added to this environment to assist in the understanding of the network. We also have a tool to graph the definition of a particular concept. This tool has proven to be less useful than originally thought. While drawing concept specifications on paper with a pencil is extremely useful, we haven't been able to duplicate the free-flowing expressiveness of that mode of design. Work on the human factors of the tool and the inclusion of higher level operations (the current level is, for example, add a role) are anticipated. However, the tool is useful in terms of providing a graphic presentation of a concept. The deficiencies become obvious in creating or editing a concept definition.

3.2.4. Incremental classification

A major problem with the NIKL environment arises from the batch nature of the classifier. The example in Figure 3-1 illustrates some of the many inferences that the classifier makes, for example, deciding that the user really meant rich cousins to be rich doctors, not just doctors. This kind of inference can be particularly troublesome for the user because NIKL frequently needs to generate new concepts that the user has not explicitly defined. Usually NIKL cannot pick an appropriate name for the concepts it generates. In many cases the need to generate a new concept arises from the fact that the user has inadvertently omitted the concept or made some modeling error.
A better acquisition environment can be obtained by having the classifier interact with the user whenever such a concept must be generated. The user could then choose an appropriate name, decide there is an error, or tell NIKL that the concept will be defined later. The example of interaction arising from newly generated concepts is just one case in which interaction during classification can improve the modeling environment. The control strategy that will be employed in the incremental classifier will be much more supportive of the kind of interaction that knowledge acquisition requires.

The dependency information that the incremental classifier will keep can also be used to enhance the modeling environment. This information is particularly useful for editing a concept definition and then making sure the network is properly updated, and for supporting various kinds of analysis tools.

3.2.5. Surface language support

As part of our efforts we have used a general lexical analysis and semantic interpretation package developed by [Wile 81]. This package gives a flexible surface language that allows easy modifications to accommodate extensions to NIKL as we develop them. It also opens up the possibility of defining highly application-dependent surface languages.

3.3. The Implementation

The current version of NIKL is in Common LISP, and we have experimented with its use on a variety of workstation and mainframe implementations of Common LISP. The integrated acquisition environment depends on some specific tools found in the Symbolics ZETALISP environment. We are actively pursuing the development of similar facilities that rely on a Common LISP implementation of a form and graphics package that requires only modest customizations for various graphic environments.

4. SUMMARY

NIKL is an evolving knowledge representation tool based on KL-ONE. The experiences gained in a variety of applications have shaped the current implementation. Principal enhancements made to NIKL in direct response to application needs were: the representation of roles more uniformly with concepts, support for negation, a connection to an assertional truth maintenance system, support for domain-specific reasoning (triggered by classification), and more complete inferences drawn as a result of having a relation hierarchy. Further enhancements have also been suggested and continue to be developed. They include: the representation of sequences and orderings, the availability of sufficiency reasoning in the classifier, more complete inferences regarding cycles in the models, and coordination with an assertional component that supports efficient database access. In addition, we have implemented and continue to develop tools for the knowledge acquisition environment. This work has also been sensitive to needs that have arisen out of several application environments. Principal developments include a Common LISP implementation and an integrated tool set that features graphic representations, formatting and paraphrasing tools, and flexible lexical analysis support. The addition of a more interactive editing style and various analysis tools is forthcoming.

ACKNOWLEDGMENTS

The development of NIKL and our plans have been the result of interactions with a number of AI researchers. Many of these were developers or experienced users of KL-ONE or one of its variants. The rest were potential or new users. The contributors represent many different organizations and research interests. The following have all made, and in many cases continue to make, significant contributions: Don Cohen, Neil Goldman, Bill Mann, Norm Sondheimer, Bill Swartout, Bob Neches, Don Voreck, Steve Smoliar, Ron Brachman, Victoria Pigman, Peter Patel-Schneider, Richard Fikes, Ramesh Patil, Jim Schmolze, Rusty Bobrow, Marc Vilain, Bill Mark, David Wilczynski, Mark Feber, and Tom Lipkis.

REFERENCES

[Bobrow & Webber 80] Robert Bobrow and Bonnie Webber, "Knowledge Representation for Syntactic/Semantic Processing," in Proceedings of the National Conference on Artificial Intelligence, AAAI, August 1980.
[Brachman 78] Ronald Brachman, A Structural Paradigm for Representing Knowledge, Bolt, Beranek, and Newman, Inc., Technical Report, 1978.
[Brachman 85] Ronald Brachman, "I Lied About the Trees," AI Magazine VI, 1985.
[Brachman & Levesque 84] Ronald J. Brachman and Hector J. Levesque, The Tractability of Subsumption in Frame-Based Description Languages, Fairchild Research Laboratories, Technical Report, 1984.
[Brachman & Schmolze 85] Brachman, R.J., and Schmolze, J.G., "An Overview of the KL-ONE Knowledge Representation System," Cognitive Science, August 1985, pp. 171-216.
[Brachman et al. 83] Ronald Brachman, Richard Fikes, and Hector Levesque, "KRYPTON: A Functional Approach to Knowledge Representation," IEEE Computer, September 1983.
[Cohen & Goldman 85] Cohen, D. and Goldman, N., Efficient Compilation of Virtual Database Specifications, 1985.
[Kaczmarek et al. 83] T. Kaczmarek, W. Mark, and N. Sondheimer, "The Consul/CUE Interface: An Integrated Interactive Environment," in Proceedings of CHI '83 Human Factors in Computing Systems, pp. 98-102, ACM, December 1983.
[Kaczmarek et al. ??] T. Kaczmarek, W. Mark, and D. Wilczynski, "The CUE Project," in Proceedings of SoftFair, July 1983.
[Mark 81] William Mark, "Representation and Inference in the Consul System," in Proceedings of the Seventh International Joint Conference on Artificial Intelligence, IJCAI, 1981.
[McAllester 82] D.A. McAllester, Reasoning Utility Package User's Manual, Massachusetts Institute of Technology, Technical Report, April 1982.
[Neches et al. 85] Robert Neches, William Swartout, and Johanna Moore, "Explainable (and Maintainable) Expert Systems," in Proceedings of the Ninth International Joint Conference on Artificial Intelligence, pp. 382-389, IJCAI/AAAI, August 1985.
[Patel-Schneider et al. 84] Peter F. Patel-Schneider, Ronald J. Brachman, and Hector J. Levesque, ARGON: Knowledge Representation Meets Information Retrieval, Fairchild Research Laboratories, Technical Report 654, 1984.
[Schmolze & Lipkis 83] James Schmolze and Thomas Lipkis, "Classification in the KL-ONE Knowledge Representation System," in Proceedings of the Eighth International Joint Conference on Artificial Intelligence, IJCAI, 1983.
[Sidner 85] Candace L. Sidner, "Plan parsing for intended response recognition in discourse," Computational Intelligence 1, 1985.
[Sondheimer 84] Norman K. Sondheimer, Ralph M. Weischedel, and Robert J. Bobrow, "Semantic Interpretation Using KL-ONE," in Proceedings of COLING 84, pp. 101-107, Association for Computational Linguistics, July 1984.
[Tou et al. 82] Tou, F.N., M.D. Williams, R. Fikes, A. Henderson, and T. Malone, "RABBIT: An Intelligent Database Assistant," in Proceedings AAAI-82, pp. 314-318, AAAI, 1982.
[Vilain 84] Mark Vilain, KL-TWO, A Hybrid Knowledge Representation System, Bolt, Beranek, and Newman, Technical Report 5694, 1984.
[Wilczynski 84] David Wilczynski and Norman Sondheimer, Transportability in the Consul System: Model Modularity and Acquisition, 1984.
[Wile 81] David S. Wile, POPART: Producer of Parsers and Related Tools, System Builders' Manual, 1981.

KNOWLEDGE REPRESENTATION / 985
A HYBRID STRUCTURED OBJECT AND CONSTRAINT REPRESENTATION LANGUAGE*

David R. Harris
Sanders Associates
95 Canal Street
Nashua, New Hampshire 03061

ABSTRACT

SOCLE is a hybrid representation system in which cells of constraint networks are identified with slots of frame networks. Constraint formulas are maintained with respect to slots in frame networks and in turn provide for the dependency regulation of values on the frames. This paper illustrates the use of SOCLE and outlines the control structure decisions made for its design and implementation.

I. INTRODUCTION

This paper describes SOCLE** (Structured Object and Constraint Language Environment), a hybrid system in which cells of constraint networks are identified with slots of frame networks. The hybrid system contains a structured object component (essentially FRL [Roberts, Goldstein, 77]) and a constraint component (based on the constraint-based programming language of Steele [Steele, 80]). As such, SOCLE's benefits include the following: representation for both structure and formulas, constraint propagation, default reasoning, requirement enforcement, contradiction resolution, and explanation of computations. While the notion of adding constraints to structured objects is not new (for example [Morgenstern, 84], [Batali, Hartheimer, 80]), SOCLE fully integrates the key features of mechanisms from the constraint and frame paradigms, and its architecture offers insights into communication and delegation between the two components. Recently several papers, including [Rich, 85], [Vilain, 85], and [Brachman, Gilbert, Levesque, 85], have reported on the advantages of such hybrid solutions for the representational needs of intelligent systems. It is our hope that SOCLE can serve as an additional data point for investigations of the space of hybrid solutions.

* (c) Copyright Sanders Associates, Inc., 1986. Work reported here was developed under internal research and development at Sanders Associates.

A. Background

A frame representation language is a programming language which supports a partitioning of knowledge into both "type" and "part" hierarchies. Individual frame objects are created containing slots which define the object and indicate relationships to other structured objects. The types of computation typically performed by frame-based systems include subsumption, defaults, and procedural attachment. A constraint-based language is used to express formulas and dependencies. The underlying mechanism supports (i) bi-directional propagation of values through constraint networks, (ii) recording of dependencies (to be used in support of contradiction resolution, retraction, and explanations of the history of computations), and (iii) persistence of certain types of values. As we will illustrate with an example below, SOCLE adds computational power to both the structured object and constraint paradigms.

B. Motivation for SOCLE

SOCLE was motivated by our work on intelligent engineering assistance programs. Such programs must contain representations for highly structured engineering knowledge and must, in a mixed-initiative mode, provide assistance when an engineer changes his or her mind in trying out solutions to complex problems. It was important for us to develop a completely integrated solution in which typical computation in one component invokes the correct response in the other component. A weak link between components would not have succeeded. The critical consideration which forced the hybrid approach was the fact that formulas relate variables only as they fill a particular role with respect to application objects. This is true for two reasons. First, multiple instances of formulas may be used in describing the same object.
For example, a description of a function might make use of a "range constraint" (minimum value + range = maximum value) applied to both the ordinate and abscissa. Each instance requires the instantiation of a separate constraint network. The knowledge about the number of constraint networks to be declared can be stored in the structured object component. Secondly, engineering formulas often are approximations which are not universally applied. Hence enforcement, dependent on the context, can be expressed declaratively in the structured object component.

** An architectural term for a projecting foundation piece.

From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved.

In the ensuing paragraphs, we will present SOCLE at the knowledge level and at the implementation level. At the knowledge level we illustrate the need for the hybrid approach with an example and then talk about the expressive power of SOCLE. At the implementation level we discuss issues of control structure: communication and division of labor between components.

II. KNOWLEDGE LEVEL DISCUSSION

A. An Example

This example is motivated by the use of a simple "distance = rate x time" formula in a design problem for air traffic control systems. Figure 1 shows a decomposition of the AIR TRAFFIC CONTROL SYSTEM using "input-from", "objects-tracked", "tracker", and "geographic-coverage" slots.

[Figure 1. Structured Object Decomposition for the AIR TRAFFIC CONTROL SYSTEM concept.]

Slot fillers for each of these parts are LONGRANGE-RADAR, COMMERCIAL-AIRCRAFT, ALPHA-BETA-TRACKER, and CRITICAL-AREA respectively. Each of these is a structured object which can inherit values, defaults, and procedural attachments from generalized concepts. In such a decomposition, slots which are related through formulas may be functionally far apart.
For example, the distance that an aircraft can cover before a track is established is related to the speed of the aircraft, the sweep rate of the radar, and the number of hits required for the tracker to establish a new track. Letting T = initiation time for establishing a track, N = number of radar returns required to establish a new track, R = the sweep rate of the system radar, D = distance covered by the aircraft before a track is established, and S = speed of a commercial aircraft, we can quickly establish the formulas:

T = R * N
D = S * T

These formulas are illustrated on the diagram (and conceptualized) as wiring networks. The advantage of a hybrid approach can be seen from two viewpoints. From the point of view of the structured object component, these wiring networks constrain the values placed on slots. If values are set for "sweep-rate", "number-of-radar-returns-required", and "speed" as shown, then values of 1 minute and 10 miles can be propagated to the "initiation-time" and "distance" variables, respectively. If, in an exploratory design session, an engineer sets the distance to 5 miles, SOCLE will declare a contradiction and help to resolve it by identifying premises and associated levels of confidence. From this view, what is significant is that variables located on structured objects are regulated using dependency information. From the point of view of the constraint component, the structured objects provide and maintain the context for constraint formulas. Constraints are only enforced for values that fill particular roles in structured object networks. Subsequent engineering changes can result in modification to these structured object networks, and constraint networks must be adjusted accordingly.
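The propagation just described can be sketched with a minimal local-propagation network in the style of Steele's constraint language. This is an illustration of the general technique, not SOCLE's implementation; the speed value is an assumption chosen so the numbers come out to the paper's 1 minute and 10 miles (600 mph, i.e. 1/6 mile per second):

```python
# Minimal local-propagation sketch of the radar example: T = R * N, D = S * T.
# Cell and Multiplier are illustrative names, not SOCLE's actual API.

class Cell:
    def __init__(self, name):
        self.name = name
        self.value = None
        self.constraints = []   # constraints to awaken when a value arrives

    def set(self, value):
        if self.value is None:  # premises are not overwritten in this sketch
            self.value = value
            for c in self.constraints:
                c.propagate()

class Multiplier:
    """Enforces a * b = product, propagating in whichever direction it can."""
    def __init__(self, a, b, product):
        self.a, self.b, self.product = a, b, product
        for cell in (a, b, product):
            cell.constraints.append(self)

    def propagate(self):
        a, b, p = self.a.value, self.b.value, self.product.value
        if a is not None and b is not None:
            self.product.set(a * b)
        elif p is not None and a is not None and a != 0:
            self.b.set(p / a)
        elif p is not None and b is not None and b != 0:
            self.a.set(p / b)

# Wire up the example: T = R * N and D = S * T.
R, N, T = Cell("sweep-rate"), Cell("returns-required"), Cell("initiation-time")
S, D = Cell("speed"), Cell("distance")
Multiplier(R, N, T)
Multiplier(S, T, D)

R.set(12)        # seconds per sweep
N.set(5)         # radar returns needed to establish a track
S.set(1 / 6)     # miles per second (assumed: 600 mph)
print(T.value)   # 60 seconds, i.e. 1 minute
print(D.value)   # about 10 miles
```

Because each cell awakens its attached constraints whenever it receives a value, setting the distance directly (as the engineer does in the text) would instead flow backward through the same Multiplier objects, which is where SOCLE's dependency records and contradiction resolution take over.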
Hence in the example, if the "input-from" slot filler is replaced by a radar with a sweep rate of 10 seconds rather than 12 seconds, SOCLE will move the constraint network to the new radar, disconnect the 12-second value from formulas, retract any values for which the 12 seconds was a premise, assert the fact that a new value of 10 seconds is to be used, and propagate appropriate values in the constraint network. This enforcement of formulas between values only as they fill slots in structured object networks is a key feature of the hybrid system.

B. Expressibility

In addition to the somewhat standardized vocabulary of frame systems, SOCLE includes functions which mix notions from the structured object and constraint paradigms. For example, levels of confidence (DEFAULT, SUPPOSITION, BELIEF, and CONSTANT) can be stated for values at particular locations in structured objects. The declaration of formulas is an important aspect of using SOCLE. Two methods are available for this. In both methods, the functions for declaring formulas work with pathnames (i.e., sequences of slots whose fillers are frames) for variables. This idea is also used by Morgenstern [Morgenstern, 1984] in declaring constraint equations in semantic networks.

First, a priori formulas can be declared on generic structured objects. The mechanism for installing constraints starts with structured-object-based inferencing. We profit from inheritance by declaring the formula on a CONSTRAINT slot of the most general concept appropriate. When an individual structured object is instantiated, procedural attachments are placed along the path to the variable referred to in the constraint. These procedural attachments are charged with installing and maintaining the constraint network when changes are made in the participating structure.
In the example, the "distance = rate x time" formula, referred to above, could be declared on the AIR-TRAFFIC-CONTROL structured object as follows:

(air-traffic-control
  (ako ($value (system)))
  (constraint
    ($value ((multiplier (at* tracker initiation-time)
                         (at* objects-tracked speed)
                         (at* geographic-coverage distance))))))***

Formulas can also be declared for variables found only on specific individual structured objects. In this case, the maintenance of context along structured object links is forfeited. As an example, one might declare the formula between "sweep-rate", "number-of-radar-returns-required", and "initiation-time" by invoking:

(multiplier (at radar-43 sweep-rate)
            (at tracker-21 number-of-radar-returns-required)
            (at tracker-21 initiation-time))

C. Assumptions

Two assumptions of the current implementation should be mentioned. First, SOCLE supports numbers, symbols, sets, and number-unit pairs as slot values to be tied to constraints (internally, constraint primitives employ functions which understand number conversions and dimensional analysis). Second, it is assumed that all slots which participate in constraint formulas are single-valued (i.e., x is the slot filler of s on frame f means s(f) = x).

III. IMPLEMENTATION LEVEL DISCUSSION

This section is organized to describe the control structure issues outlined in Brotsky and Rich's paper [Brotsky, Rich, 1985] on hybrid systems.

*** The hyphenated expressions indicate that the frames are instantiations of generic frames for RADAR and TRACKER. The difference between the AT and AT* functions is that the first argument to the AT function is the name of a frame, while the AT* function is evaluated when the frame name is bound to the frame on which the a priori formula is defined.

A. Communication

Communication between the two components of SOCLE is performed through a collection of cells which are attached to slots of frames. Frame-generated values are pushed into these cells.
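A cell of this kind can be pictured as a small object that knows its frame location and the confidence level of its current value. The sketch below is hypothetical (SOCLE is not written in Python, and these names are invented); it shows only the confidence ordering over SOCLE's four levels, where a weaker assertion bows out to a stronger one:

```python
# Hedged sketch of the cell idea: a cell sits on a frame slot, records a
# value with its confidence level, and arbitrates competing assertions.
from enum import IntEnum

class Confidence(IntEnum):   # SOCLE's levels, weakest first
    DEFAULT = 0
    SUPPOSITION = 1
    BELIEF = 2
    CONSTANT = 3

class SlotCell:
    def __init__(self, frame, slot):
        self.frame, self.slot = frame, slot   # location in the frame world
        self.value = None
        self.confidence = None

    def set(self, value, confidence):
        # A stronger or equal assertion displaces a weaker one; a weaker
        # one "bows out" silently. (A simplification: real contradiction
        # resolution also retracts values that depended on the loser.)
        if self.value is None or confidence >= self.confidence:
            self.value, self.confidence = value, confidence
            return True
        return False

cell = SlotCell("radar-43", "sweep-rate")
cell.set(12, Confidence.DEFAULT)         # inherited default
cell.set(10, Confidence.SUPPOSITION)     # engineer tries a value: it wins
print(cell.value)                        # 10
print(cell.set(12, Confidence.DEFAULT))  # False: the default bows out
```

Keeping the frame location on the cell is what lets either component cross the boundary: a constraint that loses a value can ask the frame at `cell.frame` for a replacement, and a frame inference can push a value out into any constraints attached to the cell.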
Subsequently, the cells are used for setting values, retrieving values and explanations, and invoking procedural attachments.

1. Setting Values: A value may be set through a frame-based inference. For example, a request for a value may be answered by inheritance of a default. This value is returned and also stored on the cell attached to the slot. In addition, the confidence level of default is noted so that the value will behave as a default in constraint networks. A value may also be remotely set through a constraint-based inference. Computation in the constraint system proceeds by constraints awakening when new values are set for participating variables. If the variable is in fact the slot of a frame (this information is stored on the cell), then control is passed to the frame system to awaken procedural attachments that reside there.

2. Retrieving Values: The frame-constraint boundary may need to be crossed to retrieve values. A frame-based request for a value is honored by looking on the cell attached to a slot of a frame. From the other side, constraint computations may beg frame networks for values. This occurs in the course of contradiction resolution and retraction. When cells lose values, the frame location is determined and a frame-based request for a value is made. If a value is available, it is immediately stored back in the cell as described above.

In summary, all communication between the two components takes place using cells. These cells know their place in both worlds and contain the current state for the variables of interest. The division of labor and the needs for crossing the frame-constraint boundary are the topics of discussion in the next section.

B. Division Of Labor

1. Strategies: Before explaining the approach we have taken in SOCLE, we might pause to consider two extreme strategies for integrating structured objects with constraints.
On the one hand, it could be the responsibility of the constraint mechanism to perform all frame-based inferences. Thus, for example, rather than having a frame retrieval function which uses subsumption to find a value, one could install "inheritance constraint" networks which link together values on all slots of two frames when one subsumes the other. This would lead to some difficulties, however. Importantly, propagation and contradiction resolution strategies would need to be tailored to support exception links. Also, additional control structure would be required when inheritance is considered prior to formula computation. On the other hand, one could move all the information stored in constraint nodes onto frame facets. In addition to default, type, and procedural attachments, one might have supplier, reason, and associated constraint pins as facets. The complexities involved in computations for local propagation, retraction, and contradiction resolution could be made the responsibility of frame representation language functions, but now all the checking related to constraint calculation would occur all the time, whether or not there was ever any intent to tie a particular frame-slot to a constraint network. At issue is the percentage of frame-slots in the application domain which can be expected to be tied into these constraint networks. In our work, only about 10% of the slots serve as variables for constraint networks. We have deemed both of these strategies to be inappropriate. The first is inappropriate due to differences between inheritance and constraint propagation. The second is inappropriate due to the expectation that only a small percentage of slots will participate in constraint networks.

2. Features Of A Good Hybrid System For Intelligent Assistance:

In order to divide responsibility for computations it was necessary to look carefully at the union of the features provided by both mechanisms.
With this in mind, we generated the following list of important benefits:

a. Representation of structured engineering knowledge and engineering formulas. This item falls into the province of a frame-based system. Of note is the fact that formulas are expressed declaratively on an appropriate frame. When this frame is instantiated, the constraint function (e.g. MULTIPLIER in the example above) is invoked to install the constraint network.

b. Propagation of values which are constrained by underlying engineering formulas. This item is primarily the responsibility of the constraint component. If, however, values are lost in constraint network computation, then control is returned to the frame component to locate potential values there.

c. Default reasoning, wherein default values eagerly assert themselves in formulas, but immediately bow out when they have created a contradictory state. Default reasoning has semantics in both paradigms. An important consideration of the implementation was to ensure that the two notions worked correctly together. For frame-based computation, defaults are used only when values are not present or cannot be inherited. In constraint networks, there are potentially two dimensions to be considered. First, defaults can be thought of as being the weakest confidence level for assertions. In this sense, the notion in the two paradigms is the same. In addition, however, there is a notion of persistence associated with constraint-based computation. Values which are persistent must actively force themselves into formulas when they can.**** In SOCLE, we have placed responsibility for maintaining default state information on the frame component. When defaults are declared they are eagerly pushed onto instantiated frames and hence out into attached constraint networks.

**** While this feature can be associated with any confidence level, we have chosen to associate it with the weakest level. This is consistent with Steele's implementation.
Also, when a value is lost (through retraction of a supporting premise, perhaps), the transfer of control back to the frame network described in the paragraph above will of necessity discover and re-assert default values.

d. Enforcement of requirements imposed by both structures and formulas. The enforcement of requirements on values is of course exactly what constraint networks are all about. However, one can also declare explicit requirements on a frame. For example, a requirement that speed be within a valid range (imposed by the laws of physics) might properly be placed on a MOVINGOBJECT structure independently of constraint networks. On the other hand, a value for the speed of a particular aircraft may be regulated by a formula which is enforced by a constraint network. SOCLE permits the assertion of values only as they are consistent with both types of requirements.

e. Contradiction resolution based on recordings of premises for inferences. Resolution takes advantage of annotations for the level of confidence that an engineer has in the premise. Contradiction resolution is performed in both the frame and constraint mechanisms. The determining factor is whether or not a new value being set is intended to be a premise or is remotely established through other premises in constraint networks. In the first case, a preliminary investigation can compare the levels of confidence between the new assertion and the old. For example, a default would bow out to a supposition. The second case can only be resolved in the constraint network. Values dependent on the old value are retracted, the new value is asserted, and an attempt is made to settle out the state of the constraint network. When SOCLE cannot automatically resolve the contradiction, it informs the user of the problem and requests that the user either retract a premise or declare a formula's application to be invalid.

f. Explanation of values based on history of computations.
Explanation is handled totally by the constraint network, although values to be explained are referenced by their location in the frame network.

IV. CONCLUSIONS

In summary, SOCLE embodies the power of the above six items: representation for structure and formula, propagation, default reasoning, requirement enforcement, contradiction resolution, and explanation. It is a generally useful knowledge representation language in application areas which contain highly structured knowledge, including formulas which tie together variables from the structures. SOCLE is currently being used on several projects at Sanders. These include projects in system and software requirements analysis, automatic test equipment reprogramming, and reliability simulation.

I would like to thank Chuck Rich for his suggestions and encouragement on this effort. Important contributions to the design and implementation of SOCLE were made by Andy Czuchry, Terry Gill, and Lynne Higbie.

REFERENCES

[1] Batali, Hartheimer, "The Design Procedure Language Manual", MIT/AI Memo 598, 1980.
[2] Brachman, Gilbert, Levesque, "An Essential Hybrid Reasoning System: Knowledge and Symbol Level Accounts of KRYPTON", Proc. IJCAI-85, Los Angeles, California, Aug. 1985, pp. 532-539.
[3] Brotsky, Rich, "Issues in the Design of Hybrid Knowledge Representation and Reasoning Systems", Proc. of the Workshop on Theoretical Issues in Natural Language Understanding, Halifax, Nova Scotia, May 1985.
[4] Morgenstern, "Constraint Equations: A Concise Compilable Representation for Quantified Constraints in Semantic Networks", Proc. AAAI-84, Austin, Texas, Aug. 1984, pp. 255-259.
[5] Rich, "The Layered Architecture of a System for Reasoning about Programs", Proc. IJCAI-85, Los Angeles, California, Aug. 1985, pp. 540-546.
[6] Roberts, Goldstein, "The FRL Primer", MIT/AI Memo 408, 1977.
[7] Steele, "The Definition and Implementation of a Computer Programming Language Based on Constraints", MIT/AI Technical Report 595, 1980.
[8] Vilain, "The Restricted Language Architecture of a Hybrid Representation System", Proc. IJCAI-85, Los Angeles, California, Aug. 1985, pp. 547-561.

990 / ENGINEERING
A System Which Uses Examples To Learn VLSI Structure Manipulations

Richard H. Lathrop* and Robert S. Kirk**

* MIT Artificial Intelligence Laboratory, NE43-795, 545 Technology Square, Cambridge, MA 02139
** Gould/AMI Semiconductors, Inc., 3800 Homestead Road, Santa Clara, CA 95051

ABSTRACT.

Focusing especially on the later stages of the design task, when a complete (or nearly so) design is being optimized at the structural level prior to final physical layout, we identify some aspects of the VLSI domain which complicate efficient design both for human and machine agents. We describe a simple but useful idea and a prototype implemented system which partially addresses these problems. Examples are conveyed graphically from an existing design. The system automatically learns a design precedent enabling it to infer local hierarchy corresponding to the example in new designs. The teacher may also substitute alternative actions, described to the system in its native Y hardware description language, which the system remembers and can also apply later. CONSTELLATION can infer local hierarchy; undo and rationalize clever local tricks and work-arounds; search for situations in which a specific local optimization can be applied; and modify the circuit as described by the teacher. The system is in experimental use in a production environment.

INTRODUCTION.

Today's complicated VLSI circuits require highly intricate structure. Especially in the later stages of the design process, it is difficult for reasoning systems (human or machine) to verify that the implemented circuit actually reflects the designer's original intent, or to manipulate the structure in a correct, coherent, efficient manner. Two domain attributes exacerbate this:

- The necessarily low level of design descriptions in the intermediate and late design stages obscures the designer's intent with excessive detail.
- The frequent use of clever, situation-specific sub-circuits obscures the designer's intent with technology-dependent "tricks" and "work-arounds".

Both contribute additional layers to the task of reasoning about (or correctly changing) a design. Even when a circuit was designed using a hierarchical methodology, existing design tools often discard this knowledge soon after the initial design capture. This makes it difficult for subsequent tools or reasoners to exploit regularities or simplifications indicated by the hierarchical organization. Complexity of circuits in the VLSI domain also impedes the effective application of the designer's expertise. Experienced circuit designers use many techniques to improve the overall quality of a design. In a large and complicated design it is virtually impossible to ensure that every optimization a designer may know has been applied everywhere possible. The finished design may be less efficient, even though the designers may know applicable techniques to improve it further, because of the practical difficulty of actually finding a particular place in the design where a particular technique applies. Knowledge in the VLSI domain takes many forms: global vs. local, structural vs. functional, synthesis vs. analysis, strategy vs. implementation, etc. This paper focuses on local structural knowledge and its application to analysis and refinement of nearly complete designs. We examine a simple but useful idea addressing the sub-tasks of (1) reducing design complexity, by inferring local hierarchy and recognizing clever tricks and work-arounds; and (2) improving overall design quality by performing particular optimizations wherever they apply in a particular design.

Figure 1. COMET Chip. Approximately 30,000 transistors comprising about 3,000 cells (at the logic-gate and flip-flop level) and about 3,000 connecting busses. CONSTELLATION often runs at about 1-2 seconds per match-and-replace cycle on this chip.
1024 / ENGINEERING

The implemented system (CONSTELLATION) described below is a learning apprentice in the sense introduced by LEAP [18]: an intelligent tool which is taught by an experienced designer in the course of working on an actual problem. While LEAP addressed generalization in the functional synthesis task beginning with the early stages of a design, CONSTELLATION addresses a different task of structural analysis and refinement operating on a nearly complete design. Our goal is to capture design expertise relevant to this late stage of the design process, and automatically apply it to new designs. We do this by using a construct we call a design precedent, loosely defined as, "A situation I have seen before," and, "What to do when I see it again." These pattern/action pairs closely resemble the rules in a traditional rule-based expert system [6, 23]. The major difference between CONSTELLATION and a traditional rule-based system lies in the source and nature of the pattern/action rules. They are captured, not by a Knowledge Engineer, but by the machine under the control of an experienced designer working from an actual design. The designer indicates a relevant situation by pointing at an example of it, and then describes the appropriate action to take in the Y hardware design language normally used to design circuits. CONSTELLATION's design precedents embody specific local situations seen before and specific actions or tricks applicable there, rather than global heuristic rules. Nonetheless, aspects of the knowledge acquisition process described should be partially applicable to traditional rule-based systems. The system is implemented in LISP on a Symbolics 3600 employing an object-oriented message-passing architecture.
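The pattern/action character of a design precedent can be illustrated over a toy netlist. The data model and names below are invented for the sketch (CONSTELLATION itself uses the Y database and is written in LISP); the "pattern" is a pair of connected cell types and the "action" collapses the match into one module, inferring a fragment of local hierarchy:

```python
# Hedged sketch of a design precedent as a pattern/action pair over a
# netlist. A netlist maps instance name -> (cell type, input nets, output net).

def match(netlist, pattern):
    """Pattern: find an instance of type b whose input is driven by type a."""
    a_type, b_type = pattern
    for b_name, (b_cell, b_ins, _) in netlist.items():
        for a_name, (a_cell, _, a_out) in netlist.items():
            if a_cell == a_type and b_cell == b_type and a_out in b_ins:
                return a_name, b_name
    return None

def apply_precedent(netlist, pattern, replacement_type):
    """Action: collapse the matched pair into one module (local hierarchy)."""
    hit = match(netlist, pattern)
    if hit is None:
        return netlist
    a, b = hit
    _, a_ins, a_out = netlist.pop(a)
    _, b_ins, b_out = netlist.pop(b)
    ins = a_ins + [n for n in b_ins if n != a_out]  # internal net disappears
    netlist[f"{a}+{b}"] = (replacement_type, ins, b_out)
    return netlist

design = {
    "g1": ("AND2", ["n1", "n2"], "n3"),
    "g2": ("INV",  ["n3"],       "n4"),
}
# Precedent learned from an example: an AND2 feeding an INV is a NAND2.
apply_precedent(design, ("AND2", "INV"), "NAND2")
print(design)   # {'g1+g2': ('NAND2', ['n1', 'n2'], 'n4')}
```

Repeating the match-and-replace cycle over a 3,000-cell netlist, with the replacement either a hierarchy node (inferring structure) or an alternative sub-circuit supplied by the teacher (optimizing), is the essence of what the sections below describe.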
The largest design that has been run on this system consists of about 30,000 transistors comprising approximately 3,000 primitive cells (at the level of logic gates and flip-flops) connected by approximately 3,000 primitive busses (figure 1). On this chip the system often takes about 1-2 seconds per match-and-replace cycle, rendering it suitable for interactive use.

DATABASE MODEL.

Our database model is based on the Y-diagram proposed by Gajski and Kuhn [9, 11]. The original Y-Database represented the Functional, Structural and Physical aspects of a design as the three arms of a Y. The Functional level describes the behavior and algorithms embodied, and the Physical level describes the geometry and actual layout. The Structural level, which concerns this paper, describes the component modules and their connections. Each module (block) has a few named communication ports, connected by busses (nets) to the ports of other modules. Recursively, each module is internally composed of sub-modules whose ports are connected by busses, and in this way design hierarchy is realized. We will typically be concerned with a netlist, a list of the modules at the black-box level and of which ports to connect together.

The Star design system coordinates several related design tools. It allows user interaction through both a generic graphical editor and the textual Y hardware description language. Objects in the database are automatically invertible into the Y language, and satisfy a meta-circularity condition that no information be lost or gained in going from internal database representation to text and back. The Y language is embedded in LISP, and Y and LISP statements may be arbitrarily intermixed. The concept is similar to DPL (Design Procedure Language) [2], except that DPL is restricted to physical representations (compare Slices [22]).

¹A trademark of Symbolics, Inc.

KNOWLEDGE ACQUISITION ENVIRONMENT.
In any system driven by large amounts of domain-specific knowledge, the knowledge acquisition bottleneck must be addressed. In rule-based expert systems this is the difficulty of extracting knowledge from experts, and work on this problem includes [7, 21]. In VLSI it is reflected in the difficulty of accumulating design libraries, in the emphasis on and difficulty of capturing the designer's intent, and so forth. Any system (such as ours) which is based on large amounts of knowledge acquired from human experts must make this process as painless as possible.

The intent of our interface is to have the designer examine an existing design and point to an example where the precedent should apply. The system will produce a precedent which automatically replaces this with a fragment of local hierarchy when the situation is encountered in the future. This default action may be changed by substituting an alternative action, usually described by the designer in the system's native Y hardware description language.

A SESSION WITH CONSTELLATION.

The environment is best described by walking through a session with the system. First an existing design is selected and displayed on the graphics window. Once the design schematic is on the screen, the designer uses the mouse to select interesting components as in figure 2. When all the desired components have been selected the designer selects the Precedent pop-up menu and chooses Make-precedent. The system then automatically performs the following steps:

1. Isolates and verifies the sub-circuit represented by the selected modules.
2. Inverts the isolated sub-circuit into a Y-language Module Definition (figure 3a).
3. Sets up a SIMMER functional simulation environment and test pattern.
4. Creates a precedent pattern/action rule definition (figure 3b) consisting of: (a) a pattern which will trigger the precedent rule when subsequently recognized, and (b) an action clause which replaces the recognized pattern with the higher-level module defined in step 2 (a fragment of low-level hierarchy).

Figure 2. Example datapath circuit. Two cells (an INVERTER and a NOR gate) have been selected to form a design precedent for the system to record.

[Figures 3a and 3b, printed side by side in the original, list the Y-language code constructed by the system: a DEFMODULE definition of INVNOR ("A NOR gate with one input inverted") built from the selected NO025 and IN015 cells with busses CB, D2-0, DA-0 and DEE-0, and a DEFPRECEDENT INVNOR-PRECEDENT whose :Y-PATTERN repeats that structure and whose :Y-ACTION instantiates a UNIQUE-NAME'd INVNOR module, reconnects the external busses to its ports, and ZAPs the two original cells.]

Figure 3a.
The Y-Language Definition Constructed.

At some later point the designer or a design supervisor system may call up a new design, retrieve previously constructed precedents, and apply them to the new design. CONSTELLATION will search the design for matches to the precedent patterns. When a match is found the action is performed, modifying the circuit as specified. Figure 4 shows the circuit of figure 2, after the precedent of figure 3b has been applied. The design has been simplified by replacing the two-component INVERTER/NOR combination with a fragment of low-level hierarchy, INVNOR, representing a NOR gate with one input inverted. Note that a series of components have disappeared. They have been spliced out and replaced by the new components, which have not yet been redisplayed.

Figure 3b. The Precedent Definition Constructed.

Figure 4. Datapath After Precedent Application. The precedent defined in figure 3b has been applied and the module definition of figure 3a has been spliced in (but not yet redisplayed).

Reviewing what has happened, the designer explicitly pointed out the precedent with the mouse. The system then automatically isolated the sub-circuit, inverted it into Y-language statements, produced a module definition for the Y design library, constructed a simulation environment, and built a precedent for the precedent library. Subsequently applying the precedent to a design causes CONSTELLATION to search the design for sub-circuits which match the precedent pattern. When these are found the precedent action is executed. The default action created by the system is to infer a new layer of low-level hierarchy in the design, corresponding to the module definition produced when the precedent was created.
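The match-and-splice step just described can be sketched in miniature. The following Python toy (our own simplification, not the Y/LISP implementation; the `{instance: (cell_type, {port: bus})}` netlist shape and all function names are illustrative assumptions, with bus names borrowed from the INVNOR example) shows a pattern match followed by the default action of splicing in a fragment of hierarchy:

```python
# Toy match-and-replace cycle: find an inverter feeding a NOR gate,
# then replace the pair with a single INVNOR instance.
def find_inv_nor(netlist):
    """Find an IN015 inverter whose output bus drives a NO025 input."""
    for inv, (t1, p1) in netlist.items():
        if t1 != "IN015":
            continue
        for nor, (t2, p2) in netlist.items():
            if t2 == "NO025" and p1["Q"] in (p2["A"], p2["B"]):
                return inv, nor
    return None

def splice(netlist, inv, nor):
    """Default action: drop the now-internal bus, keep the outer busses,
    and instantiate the higher-level module in place of the pair."""
    internal = netlist[inv][1]["Q"]                   # bus between the two cells
    ports = {p: b for p, b in netlist[nor][1].items() if b != internal}
    ports["D"] = netlist[inv][1]["A"]                 # the inverted input
    del netlist[inv]
    del netlist[nor]
    netlist["INVNOR-0"] = ("INVNOR", ports)

netlist = {
    "CSRA-0": ("IN015", {"A": "DEE", "Q": "D2"}),
    "CSRB-0": ("NO025", {"A": "D2", "B": "CB", "Q": "DA"}),
}
m = find_inv_nor(netlist)
if m is not None:
    splice(netlist, *m)
```

After the splice the internal bus D2 has disappeared along with the two cells, just as the spliced-out components disappear in figure 4.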
Figure 4 shows figure 2 after the precedent described in figure 3b has been applied and the low-level hierarchy fragment defined in figure 3a has been inserted (but before it has been redisplayed).

In many circumstances, some more complicated action on the design may be indicated - perhaps recognizing and undoing a complicated and intent-obscuring situation-specific implementation, or applying an optimizing technique to improve overall design quality. In these cases, the designer may describe to the system the appropriate action to perform on recognizing the pattern. These would normally be the same Y-language statements that the designer would encode to effect the change by manually editing the design definition text. By manually editing the precedent definition text instead, the actions can be remembered and applied to other designs under the control of the designer or a design supervisor system.

THE CONSTELLATION SYSTEM.

This section will briefly describe the major components of CONSTELLATION. These include the Database (already described), the Isolator, the Inverter, the Precedent Builder, the Simulation Model Builder, the Matcher, and the Action Evaluator.

Once a precedent has been indicated from within the graphical interface by selecting a group of modules, the Isolator is responsible for figuring out its boundaries and surgically removing it from the surrounding data structures. It does this by following each bus connected to each port of each selected module. Purely internal busses connect only to other selected modules, and are entirely within the precedent boundaries. Otherwise a port is constructed which divides the internal and external parts of the bus (connecting respectively selected and unselected modules). The port type (INPUT, OUTPUT, I/O, etc.) is inherited from the internal port types to which it connects.
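The Isolator's bus classification can be sketched as follows (a Python simplification under our own assumptions, not the actual CONSTELLATION code; the netlist maps each instance to a `{port: bus}` dictionary):

```python
# Sketch of the Isolator: classify each bus touching a selected module as
# purely internal (connects only selected modules) or boundary-crossing,
# in which case a new port is created to divide its internal and external
# parts. (In the real system the port's type would be inherited from the
# internal port types it serves.)
def isolate(netlist, selected):
    selected = set(selected)
    internal, boundary_ports = set(), {}
    for inst in selected:
        for bus in netlist[inst].values():
            users = {i for i, ports in netlist.items() if bus in ports.values()}
            if users <= selected:          # only selected modules use this bus
                internal.add(bus)
            else:                          # crosses the precedent boundary
                boundary_ports[bus] = bus + "-PORT"
    return internal, boundary_ports

netlist = {
    "SRC-0":  {"OUT": "DEE"},
    "CTRL-0": {"OUT": "CB"},
    "CSRA-0": {"A": "DEE", "Q": "D2"},
    "CSRB-0": {"A": "D2", "B": "CB", "Q": "DA"},
    "LOAD-0": {"IN": "DA"},
}
internal, ports = isolate(netlist, ["CSRA-0", "CSRB-0"])
# D2 is purely internal; DEE, CB and DA cross the boundary and get ports.
```

The subset test `users <= selected` is exactly the "connects only to other selected modules" condition from the text.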
The excised precedent is then passed to the Inverter, which generates the design library description of the composite block, figure 3a. The Precedent Builder modifies this suitably to serve as the precedent pattern, creating a default action to produce the precedent library entry, figure 3b. The default action is to disconnect the recognized modules from the circuit where they occur, to instantiate in their place the design library entry from the Inverter, and to connect the new ports into the circuit. In this way a fragment of hierarchy can be automatically reconstructed. Arbitrary Y or LISP statements may be manually substituted for the system-generated default action.

The isolated precedent data structure, together with external ports, is passed to the Simulation Model Builder. This module retrieves the SIMMER [14] functional simulation models of the component blocks, and constructs a structural block functional simulation library model.

When the precedent is actually applied, the Matcher enumerates the instances in the new design which match the precedent pattern. A standard depth-first graph-matching search is used. For efficiency, only those neighborhoods which potentially contain part of the precedent are searched. When an instance of the pattern is found, matching data structures from the database are bound and passed to the Action Evaluator. The Action Evaluator executes the action part of the precedent (which may be statements in the database language, or arbitrary LISP code).

APPLICATIONS.

CONSTELLATION is currently in use on an experimental basis in a production environment. Gould/AMI operates a silicon foundry and often takes in designs from customers for fabrication. These frequently arrive in a netlist format with no hierarchy information. There are three general areas where we have applied the tool. First is inferring hierarchy in order to reduce design complexity.
Second is circuit criticism aimed at improving the overall quality of the design. The last addresses silicon compilation.

Inferring Hierarchy.

Highly structured and ordered systems are much easier to deal with than unstructured, complicated ones. Design software could do a more efficient job in terms of chip performance and area efficiency if additional hierarchy information could be obtained. One application of the precedent-based reasoning system is to infer hierarchy from non-hierarchical structures. By applying different rule sets, different hierarchies can be extracted.

We found that different foundry customers were combining the standard cell set in similar ways to create higher-level functions. In some cases it became obvious that special optimized cells should be created to reduce chip area. On the large 3,000 cell design of figure 1, we discovered a 62 cell pattern which was part of a larger 500 cell regular structure which could be laid out much more efficiently by taking advantage of the regular structure.

Circuit Criticism.

The basic pattern recognition capabilities of the precedent-based reasoning system lend themselves to design criticism. By pointing out examples of bad circuit design practice, a library of criticism precedents can be developed. Presently the knowledge acquisition environment builds only the default action form directly, but the suggestion-reporting action can easily be substituted manually.

In practice it is useful to combine some hierarchy inference precedents with the circuit criticism precedents. The circuit is modified considerably by the hierarchy inference precedents to bring out major design abstractions explicitly and to alter logic structures which obscure the functions of the circuit. Once the circuit is reorganized, the criticism precedents are applied.
We expect that, as confidence in the correctness of the precedent-based suggestions increases in this sensitive and critical production environment, more of the actual circuit modification will be handled by this tool, as has been demonstrated in the laboratory.

Silicon Compiler Optimization.

In the future we plan to develop a connection to a Function-to-Structure silicon compiler. In anticipation of this we have explored the use of CONSTELLATION as an optimizing post-processor. Function-to-Structure silicon compilers typically deal with high-level architectural issues and hence produce netlists which are optimized at the global (system) level. There are usually additional local (logic-level) optimizations which can be performed on the final netlist. (In fact, these optimizations can be performed on any netlist, regardless of source.) A group of precedents could be accrued which were customized to the idiosyncrasies of a particular silicon compiler, by inspecting its output designs and indicating where optimizations should be made.

RELATION TO OTHER WORK.

The basic idea of precedent-based reasoning arose in machine-learning research, from intuitions that a situation seen before should be of assistance in understanding a new situation. Winston et al. [25] attempted to infer potential functions of a novel device by noticing similarities between its structure and the structure of familiar devices whose function is known. The ideas of Minsky [17] on global computation through the interaction of numerous small, local pieces of knowledge have also strongly influenced the conceptual design. Precedents as we use them are accordingly less complex than in [25], and so each precedent can be considered a small "expert" on its own small patch of structure. In the VLSI domain Ressler [19] in analog (op amp) design, and Hall [10] in digital design, have applied the idea of a "design grammar" to the task of design synthesis.
Brotsky [4] has implemented a fast parsing algorithm for some graph grammars. The LHS and RHS of these grammar rules correspond to the pattern and action parts of our precedents. Cliches were used as abstracted templates of commonly-used software-engineering strategies in the Programmer's Apprentice [20, 24] for automatic programming. Kramer [13] has investigated the application of cliches to the control of reasoning in the VLSI design synthesis task.

Attempts to automatically group elements of structure together have been reported for a schematic diagram graphical display [1] and a test pattern generator [8]. Although DPL was purely a physical level language, elsewhere Sussman's concept of Slices [22] (the re-expression of part of a design in a different representation) explores a general abstraction mechanism. The LEAP learning apprentice program [18] proposed an intelligent tool which the expert uses in the normal course of solving a problem. The first rule-based system ever constructed (DENDRAL [5, 16]) used (chemical) structure graphs in the if- and then-parts of its rules. Manually editing the precedent is similar to the "copy&edit" mentioned by Lenat [15] for rule creation, except that the total universe of potential objects to copy is all of VLSI design space (huge) rather than existing-rule space (a few hundred). Rule-based expert systems have already proven useful in VLSI (e.g., [12]). We anticipate that our precedent library could be incorporated into a pattern-directed inference control mechanism, providing some of the low-level rule knowledge required.

CONCLUSION.

Precedent-based reasoning in CONSTELLATION provides a tool for capturing procedural structural knowledge about a design. The knowledge acquisition environment permits the designer to simply point out a pattern and specify what is to be done when the pattern is found in a circuit.
This provides an easy way to capture and routinely apply simple design transformations in common design situations. Other and more powerful reasoners will be necessary to guide major design decisions, and to reason about special cases for which no precedent is known. The underlying database, Y language, and computation environment serve to make the precedent pattern/action rule mechanism capable of supporting a diverse range of applications. Future developments will focus on generalizing the precedent pattern/action processor, and use of information on the functional and physical arms of the Y diagram.

ACKNOWLEDGMENTS.

The authors would like to thank Mark Alexander, Gavan Duffy, and John Mallery for their contributions to this project. Comments from Kelly Cameron, Richard Doyle, Bob Hall, Walter Hamscher, David Kirsh, Jintae Lee, Doug Marquardt, Bruce Richman, Brian Williams, and Patrick Winston were also very valuable. Personal support for the first author was furnished by an IBM Graduate Fellowship, and during the early stages of this research was furnished by an NSF Graduate Fellowship and by a research/teaching assistantship from MIT. This paper describes research performed jointly at the MIT Artificial Intelligence Laboratory, and at the AMI CAD Research Laboratory. Support for the MIT Artificial Intelligence Laboratory's research is provided in part by the Advanced Research Projects Agency under Office of Naval Research contract N00014-80-C-0505.

REFERENCES.

[1] Arya, Anjali, et al.; "Automatic Generation of Digital System Schematic Diagrams", 22nd IEEE Design Automation Conference (DAC'85), Las Vegas, Nev., June 23-26 1985, paper 24.4, pp. 388-395.
[2] Batali, John; Hartheimer, Anne; "The Design Procedure Language Manual", M.I.T. Artificial Intelligence Laboratory Memo 598, Sept. 1980.
[3] Brown, Harold; Tong, Christopher; Foyster, Gordon; "Palladio: An Exploratory Environment For Circuit Design", Computer, Dec. 1983, pp. 41-56.
[4] Brotsky, Daniel C.; "An Algorithm for Parsing Flow Graphs", M.I.T. Artificial Intelligence Laboratory Technical Report 704, March 1984.
[5] Buchs, A., et al.; "Applications of Artificial Intelligence for Chemical Inference VI. Approach to a General Method of Interpreting Low Resolution Mass Spectra With a Computer", Helvetica Chimica Acta, 1970, 53:1394.
[6] Davis, R. and Lenat, D.; Knowledge-Based Systems in Artificial Intelligence, McGraw-Hill (New York), 1982.
[7] Davis, R.; "Knowledge Acquisition in Rule-Based Systems: Knowledge About Representations as a Basis for System Construction and Maintenance", in Pattern-Directed Inference Systems, D. Waterman and F. Hayes-Roth (ed.), Academic Press (New York), 1978, pp. 99-134.
[8] Delorme, C., et al.; "A Functional Partitioning Expert System for Test Sequences Generation", 22nd IEEE Design Automation Conference (DAC'85), Las Vegas, Nev., June 23-26 1985, paper 47.5, pp. 820-824.
[9] Gajski, Daniel D.; Kuhn, Robert H.; "Guest Editor's Introduction: New VLSI Tools", Computer, Dec. 1983, pp. 11-14.
[10] Hall, Robert; On Using Analogy to Learn Design Grammar Rules, S.M. thesis, Massachusetts Institute of Technology, Dec. 1985.
[11] Healey, Steven T.; Gajski, Daniel D.; "Decomposition of Logic Networks Into Silicon", 22nd IEEE Design Automation Conference (DAC'85), Las Vegas, Nev., 23-26 June 1985, paper 13.1, pp. 162-168.
[12] Kowalski, T. J.; Thomas, D. E.; "The VLSI Design Automation Assistant: What's in a Knowledge Base", 22nd IEEE Design Automation Conference (DAC'85), Las Vegas, Nev., 23-26 June 1985, paper 18.1, pp. 252-258.
[13] Kramer, Glenn A.; "Representing and Reasoning about Designs", unpublished manuscript.
[14] Lathrop, Richard H.; Kirk, Robert S.; "An Extensible Object-Oriented Mixed-Mode Functional Simulation System", 22nd IEEE Design Automation Conference (DAC'85), Las Vegas, Nev., 23-26 June 1985, paper 39.2, pp. 630-636.
[15] Lenat, D.; Prakash, M.; Shepherd, M.; "CYC: Using Common Sense Knowledge to Overcome Brittleness and Knowledge Acquisition Bottlenecks", AI Magazine, vol. 6, no. 4, pp. 65-85.
[16] Lindsay, R., et al.; DENDRAL, McGraw-Hill (New York), 1980.
[17] Minsky, Marvin; Society of Mind, Simon and Schuster (New York), to appear 1986.
[18] Mitchell, Tom M.; Mahadevan, Sridhar; Steinberg, Louis I.; "LEAP: A Learning Apprentice for VLSI Design", Proc. 9th Intl. Joint Conf. on Artificial Intelligence (IJCAI'85), Los Angeles, Ca., 18-23 Aug. 1985, vol. 1, pp. 573-580.
[19] Ressler, Andrew L.; "A Circuit Grammar For Operational Amplifier Design", M.I.T. Artificial Intelligence Laboratory Technical Report 807, Jan. 1984.
[20] Rich, C.; "Inspection Methods in Programming", M.I.T. Artificial Intelligence Laboratory Technical Report 604, June 1981.
[21] Smith, Reid G., et al.; "Representation and Use of Explicit Justifications For Knowledge Base Refinement", Proc. 9th Intl. Joint Conf. on Artificial Intelligence (IJCAI'85), Los Angeles, Ca., 18-23 Aug. 1985, pp. 673-680.
[22] Sussman, Gerald J.; "Slices: at the Boundary Between Analysis and Synthesis", in Artificial Intelligence and Pattern Recognition in Computer Aided Design, J.-C. Latombe (ed.), North-Holland Pub. Co. (Amsterdam), 1978, pp. 261-299.
[23] Waterman, D. and Hayes-Roth, F. (ed.); Pattern-Directed Inference Systems, Academic Press (New York), 1978.
[24] Waters, R. C.; "The Programmer's Apprentice: A Session with KBEmacs", IEEE Trans. on Software Eng., vol. 11, no. 11, pp. 1296-1320.
[25] Winston, Patrick H., et al.; "Learning Physical Descriptions From Functional Definitions, Examples, and Precedents", M.I.T. Artificial Intelligence Laboratory Memo 679, January 1983.
Refining the Knowledge Base of a Diagnostic Expert System: An Application of Failure-Driven Learning

Michael J. Pazzani
The Aerospace Corporation
P.O. Box 92957
Los Angeles, CA 90009

Abstract

This paper discusses an application of failure-driven learning to the construction of the knowledge base of a diagnostic expert system. Diagnosis heuristics (i.e., efficient rules which encode empirical associations between atypical device behavior and device failures) are learned from information implicit in device models. This approach is desirable since less effort is required to obtain information about device functionality and connectivity to define device models than to encode and debug diagnosis heuristics from a domain expert. We give results of applying this technique in an expert system for the diagnosis of failures in the attitude control system of the DSCS-III satellite. The system is fully implemented in a combination of LISP and PROLOG on a Symbolics 3600. The results indicate that realistic applications can be built using this approach. The performance of the diagnostic expert system after learning is equivalent to and, in some cases, better than the performance of the expert system with rules supplied by a domain expert.

Introduction

An important part of the construction of an expert system is the development of the knowledge base. This paper describes an application of computer learning to the construction of the knowledge base of an expert system for the diagnosis of anomalies in the attitude control system of a satellite. (The attitude control system is responsible for detecting and correcting deviations from the desired orientation of the satellite.) Rather than inducing diagnosis heuristics (i.e., empirical associations between symptoms and device failures) from a number of training examples, diagnosis heuristics are deduced as needed from device models.
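The cycle implied above - a heuristic proposes a fault, a device model confirms or denies it, and a denial drives a revision of the heuristic - can be sketched in miniature. This Python sketch is entirely our own illustration (the toy "device model", the `wheel_on` flag, and all names are assumptions, not ACES's LISP/PROLOG representation):

```python
# Toy device model: a broken tachometer is only a consistent explanation
# of a 0 reading if the wheel it measures is actually spinning.
def model_consistent(fault, obs):
    return fault == "broken-tachometer" and obs["tach"] == 0 and obs["wheel_on"]

# Initial, nearly definitional heuristic: a tachometer reading 0 is faulty.
conditions = [lambda obs: obs["tach"] == 0]

def diagnose(obs):
    if all(c(obs) for c in conditions):
        if model_consistent("broken-tachometer", obs):
            return "broken-tachometer"
        # Hypothesis failure: deduce the distinguishing condition from the
        # model so the bad hypothesis is not proposed in similar cases.
        conditions.append(lambda obs: obs["wheel_on"])
    return None

# Wheel is off, so a 0 reading is normal: the hypothesis fails and the
# heuristic is revised (a second condition is added) rather than retracted.
diagnose({"tach": 0, "wheel_on": False})
```

After the revision, the same observation no longer even triggers the hypothesis, while a 0 reading on a spinning wheel still does - the single-example, deductive flavor of the learning described in this paper.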
The techniques illustrated in this paper are applicable to learning diagnosis heuristics for complex systems such as a power plant or a satellite. The status of such systems is continuously monitored for unusual or atypical features. When one or more atypical features are detected, a diagnosis process seeks to find an explanation for the atypical features. This explanation typically involves isolating the cause of the atypical feature to a component failure. Occasionally, the explanation may be that the system is in a normal but unusual mode.

Two different approaches have been used for fault diagnosis. In one approach [3, 6], the observed functionality of devices is compared to their predicted functionality, which is specified by a quantitative or qualitative model of the device [4]. For a large system, comparing observed to predicted functionality can be costly. The alternative approach [11, 14] encodes empirical associations between unusual behavior and faulty components as heuristic rules. This approach requires extensive debugging of the knowledge base to identify the precise conditions which indicate a particular fault is present.

We describe the Attitude Control Expert System (ACES), which integrates model-based and heuristic-based diagnosis. Heuristics examine the atypical features and hypothesize potential faults. Device models confirm or deny hypothesized faults. Thus, heuristics focus diagnosis by determining which device in a large system might be at fault. Device models determine if that device is indeed responsible for the atypical features.

The initial diagnosis heuristics used in ACES are quite simple. They often hypothesize faults which are later denied by device models. We call this a hypothesis failure. When a fault is proposed, and later denied by device models, the reasons for this hypothesis failure are noted and the heuristic which suggested the fault is revised so that the hypothesis will not be proposed in future similar cases.
This is a kind of failure-driven learning [12] which enables a diagnostic expert system to start with heuristics which indicate some of the signs (or symptoms) of a failure. As the expert system solves problems, the heuristics are revised to determine what part of the device model should be consulted to distinguish one fault from another fault with similar features. There are several reasons why this approach is desirable:

• Device models are a natural way of expressing the functionality of a component. However, they are not the most natural or efficient representation for diagnosis [13].
• Determining some of the signs of a fault (i.e., the initial diagnostic heuristics) is a relatively easy task. Often, the initial fault diagnosis heuristics are definitional. For example, ACES starts with a heuristic which states that if a tachometer is reading 0, then it is faulty. Later this heuristic is revised to include conditions to distinguish a fault in a tachometer from a fault in the component measured by the tachometer.

One way to view failure-driven learning is as an extension of dependency-directed backtracking [15]. In dependency-directed backtracking, when a hypothesis failure occurs, the search tree of the current problem is pruned by removing those states which would lead to failure for the same reason. In failure-driven learning, the reason for hypothesis failure is recorded, so that the search tree of future similar problems does not include states which would lead to failure for the same reason. Failure-driven learning dictates two important facets of learning: when to learn (when a hypothesis failure occurs) and what to learn (features which distinguish a fault in one component from faults in other components). What is not specified is how to learn. For example, a learning system could learn to distinguish a faulty tachometer from failures with similar features by correlation over a number of examples (e.g., [7, 8, 16]).
Device models (or a teacher) could classify a large number of examples as positive or negative examples of broken tachometers. For example, the heuristic which suggests broken tachometers could be revised to include a description of those combinations of features which are present when a tachometer is faulty, but not present when the tachometer is working properly. In contrast, ACES learns how to avoid a hypothesis failure after just one example. It does this by finding the most general reason for the hypothesis failure. The device models serve a dual role here. First, they identify when to learn by denying a hypothesis. More importantly, they provide an explanation for the hypothesis failure. The device models indicate which features would have needed to be present (or absent) to confirm the hypothesis. This deductive approach to learning is called explanation-based learning [5, 9]. Explanation-based learning improves the performance of ACES by creating fault diagnosis heuristics from information implicit in the device models.

Schank [12] has proposed failure-driven learning as the mechanism by which a person's memory of events and generalized events evolves with experience. A person's memory provides expectations for natural language understanding and for inferring others' plans and goals. When a new event fails to conform to these expectations, it is stored in memory along with the explanation for the failure to prevent the generation of the erroneous expectation in the future. In future similar situations, this event will be the source of expectations rather than the generalized event whose expectations were incorrect. In failure-driven learning as applied to fault diagnosis, the failures are of fault hypotheses as opposed to expectations.
The reason for failure is identified as some aspect of a device's function which disagrees with the fault hypothesis. The correction is to modify the heuristic rule which proposed the incorrect hypothesis to check that aspect of the device before proposing the fault.

Failure-driven Learning of Diagnosis Heuristics

In this section, we describe our approach to learning fault diagnosis heuristics by finding symptoms of faults implicit in device models. First, let us clarify what we mean by a device model. Following Chandrasekaran [13], we represent the following aspects of a device:

• Structure: Specifies the connectivity of a device.
• Functionality: Specifies the output of a device as a function of its inputs (and possibly state information).

It is not important to the expert system or the learning module whether the functionality is expressed quantitatively or qualitatively. The important part is that given the observed inputs of a device, the device model can make a prediction about the output. The predicted value of the output can be compared to the observed value or can be treated as an input to another device.

Reasons for Hypothesis Failure

We have identified three different reasons for failing to confirm a hypothesis. For each reason, we have implemented a correction strategy.

• Hypothesized Fault - Inconsistent Prediction: The hypothesized failure is inconsistent with observed behavior of the system. The strategy for correction is to check for other features which the proposed fault might cause.
• Hypothesized Unusual Mode - Enablement Violated: The atypical features can be explained by the system being in a normal but unusual mode. However, the enabling conditions for that mode are not met. The strategy for correction is to consider an enabling condition of the unusual state.
• Hypothesized Fault - Unusual Input: The device hypothesized to be faulty is in fact functioning properly. This typically occurs when the input to a device is very unusual.
In this case, the output of the device is unusual and the device might be assumed to be faulty unless the input is considered. The strategy for correction is to consider the device functionality.

Revising Fault Diagnosis Heuristics

When there is a hypothesis failure, the explanation for the failure is found and the heuristic rule which proposed the hypothesis is revised. A heuristic rule which proposes a fault can apply to one particular component (e.g., the light bulb of the left taillight) or a class of components (e.g., light bulbs). Similarly, the correction strategy can apply to a particular component or a class of components. The manner in which the knowledge base of heuristic rules is revised depends on the generality of the heuristic rule and of the correction; the two cases are described below. (Cantone [1] gives an approach for ordering tests based in part on the cost of the test.)

A Definition of Failure-driven Learning of Fault Diagnosis Heuristics

More formally, a diagnosis heuristic can be viewed as the implication:

F and consistent(H) → H

where F is a set of features, H is a hypothesis, and consistent(H) is true if believing H does not result in a contradiction. (See [2] for a discussion of consistent.) In our approach to learning and fault diagnosis, consistent(H) corresponds to confirming a hypothesis with device models. Confirmation can be viewed as the following implications:

H → C1 ... H → Cn

If F is true, but consistent(H) is false because not(C1) is true, then the diagnosis heuristic can be revised to:

F and C1 and consistent(H) → H

In some cases, checking the consistency of a hypothesis with the device models is more properly viewed as the following implication:

B and H → C1

The situation when consistent(H) is false because B is true and C1 is false corresponds to the case in which the revised heuristic is used in addition to the old heuristic.
The form of the revised heuristic in this case is:

F and C1 and B and consistent(H) → H

The point of failure-driven learning of diagnosis heuristics is that it is simpler to rule out a hypothesis by testing for C1 than by proving consistent(H).

An Example

In this section, we describe an example of how the performance of the expert system that diagnoses faults in the attitude control system is increased through failure-driven learning. To follow this example, it is necessary to know a little about attitude control.

Attitude Control

The attitude control system consists of a number of sensors, which calculate the satellite's orientation on the three axes (called yaw, pitch and roll) by detecting the location of the earth and the sun, and a set of reaction wheels, which can change the satellite's orientation if it deviates from the desired orientation due to torques such as solar pressure. There are four reaction wheels (PY+, PY-, PR+, and PR-), arranged on the four sides of a pyramid (see Figure 1). Pitch momentum is stored as the sum of all four wheel speeds; roll momentum is stored as the difference between the PR+ and PR- speeds; and yaw momentum is stored as the difference between the PY+ and PY- speeds.

Returning to the revision of heuristic rules: the generality of the rule and of the correction interact in the following manner.

• Heuristic rule not more general than the correction: The correction is added to the heuristic rule and this new, more specialized rule replaces the old rule.
• Heuristic rule more general than the correction: The correction is added to the heuristic rule and applied only in the specialized case. The old rule is retained for use in other cases.

There are two other issues to be considered in revising heuristic rules. First, since some testing is being added to hypothesis generation, it would be wasteful to repeat the same test during confirmation. To avoid this potential problem, the revision to a rule caches the results of a test.
Second, the amount of search necessary to prove a conjunction of subgoals in PROLOG (the language we use to implement our rules) is dependent on the order in which the subgoals are attempted. We use a strategy to order the tests in a revised rule similar to one proposed by Naish [10]. This strategy minimizes the size of the search space by detecting the ultimate failure of a rule as soon as possible. This assumes that decreasing the search space is the best means of increasing performance. This is true in our application since testing for the presence or absence of any feature is equally expensive.

Figure 1: The reaction wheels (the four wheels PY+, PY-, PR+ and PR- arranged on a pyramid; the +yaw, +roll and -pitch axes are indicated)

Figure 2: Block diagram of the attitude control system

A diagram of the attitude control system appears in Figure 2. The signals YATT, RATT, and PATT represent the attitude on the yaw, roll and pitch axes respectively. The wheel drive signal processing component issues drive signals to the motors of the reaction wheels to change the wheel speeds to correct for any deviations from the desired attitude. The wheel drive signals are WDPY+, WDPY-, WDPR+ and WDPR- for the PY+, PY-, PR+ and PR- wheels respectively. The wheel speeds are measured by tachometers yielding the signals WSPY+, WSPY-, WSPR+ and WSPR-. The tachometer signal processing module converts the four wheel speeds to the three values called momentum equivalent rates (YMER, RMER, and PMER) representing the equivalent wheel speeds on the three axes. These equivalent wheel speeds are also combined with the attitude information from the sensors to yield the estimated attitudes (YATT, RATT, and PATT). The attitude control system contains the logic necessary to maintain the desired attitude.
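The momentum bookkeeping stated in the text (pitch as the sum of all four wheel speeds, roll and yaw as differences) can be written down directly. A Python sketch with illustrative numbers:

```python
# Momentum stored by the four reaction wheels, per the text:
#   pitch = sum of all four wheel speeds
#   roll  = PR+ speed minus PR- speed
#   yaw   = PY+ speed minus PY- speed
# Units and signs are illustrative assumptions.
def stored_momentum(wspy_plus, wspy_minus, wspr_plus, wspr_minus):
    pitch = wspy_plus + wspy_minus + wspr_plus + wspr_minus
    roll = wspr_plus - wspr_minus
    yaw = wspy_plus - wspy_minus
    return pitch, roll, yaw

pitch, roll, yaw = stored_momentum(100, 100, -100, -100)
assert (pitch, roll, yaw) == (0, 0, 0)
# Changing only the PR+/PR- difference changes roll but not yaw:
pitch2, roll2, yaw2 = stored_momentum(100, 100, -50, -150)
assert roll2 == 100 and yaw2 == 0
```

This is the relation the tachometer signal processing module exploits when it converts the four wheel speeds into the three momentum equivalent rates.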
For example, to compensate for a disturbance on the roll axis, the difference between the speeds of the PR+ and PR- wheels must change. Once or twice each day, at a predetermined part of the satellite's orbit, if the wheel speeds exceed a certain threshold, the momentum stored by the wheels is dumped by firing a thruster.

ACES: The Attitude Control Expert System

One reason that our particular satellite was chosen for this research is that The Aerospace Corporation possesses a simulator for the attitude control system which generates telemetry tapes reflecting faulty behaviors to aid engineers in fault diagnosis. In addition, these tapes serve as input to our expert system. ACES consists of two major modules:

• Monitor. This module converts the raw telemetry data to a set of features which describe the atypical aspects of the telemetry. In ACES, the features detected include:
  • (value-violation signal start-time end-time value): Between start-time and end-time the average value of signal has taken on an illegal value.
  • (jump signal start-time end-time amount start-value end-value slope): The signal has changed from start-value to end-value between start-time and end-time. Amount is the difference between start-value and end-value, and slope is amount divided by the difference between start-time and end-time.
• Diagnostician. This module finds an explanation for the atypical features. In this article, we focus on the learning in the diagnostician. The diagnostician is comprised of several cooperating modules:
  • Fault Identification. The atypical features are used as symptoms of faults by heuristic rules to postulate a hypothesis which could account for the behavior of the satellite.
  • Fault Confirmation. This step compares the actual device functionality to the functionality as specified by a device model. This process can either confirm or deny that a hypothesized fault is present. If a hypothesis is denied, an attempt is made to identify another fault.
  • Fault Implication Analysis. After a fault has been confirmed, the effect of the fault on the values of other telemetry signals is assessed. A model of the attitude control system predicts the values of telemetry signals which might be affected by the fault. The predicted telemetry values are analyzed by the monitor to see if they are atypical. Descriptions of atypical predicted values are then compared against the set of atypical features to explain any features which are a result of a confirmed fault.

Refining Fault Diagnosis Heuristics

For this example, the initial fault diagnosis heuristics are quite simple. Figure 3 presents the definition of three fault diagnosis rules. These PROLOG rules have a LISP-like syntax since our PROLOG is implemented in LISP. The first element of a list is the predicate name, and all variables are preceded by "?". The part of the rule preceding ":-" is a fault hypothesis, and the part of the rule after ":-" comprises the conditions which must be proved in order to propose the hypothesis. These rules implement three very crude diagnosis heuristics: "if the speed of a reaction wheel is 0, then the tachometer is broken", "if the speed of a reaction wheel is 0, then the wheel drive is broken" and "if there is a change of momentum, then a thruster has fired to unload the momentum".

Telemetry data indicating the values of the momentum equivalent rates, attitudes and wheel speeds are illustrated in Figure 4. Typical values for these signals are present between 1:07 and 1:08. After 1:08 the monitor notices several atypical features:

1. YMER, RMER, and PMER have changed an unusual amount.
2. WSPR+, WSPR-, WSPY+, and WSPY- have changed an unusual amount.
3. YATT, PATT, and RATT have changed an unusual amount.
4. WSPR+ is 0.
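The monitor's jump feature described above (amount as the change in value, slope as amount divided by duration) can be sketched as follows. This is a hypothetical Python sketch; the threshold for "unusual" is invented for illustration.

```python
# Sketch of the monitor's (jump ...) feature detector.  The 50-unit
# threshold is an invented stand-in for ACES's notion of "atypical".
def jump_feature(signal, start_time, end_time, start_value, end_value,
                 threshold=50.0):
    """Return a (jump ...) feature tuple if the change is unusually
    large, else None.  Slope = amount / duration, as in the text."""
    amount = end_value - start_value
    if abs(amount) < threshold:
        return None
    slope = amount / (end_time - start_time)
    return ("jump", signal, start_time, end_time, amount,
            start_value, end_value, slope)

f = jump_feature("WSPR+", 0.0, 10.0, -100.0, 0.0)
assert f is not None
assert f[4] == 100.0      # amount
assert f[7] == 10.0       # slope = 100 / 10
```

Feature tuples in this form are what the diagnostician's heuristic rules match against.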
1: (problem (problem wheel-tach ?from (broken-wheel-tach ?wheel ?from))) :-
   ;there is a tachometer stuck at 0
   (feature (value-violation ?sig ?from ?until 0))
   (measurement ?sig ?wheel speed ?tach)
   (isa ?wheel reaction-wheel)
   ;if the speed of a wheel is 0

2: (problem (problem wheel-drive ?from (broken-wheel-drive ?wheel ?from ?sig))) :-
   ;there is a wheel drive motor not responding to the drive signal
   (feature (value-violation ?sig ?from ?until 0))
   (measurement ?sig ?wheel speed ?tach)
   (isa ?wheel reaction-wheel)
   ;if the speed of a wheel is 0

3: (problem (normal-operation ?device ?from-jump ?end-jump (wheel-unload ?axis ?sign ?thruster ?ntimes ?jump))) :-
   ;there is a wheel unload on the ?axis in the ?sign direction
   (isa ?sig momentum-equivalent-rate-signal)
   (feature (jump ?sig ?from-jump ?end-jump ?jump ?start ?end ?slope))
   ;if there is a jump in one of YMER, RMER, or PMER
   (momentum-sig-axis ?sig ?axis)
   (is ?s (sign ?jump))
   (opposite ?s ?sign)
   (thruster-axis ?device ?axis ?sign)
   ;find the thruster on the same axis as the momentum change, in the opposite direction

Figure 3: Initial Fault Diagnosis Heuristics

Since ACES is implemented in PROLOG, it tries the heuristic rules in the order that they are defined. However, for the purposes of learning, the ordering of the rules is left undefined. (This is implemented by randomly changing the order of the rules before each run.) This prevents one heuristic from relying on the fact that another fault proposed by an earlier heuristic has been ruled out.

In this training example, the third rule in Figure 3 first hypothesizes that the change in PMER is due to a wheel unload. The device model of a thruster reveals that there are two enabling conditions for a wheel unload. First, the satellite must be in a part of the orbit called the wheel unload window. Second, the satellite must be in a high momentum state. In this example, the satellite is in a high momentum state, but it is not in the unload window.
Therefore, the hypothesis is denied.

Figure 4: Telemetry data after a broken tachometer (plots, between 1:07:00 and 1:11:00, of the wheel speeds WSPY-, WSPY+, WSPR+ and WSPR- in CNTS; the attitudes RATT, PATT and YATT in DEG; and the momentum equivalent rates RMER, YMER and PMER in CNTS)

The hypothesis failure is found to be caused by not considering one of the enabling conditions of the wheel unload (Hypothesized Unusual Mode - Enablement Violated). The heuristic is revised to include this enabling condition (see Figure 5). Since the heuristic and the explanation for the failure both applied to thrusters, the revised rule replaces the old version. In addition to checking the enabling condition, the revision includes a call to the predicate "cache-proved" which indicates that if this rule succeeds, there is no need to recheck the enabling condition "unload-window-status" during the confirmation process.
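Confirmation against enabling conditions, and the resulting revision, can be sketched like this. This is a hypothetical Python sketch, not ACES code; the condition names are invented, and a rule is modeled simply as a set of conditions.

```python
# Sketch of confirming an unusual mode against its enabling conditions
# and revising the proposing heuristic with the violated condition.
UNLOAD_ENABLERS = ("in-unload-window", "high-momentum-state")

def confirm_mode(enablers, observed):
    """Return (True, None) if every enabling condition holds, else
    (False, violated), where `violated` explains the denial."""
    for cond in enablers:
        if cond not in observed:
            return False, cond
    return True, None

def revise(rule_conditions, violated):
    # F and consistent(H) -> H  becomes  F and C1 and consistent(H) -> H
    return rule_conditions | {violated}

# High momentum state, but NOT in the unload window:
ok, why = confirm_mode(UNLOAD_ENABLERS, {"high-momentum-state"})
assert not ok and why == "in-unload-window"
# The violated condition is exactly what gets added to the heuristic:
assert "in-unload-window" in revise({"momentum-jump"}, why)
```

The explanation returned by the failed confirmation is the same object that parameterizes the revision, which is what makes one example sufficient.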
(problem (problem normal-operation ?device ?from-jump ?end-jump (wheel-unload ?axis ?sign ?thruster ?ntimes ?jump))) :-
   (IN-MODE WHEEL-UNLOAD-WINDOW ?START-97 ?END-98)
   ;MAKE SURE THE SATELLITE IS IN THE UNLOAD WINDOW
   (isa ?sig momentum-equivalent-rate-signal)
   (feature (jump ?sig ?from-jump ?end-jump ?jump ?start ?end ?slope))
   (AFTER ?FROM-JUMP ?START-97)
   ;MAKE SURE THE JUMP STARTS AFTER ENTERING THE UNLOAD WINDOW
   (BEFORE ?END-JUMP ?END-98)
   ;MAKE SURE THE JUMP ENDS BEFORE EXITING THE UNLOAD WINDOW
   (momentum-sig-axis ?sig ?axis)
   (is ?s (sign ?jump))
   (opposite ?s ?sign)
   (thruster-axis ?device ?axis ?sign)
   (CACHE-PROVED UNLOAD-WINDOW-STATUS)

Figure 5: Revised Wheel Unload Heuristic (changes in CAPITALS)

This example also illustrates another point. We are not drawing an arbitrary line between what we call generating hypotheses and confirming hypotheses. Not all information is moved from confirmation to generation. Rather, we move only those tests from confirmation which prevent the generation of an erroneous hypothesis. In this example, the high momentum state information is not included in the heuristic because it does not differentiate a wheel unload from the true fault.

After the heuristic has been revised, diagnosis continues. The next hypothesis is a failure of the tachometer of the PR+ reaction wheel. This hypothesis is proposed by the first rule in Figure 3. The device model confirms this hypothesis.

One further example will help to illustrate the other strategies for revising fault diagnosis heuristics. Figure 6 contains the relevant telemetry data. For this telemetry tape, the monitor notices several atypical features:

1. WSPR-, WSPR+, WSPY+ and WSPY- have changed an unusual amount.
2. WSPR+ and WSPR- are 0.

The first hypothesis proposed by the first rule in Figure 3 is that the tachometer of the PR- wheel is stuck at 0.
The confirmation module denies this hypothesis for the following reason: if the tachometer were stuck at 0, the attitude of the satellite would change drastically. (The attitude control system would believe that the wheel was not storing any momentum when in fact it is. To compensate for the erroneous report of loss of momentum, the attitude control system would adjust the momentum of the other wheels, changing the attitude of the satellite.) Since the attitude did not change, the heuristic must be revised to avoid the generation of this hypothesis in future similar cases. The hypothesis failure is caused by not checking the implications of a faulty tachometer (Hypothesized Fault - Inconsistent Prediction). Checking any of the attitude signals would suffice to distinguish a faulty tachometer from the actual fault. In Figure 7, the revision tests YATT.

After the heuristic has been revised, diagnosis continues. The next hypothesis proposed by the second rule in Figure 3 is that the wheel drive of the PR- wheel is broken. The device model of a wheel drive includes the following information: the wheel speed is proportional to the integral of the wheel drive signal. If the wheel drive signal is positive, the wheel speed should increase. During the time that WSPR- increased from -100 to 0, WDPR- was positive (see Figure 6). Therefore, the PR- wheel was not ignoring its drive signal and the hypothesis is denied. The hypothesis failure is caused by the fact that the WSPR- wheel is indeed doing something very unusual by changing so rapidly and stopping. However, it is doing this because it is responding to WDPR-. The heuristic which proposed this fault is revised to consider the functionality of the device (Hypothesized Fault - Unusual Input).

Figure 6: Telemetry data after a broken wheel drive (plots, between 0:06:00 and 0:14:00, of the wheel speeds WSPY-, WSPY+, WSPR+ and WSPR- in CNTS, and of the wheel drive signals WDPR- and WDPR+ in CNTS)

(problem (problem wheel-tach ?from (broken-wheel-tach ?wheel ?from))) :-
   (FEATURE (VALUE-VIOLATION YATT ?FROM-32 ?END-33 ?VALUE-34))
   ;MAKE SURE THE YAW ATTITUDE HAS BEEN DISTURBED
   (feature (value-violation ?sig ?from ?until 0))
   (AFTER ?FROM-32 ?FROM)
   ;MAKE SURE THE ATTITUDE DISTURBANCE IS AFTER THE VALUE VIOLATION
   (measurement ?sig ?wheel speed ?tach)
   (isa ?wheel reaction-wheel)
   (CACHE-PROVED ATTITUDE-DISTURBANCE)

Figure 7: Revised Faulty Tachometer Heuristic (changes in CAPITALS)

In Figure 8, the revised heuristic checks that the change of the wheel speed as it approaches 0 is not due to the drive signal. Since our heuristic rules and our device models are implemented in the same language, it is possible to move code from the device model to a heuristic rule by renaming variables. In other systems, this may not be possible. However, this strategy would still apply if the rule could be revised to indicate what part of the device model to check (e.g., test that the observed wheel speed could not have been produced given the wheel drive between two given times). In ACES, it is possible to revise the rule to specify how the test should be performed instead of merely what test should be performed.
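The device-model check that clears the PR- wheel reduces to a sign comparison between the speed change and the integrated drive signal. A hypothetical Python sketch (the sampling, units and proportionality are illustrative assumptions):

```python
# Sketch of the wheel-drive functionality check: the wheel speed is
# proportional to the integral of the drive signal, so the sign of the
# speed change should agree with the sign of the integrated drive.
def sign(x):
    return (x > 0) - (x < 0)

def responding_to_drive(speed_change, drive_samples):
    """True if the wheel's speed change agrees with its drive signal."""
    integrated = sum(drive_samples)   # crude stand-in for the integral
    return sign(speed_change) == sign(integrated)

# WSPR- rose from -100 to 0 while WDPR- was positive: the wheel IS
# obeying its drive signal, so "broken wheel drive" is denied.
assert responding_to_drive(100.0, [5.0, 5.0, 5.0])
# A wheel stuck at constant speed while driven positive would fail:
assert not responding_to_drive(0.0, [5.0, 5.0, 5.0])
```

The revised heuristic in Figure 8 performs essentially this comparison at hypothesis-generation time, so that an obedient wheel never triggers the broken-drive hypothesis.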
(problem (problem wheel-drive ?from (broken-wheel-drive ?wheel ?from ?sig))) :-
   (FEATURE (JUMP ?SIG ?FROM-37 ?UNTIL-38 ?JUMP-39 ?START-40 ?END-41 ?SLOPE-42))
   ;THERE IS A CHANGE IN THE WHEEL SPEED
   (feature (value-violation ?sig ?from ?until 0))
   (AFTER ?FROM ?FROM-37)
   ;THE WHEEL SPEED REACHES 0 AFTER IT CHANGES
   (measurement ?sig ?wheel speed ?tach)
   (isa ?wheel reaction-wheel)
   (DRIVES ?DRIVE-43 ?WHEEL)
   (MEASUREMENT ?DRIVE-SIGNAL-44 ?DRIVE-43 AMPLITUDE DIRECT)
   ;FIND THE WHEEL DRIVE SIGNAL OF THE ?WHEEL
   (IS ?DRIVE-SIGNAL-SIGN-45 (TELEMETRY-SIGNAL-SIGN ?DRIVE-SIGNAL-44 ?FROM-37 ?UNTIL-38))
   ;FIND THE SIGN OF THE DRIVE SIGNAL DURING THE JUMP
   (IS ?SLOPE-SIGN-46 (REPORT-SIGN ?SLOPE-42))
   ;FIND THE SIGN OF THE SLOPE
   (NOT (AGREE ?SLOPE-SIGN-46 ?DRIVE-SIGNAL-SIGN-45))
   ;MAKE SURE THE DIRECTION OF THE JUMP DISAGREES WITH THE DRIVE SIGNAL
   (CACHE-DISPROVED WHEEL-DRIVE-STATUS)

Figure 8: Revised Wheel Drive Heuristic (changes in CAPITALS)

After the heuristic has been revised, another hypothesis is found to account for the atypical features: the faulty wheel drive heuristic proposes that the PR+ drive is ignoring its input, since WSPR+ is 0 and, when it increased to 0, WDPR+ was negative, indicating that the speed should have decreased (see Figure 6). The confirmation of this hypothesis is trivial since the heuristic already proved that the drive was not functioning according to its device description.

After the fault is confirmed, the effects on the rest of the attitude control system are assessed. Since roll momentum is stored as the difference between the speeds of the PR+ and PR- reaction wheels, when WSPR+ goes to 0, WSPR- should change by the same amount. The satellite was in a very unusual state prior to the failure: WSPR+ and WSPR- were equal. When the PR+ drive broke, WSPR- went to 0 to compensate for the change in WSPR+.
In addition, since the pitch momentum is stored as the sum of all four wheels, to maintain pitch momentum WSPY+ and WSPY- decreased by the amount that WSPR+ and WSPR- increased. While WSPY+ and WSPY- decreased, the difference between them remained constant to maintain the yaw momentum. The broken PR+ wheel drive accounts for the atypical features and the diagnosis process terminates.

Results

There are two standards for evaluating the effects of learning in ACES. First, there is the performance of ACES using the rules in Figure 3. We call this version naive-ACES. Additionally, there is the performance of ACES using rules hand-coded from information provided by an expert. We call this version of the system expert-ACES. The performance of naive-ACES after learning is compared to naive-ACES and expert-ACES in Figure 9 and Figure 10. There are four test cases which are used for comparison:

1. A tachometer stuck at 0 (see Figure 4).
2. A wheel drive ignoring its input when the opposite wheel is at the same speed (see Figure 6).
3. A wheel unload (i.e., the speed of the reaction wheels is changed by the firing of a thruster).
4. A wheel drive ignoring its input in the usual case where the opposite wheel is at a different speed.

The data in Figure 9 demonstrate that the failure-driven learning technique presented in this paper improves the simple fault diagnosis heuristics to the extent that the performance of ACES using the learned heuristics is comparable to that of the system using the rules provided by an expert. In one case, the performance of the learned rules is even better than that of the expert-provided rules. This particular case is the previous example, in which a wheel drive broke when the satellite was in an unusual state. The heuristic provided by the expert did not anticipate the rare condition that two opposing wheel speeds were equal.
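The momentum compensation in the wheel-drive example (case 2 above) can be replayed numerically: when WSPR+ goes to 0, the other three wheel speeds must shift so that roll, pitch and yaw momenta are all preserved. A Python sketch with illustrative numbers, chosen so that WSPR+ equals WSPR- before the fault, as in the example:

```python
# Implication analysis for the broken PR+ drive, replayed numerically.
before = {"PY+": 100.0, "PY-": 50.0, "PR+": -75.0, "PR-": -75.0}

roll = lambda s: s["PR+"] - s["PR-"]          # roll momentum
pitch = lambda s: sum(s.values())             # pitch momentum
yaw = lambda s: s["PY+"] - s["PY-"]           # yaw momentum

after = dict(before)
after["PR+"] = 0.0                            # the PR+ drive breaks
# WSPR- must change by the same amount to preserve roll momentum:
after["PR-"] = after["PR+"] - roll(before)
# The PY wheels absorb the pitch change, split equally so that their
# difference (the yaw momentum) is unchanged:
d_pr = (after["PR+"] + after["PR-"]) - (before["PR+"] + before["PR-"])
after["PY+"] = before["PY+"] - d_pr / 2.0
after["PY-"] = before["PY-"] - d_pr / 2.0

assert after["PR-"] == 0.0                    # WSPR- also goes to 0
assert roll(after) == roll(before)
assert pitch(after) == pitch(before)
assert yaw(after) == yaw(before)
```

With equal PR speeds before the fault, preserving roll momentum drives WSPR- to exactly 0 as well, which is why the monitor sees both PR wheel speeds at 0.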
Case  Fault         naive ACES   naive + learning   expert ACES
1     tachometer    21           1                  1
2     wheel drive   4            1                  2
3     wheel unload  1            1                  1
4     wheel drive   2            1                  1

Figure 9: Number of Fault Hypotheses

The data in Figure 10 reveal that the number of logical inferences required by the expert system decreases after learning. This demonstrates that after learning the expert system is doing less work to identify a failure, rather than moving the same amount of work from hypothesis confirmation to hypothesis generation. Comparing the number of inferences required by naive-ACES after learning to those of expert-ACES is not actually fair, since it appears that the expert's rules at times test some information retested by the confirmation process. Recall that retesting is avoided by a revised rule, since the revision contains information to cache the results of consulting a device model. It has been our experience that this cache reduces the number of inferences by approximately ten percent. An additional ten percent of the inferences are saved through intelligent ordering of the clauses of revised rules, compared to our initial simple approach of appending the revision to the end of a rule.

Case  Fault         naive ACES   naive + learning   expert ACES
1     tachometer    2268         211                584
2     wheel drive   1238         616                910
3     wheel unload  870          861                947
4     wheel drive   745          409                643

Figure 10: Number of Inferences to Generate and Confirm Fault

Conclusion

We have presented an approach to learning fault diagnosis heuristics by determining what aspect of a device model must be consulted to distinguish one fault from another fault with similar features. This approach relies on explaining why a heuristic does not apply in a certain case and correcting the heuristic to avoid proposing an erroneous fault hypothesis.
Applying this technique to a simple version of the ACES expert system for the diagnosis of faults in the attitude control system yields performance comparable to, and in some cases better than, the performance of ACES with expert fault diagnosis heuristics.

Acknowledgements

Comments by Anne Brindle, Jack Hodges, Steve Margolis, Rod McGuire and Hilarie Orman helped clarify this article. This research was supported by the U.S. Air Force Space Division under contract FO4701-85-C-0086 and by the Aerospace Sponsored Research Program.

References

[1] Cantone, R., Pipitone, F., Lander, W., & Marrone, M. Model-based Probabilistic Reasoning for Electronics Troubleshooting. In Proceedings of the Eighth International Joint Conference on Artificial Intelligence, pages 207-211. IJCAI, Vancouver, August, 1983.
[2] Charniak, E., Riesbeck, C. and McDermott, D. Artificial Intelligence Programming. Lawrence Erlbaum Associates, Hillsdale, NJ, 1980.
[3] Davis, R., Shrobe, H., et al. Diagnosis Based on Description of Structure and Function. In Proceedings of the National Conference on Artificial Intelligence. American Association for Artificial Intelligence, Pittsburgh, PA, 1982.
[4] de Kleer, J. & Brown, J. A Qualitative Physics Based on Confluences. Artificial Intelligence 24(1), 1984.
[5] DeJong, G. Acquiring Schemata Through Understanding and Generalizing Plans. In Proceedings of the Eighth International Joint Conference on Artificial Intelligence. Karlsruhe, West Germany, 1983.
[6] Genesereth, M., Bennett, J.S., Hollander, C.R. DART: Expert Systems for Automated Computer Fault Diagnosis. In Proceedings of the Annual Conference. Association for Computing Machinery, Baltimore, MD, 1981.
[7] Michie, D. Inductive Rule Generation in the Context of the Fifth Generation. In Proceedings of the International Machine Learning Workshop. Monticello, Illinois, 1983.
[8] Mitchell, T. Generalization as Search. Artificial Intelligence 18(2), 1982.
[9] Mitchell, T., Kedar-Cabelli, S. & Keller, R.
A Unifying Framework for Explanation-based Learning. Technical Report, Rutgers University, 1985.
[10] Naish, Lee. Prolog Control Rules. In Proceedings of the Ninth International Joint Conference on Artificial Intelligence, pages 720-722. IJCAI, Los Angeles, CA, August, 1985.
[11] Nelson, W.R. REACTOR: An Expert System for Diagnosis and Treatment of Nuclear Reactor Accidents. In Proceedings of the National Conference on Artificial Intelligence. AAAI, Pittsburgh, PA, 1982.
[12] Schank, R. Dynamic Memory: A Theory of Reminding and Learning in Computers and People. Cambridge University Press, 1982.
[13] Sembugamoorthy, V. & Chandrasekaran, B. Functional Representation of Devices and Compilation of Diagnostic Problem Solving Systems. Technical Report, Ohio State University, March, 1985.
[14] Shortliffe, E.H. Computer-based Medical Consultation: MYCIN. American Elsevier, New York, NY, 1976.
[15] Stallman, R. M. & Sussman, G. J. Forward Reasoning and Dependency-Directed Backtracking in a System for Computer-Aided Circuit Analysis. Artificial Intelligence 9(2):135-196, 1977.
[16] Vere, S. Induction of Concepts in the Predicate Calculus. In Proceedings of the Fourth International Joint Conference on Artificial Intelligence. Tbilisi, USSR, 1975.
CONNECTION MACHINE STEREOMATCHING

Michael Drumheller
Thinking Machines Corporation
245 First Street
Cambridge, Massachusetts 02142

Abstract: This paper describes a parallel real-time stereomatching algorithm and its implementation on the Connection Machine™ computer, a new massively parallel computing system. The main features of the algorithm are 1) real-time performance, 2) the full exploitation of the ordering constraint, 3) a representation that easily maps onto a parallel computer architecture, and 4) the ability to efficiently use a variety of matching primitives. Some results, including timings, are shown for both real and synthetic data. Also discussed are the use of color information and some subtle variations of the basic algorithm.

1 Introduction

Research on stereomatching has been, until recently, a slow and painstaking process. Most stereo algorithms are computationally expensive, requiring many seconds, minutes, or even hours to run on conventional computers. Such a long feedback delay for changing the algorithm or the data makes it impractical to refine stereo algorithms by experimental methods. In this paper we will show how a fine-grained massively parallel computer has been used to solve the speed problem and to develop a new stereomatching algorithm. The algorithm was outlined in a section of [2].

2 Description of the Connection Machine computer

The Connection Machine computer is a fine-grained massively parallel computing system designed and built by Thinking Machines Corporation for research in artificial intelligence. The prototype contains 65,536 1-bit serial processors which can communicate with each other by two distinct mechanisms.
One of these mechanisms has the topology of a boolean 16-cube and is called the router network, or simply "the router." The other mechanism consists of a four-connected x-y grid called the north-east-west-south connections, or "NEWS." NEWS is used for operations requiring local communication, such as convolutions and relaxation algorithms. The router is used for global operations such as permuting, sorting, merging, summing, histogramming, region-growing and image sampling. Each processor has 4K bits of memory. The machine is programmed in a single instruction, multiple data fashion from a host computer such as a Lisp Machine™ or a VAX™. The computations performed by a particular processor depend on the data contained in that processor's memory. For example, the host machine may broadcast a request to each processor to add two numbers together, conditional upon whether the processor contains a particular piece of data. In this way, any subset of the processors can "opt out" of a computation.

Figure 1. Schematic representation of the Connection Machine computer from the viewpoint of image processing. The processors may be viewed as x and y dimensions; the memory (4K bits per processor) forms a third dimension. An 8-bit image could be stored in the field (0, 8).

2.1 Virtual Processors

The Connection Machine computer can be space-shared by dividing it into small boolean n-cubes whose sizes depend on the number of users and the amount of processing power they need. For example, it was most convenient at the time of this writing to use a 16K-processor subnetwork of the standard 64K-processor configuration. Image processing applications are programmed on the Connection Machine computer by assigning one processor to each pixel. If the number of pixels is greater than the number of processors, then virtual processors are used. Virtual processors are a low-level software facility through which each physical processor simulates several processors in separate blocks of its memory. Virtual processors are invisible to the user; they simply make the machine "seem larger" (and proportionally slower). We used 64K virtual processors (4 per physical processor) to process the 64K-pixel images shown in this paper. See [4] for a more detailed description of the Connection Machine computer.

2.2 Storing images in Connection Machine memory

A convenient way to visualize the Connection Machine computer for this implementation is shown in Figure 1. From the point of view of the NEWS communication mechanism, the processors act as "x" and "y" dimensions and each processor's memory is a "memory dimension." The term field will be used to refer to a rectangular block of memory underlying the entire NEWS grid and occupying a contiguous segment of the memory dimension. A field is denoted (address, length). For example, an 8-bit video image might be contained in the field (0, 8), which occupies memory locations 0-7 in every processor (see Figure 1). In this way, each processor represents a pixel. Since the amount of memory in each processor is large compared to the length of a typical intensity value (usually 8 bits), the machine can hold many different superposed images.

3 A simple Connection Machine stereomatching algorithm

1. Compute primitives for matching.
2. Compute potential matches between primitives.
3. Determine the amount of local support for each potential match.
4. Choose correct matches on the basis of local support and constraints on uniqueness and ordering.

3.1 Matching features

This algorithm does not require a particular type of matching primitive. Many different types of primitives could be used; see [1] or [8] for examples.
All results shown in this paper were obtained using zero crossings [5]. We have used other features; these will be discussed later. In our implementation we first convolved the image with a small gaussian filter, then detected zero crossings in the output of a discrete laplacian operator.

3.1.1 2-D convolution on the Connection Machine computer

Convolutions are performed by collecting intensity values from nearby processors via the NEWS mechanism, multiplying the values by a broadcast constant, i.e., the appropriate filter weight, and accumulating the product. These steps occur in parallel for all pixels. Some timings for gaussian convolutions are given in Table 1. The time complexity of convolutions computed in this way is independent of the size of the image and proportional to the size of the filter.

  Total diameter of    Central width    Convolution time on the
  gaussian mask        "sigma"          Connection Machine
  (pixels)             (pixels)         (seconds)
                       1.2              0.022
  13                   2.2              0.042
  31                   5.2              0.107
  49                   8.2              0.180

Table 1. Actual timings for gaussian convolution on the Connection Machine computer. It should be noted that these figures are for a true gaussian convolution computed using a sampled gaussian kernel, not for the commonly-used approximation obtained by iterated local averaging.

Figure 2. Schematic representation of stereomatching geometry (redrawn from [6]). This diagram represents the 1-D stereomatching problem at a particular y coordinate, that is, for a pair of epipolar lines or scanlines. The black and white circles are potential matches, or points in x-d space where the features from both images are compatible. The black circles are hypothetical correct matches. The uniqueness constraint states that there can be at most one match along any radial line from either eye (such as L_l and L_r).
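The gather-multiply-accumulate convolution of section 3.1.1 can be sketched serially as follows; this is an illustrative Python stand-in for the parallel NEWS operation, with clamped borders as an assumed edge policy, not the actual Connection Machine code.

```python
# Serial sketch of the NEWS-style convolution: for each filter weight,
# every pixel gathers a neighbor's value, multiplies it by the
# broadcast weight, and accumulates the product.  On the real machine
# the loops over pixels run in parallel, which is why the cost depends
# only on the filter size, not the image size.

def convolve2d(image, kernel):
    h, w = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    cy, cx = kh // 2, kw // 2
    out = [[0.0] * w for _ in range(h)]
    for dy in range(kh):
        for dx in range(kw):              # one broadcast weight per step
            wgt = kernel[dy][dx]
            for y in range(h):            # "in parallel" over all pixels
                for x in range(w):
                    sy = min(max(y + dy - cy, 0), h - 1)  # clamp at edges
                    sx = min(max(x + dx - cx, 0), w - 1)
                    out[y][x] += wgt * image[sy][sx]
    return out

flat = [[1.0] * 4 for _ in range(4)]
box = [[1 / 9.0] * 3 for _ in range(3)]
print(convolve2d(flat, box)[1][1])  # a flat image stays (numerically) flat
```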
3.2 Computing potential matches

The set of potential matches in a one-dimensional stereomatching problem, i.e., for a pair of corresponding epipolar lines, can be represented by the diagram in Figure 2. These diagrams can be "stacked" perpendicular to the page to obtain a three-dimensional set of potential matches. This representation of the stereomatching problem is very convenient for mapping a stereo algorithm into the Connection Machine computer, since the resulting set of potential matches can be stored in a field in the machine (see Figure 3, compare with Figures 1 and 2).

For simplicity, we have assumed that the images are perfectly registered and that all epipolar lines are horizontal. Potential matches are allowed to occur between zero crossings of the same sign on corresponding epipolar lines. Note that this implements the compatibility constraint [6]. For D disparity values ranging from d_i to d_f, we compute the set of potential matches in the following way:

1. Allocate a field P = (paddr, D) to contain the set of potential matches. Initialize P to contain zero everywhere.
2. Allocate two 2-bit fields, L and R, initialized to contain the zero crossings of the left and right images.
3. While holding L stationary, "slide" R horizontally one pixel at a time along the x-axis of the NEWS grid, from x = d_i to x = d_f. After the i-th shift, write a 1 into the field ((paddr + i), 1) at each (x, y) where L and R contain identical zero crossings.

For example, in the third step, after the first shift of R, the field ((paddr + 1), 1) would be 1 at every point (x, y) where L and R contain identical zero crossings.

Figure 3. Diagram describing how the geometry of stereomatching is mapped into the Connection Machine computer. The 1-D situations (as in Figure 2) are "stacked" in the y direction to form a complete 2-D stereomatching problem.

After this operation, the field P represents a rectangular block of x-y-d space with some configuration of matches (1's) embedded in it.
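The slide-and-compare step can be sketched serially for a single scanline; the zero-crossing encoding (+1 / -1 for sign, 0 for none) and the shift direction are illustrative assumptions.

```python
# Serial sketch of computing potential matches on one epipolar line:
# the right scanline of zero-crossing codes is "slid" one pixel at a
# time, and a potential match is recorded at (x, d) wherever the left
# and right codes agree and are nonzero.

def potential_matches(left, right, d_min, d_max):
    n = len(left)
    P = {}  # (x, d) -> 1 marks a potential match
    for d in range(d_min, d_max + 1):        # one shift per disparity
        for x in range(n):                   # "in parallel" over pixels
            if 0 <= x + d < n and left[x] != 0 and left[x] == right[x + d]:
                P[(x, d)] = 1
    return P

left  = [0, +1, 0, -1, 0]
right = [+1, 0, -1, 0, 0]
print(sorted(potential_matches(left, right, -1, 1)))  # -> [(1, -1), (3, -1)]
```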
3.2.1 Image registration

As stated so far, this algorithm assumes that the stereo images are perfectly registered with respect to the epipolar lines. In practice, of course, this is rarely the case. In our implementation, we first "undistort" the images to compensate for perspective distortion and camera misalignment. The positional error of each pixel is measured using a semi-automatic system based on a test pattern which, in effect, produces a stereo pair with no false matches. Each pixel is then "sent" to its corrected position via the hypercube communication mechanism.

Due to rounding-off of pixel displacements, it is inevitable that more than one pixel will collide at the same destination pixel. We take the average of the colliding pixels as the value of the destination. It is very convenient to use the router for this computation, since it automatically combines colliding messages with a user-specified "combining function," such as MAX, MIN, LOGIOR, etc. In our case, the combining function is ADD. The destination pixel is normalized according to the number of pixels that collide there.

3.3 Gathering local support

Our first step at distinguishing the correct matches from the false ones is to apply a continuity or smoothness constraint. Many algorithms based on such constraints have been developed [6] [9] [10] [11] [12].

3.3.1 Three-dimensional convolution

A straightforward way to measure how well each disparity satisfies the smoothness condition is to convolve the three-dimensional region of x-y-d space contained by the field P with a three-dimensional kernel that gathers support from locally smooth disparity configurations. There are many different kernels, or support functions, that will do a good job on this task. Reference [6] uses a very simple support function (or "excitatory region") that is circular, uniformly-weighted, and flat, i.e., it occupies only one level in the disparity dimension.
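A flat, uniformly-weighted support function like that of [6] can be sketched serially; the square neighborhood, its radius, and counting the match itself are illustrative choices, not the paper's exact kernel.

```python
# Sketch of local support with the simplest kernel: each potential
# match's score is the number of matches within a flat square
# neighborhood at the same disparity level (a flattened "excitatory
# region").  The count includes the match itself.

def local_support(matches, radius=1):
    """matches: set of (x, y, d) potential matches."""
    scores = {}
    for (x, y, d) in matches:
        scores[(x, y, d)] = sum(
            1 for (x2, y2, d2) in matches
            if d2 == d and abs(x2 - x) <= radius and abs(y2 - y) <= radius)
    return scores

matches = {(2, 2, 0), (3, 2, 0), (2, 3, 0), (9, 9, 5)}
scores = local_support(matches)
print(scores[(2, 2, 0)], scores[(9, 9, 5)])  # -> 3 1
```

A smooth patch of matches at one disparity reinforces itself, while the isolated match at (9, 9, 5) gathers no outside support.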
More elaborate 3-D support functions are described in [9] and [12]. It should be noted that these algorithms do not perform a simple linear convolution. Prazdny's algorithm, for example, includes a non-linear step designed to ignore irrelevant matches and cut down on computational costs [12].

Three-dimensional convolutions are computed in a manner similar to two-dimensional convolutions. In the 3-D case, however, a memory location in each processor might accumulate values not only from neighbors in the x and y dimensions, but also in the memory dimension.

3.4 Enforcing uniqueness

The uniqueness constraint [6] allows a left image feature to be matched with only one right image feature and vice versa. Every working stereo algorithm uses the uniqueness constraint or something similar to it. It expresses the fact that under normal circumstances a single physical object does not simultaneously give rise to a single feature in one image and many features in another image.

Figure 2 and its caption illustrate the uniqueness constraint. In the simple algorithm which we are now describing, potential matches are retained if they have the maximum local support score along the radial lines projecting from each eye. This process of non-maximum suppression along lines of sight has been called the winner-take-all approach; it is analyzed in detail in [14]. It is used, with slight variations, by the algorithms in [9], [12] and [14], and it is similar to the inhibition employed by the cooperative algorithm in [6].

4 A new algorithm combining uniqueness and ordering constraints

4.1 The forbidden zone

Every potential match is surrounded by an hourglass-shaped region extending through the d and x dimensions, as shown in Figure 4. This region is called the forbidden zone; see [13]. Any straight line lying in the forbidden zone must intersect no more than one match, unless the scene contains transparent or narrow occluding objects.
Examples of such a scene include a pane of glass with markings on both surfaces or a vertical wire suspended in front of a textured wall. These situations can give rise to violations of the ordering constraint.

Assuming that the scene contains none of the situations just described, any surviving match must be unique not only along the left and right eye radii (L_l and L_r in Figures 2 and 4), but along any line situated between them, such as L in Figure 4. The set of all such lines fills the forbidden zone completely. Therefore, we implemented an algorithm in which a potential match is eliminated unless its local support score is greater than that of any other match in its entire forbidden zone.

Figure 4. The forbidden zone. Enforcing uniqueness in the directions of L_l and L_r in this diagram corresponds to allowing only one match per image feature (recall Figure 2). If we assume that the scene contains opaque objects and no narrow occluding objects, then a straight line lying inside the forbidden zone, such as L above, must also contain at most one match. Consideration of all such lines implies that every correct match must be the only match in its own forbidden zone. The forbidden zones of a few matches are shown above (shaded). Note that none of the black dots (hypothetical correct matches) has another black dot inside its forbidden zone.

If there are D disparity levels, the forbidden zone surrounding each potential match contains approximately D²/2 other potential matches. Therefore, the a priori time complexity of an algorithm that considers every score in every forbidden zone is O(D³). However, an O(D²) algorithm exists, which we now describe.

4.1.1 An O(D²) algorithm for non-maximum suppression over the entire forbidden zone

Figure 5 shows a slice of the field P, through the x-memory plane, in the neighborhood of a single processor.
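Before the efficient formulation, the basic forbidden-zone test can be sketched in brute force. Writing a match's left-image coordinate as x and its right-image coordinate as x + d, another match lies in the hourglass when the left and right orderings disagree; restricting the test to the two lines of sight alone would reduce this to plain winner-take-all. The membership predicate below is an illustrative formalization, not the paper's Connection Machine code.

```python
# Brute-force sketch of suppression over the entire forbidden zone:
# a match survives only if its local-support score beats every other
# match in its forbidden zone.  With xl = x and xr = x + d, another
# match is in the zone when (xl' - xl) * (xr' - xr) <= 0, i.e. the
# two matches cannot both be correct under ordering and uniqueness.

def in_forbidden_zone(m, other):
    (x1, d1), (x2, d2) = m, other
    return m != other and (x2 - x1) * ((x2 + d2) - (x1 + d1)) <= 0

def forbidden_zone_suppression(scores):
    """scores: dict mapping (x, d) -> local support score."""
    return {m for m, s in scores.items()
            if all(s > s2 for m2, s2 in scores.items()
                   if in_forbidden_zone(m, m2))}

fz_scores = {(2, 0): 5, (2, 1): 3, (5, 0): 4}
print(sorted(forbidden_zone_suppression(fz_scores)))  # -> [(2, 0), (5, 0)]
```

The weaker match (2, 1) shares a left line of sight with (2, 0) and is suppressed; (5, 0) conflicts with neither survivor.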
Figure 5 is equivalent to Figure 4, but it makes the relationship between Figure 4 and the Connection Machine architecture more explicit. Notice that here the forbidden zone is "skewed." This is a consequence of the way the field P was constructed in section 3.2. In particular, a vertical line in Figure 4 maps into a vertical line in Figure 5, but a horizontal line in Figure 4 maps into a diagonal line in Figure 5.

Figure 5. Algorithm for non-maximum suppression over the entire forbidden zone (see text). L_l and L_r (from Figures 2 and 4) are redrawn here to make the relationship between this figure and Figure 4 explicit. Note, however, that the line L_r is tilted 45 degrees. This is a consequence of the way the field P was constructed in section 3.2. In particular, lines parallel to L_l lie inside a single processor's memory, but lines parallel to L_r are oriented as shown.

In our algorithm, a potential match must examine the local-support scores of every potential match in its forbidden zone. The Connection Machine implementation steps through each disparity level i, first up (i = d_i, ..., d_f), then down (i = d_f, ..., d_i). On the upward pass the lower "lobes" of the forbidden zones are considered. For example, in Figure 5, the match A checks whether its score is greater than all scores in the horizontally cross-hatched lobe below it. If this is not the case, then A is disqualified. Next, the match B checks whether its score is greater than all scores in the diagonally cross-hatched lobe below it. If this is not the case, then B is disqualified. Note, however, that the maximum value found in the horizontally-shaded lobe can be cached and used to help compute the maximum in the diagonally-shaded lobe. In particular, in order to compute the maximum value in the entire diagonally-shaded lobe, B needs to examine only the scores covered by diagonal shading alone, because the scores covered by both diagonal and
horizontal shading have already been examined (by A). In this way, the time complexity of the forbidden zone computation can be kept down to O(D²). (Note that a similar "downward pass" is necessary to cover the upper lobes.)

5 Results and discussion

Some results of running our algorithm on natural and synthetic images are shown in Figures 6 through 10.

5.1 Time complexity of the algorithm

Given the efficient implementation of the forbidden-zone computation described in the previous section, the most time-consuming component of the entire algorithm is the local support step, which involves an expensive 3-D convolution. This operation requires time proportional to the number of disparity levels being investigated. Therefore, for a particular local-support function, the time complexity of the entire algorithm is roughly O(D).

5.2 Using 3-D support functions

References [9] and [12] describe algorithms that are based on the winner-take-all approach and which use elaborate support functions. These functions are circularly symmetric in the x-y plane, with a butterfly-shaped cross-section. The values of the function decrease gradually from a nonzero weight at the center to zero at the extreme perimeter. Such a function is designed to respond to a configuration of matches that is locally smooth and roughly planar, and which may have a nonzero disparity gradient [9].

The Connection Machine implementation allows the use of such 3-D support functions, but the best results we have obtained so far have been with simple 2-D kernels. We intend to perform further research on this topic.

5.2.1 Color vectors and other matching features

It is possible to combine information from various color channels to improve stereomatching results. One method that we have tried is to use the sign of convolution (SOC) as a matching primitive. This was used by Nishihara in the PRISM stereomatching system [8].
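One way to realize such a color matching code, packing one sign-of-convolution bit per channel, can be sketched as follows; the channel values, the sign convention for zero, and the bit ordering are illustrative assumptions.

```python
# Sketch of the SOC color-vector idea: the sign of a filtered value
# in each of three color channels contributes one bit, giving a 3-bit
# matching code per pixel.  Two pixels are compatible only if all
# three sign bits agree.

def soc_vector(r, g, b):
    """Pack the signs of three filtered channel values into 3 bits."""
    bits = 0
    for i, v in enumerate((r, g, b)):
        if v >= 0:
            bits |= 1 << i
    return bits

p1 = soc_vector(+0.3, -1.2, +0.1)
p2 = soc_vector(+2.0, -0.4, +0.9)
p3 = soc_vector(-0.3, -1.2, +0.1)
print(p1 == p2, p1 == p3)  # -> True False
```

Requiring three sign bits to agree makes a chance match less likely, which is consistent with the drop in both matches and errors reported below.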
Our approach was to compute the SOC for each of three color channels, then load them into separate positions in a 3-bit field of "SOC color vectors." These vectors were used instead of the 2-bit zero crossings to compute the potential matches. This was found to give a noticeable improvement for very contrived scenes. For example, we ran such an algorithm on a scene containing a matte white hammer on a matte white background, with a randomly colored random dot stereogram projected onto the objects to provide a rich texture for matching. This technique, called unstructured light, was invented by Nishihara [8]; he used it for a single broad color channel. We noticed a severe drop in the number of matches and a roughly proportional drop in the number of errors. The method offered little or no improvement for typical natural scenes. We do not regard these results as conclusive; rather, we mention them in order to stimulate interest in the use of color in stereomatching.

6 Conclusions

Stereomatching can be performed extremely fast on a massively parallel processor such as the Connection Machine computer. The use of this machine also helps us represent the problem easily, since the geometry of the stereomatching situation naturally maps into such a parallel architecture. The Connection Machine computer has been used to implement an efficient new algorithm, winner-take-all using the entire forbidden zone. The new algorithm exploits uniqueness and ordering constraints more fully than previous similar algorithms.

The tremendous speed afforded by the Connection Machine computer makes it possible to experiment with sophisticated computer vision algorithms, such as the stereomatching algorithm described here, interactively.

7 Acknowledgments

The author is indebted to T. Poggio for his guidance in this work. Many of the ideas in this paper originated in discussions with him. Connection Machine is a trademark of Thinking Machines Corporation.

Figure 6.
A natural stereo pair. The scene consisted of a terrain model approximately 1 meter wide, with a simulated building added.

Figure 7. Contour map for the natural stereo pair. The disparities were computed using the simple version of the algorithm, i.e., with non-maximum suppression only in the directions of L_l and L_r (from Figure 2). The algorithm was run over a disparity range of 23 pixels; the support region was a flat square 23 pixels on a side. Computation time for the stereomatching algorithm (not including edge detection) was 0.95 seconds. The contour map was computed by drawing isodisparity lines after interpolating the disparity field. (The interpolation algorithm used a "rubber sheet" model. This is basically an iterative solution to the heat equation, performing approximately 1000 iterations of nearest-neighbor averaging at 20 bits of precision and with known disparities held fixed, followed by a slight gaussian smoothing, all computed in 1.5 seconds.)

Figure 8. Another contour map for the natural stereo pair, using the full-forbidden-zone algorithm. All other parameters are the same as in Figure 7. Note that drastic errors (such as the peaks and erratic elevations appearing in Figure 7) are almost non-existent in this contour map.

Figure 9. A computer-generated stereo pair. The scene consisted of a vase in front of a textured backdrop.

Figure 10. (A) shows the occluded region for the synthetic stereo pair, where no matches should be found. (B) shows the locations where matches were found using the simple algorithm. (C) shows the locations of matches found using the full-forbidden-zone algorithm. Note that the full-forbidden-zone algorithm was more successful at eliminating matches in the occluded region, where the ordering constraint is violated.

REFERENCES

[1] Canny, John F. "Finding Lines and Edges in Images," M.I.T. A.I. Memo 720, 1983.

[2] Drumheller, Michael and Poggio, Tomaso.
"On Parallel Stereo," 1986 IEEE International Conference on Robotics and Automation, April 1986, pp. 1439-1448.

[3] Grimson, W. Eric L. From Images to Surfaces, M.I.T. Press, Cambridge, MA, 1981.

[4] Hillis, W. Daniel. The Connection Machine, M.I.T. Press, Cambridge, MA, 1985.

[5] Marr, D. and Hildreth, E. "Theory of Edge Detection," Proc. Roy. Soc. London, vol. B 207, pp. 187-217, 1980.

[6] Marr, D. and Poggio, T. "Cooperative computation of stereo disparity," Science 194, pp. 283-287, 1976.

[7] Mayhew, J.E.W. and Frisby, J.P. "Psychophysical and computational studies towards a theory of human stereopsis," Artificial Intelligence 17 (1981), pp. 349-385.

[8] Nishihara, Keith. "PRISM: A practical real-time imaging stereo matcher," M.I.T. A.I. Memo 780, Cambridge, MA, May 1984.

[9] Pollard, S. B., Porrill, J., Mayhew, J. E. W. and Frisby, J. P. "Disparity Gradient, Lipschitz Continuity, and Computing Binocular Correspondences," Artificial Intelligence Vision Research Unit AIVRU ref. no. 010, University of Sheffield, England, 1985.

[10] Ohta, Yuichi and Kanade, Takeo. "Stereo by Intra- and Inter-scanline Search Using Dynamic Programming," Carnegie-Mellon University Technical Report CMU-CS-83-162, 1983.

[11] Baker, H. H. and Binford, T. O. "Depth from Edge and Intensity Based Stereo," Seventh International Joint Conference on Artificial Intelligence, August 1981, pp. 631-636.

[12] Prazdny, K. "Detection of Binocular Disparities," Biological Cybernetics, 52, pp. 93-99, 1985.

[13] Yuille, Alan L., and Poggio, Tomaso. "A Generalized Ordering Constraint for Stereo Correspondence," M.I.T. A.I. Memo 777, Cambridge, MA, May 1984.

[14] Marroquin, Jose L. "Design of Cooperative Networks," M.I.T. A.I. Lab Working Paper 255, Cambridge, MA, July 1983.

[15] Marr, D. Vision, Freeman, San Francisco, pp. 111-125, 1982.
LEARNING ARITHMETIC PROBLEM SOLVER

Masamichi SHIMURA and Seiichiro SAKURAI
Tokyo Institute of Technology, Department of Computer Science, Ohokayama, Meguro, Tokyo

ABSTRACT

In this paper we describe a problem solving system with a learning mechanism (Learning Arithmetic Problem Solver, LAPS), which can solve arithmetic problems written in natural languages. Since LAPS has knowledge about arithmetic problems in the form of rules, it can solve many different problems without alteration of the program. When LAPS cannot solve a given problem because of a shortage of knowledge, it asks the user how to solve the problem. According to the user's advice LAPS acquires knowledge and rules. Using these rules, LAPS can solve problems. Furthermore, LAPS can improve its performance at problem solving by synthesizing rules that are applied.

I INTRODUCTION

Recently many researchers in A.I. or K.E. have developed several practical knowledge based systems. However, such systems are restricted to rather narrow fields. In general-use systems the knowledge required is excessive and knowledge acquisition is a bottleneck. This paper presents a knowledge acquisition method in problem solving systems. For problem solving, the system needs knowledge to understand the problem and to derive equations. Our LAPS can solve algebraic problems given in natural language. When knowledge is lacking, LAPS can acquire some knowledge to solve a given problem through interaction with a user or teacher. The knowledge obtained from the teacher is generalized and stored in the system. Through the process of problem solving LAPS can get problem-solving knowledge by synthesizing rules that are applied. Once LAPS succeeds in solving the problem, it can solve a similar problem without backtracking. In other words, LAPS can improve its performance at problem solving by learning.

Early attempts at solving algebraic problems given in natural language are the programs by Bobrow and Charniak [1].
However, the elementary parsing technique and simple semantic structures used by Bobrow and Charniak are inadequate for any but the easiest problems. Bundy [2]'s MECHO solves a wide range of mechanics problems given in English. MECHO uses meta-level inference, which provides powerful techniques for controlling the use of knowledge. MECHO, however, must be provided with full knowledge in order to solve problems. LAPS has a learning module and can fill in knowledge deficiencies. Davis [3]'s TEIRESIAS can obtain knowledge through interaction with users, but TEIRESIAS's purpose is as a knowledge acquisition system for an expert system rather than a learning system. Our intention has been to build a system which can understand a natural language, solve problems, and learn through the interaction with a teacher or by problem solving.

II LAPS PROGRAM

As shown in Figure 1, LAPS consists of a natural language processor, problem solver, rule generator, rule modifier and knowledge base.

Figure 1. The structure of LAPS

In our system, the input statements describing a given problem are translated into an accessible and modifiable structure for the system by the natural language processor. The natural language processor consists of a syntactic parser and a semantic analyzer. As the syntactic analyzer, the extended LINGOL [4] is used. LINGOL generates multiple parsed trees from an input statement when there is syntactic ambiguity. Each parsed tree is not only structured data but also a program for the semantic analyzer. After selecting the most plausible one from the multiple trees, LAPS invokes the semantic processing routine in order to produce the appropriate structure.
This structure is called a "fact-graph," which is a kind of semantic network. In the fact-graph, nodes correspond to objects represented by subjective and objective words in the problem statement, and links correspond to objects' properties represented by their modifiers. The process of semantic analysis proceeds by the generation of nodes and the connection of two nodes with a link. Thus a data base about the given problem is constructed in the system. Figure 2 shows an example of the hierarchy of concepts in the knowledge base represented by the connection of nodes with links. Such hierarchical knowledge is used for the process of generalizing acquired knowledge.

Figure 2. Example of the hierarchy of concepts

For solving arithmetic problems, the system derives equations using the data base constructed from the problem description. In our system, some knowledge rules are used in deriving the appropriate equations for the given problem. That is, using heuristics, the system chooses the most likely equation among the candidates. Then the derived equation is rewritten by symbol manipulation so that variables are moved to the left hand side of the equation. A repetitive substitution of values into variables is made until no variables appear in the representation. In the above manipulation, the algebraic formula for solving a given problem is stored in a tree structure. Then the problem solver works toward extraction of simultaneous equations, and solves them by using an equation extraction rule. The equation extraction rule consists of a lambda part, a target part, a condition part, and an action part. The lambda part is a list of variables, and the target part contains a condition about the target variable.
When LAPS solves a problem, it determines a target variable for the derivation of an equation and then generates an applicable rule list by checking the target part of each rule. By matching the condition part with the fact-graph, the correspondence of the variables in the condition part with the nodes and links in the fact-graph is established. After replacing the variables in the action part according to this correspondence, an equation is extracted by the evaluation of the action part. Since the value of any variable in the action part does not need to be known, the rules can be used in a variety of cases.

The condition part is represented in the form of a Lisp function or a form defined by the user. The operators "and", "or", and "not" are used for connecting conditions. An example of a rule for extracting equations is shown in Figure 3. "Condition-add", "action-equation", "action-term" and "action-expression" are names of forms defined by the user. The rule in Figure 3 is paraphrased as: if *z is a sum of *x and *y, then extract the equation *z = *x + *y.

  (eq-rule-1
    (lambda    = (*target *property))
    (target    = (member *property (cardnum weight price ...)))
    (condition = (condition-add *x *y *z))
    (action    = (action-equation
                   (action-term *z *property)
                   (action-expression (action-term *y *property)
                                      (action-term *x *property)))))

Figure 3. Example of a rule

Figure 4 shows LAPS's main routine. When no rules match the given problem, LAPS asks a teacher how to solve it. LAPS analyzes the teacher's answer given in natural language and stores it in the form of a generalized rule. This generalized rule is produced by the rule generator module. The obtained rule, however, is not always correct because of the generalization process. If LAPS finds that the obtained rule is inappropriate for the given problem, the rule is revised according to the teacher's supervision. For the modification of the rule, LAPS invokes the rule modifier module and interacts with the teacher.

  repeat
    get a problem from a teacher
    repeat
      if a rule matches the given problem
        then apply it (no learning)
        else get an instruction from a teacher and generate a rule
    until the problem is solved
  until the teacher is satisfied

Figure 4. LAPS's main routine

III GENERATING A RULE

A. Generating A Rule From A Teacher

When LAPS cannot solve a problem because of a shortage of knowledge, LAPS resolves this lack of knowledge in order to solve the problem. The teacher's advice given in natural language is transformed to the internal form. Consider the following advice.

  The weight of a solution of salt equals the total weight of the water and the salt.

The following equation can be obtained from the above statement, where "solution", "water" and "salt" are variables which correspond to the objects appearing in the problem statements.
If LAPS finds that the obtained rule is inappropriate for the given problem, the rule is revised according to the teacher's supervision, For the modification of the rule, LAPS invokes the rule modifier module and interacts with the teacher. repeat get a problem from a teacher repeat if a rule matches the given problem then apply it (no learning) else get an instruction from a teacher and generate a rule until the problem is solved until the teacher satisfies Figure 4. LAPS's main routine III GENERATING A RULE A. Generating A Rule From A Teacher When LAPS cannot solve a problem because of a shortage of knowledge, LAPS resolves this lack of knowledge in order to solve the problem. The teacher's advice given in natural language is transformed to the internal form. Consider the following advice. The weight of a solution of salt equals the total number of the water and the salt. The follwing equation can be obtained from the above statement, where "solution", "water" and "salt" are variables which correspond to the objects appearing in the problem statements. LEARNING / 1037 weight (solution) = weight (water) +weight (salt) equation is structured data. In this way, LAPS obtain the action part from a teacher’s advice can This rule can be obtained by the simple transformation of an input statement. It is, however, a very specific equation and can be used only for problems about salt solutions. For example, the above rule cannot be used for problems about sugar solutions. Even if we extend the LAPS’s inference mechanism so that LAPS can infer by using the hierarchy of concepts, its execution time may be excessive unless the user controls the inference mechanism. The generation of such specific rules may cause the explosion of the rule base. Our system generates more generalized rules which are applicable to a wider variety of problems. 
Generalization is done by 1) dropping conditions, 2) replacing constants with variables, and 3) abstracting the concepts by moving up the hierarchy of concepts in the knowledge base. Disjunctive generalization is made by adding additional conditions or replacing conjunction by disjunction. For the generalization of a rule, the standard generalization technique of Mitchell [5][6] has been used. However, his candidate elimination algorithm is not very efficient because it is data driven. For the efficiency of generalization, LAPS restricts the initial description of the hypothesis so that the search space is comparatively small. For example, if all constants appearing in the generated rule were replaced by variables, LAPS could not derive an adequate equation, because the correspondence of variables with the nodes and links in the fact-graph could not be obtained. To extract an adequate equation by applying learned rules, all the variables in the action part must be included in the condition part. Once a new variable is introduced into the action part by applying the constant-replacing rule, the condition part must be extended so as to generate the new variables. Also, the condition part, even if some condition is dropped, must still include all the variables in the action part. If infinite disjunctive generalization is permitted, the space of hypotheses becomes infinite. Disjunctive generalization is made, therefore, only when LAPS determines the initial description and there exist alternative hypotheses. To determine the initial description, LAPS generalizes the input data so that there exist no more general descriptions which include all the variables in the action part.

1. Action Part

In order to translate the advice given in natural language, the equation extraction rules are utilized.
The input statement is translated into a fact-graph, and then equations are extracted from the fact-graph. As a result of solving the simultaneous equations, an equation which includes only one variable is obtained. Finally, the system can get an executable form by using a pattern matcher, since the obtained equation is structured data. In this way, LAPS can obtain the action part from a teacher's advice.

2. Condition Part

It is not easy for a teacher to input a complete form of the conditions. Therefore, LAPS generates the condition part by translating the original problem statement into the internal format. Since the simply translated form is about a specific problem, the translated form should be generalized. In the generalization process redundant or noisy parts of the original problem must be ignored and over-generalization should be avoided. In order to modify the translated form without backtracking, two conditions are employed in association with the rules generated from the advice. These two conditions are the maximally general condition and the maximally specific condition, which hold information about the problem states where application of the rule is appropriate or inappropriate. In applying rules, LAPS checks whether the given problem satisfies both of these conditions. If only the general condition is satisfied, then LAPS determines whether the application of the rule is appropriate or not. To identify applicability of the rule, LAPS generates an English statement from the rule and interacts with the outside teacher. If the application of the rule is appropriate, LAPS generalizes the maximally specific condition so that the current problem satisfies the new condition. If the application of the rule is inappropriate, LAPS specializes the maximally general condition so that the current state does not satisfy the new condition. After proceeding with this process of rule modification, LAPS acquires a new complete rule if the maximally general condition becomes equal to the maximally specific condition.
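The interplay of the two conditions can be sketched in a toy form where a condition is just a set of required features; this is a deliberately simplified stand-in for LAPS's condition parts, in the spirit of Mitchell's version spaces, not the actual implementation.

```python
# Sketch of the two-condition scheme: S is the maximally specific
# condition, G the maximally general one.  An appropriate application
# (a positive example) generalizes S by dropping features the example
# lacks; an inappropriate one (a negative example) specializes G by
# adding a feature that rules the example out.  Learning of this rule
# is complete when S == G.

def update(S, G, example, positive):
    if positive:
        S = S & example                 # drop conditions the example lacks
    else:
        for feat in sorted(S - example):
            G = G | {feat}              # add one feature that excludes it
            break
    return S, G

S = {"solution", "salt", "weight"}      # from the first salt problem
G = set()                               # initially no requirements
S, G = update(S, G, {"solution", "sugar", "weight"}, positive=True)
S, G = update(S, G, {"distance", "speed"}, positive=False)
print(sorted(S), sorted(G))  # -> ['solution', 'weight'] ['solution']
```

A sugar-solution problem generalizes S so that "salt" is no longer required, while a distance-speed problem forces G to demand a solution object.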
LAPS can solve problems by using the incomplete rule during rule learning.

B. Generating A Rule From The Execution Process

As described above, LAPS combines equation extraction rules so as to solve problems. When several rules are applicable to the current target variable, LAPS selects the better rule by using heuristic information to resolve the rule conflict. When an inadequate rule is selected, LAPS backtracks so as to obtain better rules.

1. Action Part

In order to improve performance, LAPS generates a new rule by synthesizing the rules which were applied during problem solving. As LAPS constructs a tree structure which contains the information for combining equations, LAPS can synthesize the action parts of the applied rules by traversing the tree and using the information stored in the structure. LAPS composes equations by symbol manipulation and then translates the composed equation into executable form. This process proceeds almost identically to the process of making an action part from the teacher's statement. The difference between the two processes is that in solving a problem the composition process requires generalization of the equation. In other words, the tree structure generated is for a specific problem, and a rule generated by using only such specific information is itself specific. To get more generalized rules, new variables are introduced into the action part by replacing constants, so that the generated rule can be used in similar types of problems. Unless the range of the values of the introduced variables is restricted, however, the application of a generated rule can result in extracting an incorrect equation. Consider the following equation, where X is a variable.

speed(X) = distance(X) / time(X)

Since the above equation is derived by rules, the system can recognize that the following equation is incorrect.
distance(X) = speed(X) / time(X)

Equation extraction rules represent not only equations for solving problems but also constraints on the equations. In LAPS, the range of the variables is restricted by using the equation extraction rules. When a rule is extracted in the execution process, the constraints on the introduced variables are composed into the condition part of the rule.

C. Comparative Review

An equation extraction rule can be considered a model of a problem, since its condition part represents the problem statement and its action part represents the problem solving procedure. In order to generate a new equation extraction rule, the rules initially given in the system are used as domain dependent knowledge. Hence our method is applicable to knowledge acquisition by altering the rules. Since the process of generating a rule from a teacher is based on a data driven method, it requires many examples to complete a new rule. On the other hand, the process of generalizing a rule is a model driven method guided by the equation extraction rules, and it does not require as many examples. If appropriate models are given, knowledge acquisition in LAPS is realized in a comparatively short time.

IV IMPLEMENTATION

LAPS is written in UTILISP [9] on an MC68000 (12.5 MHz), and the program contains about 7,000 lines. There are about 20 grammar rules and almost 400 words in the dictionary. There are 10 equation extraction rules, but LAPS can solve many problems by combining these rules. When LAPS lacks the knowledge to solve a given problem, LAPS asks the user how to solve it. Figure 5 shows an example of a dialogue in which LAPS queries the user. The statements preceded by "->" are the user's input.

-> There is water with a weight of 95 grams.
Please continue.
-> We dissolve salt with a weight of 5 grams in the water.
Please continue.
-> What is the concentration of the solution?
Excuse me, please teach me how to solve the problem.
-> The concentration of a solution is the weight of the salt divided by the weight of the solution times 100.
Thank you very much. I'll try to solve the problem.
The answer is 5.

Figure 5. Example of a dialogue when LAPS lacks knowledge.

In Figure 5 the first attempt to solve the given problem results in a failure because of a shortage of knowledge. LAPS then asks the user how to solve the problem, solves it by using the acquired knowledge, and acquires a new rule. However, since the acquired rule is not guaranteed to be correct, LAPS uses the newly acquired rule while checking its applicability. If only one condition of the learned rule is satisfied, LAPS asks the user whether the rule is applicable or not. According to the user's advice, LAPS can then solve the second problem while modifying the rule.

To improve its performance, LAPS generates a new rule by synthesizing the rules applied during problem solving. Figure 6 shows an example of such a dialogue with LAPS. cardnum in Figure 6 represents the function which returns the number of elements of the given objects.

-> The total number of cranes and turtles is 20.
Please continue.
-> The total number of legs of cranes and turtles is 60.
Please continue.
-> How many cranes are there?
As the number of a crane's legs is 2,
cardnum(crane-1) = cardnum(leg-of-crane-1) * 1/2
As the total number of legs of cranes and turtles is 60,
cardnum(leg-of-crane-1) = (60 - cardnum(leg-of-turtle-1))
As the number of a turtle's legs is 4,
cardnum(leg-of-turtle-1) = cardnum(turtle-1) * 4
As the total number of cranes and turtles is 20,
cardnum(turtle-1) = (20 - cardnum(crane-1))
Consequently, cardnum(crane-1) = 10
The answer is 10.

Figure 6. Example of a dialogue

After solving the problem shown in Figure 6, LAPS generates a new rule which may produce the following equation.

cardnum(crane-1) = (cardnum(animal-1) * cardnum(leg-of-turtle) - cardnum(leg-of-animal-1)) / (cardnum(leg-of-turtle) - cardnum(leg-of-crane))

Figure 7. Example of a composed equation

In Figure 7, cardnum(leg-of-turtle) and cardnum(leg-of-crane) represent the numbers of a turtle's legs and a crane's legs, respectively. cardnum(animal-1) represents the total number of cranes and turtles, and cardnum(leg-of-animal-1) represents the total number of legs of cranes and turtles. LAPS generalizes the above equation so that it can solve a similar problem by applying the newly generated rule. In the above equation, the constants are replaced by distinct variables, and the conditions which include the newly introduced variables are appended to the condition part of the rule. The newly synthesized rule will then be tested in future problem solving. To identify the applicability of the new rule, LAPS interacts with the teacher. If the generalization by replacing constants is incorrect, the rule is specialized by replacing variables with constants.

V CONCLUSIONS

In this paper, we presented a problem solving system which employs learning, problem solving and natural language processing together. With the aid of a teacher, our system can acquire new knowledge and utilize it. Learning from examples creates equation extraction rules that can be used in the problem solver. However, LAPS cannot acquire disjunctive concepts because it uses Mitchell's candidate elimination algorithm. And if the maximally specific condition is overly generalized, rule learning results in a failure because LAPS cannot restore the information that was discarded.

REFERENCES

[1] Charniak, E. "Computer solution of calculus word problems", In Proc. IJCAI-69, Washington D.C., 1969, pp. 303-316.
[2] Bundy, A., Byrd, L., Luger, G., Mellish, C. and Palmer, M., "Solving Mechanics Problems Using Meta-level Inference", In Proc. IJCAI-79, Tokyo, Japan, August, 1979, pp. 1017-1027.
[3] Davis, R. and Buchanan, B. G. "Meta-level knowledge: overview and applications", In Proc. IJCAI-77, Cambridge, USA, August, 1977, pp. 920-927.
[4] Unemi, T. Master thesis, Tokyo Institute of Technology, Tokyo, Japan, March, 1980.
[5] Mitchell, T. M. "Version spaces: a candidate elimination approach to rule learning", In Proc. IJCAI-77, Cambridge, USA, August, 1977, pp. 305-310.
[6] Mitchell, T. M. "Generalization as Search", Artificial Intelligence 18:2 (1982), 205-226.
[7] Neves, D. M. "Learning procedures from examples and by doing", In Proc. IJCAI-85, Los Angeles, USA, August, 1985, pp. 624-630.
[8] Michalski, R. S. "A Theory And Methodology Of Inductive Learning", Machine Learning, Tioga, 1983, pp. 83-134.
[9] Chikayama, T. UTILISP Manual, METR 81-6, Department of Mathematical Engineering and Instrumentation Physics, University of Tokyo, 1981.
THE MULTI-PURPOSE INCREMENTAL LEARNING SYSTEM AQ15 AND ITS TESTING APPLICATION TO THREE MEDICAL DOMAINS*

Ryszard S. Michalski, Igor Mozetic**, Jiarong Hong***, Nada Lavrac**
Department of Computer Science, University of Illinois at Urbana-Champaign

ABSTRACT

AQ15 is a multi-purpose inductive learning system that uses logic-based, user-oriented knowledge representation, is able to incrementally learn disjunctive concepts from noisy or overlapping examples, and can perform constructive induction (i.e., can generate new attributes in the process of learning). In an experimental application to three medical domains, the program learned decision rules that performed at the level of accuracy of human experts. A surprising and potentially significant result is the demonstration that by applying the proposed method of cover truncation and analogical matching, called TRUNC, one may drastically decrease the complexity of the knowledge base without affecting its performance accuracy.

I INTRODUCTION

It is widely acknowledged that the construction of a knowledge base represents the major bottleneck in the development of any AI system. An important method for overcoming this problem is to employ inductive learning from examples of expert decisions. In this knowledge acquisition paradigm, knowledge engineers do not have to force experts to state their "know how" in a predefined representational formalism. Experts are asked only to provide correct interpretation of existing domain data or to supply examples of their performance. It is known that experts are better at providing good examples and counterexamples of decisions than at formalizing their knowledge in the form of decision rules. Early experiments exploring this paradigm have also shown that decision rules formed by inductive learning may outperform rules provided by human experts [Michalski & Chilausky 80; Quinlan 83].
An important part of the development of an inductive learning system is its evaluation on practical problems. There are several criteria for evaluating inductive learning methods; we argue that the most important one is the classification accuracy of the induced rules on new objects. In this paper we present an experimental evaluation of the AQ15 program for learning from examples in three medical domains: lymphography, prognosis of breast cancer recurrence, and location of primary tumor. These three domains are characterized by consecutively larger amounts of overlapping and sparse learning events. Examples of a few hundred patients with known diagnoses were available, along with the assessed classification accuracy of human experts. We randomly selected 70% of the examples for rule learning and used the rest for rule testing. For each domain, the experiment was repeated four times. The induced rules reached the classification accuracy of human experts. Performance of experts was measured in two out of three domains (breast cancer and primary tumor), testing five and four experts, respectively. The experiments also revealed the interesting phenomenon that by truncating covers and applying analogical rule matching one may significantly reduce the size of the knowledge base without decreasing its performance accuracy. A more detailed presentation of the results and of the program AQ15 is in [Michalski, Mozetic & Hong 86; Hong, Mozetic & Michalski 86].

*This research was supported in part by the National Science Foundation under Grant No. DCR 84-06801, the Office of Naval Research under Grant No. N00014-82-K-0186, the Defense Advanced Research Project Agency under Grant No. N00014-K-85-0878, and by the Slovene Research Council.

II AN OVERVIEW OF AQ15

The program AQ15 is a descendant of the GEM program and the AQ1-AQ11 series of inductive learning programs, e.g., [Michalski & Larson 75].
Its ancestors were experimented with in the areas of plant disease diagnosis [Michalski & Chilausky 80], chess end-games, diagnosis of cardiac arrhythmias [Mozetic 86], and others. All these systems are based on the AQ algorithm, which generates decision rules from a set of examples, as originally described in [Michalski 69; Michalski & McCormick 71]. When building a decision rule, AQ performs a heuristic search through a space of logical expressions to determine those that account for all positive examples and no negative examples. Because there are usually many such complete and consistent expressions [Michalski 83], the goal of AQ is to find the most preferred one according to a flexible extra-logical criterion. This criterion is defined by the user to reflect the needs of the application domain. Rules are represented as expressions in variable-valued logic system 1 (VL1), which is a multiple-valued logic propositional calculus with typed variables [Michalski & Larson 75]. In VL1, a selector relates a variable to a value or a disjunction of values. A conjunction of selectors forms a complex. A cover is a disjunction of complexes describing all positive examples and none of the negative examples of the concept. A cover defines the condition part of a corresponding decision rule. AQ15 is able to produce rules of different degrees of generality (rules may be general, minimal or specific). The program implements incremental learning with perfect memory. The user may supply his decision hypotheses as initial rules. In this type of learning the system remembers all learning examples that were seen so far, as well as the rules it formed [Reinke & Michalski 86].

**On leave from: Jozef Stefan Institute, Ljubljana, Yugoslavia.
***On leave from: Harbin Institute of Technology, Harbin, The People's Republic of China.

From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved.
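A minimal sketch of these VL1 constructs (the Python encoding and all attribute names are illustrative assumptions, not AQ15's actual representation):

```python
# Illustrative encoding of VL1 constructs: a selector relates a variable
# to a disjunction of values, a complex is a conjunction of selectors,
# and a cover is a disjunction of complexes.

def selector(attr, values):
    """[attr = v1 v v2 ...] -- true if the event's attr is in values."""
    return lambda event: event[attr] in values

def complex_(selectors):
    """Conjunction of selectors."""
    return lambda event: all(sel(event) for sel in selectors)

def cover(complexes):
    """Disjunction of complexes; a cover is the condition part of a rule."""
    return lambda event: any(cpx(event) for cpx in complexes)

# A toy two-complex cover for some decision class:
#   [color = red v orange][size = small]  v  [shape = round]
rule = cover([
    complex_([selector("color", {"red", "orange"}),
              selector("size", {"small"})]),
    complex_([selector("shape", {"round"})]),
])

print(rule({"color": "red", "size": "small", "shape": "square"}))   # True
print(rule({"color": "blue", "size": "large", "shape": "square"}))  # False
```

An event strictly matches the rule exactly when at least one complex of its cover is fully satisfied.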
A form of constructive induction is implemented in AQ15 as well. The program's background knowledge, expressed in the form of rules, is used to generate new attributes not present in the input data. The background knowledge rules are of two types: L-rules, which define values of new variables by logical assertions, and A-rules, which introduce new variables as arithmetic functions of the original variables.

III TRUNCATION OF COVERS AND ANALOGICAL MATCHING

The underlying idea behind the TRUNC method is that the meaning of a concept can be distributed between its explicit representation and the method of its interpretation [Michalski 86a; Michalski 86b]. This idea can be simply realized as described below.

In AQ15 a concept is represented in the form of a simple conjunctive statement (called a complex), or as a disjunction of such statements. Each statement is associated with a pair of weights, t and u, representing the total number of instances (events) explained by the expression and the number of events explained uniquely by that expression, respectively. The t-weight may be interpreted as a measure of the representativeness of a complex as a concept description. The u-weight may be interpreted as a measure of the importance of the complex. The complex with the highest t-weight may be interpreted as describing the most typical examples of the concept; it may also be viewed as a prototypical or ideal definition of the concept. The complexes with the lowest u-weights can be viewed as describing rare, exceptional cases. If the learning events from which rules are derived are noisy, such "light" complexes may be indicative of errors in the data.

The above weight-ordering of complexes suggests an interesting possibility. Suppose we have a t-weight-ordered disjunction of complexes, and we remove from it the lightest complex. The truncated description will not strictly match events that uniquely satisfy the truncated complex. However, by applying the analogical match, these events may still come out to be the most similar to the correct concept, and thus be correctly recognized. A truncated description is of course simpler, but carries a potentially higher risk of recognition error, and requires a somewhat more sophisticated evaluation. We can proceed further and remove the next "light" complex from the cover, and observe the performance. Each such step produces a different trade-off between the complexity of the description on one side, and the risk factor and the evaluation complexity on the other (Figure 1). At some step the best overall result may be achieved for a given application domain. This method of knowledge reduction by truncating ordered covers and applying analogical matching is called TRUNC.

Figure 1. An example of a t-ordered cover. The cuts at a, b and c mark truncated covers with 1, 2 or 3 complexes, respectively. In each pair (x, y), x represents the t-weight, and y represents the u-weight.

Two methods of recognizing the concept membership of an instance are distinguished: the strict match and the analogical match. In the strict match, one tests whether an instance satisfies the condition part of a rule. In the analogical match, one determines the degree of similarity or conceptual closeness between the instance and the condition part. Using the strict match, one can recognize a concept without checking other candidate concepts; in the analogical match, one needs to determine the most closely related concept. Analogical matching can be accomplished in a variety of ways, ranging from approximate matching of features to conceptual cohesiveness [Michalski & Stepp 83]. The above trade-off is related to the issues studied in Variable Precision Logic [Michalski & Winston 86]. An interesting problem is to test how the cover truncation method affects the accuracy of recognition and the complexity of the decision rules in different practical settings. Section IV presents results of some such experiments, which in some cases came out very surprising. We now turn to the problem of analogical matching, and the resolution of conflict when several concept descriptions are matched by an event.

When strictly matching a new event against a set of (disjunctive) rules, three outcomes are possible: there may be only one match, more than one, or no match (categories called SINGLE, MULTIPLE and NO-MATCH, respectively; Figure 2). Each category requires a different evaluation procedure, and a different method of determining the accuracy of concept recognition. For an exact match (category SINGLE), the evaluation is easy: the decision is counted as correct if it is equal to the known classification of the testing object, and as wrong otherwise. If there are several exact matches (the MULTIPLE case) or none (the NO-MATCH case), the system activates the flexible evaluation scheme that determines the best decision (or the most probable one). Comparing this decision with the decision provided by experts, one evaluates it as correct or incorrect. Here we propose two simple heuristic classification criteria, one for the MULTIPLE case, and the other for the NO-MATCH case.

Figure 2. The three possible cases when matching a new event against a set of decision rules.

Estimate of Probability for the MULTIPLE case (EP). Let C1, ..., Cn denote decision classes and e an event to be classified. For each decision class Ci we have a rule that consists of a disjunction of complexes Cpxj, which in turn are conjunctions of selectors (Sel). We define the estimate of probability, EP, as follows: 1) EP of a complex Cpxj
is the ratio of the weight of the complex (the number of learning examples covered by it) to the total number of learning examples (#examples), if the complex is satisfied by the event e, and equals 0 otherwise:

EP(Cpxj, e) = Weight(Cpxj) / #examples, if complex Cpxj is satisfied by e,
EP(Cpxj, e) = 0 otherwise.

2) EP of a class Ci is the probabilistic sum of the EPs of its complexes. If the rule for Ci consists of a disjunction of two complexes Cpx1 v Cpx2, we have:

EP(Ci, e) = EP(Cpx1, e) + EP(Cpx2, e) - EP(Cpx1, e) * EP(Cpx2, e)

The most probable class is the one with the largest EP, i.e., the one whose satisfied complexes cover the largest number of learning examples. Obviously, if the class is not satisfied by the given event, its EP equals 0.

Measure of Fit for the NO-MATCH case (MF). In this case the event belongs to a part of the decision space that is not covered by any decision rule, and this calls for analogical matching. One way to perform such matching is to measure the fit between the attribute values in the event and the class description, taking into consideration the prior probability of the class. We used in the experiments a simple measure, called the measure of fit, MF, defined as follows:

1) MF of a selector Selk is 1 if the selector is satisfied by e. Otherwise, this measure is proportional to the amount of the decision space covered by the selector:

MF(Selk, e) = 1, if selector Selk is satisfied by e,
MF(Selk, e) = #Values / DomainSize otherwise,

where #Values is the number of disjunctively linked attribute values in the selector, and DomainSize is the total number of the attribute's possible values.

2) MF of a complex Cpxj is defined as the product of the MFs of its constituent selectors, weighted by the proportion of learning examples covered by the complex:

MF(Cpxj, e) = prod_k MF(Selk, e) * (Weight(Cpxj) / #examples)

3) MF of a class Ci is obtained as a probabilistic sum over its disjunction of complexes:
MF(Ci, e) = MF(Cpx1, e) + MF(Cpx2, e) - MF(Cpx1, e) * MF(Cpx2, e)

We can interpret the measure of best fit of a class as a combination of the "closeness" of the event to the class and an estimate of the prior probability of the class. This measure can be further extended by introducing a measure of the degree to which a selector is satisfied [Michalski & Chilausky 80].

IV EXPERIMENTS AND ANALYSIS OF RESULTS

The experiments were performed on data from three medical domains: lymphography, prognosis of breast cancer recurrence and location of primary tumor (Table 1). All data were obtained from the Institute of Oncology of the University Medical Center in Ljubljana, Yugoslavia [Kononenko, Bratko & Roskar 86].

Lymphography. This domain is characterized by 4 decision classes (diagnoses) and 18 attributes. Data of 148 patients were available. Diagnoses in this domain were not verified and actual testing of physicians was not done. A specialist's estimate is that internists diagnose correctly in about 60% of cases and specialists in about 85%.

Prognosis of Breast Cancer Recurrence. The domain is characterized by 2 decision classes and 9 attributes. The set of attributes is incomplete, as it is not sufficient to completely discriminate between cases with different outcomes. Data for 286 patients with known diagnostic status 5 years after the operation were available. The five specialists that were tested gave a correct prognosis in 64% of cases.

Location of Primary Tumor. Physicians distinguish between 22 possible locations of primary tumor. Patients' diagnostic data involve 17 attributes (this set is also incomplete). Data of 339 patients with known locations of primary tumor were available for the experiment. The four internists that were tested determined a correct location of primary tumor in 32% of cases, and the four oncologists (specialists) in 42% of test cases.
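As a concrete illustration of the evaluation scheme used in these experiments, the EP and MF measures defined in the previous section can be sketched as follows (a minimal sketch; the data structures, the toy weights and the N_EXAMPLES constant are illustrative assumptions, not AQ15's actual implementation):

```python
from functools import reduce

N_EXAMPLES = 100  # total number of learning examples (illustrative)

def prob_sum(values):
    """Probabilistic sum a + b - a*b, folded over a disjunction."""
    return reduce(lambda a, b: a + b - a * b, values, 0.0)

def ep_complex(cpx, event):
    """EP of a complex: its relative weight if the event satisfies it."""
    sels, weight = cpx
    satisfied = all(event[attr] in vals for attr, vals in sels)
    return weight / N_EXAMPLES if satisfied else 0.0

def ep_class(cpxs, event):
    return prob_sum(ep_complex(c, event) for c in cpxs)

def mf_selector(attr, vals, domain_size, event):
    """MF of a selector: 1 if satisfied, else its share of the domain."""
    return 1.0 if event[attr] in vals else len(vals) / domain_size

def mf_complex(cpx, domains, event):
    sels, weight = cpx
    prod = 1.0
    for attr, vals in sels:
        prod *= mf_selector(attr, vals, domains[attr], event)
    return prod * (weight / N_EXAMPLES)

def mf_class(cpxs, domains, event):
    return prob_sum(mf_complex(c, domains, event) for c in cpxs)

# A class described by two complexes, each a (selectors, t-weight) pair.
cpxs = [([("color", {"red"}), ("size", {"small", "medium"})], 30),
        ([("shape", {"round"})], 10)]
domains = {"color": 4, "size": 3, "shape": 5}

event = {"color": "red", "size": "small", "shape": "square"}
print(ep_class(cpxs, event))           # only the first complex matches strictly
print(mf_class(cpxs, domains, event))  # partial credit for the second complex
```

A classifier would compute these scores for every class and pick the largest EP (MULTIPLE case) or, failing any strict match, the largest MF (NO-MATCH case).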
Domain          Examples   Classes   Attrs   Vals/Attr
Lymphography       148         4       18      3.3
Breast cancer      286         2        9      5.8
Primary tumor      339        22       17      2.2

Table 1. The number of examples, classes and attributes, and the average number of values per attribute, for each of the three medical domains.

In all medical domains 70% of the examples were selected for learning and the remaining 30% for testing. Each testing experiment was repeated 4 times with randomly chosen learning examples. Final results are the average of the 4 experiments (Table 2). In addition to results obtained from using complete (untruncated) rules, results of two other experiments are presented. In the first experiment we eliminated from the rules all complexes that uniquely cover only one learning example (unique >1), and in the second we eliminated all complexes except the most representative one, covering the largest number of learning examples (best cpx). Complexity of rules is measured by the number of selectors and complexes.

Domain          Cover truncation   Sel   Cpx   Accuracy   Human Experts    Random Choice
Lymphography    no                  37    12     81%       85% (estimate)      25%
                unique >1           34    10     80%
                best cpx            10     4     82%
Breast cancer   no                 160    41     60%       64%                 50%
                unique >1          128    32     66%
                best cpx             7     2     68%
Primary tumor   no                 551   104     39%       42%                  5%
                unique >1          257    42     41%
                best cpx           112    20     29%

Table 2. Average complexity and accuracy of AQ15's rules learned from 70% of the examples, over 4 experiments, as compared to the performance of human experts and a random choice classification algorithm.

Table 2 shows that some results came out very surprising. When the cover of each class was truncated to only one (the heaviest) complex, the complexity of the rule set for lymphography went down from a total of 12 complexes and 37 selectors to only 4 complexes (one per class) and 10 selectors. At the same time the performance of the rules went slightly up (from 81% to 82%)!
A similar phenomenon occurred in the breast cancer domain, where the numbers of selectors and complexes went down from 160 and 41 to 7 and 2, respectively, while the performance went slightly up from 66% to 68%. This means that by using the TRUNC method one may significantly reduce the knowledge base without affecting its performance accuracy.

Results for human experts are the average of testing five and four domain specialists in the domains of breast cancer recurrence and primary tumor, respectively [Kononenko, Bratko & Roskar 86]. In the domain of lymphography, physicians' accuracy is given only as an estimate and was not actually measured.

The domain of lymphography seems to have some strong patterns and the set of attributes is known to be complete. There are four possible diagnoses, but only two of them are prevailing. The domain of breast cancer has only two decision classes but does not have many strong patterns. The domain of location of primary tumor has many decision classes and mostly binary attributes; there are only a few examples per class, and the domain seems to be without any strong patterns. Both of the latter domains are underspecified in the sense that the set of available attributes is incomplete (not sufficient to discriminate between different classes).

The statistics in Table 3 include the average number of complexes per rule, the average number of attributes per complex, the average number of values per attribute and, finally, the average number of learning examples covered by one complex. We can see that in the domain of primary tumor decision rules consist of complexes that on average cover slightly more than 2 examples. In the domain of lymphography complexes on average cover 8 examples, which indicates the presence of relatively strong patterns. It is surprising that a cover truncation mechanism that strongly simplifies the rule base may have no effect on classification accuracy.
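The truncation policies compared in Table 2 can be sketched as follows (an illustration of the idea only; the data structures, toy weights and policy names are assumptions mirroring the table's rows):

```python
# Sketch of the TRUNC knowledge-reduction step: order a cover's
# complexes by t-weight and keep only the heaviest ones, relying on
# analogical matching to recover events the dropped complexes covered.

def truncate_cover(cover, policy):
    """cover: list of (complex, t_weight, u_weight) triples."""
    ordered = sorted(cover, key=lambda c: c[1], reverse=True)
    if policy == "no":           # keep the full cover
        return ordered
    if policy == "unique>1":     # drop complexes uniquely covering one example
        return [c for c in ordered if c[2] > 1]
    if policy == "best cpx":     # keep only the most representative complex
        return ordered[:1]
    raise ValueError(policy)

# A toy cover; the weight pairs echo Figure 1's (t, u) annotation.
cover = [("cpx1", 30, 12), ("cpx2", 8, 3), ("cpx3", 2, 1)]

print([c[0] for c in truncate_cover(cover, "unique>1")])  # drops cpx3
print([c[0] for c in truncate_cover(cover, "best cpx")])  # keeps cpx1 only
```

Each policy trades description complexity against the recognition risk discussed in Section III.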
Removing “light” complexes from a cover is equivalent to removing disjunctively linked conditions from a concept description. This process thus overspecializes a knowledge representation, producing an incomplete concept description (i.e., a one that does not cover some positive exam- ples). As the results show, this may lead to a substantial simplification of the concept description, without the decline in performance of the rules base. This knowledge reduction technique by specialization may be contrasted with knowledge reduction by generalization used in the ASSISTANT learning program, a descendant of ID3 [Quinlan 83). This program represents knowledge in the form of decision trees, and has been applied to the same medi- cal problems as here (Kononenko, Bratko & Roskar 861. The program applies a tree pruning technique based on the princi- ple of maximal classification accuracy. The technique removes certain nodes from a tree, and is equivalent to removing con- junctively linked conditions from a concept description. Thus, such ? knowledge reduction technique overgeneralizes the knowledge representation, producing an inconsistent concept description (i.e., a one that covers some negative examples). It is interesting to point out that this technique may also lead to an improvement of accuracy in decision making when learning from noisy and overlapping data. Table 4 presents the com- plexity and diagnostic accuracy of ASSISTANT’s trees built with and without the tree pruning mechanism [Kononenko, Bratko & Roskar 861. Tree pruning corresponds to the removal of selectors from complexes. This seems to suggest that when learning from noisy or overlapping data the knowledge reduction pro- cess may not only involve removal of complexes from a cover (a specialization process) but also removal of selectors from complexes (a generalization process). This means that a con- cept description would be both inconsistent and incomplete. 
It is an interesting problem for further research to determine conditions under which such a description produces better results than a consistent and complete one.

Domain          Cpx/Rule   Attrs/Cpx   Values/Attr   Examples/Cpx
Lymphography      3           3.1          1.8            8
Breast cancer    20           3.9          1.7            5
Primary tumor     5.2         5.3          1.0            2.3

Table 3. Average complexity of AQ15's decision rules in the three medical domains, when no cover truncation mechanism was applied.

Tree pruning   Nodes   Leaves   Accuracy
no              38       22       76%

Table 4. Average complexity and accuracy of decision trees built by ASSISTANT on 70% of the examples, over 4 experiments. In all three domains the tree pruning mechanism reduced the complexity and increased the accuracy.

V CONCLUSION

A major contribution of the paper is to show that a relatively simple, attribute-based inductive learning method is able to produce decision rules of sufficiently high quality to be applicable to practical problems with noisy, overlapping and incompletely specified learning events. The AQ15 program has shown itself to be a powerful and versatile tool for experimenting with inductive knowledge acquisition in such problems. It produces decision rules which are easy to interpret and comprehend. The knowledge representation in the program is limited, however, to attribute-based descriptions only. For problems that require structural descriptions one may use the related program INDUCE2 [Hoff, Michalski & Stepp 83] or its incremental learning version INDUCE4 [Mehler, Bentrup & Riedesel 86]. A weakness of the experimental part of the paper is that the authors had no influence on the way the data were prepared for the experiments, and the available data allowed us to test only a few of the features of AQ15. Another major result is a demonstration that knowledge reduction by truncating covers may lead in some cases to a substantial reduction of the rule base without decreasing its performance accuracy.
Further research will be required to find, for any given domain, a rule reduction criterion that leads to the best trade-off between accuracy and complexity of a rule base.

ACKNOWLEDGEMENTS

The authors thank Ivan Bratko and Igor Kononenko from the Faculty of Electrical Engineering at the University of Ljubljana for collaboration and comments, and physicians Matjaz Zwitter and Milan Soklic from the Institute of Oncology at the University Medical Center in Ljubljana for providing medical data and helping to interpret them. We further acknowledge Gail Thornburg from the UI School of Library and Information Science and the AI Laboratory at the Dept. of Computer Science for her criticism and valuable suggestions.

REFERENCES

[1] Hong, J., Mozetic, I., Michalski, R.S. (1986). "AQ15: Incremental Learning of Attribute-Based Descriptions from Examples, the Method and User's Guide." Report ISG 86-5, UIUCDCS-F-86-949, Dept. of Computer Science, University of Illinois, Urbana.

[2] Hoff, W., Michalski, R.S., Stepp, R.E. (1983). "INDUCE.2: A Program for Learning Structural Descriptions from Examples." Report ISG 83-4, UIUCDCS-F-83-904, Dept. of Computer Science, University of Illinois, Urbana.

[3] Kononenko, I., Bratko, I., Roskar, E. (1986). "ASSISTANT: A System for Inductive Learning." Informatica Journal, Vol. 10, No. 1 (in Slovenian).

[4] Mehler, G., Bentrup, J., Riedesel, J. (1986). "INDUCE.4: A Program for Incrementally Learning Structural Descriptions from Examples." Report in preparation, Dept. of Computer Science, University of Illinois, Urbana.

[5] Michalski, R.S. (1969). "On the Quasi-Minimal Solution of the General Covering Problem." Proceedings of the V International Symposium on Information Processing (FCIP 69), Vol. A3 (Switching Circuits), Bled, Yugoslavia, pp. 125-128.

[6] Michalski, R.S. (1983). "Theory and Methodology of Machine Learning." In R.S. Michalski, J.G. Carbonell, T.M.
Mitchell (Eds.), Machine Learning - An Artificial Intelligence Approach, Palo Alto: Tioga.
[7] Michalski, R.S. (1986a). "Concept Learning." To appear in AI Encyclopedia, John Wiley & Sons.
[8] Michalski, R.S. (1986b). "Two-tiered Concept Representation, Analogical Matching and Conceptual Cohesiveness." Invited paper for the Workshop on Similarity and Analogy, Allerton House, University of Illinois, June 12-14.
[9] Michalski, R.S., Chilausky, R.L. (1980). "Learning by Being Told and Learning from Examples: An Experimental Comparison of the Two Methods of Knowledge Acquisition in the Context of Developing an Expert System for Soybean Disease Diagnosis." International Journal of Policy Analysis and Information Systems, Vol. 4, No. 2, pp. 125-161.
[10] Michalski, R.S., Larson, J. (1975). "AQVAL/1 (AQ7) User's Guide and Program Description." Report No. 731, Dept. of Computer Science, University of Illinois, Urbana.
[11] Michalski, R.S., McCormick, B.H. (1971). "Interval Generalization of Switching Theory." Report No. 442, Dept. of Computer Science, University of Illinois, Urbana.
[12] Michalski, R.S., Mozetic, I., Hong, J. (1986). "The AQ15 Inductive Learning System: An Overview and Experiments." Report ISG 86-20, UIUCDCS-R-86-1260, Dept. of Computer Science, University of Illinois, Urbana.
[13] Michalski, R.S., Stepp, R.E. (1983). "Learning from Observations: Conceptual Clustering." In R.S. Michalski, J.G. Carbonell, T.M. Mitchell (Eds.), Machine Learning - An Artificial Intelligence Approach, Palo Alto: Tioga.
[14] Michalski, R.S., Winston, P.H. (1986). "Variable Precision Logic." AI memo No. 857, MIT, Cambridge. An extended version to appear in AI Journal.
[15] Mozetic, I. (1986). "Knowledge Extraction through Learning from Examples." In T.M. Mitchell, J.G. Carbonell, R.S. Michalski (Eds.), Machine Learning: A Guide to Current Research, Kluwer Academic Publishers.
[16] Quinlan, J.R. (1983).
"Learning Efficient Classification Procedures and their Application to Chess End Games." In R.S. Michalski, J.G. Carbonell, T.M. Mitchell (Eds.), Machine Learning - An Artificial Intelligence Approach, Palo Alto: Tioga.
[17] Reinke, R.E., Michalski, R.S. (1986). "Incremental Learning of Decision Rules: A Method and Experimental Results." To appear in J.E. Hayes, D. Michie, J. Richards (Eds.), Machine Intelligence 11, Oxford University Press.
RESTRICTING LOGIC GRAMMARS

EDWARD P. STABLER, JR.
Quintus Computer Systems
2345 Yale St., Palo Alto, CA, 94306
[email protected]

ABSTRACT

A parser formalism for natural languages that is so restricted as to rule out the definition of linguistic structures that do not occur in any natural language can make the task of grammar construction easier, whether it is done manually (by a programmer) or automatically (by a grammar induction system). A restrictive grammar formalism for logic programming languages is presented that imposes some of the constraints suggested by recent Chomskian linguistic theory. In spite of these restrictions, this formalism allows for relatively elegant characterizations of natural languages that can be translated into efficient prolog parsers.

I. INTRODUCTION

The best-known parser formalisms for logic programming systems have typically aimed to be expressive and efficient rather than restrictive. It is no surprise that in these systems a grammar writer can define linguistic structures which do not occur in any natural language. These "unnatural" structures might suffice for some particular processing of some particular fragment of a natural language, but there is a good chance that they will later need revision if the grammar needs to be extended to cover more of the natural language. On the other hand, if the grammar writer's options could be limited in the right way, there would be less to consider when a choice had to be made among various ways to extend the current grammar, with the aim of choosing an extension that will not later need revision. Thus a restricted formalism can actually make it easier to build large, correct, and upward-compatible natural language grammars. A similar point obviously holds for automatic "language learning" systems. If a large class of languages must be considered, this can increase the difficulty of the problem of correctly identifying an arbitrary language in the class.
So there are certainly significant practical advantages to formalisms for natural language parsers which allow the needed linguistic structures to be defined gracefully while making it impossible to define structures that never occur. Recent work in linguistic theory provides some indications about how we can limit the expressive power of a grammar notation without excluding any human languages. There appear to be severe constraints on the possible phrase structures and on the possible “movement” and “binding” relationships that can occur. The exact nature of these constraints is somewhat controversial. This paper will not delve into this controversy, but will just show how some of the constraints proposed recently by Chomsky and others, constraints to which all human languages are thought to conform, can very easily be enforced in a parsing system that allows an elegant grammar notation. These grammars will be called “restricted logic grammars” (RLGs). Two well known logic grammar formalisms, definite clause grammars (DCGs) and extraposition grammars (XGs) will be briefly reviewed, and then RLGs will be introduced by showing how they differ from XGs. RLGs have a new type of rule (“switch rules”) that is of particular value in the definition of natural languages, and the automatic enforcement of some of Chomsky’s constraints makes RLG movement rules simpler than XGs’. We follow the work of (Marcus, 1980), (Berwick, 1982) and others in pursuing this strategy of restricting the grammar formalism by enforcing Chomsky’s constraints, but we use a simple nondeterministic top-down backtracking parsing method. This approach to parsing, which has been developed in logic programming systems by (Pereira and Warren, 1980) and others, allows our rules to be very simple and intuitive. Since, on this approach, determinism is not demanded, we avoid Marcus’s requirement that all ambiguity be resolved in the course of a parse. II. 
DEFINITE CLAUSE GRAMMARS (DCGs)

DCGs are well known to logic programmers. (See Pereira and Warren, 1980 for a full account.) DCGs are similar to standard context free grammars (CFGs), but they are augmented with certain special features. These grammars are compiled into prolog clauses which (in their most straightforward use) define a top-down, backtracking recognizer or parser in prolog. A DCG rule that expands a nonterminal into a sequence of nonterminals is very similar to the standard CFG notation, except that when the right-hand side of a rule contains more than one element, some operator (like a comma) is required to collect them together into a single term. The rules of the following grammar provide a simple example:

    s --> np , vp.
    np --> det , n.
    vp --> v.
    det --> [the].
    n --> [woman].
    v --> [reads].
                                (DCG 1)

The elements of the terminal vocabulary are distinguished by being enclosed in square brackets. An empty expansion of a category "cat" is written "cat --> []." (DCG 1) defines a simple context free language which includes "the woman reads".

From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved.

Two additional features provide DCGs with considerably more power. First, the nonterminals in the DCG rules may themselves have arguments to hold structural representations or special features, and second, the right hand side of any rule may include not only the grammatical terminals and nonterminals but also arbitrary predicates or "tests". The tests must be distinguished from the grammatical vocabulary, and so we mark them by enclosing them in braces, e.g., {test}. (Pereira and Warren, 1980) define a simple translation which transforms rules like these into Horn clauses in which each n-place nonterminal occurs as a predicate with n+2 arguments. The two added arguments provide a "difference list" representation of the string that is to be parsed under that nonterminal.
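As an illustration of this translation (ours, not the paper's; the rendering and all names are invented), the difference-list idea can be mimicked in Python: each nonterminal becomes a function from an input list to every remainder list it can leave behind, and adding an extra count argument is enough to recognize the non-context-free language a^n b^n c^n discussed below.

```python
# Difference-list style recognition: a nonterminal maps input S0 to
# each remainder S such that the consumed prefix is derived by it.
# Nondeterminism (Prolog backtracking) is modeled with generators.

def terminal(word):
    def parse(s0):
        if s0 and s0[0] == word:
            yield s0[1:]
    return parse

def seq(*parsers):
    def parse(s0):
        if not parsers:
            yield s0
        else:
            for s1 in parsers[0](s0):
                yield from seq(*parsers[1:])(s1)
    return parse

def block(word, n):
    # n repetitions of word: the "extra argument" of the grammar.
    if n == 0:
        return seq()
    return seq(terminal(word), block(word, n - 1))

def anbncn(s0):
    # a^n b^n c^n for n >= 1; Prolog would find n by search.
    for n in range(1, len(s0) + 1):
        yield from seq(block('a', n), block('b', n), block('c', n))(s0)

def recognize(words):
    return any(rest == [] for rest in anbncn(list(words)))
```

Under this sketch, recognize("aabbcc") succeeds while recognize("aabbc") fails, since no single n matches all three blocks.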
Given the standard prolog depth-first, backtracking proof technique, these clauses define a standard top-down backtracking parser. The DCG notation is very powerful. The fact that arbitrary prolog tests are allowed makes the notation as powerful as prolog is: a DCG can effectively parse or recognize exactly the class of effectively parsable or recognizable languages, respectively. Even eliminating the tests would not restrict the power of the system. We get the full power of pure prolog when we are allowed to give our grammatical predicates arbitrary arguments. With just two arguments to grammatical predicates to hold the difference list representation of the string to be parsed, we could recognize only context free languages, but with the extra arguments, it is not hard to define context sensitive languages like a^n b^n c^n which are not context free (cf., Pereira, 1983).

III. EXTRAPOSITION GRAMMARS (XGs)

In spite of the power of DCGs, they are not convenient for the definition of certain constructions in natural languages. Most notable among these are the "movement-trace" or "filler-gap" constructions. These are constructions in which a constituent seems to have been moved from another position in the sentence. This treatment of natural language syntax has been well motivated by recent work in linguistic theory. For example, there are good reasons to regard the relative pronoun that introduces a relative clause as having been moved from a subject or object position in the clause. In the following sentences, the relative clauses have been enclosed in brackets, and the positions from which "who" has moved are indicated by the position of the coindexed "[t]i", which is called the "trace":

    The woman [whoi [t]i likes books] reads.
    The woman [whoi booksellers like [t]i] reads.
    The woman [whoi the bookseller told me about [t]i] reads.
In ATN parsers like LUNAR (Woods, 1970), filler-gap constructions are parsed by what can be regarded as a context free parser augmented with a "HOLD" list: when a prefixed wh-phrase like "in which garage" or "who" is parsed, it is put into the HOLD list, from which it can be brought to fill a "gap" in the sentence that follows. Fernando Pereira (Pereira, 1981, 1983) showed how a very similar parsing method could be implemented in logic programming systems. These augmented grammars, which Pereira calls "extraposition grammars" (XGs), allow everything found in DCGs and allow, in addition, rules which put an element into a HOLD list - actually, Pereira calls the data structure which is analogous to the ATN HOLD list an "extraposition list". So, for example, in addition to DCG rules, XGs accept rules like the following:

    nt ... trace --> RHS

where the RHS is any sequence of terminals, nonterminals, and tests, as in DCGs. The left side of an XG rule need not be a single nonterminal, but can be a nonterminal followed by '...' and by any finite sequence of terminals or nonterminals. The last example can be read, roughly, as saying that nt can be expanded to RHS on condition that the category "trace" is given an empty realization later in the parse. We realize nt as RHS and put trace on the extraposition list. This allows for a very natural treatment of certain filler-gap constructions. For example, Pereira points out that relative clauses can, at first blush, be handled with rules like the following:

    np --> det , n.
    np --> det , n , relative.
    np --> trace.
    relative --> rel_marker , s.
    rel_marker ... trace --> rel_pro.
    rel_pro --> [who].

These rules come close to enforcing the regularity noted earlier: a relative clause has the structure of a relative pronoun followed by a sentence that is missing a noun phrase.
What these rules say is that we can expand the relative node to a rel_marker and sentence, and then expand the rel_marker to a relative pronoun on condition that some np that occurs after the relative pronoun be realized as a "trace" that is not realized at all in the terminal string. It is not hard to see that this set of rules does not quite enforce the noted regularity, though. These rules will allow the relative pronoun to be followed by a sentence that has no gap, so long as a gap can be placed somewhere after the relative pronoun. So, for example, these rules would accept a sentence like:

    * the woman [whoi the man reads the book] reads [t]i.

In this sentence, a gap cannot be found in the sentence [the man reads the book], but since the second occurrence of "reads" can be followed by an np, we can realize that np as the trace associated with the moved np "who". But this is clearly a mistake. To avoid this problem, Pereira suggests treating the extraposition list as a stack, and then "bracketing" relative clauses by putting an element on the stack at the beginning of the relative clause which must be popped off the top before the parsing of the relative can be successfully completed. This prevents filler-gap relationships that would hold between anything outside the relative clause and anything inside. The rest of this paper does not require a full understanding of Pereira's XGs and their implementation. The important points are the ones we have noted: the extraposition list is used to capture the filler-trace regularities in natural language; and it is used as a stack, so that putting dummy elements on top of the stack can prevent access to the list in inappropriate contexts.

IV. RESTRICTED LOGIC GRAMMARS (RLGs)

The XG rules for moved constituents are really very useful. The RLG formalism that will now be presented maintains this feature in a slightly restricted form.
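The stack discipline just reviewed can be made concrete. In this Python sketch (our illustration of the mechanism, not Pereira's implementation; all names are invented), a relative clause pushes a bracket marker, the relative pronoun pushes a trace, an np position may consume the topmost trace, and closing the clause must find its own bracket on top of the stack.

```python
# Extraposition list as a stack (list end = top of stack).
TRACE, BRACKET = "np_trace", "bracket"

def push(hold, item):
    return hold + [item]

def fill_gap(hold):
    # An np may be realized as a gap only when a trace is on top;
    # a bracket on top blocks traces pushed outside the clause.
    return hold[:-1] if hold and hold[-1] == TRACE else None

def close_clause(hold):
    # The clause's own bracket must be on top when the clause ends;
    # a leftover trace above it means no gap was found: parse fails.
    return hold[:-1] if hold and hold[-1] == BRACKET else None
```

Parsing the starred example above would push BRACKET then TRACE; since no gap occurs inside the bracketed clause, close_clause sees the trace still on top and fails, while the grammatical variant consumes the trace with fill_gap first.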
RLGs differ from XGs in three respects which can be considered more or less independently. First, RLGs allow a new kind of rule, which we will call "switch rules". Second, we will show how the power of the XG leftward movement rules can be expanded in one respect and restricted in another to accommodate a wider range of linguistic constructions. And finally, we show how a similar treatment allows constrained rightward movement.

A. Switch Rules

In the linguistic literature, the auxiliary verb system in English has been one of the most common examples of the shortcomings of context free grammars. The structure of the auxiliary is roughly described by (Akmajian et al., 1979) in the following way: "The facts to be accounted for can be stated quite simply: an English sentence can contain any combination of modal, perfective have, progressive be, and passive be, but when more than one of these is present, they must appear in the order given, and each of the elements of the sequence can appear at most once." The difficult thing to account for elegantly in a context free definition is that the first in a sequence of verbs can occur before the subject. So for example, we have:

    I have been successful.
    Have I been successful?

This is a rather peculiar phenomenon: it is as if the well defined sequences of auxiliaries can "wrap" themselves around the (arbitrarily long) subject np of the sentence. Most parsers have special rules to try to exploit the regularity between simple declarative sentences and their corresponding question forms. (Marcus, 1980) and (Berwick, 1982), for example, use a "switch" rule which, when an auxiliary followed by a noun phrase is detected at the beginning of a sentence, attaches the noun phrase to the parse tree first, leaving the auxiliary in its "unwrapped", canonical position, so that it can be parsed with the same rules as are used for parsing the declarative forms.
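The effect of such a switch rule can be sketched as follows. This Python fragment is our illustration of the idea only (the toy lexicon and all function names are invented), not the Prolog implementation.

```python
# Sketch of a "switch" rule: when an auxiliary precedes the subject np,
# parse the np first and splice the auxiliary back into canonical
# position, so declarative rules can parse the inverted form too.

AUX = {"have", "has", "is", "was", "can", "will", "do", "does"}

def parse_np(words):
    # Toy np recognizer: a determiner plus noun, or a pronoun.
    if not words:
        return None, words
    if words[0] in {"the", "a"} and len(words) > 1:
        return words[:2], words[2:]
    if words[0] in {"i", "you", "mary"}:
        return words[:1], words[1:]
    return None, words

def apply_switch(words):
    """If the sentence starts aux + np, return the words with the np
    moved in front of the auxiliary; otherwise return them unchanged."""
    if words and words[0] in AUX:
        aux, rest = words[0], words[1:]
        np, remainder = parse_np(rest)
        if np is not None:
            return np + [aux] + remainder
    return list(words)
```

For example, apply_switch(["have", "i", "been", "successful"]) yields the canonical order ["i", "have", "been", "successful"], which declarative rules can then parse directly.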
It turns out to be possible to implement a rule very much like Marcus's in logic programming systems. When an auxiliary is found at the beginning of a sentence, its parsing is postponed while an attempt is made to parse an np immediately following it. When that np is parsed it is just removed from the list of words left to parse, leaving the auxiliary verb sequence in its canonical form. We use a notation like the following:

    s --> switch(aux_verb , np) , vp.

The predicate "switch" triggers the special behavior. These switch rules can be implemented very easily and efficiently in prolog (Stabler, 1986ms, 1983). To account properly for the placement of negation, etc. requires some complication in the rules, but this kind of rule with its simple "look ahead" is exactly what is needed.

B. Leftward Movement

When introducing the XG rules above, we considered some rules for relative clauses but not rules for fronted wh-phrases like the one in "In which garage did you put the car?" or the one in "Which car did you put in the garage?". The most natural rules for these constructions would look something like the following:

    s --> wh_phrase , s.
    wh_phrase ... pp_trace(wh_feature) --> pp(wh_feature).
    wh_phrase ... np_trace(wh_feature,Case,Agreement) --> np(wh_feature,Case,Agreement).
    pp --> pp_trace(wh_feature).
    np(Case,Agreement) --> np_trace(wh_feature,Case,Agreement).

If we assume that these rules are included in the grammar along with the XG rules for relative clauses discussed above, then we properly exclude any possibility of finding the gapped wh-phrase inside a relative clause:

    * What car did the man [who put [np_trace] in the garage] go?
    * In which garage did the man [who put the car [pp_trace]] go?

These sentences are properly ruled out by Pereira's "bracketing" constraint. There are other restrictions on filler-gap relations, though, that are not captured by the bracketing constraint on relative clauses.
The following sentences, for example, would be allowed by rules like the ones proposed above:

    * About what did they burn [the politician's book [pp_trace]]?
    * Who did I wonder whether she was [np_trace]?

These filler-gap relations are unacceptable. How can this filler-gap relation be blocked? We cannot just use another bracketing constraint to disallow filler-gap relations that cross vp boundaries, because that would disallow lots of good sentences like "What did they burn?". There is a very powerful and elegant set of constraints on filler-gap relations which covers all of these cases and more: they are specified by Chomsky's (Chomsky, 1981) theories of coreference ("binding") and movement ("bounding"). The relevant principles can be formulated in the following way:

(i) A moved constituent must c-command its trace, where a node α c-commands β if and only if α does not dominate β, but the first branching node that dominates α dominates β.

(ii) No rule can relate a constituent X to constituents Y or Z in a structure of the form:

    Y ... [α ... [β ... X ... ] ... ] ... Z ...

where α and β are "bounding nodes." (In English, the bounding nodes for leftward movement are s and np.)

The first rule, the c-command constraint, by itself rules out sentences like the following:

    * The computer [which you wrote the program] uses [np_trace].
    * I saw the man who you knew him and I told [np_trace].

since the first branching node that dominates "who" and "which" in these cases is (on any of the prominent approaches to syntax) a node that does not dominate anything after the "him". The second rule, called subjacency, rules out sentences like

    * Who [s did [np the man with [np_trace]] like]?
    * About what [s did they burn [np my book [pp_trace]]]?
In the second of these sentences, notice that the pp-trace is inside the np, so that we are not asking about the “burning”, but about the content of the book! This is properly ruled out by subjacency. There is one additional complication that needs to be added to these constraints in order to allow sentences like: Who [s do you think [s I said [s I read [*p-trace1 11 I ? Who [s does Mary think [s you think [s I said [s I read [np-trace]]]]]? These “movements” of wh-phrases are allowed in Chomskian syntax by assuming that wh-phrase movements are “successive cyclic”: that is, the movement to the front of the sentence is composed of a number of smaller movements across one s-node into its “camp” node. The implementation of RLG movement rules is quite natural. The trick is just to restrict the access to the extraposition list so that only the gaps allowed by Chomsky’s constraints will be allowed by the parser. The c-command restriction can be enforced by indicating the introduction of a gap at the first branching node that dominates the moved constituent, and making sure that the gap is found before the parsing of the dominating node is complete. So, for example, we replace the following three XG rules with two indicated RLG rules: (XG rules) relative --> rel-marker , s. rel marker...np trace --> rel-pro. rel-pro --> [wh;]. - (RLG rules) relative <<< *p-trace --> rel-pro , S. rel-pro --> [who]. The change from I’...” to “CCC” is made to distinguish this approach to constituents which are moved to the left (leaving a trace to the right) from RLG rules for rightward movement. The XG’s additional (linguistically unmotivated) category “rel-marker” is not needed in the RLG because the trace is introduced to the extraposition list afferthe first category has been parsed. 
So the translation of these RLG rules is similar to the translation of XG rules, except that rel_pro's are not treated as gappable nodes, the traces are indexed, and a test is added to make sure that the trace that is introduced to the extraposition list is gone when the last constituent of the relative has been parsed (see Stabler, 1986ms for implementation details). Subjacency can be enforced by adding an indication of every bounding node that is crossed to the extraposition list, and then changing the access to the extraposition list. Once this is done, it is clear that we cannot just use the extraposition list as a stack: we have introduced the indications of bounding nodes, and we have indexed the traces. The presence of the bounding node markers allows us to implement subjacency with the rule that a trace cannot be removed from a list if it is covered by more than one bounding marker, unless the trace is of a wh-phrase and there is no more than one covering bound that has no available comp argument. So, to put the matter roughly, access to the RLG extraposition list is more restrictive than access to the XG's in that the c-command and subjacency constraints are enforced. These restrictions allow a considerable simplification in the grammar rules. Note that the XG rules that were shown as examples are comparable in complexity to the RLG rules shown, but the XG rules were incorrect in the crucial respects that were pointed out! The XG rules shown allowed ungrammatical sentences (viz., violations of the subjacency and c-command constraints) that the RLG rules properly rejected. The XG rules that properly rule out these cases would be considerably more complex.

C. Rightward Movement

Although the preceding account does successfully enforce subjacency for leftward movement, no provisions have been made for any special treatment of rightward moved constituents, as in sentences like the following:

    [The man [t]i] arrived [who I told you about]i.
    * The woman [who likes [the man [t]i]] arrived [who I told you about]i.

It is worth pointing out just briefly how these can be accommodated with techniques similar to those already introduced. There are a number of ways to deal with these constructions: (i) The standard top-down left-to-right strategy of "guessing" whether there is a rightward moved constituent would obviously be expensive. Backtracking all the way to wherever the incorrect guess was made is an expensive process, since a whole sentence with arbitrarily many words may intervene between the incorrect guess and the point where the error causes a failure. (ii) One strategy for avoiding unnecessary backtracking is to use lookahead, but obviously, the lookahead cannot be bounded by any particular number of words in this case. More sophisticated lookahead (bounded to a certain number of linguistically motivated constituents) can be used (cf., Berwick, 1983), but this approach requires a complicated buffering and parse-building strategy. (iii) A third approach would involve special backward modification of the parse tree, but this is inelegant and computationally expensive. (iv) A fourth approach is to leave the parse tree to the left unspecified, passing a variable to the right. This last strategy can be implemented quite elegantly and feasibly, and it allows for easy enforcement of subjacency. This is the approach we have taken. To handle optional rightward movement (extraposition from np), we use rules like the following:

    s --> np , vp , adjunct.
    optional_rel --> rel.
    optional_rel >>> ((adjunct --> rel) ; Tree).

In these rules, "Tree" is the variable that gets passed to the right. The last rule can be read informally as saying that optional_rel has the structure Tree, where the content of Tree will be empty unless an "adjunct" category is expanded to a rel, in which case Tree can be instantiated to a trace that can be coindexed with rel.
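Strategy (iv) can be sketched with a one-slot cell standing in for the uninstantiated Prolog variable "Tree". This is a Python illustration under invented names, not the actual implementation (for which see the Stabler 1986ms manuscript).

```python
# A stand-in for a Prolog logic variable: unbound until bound once.
class Cell:
    def __init__(self):
        self.bound, self.value = False, None

    def bind(self, value):
        if self.bound:
            raise ValueError("variable already bound")
        self.bound, self.value = True, value

def parse_optional_rel():
    # optional_rel contributes an unspecified subtree: the cell is
    # passed rightward instead of guessing now whether a rel exists.
    tree = Cell()
    np_tree = ("np", ("det", "the"), ("n", "man"), tree)
    return np_tree, tree

def parse_adjunct(words, tree):
    # At the adjunct position a trailing relative clause, if present,
    # instantiates the carried cell to a coindexed trace.
    if words and words[0] == "who":
        tree.bind(("trace", 1))
        return ("rel", 1, words)
    tree.bind(None)  # no rightward-moved constituent after all
    return None
```

The point of the cell is that no backtracking over the intervening material is ever needed: the left-hand parse tree is complete except for one slot, which the adjunct either fills with a coindexed trace or leaves empty.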
The situation here is more complicated than the situation in leftward movement. In rightward movement, following (Baltin, 1981), we provide a special node for attachment, the "adjunct" node. This violation of the "structure preserving constraint" has been well motivated by linguistic considerations. The adjunct node is a node that can do nothing but capture rightward moved pp's or relative clauses.*

A second respect in which rightward movement is more complicated to handle than leftward movement is in the enforcement of subjacency. Since in a left-to-right parse, rightward movement proceeds from an embedded gap position to the moved constituent, we must remove boundary indicators across the element in the extraposition list that indicates a possible rightward movement. So to enforce subjacency, we cannot count boundary indicators between the element and the top; rather we must count the boundary indicators that are removed across the element. Subjacency can be enforced only if the element of the extraposition list that carries "Tree" to the right can also mark whether a bounding category has been passed (i.e., when the parse of a bounding category has been completed). Again, the elaboration of the definition of "virtual" required to implement these ideas is fairly easy to supply (see Stabler, 1986ms for implementation details).

V. CONCLUSIONS AND FUTURE WORK

Even grammar notations with unlimited expressive power can lack a graceful way to define certain linguistic structures, and they can define structures that never occur in human languages. DCGs have universal power, but XGs immediately offer a facility for elegant characterization of the movement constructions common in natural languages. RLGs are one more step in this direction toward a notation for logic grammars that is really appropriate for natural languages. RLGs provide "switch rule" notation to allow for elegant characterization of "inverted" or "wrapped" structures, and a notation for properly constrained movements that defines filler-gap relations for both rightward and leftward movement, even when those relations are not properly nested. Getting these results in an XG would be considerably more awkward, but our approach has shown how a careful handling of the "extraposition list" allows easy enforcement of movement constraints.** A fairly substantial RLG grammar for English has been constructed. It runs efficiently, but the real argument for RLGs is that their rules for movement are much simpler than would be possible if constraints on movement were not automatically enforced.

ACKNOWLEDGMENTS

I am indebted to Janet Dean Fodor, Fernando Pereira and Yuriy Tarnawsky for helpful discussions. (Stabler, 1986ms) provides a more complete discussion of this material, including implementation details as well as more theoretical discussion.

REFERENCES

[1] Akmajian, A., S. Steele, and T. Wasow. "The Category AUX in Universal Grammar." Linguistic Inquiry, 10 (1979) 1-64.
[2] Baltin, M.R. "Strict Bounding." In C.L. Baker and J.J. McCarthy, eds., The Logical Problem of Language Acquisition. MIT Press (1981).
[3] Berwick, R.C. Locality Principles and the Acquisition of Syntactic Knowledge. Ph.D. Dissertation, MIT Department of Computer Science and Electrical Engineering (1982).
[4] Berwick, R.C. "A Deterministic Parser with Broad Coverage." In Proc. 8th IJCAI, 1983.
[5] Berwick, R.C. and Weinberg, A.S. "Deterministic Parsing and Linguistic Explanation." 1985ms, forthcoming.
[6] Chomsky, N. Lectures on Government and Binding. Foris Publications, Dordrecht, Holland, 1981.
[7] Colmerauer, A. "Metamorphosis Grammars." In L. Bolc, ed., Natural Language Communication with Computers. Springer-Verlag (1978).
[8] Dahl, V. "More on Gapping Grammars." In Proc. of the Int. Conf. on Fifth Generation Computer Systems. Tokyo, Japan, 1984.
[9] Marcus, M. A Theory of Syntactic Recognition for Natural Language. MIT Press, Cambridge, MA (1980).
[10] Pereira, F. "Extraposition Grammars." American Journal of Computational Linguistics, 7 (1981) 243-256.
[11] Pereira, F. "Logic for Natural Language Analysis." Technical Note 275, SRI International, Menlo Park, California, 1983.
[12] Pereira, F. and Warren, D.H.D. "Definite Clause Grammars for Natural Language Analysis." Artificial Intelligence 13 (1980) 231-278.
[13] Stabler, E.P., Jr. "Deterministic and bottom-up parsing in prolog." In Proc. of the National Conference on AI, AAAI-83, 1983.
[14] Stabler, E.P., Jr. "Restricting Logic Grammars with Government-Binding Theory." Unpublished manuscript, submitted to Computational Linguistics (1986ms).
[15] Woods, W.A. "Transition Network Grammars for Natural Language Analysis." Communications of the ACM 13 (1970) 591-606.

* These rules for rightward movement are oversimplified. Most linguists follow (Baltin, 1981) and others in assuming that phrases extraposed from inside a VP are attached inside of that VP, whereas phrases extraposed from subject position are attached at the end of the sentence (in the position we have marked "adjunct"). (Baltin, 1981) points out that this special constraint on rightward movement seems to hold in other languages as well, and that we can capture it by counting VP as a bounding category for rightward movement. This approach could easily be managed in the framework we have set up here, though we do not currently have it implemented.

** The MGs of (Colmerauer, 1978), the GGs of (Dahl, 1984) and other systems are very powerful, and they sometimes allow fairly elegant rules for natural language constructions, but they are not designed to automatically enforce constraints: that burden is left to the grammar writer, and it is not a trivial burden.
A PARSER FOR PORTABLE NL INTERFACES USING GRAPH-UNIFICATION-BASED GRAMMARS

Kent Wittenburg
Microelectronics and Computer Technology Corporation

Abstract

This paper presents the reasoning behind the selection and design of a parser for the Lingo project on natural language interfaces at MCC. The major factors in the selection of the parsing algorithm were the choices of having a syntactically based grammar, using a graph-unification-based representation language, using Combinatory Categorial Grammars, and adopting a one-to-many mapping from syntactic bracketings to semantic representations in certain cases. The algorithm chosen is a variant of chart parsing that uses a best-first control structure managed on an agenda. It offers flexibility for these natural language processing applications by allowing for best-first tuning of parsing for particular grammars in particular domains while at the same time allowing exhaustive enumeration of the search space during grammar development. Efficiency advantages of this choice for graph-unification-based representation languages are outlined, as well as a number of other advantages that accrue to this approach by virtue of its use of an agenda as a control structure. We also mention two useful refinements to the basic best-first chart parsing algorithm that have been implemented in the Lingo project.

1. Introduction

In designing a portable natural language (NL) interface, one of the first crucial decisions is whether to require of the grammar used in the system that it be syntactically or semantically based. Existing NL interface systems can be classified along these lines: there are those that use a syntactically based, general grammar of English, e.g., TEAM (Martin, Appelt, and Pereira 1983), versus those that use a semantically based grammar particular to the domain, e.g., Plume (Hayes, Andersen, and Safier 1985).
The semantically based grammars offer customization of the entire system for the domain; robustness of parsing along domain sensitive lines is an advantage usually cited. However, they generally suffer from patchiness of syntactic coverage and the grammar must be rewritten from scratch, in general, for each new domain. The syntactically based grammars, on the other hand, offer the advantage of being able to avoid rehacking the grammar for each new domain and achieving a greater level of generality and sophistication in the syntactic variations of the input. Robustness of parsing is a component in the syntactically-based systems which, along with semantics generally, must be attended to separately; though robustness is obviously not precluded by this initial design choice, it does not come for free since it is not entwined with the grammar itself.

One of the first decisions in the MCC Lingo project on natural language interfaces was to go with a general, syntactically-based grammar. It was felt that, although semantically-based grammars may offer advantages in the short run for relatively unsophisticated systems in highly constrained domains, syntactically based grammars offer a modularity in design that will achieve greater payoffs in the long run. Not only does the modularity enhance transportability to new applications, but also, it makes possible the greater sophistication required for interfaces to knowledge-based applications of the future.

Given this first design choice, the next step was to choose a formalism for grammar representation and an approach to the grammar itself. The representation language chosen was a graph-unification-based formalism, in particular, one based on Karttunen (1984) and Shieber (1984). Graph-unification-based representation languages have had a tremendous impact in the field of computational linguistics, and in fact have been incorporated in one form or another into a number of influential linguistic theories (see Shieber 1986 for discussion).

Among the many advantages of a graph-unification-based formalism are (i) it is easy to use, requiring no special training for grammar writers with linguistics backgrounds; (ii) it is a language separable from any particular machine-dependent implementation and thus amenable to optimizations at many levels; (iii) it avoids the typical explosion of ad hoc procedural operations in the grammar, being a purely declarative language; (iv) it is very flexible, accommodating a variety of grammatical theories; and (v) it is order free, which among other things means that the same grammar rules can be used to generate as well as to analyze.

Our choice for an approach to the grammar was Combinatory Categorial Grammar (Ades and Steedman 1982, Steedman 1985). Though untested in natural language applications to date, we felt this approach to grammar was particularly promising in the following respects: (i) it handles English extraction phenomena (wh-questions, relative clauses, etc.) efficiently and elegantly without resorting to empty rewrite rules or complicated feature passing schemes; (ii) it accounts for more of English coordination than alternative approaches without special rules or ad hoc operations; (iii) it offers a method of accounting for free word order and partially free word order with a simple and easily extendable formalism; (iv) it suggests natural techniques for dealing with lexical ambiguity and heuristic rule preferencing; (v) it is particularly suitable for left-to-right, incremental processing designs; and (vi) it is able to accomplish all this with a very small rule base, relative to the alternatives.

Given the three design decisions just mentioned, we now come to the subject of this paper, namely, the selection and design of a parser for NL interface applications using the grammars and the representation language we have just mentioned.

From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved.

2. Charts

First let us ask what we should expect from the parser independently of the choices related to the grammar. The following desiderata should speak for themselves:

- A formally sound basis for the algorithm in order to ensure termination, completeness, and tractability.
- Modes for grammar development that maximize debugging facilities and inspection of parsing steps.
- The ability to tune particular grammars in particular domains such that prototypes for efficient applied systems can be developed.
- The potential to integrate semantics and contextual factors into preferencing factors for the purposes of tuning.
- A design that in principle allows for incorporating credit assignment schemes to automate adaptation to individual performance situations.
- A design that does not preclude future adaptation to parallel processing schemes.

For maximum flexibility and formal soundness, the most obvious place to begin in constructing a parser is with some variant of chart parsing (Kay 1980). The many advantages of charts have been extolled by Kay and others and will not be repeated here. One of the most persuasive pieces of evidence in favor of charts is their widespread adoption. Charts, or something very similar, have figured prominently in important theoretical work on parsing from the computer science perspective, e.g., Earley (1970), in natural language research, e.g., Kaplan (1973), Bear and Karttunen (1979), Thompson (1981), Ford, Bresnan, and Kaplan (1982), Martin, Church, and Patil (1981), Shieber (1985), and in applied systems, e.g., Slocum (1981).
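As a purely illustrative rendering of the data structures just discussed (the class and field names below are our own assumptions, not Lingo's actual implementation), a chart edge can be modeled as a span of the input plus a grammatical category, and the chart as the growing set of such edges, indexed for adjacency lookups:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Edge:
    """A completed chart edge: a category spanning input positions [start, end)."""
    start: int
    end: int
    category: str

class Chart:
    """The set of edges found so far, indexed by start position."""
    def __init__(self, n_positions):
        self.edges = set()
        self.by_start = {i: [] for i in range(n_positions + 1)}

    def add(self, edge):
        if edge in self.edges:      # duplicate edges are merged for free
            return False
        self.edges.add(edge)
        self.by_start[edge.start].append(edge)
        return True
```

The set membership test is what gives the "merging of nodes" effect noted below: a second attempt to add an identical edge is simply rejected.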
However, despite the popularity of chart parsing in the literature, there has been relatively little attention in the more theoretical quarters to the role of agendas in chart parsing. For whatever reason, there seems to be an association of chart parsing in most published work with exhaustive breadth-first control designs, despite the fact that Kay (1980) and also Kaplan (1973) have written of the possibility of using an agenda to control the enumeration order in maximally flexible ways. Research into agenda-driven chart parsers, on the other hand, seems to have been mainly driven by psycholinguistic concerns (e.g., Ford, Bresnan, and Kaplan 1982), not specifically by the need to develop efficient applied systems. However, for the needs of NL interface development, an agenda is in fact the key ingredient. The advantages of having an agenda as a control mechanism for a parser go well beyond matters of increased efficiency. Retaining maximum flexibility for later adjustments to a parser without having to scrap or radically alter existing code or existing grammars is a major advantage. We will see an example shortly of the relative ease of adding meta-level operations to the agenda structure. Other possible uses of an agenda are (i) as a means of integrating semantics and contextual factors into the scoring; (ii) as a framework for credit assignment schemes to automate adaptation to individual performance situations; and (iii) as the control mechanism for parallel processing of parsing steps, a natural use of agendas pointed out by Kay (1980), among others. And, to complete our list of criteria, the presence of an inspectable control structure makes it possible to develop sophisticated grammar development tools so that the state of a parse can be examined at any point.
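The agenda itself can be nothing more than a priority queue of proposed actions. In the sketch below (our own minimal rendering, not Lingo's code), a low-scored action is simply left on the queue until nothing better remains, which is exactly the "suspend processing on less promising paths, resume later" behavior described above:

```python
import heapq

class Agenda:
    """Priority-ordered parsing actions; higher score means more promising."""
    def __init__(self):
        self._heap = []
        self._counter = 0   # tie-breaker: keeps ordering deterministic

    def push(self, score, action):
        # heapq is a min-heap, so negate the score for best-first popping.
        heapq.heappush(self._heap, (-score, self._counter, action))
        self._counter += 1

    def pop(self):
        return heapq.heappop(self._heap)[2]

    def __bool__(self):
        return bool(self._heap)
```

Because items can be pushed for any region of the chart at any time, left-right, right-left, and middle-out control regimes all fall out of the scoring alone.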
Another point worth making about chart parsing with agendas is its relation to questions of whether the control proceeds left-right, right-left, or some version of middle-out in processing natural language input. Enumeration in this respect is completely controlled by the heuristic ordering on the agenda, and any of these designs can be achieved as long as appropriate heuristics and agenda operations can be designed. In fact, an agenda can be designed to allow any mix of these orders; attention can be directed to any arbitrary area of the chart at any time, a property which produces the effect of being able to suspend processing on less promising paths with the possibility of resuming such processing later if necessary. Thus it is hard to imagine a more general algorithm with respect to control decisions.

2.1. Exhaustive vs. Partial Enumeration

While many chart-parsing algorithms use control schemes that exhaustively enumerate the search space, there are a number of reasons why exhaustive enumeration is unsuitable for NL interface applications given the choices mentioned. The most obvious reason is that syntactically based grammars of extensive coverage admit staggering amounts of syntactic ambiguity for certain kinds of constructions. For example, working with a corpus collected through a Wizard-of-Oz experiment, Martin, Church, and Patil (1981) found 958 parses for the sentence "In as much as allocating costs is a tough job I would like to have the total costs related to each product." It seems safe to say that no matter what optimizations are added to the parser, and Martin, Church, and Patil added many, exhaustive enumeration in the face of such data will probably not lead to satisfactory performance for a natural language interface application. Our choices relating to the representation language and the grammar approach add further weight to this argument against exhaustive enumeration.
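The combinatorics behind numbers like 958 are easy to reproduce: the count of distinct binary bracketings alone of a string of n+1 words is the nth Catalan number (a standard fact, not from the paper), which grows exponentially before lexical ambiguity is even considered:

```python
from math import comb

def catalan(n):
    """Number of distinct binary bracketings of a string of n+1 tokens."""
    return comb(2 * n, n) // (n + 1)

# With no pruning at all, a 15-word sentence already admits
# catalan(14) = 2,674,440 purely structural binary analyses.
```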
It is a commonly acknowledged fact that unification of graphs tends to be expensive computationally, mostly because unification is destructive of the graphs on which it is called (see, e.g., Karttunen 1984). The expense is particularly evident in chart parsing because, by definition, one needs to retain edge graphs in their original state before unification as well as create new edge graphs that reflect the result of unifying the originals. Optimizations of the unification algorithm itself have been suggested as a means for coping with this problem (Pereira 1985; Karttunen and Kay 1985). While such optimizations are obviously welcome, another place to cut costs is to minimize the calls to unification in the first place.* Avoiding complete enumeration is a first step in this line.

Certain properties of Combinatory Categorial Grammars also suggest that exhaustive enumeration in parsing is inappropriate. As is detailed in Wittenburg (1985), a consequence of the mechanism for handling extraction and certain kinds of coordination in these grammars is the potential for ambiguous interpretations that have identical content. Another way to say this is that there are multiple paths through the search space of rule firings which lead to the identical solution. The obvious search method to use in such a domain avoids enumeration of all paths, since there is no obvious reason to do so. This general plan has been embraced in the theoretical linguistics literature dealing with the formal devices utilized in Combinatory Categorial Grammar.

* It is relevant to report, based on unpublished work by David Wroblewski, that unification using structure sharing may not yield the expected dramatic improvements in parsing performance. In fact, unification with structure sharing has been (at least temporarily) shelved in the Lingo project.
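To see why copying dominates the cost, consider a toy feature-structure unifier over plain dicts (our own simplification: it ignores reentrancy and structure sharing entirely, so it stands in only for the cost profile, not for Lingo's formalism). The full `unify` must deep-copy both inputs so the originals survive on the chart, whereas a cheap nondestructive compatibility check does not:

```python
import copy

def compatible(g1, g2):
    """Cheap, nondestructive top-level check: do any atomic features clash?"""
    for feat, v1 in g1.items():
        v2 = g2.get(feat)
        if v2 is not None and not isinstance(v1, dict) and v1 != v2:
            return False
    return True

def unify(g1, g2):
    """Full unification; copies both graphs first so the originals survive."""
    result = copy.deepcopy(g1)
    for feat, v2 in copy.deepcopy(g2).items():
        v1 = result.get(feat)
        if v1 is None:
            result[feat] = v2
        elif isinstance(v1, dict) and isinstance(v2, dict):
            sub = unify(v1, v2)
            if sub is None:
                return None
            result[feat] = sub
        elif v1 != v2:
            return None     # atomic clash: unification fails
    return result
```

The check-then-unify split this suggests is exactly the design adopted in section 3.2 below.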
It has been suggested that one should use heuristic strategies in processing with these grammars in order to control which grammar rules are most appropriate to fire in a given context; as long as one proceeds forward to a successful parse, one has no reason to fire all the rules one can.

There is one further motivation for avoiding exhaustive enumeration that should be mentioned. Let us assume that certain individual syntactic analyses admitted by the grammar are designed so as to be ambiguous with respect to semantic interpretation. This general idea has previously surfaced in work by Church (1980), Pereira (1983), and Marcus, Hindle, and Fleck (1983). In Wittenburg (1986a) various arguments are advanced for using a one-to-many mapping from syntactic bracketings to semantic interpretations in the case of extraposition from noun phrases. Rich et al. (1986) give an account of how such ideas can be extended to a variety of syntactic constructions and to the semantic representation itself. Given this picture, we again have a case of a search domain in which it is inappropriate to generate all paths. For if we are able to reach a semantic interpretation through more than one path, then we have no reason to continue generating all paths once we have reached the first one. The reason we can reach the same solution in these cases is different than the reason mentioned above. We are assuming that within the set of interpretations assigned to some syntactic analysis reached by one path, there may be one or more of those interpretations that are reachable by a different path. An example would be a treatment of prepositional phrase attachment in which, say, "high" attachment would yield the full set of attachments from the semantic point of view while "low" attachment would yield only one semantic interpretation that happens to be a member of the first set.
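One way to cash out that argument (a sketch of ours, not a mechanism the paper specifies) is to track the set of semantic interpretations reached so far and discard any analysis that contributes nothing new, exactly as in the prepositional-phrase example where "low" attachment yields a member of the set "high" attachment already produced:

```python
def novel_analyses(analyses, interpretations_of):
    """Yield only those analyses that contribute at least one semantic
    interpretation not already reached via an earlier analysis."""
    seen = set()
    for analysis in analyses:
        new = set(interpretations_of(analysis)) - seen
        if new:
            seen |= new
            yield analysis, new
```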
Despite all these arguments against using exhaustive enumeration in an application mode, there are arguments for using exhaustive enumeration in a grammar development mode. Assuming that there will be times when parsing fails in an application (whether "hard" or "soft"), circumstances will probably arise in which all the search space will have to be explored. It is very important to have anticipated such an event during grammar development so that undesirable rule interactions can be eliminated. Also it is important to take note of the performance characteristics of the parser under such circumstances. Thus a parser that can be toggled between exhaustive and minimal enumeration, as well as arbitrary points in between, is the best we can ask for. Agenda controlled chart parsing offers this sort of flexibility.

2.2. Enumeration Order

The Earley algorithm (Earley 1970), along with other breadth-first ordering schemes associated with chart parsing, is designed for exhaustive enumeration of the grammar with respect to some string during parsing. Although such a control mechanism by itself is ill-suited for anything but exhaustive enumeration, there have been some efforts at partitioning grammars in such a way as to get partial enumeration with basically breadth-first ordering (Slocum 1984). Given our additional motivation for keeping rule firings to a minimum, however, even restricted breadth-first search is less desirable than some of the alternatives. We should distinguish, then, what is about to be proposed from other suggestions in the literature for heuristically ordering a collection of parses achieved with some form of breadth-first enumeration (e.g., Robinson 1982, Heidorn 1982, Slocum 1984). What is of maximum benefit for our purposes is a design that could return just the best parse, first of all, but that could continue to enumerate other parses if subsequent semantic processing deems it necessary.
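A lazy generator gives precisely that toggle in miniature (an illustration of the interface, not of the Lingo implementation): the caller decides whether to take only the best analysis (application mode) or to keep pulling (development mode), with any stopping point in between available for free:

```python
import heapq

def best_first_solutions(scored_candidates):
    """Yield candidate analyses from best to worst. Stopping after the first
    yield gives application-mode behavior; exhausting the generator gives the
    exhaustive behavior wanted during grammar development."""
    heap = [(-score, i, cand)
            for i, (score, cand) in enumerate(scored_candidates)]
    heapq.heapify(heap)
    while heap:
        yield heapq.heappop(heap)[2]

parses = best_first_solutions([(0.2, "low attachment"),
                               (0.9, "high attachment")])
best = next(parses)   # application mode: take the single best analysis
rest = list(parses)   # development mode: enumerate the remainder
```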
Among the alternatives to breadth-first order is a depth-first, backtracking design, but since a strength of charts in general is that they do not require backtracking, a more attractive choice is a so-called "best-first" control scheme. Best-first search is of course a well-known paradigm in the A.I. literature. A number of interesting observations can be made about chart parsing from the perspective of heuristic graph search.

- The search can be represented as a standard OR graph (as opposed to an AND/OR graph).
- The system is commutative in the sense of Nilsson (1980); thus the control scheme can be irrevocable, i.e., it need not involve backtracking.
- The chart itself as database has, under certain conditions, the implicit effect of merging nodes in the search graph appropriately; thus, there may be no need to check whether newly added nodes have already been generated in the search graph.
- The relative lengths of alternate paths through the search graph to the solution are of little importance; thus admissibility of the search algorithm is not a strong factor.
- On the other hand, the optimality of the algorithm, i.e., a measure of the total number of nodes in the complete search graph that are expanded, will have a strong bearing on efficiency.
- The set of edges appearing on a chart can be looked at as the closed nodes in a best-first search algorithm; the other data structure we need is one to keep track of open nodes, which can be viewed as an agenda of possible actions to take.**

These characteristics of chart parsing, then, allow a simplification of the general graph-searching procedure presented in Nilsson (1980). We now turn to an overview of the algorithm itself.

3. The Best-First Algorithm

My purpose here is not to present the best-first chart parsing algorithm in depth. Readers may consult the original sources or Wittenburg (1986b) for detail in this respect.
What I wish to do is highlight certain features of the algorithm that help achieve the goals outlined above. The parser for this project has been dubbed Astro, which is a reflection of the importance of the A* search algorithm (Hart, Nilsson, and Raphael 1968) in its design.

The algorithm we use is less general than previously published chart parsing algorithms in two respects. We assume a grammar whose rules have only binary or unary daughters. Such a simplification is a consequence of the grammars being used in the MCC Lingo project, and not an inherent restriction due to chart parsing. Most published chart parsing algorithms generalize to grammars that have rules with right hand sides of arbitrary length by using the dotted rule technique introduced by Earley (1970). However, there is no motivation for complicating the algorithm in this way when using Combinatory Categorial Grammars; the effect of dotted rules is in fact already achieved in the categories of such grammars. The only binary rules that are needed are a small, fixed number of very general combination rules, operating over these complex categories. The remainder of the grammar consists of unary rules, which have the effect of altering the complex categories in some way. The second point about this chart parsing algorithm with respect to previously published ones is that it does not permit formal operations such as transformations or register setting as was done by Kaplan (1973), nor is it designed for backtracking.

** Intuitively, closed nodes in a heuristic search algorithm are options that have been exercised; open nodes are options which have been generated during the search but which have not yet been exercised. The observation that chart edges can be viewed as closed nodes must be understood in light of the fact that in the algorithm discussed here, an edge is placed on the chart concurrent with the expansion of all successors of that edge on the agenda.
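To make concrete why a small, fixed number of binary combination rules suffices, here is a toy encoding of categorial categories as nested tuples with forward and backward application (our own encoding for illustration; real Combinatory Categorial Grammars also include rules such as composition and type raising):

```python
# A category is either an atom like "S" or a triple (result, slash, argument):
# ("S", "\\", "NP") is a verb phrase; (("S", "\\", "NP"), "/", "NP") is a
# transitive verb. The category itself encodes what is still missing, which
# is how the effect of Earley's dotted rules is achieved without them.

def forward_apply(left, right):
    """X/Y combined with Y yields X."""
    if isinstance(left, tuple) and left[1] == "/" and left[2] == right:
        return left[0]
    return None

def backward_apply(left, right):
    """Y combined with X\\Y yields X."""
    if isinstance(right, tuple) and right[1] == "\\" and right[2] == left:
        return right[0]
    return None

tv = (("S", "\\", "NP"), "/", "NP")   # transitive verb
vp = forward_apply(tv, "NP")          # consumes the object, leaving S\NP
s = backward_apply("NP", vp)          # consumes the subject, leaving S
```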
In our experience, these simplifications allow more flexibility in some of the agenda maintenance aspects of the system.

3.1. The Main Loop

The following procedure suffices as a basis for the main loop; it follows the basic organization of Nilsson (1980:64).

1. Initialize *agenda*.
2. Initialize *chart*.
3. LOOP: if *agenda* is empty, exit with failure.
4. Select the best action from the *agenda*, remove it from the *agenda*, and apply the action. Set *best-edge* to the new edge added to the *chart*, if any; else, NIL.
5. If *best-edge* satisfies the terminating conditions, exit successfully.
6. If *best-edge* is non-nil, generate the set M of its successors. Install the members of M on the *agenda*, following a heuristic ordering scheme.
7. Go LOOP.

3.2. Generating Successors

The critical feature of this algorithm that achieves a major efficiency advantage for graph-unification-based formalisms can be found in the generate successors step, which appears in step 6 of the main loop, and also in chart initialization. We make a critical distinction between checking to see if a rule call is likely to succeed and actually applying a rule to a set of edges. The checking operation is a part of generating the successors of a new edge on the chart, leading to augmentations on the agenda only, while actually applying a rule call is done only when invoking the highest ranked action on the agenda. Given this distinction, we can make use of an optimized test function for checking edge graphs for rule application, leaving actual unification with its destructive effects to applying a rule call. Since the latter operation is invoked much less often than the generate successors step, there are major savings in unification costs. For graph unification-based grammars, at least three options present themselves for the test function. First is Shieber's notion of restricted unification (Shieber 1985).
Second, Karttunen (1986) has suggested a scheme where the destructive effects of unification can be undone. Last, a more "porous", but more efficient check-graphs function could be used that is nondestructive of its graph arguments. The last of these options is the one used in the Lingo project; it seems the best choice given our algorithm because there are no penalties, except a slight downgrade in performance, if rule calls generated in step 6 fail to apply successfully once they are chosen upon iteration in step 4.

3.3. The heuristic function

Critical to the success of this algorithm, as with all algorithms based on heuristic graph search, is the choice of a heuristic function to order the successors of a given node expansion. Developing the right set of heuristics is the product of intuition, trial, and error, and depends upon the particulars of the grammar and the domain. Here we mention some general factors that are a start for designing a heuristic function.

- An important factor in scoring any rule call is the span of the edge to be added to the chart if the rule fires successfully. This span can be computed from the spans of the daughter edges in the rule call. In general, longer spans should be favored.***
- The rules that are more likely to lead to success should be given higher intrinsic scores, and play a role in scoring a rule call.****
- The linguistic content in the edges involved in a rule call should also play a role. Differentiating among the scores of ambiguous lexical sense assignments is one method for designing heuristics that take advantage of the fact that some word senses will be statistically more likely than others in certain domains of discourse.

Naturally, such heuristics will tend to become much more refined and sophisticated through experience with particular grammars, lexicons, and corpora. Automated techniques for statistical information gathering and heuristic refinement are always a possibility.
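Pulling together the main loop of 3.1, the check-then-apply split of 3.2, and a span-based heuristic in the spirit of 3.3, the design can be sketched as runnable code. This is our own toy rendering: string categories stand in for feature graphs, edges are plain tuples `(start, end, category)`, and rule objects carry a cheap `check` alongside the expensive `apply`:

```python
import heapq
from itertools import count

def best_first_parse(words, lexicon, binary_rules, goal, score):
    """Steps 1-7 of the main loop. An agenda action is a proposed rule call
    (rule, left_edge, right_edge); edges are (start, end, category)."""
    chart, agenda, tick = set(), [], count()

    def propose(left, right):               # generate-successors: cheap check only
        for rule in binary_rules:
            if rule.check(left[2], right[2]):
                item = (rule, left, right)
                heapq.heappush(agenda, (-score(item), next(tick), item))

    def install(edge):
        if edge in chart:
            return None
        chart.add(edge)
        for other in list(chart):           # propose rule calls with adjacent edges
            if other[1] == edge[0]:
                propose(other, edge)
            if edge[1] == other[0]:
                propose(edge, other)
        return edge

    for i, w in enumerate(words):           # steps 1-2: chart initialization
        for cat in lexicon[w]:
            install((i, i + 1, cat))
    while agenda:                           # steps 3-7
        _, _, (rule, left, right) = heapq.heappop(agenda)
        cat = rule.apply(left[2], right[2]) # expensive step, done as late as possible
        if cat is None:
            continue                        # the porous check let a bad call through
        best_edge = install((left[0], right[1], cat))
        if (best_edge and best_edge[2] == goal
                and best_edge[0] == 0 and best_edge[1] == len(words)):
            return best_edge
    return None
```

Note that a failed `apply` costs only one wasted iteration, which is why the "porous" but nondestructive check function is an acceptable trade here.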
4. Refinements

We make mention here of two useful refinements to the basic chart parsing algorithm. The first introduces a technique for making environments for unary rules more restrictive, thus avoiding ultimately useless unary rule applications.***** Also we review a technique to partition the rules of the grammar that is used to optimize the generate successors step. The partitioning of the grammar in this way also allows for further heuristic control of the parser's actions.

4.1. Avoiding useless unary rule proposals

Many chart parsing algorithms developed for context-free grammars make use of relations in the grammar which can prune the total search space. Kay (1980) mentions collecting this sort of reachability information into tables which help control parsing actions. Calls to unary rule productions are an obvious candidate to attempt to prune from the search space for Combinatory Categorial Grammars. It is a general fact about unary rules that they tend to be overly promiscuous, i.e., typically, the right-hand-side of the rule is a poor measure by itself of the ultimate usefulness of a unary rule application. Even when a unary rule call can succeed in adding an additional edge to the chart, that edge often turns out to be unable to combine with edges on either side. Adding superfluous edges like this in chart parsing has the effect of making subsequent operations more expensive since this new edge must now be taken under consideration in all generate successors procedures involving any adjacent edges. What is called for then is the computation of some grammar relation that can be used to further restrict the conditions for applications of unary rules.

A relation that has been found to be useful in this regard we call the extended sister relation. It is defined as follows: node A is an extended sister of node B if and only if there exists some node α that is a sister of node β where α is a non-branching exhaustive dominator of A and β is a non-branching exhaustive dominator of B.

[Figure: derivation tree in which sisters P and Q non-branchingly dominate (shown as "=") the nodes B and A; all such B's are extended sisters of A.]

The extended sister relation is used in the following way. We precompute a grammar table that holds the extended sisters for the left-hand-sides of each unary rule. An additional condition for unary-rule-call successors of a new edge is now that some adjacent edge must match at least one of the extended sisters of the left-hand-side of that unary rule. As pointed out in Wittenburg (1986b), a consequence of this move is that the generate successors step now has to consider as successors of a given edge not just those unary rule calls which apply to the edge directly, but also those unary rule calls which involve the edge as an extended sister.

4.2. Meta Agenda Items

An additional augmentation that we have implemented involves adding a new type of agenda item. In the basic best-first algorithm, agenda items are made up of rule calls over a set of edges. These agenda items are generated by checking these edges against all the rules in the grammar, weeding out all rules which fail the test. It is also possible to define a meta agenda item that generates these basic agenda items, i.e., that itself generates these rule calls. This augmentation is a way of breaking apart the iteration of checking the edges with respect to the whole grammar into n steps corresponding to a partitioning of the set of grammar rules into n cells.

Adding this sort of agenda item has the effect that in the generate successors step, we produce agenda items of this new type at the first pass. These new agenda items will have the form [i, P, E'], where i is a heuristic score assignment, P is a partition cell of the grammar rules, and E' is a triple that represents the new edge and its adjacent edges at the time the new edge was added. Exercising an agenda item of this form involves checking the edges in E' with respect to the grammar rules of partition P only. Any rules which then pass these further tests will result in rule calls added to the agenda of a type that we assumed previously.

The advantages of incorporating this new type of agenda item are that it is possible to heuristically order the iteration over the grammar by assigning different weights to partitions as a whole. Thus we can postpone the checking of subsets of rules that are less likely or perhaps whose checking is more expensive than other subsets. Also, it is possible to define a single check for an entire grammar partition that in one fell swoop can eliminate the further consideration of any of those rules for the edge sets in question.

5. Evaluation and further study

Since the makeup of the heuristic function itself plays such a critical role in a parser based on heuristic graph search, it is difficult to evaluate the efficiency of the design in the general case when compared to other non-heuristically based designs. For chart parsing systems that use different representation languages and different grammars than the ones used in the Lingo project, some of the arguments advanced here in favor of a best-first enumeration order would lose force. However, given an NL interface application, a graph-unification-based representation language, Combinatory Categorial Grammars, and a one-to-many mapping from syntactic bracketings to semantic representations, it is hard to imagine that any radically different alternative could beat the parsing program we have outlined here on the grounds of efficiency, clarity, and flexibility. Experience in the Lingo project indicates an overwhelming performance improvement with a best-first parsing design when compared to the non-selective, bottom-up, breadth-first design that figured in our original bootstrapping operation.

Current research involves evaluation and refinement of heuristic scoring methods as well as a consideration of further design changes involving the agenda. We are using a set of tools for the tuning of grammars for particular corpora which take advantage of the inspectable control structure of the chart. Longer-term research includes the possibility of designing credit assignment schemes to automatically adapt to specific performance situations. We also count the possibility of incorporating some form of buffering within the agenda structure so that attention can be focused incrementally on local areas of the chart. Initial experience indicates that such a factor would improve the reliability of heuristics, which tends to be best when considered in a local fashion rather than globally. Also, left-to-right buffering could lead to a cascading design feeding into semantic and pragmatic components, an important step towards any future approximations of real-time parsing.

*** Note that this scheme by itself will tend to favor binary rule applications over unary rule applications.
**** Among the rules may be some whose function is to recover from ungrammatical input. These, in general, will receive a lower priority.
***** This augmentation also helps to establish a heuristic scoring technique for agenda items involving unary rule calls. See Wittenburg (1986b) for details.

Acknowledgements

The work reported on here involved many others at MCC besides the author. I gratefully acknowledge the contribution of other members of the Lingo team to this work. In particular, Elaine Rich made extensive contributions to the research itself and also offered valuable comments on earlier drafts of this paper. David Wroblewski has also made significant contributions to this work, including the coding of structure-sharing algorithms for graph unification and the design and coding of the window system interface to the parser and grammar development environment. I also wish to thank Lauri Karttunen for his long-standing support and advice on parsing matters.

References

1. Ades, A. and M. Steedman. 1982. On the Order of Words. Linguistics and Philosophy 4: 517-558.
2. Bear, J., and L. Karttunen. 1979. PSG: A Simple Phrase Structure Parser. Texas Linguistic Forum 15: 1-46.
3. Church, K. 1980. On Memory Limitations in Natural Language Processing. MIT Doctoral Dissertation. [Available through the Indiana University Linguistics Club.]
4. Earley, J. 1970. An Efficient Context-Free Parsing Algorithm. Communications of the ACM 13:94-102.
5. Ford, M., J. Bresnan, and R. Kaplan. 1982. A Competence-Based Theory of Syntactic Closure. In J. Bresnan (ed.), The Mental Representation of Grammatical Relations, pp. 727-796. Cambridge, Mass.: MIT.
6. Hart, P., N. Nilsson, and B. Raphael. 1968. A Formal Basis for the Heuristic Determination of Minimum Cost Paths. IEEE Transactions on SSC 4:100-107.
7. Hayes, P., P. Andersen, and S. Safier. 1985. Semantic Caseframe Parsing and Syntactic Generality. In Proceedings of the 23rd Meeting of the Association for Computational Linguistics, pp. 153-160.
8. Heidorn, G. 1982. Experience with an Easily Computed Metric for Ranking Alternative Parses. In Proceedings of the 20th Meeting of the Association for Computational Linguistics, pp. 82-84.
9. Kaplan, R. 1973. A General Syntactic Processor. In R. Rustin (ed.), Natural Language Processing, pp. 193-241. New York: Algorithmics.
10. Karttunen, L. 1984. Features and Values.
Proceedings of Coling, pp. 28-33. Association for Computational Linguistics. 11. Karttunen, L. 1986. HUG: A development environment for unification-based grammars. Unpublished ms., SRI International and CSLI, Stanford University. 12. Karttunen, L., and M. Kay. 1985. Structure Sharing with Binary Trees. In Proceedings of the 23rd Meeting of the Association for Computational Linguistics, pp. 133-136. 13. Kay, M. 1980. Algorithm Schemata and Data Structures in Syntactic Processing. Xerox Palo Alto Research Center, tech report no. CSL-80-12. 14. Marcus, M., D, Hindle, and M. Fleck. 1983. D-Theory: Talking about Talking about Trees. In Proceedings of the 21st Annual Meeting of the Association for Computational Linguistics, pp. 129-136. 15. Martin, P., D. Appelt, and F. Pereira. 1983. Transportability and Generality in a Natural-Language Interface System. In Proceedings of IJCAI, pp.574-581. 16. Martin, W., K. Church, and R. Patil. 1981. Preliminary Analysis of a Breadth-First Parsing Algorithm: Theoretical and Experimental Results. MIT tech report no. MIT/LCS/TR-261. 17. Nilsson, N. 1980. Principles of Artificial Intelligence. Palo Alto, Ca.: Tioga. 18. Pereira, F. 1983. Logic for Natural Language Analysis. Technical note 275, A.I. Center, SRI International. 19. Pereira, F. 1985. A Structure-Sharing Representation for Unification-Based Grammar Formalisms. In Proceedings of the 23rd Meeting of the Association for Computational Linguistics, pp. 137-144. 20. Rich, E., J. Barnett, K. Wittenburg, and G. Whittemore. 1986. Ambiguity and Procrastination in NL Interfaces. Technical report no. HI-073-86, Microelectronics and Computer Technology Corporation. 21. Robinson, J. 1982. DIAGRAM: A Grammar for Dialogues. Communications of the ACM 25:27-37. 22. Shieber. S. 1984. The Design of a Computer Language for Linguistic Information. Proceedings of Coling84, pp. 362-366. Association for Computational Linguistics. 23. Shieber, S. 1985. 
Using Restriction to Extend Parsing Algorithms for Complex-Feature-Based Formalisms. In Proceedings of the 23rd Meeting of the Association for Computational Linguistics, pp. 145-152. 24. Shieber, S. 1986. An Introduction to Unification-Based Approaches to Grammar. Chicago: University of Chicago Press, forthcoming. 25. Slocum, J. 1984. METAL: The LRC Machine Translation System. Paper presented at the ISSCO Tutorial on Machine Translation, April 2-6, 1984, Lugano, Switzerland. [Working paper no. LRC-84-2, Linguistics Research Center, University of Texas at Austin .] 26. Steedman, M. 1985. Dependency and Coordination in the Grammar of Dutch and English. Language 61:523-568. 27. Thompson, H. 1981. Chart Parsing and Rule Schemata in PSG. In Proceedings of the 19th Annual Meeting of the Association for Computational Linguistics, pp. 167-172. 28. Wittenburg, K. 1985. Some Properties of Combinatory Categorial Grammars of Relevance to Parsing. Paper presented at the Annual Meeting of the Linguistics Society of America, December 27-30, Seattle. [MCC tech report no. HI-012-86.1 29. Wittenburg, K. 1986a. Extraposition from NP as Anaphora. To appear in Syntax and Semantics, Volume 20: Discontinuous Constituencies. New York: Academic. [MCC tech report no. HI-118-85.1 30. Wittenburg, K. 1986b. Parsing as Heuristic Graph Search. Technical report no. HI-075-86, Microelectronics and Computer Technology Corporation. 1058 / ENGINEERING
A CHINESE NATURAL LANGUAGE PROCESSING SYSTEM BASED UPON THE THEORY OF EMPTY CATEGORIES

Long-Ji Lin*, James Huang**, K.J. Chen*** and Lin-Shan Lee*
*Dept. of Electrical Engineering, National Taiwan University, Taiwan, R.O.C.
**Dept. of Modern Languages and Linguistics, Cornell University, U.S.A.
***Institute of Information Science, Academia Sinica, Taiwan, R.O.C.

ABSTRACT

In this paper, we will present a device specially designed on the basis of the theory of empty categories. This device cooperates with a bottom-up parser and is used as an elegant and efficient approach to treat the troublesome problems of the transformations of passivization, relativization, topicalization, ba-transformation and the use of zero pronouns in Chinese natural language. With the aid of the device, the grammar rules for Chinese will be much more simplified and easier to design, and the processing capability can be significantly improved.

I INTRODUCTION

Passivization, relativization, topicalization, ba-transformation and the use of zero pronouns play major roles in Chinese. To deal with those syntactic phenomena, the conventional approach is to collect a set of grammar rules to cover all the possible sentence patterns derived from those transformations. But such an approach needs a great set of grammar rules to cover all the possibilities. Especially the complexity resulting from the interactions of several transformations will make such an approach infeasible. Another approach, adopted in this paper, is the use of the raise-bind mechanism based upon the theory of empty categories. It seems that the above syntactic phenomena are not related to each other. However, the sentences derived from them all involve the common use of empty categories. With the use of the raise-bind mechanism, the parser will treat the transformations in the same way.
The following sections will briefly describe our parsing algorithm first, then discuss empty categories in Chinese and how the raise-bind mechanism operates.

II THE PARSING ALGORITHM

In the SASC system presented here, Chinese sentences are syntactically analyzed from the viewpoint of generative grammar (Huang, 1982). The SASC system uses a bottom-up parser instead of a top-down parser, because the former tends to be more efficient for Chinese sentence analysis. The parser uses charts (Kay, 1973; Kaplan, 1973) as global working structures, because many natural language processing systems, such as MIND (Kay, 1973) and GSP (Kaplan, 1973), have proved the chart to be an efficient data structure to record what has been done so far in the course of parsing. A parser based on charts can avoid the inefficiency in duplicating many computations that a top-down parser often suffers when backtracking occurs. The input Chinese sentence is submitted to a preprocessor, which segments the input sentence (a sequence of Chinese characters) into words. The result of the preprocessor is represented by a chart, and is sent to the parser. The parser parses sentences in the way that phrases are built up on the chart by starting with their heads and adjoining constituents on the left or the right of the heads. For example, according to the phrase structure rule (PSR), "NP -> QP N", N (noun) is the head of NP. When encountering a noun, the parser will try to build an NP by starting with the noun and adjoining the preceding quantity phrase (QP). According to the PSR, "VP -> V-n NP", V-n (transitive verb) is the head of VP. When encountering a transitive verb, the parser's action is similar to that of "NP -> QP N", except that it tries to adjoin the following NP as its object. But if its following NP is not yet parsed by the parser, the expectation to build a VP is suspended until an NP is built up in the object position. The parser using the above algorithm constructs syntax trees of input sentences exactly from bottom to top. The algorithm used seems to be a good combination of data-driven parsing and hypothesis-driven parsing. The implementation of the parsing algorithm and the grammar to model Chinese syntax can be found in (Lin et al., 1986).

From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved.

III EMPTY CATEGORIES

Let's consider the following Chinese sentences [the Chinese characters are illegible in the scanned original; the word-by-word glosses are retained]:

(1) he hurt Chang-san
(2) ba-transformation: he ba Chang-san hurt e (He hurt Chang-san)
(3) passivization: Chang-san by him hurt e (Chang-san was hurt by him)
(4) topicalization: that dog I never have seen e (I have never seen that dog)
(5) relativization: e playing de children (the children who were playing)
(6) Chang-san tried e escape (Chang-san tried to escape)
(7) pivot construction: he asked children e go to dinner (He asked the children to go to dinner)
(8) using zero pronoun: Chang-san likes e (Chang-san likes someone or something)

Sentences (2)-(8) all involve a missing subject or object (indicated by "e"). But what does each missing subject or object refer to? The solid lines under sentences (2)-(7) indicate the reference of each one. The missing object in (8), however, does not refer to any element within (8). In fact, it is an omitted pronoun, which refers to someone or something understood in the situation. According to the current linguistic theory (Chomsky, 1981; Huang, 1982), sentence (2) is derived from sentence (1) by a transformation called "move α". The transformation is performed as follows: the object, "Chang-san" in (1), is moved by the carrier "ba" to the position indicated in (2), and then leaves behind a "trace" (indicated by "e"). The trace dominates no lexical material, but is "bound" to its antecedent, "Chang-san".
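The head-driven, bottom-up strategy of Section II can be pictured with a minimal, hypothetical sketch. The rule shapes ("NP -> QP N" with N as head, "VP -> V-n NP" with V-n as head) come from the paper; the data structures and control flow here are invented for illustration and are not the SASC implementation.

```python
# Hypothetical sketch of head-driven bottom-up adjunction (not SASC code).
# Each head category projects a phrase and adjoins a neighbor on one side;
# if the needed neighbor is not yet built, the expectation stays suspended
# until a later pass supplies it.

RULES = {
    "N":   ("NP", "left",  "QP"),   # NP -> QP N : adjoin the preceding QP
    "V-n": ("VP", "right", "NP"),   # VP -> V-n NP : adjoin the following NP
}

def parse(tagged):
    """tagged: list of (word, category) pairs. Returns final constituents."""
    chart = [(cat, [word]) for word, cat in tagged]
    changed = True
    while changed:
        changed = False
        for i, (cat, _) in enumerate(chart):
            if cat not in RULES:
                continue
            phrase, side, need = RULES[cat]
            j = i - 1 if side == "left" else i + 1
            if 0 <= j < len(chart) and chart[j][0] == need:
                lo, hi = min(i, j), max(i, j)
                merged = (phrase, chart[lo][1] + chart[hi][1])
                chart[lo:hi + 1] = [merged]
                changed = True
                break
    return chart
```

Running `parse([("saw", "V-n"), ("three", "QP"), ("dogs", "N")])` shows the suspension: the VP expectation cannot fire until the QP and N have first combined into an NP.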
In addition to ba-transformation, passivization, topicalization and relativization can also be analyzed as involving some form of "move α". Thus there are traces within these constructions. Sentences (6)-(8) also contain vacant NP-positions, which are not traces, because they are not derived from "move α". They are called "null pronominals". Null pronominals are in general free, for example in sentence (8). But those in certain constructions are bound, for example in sentences (6) and (7). Sentence (7) is called a pivot construction; that is, the object of the first verb is also the subject of the second verb. So the null pronominal in the subject position is "bound" to the object. Traces and null pronominals are known as empty categories (or empty NPs). The syntactic behavior of null pronominals is different from that of traces. They are, however, treated indiscriminately in our implementation.

IV THE RAISE-BIND MECHANISM

The raise-bind mechanism is used to cope with empty categories; in other words, to find out the antecedent for each empty category except those which are free (e.g. sentence (8)). With the aid of the raise-bind mechanism, the parser will generate an empty NP inserted into the vacant position where an NP is expected to appear. Then the empty NP will be raised up in some way along the parsing tree as the tree grows (recall that the parser works bottom-up), until its antecedent is parsed. At this point, the parser binds the empty NP by setting it to refer to its antecedent. Once bound, the empty NP will not be raised any further - this is because an empty NP has exactly one antecedent and cannot be bound more than one time. Not every NP position can be filled by an empty category. In Chinese, empty categories only appear in the subject position and direct object position, and never in the indirect object position or the prepositional object position.
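The raise-bind bookkeeping just described can be sketched in a few lines. This is a hypothetical illustration, not the SASC code: an empty NP records its binder once bound, only unbound empty NPs keep being raised, and resolution follows the binder chain (possibly through other empty NPs, as with e1 -> e2 -> e3 -> "children" in the Figure 1 example).

```python
# Hypothetical sketch of raise-bind bookkeeping (illustrative, not SASC code).

class EmptyNP:
    def __init__(self, origin):
        self.origin = origin   # position it came from: "subject" or "object"
        self.binder = None     # pointer to its antecedent, set once by bind()

    @property
    def bound(self):
        return self.binder is not None

def bind(empty, antecedent):
    # an empty NP has exactly one antecedent: it cannot be bound twice
    if empty.bound:
        raise ValueError("empty NP already bound")
    empty.binder = antecedent

def raise_unbound(empties):
    # only unbound empty NPs continue to be raised up the growing tree
    return [e for e in empties if not e.bound]

def resolve(empty):
    # follow binder pointers until a lexical antecedent (or a free empty NP)
    x = empty
    while isinstance(x, EmptyNP):
        if not x.bound:
            return None        # free, like the zero pronoun in sentence (8)
        x = x.binder
    return x
```

Binding e1 to e2, e2 to e3, and e3 to "children" then makes `resolve(e1)` return "children", mirroring how the parser answers who went to dinner.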
In our implementation, an empty NP contains three fields: (1) a field to keep the pointer to its antecedent, (2) a field to keep where it came from, and (3) a field to keep the syntactic or semantic constraints on the empty NP for later checking.

We can formulate the rules informally to treat relativization as follows: for a noun and a relative clause to be combined into an NP, the relative clause must contain an empty NP which is unbound and marked coming from either subject position or object position, and the empty NP will be bound to the (head) noun.

We can also state the rules for passivization as follows: once a clause is constructed, the parser checks whether the prepositional phrase "bei + NP" (similar to "by + NP" in English) is involved in the clause. If so, there must be an empty NP which is unbound and marked coming from the object position, and it will be bound to the subject of the clause.

Rules for pivot constructions can be formulated as follows: in a pivot construction, the direct object will bind the empty NP coming from the subject position of the embedded clause. Similarly, rules for topicalization, ba-transformation and others can be designed.

To illustrate the above rules, let's consider example (9) and its parsing tree in figure 1.

(9) by Li-szu ask go to dinner de children (the children who were asked by Li-szu to go to dinner)

[Figure 1, the parsing tree of (9), shows nodes S1, V', S2 and NP, with the bindings e1 -> e2, e2 -> e3 and e3 -> "children".]

Let's follow the bottom-up parser as it parses example (9): (1) Node S1 is constructed and e1 serves as the dummy subject. (2) Node V' is constructed. V' is a pivot construction, so e1 is bound to e2. (3) Node S2 is constructed. S2 is a passive clause, because of the PP, "by Li-szu". According to the rules for passivization, e3 binds e2. (4) Node NP is constructed. According to the rules for relativization, e3 is bound to "children". Notice that only e3 was raised up across node S2, because e1 and e2 had been bound before S2 was constructed. Once the parsing tree in figure 1 is finished, it is easy to answer who was asked and who went to dinner. Since e1 is the dummy subject of "go to dinner" and the binder of e1 is e2, whose binder is e3, whose binder is "children", we can conclude it is "children" who went to dinner. In the same way, we also conclude it is "children" who were asked.

The raise-bind mechanism also serves as a filter to rule out incorrect sentences or incorrect parsing trees. For example, if no empty NP is raised within a construction involving passivization or relativization, such a construction will be ruled out. If the mechanism is adopted for English sentence analysis, a test must be performed to rule out sentences with one or more empty categories which have no binder. But such sentences are in general grammatical in Chinese (see (8)).

V MORE SYNTACTIC PHENOMENA

Relativization in Chinese is a long-distance movement; that is, it can move an object across several S (sentence) nodes. Noun phrase (10) is an example.

(10) I ask Li-szu help me buy de book (the book which I asked Li-szu to buy for me)

(11) e2 like e1 de the man

Noun phrase (11) is ambiguous. If the head noun ("the man") binds e1, the NP means "the man whom someone likes". If the head noun binds e2, it means "the man who likes someone or something". To remove the ambiguity needs semantic interactions.

Now we can formulate the rules for relativization as follows: for a noun and a relative clause to be combined into an NP, the parser checks the "empty-NP list" raised from the relative clause.
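The walkthrough notes that only e3 was still being raised across node S2; raising is also blocked entirely at NP nodes, which is how the Complex NP Constraint of Section V is encoded. A minimal, hypothetical sketch of that barrier check (illustrative only, not the SASC grammar):

```python
# Hypothetical sketch: an empty NP percolates upward through S/VP nodes
# but is never raised past an NP node, so nothing inside a complex NP
# can be bound to an element outside it.

def raise_through(path_to_root):
    """path_to_root: labels of ancestor nodes, from the empty NP upward.
    Returns the labels the empty NP is actually raised through."""
    raised = []
    for label in path_to_root:
        if label == "NP":
            break              # NP is a barrier: raising stops here
        raised.append(label)
    return raised
```

So an empty NP under the path S2 - VP - S1 reaches the top, while one under S2 - NP - S1 never escapes the intervening NP.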
And: "if no empty NP is raised, rule out the NP; if an empty NP is raised and marked coming from subject position or object position or embedded object position (as in (10)), set the empty NP to be bound to the head noun; if two empty NPs are raised from subject and object position (as in (11)), employ semantic analysis to determine the proper binding."

Like relativization, topicalization is also a long-distance movement and is treated in a similar way.

Another syntactic phenomenon crucial to the parser is known as the Complex NP Constraint (CNPC) (Radford, 1981): CNPC -- no transformation rule can move any element out of a complex NP. A complex NP (CNP) is an NP containing a relative clause. The CNPC can be easily encoded in our grammar in this way -- no empty NP can be raised up across an NP node. Hence it is impossible for the empty NP within a CNP to be bound to any element outside that CNP.

In most cases, ba-transformation and passivization will move the direct objects of verbs. But the phenomenon known as "subject-to-object raising" (Radford, 1981) makes some differences: the subject of an embedded clause can be moved into the subject (or ba-object) position of the higher clause by passivization (or ba-transformation). For example, sentence (13) is derived from sentence (12) by such a movement.

(12) people will believe this mistake is right
(13) (This mistake will be believed to be right)

To cope with subject-to-object raising, the rules in the previous section for passivization are modified as follows: the subject of a passive clause will bind the empty NP in either the object position or the subject position of an embedded clause.

VI A COMPARISON WITH THE HOLD-LIST MECHANISM

In ATN (Bates, 1978), the hold-list mechanism is used for a purpose similar to that of the raise-bind mechanism. But we object to such an approach, for (1) it is not fit for a bottom-up parser; (2) it cannot deal with null pronominals (e.g. examples (6)-(8)); (3) it handles left extraposition (e.g. examples (2)-(4)), not right extraposition (e.g. example (5)). A movement is called left (right) extraposition if it moves an NP to the position left (right) of its trace. To deal with right extraposition, ATN uses another mechanism. In linguistic theory, both left extraposition and right extraposition move an NP to a position dominating its trace, and a null pronominal, if bound, is always bound to an NP dominating the null pronominal (Chomsky, 1981). So the raise-bind mechanism is sufficient to cope with all empty categories, since its function is to raise up an empty category to be bound to an NP which dominates this empty category.

VII CONCLUSION

We have presented how the raise-bind mechanism copes with traces and null pronominals in Chinese. With the use of the mechanism, many sophisticated syntactic phenomena can be encoded in the grammar easily. The mechanism is simple and theoretically complete. If semantic analysis is employed to remove ambiguities, such as in example (11), the correct bindings of empty categories can always be reached.

ACKNOWLEDGEMENTS

Thanks to the enlightening discussions with Chen, J.J. and Chen, J.C.

REFERENCES

[1] Bates, M. (1978) "The Theory and Practice of Augmented Transition Network Grammars", Natural Language Communication with Computers, pp. 191-259.
[2] Chomsky, N. (1981) Lectures on Government and Binding, Foris, Dordrecht.
[3] Huang, J. (1982) Logical Relations in Chinese and the Theory of Grammar, MIT doctoral dissertation.
[4] Kaplan, R.M. (1973) "A General Syntactic Processor", in [Rustin 1973].
[5] Kay, M. (1973) "The MIND System", in [Rustin 1973].
[6] Lin, L.J., K.J. Chen, James Huang and L.S. Lee (1986) "SASC: A Syntactic Analysis System for Chinese Sentences", International Journal of Computer Processing of Chinese and Oriental Languages, published by the Chinese Language Computer Society.
[7] Radford, A. (1981) Transformational Syntax: A Student's Guide to Chomsky's Extended Standard Theory, Cambridge Univ. Press.
[8] Rustin, R., ed. (1973) Natural Language Processing, Algorithmics Press, N.Y.
ADAPTING MUMBLE: EXPERIENCE WITH NATURAL LANGUAGE GENERATION

Robert Rubinoff
Computer and Information Science Department, Moore School of Electrical Engineering, University of Pennsylvania, Philadelphia, PA 19104

1 Abstract

This paper describes the construction of a MUMBLE-based [5] tactical component for the TEXT text generation system [7]. This new component, which produces fluent English sentences from the sequence of structured message units output from TEXT's strategic component, has produced a 60-fold speed-up in sentence production. Adapting MUMBLE required work on each of the three parts of the MUMBLE framework: the interpreter, the grammar, and the dictionary. It also provided some insight into the organization of the generation process and the consequences of MUMBLE's commitment to a deterministic model.

2 TEXT's Message Vocabulary

The TEXT system [7] is designed to answer questions about the structure of a database. It is organized into two relatively independent components: a strategic component which selects and organizes the relevant information into a discourse structure, and a tactical component which produces actual English sentences from the strategic component's output. The original tactical component [1] used a functional grammar [3]; it is this component that has been replaced.*

A tactical component for TEXT must be tailored to the form in which TEXT's strategic component organizes information. The strategic component responds to a query with a list of rhetorical propositions. A rhetorical proposition indicates some information about the database and the rhetorical function the information TEXT intends it to perform. For example, the rhetorical proposition:

(identification GUIDED PROJECTILE (restrictive (TRAVEL-MEANS SELF-PROPELLED)) (non-restrictive (ROLE PROJECTED-OBJECT)))

indicates that TEXT wants to identify guided missiles by saying that they are projectiles and that they have certain attributes.
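The rhetorical propositions are s-expression-like nested structures. The following is a hypothetical sketch, using Python tuples, of pulling an identification proposition apart; the helper name and representation are illustrative, not TEXT's actual code.

```python
# Hypothetical sketch: a TEXT-style rhetorical proposition as nested tuples,
# with a small helper that separates its restrictive and non-restrictive
# attribute clauses (illustrative only).

def destructure_identification(prop):
    assert prop[0] == "identification"
    entity, supertype = prop[1], prop[2]
    restrictive, non_restrictive = [], []
    for clause in prop[3:]:
        if clause[0] == "restrictive":
            restrictive.extend(clause[1:])
        elif clause[0] == "non-restrictive":
            non_restrictive.extend(clause[1:])
    return entity, supertype, restrictive, non_restrictive

prop = ("identification", "GUIDED", "PROJECTILE",
        ("restrictive", ("TRAVEL-MEANS", "SELF-PROPELLED")),
        ("non-restrictive", ("ROLE", "PROJECTED-OBJECT")))
```

The same attribute tuples could appear under a different rhetorical function such as attributive; only the head symbol of the proposition changes.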
*No attempt was made to investigate changing the overall division into strategic and tactical components. In part this was because the task of adapting the MUMBLE system to work with an independently developed text planner seemed like an interesting experiment in itself. Also, TEXT's strategic component was in the process of being ported from a VAX to a Symbolics 3600, and was thus already in a state of flux.

This same information might be presented with a different rhetorical function such as attributive, i.e. attributing certain information to guided missiles rather than using it to identify them. The information in the propositions generally consists of objects and attributes from TEXT's database model, indicating attributes of the mentioned objects and sub-type relationships between the objects. Some of the rhetorical functions allow other sorts of information. Inference propositions, for example, can indicate comparisons between database values:

(inference OCEAN-ESCORT CRUISER (HULL-NO (1 2 DE) (1 2 CA)) (smaller DISPLACEMENT) (smaller LENGTH) (PROPULSION STMTURCRD STMTURCRD))

Here TEXT infers that ocean escorts have smaller length and displacement than cruisers, that the two kinds of ships have the same form of propulsion, and that their hull numbers differ in their first two letters. The strategic component also produces focus information for each proposition to insure that the individual sentences will form a coherent paragraph when combined. Following Sidner's model [8], TEXT indicates a discourse focus and potential focus list for each proposition. The tactical component uses this information to decide when to pronominalize and what sentence-level syntactic structure to use.

3 Adapting MUMBLE to TEXT

MUMBLE is a general-purpose generation framework which has been used with several domains and message representations [5,2].** MUMBLE-based systems are constructed out of three components: the interpreter, the grammar, and the dictionary.
The interpreter controls the overall generation process, co-ordinating the propagation and enforcement of constraints and the (incremental) translation of the message.*** The grammar enforces grammatical constraints and maintains local grammatical information. The dictionary indicates, for each term in the vocabulary of the message formalism, the various ways it can be expressed in English. In adapting MUMBLE to this new domain, each of these three components had to be modified to a different degree.

**The version of MUMBLE used with TEXT dates from March 1985 and was originally set up to translate the output of the GENARO scene description system [4].

***A "message" is simply an expression that the text planner (here TEXT's strategic component) sends to MUMBLE to be translated. This is the same as a "realization specification" in [4].

3.1 The Interpreter
In particular, the routine that handles word morphology needed changes to the way it determined noun phrase plurality. A noun phrase was considered to be plu- ral if it was derived from a message element that represented more than one object. This was adequate when the domain con- tained only specific objects, as has been the case in past uses of MUMBLE. In TEXT, however, many terms represent generic concepts, e.g. SHIP, which represents the concept of a ship rather than any particular ship. Generic concepts can be ex- pressed using either singular or plural, for example “A ship is a water-going vehicle” vs. “Ships are water-going vehicles”. Thus the morphology routine had to be modified to look at the surface structure tree to see how the term had actually been realized. (The grammar and dictionary also had to be modified to always explicitly mark plural noun phrases in the tree). This was the only modification necesary to the interpreter. However, not all of the interpreter was used. In addition to the traversal and incremental expansion of the surface structure tree, MUMBLE provides a mechanism for subsequent mes- sages to be combined with the original message as it is trans- lated. This is done via “attachment points”[6] that are marked in the tree; a new message from the planner can be added at an attachment point if there is a way to realize it that satisfies the attachment point’s grammatical constraints. For example, in translating messages from GENARO, MUMBLE puts an ATTACH-AS-ADJECTIVE attachment point before the head noun in noun phrases. This allows MUMBLE to combine the messages such as (introduce house-l) and (red house 1) and generate the single sentence “This is a picture - of a red house” instead of “This is a picture of a house. It is red.” This attachment mechanism is not used with the TEXT output. 
Originally this decision was made because TEXT's strategic component organizes its messages into sentence-size packets (the propositions), and there seemed little reason to split these up and then have MUMBLE recombine them.

*Actually, attachment points are used to attach each proposition as a new sentence. This is simply a convenience to allow MUMBLE to be invoked once on a list of propositions; the results are exactly as they would be if MUMBLE were invoked individually on each proposition.

It turned out, though, that there was one case where attachment points would have been useful. The attribute-value pair (TARGET-LOCATION x) (where x is the type of target location, e.g. SURFACE or WATER) can be translated as either "a target location <X as a prep. phrase>" or "a <X as an adjective> target location". The latter form is preferred, but can only be used if x can be realized as an adjective. Thus MUMBLE can produce "a surface target location", but must resort to "a target location in the water". The problem is that since the interpreter traverses the tree in depth-first order, MUMBLE must decide which form to use for (TARGET-LOCATION X) before determining whether X has a realization as an adjective. This is the one case where it was necessary to circumvent MUMBLE's control strategy.

Attachment points could have solved this problem; the value x could have been a separate message which would have been attached ahead of "target location" only if it had a possible realization as an adjective. Unfortunately, there was a problem that prevented the use of attachment points. Attachment can be constrained so that the result will be grammatical and so that the attached message will be together with the proper objects. For example, (red house-1) will only be attached as an adjective in a noun phrase describing house-1. But there was no principled way to force several messages to be combined into a single phrase.

To see why this is a problem, consider a simple rhetorical proposition:

(identification SHIP WATER-VEHICLE (restrictive (TRAVEL-MODE SURFACE)))

("restrictive" indicates that this attribute distinguishes SHIP from other kinds of WATER-VEHICLE.) This is intended to produce something like "a ship is a water-going vehicle that travels on the surface". There are really two pieces of information here: that ships are water-going vehicles, and that ships travel on the surface. If we separate these out, the first would become (identification SHIP WATER-VEHICLE), and the second would become something like (attributive SHIP (TRAVEL-MODE SURFACE)). The problem is that there is no way to force MUMBLE to combine these back to get something like the original sentence. Instead, MUMBLE might translate these as "A ship is a water-going vehicle. Ships travel on the surface." The precise characterization of ships has been diluted. Even worse, if the next proposition is about ships, the travel-mode information may be combined with it instead, completely destroying the rhetorical structure intended by the strategic component.

Of course, there is no immediately apparent advantage to splitting up identification propositions (although it does suggest the possibility of letting more of the structural decisions be made by MUMBLE). But the same problems arise in trying
Its surface target location is indicated by the DB attribute DESCRIPTION.” What is needed is a way to constrain the attachment process to build several messages into a single phrase. In fact, this ca- pacity has been added to MUMBLE, although it is not present in the version used with TEXT [McDonald, personal commu- nication]. It is possible to create “bundles” of messages that can have additional constraints on their overall realization while allowing the individual messages to be reorganized by the at- tachment process. This facility would make it feasible to use attachment with TEXT. 3.2 The Grammar A MUMBLE grammar is not simply a declarative specifi- cation of valid surface structure like the rules in a context-free grammar. Rather, it consists of procedures that enforce (local) constraints and update info about current grammatical environ- ment. The grammar provides the low-level control on realiza- tion as the interpreter traverses the tree. Grammar in the more conventional sense is a by-product of this process, The grammar operates via “constituent-structure labels”. These labels are placed on positions in the surface structure tree to identify their grammatical function. Some, such as ad- jective and np, are purely syntactic. Others, such as com- pound and name, have more of a semantic flavor (as used with TEXT). Labels constrain the generation process through an associated “grammatical constraint”. This is a LISP pred- icate that must be satisfied by a proposed realization. When- ever the interpreter tries to translate a message, it checks that the constraints associated with all the labels at the current tree position are satisfied. These constraints can depend on both the proposed realization and the current environment. The la- bels also provide for local operations such as propagation of constraints through the tree and production of purely grammat- ical words such as the “to” in infinitival complements and the “that” in relative clauses. 
As with the constraints, this is done by associating procedures with labels. Each label has several "grammar routines" to be run at various times (such as when the interpreter enters or leaves a node, or after a message is realized). For example, the rel-clause label prints "that" when the interpreter enters a node it labels.

The labels handle local aspects of the grammar; global aspects are managed via "grammar variables". These keep track of global information (i.e. information needed at more than one tree position). For example, there are grammar variables that record the current subject and the current discourse focus. These "variables" are actually stacks so that embedded phrases can be handled properly. The grammar variables are maintained by the grammar routines associated with the labels. The clause label, for example, updates the current subject whenever the interpreter enters or leaves a clause. The grammar variables enable information to be passed from one part of the tree to another.

Adapting MUMBLE to TEXT required considerable modification and extension to the grammar. A number of new syntactic structures had to be added. Some, such as appositives, simply required adding a new label. Others were more complex; relative clauses, for example, required a procedure to properly update the current subject grammar variable (if the relative pronoun is serving as the subject of the relative clause) as well as a procedure to produce the initial "that". Also, some of the existing grammar had to be modified. Post-nominal modifiers, for example, previously were always introduced via attachment and realized as prepositional phrases. When working from TEXT, they are introduced as part of the original noun phrase, and they can sometimes be realized as relative clauses, so the constraints had to be completely redesigned. The grammar was also augmented to handle some constraints that were more semantic than syntactic.
These were included in the grammar because it is the only mechanism by which decisions made at one place in the tree can affect subsequent decisions elsewhere. In fact, there is really nothing inherently grammatical about the "grammar"; it is a general mechanism for enforcement of local constraints and propagation of information through the tree. It serves well as a mechanism for enforcing grammatical constraints, of course, but it is also useful for other purposes. For example, the grammar variable current-entity-type keeps track of whether the current clause is dealing with specific or generic concepts.

3.3 The Dictionary

The dictionary stores the various possible ways each kind of message can be realized in English. Dictionary entries provide the pieces of surface structure that are organized by the interpreter and constrained by the grammar. The dictionary has two parts: a look-up function and a set of "realization classes" (or "rclasses"). The look-up function determines which rclass to use for a message and how to construct its arguments (which are usually either sub-parts of the message or particular words to use in the English realization of the message). An rclass is a list of possible surface structures, generally parameterized by one or more arguments.

The look-up function is intended to be domain-dependent. However, the look-up function that was developed for GENARO, which has a fairly simple keyword strategy, seemed adequate for TEXT as well. The keyword is the first element of the message if the message is a list; otherwise it is the message itself. The function then simply looks up the keyword in a table of terms and rclasses. Using an existing function was convenient, but it did cause a few problems because it required that keywords be added to TEXT's formalism in a few cases. For example, numbers had to be changed to (number #) so they would have a keyword.
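The keyword strategy just described can be sketched in a few lines. This is an illustrative Python rendering under invented names (the real look-up function operates on Lisp s-expressions, and the table entries here are made up):

```python
# Sketch of GENARO's keyword look-up strategy as described above.
# The keyword is the first element of a list message, otherwise
# the message itself; a table maps keywords to realization classes.

RCLASS_TABLE = {
    "identification": "IDENTIFICATION-CLASS",  # hypothetical entries
    "attributive": "ATTRIBUTIVE-CLASS",
    "number": "NUMBER-CLASS",
}

def normalize(message):
    # TEXT's bare numbers had to be wrapped as (number #) to get a keyword.
    if isinstance(message, (int, float)):
        return ["number", message]
    return message

def look_up(message):
    message = normalize(message)
    keyword = message[0] if isinstance(message, list) else message
    return RCLASS_TABLE[keyword]

assert look_up(["identification", "SHIP", "WATER-VEHICLE"]) == "IDENTIFICATION-CLASS"
assert look_up(42) == "NUMBER-CLASS"
```

The normalize step corresponds to the small changes that had to be made to TEXT's formalism so that every message would carry a keyword.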
Some straightforward modifications to the look-up function, however, would allow MUMBLE to generate from the original TEXT formalism.

The realization classes vary greatly in their generality. Some of them are very general. The rclass SVO, for example, produces simple transitive clauses; the subject, verb, and object are arguments to the rclass. At the other extreme, the rclass TRAVEL-MEANS-CLASS is only useful for a particular attribute as used by TEXT; even if another system had an attribute called TRAVEL-MEANS, it is unlikely to mean exactly the same thing. Intuitively, it might seem that there would be a number of general realization classes like SVO. In fact, though, SVO was the only pre-existing rclass that was used for TEXT. None of the other rclasses proved useful.

One source of this lack of generality is that concepts that seem similar are often expressed quite differently in natural language. For example, of the eight generic attributes (e.g. TRAVEL-MEDIUM, TRAVEL-MEANS, and TARGET-LOCATION) in the dictionary, three require special rclasses because the general translation won't work for them. Inside TEXT's domain model, TRAVEL-MEDIUM and TRAVEL-MEANS are considered similar sorts of concepts. But in English, the two concepts are expressed differently. TEXT's notion of generic attribute simply doesn't correspond to any natural linguistic category. Furthermore, different message formalisms will tend to capture different generalizations. GENARO can use a CONDENSE-ON-PROPERTY rclass [4] because it has a particular notion of what a property is and how it gets translated into English. TEXT doesn't have anything that exactly corresponds to GENARO's properties (and even if it did, it couldn't condense things because the properties would be buried inside the rhetorical propositions).
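The contrast between a general rclass like SVO and an idiosyncratic one like TRAVEL-MEANS-CLASS can be illustrated with a rough sketch. This is not the actual implementation (real rclasses are lists of alternative structures filtered by grammatical constraints); the function bodies and wording below are invented for exposition:

```python
# Sketch of realization classes as parameterized surface-structure
# templates (illustrative only).

def svo_rclass(subject, verb, obj):
    # General: fills a simple transitive clause from its three arguments.
    return ["clause",
            ["subject", subject],
            ["predicate", ["verb", verb], ["object", obj]]]

def travel_means_class(value):
    # Idiosyncratic: hard-wires most of its wording and takes only
    # the attribute value, so it is useless outside TEXT's domain.
    return ["clause",
            ["subject", "it"],
            ["predicate", ["verb", "travel"], ["pp", "by", value]]]

tree = svo_rclass("SHIP", "be", "WATER-VEHICLE")
assert tree[1] == ["subject", "SHIP"]
```

The more wording an rclass hard-wires, the less likely it is to transfer to another message formalism, which is the lack of generality discussed above.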
The crux of the matter is that while there are linguistic generalizations that might be captured in realization classes, they usually cut across the grain of the classes of expressions in a message formalism, and cut differently for different formalisms. Thus whatever generalizations can be encoded into the rclasses for one formalism are unlikely to be useful with a different formalism. For example, TEXT can produce attribute expressions of the form "(HULL-NO (1 2 DE))", which means, roughly, "characters 1 through 2 of the HULL-NO are DE". This is a very idiosyncratic sort of message; it is unlikely that another (independently developed) text planner would have a message form with even the same meaning, let alone the same syntax. Thus the dictionary entry for this message is unlikely to be of use with any system other than TEXT. Many of TEXT's messages were similarly idiosyncratic, because its message formalism was designed around the needs of its particular task. Similarly, other generation systems will have their own idiosyncratic message formalisms. Thus they will need their own highly specific dictionaries to work with MUMBLE.

4 Using MUMBLE to produce text

4.1 Examples from TEXT

The new MUMBLE-based tactical component has been very successful. It can process all of the examples in the appendix to [7] and produce comparable English text. Furthermore, it can process all 57 sentences in the appendix in about 5 minutes; the old tactical component took that long to produce a single sentence.
For example, TEXT's strategic component responds to a request to describe the ONR database with:

(attributive db OBJECT (name REMARKS))
(constituency OBJECT (VEHICLE DESTRUCTIVE-DEVICE))
(attributive db VEHICLE (based-dbs (SOME-TYPE-OF TRAVEL-MEANS) (SOME-TYPE-OF SPEED-INDICES)))
(attributive db DESTRUCTIVE-DEVICE (based-dbs (SOME-TYPE-OF LETHAL-INDICES)))

which is then translated into English by MUMBLE as:

All entities in the ONR database have DB attributes REMARKS. There are 2 types of entities in the ONR database: vehicles and destructive devices. The vehicle has DB attributes that provide information on SPEED-INDICES and TRAVEL-MEANS. The destructive device has DB attributes that provide information on LETHAL-INDICES.

This translation is guided and controlled by the various subcomponents that make up the MUMBLE tactical component, as can be seen in a more detailed example. The message:

(identification SHIP WATER-VEHICLE (restrictive (TRAVEL-MODE SURFACE)))

when received by MUMBLE, is first looked up in the dictionary, which indicates that the overall structure of the sentence will be:

              clause
             /      \
      [subject]   [predicate]
        SHIP      [verb]  [obj]
                    be    (WATER-VEHICLE ...)

The interpreter then traverses this (partially filled-out) surface structure tree, soon reaching the still untranslated message element SHIP. The first possibility listed for this in the dictionary is the noun phrase "a ship"; since no constraints rule it out, this choice is selected. The interpreter continues, printing the words "a" and "ship" as it reaches them. The morphology routine converts "be" to "is" by checking the number of the current subject and whether any deviation from simple present tense (the default) has been arranged for.
Next the interpreter reaches the object, another message element, which is translated (via dictionary lookup) as:

    [det]   [head-noun]            [post-mods]
      a     water-going vehicle    (TRAVEL-MODE SURFACE)

"A" and "water-going vehicle" are simply printed when passed through. The treatment of (TRAVEL-MODE SURFACE) is more complicated. This message element can be translated in many ways, such as a noun phrase, a bare noun, a verb phrase, and so on. The post-mods label, however, will allow only two possibilities: a prepositional phrase or a relative clause. Since the dictionary indicates that relative clauses are preferred over prepositional phrases (for this message) and there are no other constraints blocking it, the relative clause form is chosen:

              rel-clause
             /          \
      [subject]      [predicate]
        <gap>       [verb]   [PP]
                    travel   SURFACE

The interpreter continues on through the relative clause in a similar fashion, eventually producing "that travels on the surface". (Note, incidentally, that the word "that" is not explicitly in the tree; rather it is printed out by an attached routine associated with the rel-clause label.) The complete translation produced by MUMBLE is:

A ship is a water-going vehicle that travels on the surface.

All three elements of the overall MUMBLE framework have worked together to produce the final English text.

4.2 MUMBLE and the Generation Process

The fundamental constraint that MUMBLE places on generation is, of course, that it is deterministic; this is the guiding principle driving its design, and has been discussed at length elsewhere [5,4]. There are, however, several other interesting constraints that MUMBLE places on the overall design of the generation process:

1. The information used to guide the generation process is centered around the message formalism, not language.

MUMBLE's knowledge of how language expresses things is stored in the dictionary, organized around the possible expressions in the message formalism.
Thus the "dictionary" does not list meanings of words, but rather possible (partial) phrases that can express a message. Similarly, the grammar is not set up primarily to express whether a sentence is grammatical but rather to constrain the choice of realizations as the sentence is generated. The grammatical constraints depend in part on the message being translated and the current grammatical environment (i.e. the grammar variables), none of which is preserved in the generated English sentence. Thus it may not be possible to tell whether a given sentence satisfies the grammar's constraints (at least without knowing a message it could have been generated from). This organization is a natural consequence of MUMBLE's purpose: to generate text. In language understanding, it is important to know about language, because that is what the system must be able to decipher. MUMBLE is also set up to know about its input, but its input is the message formalism, not natural language. What MUMBLE needs to know is not what a particular word or construction means, but rather when to generate it.

2. Generation is incremental and top-down.

Large messages are partially translated incrementally, with sub-messages left to be translated later as the interpreter reaches them. Thus it is easy for large-scale structure to influence more local decisions, but harder (or impossible) for local structures to constrain the global structure that contains them. This asymmetry is a direct consequence of determinism; something has to be decided first.

3. Constraints can be associated both with the surface structure being built up and with possible realizations.

Thus the existing structure can constrain what further structures are built, and candidate structures can constrain where they can be placed. This allows some of the bidirectionality that would seem to be ruled out by determinism.
For example, transitive verbs can insist on only being used with direct objects, and verb phrases with direct objects can insist on getting transitive verbs. Note though that the decision to use a transitive verb phrase would still be made first, before the verb was selected.

4. Constraints are largely local, with all global constraints anticipated in advance.

Most constraints are handled locally by constraint predicates that are attached to the surface structure tree or to the possible realization. Any global constraints must have been anticipated and prepared for, either by passing information down to the local node as the tree is traversed, or by storing the information in globally accessible grammar variables. Furthermore, all constraints are still locally enforced; global information can only constrain decisions if there are local constraints that use it.

5 Conclusion

The new MUMBLE-based tactical component has been very successful; it produces equivalent English text approximately 60 times faster than TEXT's old tactical component. Its construction, however, required modifications to each of the three parts of MUMBLE: the dictionary needed new entries for the new types of messages that TEXT produced; the grammar needed expansion to handle additional constructions and to implement new constraints that were needed for TEXT; and the interpreter was modified to handle a new criterion for noun phrase number. Furthermore, the new component sheds some light on how MUMBLE organizes the generation process and the consequences of its commitment to deterministic generation.

References

[1] Steve Bossie. A Tactical Component for Text Generation: Sentence Generation Using a Functional Grammar. Technical Report MS-CIS-81-5, CIS Department, University of Pennsylvania, Philadelphia, PA, 1981.
[2] Robin Karlin. Romper Mumbles. Technical Report MS-CIS-85-41, CIS Department, University of Pennsylvania, Philadelphia, PA, 1985.
[3] Martin Kay. Functional grammar. In Proceedings of the 5th Annual Meeting of the Berkeley Linguistics Society, 1979.
[4] David D. McDonald. Description directed control: its implications for natural language generation. In N. Cercone, editor, Computational Linguistics, pages 111-129, Pergamon Press, 1983.
[5] David D. McDonald. Natural language generation as a computational problem. In M. Brady and R. Berwick, editors, Computational Models of Discourse, pages 209-265, MIT Press, 1983.
[6] David D. McDonald and James Pustejovsky. TAGs as a grammatical formalism for generation. In Proceedings of the 23rd Annual Meeting of the ACL, pages 94-103, Association for Computational Linguistics, Chicago, 1985.
[7] Kathleen R. McKeown. Text Generation: Using Discourse Strategies and Focus Constraints to Generate Natural Language Text. Cambridge University Press, 1985.
[8] C. L. Sidner. Focusing in the comprehension of definite anaphora. In M. Brady and R. Berwick, editors, Computational Models of Discourse, pages 267-329, MIT Press, 1983.
1986