Generic Tasks in Knowledge-Based Reasoning: High-Level Building Blocks for Expert System Design

B. Chandrasekaran, Ohio State University

In the view of our research group at the Laboratory for Artificial Intelligence Research, the field of expert systems is stuck at a level of abstraction that obscures the essential nature of the information processing tasks that current systems perform. The available paradigms often force us to fit the problem to the tools rather than fashion the tools to reflect the structure of the problem. This situation is caused by a failure to distinguish between what we might call the information processing level (or the knowledge level, in Allen Newell's words) and the implementation language level. Most available languages, be they rule-, frame-, or logic-based, are more like the assembly languages of the field than programming languages with constructs essential for capturing the essence of the information processing phenomena. We wish to provide a critique of the abstraction level of the currently dominant approaches and propose an alternative level of abstraction. The proposed alternative not only clarifies the issues but also makes possible tools and approaches that help in system design, knowledge acquisition, and explanation.

Information processing tasks in knowledge-based reasoning

It seems intuitively clear that there are types of knowledge and control regimes that are common to diagnostic reasoning in different domains, and we similarly expect to find common structures and regimes for, say, design as an

0885-9000/86/0800-0023$01.00 © 1986 IEEE

activity. In addition, we also anticipate that the structures and control regimes for diagnostic reasoning and design problem solving will, generally speaking, be different. However, looking at the formalisms (or equivalently, the languages) commonly used in expert system design, we see that the knowledge representation and control regimes do not typically capture these distinctions. For example, in diagnostic reasoning we might wish to speak generically in terms of malfunction hierarchies, rule-out strategies, setting up a differential, etc.; while for design, the generic terms might be device/component hierarchies, design plans, ordering of subtasks, etc. Ideally we would like to represent diagnostic knowledge in a domain by using the vocabulary appropriate for the task. But typically the languages in which expert systems have been implemented have sought uniformity across tasks and have thus sacrificed clarity of representation at the task level. The computational universality of representation languages such as Emycin or OPS5 (any computer program can be written more or less naturally in these languages) often confuses the issue, since after the system is finally built it may not be clear which portions represent domain expertise and which are programming devices.

In addition, the control regimes that these languages come with (in rule-based systems they are typically variants of hypothesize-and-match, such as forward or backward chaining) do not explicitly indicate the real control structure of the system at the task level. For example, the fact that R1 [1] performs a linear sequence of subtasks, a very special and atypically simple version of design problem solving, is not explicitly


encoded: The system designer, so to speak, "encrypted" this control in the pattern-matching control of OPS5. These comments need not be restricted to the rule-based framework. It is possible to represent knowledge as sentences in a logical calculus and use logical inference mechanisms to solve problems. Knowledge could also be represented as a frame hierarchy with procedural attachments in the slots. It is relatively straightforward to rewrite Mycin [2], for example, in this manner (see Szolovits and Pauker [3]). In the logic-based approach the control issues would deal with choice of predicates and clauses, and in the second approach they would be at the level of, for example, which links to pursue for inheritance. None of these choices has any direct connection with the control issues natural to the task.

There is an opposite aspect to control. Because of the abstraction level relative to the information processing task, some control issues are artifacts of the representation. In our opinion these are often misinterpreted as issues at the knowledge level. For example, rule-based approaches often concern themselves with syntactic conflict resolution strategies. When the knowledge is viewed at the appropriate level, we can often see the existence of organizations of knowledge that bring up only a small, highly relevant body of knowledge without any need for conflict resolution at all. Of course, these organizational constructs could be "programmed" in the rule language (metarules are meant to do this in rule-based systems), but because of the status assigned to the rules and their control as knowledge-level phenomena (as opposed to the implementation-level phenomena, which they often are), knowledge acquisition is often directed toward strategies for conflict resolution, whereas it ought to be directed to issues of knowledge organization. This is not to argue that rule representations and backward- or forward-chaining controls are not natural for some situations.
If all a problem solver has in the form of knowledge in a domain is a large collection of unorganized associative patterns, then data-directed or goal-directed associations may be the best the agent can do. But that is precisely the occasion for weak methods such as hypothesize-and-match (of which the above associations are variants), and successful solutions typically cannot be expected in complex problems without combinatorial search. Typically, however, expertise consists of much better organized collections of knowledge, with control behavior indexed by the kinds of organization and forms of knowledge they contain.

We have found six generic tasks that are very useful as building blocks for the construction (and understanding) of knowledge-based systems. These tasks cover a wide range of existing expert systems. Because of their role as building blocks, we call them elementary generic tasks. While we have been adding to our repertoire of elementary generic tasks for quite some time, the basic elements of the framework have been in place for a number of years. In particular, our work on MDX [4, 5] identified hierarchical classification, hypothesis matching, and knowledge-directed information passing as

three generic tasks and showed how certain classes of diagnostic problems can be implemented as an integration of these generic tasks. (In the past we have also referred to them as problem-solving types.) Over the years we have identified several others: object synthesis by plan selection and refinement [6], state abstraction [7], and abductive assembly of hypotheses [8]. This list is not exhaustive; in fact, our ongoing research objective is to identify other useful generic tasks and understand their knowledge representation and control of problem solving.

Examining some generic tasks

Each of the generic tasks will be described in somewhat greater detail. Abstractly, the generic tasks can be characterized by providing information about (1) a task specification in the form of generic types of input and output information; (2) specific forms in which the basic pieces of domain knowledge are needed for the task, and specific organizations of this knowledge particular to the task; and (3) a family of control regimes appropriate for the task. The explanations will be both concrete and abstract, employing historical examples.

The first three generic tasks are best described by means of the MDX example. In the late 1970's, at about the time Mycin and rule-based systems had captured the imagination of many researchers and drawn attention to knowledge-rich problem solving, we began a project on medical diagnosis. The MDX system that resulted was the product of collaboration among four researchers: myself; Jack Smith, a member of the medical faculty at Ohio State University; and Fernando Gomez and Sanjay Mittal, graduate students in our group. The MDX system embodies many of the ideas that eventually resulted in the theory of generic tasks.

Hierarchical classification. Gomez recognized that the core process of diagnosis can be thought of as a classificatory problem-solving process, that is, one of identifying a patient case description as a node in a disease hierarchy. This led him to consider the nature and organization of knowledge and the control processes required for hierarchical classification. The following briefly summarizes the more detailed account given elsewhere [9] of classificatory problem solving for diagnosis.

Let us imagine that corresponding to each node of the classification hierarchy alluded to earlier we identify a diagnostic hypothesis. The total diagnostic knowledge is then distributed through the conceptual nodes of the hierarchy in a specific manner to be discussed shortly. The problem solving for this task will be performed top down; that is, the topmost concept will first get control of the case, then control will pass to an appropriate successor concept, and so on. In the medical example, a fragment of such a hierarchy might resemble Figure 1. More general classificatory concepts are higher in the structure; more particular ones are lower in the hierarchy. It is as if Internist first establishes that there is in fact a disease, then Liver establishes that the case at hand is a liver disease, while, say, Heart, etc., reject the case for not being in their domain. After this level, Jaundice may establish itself, and so on.

Each of the concepts in the classification hierarchy contains "how-to" knowledge that enables it to decide how well the concept matches the data (see the next section on hypothesis matching) and whether the concept can be established to some degree or rejected. When a concept rules itself irrelevant to a case, all its successors also get ruled out; thus, large portions of the diagnostic knowledge structure never get exercised. On the other hand, when a concept is properly invoked, a small, highly relevant body of knowledge comes into play. We note in passing that because of their role, the concepts are also called "specialists." (Thus the entire conceptual hierarchy is a community of specialists.) The problem solving that goes on in MDX is distributed.

The problem-solving regime implicit in the structure can be characterized as an establish-refine type. That is, each concept first tries to establish or reject itself. If it succeeds in establishing itself, it keeps track of which observations it can "cover" (account for), and the refinement process is invoked. This consists of seeing which of its successors can establish themselves. Typically, this process goes on until enough tip nodes (specific diseases or, more generally, classificatory hypotheses) are established to cover all the observations. This discussion of the problem solving in MDX is oversimplified and incomplete; nevertheless, it should be noted that certain kinds of multiple diseases can be handled, including diseases that are secondary to other diseases. Further discussion of these aspects of MDX is available in the literature [5]. MDX uses a simple version of classificatory control.
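The establish-refine regime just described can be sketched in a few lines of modern code. This is only an illustration of the control idea, not MDX itself; the disease names, findings, and establishing predicates are hypothetical.

```python
# A minimal sketch of establish-refine hierarchical classification.
# Each specialist either rejects itself (pruning its whole subtree)
# or establishes itself and invokes its successors for refinement.

class Specialist:
    """A classificatory concept that can establish or reject itself."""
    def __init__(self, name, establish, covers, successors=()):
        self.name = name
        self.establish = establish      # data -> True if the concept applies
        self.covers = covers            # data -> set of findings accounted for
        self.successors = list(successors)

    def solve(self, data):
        if not self.establish(data):
            return []                   # rejected: subtree never gets exercised
        established = [(self.name, self.covers(data))]
        for s in self.successors:
            established.extend(s.solve(data))
        return established

# A toy fragment of a diagnostic hierarchy (hypothetical knowledge).
jaundice = Specialist("Jaundice",
                      establish=lambda d: "high bilirubin" in d,
                      covers=lambda d: {"high bilirubin"})
liver = Specialist("Liver",
                   establish=lambda d: "abnormal liver tests" in d,
                   covers=lambda d: {"abnormal liver tests"},
                   successors=[jaundice])
heart = Specialist("Heart",
                   establish=lambda d: "chest pain" in d,
                   covers=lambda d: {"chest pain"})
internist = Specialist("Internist",
                       establish=lambda d: bool(d),
                       covers=lambda d: set(),
                       successors=[liver, heart])

case = {"abnormal liver tests", "high bilirubin"}
print(internist.solve(case))
# Heart rejects itself, so only Internist, Liver, and Jaundice establish.
```

Note how rejection at Heart prunes its subtree without any conflict resolution: the organization itself brings only the relevant knowledge into play.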
We have recently investigated some of the more complex issues in control for hierarchical classification [10]. But the important point is that the control issues for classification are not merely subsumed in the control issues, such as forward or backward chaining, for rule languages, but have a separate conceptual existence. Furthermore, classification has a homogeneous family of control, a distinct hierarchical organization, and specific types of knowledge associated with it. This point can be repeated for each of the generic tasks.

Hypothesis matching, or assessment. While MDX as a whole is engaged in classification, what about the problem solving of each of the specialists, namely, the classificatory concepts, which, when called upon, attempt to establish themselves? Generally, the data and the knowledge needed for establishment both have a good deal of uncertainty associated with them (the sort of uncertainty that Mycin attempted to handle with its "certainty factors"). Basically, the information processing task for establishing a concept is to match a concept against relevant data and determine a degree of fit. But this matching process itself is independent of its application for classification and has a generic

Figure 1. A diagnostic hierarchy.

character of its own. It could be used just as readily by a planner to select from among alternative plans; the features of the problem will be the data, and the plan's degree of appropriateness will be the result of the matching process. In the case of classification, the matching can be interpreted as degree of likelihood, but in the case of plan selection, the interpretation is "appropriateness" or "applicability."

Consistent with the idea that this form of matching is a generic activity, our work on MDX revealed that the matcher (we call the task "hypothesis matching") required distinctly separate forms of knowledge, organization, and control. The task involves hierarchical symbolic abstraction. An abstraction of the data is computed in the form of a degree of fit; it is symbolic because the abstraction is presented as one of a small number of discrete qualitative measures of fit ("definite," "very likely," ..., "definitely not") and hierarchical because the final abstraction is computed from intermediate conceptual abstractions, which can in turn be computed from other intermediate ones, or from raw data. For example, assume that the evidence for a disease D can be of three kinds, chemical, physical, and historical, and that the values of several laboratory tests together determine the strength of chemical evidence, while a number of specific observations during a physical examination are used to determine the strength of physical evidence, and so on. Further assume that each of the raw observations can be converted into one of a small set of appropriate symbolic values, for example, "abnormally high," "high," "normal," etc. For MDX, human experts were consulted to determine the combinations of values that would result in a specific degree of strength for an abstraction.
For example, to a question such as "Given an abnormally high value for test 1, a moderately high value for test 2, and a normal value for test 3, what, in your judgment, is the strength of chemical evidence?" the expert might reply "very high." Given that the number of possible symbolic values for each level is kept low, and that a number of combinations will be judged not possible by experts, the tables that incorporate such judgments are typically sparse and computationally very manageable. This process is repeated until the top-level abstractions can be readily computed. The theory of uncertainty handling implicit in this approach is discussed elsewhere [5, 11, 12]. The relevant point here is that this problem of matching hypotheses against data is a general subtype of reasoning useful in a number of different contexts.
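Hierarchical symbolic abstraction with sparse expert tables can be sketched as nested table lookups. The tests, symbolic scales, and table entries below are hypothetical illustrations, not MDX's actual knowledge.

```python
# A sketch of hypothesis matching by hierarchical symbolic abstraction.
# Expert-supplied sparse tables map combinations of symbolic values to
# the strength of an intermediate abstraction; unlisted combinations
# fall back to a default, keeping each table small.

# (test 1 value, test 2 value) -> strength of chemical evidence
CHEMICAL_TABLE = {
    ("abnormally high", "high"): "very high",
    ("abnormally high", "normal"): "likely",
    ("normal", "normal"): "definitely not",
}

# (chemical evidence, physical sign) -> top-level degree of fit
EVIDENCE_TABLE = {
    ("very high", "present"): "definite",
    ("very high", "absent"): "very likely",
    ("likely", "present"): "very likely",
}

def match_hypothesis(test1, test2, physical_sign):
    """Compute the top-level degree of fit from an intermediate
    abstraction, each step a lookup in a sparse expert table."""
    chemical = CHEMICAL_TABLE.get((test1, test2), "unlikely")
    return EVIDENCE_TABLE.get((chemical, physical_sign), "unlikely")

print(match_hypothesis("abnormally high", "high", "absent"))   # very likely
print(match_hypothesis("normal", "normal", "present"))         # unlikely
```

The matcher is independent of classification: the same structure could just as well score the applicability of a design plan.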

Knowledge-directed information passing. Now consider the following situation in classificatory problem solving. Suppose a piece of knowledge in the liver specialist states, "If history of anesthetic exposure, consider hepatitis." But what if there is no mention of anesthetics in the patient record even though his history indicates recent major surgery? We would expect a competent physician to infer possible exposure to anesthetics in this case and therefore consider hepatitis. Mittal [13] noted that the reasoning involved in making this inference is not classificatory but involves a form of generic task that we call knowledge-directed information passing. In our work on MDX, we were led to the creation of a separate subsystem called PATREC for performing this inference. The literature will provide interested readers with details on PATREC [5, 14].

The knowledge about each data concept (not the values for a particular patient, but general domain knowledge such as default values for attributes, strategies for inferring them if they are not available, etc.) is stored in a frame, and these concepts are typically organized in a frame hierarchy. (These frames also have pointers to the actual data values for particular instances, but that is largely a matter of implementation and need not be discussed further.) Briefly, this form of reasoning involves accessing a frame that stores either the desired datum or information about how the value of the datum might be obtained, including possible default values. For instance, in the above example the frame corresponding to "anesthetics" will examine its portion of the database and find nothing in it, but it will find knowledge of the following kinds: (1) check if particular types of anesthetics were administered; (2) check if any major surgery was performed, and if so answer "yes"; (3) if none of the above inferences yields an answer, the default answer is "no."
In this particular case it will find no record of various specific anesthetics and will check the "surgery" frame, which will have its own knowledge in the form "Check various kinds of surgeries," and eventually it will infer that major surgery was performed. The anesthetics concept can then infer that the patient had been exposed to anesthetics. Mittal [13] has worked out a complex theory of organization of such databases, including issues of temporal reasoning of certain types.

Abductive assembly. MDX viewed diagnosis as largely classificatory, but we have since been building a more comprehensive framework for diagnostic reasoning in which diagnosis is viewed as building a hypothesis that best explains the data. The underlying information processing task can be conveniently thought of as having two components: In one, hypotheses are generated, each with some attached degree of plausibility and a list of observations it can account for (typically the number of hypotheses would be a small subset of the hypothesis space in general); in the other, a subset of these hypotheses is assembled into a composite hypothesis so as to satisfy various criteria of "best coverage."
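The anesthetics inference described under knowledge-directed information passing can be sketched as frames holding fallback strategies for missing data. The frame names, strategies, and patient record here are hypothetical, not PATREC's actual content.

```python
# A sketch of knowledge-directed information passing: each data
# concept is a frame that either finds its datum in the database or
# applies inference strategies, falling back to a default value.

class DataConceptFrame:
    def __init__(self, name, strategies, default=None):
        self.name = name
        self.strategies = strategies   # functions: db -> value or None
        self.default = default

    def value(self, db):
        if self.name in db:            # datum recorded directly
            return db[self.name]
        for strategy in self.strategies:
            v = strategy(db)
            if v is not None:
                return v               # inferred from other concepts
        return self.default            # e.g. default answer "no"

# "Check various kinds of surgeries" (hypothetical rule).
surgery = DataConceptFrame(
    "surgery",
    strategies=[lambda db: "yes" if db.get("cholecystectomy") == "yes" else None],
    default="no")

# If any major surgery can be inferred, answer "yes" for anesthetics.
anesthetics = DataConceptFrame(
    "anesthetics",
    strategies=[lambda db: "yes" if surgery.value(db) == "yes" else None],
    default="no")

record = {"cholecystectomy": "yes"}    # no mention of anesthetics
print(anesthetics.value(record))       # inferred: yes
```

The anesthetics frame never consults the liver specialist; the inference is driven entirely by knowledge about the data concepts themselves.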


In the RED system for antibody identification [8], we propose an architecture for this problem. Our plan calls for an MDX-style system, with a classifier, matcher, and data abstractor, to produce a small number of highly plausible classificatory hypotheses; an assembler then uses them to build a best explanatory composite. The assembly process itself is again quite generic; given a set of hypotheses and an account of what they can explain, abductive assembly problem solving is useful for producing a composite that best explains the data. Dendral [15] and Internist [16] can both be thought of as systems with components that perform this type of reasoning. Josephson [8] has analyzed the problem-solving process needed for this form of reasoning. The knowledge that the assembler needs is in the form of causal or other relations (such as incompatibility, suggestiveness, special case of) between the hypotheses and the relative significance of data items.

A simplified version of the control regime for problem solving can be given as follows: Assembly and criticism alternate. In assembly, a means-ends regime, driven by the goal of explaining all the significant findings, is in control. At each stage, the best hypothesis that offers to explain the most significant datum is added to the composite hypothesis so far assembled. After each assembly, the critic removes explanatorily superfluous parts. This loops until all the data are explained or until no hypotheses are left.
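The alternation of assembly and criticism can be sketched as a short loop. The hypotheses, plausibilities, and coverage sets below are hypothetical illustrations, not RED's antibody knowledge.

```python
# A sketch of the assemble-and-criticize regime for abductive
# assembly: add the most plausible hypothesis that explains an
# unexplained finding, then let the critic drop superfluous parts.

def assemble(hypotheses, findings):
    """hypotheses: list of (name, plausibility, findings-explained)."""
    composite, unexplained = [], set(findings)
    while unexplained:
        candidates = [h for h in hypotheses
                      if h not in composite and h[2] & unexplained]
        if not candidates:
            break                      # no hypotheses left
        best = max(candidates, key=lambda h: h[1])
        composite.append(best)
        unexplained -= best[2]
        # Critic: remove parts whose coverage is subsumed by the rest.
        for h in list(composite):
            others = set().union(*(g[2] for g in composite if g is not h))
            if h[2] <= others:
                composite.remove(h)
    return [h[0] for h in composite]

hyps = [("hepatitis", 0.9, {"jaundice", "fatigue"}),
        ("gallstones", 0.7, {"jaundice"}),
        ("anemia", 0.6, {"fatigue", "pallor"})]
print(assemble(hyps, {"jaundice", "fatigue", "pallor"}))
# hepatitis and anemia together cover all findings; gallstones is superfluous.
```

The significance ordering of findings and relations such as incompatibility between hypotheses, which the text mentions, are omitted here for brevity.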

Other generic tasks

The generic tasks described so far were motivated by our investigation of diagnostic reasoning, though they can all be used as building blocks for other types of problem solvers. In the early 1980's, we began to investigate design problem solving and identified routine design, a form of problem solving wherein the way to decompose a design problem is already known, and compiled "design plans" are available for each major stage in design. We distinguished this form of design problem solving (class 3 design, as we called it) from situations in which even the components of the object being designed are unknown (class 1), or situations in which the components are known but design plans are not available in a compiled form (class 2). These three classes are not meant to be a rigorous account of the design process but to give a feeling for the spectrum of difficulty of the design task, with class 3 being the most routine.

We are talking about design as an abstract activity; it can be applied to concrete things such as mechanical devices or abstract objects such as plans or programs. In fact, we have applied the following ideas to the construction of a mission-planning assistant in the domain of military logistics. A number of planning systems, such as the Molgen system [17], can be brought under this framework.

Hierarchical design by plan selection and refinement. In our group, Brown [18] investigated the forms of knowledge and

control for class 3 design. The Aircyl system, which captures the mechanical design expertise in the domain of air-cylinder design, is a result of this study. Typically, knowledge for this type of design activity comes in two forms: (1) The object structure is known at some level of abstraction; that is, the typical components of a device under design and their configuration are known. For example, in the Aircyl domain the general structure of the air cylinder under design is known (the air cylinder is not being invented), but the actual dimensions and choice of material are to be made

case-specific. (2) Design plans are available for each part in the structure. These plans have knowledge to help make some design choices for that component, and they also invoke subcomponent designs for refining the design at that level of abstraction.

This knowledge is organized as a hierarchy of design specialists, mirroring the device-component hierarchy of the object structure. Each specialist has design plans which, as mentioned, can be used to make commitments for some dimensions of the component. The control regime for routine design is top-down in general. The following is done recursively until a complete design is worked out: A specialist corresponding to a component of the object is called; it chooses a plan based on some specification, instantiates and executes some part of the plan, which in turn suggests further specialists to call to set other details of the design. Plan failures are passed up until appropriate changes are made by higher-level specialists, so that specialists who failed may succeed on a retry.

Design is generally a complex activity. However, the realization of a design by invoking a design plan, which makes some commitments and also calls other plans for refinement, is an elementary building block for design and has a great deal of generality.
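The top-down plan-selection-and-refinement regime can be sketched as recursive specialists, each trying its plans in turn and passing failure upward. The specialists, plans, and specification below are hypothetical, not taken from the Aircyl system.

```python
# A sketch of routine (class 3) design by plan selection and
# refinement: a specialist chooses a plan, makes commitments, and
# calls subcomponent specialists; a subcomponent failure is passed
# up so the parent can retry with another plan.

class DesignSpecialist:
    def __init__(self, name, plans, subspecialists=()):
        self.name = name
        self.plans = plans             # spec -> commitments dict, or None on failure
        self.subspecialists = list(subspecialists)

    def design(self, spec):
        for plan in self.plans:
            commitments = plan(spec)
            if commitments is None:
                continue               # this plan fails; try the next
            result = {self.name: commitments}
            for sub in self.subspecialists:
                sub_result = sub.design(spec)
                if sub_result is None:
                    break              # failure passed up: abandon this plan
                result.update(sub_result)
            else:
                return result          # all refinements succeeded
        return None                    # all plans failed at this level

# A toy device-component hierarchy (hypothetical knowledge).
piston = DesignSpecialist(
    "piston",
    plans=[lambda s: {"diameter": s["bore"] - 0.5} if s["bore"] > 2 else None])
cylinder = DesignSpecialist(
    "air-cylinder",
    plans=[lambda s: {"material": "steel", "bore": s["bore"]}],
    subspecialists=[piston])

print(cylinder.design({"bore": 4.0}))
```

With a specification of `{"bore": 1.0}` the piston plan fails, the failure propagates up, and the whole design returns no result, illustrating the retry-or-fail behavior.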

State abstraction. Now let's look at a generic task for predicting the consequences of actions. Often when actions are contemplated on a complex system, we would like to be able to predict their consequences for the system's functionality. (For example, "What will happen if valve A is closed in this process plant?") The reasoning necessary for this task requires some type of qualitative simulation. One such type occurs when expertise in the domain is available in a highly compiled form, that is, when the reasoner has knowledge about the structure of the device or system (the components and how they are connected), the functionality of the components and how they relate to the functionality of the system as a whole, and compiled knowledge about how state changes to components affect their functionality. Typically, experts in a domain would need to have knowledge of this form in order to evaluate proposed actions very quickly.

In this reasoning, the proposed action is interpreted as a state change in the component, and the change in the functionality of the component is inferred. This change in functionality is in turn interpreted as a state change of the


higher-level subsystem of which the component is a part, and the change in that subsystem's functionality is then inferred, and so on. In the example above, the closing of the valve will be used to infer the loss of the function "cooling water output" of the component "cooling water inlet." Assuming that this is a component of the cooling system, the change in functionality of the component can be interpreted as a change of state in the cooling system. This chain of reasoning can be carried on recursively until the effect on the functionality of the whole system can be inferred.

Abstractly, then, the following characterization of this task can be given.

Task specification: Given a change in some state of a system, provide an account of the changes that can be expected in the functions of the system.

Form of knowledge: <change in functionality of subsystem = change in state of the immediately larger system>.

Organization of knowledge: Knowledge of the above form is distributed in conceptual specialists corresponding to systems/subsystems. The way these conceptual specialists are connected mirrors the way the system/subsystem structure is put together.

Control regime: The control regime is basically bottom-up and follows the architecture of the system/subsystem relationship. The changes in states are followed through, interpreted as changes in the functionalities of subsystems, until the changes in the functionalities at the desired level of abstraction are obtained.

The literature offers a concrete example of this task [7].
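The bottom-up propagation in the valve example can be sketched as a chain of lookups in compiled knowledge. The plant structure and rules below are hypothetical illustrations.

```python
# A sketch of state abstraction: a state change in a component is
# interpreted as a functionality change, which is reinterpreted as a
# state change of the immediately larger system, and so on upward.

# (system, state change) -> change in that system's functionality
FUNCTION_RULES = {
    ("cooling-water-inlet", "valve A closed"): "loss of cooling water output",
    ("cooling-system", "no cooling water"): "loss of cooling",
    ("plant", "reactor not cooled"): "unsafe temperature rise",
}

# How a part's functionality change reads as a state change of the
# immediately larger system.
REINTERPRET = {
    "loss of cooling water output": ("cooling-system", "no cooling water"),
    "loss of cooling": ("plant", "reactor not cooled"),
}

def predict(component, state_change):
    """Follow the state change up the system/subsystem hierarchy,
    collecting the functionality change inferred at each level."""
    consequences = []
    while True:
        effect = FUNCTION_RULES.get((component, state_change))
        if effect is None:
            break
        consequences.append((component, effect))
        if effect not in REINTERPRET:
            break                      # reached the top level of abstraction
        component, state_change = REINTERPRET[effect]
    return consequences

for level, effect in predict("cooling-water-inlet", "valve A closed"):
    print(level, "->", effect)
```

Because the knowledge is compiled, each level is a single lookup; no simulation of the plant's physics is needed to evaluate the proposed action.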

Viewing existing expert systems in this framework

Let's look at some of the better known expert systems from the perspective of the framework developed so far in this article. Mycin's task is to classify a number of observations describing a patient's infection as resulting from one or another organism and, once this is done, to instantiate a plan with parameters appropriate to the particular patient situation. We have shown elsewhere [19] how the diagnostic portion of Mycin can be recast as a classification problem solver with a more direct encoding of domain knowledge and a control structure directly appropriate to this form of problem solving. Prospector [20] classifies a geological description as one of a previously enumerated set of formations.

Internist [16] generates candidate hypotheses by a form of enumeration (plausibility scoring and keeping only the top few) and uses a form of abductive assembly. These two types of problem solving alternate. Dendral [15] generates candidate hypotheses by a form of hypothesis matching and uses a form of abductive assembly that puts together the best molecular hypothesis from the fragments produced by the matching process.

Note that in the above analysis we have not mentioned rules (Mycin), networks (Prospector), graphs (Dendral), etc., which are the means of encoding and carrying out the tasks. This separation is one reason we are tempted to refer to this level of analysis as the "right" level of abstraction.

Complex generic tasks

Earlier we contrasted diagnosis and design as examples of distinctively different but generic problem-solving activities. Note, however, that diagnosis was not included as one of the generic tasks discussed, and only a type of design was included. There are, in fact, other levels at which generic phenomena are reported. Some expert system researchers point to the existence of generic problem areas such as process-control problems, and there are attempts to provide AI programming environments with constructs that can specify generic process-control structures for particular instances. What is the relation between these generic phenomena and the ones we have been discussing?

Further distinctions will aid understanding. Typically, many tasks that we intuitively think of as generic are really complex generic tasks. That is, they are further decomposable into components that are more elementary in the sense that each of them has a homogeneous control regime and knowledge structure. For example, what we call the diagnostic task (generic in the sense that it may be quite similar across domains) is not a unitary task structure. Diagnosis may involve classificatory reasoning at a certain point, reasoning from one datum to another at some other point, and abductive assembly of multiple diagnostic hypotheses at yet another point. Classification, as we have seen, has a form of knowledge and control behavior that is different from those for data-to-data reasoning, which in turn are dissimilar in these dimensions to assembling hypotheses. Similar arguments apply with even greater force to generic problem areas such as process control. Thus diagnosis, design, process control, etc., are "compound" processes, while the phenomena we have been discussing are more "atomic." Hence the term "building blocks" in the title.
Let's assume we have a complex, real-world, knowledge-based reasoning task and a set of generic tasks for each of which we have a representation language and a control regime to perform the task. If we can perform an epistemic analysis of the domain such that (1) the complex task can be decomposed in terms of the generic tasks, (2) paths and conditions for information transfer from the agents that perform these generic tasks to the others that need the information can be established, and (3) knowledge of the domain is available to encode into the knowledge structures for the generic tasks, then the complex task can be "knowledge-engineered" clearly and successfully. Notice that an ability to decompose complex tasks in this way brings with it the ability to characterize them in a useful way. We can see, for example, that the reason we are not yet able to handle difficult design problem solving is that we are often unable to find an architecture of generic tasks in terms of which the complex task can be constructed.

Clancey's work on classification problem solving [21] can be contrasted with our generic task of hierarchical classification. For Clancey, classification problem solving is an identifiable phenomenon that occurs within a number of expert systems. However, it is not a unitary building-block structure in his analysis. For example, he includes as part of classification a component called data abstraction, which roughly corresponds to the functionality of our knowledge-directed data inference component. The latter functionality, however, is not uniquely needed for classification; it could be used by a planner just as well, since a stage of data abstraction can also be involved in planning. Thus, what Clancey has called heuristic classification is in reality a compound task in our analysis and can be broken down into more elementary problem-solving tasks for greater clarity. Furthermore, identifying classification problem solving as separate from data abstraction enables us to associate a form of knowledge, an organization, and a control regime with each of the tasks, thus truly giving them the status of building blocks. This has advantages for knowledge encoding and system building, as we will see.

Building blocks for knowledge-based systems

Discussions of knowledge representation normally assume that one uses some language to represent knowledge about a domain and then uses various procedures to operate on that knowledge to make inferences and produce solutions to problems. Whatever the preferred knowledge representation (OPS5, Emycin, predicate calculus, semantic nets, frames, etc.), this point of view permeates the field. We have argued that this separation of knowledge from its use leads to a number of difficulties. Our generic-task approach suggests an alternative point of view: the representation of knowledge should closely follow its use, since the form of knowledge is tied to its use, and there are different organizations of knowledge for different types of problem solving. Ideally, we need knowledge representation languages that enable us to encode knowledge directly at the appropriate level, using primitives that naturally describe the domain knowledge for a given generic task. The problem-solving behavior for the task can then be automatically controlled by regimes appropriate for the task. Done correctly, this simultaneously facilitates knowledge representation, problem solving, and explanation.

For each generic task, the form and organization of the knowledge directly suggest the appropriate representation for encoding the domain knowledge. Since there is a control regime associated with each task, the problem solver can be implicit in the representation language. That is, as soon as knowledge is represented in the shell corresponding to a given generic task, the interpreter can create a problem solver that applies the control regime to the knowledge representation created for the domain. This is similar to what representation systems such as Emycin do, but note that we are deliberately trading generality at a lower level for specificity, clarity, richness of ontology, and control at a higher level.

We have designed and implemented representation languages for a number of these generic tasks, and other languages are on the way. Languages for classification (CSRL)12,22 and object synthesis by selection and refinement (DSPL)6 have been reported in the literature. The current version of CSRL includes facilities for hypothesis matching, but a language called HYPER for hypothesis matching and assessment will soon be available separately. A simple version of the state abstraction task language is available now. A database language called IDABLE and an abductive assembly language called PEIRCE will soon be available from our laboratory and will complete the current list of high-level building-block tools. All these languages make it possible to encode knowledge directly at the level of abstraction appropriate for the task. But these are not mere knowledge representation languages; they are really shells, and once the knowledge is represented, they and their associated control regimes together become a problem-solving agent.
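The shell idea can be suggested in miniature. In this hypothetical Python sketch (the decorator API is invented for illustration and bears no resemblance to CSRL's syntax), the domain author states only task-level knowledge, namely specialists and their hypothesis matchers, while the top-down control regime lives entirely in the shell's interpreter:

```python
# A minimal sketch of the "shell" idea: knowledge is written in
# task-level primitives, and the shell supplies the control regime,
# so the problem solver comes for free. The API is hypothetical.

class ClassificationShell:
    def __init__(self):
        self.specialists = {}   # name -> (match_fn, children)

    def specialist(self, name, children=()):
        """Task-level primitive: declare a node of the classification
        hierarchy; the decorated function is its hypothesis matcher."""
        def register(match_fn):
            self.specialists[name] = (match_fn, list(children))
            return match_fn
        return register

    def solve(self, root, case):
        """The control regime (top-down establish-refine) lives in the
        shell, not in the domain knowledge."""
        match_fn, children = self.specialists[root]
        if not match_fn(case):
            return []
        return [root] + [h for c in children for h in self.solve(c, case)]

shell = ClassificationShell()

@shell.specialist("flow-fault", children=["pump-fault"])
def flow_fault(case):
    return case["flow"] < case["expected_flow"]

@shell.specialist("pump-fault")
def pump_fault(case):
    return case["pump_pressure"] < 2.0

print(shell.solve("flow-fault",
                  {"flow": 3, "expected_flow": 10, "pump_pressure": 1.5}))
# ['flow-fault', 'pump-fault']
```

Note the trade the article describes: the shell is useless for, say, planning, but within classification the domain author never writes control code at all.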
It is in this sense that these languages are building blocks. Building an expert system for a complex task using these languages involves matching portions of the problem solving with various generic tasks. To the extent that such a mapping is possible (this kind of epistemic analysis turns out to be the really difficult part of expert system construction), the compound task is realizable by using our approach. As indicated earlier, a number of existing expert systems can in fact be analyzed and reencoded in this manner.

One aspect of this approach has not been discussed because of space limitations but is at least worth noting. Since the problem solving is decomposed into distinct problem-solving activities carried out by a number of different structures, the activities must be integrated by making it possible for these various structures to communicate with each other. As it turns out, all our implementations use a message-passing communities-of-specialists paradigm, so such communication and integration can be handled naturally in our framework.
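As a rough illustration of that integration style, the hypothetical sketch below casts two building-block structures as specialists that interact only through messages; the message names, class names, and domain rule are all invented for this example.

```python
# Sketch of the message-passing "community of specialists" style:
# each generic-task structure is an agent that answers messages, so
# structures built with different tools can still cooperate.
# Everything here is hypothetical.

class Specialist:
    def __init__(self, name):
        self.name = name

    def receive(self, message, payload):
        # Dispatch a message to the matching handler method.
        handler = getattr(self, "on_" + message)
        return handler(payload)

class DataAbstractor(Specialist):
    def on_abstract(self, raw):
        # Toy knowledge-directed data inference rule.
        return {"high-reading"} if raw > 100 else set()

class Classifier(Specialist):
    def __init__(self, name, data_agent):
        super().__init__(name)
        self.data_agent = data_agent

    def on_establish(self, raw):
        # The classifier asks the data-abstraction specialist for
        # abstractions rather than computing them itself.
        abstractions = self.data_agent.receive("abstract", raw)
        return "established" if "high-reading" in abstractions else "ruled-out"

abstractor = DataAbstractor("data")
classifier = Classifier("overload", abstractor)
print(classifier.receive("establish", 140))  # established
```

The point of the design is that neither specialist needs to know how the other is implemented internally; only the message protocol is shared.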


There has been an ongoing search in artificial intelligence for the "holy grail" of a uniform mechanism that will explain and produce intelligence. This desire has resulted in a number of candidate mechanisms to satisfy this need, from the perceptrons of the 1960s through first-order predicate calculus to rules and frames. Another school, often called the "scruffies," has proposed AI theories with a multitude of knowledge entities and mechanisms. While the scruffy programs have been more or less successful at their tasks, they have nevertheless been subject to the charge of ad hoc-ness from the proponents of simpler and more uniform mechanisms.

This article takes the middle ground. A multitude of mechanisms do exist, but they constitute an armory of generic information-processing strategies available to intelligent agents. That they are generic is what rescues them from the charge of ad hoc-ness, while their multiplicity yields an ability to match the information-processing strategy to the problem faced by the problem solver and the type of knowledge available. This position is not an argument for or against any of the mechanisms that have been repeatedly proposed. Even if some intelligent agent could be built out of perceptrons of appropriate types, or implemented entirely in a rule language or within a logic representation, it would not follow that the conceptual problems in the design of intelligent artifacts had vanished. There are problems to be solved at higher levels of abstraction before the artifact can be built.

This article proposes an appropriate level of abstraction at which to discuss the issues in the design of knowledge-based problem solving. Ease and clarity in system design and implementation, along with knowledge acquisition at a more conceptual level, are some of the main advantages of our approach. There are advantages in other dimensions as well; for example, I have described elsewhere how this approach directly helps in providing clearer explanations of problem solving in expert systems.23 The approach has a number of other implications.
For example, uncertainty handling in problem solving is best viewed as comprising different methods, one for each kind of problem solving, rather than as a uniform general method. In the literature,11,12,24 we have described a method of uncertainty handling that is especially appropriate for classification problem solving.
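As a hedged illustration of what task-specific uncertainty handling for classification might look like, the sketch below uses a small discrete confidence scale with combination and decision rules keyed to classification's own moves (establish, suspend, or rule out). The scale, thresholds, and rules are invented for this example, loosely inspired by the discrete confidence values described for CSRL-style systems.12

```python
# Hypothetical task-specific uncertainty handling for classification:
# a discrete confidence scale and rules tied to the task, rather than
# a uniform numeric calculus. All values and rules are illustrative.

RULED_OUT, UNLIKELY, NEUTRAL, LIKELY, CONFIRMED = -2, -1, 0, 1, 2

def combine(evidence):
    """Classification-specific combination: any strongly negative finding
    rules the hypothesis out; otherwise take the weakest support."""
    if RULED_OUT in evidence:
        return RULED_OUT
    return min(evidence)

def decide(confidence):
    """Map combined confidence onto the classifier's control decisions."""
    if confidence >= LIKELY:
        return "establish"   # refine into sub-hypotheses
    if confidence <= UNLIKELY:
        return "rule-out"    # prune the subtree
    return "suspend"         # revisit only if nothing better establishes

print(decide(combine([CONFIRMED, LIKELY])))   # establish
print(decide(combine([LIKELY, RULED_OUT])))   # rule-out
```

The rules make sense only because classification has a fixed repertoire of control decisions to feed; a planner or an abductive assembler would need a different scheme, which is the article's point.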

Acknowledgments

This research was supported by Air Force Office of Scientific Research grant 82-0255, National Science Foundation grant MCS-8305032, and the Defense Advanced Research Projects Agency, RADC contract F30602-85-C-0010. I would like to thank John Roach for inviting me to write this article, and one of the reviewers, who made especially useful comments.

References

1. J. McDermott, "R1: A Rule-Based Configurer of Computer Systems," Artificial Intelligence, Vol. 19, No. 1, 1982, pp. 39-88.
2. E. H. Shortliffe, Computer-Based Medical Consultations: MYCIN, Elsevier-North Holland, New York, 1976.
3. P. Szolovits and S. G. Pauker, "Categorical and Probabilistic Reasoning in Medical Diagnosis," Artificial Intelligence, Vol. 11, No. 1-2, 1978, pp. 115-144.
4. B. Chandrasekaran, S. Mittal, F. Gomez, and Smith, "An Approach to Medical Diagnosis Based on Conceptual Structures," Proc. Sixth Int'l Joint Conf. Artificial Intelligence, Aug. 1979, pp. 134-142.
5. B. Chandrasekaran and S. Mittal, "Conceptual Representation of Medical Knowledge for Diagnosis by Computer: MDX and Related Systems," in Advances in Computers, M. Yovits, ed., Academic Press, 1983, pp. 217-293.
6. D. C. Brown and B. Chandrasekaran, "Expert Systems for a Class of Mechanical Design Activity," IFIP WG5.2 Working Conf., Sept. 1984.
7. B. Chandrasekaran, "Towards a Taxonomy of Problem-Solving Types," AI Magazine, Vol. 4, No. 1, Winter/Spring 1983, pp. 9-17.
8. J. R. Josephson, B. Chandrasekaran, and J. W. Smith, "Assembling the Best Explanation," Proc. IEEE Workshop Principles of Knowledge-Based Systems, IEEE Computer Society, Los Alamitos, Calif., Dec. 1984. Revised version available from Laboratory for Artificial Intelligence Research, Ohio State University.
9. F. Gomez and B. Chandrasekaran, "Knowledge Organization and Distribution for Medical Diagnosis," IEEE Trans. Systems, Man and Cybernetics, Vol. 11, No. 1, Jan. 1981, pp. 34-42.
10. J. Sticklen, B. Chandrasekaran, and J. R. Josephson, "Control Issues in Classificatory Diagnosis," Proc. Ninth Int'l Joint Conf. Artificial Intelligence, Aug. 18-24, 1985.
11. B. Chandrasekaran, S. Mittal, and J. W. Smith, "Reasoning with Uncertain Knowledge: The MDX Approach," Proc. First Ann. Joint Conf. American Medical Informatics Assoc., May 1982, pp. 335-339.
12. T. C. Bylander and S. Mittal, "CSRL: A Language for Classificatory Problem Solving and Uncertainty Handling," AI Magazine, Summer 1986, to appear.
13. S. Mittal, Design of a Distributed Medical Diagnosis and Database System, doctoral dissertation, Dept. of Computer and Information Science, Ohio State University, 1980.
14. S. Mittal, B. Chandrasekaran, and J. Sticklen, "Patrec: A Knowledge-Directed Database for a Diagnostic Expert System," Computer, Vol. 17, No. 9, Sept. 1984, pp. 51-58.
15. B. Buchanan, G. Sutherland, and E. A. Feigenbaum, "Heuristic DENDRAL: A Program for Generating Explanatory Hypotheses in Organic Chemistry," in Machine Intelligence 4, B. Meltzer and D. Michie, eds., American Elsevier, New York, 1969.
16. H. W. Pople, "Heuristic Methods for Imposing Structure on Ill-Structured Problems," in Artificial Intelligence in Medicine, P. Szolovits, ed., Westview Press, 1982, pp. 119-190.
17. P. Friedland, Knowledge-Based Experiment Design in Molecular Genetics, PhD thesis, Computer Science Dept., Stanford University, 1979.
18. D. C. Brown, Expert Systems for Design Problem-Solving Using Design Refinement with Plan Selection and Redesign, PhD dissertation, Ohio State University, 1984.
19. J. Sticklen, B. Chandrasekaran, J. W. Smith, and J. Svirbely, "MDX-MYCIN: The MDX Paradigm Applied to the MYCIN Domain," Int'l J. Computers and Mathematics with Applications, Vol. 11, No. 5, 1985, pp. 527-539.
20. R. O. Duda, J. G. Gaschnig, and P. E. Hart, "Model Design in the Prospector Consultant System for Mineral Exploration," in Expert Systems in the Microelectronic Age, D. Michie, ed., Edinburgh University Press, 1980, pp. 153-167.
21. W. J. Clancey, "Classification Problem Solving," Proc. Nat'l Conf. Artificial Intelligence, Austin, Tex., 1984, pp. 49-55.
22. T. Bylander, S. Mittal, and B. Chandrasekaran, "CSRL: A Language for Expert Systems for Diagnosis," Proc. Int'l Joint Conf. Artificial Intelligence, Aug. 1983, pp. 218-221.
23. B. Chandrasekaran, "Generic Tasks in Expert System Design and Their Role in Explanation of Problem Solving," Proc. Office of Naval Research Workshop on Distributed Problem Solving, May 16-17, 1985, National Academy of Sciences, to appear.
24. B. Chandrasekaran and M. C. Tanner, "Uncertainty Handling in Expert Systems: Uniform vs. Task-Specific Formalisms," in Uncertainty in Artificial Intelligence, L. N. Kanal and J. Lemmer, eds., North-Holland, 1986, in press.

B. Chandrasekaran has been at Ohio State University since 1969, where he directs the AI group. He is currently professor of computer and information science. From 1967 to 1969 he was a research scientist with the Philco-Ford Corporation in Blue Bell, Pennsylvania, working on speech- and character-recognition machines. His major research activities are currently in knowledge-based reasoning. Chandrasekaran received his bachelor of engineering degree with honors from Madras University in 1963 and his PhD from the University of Pennsylvania in 1967. He is associate editor for AI of IEEE Transactions on Systems, Man and Cybernetics and chairs the society's Technical Committee on AI. He was elected a fellow of the IEEE in 1986.

The author's address is Laboratory for Artificial Intelligence Research, Department of Computer and Information Science, Ohio State University, Columbus, OH 43210.
