%------------------------------------------------------------------------------%
ILP Newsletter Volume 3, Number 1, 15th February 1996
%------------------------------------------------------------------------------%
Editors: Saso Dzeroski and Nada Lavrac, Jozef Stefan Institute, Ljubljana, SI
%------------------------------------------------------------------------------%
Address all communication related to the ILP Newsletter to ilpnet@ijs.si
To subscribe/unsubscribe send email with subject SUBSCRIBE/UNSUBSCRIBE ILPNEWS
Send contributions in messages with subject heading ILPNEWS CONTRIBUTION
Send comments and suggestions under subject heading ILPNEWS COMMENTS
Back issues of the Newsletter and other information about ILPNET and ILP
are available via the World Wide Web (WWW), URL http://www-ai.ijs.si/ilpnet.html
%------------------------------------------------------------------------------%

Contents:
- Abstracts of PhD theses related to ILP
  * Henrik Bostrom: Explanation-Based Transformation of Logic Programs
  * Peter Idestam-Almquist: Generalization of Clauses
  * Kamal Ali: Learning Probabilistic Relational Concept Descriptions
- ILP'96 - Sixth International Workshop on ILP (CFP, LaTeX)
- ALT'96 - Seventh International Workshop on Algorithmic Learning Theory (CFP)
- IDAMAP'96 - Intelligent Data Analysis in Medicine and Pharmacology (CFP)
- KR'96 Workshop on Relevance in Knowledge Representation and Reasoning (CFP)
- ILP tutorial at ECAI'96
- New release of MOBAL
%------------------------------------------------------------------------------%

%------------------------------------------------------------------------------%
Abstracts of PhD theses related to ILP
%------------------------------------------------------------------------------%

Title: "Explanation-Based Transformation of Logic Programs"
Author: Henrik Bostrom
University: Department of Computer and Systems Sciences,
            Stockholm University and Royal Institute of Technology

Explanation-Based Generalization (EBG) is a
technique for deriving a specialization of a concept definition (target concept) from a proof (explanation) of why a particular instance (training example) belongs to the concept. The specialization is derived in order to provide a new definition of the target concept that can be used more efficiently than the original one to determine (some) instances of the concept. In a logic programming framework, the specialization corresponds to a clause that can be used to identify instances of a predicate more efficiently than by the original definition.

However, the addition of clauses produced by EBG to a logic program may degrade, rather than improve, the efficiency of the program. Two potential causes of this problem are the inefficient organization of the produced clauses and increased redundancy. We present approaches to these problems, which are reformulations of EBG in a program transformation framework. The presented algorithms are shown to be meaning preserving and to produce clauses that are equivalent to the clauses produced by EBG. Worst-case analyses of the size of the resulting programs are presented and the limitations of the algorithms are discussed. Experimental results show that the efficiency of programs obtained by applying EBG can be significantly improved by organizing the produced clauses efficiently and avoiding increased redundancy.

%------------------------------------------------------------------------------%

Title: "Generalization of Clauses"
Author: Peter Idestam-Almquist
University: Department of Computer and Systems Sciences,
            Stockholm University and Royal Institute of Technology
Year: 1993

In the area of inductive learning, generalization is a main operation, and the usual definition of induction is based on logical implication. Recently there has been a rising interest in clausal representation of knowledge in machine learning.
Almost all inductive learning systems that perform generalization of clauses use the generality relation theta-subsumption instead of implication. The main reason is that there is a well-known and simple technique to compute least general generalizations under theta-subsumption (LGGthetas), but not under implication. However, there is a difference between theta-subsumption and implication, which sometimes causes LGGthetas to be overly general w.r.t. implication.

We describe the well-known technique to compute LGGthetas, and the most important theoretical results connected with it. We study the theory of generalization under implication, and note that implication between clauses is undecidable. We therefore introduce a stronger form of implication, called T-implication, which is decidable between clauses. We show that for every finite set of clauses there exists a least general generalization under T-implication.

We describe a technique to reduce generalizations under implication of a clause to generalizations under theta-subsumption, by replacing the clause with a set of clauses or-introduced from the clause by a sequence of literals. We also show how an or-introduced set of clauses can equivalently be described by a single clause, which we call an expansion of the original clause. Moreover, we prove that for every non-tautological clause there exists a T-complete expansion, which means that every generalization under T-implication of the clause is reduced to a generalization under theta-subsumption of the expansion. We present a technique to compute generalizations under implication of a set of clauses by first computing common expansions of the clauses and then computing an LGGtheta of the common expansions. The computational complexity both of the well-known technique to compute LGGthetas and of our technique to compute generalizations under implication grows exponentially in the size of the input.
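[Editors' note: as a concrete illustration of the "well-known and simple technique" mentioned in the abstract, here is a minimal Python sketch of Plotkin's least general generalization (anti-unification) of two atoms, the basic building block of LGG under theta-subsumption. The code is illustrative only and is not taken from the thesis; the term representation (tuples for compound terms, strings for constants) and variable names are our own.]

```python
# Plotkin's LGG of two atoms.  A compound term f(t1,...,tn) is the tuple
# ('f', t1, ..., tn); constants are plain strings.  Differing subterm
# pairs are replaced by variables, and the SAME pair always maps to the
# SAME variable -- that reuse is what makes the result *least* general.

def lgg(s, t, table):
    """Anti-unify terms s and t; `table` maps (s, t) pairs to variables."""
    if s == t:
        return s
    if (isinstance(s, tuple) and isinstance(t, tuple)
            and s[0] == t[0] and len(s) == len(t)):
        # Same functor and arity: recurse over the arguments.
        return (s[0],) + tuple(lgg(a, b, table) for a, b in zip(s[1:], t[1:]))
    # Differing terms become a shared variable.
    if (s, t) not in table:
        table[(s, t)] = 'V%d' % len(table)
    return table[(s, t)]

table = {}
a1 = ('p', ('f', 'a'), 'a', 'b')      # p(f(a), a, b)
a2 = ('p', ('f', 'b'), 'b', 'b')      # p(f(b), b, b)
print(lgg(a1, a2, table))             # ('p', ('f', 'V0'), 'V0', 'b'), i.e. p(f(V0), V0, b)
```

Note how both occurrences of the (a, b) pair map to the one variable V0, so the result p(f(V0), V0, b) theta-subsumes both atoms and is least among such generalizations.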
%------------------------------------------------------------------------------%

The following dissertation is available via anonymous FTP and through http://www.ics.uci.edu/~ali (either as a whole or by chapters).

Title: "Learning Probabilistic Relational Concept Descriptions"
Author: Kamal Ali
University: Department of Information and Computer Science,
            University of California, Irvine

Key words: learning probabilistic concepts, multiple models, multiple classifiers, combining classifiers, evidence combination, relational learning, first-order learning, noise-tolerant learning, learning of small disjuncts, inductive logic programming.

A B S T R A C T

This dissertation presents results in the area of multiple models (multiple classifiers), on learning probabilistic relational (first-order) rules from noisy, "real-world" data, and on reducing the small disjuncts problem - the problem whereby learned rules that cover few training examples have high error rates on test data.

Several results are presented in the arena of multiple models. The multiple models approach is relevant to the problem of making accurate classifications in "real-world" domains since it facilitates the evidence combination that is needed to learn accurately in such domains. It is also useful when learning from small training samples, in which many models appear to be equally "good" w.r.t. the given evaluation metric. Such models often have quite different error rates on test data, so in such situations the single-model method runs into problems. Increasing the amount of search only partly addresses this problem, whereas the multiple models approach has the potential to be much more useful. The most important result of the multiple models research is that the *amount* of error reduction afforded by the multiple models approach is linearly correlated with the degree to which the individual models make errors in an uncorrelated manner.
This work is the first to model the degree of error reduction due to the use of multiple models. It is also shown that it is possible to learn models that make less-correlated errors in domains in which there are many ties in the search evaluation metric during learning. The third major result of the research on multiple models is the realization that models should be learned that make errors in a negatively-correlated manner rather than in an uncorrelated (statistically independent) manner.

The thesis also presents results on learning probabilistic first-order rules from relational data. It is shown that learning a class description for each class in the data - the one-per-class approach - and attaching probabilistic estimates to the learned rules allows accurate classifications to be made on real-world data sets. The thesis presents the system HYDRA, which implements this approach. It is shown that the resulting classifications are often more accurate than those made by three existing methods for learning from noisy, relational data. Furthermore, the learned rules are relational and so are more expressive than the attribute-value rules learned by most induction systems.

Finally, results are presented on the small-disjuncts problem, in which rules that apply to rare subclasses have high error rates. The thesis presents the first approach that is simultaneously successful at reducing the error rates of small disjuncts while also reducing the overall error rate by a statistically significant margin. The previous approach, which aimed to reduce small-disjunct error rates, did so only at the expense of increasing the error rates of large disjuncts. It is shown that the one-per-class approach reduces error rates for such rare rules while not sacrificing the error rates of the other rules.

The dissertation is approximately 180 pages long (single spaced) (~590K).
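[Editors' note: the link between error correlation and the benefit of multiple models can be seen in a toy example. The sketch below is ours, not from the dissertation: three classifiers each misclassify 3 of 9 test examples (33% individual error), and a majority vote is taken. When the error sets are disjoint (uncorrelated), the vote corrects every error; when the models all err on the same examples (perfectly correlated), the vote inherits all of them.]

```python
# Majority voting over multiple models: count the test examples that a
# majority of models gets wrong.  Each model is represented simply by
# the set of example indices it misclassifies.

def majority_vote_errors(error_sets, n_examples):
    """Number of examples misclassified by a strict majority of models."""
    errors = 0
    for i in range(n_examples):
        wrong = sum(1 for s in error_sets if i in s)
        if wrong > len(error_sets) / 2:
            errors += 1
    return errors

# Three models, each wrong on 3 of 9 examples (33% individual error rate).
uncorrelated = [{0, 1, 2}, {3, 4, 5}, {6, 7, 8}]   # disjoint error sets
correlated   = [{0, 1, 2}, {0, 1, 2}, {0, 1, 2}]   # identical error sets

print(majority_vote_errors(uncorrelated, 9))   # 0: the vote fixes every error
print(majority_vote_errors(correlated, 9))     # 3: the vote inherits them all
```

The same individual error rate thus yields ensemble error anywhere between 0% and 33%, depending only on how correlated the models' mistakes are - the quantity the dissertation models.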
ftp ftp.ics.uci.edu
logname: anonymous
password: your email address
cd /pub/ali
binary
get thesis.ps.Z
quit

%------------------------------------------------------------------------------%

%------------------------------------------------------------------------------%

\documentstyle[fullpage]{article}
\begin{document}
\thispagestyle{empty}

{\large\bf\center The Sixth International Workshop on\\
Inductive Logic Programming (ILP'96) \\}
{\center 28-30 August, 1996 \\ Stockholm, Sweden \\}
\vspace{.2in}

This workshop is the sixth in a series of international workshops on Inductive Logic Programming. ILP'96 will be run in parallel with the Sixth International Workshop on Logic Program Synthesis and Transformation (LOPSTR'96).

Papers should fit into one, or preferably more, of the following three areas.
\begin{itemize}
\item {\bf Theory.} Of particular interest are papers that either 1) prove new results concerning algorithms which use inductive learning to construct first- or higher-order logic descriptions or 2) reveal relationships to theoretical work done outside of ILP, especially work in program synthesis and transformation.
\item {\bf Implementation.} Details of implemented inductive algorithms. Time complexity results should be included.
\item {\bf Application.} Experimental results within one or more application areas should be tabulated with appropriate statistics. Sufficient details should be included to allow reproduction of results. Comparative studies of different algorithms running on the same examples, using the same background knowledge, are especially welcome, as are papers that explore new application areas for ILP.
\end{itemize}

ILP'96 and LOPSTR'96 will take place on board a ship which will sail from Stockholm to Helsinki and back during the workshop.

\section*{Program Committee}
\vspace{-.2in}
\begin{tabbing}
mmmmmmmmmmmmmmmm \= mmmmmmmmmmmmmmmm \= mmmmmmmmmmmmmmmm \= mmmmmmmmmmmmmmmm \kill \\
F. Bergadano \> P. Flach \> R. Mooney \> J.R.
Quinlan \\
I. Bratko \> P. Idestam-Almquist \> S. Muggleton \> C. Rouveirol \\
L. De Raedt \> N. Lavra\v{c} \> M. Numao \> C. Sammut \\
S. D\v{z}eroski \> S. Matwin \> C.D. Page \> A. Srinivasan \\
\> \> \> S. Wrobel \\
\end{tabbing}

\section*{Organization}
\begin{tabbing}
\= {\it Program Chair:} \= Stephen Muggleton \hspace{1.9in} \= {\it Local Chair:} \= Carl Gustaf Jansson \\
\>\>Oxford University Computing Laboratory \>\> University of Stockholm \\
\>\>Wolfson Building, Parks Road \>\> Email: calle@dsv.su.se \\
\>\>Oxford, OX1 3QD, U.K. \\
\>\>Email: steve@comlab.ox.ac.uk \\
\end{tabbing}

\section*{Deadlines}
Submissions (hardcopy only) must be received by the {\bf program chair} no later than {\bf 17 May, 1996}. Submissions should include the postal address and email address (if available) of each author; the first author will be used as the contact author unless otherwise specified. Authors will be informed of acceptance by {\bf 28 June, 1996}.

Some or all of the papers accepted for presentation at ILP'96 will be selected for inclusion in a post-workshop publication, at the discretion of the program committee. Notification of acceptance for the post-workshop publication will also be made by {\bf 28 June, 1996}. Authors will then be notified of the deadline for camera-ready copies, which will be no earlier than {\bf October 15, 1996}.
\end{document}

%------------------------------------------------------------------------------%

%------------------------------------------------------------------------------%

Call for Papers: ALT'96

The Seventh International Workshop on Algorithmic Learning Theory
Coogee Holiday Inn, Sydney, Australia
October 23-25, 1996

The 7th International Workshop on Algorithmic Learning Theory (ALT'96) will be held at the Coogee Holiday Inn, Sydney, Australia during October 23-25, 1996, and will be collocated with the Pacific Rim Knowledge Acquisition Workshop.
The workshop is being sponsored by the Japanese Society for Artificial Intelligence (JSAI) and the University of New South Wales (UNSW). We invite submissions to ALT'96 in all areas related to algorithmic learning theory including (but not limited to): the design and analysis of learning algorithms, the theory of machine learning, computational logic of/for machine discovery, inductive inference, learning via queries, artificial and biological neural networks, pattern recognition, learning by analogy, Bayesian/MDL estimation, statistical learning, inductive logic programming, robotics, application of learning to databases, gene analysis, etc. INVITED TALKS: Invited talks will be given by Prof. J.R. Quinlan, (University of Sydney), Prof. T. Shinohara (Kyushu Institute of Technology), Prof. Les Valiant (Harvard Univ.), and Prof. Paul Vitanyi (CWI and Univ. of Amsterdam). SUBMISSIONS: Authors must submit nine copies of their extended abstracts to: Arun Sharma - ALT'96 School of Computer Science and Engineering University of New South Wales Sydney, 2052, Australia ABSTRACTS must be received by April 15, 1996. NOTIFICATION of acceptance or rejection will be mailed to the first (or designated) author by June 3, 1996. CAMERA-READY copy of accepted papers will be due July 1, 1996. FORMAT: The submitted abstract should consist of a cover page with title, authors' names, postal and e-mail addresses, an approximately 200 word summary, and a body not longer than ten (10) pages of size A4 or 7x10.5 inches in twelve-point font. Note that only the first ten (10) pages of the body will be sent out for review. Double-sided printing is strongly encouraged. POLICY: Each submitted abstract will be reviewed by the members of the program committee, and be judged on clarity, significance, and originality. Simultaneous submission of papers to any other conference with published proceedings is not allowed. 
Papers that have appeared in journals or other conferences are not appropriate for ALT'96.

PROCEEDINGS will be published as a volume in the Lecture Notes in Artificial Intelligence series from Springer-Verlag, and will be available at the conference. Selected papers of ALT'96 will be invited to be published in a special issue of a distinguished journal.

CONFERENCE CHAIR:
Prof. Setsuo Arikawa
RIFIS, Kyushu University 33
Fukuoka, 812 Japan
arikawa@rifis.kyushu-u.ac.jp

PROGRAM COMMITTEE CHAIR:
Arun Sharma, Univ. of New South Wales
arun@cse.unsw.edu.au

PROGRAM COMMITTEE:
H. Arimura (KyuTech), Jose Balcazar (UPC, Barcelona), P. Bartlett (ANU), W. Cohen (AT&T), S. Ben David (Technion), H. Imai (U. Tokyo), K.P. Jantke (TH Leipzig), S. Kobayashi (U. Electro-Comm.), M. Numao (TiTech), S. Jain (National U. Singapore), S. Lange (TH Leipzig), L. De Raedt (Leuven), Y. Sakakibara (Fujitsu Labs), M. Sato (Osaka Pref. U.), O. Watanabe (TiTech), K. Yamanishi (NEC), T. Zeugmann (Kyushu)

LOCAL ARRANGEMENTS CHAIR:
Achim Hoffmann
School of Computer Science and Engineering
University of New South Wales
Sydney 2052 Australia
alt96@cse.unsw.edu.au

For more information, contact:
Email: alt96@cse.unsw.edu.au
Homepage: http://www.cse.unsw.edu.au/~alt96/

%------------------------------------------------------------------------------%

%------------------------------------------------------------------------------%

IDAMAP-96
INTELLIGENT DATA ANALYSIS IN MEDICINE AND PHARMACOLOGY

First Call for Papers for the Workshop at ECAI-96
12th European Conference on Artificial Intelligence
August 12-16, 1996, Budapest, Hungary

Organized by:
Nada Lavrac, J. Stefan Institute, Slovenia (chair)
Pedro Barahona, Universidade Nova de Lisboa, Portugal
Riccardo Bellazzi, University of Pavia, Italy
Werner Horn, Austrian Research Institute for Artificial Intelligence
Elpida Keravnou, University of Cyprus (co-chair)
Cristiana Larizza, University of Pavia, Italy
Blaz Zupan, J.
Stefan Institute, Slovenia (co-chair)

GENERAL INFORMATION

IDAMAP-96, an ECAI-96 workshop, will be held in Budapest, Hungary, on 13 August 1996, immediately before the main ECAI-96 conference, August 14-16, 1996. The workshop will last one full day. Gathering in an informal setting, workshop participants will have the opportunity to meet and discuss selected technical topics in an atmosphere which fosters the active exchange of ideas among researchers and practitioners. To encourage interaction and a broad exchange of ideas, the workshop will be kept small, preferably around 30 participants.

TOPIC

The gap between data generation and data comprehension is widening in all fields of human activity. In medicine and pharmacology, overcoming this gap is particularly crucial, since medical decision making needs to be supported by arguments based on basic medical and pharmacological knowledge as well as on knowledge, regularities and trends extracted from data by intelligent data analysis techniques. The topic of the workshop is computational methods for intelligent data analysis, aimed at narrowing the gap between data gathering and data comprehension, as well as their applications in medicine and pharmacology. Topics include, but are not limited to, effective machine learning tools, clustering, data visualization, interpretation of time-ordered data (derivation and revision of temporal trends and other forms of temporal data abstraction), learning with case bases, discovery of new diseases, new drug compounds, pharmacodynamical modelling, predicting drug activity, etc. Emphasis will also be given to solving problems which result from automated data collection in modern hospitals, such as analysis of computer-based patient records (CPR), analysis of data from patient-data management systems (PDMS), intelligent alarming, effective and efficient monitoring, etc.
SCIENTIFIC PROGRAM

The scientific program of the workshop will consist of presentations of accepted papers and panel discussions. Papers are invited both on methodological issues of data mining and on specific applications in medicine and pharmacology. The preferred length of papers is 10 pages. Panel discussions will consist of commentators' views on the presented papers as well as of discussions initiated by participants. In order to be able to organize these discussions, entries for discussion are encouraged on any topic related to the workshop. We especially encourage entries on the topic "Data mining and knowledge discovery - its practical potential in medicine and pharmacology". The preferred length of entries for panel discussions is 1 page.

SUBMISSION OF PAPERS

Submit papers (preferably 5 hard copies, 8-12 pages, possibly postscript) and panel discussion entries (hardcopy or electronic, 1 page) to:

Nada Lavrac, Blaz Zupan
J. Stefan Institute
Jamova 39
61000 Ljubljana
Slovenia
tel. +386 61 177 3272, 177 3380
fax. +386 61 125 1038, 219 385
email: ecai96wk@ijs.si

Submissions must include the first author's complete contact information, including address, email, phone and fax.

WORKSHOP PARTICIPATION

Workshop participation is not limited to authors of submissions. A limited number of other attendees will be selected based on submitted statements of interest in participating at the workshop. A statement of interest (send an email to ecai96wk@ijs.si) should include the name, address, email, phone, fax and a description of research interests.

IMPORTANT DATES

- Paper submission deadline   April 2, 1996
- Notification to Authors     April 26, 1996
- Camera-ready papers         May 15, 1996

PUBLICATION OF PAPERS

Accepted papers will be published in the ECAI-96 working notes. It is planned to publish a post-conference publication based on selected workshop papers.

WORKSHOP FEE

- Workshop fee is 50 ECU per participant.
- Attendees at workshops must also register for the main ECAI conference.

WORLD WIDE WEB (WWW)

For up-to-date workshop information please check:
http://www-ai.ijs.si/ailab/activities/idamap96.html

%------------------------------------------------------------------------------%

%------------------------------------------------------------------------------%

KR'96 Pre-Conference Workshop on
Relevance in Knowledge Representation and Reasoning

3-4 November, 1996
Boston, Massachusetts

------------------------------------------------------
C A L L   F O R   P A P E R S
------------------------------------------------------
http://www.research.att.com/orgs/ssr/people/levy/rrr-cfp.html
------------------------------------------------------

Essentially all reasoning systems use a corpus of information to reach appropriate conclusions. For example, deductive systems use initial theories (possibly encoded as predicate calculus statements) from which they draw conclusions, probabilistic systems use prior distributions (possibly encoded as a Bayesian network) to compute event probabilities, and abductive processes produce explanations based on both background theories and observations. With too little information, these systems clearly cannot work correctly. Surprisingly, too *much* information is also problematic, as it too can cause significant degradation in system performance. It is therefore critical to determine what information is irrelevant, to know what can be ignored or downplayed when considering a specific task (e.g., a specific query, or distribution of queries, to the system, or a specific observation to be explained). In some cases, ignoring irrelevant information is needed in order to draw the correct conclusions.

There are many forms of irrelevance. In some contexts, the initial theory may include more information than the task requires, or information at a level of granularity that is more detailed than necessary.
Here, the system may perform more effectively if it ignores or deletes certain irrelevant facts or if it ignores certain distinctions made in the representation. Another flavor of irrelevance arises during the course of reasoning: A reasoning process can ignore certain intermediate results, once it has established that they will not contribute to the eventual answer. This workshop follows the very eclectic 1994 Relevance Symposium, which investigated the notion of relevance across various fields of Artificial Intelligence and Computer Science. The current workshop, however, will focus on the use of relevance in knowledge representation and reasoning, specifically, on understanding different forms of irrelevance, and exploiting this "relevance information" to improve the performance of reasoning systems. Submissions are requested in areas relating to relevance in KR&R, including, but not limited to, the following: o Speeding up inference using relevance reasoning. o Relevance in probabilistic reasoning. o Relevance in explanation. o Relationships between relevance and belief revision and updates. o Relevance reasoning as a basis for abstraction and reformulation. o Using relevance of information to enable drawing appropriate conclusions. o Applications of relevance reasoning. o Reasoning about relevance of information, and foundations of relevance reasoning. Submission Information ====================== Authors wishing to present a paper should submit an extended abstract of at most 5000 words. Accepted participants will be invited to submit full papers for the workshop proceedings, which will be distributed to the workshop participants. Persons wishing to attend the workshop and not to present papers should submit a 1--2 page research summary that includes a list of relevant publications. Authors are encouraged to submit PostScript versions of their paper by email to either Russ Greiner (greiner@scr.siemens.com) or Alon Levy (levy@research.att.com). 
Authors unable to submit by email should send 4 copies of their paper to the address below. All submissions should be received by July 8, 1996. Please be sure to include the e-mail address, telephone number and mailing address of the principal author. In case of multiple authors, please indicate which authors wish to participate. Notification of acceptance or rejection will be mailed to the principal author by August 16, 1996. Camera-ready copies of papers accepted for inclusion in the proceedings will be due September 17, 1996.

Address for hardcopy submissions:
Russell Greiner
Siemens Corporate Research, Inc
755 College Road East
Princeton, NJ 08540-6632

Important Dates
===============
- Submissions due:            July 8, 1996
- Notification of acceptance: August 16, 1996
- Final version due:          September 17, 1996
- Workshop dates:             November 3-4, 1996

Program Chairs:
===============
Russ Greiner (Siemens Corporate Research, greiner@scr.siemens.com)
Alon Levy (AT&T Bell Laboratories, levy@research.att.com)

Program Committee:
==================
Adnan Darwiche (Rockwell)
Jim Delgrande (Simon Fraser University)
Daphne Koller (Stanford University)
Gerhard Lakemeyer (University of Bonn)
Alberto Mendelzon (University of Toronto)
Devika Subramanian (Rice University)

%------------------------------------------------------------------------------%

%------------------------------------------------------------------------------%

ILP tutorial @ ECAI

Inductive Logic Programming

Instructor: Dr. Stan Matwin
Professor of Computer Science and Electrical Engineering
University of Ottawa, Canada

Inductive Logic Programming (ILP) is a new, burgeoning field of AI, combining machine learning and logic programming. ILP learns relational (first-order logic) concept descriptions from facts. One of the most active and innovative fields in AI, particularly in Europe, ILP can also be viewed as a technique for developing logic programs from known instances of their input-output behavior.
ILP reaches beyond the limitations of inductive learning systems based on attribute-value representation of examples and concepts. The tutorial will clarify the goals and motivations of ILP. Classical bottom-up and top-down methods for learning Horn clauses from examples will be described in a simplified form. We will discuss the principles behind the successful ILP systems FOIL and PROGOL. We will discuss how some ILP systems are capable of "creative" learning, going beyond the language in which the examples and the background knowledge are expressed. We will then survey recent, successful applications of ILP in areas such as pharmaceutical design, music, protein structure prediction, CAD, natural language processing, etc. The tutorial does not assume any advanced background beyond the basic concepts of logic.

S. Matwin is Professor of Computer Science and Electrical Engineering at the University of Ottawa, Canada, where he teaches machine learning, ILP, AI, and compiler construction. His research interests are in machine learning and its applications, with special emphasis on ILP and environmental applications of AI. Stan has published more than 70 papers in journals and refereed international conferences. A member of the program committees of a number of conferences in machine learning, he is also the president of the Canadian Society for Computational Studies of Intelligence, and chair of IFIP Working Group 12.2 (Machine Learning). He has taught ILP tutorials at AAAI-94 and IJCAI-95.

Check http://www.dfki.uni-sb.de/ecai96/tutorials for details of registration etc.
%------------------------------------------------------------------------------%

%------------------------------------------------------------------------------%

Announcing: new version of Mobal available

The knowledge acquisition and machine learning system MOBAL (release 4.1b9) is available free for non-commercial academic use from the anonymous ftp-server 'ftp.gmd.de' in the directory 'gmd/mlt/Mobal'. The system requires a Sun Sparc Station, SunOS 4.1 and X11R5 or later. The user interface is implemented with Tcl/Tk --- all you need to run Mobal is a Sun and X11.

The information included below, plus a little more, is more colorfully available at:
http://nathan.gmd.de/projects/ml/mobal.html

About Mobal 4.1b9

Mobal is a sophisticated system for developing operational models of application domains in a relational knowledge representation. It integrates a manual knowledge acquisition and inspection environment, a powerful inference engine, machine learning methods for automated knowledge acquisition, a knowledge revision tool, and -- this is the new bit -- a host of services related to the topic of Theory Restructuring:

* various forms of redundancy analysis & elimination
* methods & strategies for changing inferential structure (folding & unfolding)
* evaluation criteria for comparing empirically equivalent but syntactically different forms of a theory (such as those produced by applying one or more restructuring operators)
* miscellaneous analysis, restructuring and cleanup services:
  o detailed overview of the form & content of the theory (pos, neg, covered, uncovered instances; statistics on the number of rules, predicates, facts, sorts, integrity constraints, metarules & -facts, etc.; focus-able text and graphical views of rules, facts, predicates, etc.)
  o hypothetical reasoning, esp.
explanation of failure to cover
  o detection of non-generative rules & suggested fix
  o detection of unused predicates
  o determination of minimum required inputs relative to a given goal concept

See also http://nathan.gmd.de/persons/edgar.sommer/scientific.html

By using Mobal's knowledge acquisition environment, you can incrementally develop a model of your domain in terms of logical facts and rules. You can inspect the knowledge you have entered in text or graphics windows, augment the knowledge, or change it at any time. The built-in inference engine can immediately execute the rules you have entered to show you the consequences of your inputs, or answer queries about the current knowledge. Mobal also builds a dynamic sort taxonomy from your inputs. If you wish, you can use machine learning methods to automatically discover additional rules based on the facts that you have entered, or to form new concepts. (Mobal can be used as a front-end for any induction algo that runs under SunOS & can do i/o via files -- if you have one we don't, consult the example interfaces that come with the distribution (in the tools dir), modify to your needs, and send us a message.) If there are contradictions in the knowledge base due to incorrect rules or facts, there is a knowledge revision tool to help you locate the problem and fix it.

User Guide

MOBAL's User Guide has been completely reworked and extended for the new release. A draft of this version is browse-able at:
http://nathan.gmd.de/projects/ml/mobal.html
.. and is part of the distribution package, so if you're getting that, you do not need to get the user guide separately.
Acknowledgments Mobal is a result of research funded in part by the European Community within the type B ESPRIT Project 2154 "Machine Learning Toolbox" and the ESPRIT Project "Inductive Logic Programming" (ILP, PE 6020) and is based on the System BLIP developed in the project "Lerner" at the Technical University Berlin funded by the German government (BMFT) under contract ITW8501B1, and is further made possible by the existence of coffee. Thanks! Restrictions MOBAL is available in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of FITNESS FOR A PARTICULAR PURPOSE. MOBAL can be used free of charge for academic, educational, or non-commercial uses. We do emphatically request, however, that you send us mail (mobal@gmd.de) so we know where MOBAL is going. Warm words in closing Sadly, work on Mobal is not currently at the apex of our official duties, so we may be slow in responding to Stupid Questions(sm), but promise never to get angry. Please, however, consider RTFM'ing (see pointers to animals called "user guide" & "web pages" above) before attempting to grab our attention with -- examples picked at random -- requests for porting several MB's worth of code to DOS/286 or ZX Spectrum. We are currently fiddling around with an unmoderated mailing list for Mobalites; if you use Mobal, consider sending mail to majordomo@gmd.de, with content subscribe mobal-list Let us know if you are doing anything interesting with Mobal! Cheers, the MLGroup@GMD %------------------------------------------------------------------------------%