References: Blockeel and De Raedt 1998
In the ILP domain, most systems have until now used the covering approach, although some authors (e.g., Bostrom 1995) have pointed out that the divide-and-conquer strategy can be advantageous in some cases.
Recently, an algorithm has been developed at the K.U.Leuven that learns a predicate logic theory by means of so-called logical decision trees. Logical decision trees are a first-order logic upgrade of the classical decision trees used by propositional learners. In the same manner as propositional rules can be derived from decision trees (each rule corresponds to a path from the root to some leaf, with the tests in the nodes on that path forming the conditions of the rule), clauses can be derived from logical decision trees (each test on the path from root to leaf now being a literal or conjunction of literals that becomes part of the clause). The resulting trees can be used directly to classify unseen examples, but they can also easily be transformed into a logic or Prolog program.
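The path-to-clause correspondence described above can be sketched as follows. This is a minimal illustration, not TILDE's actual implementation: the tree classes, the toy machine-maintenance tree (sendback/fix/ok), and the simplistic `not(...)` treatment of failed tests are assumptions made for the example.

```python
# Illustrative sketch: each root-to-leaf path of a (logical) decision tree
# yields one rule, accumulating the literals tested along the way.

class Node:
    def __init__(self, test, yes, no):
        self.test = test  # a literal or conjunction, e.g. "worn(X)"
        self.yes = yes    # subtree for examples where the test succeeds
        self.no = no      # subtree for examples where it fails

class Leaf:
    def __init__(self, label):
        self.label = label

def paths_to_rules(node, conditions=()):
    """Collect one (class, body) rule per root-to-leaf path."""
    if isinstance(node, Leaf):
        return [(node.label, list(conditions))]
    rules = []
    rules += paths_to_rules(node.yes, conditions + (node.test,))
    rules += paths_to_rules(node.no, conditions + ("not(" + node.test + ")",))
    return rules

# Toy tree (illustrative machine-maintenance example):
tree = Node("worn(X)",
            Node("not_replaceable(X)", Leaf("sendback"), Leaf("fix")),
            Leaf("ok"))

for label, conds in paths_to_rules(tree):
    print(f"class({label}) :- " + ", ".join(conds) + ".")
# prints, one clause per path:
# class(sendback) :- worn(X), not_replaceable(X).
# class(fix) :- worn(X), not(not_replaceable(X)).
# class(ok) :- not(worn(X)).
```

Note that in real first-order trees the handling of the failure branch is subtler (the negation scopes over the whole conjunction of literals introduced in the test); the sketch only conveys the path-per-clause idea.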
The ILP setting used by this algorithm is the ``learning from interpretations'' setting (De Raedt and Dzeroski 1994), as also used by the Claudien (De Raedt and Dehaspe 1996) and ICL (De Raedt and Van Laer 1995) systems.
The TILDE system is a prototype implementation of this algorithm. It incorporates many features of Quinlan's C4.5, a state-of-the-art decision tree learner for attribute-value problems. In addition, a number of techniques specific to ILP are used: a language bias can be specified (types and modes of predicates), a form of lookahead is incorporated, and dynamic generation of literals (DGL) is possible. The latter, based on Claudien's call handling, is a technique that allows, among other things, constants in a literal to be filled in. For learning in numerical domains, a discretization procedure is available that the DGL procedure can use to find interesting constants.
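To give an idea of how discretization can supply candidate constants, here is a minimal equal-frequency sketch. It is an assumption for illustration only, not TILDE's actual discretization procedure: the function name and the choice of equal-frequency bins with midpoint cut points are invented for this example.

```python
# Illustrative equal-frequency discretization: the returned thresholds
# could serve as candidate constants for dynamically generated literals
# such as X < threshold.

def equal_frequency_thresholds(values, n_bins):
    """Return n_bins - 1 cut points splitting `values` into
    roughly equally populated intervals."""
    ordered = sorted(values)
    n = len(ordered)
    thresholds = []
    for i in range(1, n_bins):
        cut = (i * n) // n_bins          # boundary index of the i-th bin
        lo, hi = ordered[cut - 1], ordered[cut]
        thresholds.append((lo + hi) / 2)  # midpoint between neighbours
    return thresholds

print(equal_frequency_thresholds([1, 2, 3, 4, 5, 6, 7, 8], 4))
# → [2.5, 4.5, 6.5]
```

Each threshold could then be substituted into a literal template (e.g. `temperature(X, T), T < 4.5`) during dynamic generation of literals.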
This implementation, not surprisingly, performs as well as C4.5 on propositional problems (though at lower speed), but experiments on typical ILP data sets also show promising results with respect to predictive accuracy, efficiency and theory complexity.
Development of the TILDE system was continued during the last year. In the previous report it was mentioned that a derivative of TILDE called CO.5 had been developed that can be used for regression and clustering. This derivative has in the meantime been improved and integrated into the original system, resulting in a single system that can perform classification, clustering and regression. The resulting system was called TILDE2.0.
A second important development is that whereas in previous versions of TILDE examples were represented by interpretations (the "learning from interpretations" setting), the current version is able to work with data represented in the standard setting (learning from entailment). This makes it easier for users of other ILP systems to run TILDE on their data (no change of format is needed anymore). This change has been incorporated in TILDE2.1.
Finally, a separate version of TILDE called TILDELDS has been developed; this system was implemented so that it can handle very large data sets. This version of TILDE employs an algorithm originally proposed by . It has been shown that TILDELDS is able to handle data sets of over 100MB or 100,000 examples. TILDELDS is only a prototype and (in its current form) will not be integrated with TILDE2.1; the main results obtained from this research are 1) experimental proof that induction of first-order logical decision trees scales up linearly, making the processing of large data sets feasible, and 2) new methods for improving the efficiency of ILP systems.
Current work includes improving the user-friendliness of the system and further improving its performance (by incorporating the lessons learned during the development of TILDELDS). Our ultimate goal is to have an inductive system that has very few constraints with respect to the tasks that it can handle (classification/clustering/regression), the representation of the examples (simple or structured) and the size of the data set.