# Inductive Constraint Logic (KUL)

• System: ICL
• Further specification: Inductive Constraint Logic
• Pointers: http://www.cs.kuleuven.ac.be/~wimv/ICL/
• Code: Prolog
• References: De Raedt and Van Laer 1995

An overview of ICL can be found in (De Raedt and Van Laer 1995). Here, we briefly describe the framework of ICL and expand on some practical aspects.

## Framework

We will use some notions of first order logic and model theory. Definitions of the concepts used here can be found in (De Raedt and Van Laer 1995).

ICL is a classification system that learns a clausal theory discriminating as well as possible between two classes of examples (say, positives and negatives). Examples are seen as interpretations I of the target theory T: a positive example (interpretation) P is a model of the target theory T (i.e. P is a model for every clause C in T), while a negative example N is not a model of T. One also says that a clause/theory is true/false in an example/interpretation, or even that an example is true for a clause, or covered by a clause. This view, in which examples are interpretations and the aim is to discriminate between two classes of examples, is similar to classical learning from positive and negative examples and originates from (De Raedt and Dzeroski 1994).

Note that all variables in the clauses are universally quantified. So an interpretation I is a model for a clause c iff for all grounding substitutions θ of c: I ⊨ cθ. This is easily checked by a theorem prover like Prolog.
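This model check can be sketched as follows (a toy Python illustration, not ICL's Prolog implementation; the tuple encoding of literals and the uppercase-variable convention are assumptions made for this sketch):

```python
from itertools import product

# A clause head_1 ; ... ; head_k <-- body_1 , ... , body_m is encoded as two
# lists of literal tuples like ('lumo', 'X'); uppercase arguments are variables.

def variables(lits):
    """Collect the variable names occurring in a list of literals."""
    return {a for _, *args in lits for a in args if a[:1].isupper()}

def ground(lit, theta):
    """Apply a grounding substitution theta to a literal."""
    pred, *args = lit
    return (pred, *[theta.get(a, a) for a in args])

def is_model(interpretation, heads, body):
    """True iff the interpretation is a model of the universally
    quantified clause, checked over all grounding substitutions
    built from the constants of the interpretation."""
    consts = {a for _, *args in interpretation for a in args}
    vs = sorted(variables(heads + body))
    for vals in product(consts, repeat=len(vs)):
        theta = dict(zip(vs, vals))
        body_true = all(ground(b, theta) in interpretation for b in body)
        head_true = any(ground(h, theta) in interpretation for h in heads)
        if body_true and not head_true:
            return False  # found a violating grounding substitution
    return True
```

For instance, the interpretation {p(a), q(a)} is a model of q(X) <-- p(X), whereas {p(a), q(b)} is not, since the substitution {X/a} violates the clause.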

In ICL, a definite clause background theory can be used as follows. Rather than starting from complete interpretations of the target theory, examples are partial interpretations I (i.e. sets of facts), which are completed by taking the minimal Herbrand model M(B ∪ I) of the background theory B together with the partial interpretation I. Note that the completion need not be computed explicitly, but can be left to the theorem prover. Using Prolog, one can assert the background theory and the interpretation into the knowledge base and, for a clause c = h1 ; ... ; hk <-- b1 , ... , bm, run the query ?- b1, ..., bm, \+ h1, ..., \+ hk: the clause c is true in the completed interpretation iff this query fails.
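For ground definite clauses, the completion can be sketched with naive forward chaining (the T_P operator). This is a hedged toy version in Python; in practice the completion is left to the Prolog prover, as noted above:

```python
# background: list of ground definite clauses (head, [body atoms]);
# facts: the partial interpretation I as a set of ground atoms.
def minimal_model(background, facts):
    """Least Herbrand model of background ∪ facts by iterating the
    immediate-consequence operator until a fixpoint is reached."""
    model = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in background:
            if all(b in model for b in body) and head not in model:
                model.add(head)
                changed = True
    return model
```

For example, with background clauses q(a) <-- p(a) and r(a) <-- q(a), the partial interpretation {p(a)} is completed to {p(a), q(a), r(a)}.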

Formally, the setting for ICL thus becomes:

• Given:
• a set P of interpretations such that each p ∈ P is a true interpretation of the unknown target theory (positive examples);
• a set N of interpretations such that each n ∈ N is a false interpretation of the unknown target theory (negative examples);
• a definite clause background theory B;
• a language L which defines the set of a priori acceptable clauses in a hypothesis;
• Find: a clausal theory H ⊆ L such that
• for all p ∈ P, p is a true interpretation of H (Completeness);
• for all n ∈ N, n is a false interpretation of H (Consistency).

## Practice

An overview of the ICL algorithm can be found in (De Raedt and Van Laer 1995). In short, ICL uses a covering approach and beam search to find solutions in CNF form. In this way, ICL is a kind of dual first order version of the DNF-learner CN2.
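The covering loop can be sketched schematically (Python for illustration; `find_best_clause` is a hypothetical stand-in for ICL's beam search and must return a clause that is true in all positives but false in some of the remaining negatives):

```python
def icl_cover(positives, negatives, find_best_clause):
    """Covering loop: repeatedly add the best clause to the theory and
    drop the negatives it already makes false, until none remain."""
    theory, remaining = [], list(negatives)
    while remaining:
        clause = find_best_clause(positives, remaining)  # beam search stand-in
        if clause is None:  # no clause excludes any further negatives
            break
        theory.append(clause)
        # keep only the negatives in which the learned clause is still true
        remaining = [n for n in remaining if clause(n)]
    return theory
```

Each learned clause thus plays the dual role of a rule in CN2: instead of covering more positives, it excludes more negatives, and the conjunction of all learned clauses is the CNF hypothesis.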

To specify the hypothesis language, ICL uses the same declarative bias as CLAUDIEN, i.e. DLAB (declarative language bias). DLAB is a formalism to formulate an intensional syntactic definition of the language L (a hypothesis is a conjunction of clauses, and DLAB specifies the allowed syntax for the clauses). This is automatically translated into a refinement operator (under θ-subsumption) for the specified language, which ICL uses to traverse the search space. For a complete overview of DLAB, we refer to the CLAUDIEN paper (De Raedt and Dehaspe 1996).
A small example:

```
false
<--
0-len:[len-len:[lumo(Lumo), lt(Lumo, 1-1:[-1, -2])],
       len-len:[atom(A1, Elem1, Type1, Charge1),
                lt(Charge1, 1-1:[-0.2, -0.1, 0, 0.1])
               ]
      ]
```
Min-Max:[List] means that at least Min and at most Max literals of List are allowed (len is the length of List). Note that lt(Lumo, 1-1:[-1,-2]) is a short notation for 1-1:[lt(Lumo, -1), lt(Lumo, -2)].
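The Min-Max choice at the heart of DLAB can be illustrated by a toy expander for a flat list (nested DLAB templates expand recursively and drive a refinement operator; this hedged sketch handles only the flat case):

```python
from itertools import combinations

def expand(min_, max_, items):
    """Enumerate every selection of at least min_ and at most max_
    elements of items, i.e. the readings of Min-Max:[items]."""
    for k in range(min_, max_ + 1):
        for choice in combinations(items, k):
            yield list(choice)
```

For instance, 1-1:['lt(Lumo,-1)', 'lt(Lumo,-2)'] expands to exactly the two singleton choices, matching the shorthand explained above, while 0-2 over two literals yields the empty choice, both singletons, and the pair.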

Currently, there is a working implementation of ICL in ProLog by BIM. Several heuristics (based on CN2) are incorporated to handle noise. This means that the learned theory need not be strictly complete and consistent with the training set: some clauses might not cover all positives, and not all negatives need to be excluded by some clause in the theory. Several parameters can be set to tune these heuristics.
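The exact heuristics are not detailed here; as one hedged illustration of the CN2-style approach (an assumption about the flavour of heuristic, not ICL's actual formula), CN2 scores rules with the Laplace accuracy estimate, which discounts rules supported by few examples:

```python
def laplace(pos_covered, neg_covered, num_classes=2):
    """Laplace accuracy estimate of a rule: the raw accuracy
    pos / (pos + neg), smoothed towards 1/num_classes so that
    rules covering few examples are not trusted blindly."""
    return (pos_covered + 1) / (pos_covered + neg_covered + num_classes)
```

A rule covering 9 positives and 1 negative scores 10/12 ≈ 0.83 rather than the raw 0.9, while a rule covering a single positive scores only 2/3.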

## References

1. L. De Raedt and L. Dehaspe. Clausal discovery. Machine Learning, 1996. To appear.
2. L. De Raedt and S. Dzeroski. First order jk-clausal theories are PAC-learnable. Artificial Intelligence, 70:375-392, 1994.
3. L. De Raedt and W. Van Laer. Inductive constraint logic. In Proceedings of the 6th International Workshop on Algorithmic Learning Theory, volume 997 of Lecture Notes in Artificial Intelligence. Springer-Verlag, 1995.
