CLOG (York)

System:          CLOG
Version:         N/A
Code:            SICStus Prolog
References:      [3]
Pointers:        suresh@cs.york.ac.uk
Other comments:  restricted use

CLOG [3] is a system for learning first-order decision lists. It shares a fair amount of similarity with one of its predecessors, FOIDL [1]: like FOIDL, CLOG can learn first-order decision lists from positive examples only, using the output completeness assumption [1]. In the current implementation, the generalisations relevant to an example are supplied by a user-defined predicate that takes an example as input and generates a list of all generalisations covering that example.
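For illustration, such a predicate might look as follows in Prolog for a toy past-tense task. This is only a sketch: the generalisations/2 name, the past/2 example representation and the suffix_rule/2 candidate form are assumptions made here, not CLOG's actual interface.

    % generalisations(+Example, -Gens): enumerate every candidate
    % generalisation covering the example. Here a candidate pairs a
    % suffix of the present form with a suffix of the past form.
    generalisations(past(Present, Past), Gens) :-
        findall(suffix_rule(S1, S2),
                ( atom_concat(_, S1, Present),   % S1: any suffix of Present
                  atom_concat(_, S2, Past) ),    % S2: any suffix of Past
                Gens).

Under these assumptions, a call such as generalisations(past(walk, walked), Gs) enumerates candidates ranging from the fully specific suffix_rule(walk, walked) down to the maximally general suffix_rule('', '').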

CLOG treats the set of generalisations of an example as a generalisation set. In each iteration it makes a single pass that cycles every input example through the generalisation set, checking whether each candidate generalisation covers the example positively or negatively. Once this pass is complete, the "best" candidate generalisation is chosen, the example set is pruned using this candidate, and the cycle repeats.
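This is, in essence, a covering loop. The sketch below shows one way the cycle could look in Prolog; all predicate names are illustrative, covers/2 is presumed user-supplied, and nothing here reflects CLOG's actual internals.

    :- use_module(library(lists)).            % member/2, append/3, last/2

    % learn(+Examples, +PreviouslyCovered, -DecisionList):
    % a sketch of the covering cycle described above.
    learn([], _, []).
    learn(Examples, Covered, [Best | Rest]) :-
        candidate_pool(Examples, Candidates),
        best_by_gain(Candidates, Examples, Covered, Best),
        prune(Best, Examples, NowCovered, Remaining),
        NowCovered \== [],                    % guarantee progress
        append(NowCovered, Covered, Covered1),
        learn(Remaining, Covered1, Rest).

    % Collect the generalisations of every remaining example into one pool.
    candidate_pool(Examples, Candidates) :-
        findall(G,
                ( member(E, Examples),
                  generalisations(E, Gs),     % the user-defined predicate
                  member(G, Gs) ),
                Pool),
        sort(Pool, Candidates).               % sort/2 also removes duplicates

    % Choose the candidate with the highest gain (gain/4 is sketched below).
    best_by_gain(Candidates, Examples, Covered, Best) :-
        findall(Gain-C,
                ( member(C, Candidates),
                  gain(C, Examples, Covered, Gain) ),
                Scored),
        keysort(Scored, Ascending),
        last(Ascending, _Gain-Best).          % last key = highest gain

    % prune(+Clause, +Examples, -Covered, -Rest): split the example set
    % into the examples the chosen clause covers and those still open.
    prune(_, [], [], []).
    prune(Clause, [E|Es], [E|Cov], Rest) :-
        covers(Clause, E), !,                 % covers/2 assumed user-supplied
        prune(Clause, Es, Cov, Rest).
    prune(Clause, [E|Es], Cov, [E|Rest]) :-
        prune(Clause, Es, Cov, Rest).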

The gain function currently used in CLOG is user-defined. For the segmentation problem [2], the following gain function was chosen: gain = QP - SN - C, where QP denotes the number of new examples covered positively, SN denotes the number of previously covered examples that are now covered negatively, and C is the number of literals in the clause body.
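A minimal sketch of this computation follows, assuming candidates are represented as clause(Head, Body) terms with Body a list of literals, and that the coverage tests covers_positively/2 and covers_negatively/2 are supplied elsewhere; all names are hypothetical.

    % gain(+Clause, +NewExamples, +PreviouslyCovered, -Gain):
    % Gain = QP - SN - C for one candidate clause.
    gain(clause(Head, Body), New, Prev, Gain) :-
        count(covers_positively(clause(Head, Body)), New, QP),
        count(covers_negatively(clause(Head, Body)), Prev, SN),
        length(Body, C),                      % literals in the clause body
        Gain is QP - SN - C.

    % count(+Test, +List, -N): N elements of List satisfy call(Test, Elem).
    count(_, [], 0).
    count(Test, [E|Es], N) :-
        count(Test, Es, N0),
        ( call(Test, E) -> N is N0 + 1 ; N = N0 ).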

References

  1. Mary Elaine Califf and Raymond J. Mooney. Advantages of decision lists and implicit negatives in inductive logic programming. Technical report, University of Texas at Austin, 1996.

  2. Dimitar Kazakov and Suresh Manandhar. A hybrid approach to word segmentation. In David Page, editor, Proceedings of the Eighth International Conference on Inductive Logic Programming (ILP-98), pages 125-134, Madison, Wisconsin, USA, 1998. Springer-Verlag.

  3. Suresh Manandhar, Saso Dzeroski, and Tomaz Erjavec. Learning multilingual morphology with CLOG. In Proceedings of the Eighth International Conference on Inductive Logic Programming (ILP-98), Madison, Wisconsin, USA, 1998.

