CLOG is a system for learning first-order decision lists. It shares a fair amount of similarity with one of its predecessors, FOIDL. Like FOIDL, CLOG can learn first-order decision lists from positive examples only, using the output completeness assumption. In the current implementation, the generalisations relevant to an example are supplied by a user-defined predicate that takes an example as input and generates a list of all generalisations that cover it.
CLOG collects the generalisations of the examples into a generalisation set. In a single iteration it then cycles every input example through this set, checking whether each candidate generalisation covers the example positively or negatively. Once this pass is complete, the "best" candidate generalisation is chosen, the example set is pruned using that candidate, and the cycle repeats.
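The covering cycle just described can be sketched as follows. This is a hypothetical illustration, not CLOG's actual interface: the function names (`generalisations_of`, `covers`, `choose_best`) are stand-ins for the user-supplied predicates, and the gain-based selection is abstracted into `choose_best`.

```python
def learn_decision_list(examples, generalisations_of, covers, choose_best):
    """Greedy covering loop, sketching the cycle described above.
    All names here are illustrative assumptions, not CLOG's real API."""
    decision_list = []
    uncovered = list(examples)
    while uncovered:
        # The generalisation set: every candidate generalisation of some
        # still-uncovered example (each covers at least its source example).
        candidates = sorted({g for ex in uncovered
                               for g in generalisations_of(ex)})
        # Pick the "best" candidate after cycling the examples through
        # the generalisation set (e.g. by a gain-based score).
        best = choose_best(candidates, uncovered)
        decision_list.append(best)
        # Prune: drop the examples the chosen generalisation now covers.
        uncovered = [ex for ex in uncovered if not covers(best, ex)]
    return decision_list
```

Because every candidate returned by `generalisations_of` covers its source example, each iteration removes at least one example, so the loop terminates.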
The gain function currently used in CLOG is user-defined. For the segmentation problem the following gain function has been chosen: gain = QP - SN - C, where QP denotes the number of new examples covered positively, SN denotes the number of previously covered examples that are now covered negatively, and C is the number of literals in the clause body.
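A minimal sketch of this gain computation is shown below. The `Clause` representation and the `covers_pos`/`covers_neg` coverage tests are illustrative assumptions standing in for CLOG's user-supplied predicates; only the formula gain = QP - SN - C comes from the text.

```python
from collections import namedtuple

# Hypothetical clause representation: a head literal plus a list of
# body literals (only the body length matters for the gain).
Clause = namedtuple("Clause", ["head", "body"])

def gain(candidate, uncovered, covered, covers_pos, covers_neg):
    """gain = QP - SN - C, as chosen for the segmentation task.
    covers_pos/covers_neg are illustrative stand-ins for the
    user-defined coverage tests."""
    qp = sum(covers_pos(candidate, ex) for ex in uncovered)  # QP: new positives
    sn = sum(covers_neg(candidate, ex) for ex in covered)    # SN: regressions
    c = len(candidate.body)                                  # C: body literals
    return qp - sn - c
```

The C term penalises long clause bodies, so that between two candidates with equal coverage the simpler clause wins.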