System | Multiple predicate learning by means of Abductive Concept Learning (M-ACL) |
Version | version 1 |
Code | SICStus Prolog 3 #5 source code |
References | [1, 2] |
Pointers | http://www-lia.deis.unibo.it/Software/ACL/ |
M-ACL is a system based on the Abductive Concept Learning framework, like the ACL system. In this framework, both the background and the target theories are abductive theories, and abductive entailment is employed as the coverage relation.
Abductive theories are represented as triples $\langle P, A, I \rangle$, where $P$ is a logic program, $A$ is a set of abducible predicates, i.e., predicates about which assumptions can be made, and $I$ is a set of integrity constraints. The notion of entailment is replaced by that of abductive entailment: a goal $G$ is abductively entailed by $T$ (we write $T \models_A G$) if there exists a set $\Delta$ of ground abducible facts (an abductive explanation) such that $P \cup \Delta \models G$ ($\Delta$ explains $G$) and $P \cup \Delta \models I$ ($\Delta$ is consistent with the integrity constraints).
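As a rough illustration (this is not the ACL or M-ACL source code, and the predicate names rule/2, abducible/1, violates/1 and prove/3 are invented here), the following minimal Prolog sketch implements abductive entailment for a toy theory:

    % Minimal abductive proof procedure (illustrative sketch only).
    % The program P is stored as rule(Head, BodyList) facts, the set A as
    % abducible/1 facts, and the integrity constraints I are compiled into
    % violates/1.
    :- use_module(library(lists)).          % member/2

    % P: a bird flies if it can be assumed to be a normal bird.
    rule(flies(X), [bird(X), normal_bird(X)]).
    rule(bird(tweety), []).
    rule(bird(oscar), []).
    rule(penguin(oscar), []).

    % A: assumptions may be made about normal_bird/1.
    abducible(normal_bird(_)).

    % I: the constraint  "<- normal_bird(X), penguin(X)".
    violates(Delta) :-
        member(normal_bird(X), Delta),
        prove(penguin(X), Delta, _).

    % prove(Goal, DeltaIn, DeltaOut): Goal is abductively entailed,
    % extending the explanation DeltaIn to DeltaOut.
    prove(G, Delta, Delta) :-               % already assumed
        member(G, Delta).
    prove(G, DeltaIn, [G|DeltaIn]) :-       % assume an abducible fact
        abducible(G),
        \+ member(G, DeltaIn),
        \+ violates([G|DeltaIn]).
    prove(G, DeltaIn, DeltaOut) :-          % resolve with a program clause
        rule(G, Body),
        prove_all(Body, DeltaIn, DeltaOut).

    prove_all([], Delta, Delta).
    prove_all([G|Gs], DeltaIn, DeltaOut) :-
        prove(G, DeltaIn, Delta1),
        prove_all(Gs, Delta1, DeltaOut).

For instance, the query prove(flies(tweety), [], Delta) succeeds with Delta = [normal_bird(tweety)], whereas prove(flies(oscar), [], Delta) fails, because the only candidate explanation normal_bird(oscar) violates the integrity constraint.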
ACL finds a natural application in the problem of multiple predicate learning. Most ILP systems run into problems when learning multiple predicates. One of these problems is the need to distinguish between two types of consistency of a learned clause with respect to the theory learned so far: local and global consistency.
Intuitively, a clause is locally consistent if it does not cover any negative example for its head predicate when it is added to a consistent partial hypothesis. On the other hand, a clause is globally consistent if the theory obtained by adding it to the current partial hypothesis does not cover any negative example for any of the target predicates.
By repeating a single predicate learning task several times, we repeatedly add locally consistent clauses to the current partial hypothesis. However, when learning multiple predicates, adding a locally consistent clause to a consistent hypothesis can produce a globally inconsistent hypothesis.
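As a purely illustrative example (the predicates, facts and examples below are invented for this entry, not taken from the ACL distribution), suppose grandfather/2 and father/2 are both target predicates:

    % Background knowledge
    parent(ann, bob).
    parent(henry, bob).
    parent(bob, carl).
    male(henry).
    male(bob).

    % Negative examples given for the target predicates
    neg(grandfather(ann, carl)).   % ann is a grandmother, not a grandfather
    % (no negative examples happen to be given for father/2)

    % Partial hypothesis learned so far; it is consistent
    grandfather(X, Y) :- father(X, Z), parent(Z, Y).

    % Candidate clause for the subsidiary predicate
    father(X, Y) :- parent(X, Y).

The candidate clause is locally consistent: it covers no negative example of its own head predicate, simply because none are given for father/2. Once it is added, however, the theory derives grandfather(ann, carl), a negative example of another target predicate, so the resulting hypothesis is globally inconsistent; the clause father(X, Y) :- parent(X, Y), male(X) would avoid this.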
Another problem that can arise in multiple predicate learning concerns the case in which few training examples, particularly negative examples, are available for a subsidiary predicate. In this case, a system may learn an overgeneral definition for the subsidiary predicate, and this may prevent it from finding a consistent definition for the other predicates.
The basic idea of M-ACL is to declare the target predicates to be learned as abducible predicates and to use the abductive information generated on them to link the learning of the different predicates. This information can be used in two inter-related ways. Firstly, it acts as extra training examples for the target predicates: in this way, training information for one predicate is transformed into training information for other predicates. At the same time, the abductive information is used to detect when the addition of a new clause makes the hypothesis globally inconsistent. In this case, global consistency is restored by identifying and retracting the clauses that have generated the inconsistency. The M-ACL system has been obtained from ACL1 by embedding it in a process that uses the abductive information produced by ACL1 to detect inconsistencies and restore consistency.
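The following fragment, which reuses the prove/3 sketch given earlier (with its own rule/2, abducible/1 and violates/1 definitions in place of the earlier toy ones), is again only a hypothetical illustration of this idea and not the M-ACL implementation. It shows how declaring the subsidiary target father/2 as abducible links the two learning tasks: covering a positive example of grandfather/2 abduces father facts that can be reused as training examples for father/2, while abduced facts that contradict negative examples expose a global inconsistency.

    % Background and the clause learned so far for grandfather/2,
    % in the rule/2 representation used by the prove/3 sketch above.
    rule(parent(ann, bob), []).
    rule(parent(henry, bob), []).
    rule(parent(bob, carl), []).
    rule(grandfather(X, Y), [father(X, Z), parent(Z, Y)]).

    % The subsidiary target predicate is declared abducible.
    abducible(father(_, _)).

    % Negative examples for the target predicates.
    neg(grandfather(ann, carl)).
    neg(father(ann, bob)).

    % An explanation is unacceptable if it contains an abduced fact that
    % contradicts a negative example of some target predicate.
    violates(Delta) :-
        member(F, Delta),
        neg(F).

    % Covering the positive example grandfather(henry, carl):
    %   ?- prove(grandfather(henry, carl), [], Delta).
    %   Delta = [father(henry, bob)]
    % The abduced fact father(henry, bob) now acts as an extra positive
    % example for father/2.  Conversely, any attempt to derive the negative
    % example grandfather(ann, carl) would require abducing father(ann, bob),
    % which violates/1 rejects; abductive information of this kind is what
    % allows clauses that would make the hypothesis globally inconsistent to
    % be detected and retracted.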
The Prolog implementation of the two versions of ACL1, together with the corresponding user manuals, is available. The code and manuals can be downloaded from http://www-lia.deis.unibo.it/Software/ACL/.