Further specification: Model Extraction by Regular Language INference
Code: SICStus Prolog
References: Boström 1996, 1998

MERLIN (Model Extraction by Regular Language INference) is a non-interactive, multiple-predicate learning system that is particularly suited to learning recursive hypotheses. Like SPECTRE, it starts from an overly general hypothesis in the form of a logic program, together with sets of positive and (possibly) negative examples, and searches for an inductive hypothesis that entails all positive examples but no negative examples. The basic idea of the approach is to learn finite-state automata that represent allowed sequences of resolution steps. MERLIN first finds SLD-refutations for all examples using the overly general hypothesis, and then tries to find the minimal finite-state automaton that generates all sequences of input clauses occurring in the SLD-refutations of the positive examples and none of the sequences occurring in the SLD-refutations of the negative examples.

Once the automaton has been learned, MERLIN produces a new theory that allows only those sequences of resolution steps which are allowed by both the initial theory and the automaton. This is done by computing the intersection of the automaton with a grammar that corresponds to the overly general hypothesis; the intersection is then used to derive the final hypothesis in the form of a logic program. During this process MERLIN may introduce new predicate symbols, i.e., it carries out a form of predicate invention. Since the induced hypothesis allows only resolution sequences that are possible in the initial theory, the initial theory defines MERLIN's language bias. In particular, because the resolution sequences produced by the new theory involve only clauses that also appear in resolution sequences allowed by the initial theory (apart from the clauses defining the newly invented predicates), it is important that all relevant clauses occur in the resolution sequences produced by the initial theory.
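The product construction underlying the intersection step can be illustrated with a small sketch. In the following Python code (all names and the toy clause labels c1, c2, c3 are illustrative, not part of MERLIN), an automaton is a triple (start state, final states, transition table), a sequence is a list of input-clause labels from an SLD-refutation, and intersect builds the product automaton accepting exactly the sequences allowed by both inputs:

```python
# Illustrative sketch only: a DFA over input-clause labels and the standard
# product construction used to intersect the learned automaton with the
# automaton/grammar derived from the overly general program.

def accepts(dfa, seq):
    """dfa = (start, finals, delta), with delta[(state, symbol)] -> state."""
    start, finals, delta = dfa
    state = start
    for sym in seq:
        if (state, sym) not in delta:
            return False        # no transition: sequence is disallowed
        state = delta[(state, sym)]
    return state in finals

def intersect(a, b):
    """Product automaton accepting exactly the sequences both accept."""
    (s1, f1, d1), (s2, f2, d2) = a, b
    delta = {}
    for (p, sym), p2 in d1.items():
        for (q, sym2), q2 in d2.items():
            if sym == sym2:
                delta[((p, q), sym)] = (p2, q2)
    finals = {(p, q) for p in f1 for q in f2}
    return ((s1, s2), finals, delta)

# Toy example: c1 enters the recursion, c2 is the recursive clause,
# c3 the base case, so the learned automaton accepts c1 c2* c3.
learned = ('q0', {'q2'},
           {('q0', 'c1'): 'q1', ('q1', 'c2'): 'q1', ('q1', 'c3'): 'q2'})
# Grammar of the overly general program; here it allows any clause order.
grammar = ('p0', {'p0'},
           {('p0', 'c1'): 'p0', ('p0', 'c2'): 'p0', ('p0', 'c3'): 'p0'})
product = intersect(learned, grammar)
```

With this toy pair, the product accepts the recursive sequence c1 c2 c2 c3 but rejects sequences that the learned automaton disallows, mirroring how the final hypothesis permits only resolution sequences allowed by both the initial theory and the automaton.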

MERLIN 2.0 provides two modes. When learning from positive and negative examples, it uses hill-climbing to search for the minimal automaton that generates all positive and no negative sequences. When learning from positive examples only, it employs an incremental hill-climbing technique for inducing the most probable Hidden Markov Model structure that can generate the positive proof sequences. In this mode, MERLIN 2.0 disregards negative examples entirely, i.e., it does not check whether any negative examples are covered.
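The positive-and-negative mode can be pictured as a greedy state-merging search in the style of classical regular-language inference: build a prefix-tree acceptor from the positive proof sequences, then repeatedly merge states as long as no negative sequence becomes accepted. The Python sketch below is a strong simplification under assumed representations (clause-label sequences, merges as a partition over states); MERLIN's actual search, and the HMM-based positive-only mode, differ in detail:

```python
# Illustrative sketch only: greedy state merging over a prefix-tree acceptor,
# rejecting any merge that makes a negative sequence acceptable.
from itertools import combinations

def prefix_tree(positives):
    """Build a prefix-tree acceptor; state 0 is the start state."""
    trans, finals, nxt = {}, set(), 1
    for seq in positives:
        state = 0
        for sym in seq:
            if (state, sym) not in trans:
                trans[(state, sym)] = nxt
                nxt += 1
            state = trans[(state, sym)]
        finals.add(state)
    return trans, finals

def quotient_accepts(trans, finals, part, seq):
    """Run the merged (quotient) automaton NFA-style on seq."""
    blocks = {part[0]}
    for sym in seq:
        blocks = {part[t] for (s, a), t in trans.items()
                  if a == sym and part[s] in blocks}
        if not blocks:
            return False
    return any(b in {part[f] for f in finals} for b in blocks)

def merge_states(positives, negatives):
    """Greedily merge states while no negative sequence is accepted."""
    trans, finals = prefix_tree(positives)
    states = {0} | set(trans.values())
    part = {s: s for s in states}        # each state starts in its own block
    improved = True
    while improved:
        improved = False
        for a, b in combinations(sorted({part[s] for s in states}), 2):
            trial = {s: (a if part[s] == b else part[s]) for s in states}
            if (all(quotient_accepts(trans, finals, trial, p) for p in positives)
                    and not any(quotient_accepts(trans, finals, trial, n)
                                for n in negatives)):
                part = trial             # keep the merge and restart the scan
                improved = True
                break
    return trans, finals, part

pos = [['c1', 'c3'], ['c1', 'c2', 'c3'], ['c1', 'c2', 'c2', 'c3']]
neg = [['c1'], ['c3']]
trans, finals, part = merge_states(pos, neg)
```

On this toy data the search merges states of the prefix tree into a smaller automaton that still accepts every positive sequence and rejects every negative one; a true hill climb would additionally score candidate merges rather than take the first admissible one.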


  1. H. Boström. Theory-guided induction of logic programs by inference of regular languages. In L. Saitta, editor, Proceedings of the 13th International Conference on Machine Learning, pages 46-53. Morgan Kaufmann, 1996.
  2. H. Boström. Predicate invention and learning from positive examples only. In Proceedings of the 10th European Conference on Machine Learning (ECML-98), 1998.
