Further specification: GA-based Concept Learning system
Code: C source code and PVM3 library + Makefile
References: (Giordana and Neri 1996), (Neri and Saitta 1996)
Other comments: The release also includes a graphical user interface

REGAL is a concept learning system based on a distributed genetic algorithm that learns First Order Logic multi-modal concept descriptions. REGAL uses a relational database to handle the learning examples, which are represented as relational tuples.

Usually, propositional or first order logic is assumed as the description language, and the multi-modal concept description takes the form of a disjunction of conjunctive formulas; each disjunct describes one concept modality. Formally, given a logic language L, a multi-modal concept description H is a disjunction of conjunctive formulas in L: H = phi_1 ∨ phi_2 ∨ ... ∨ phi_n. In REGAL, each elementary conjunct is a predicate with an internal disjunction of terms (in the style of Michalski's VL languages): color(x, [red ∨ yellow ∨ green]).

The task of learning disjunctive concept descriptions with genetic algorithms can be mapped to both the Pittsburgh and the Michigan approach. One way of using the Michigan approach to learn disjunctive concepts is to allow the formation of "niches" and "species", as is done in REGAL. In this case, species formation is achieved by a novel selection operator, called Universal Suffrage (US). Its ability to let subpopulations emerge and reach an equilibrium state has been proved both theoretically and experimentally. However, as the population size needed to find a solution may be quite large, exploiting the potential parallelism of genetic algorithms is appealing.

One way of exploiting distributed computation is to forcibly differentiate the nodal GAs, as REGAL does by assigning each node a different subset of examples to cover.

REGAL's current architecture consists of a network of Nodal Genetic Algorithms (NGAs) coordinated by a Supervisor. Each NGA_i performs US selection on an assigned subset E_i of the learning set E; in general, these subsets differ from node to node. The Supervisor dynamically assigns the sets E_i to the NGAs according to a long-term strategy aimed at distributing different species over different nodes, in order to reduce the genetic pressure of the large disjuncts on the small ones.

From another perspective, each NGA can be seen as a niche needed for the survival of a species or a group of species. The Supervisor distributes the examples of the learning set according to the emerging species, thus identifying NGAs with niches where specific species, or groups of them, can grow. Moreover, as the evolution proceeds, the Supervisor iteratively extracts the evolving concept descriptions from the distributed population, and eventually stops the search when a satisfactory concept description is found.

REGAL's objective is to find the simplest complete and consistent (with respect to the learning set) concept description, according to the fitness function provided to it.

REGAL's robustness with respect to the control parameters emerged quite clearly: many runs were executed under very different conditions, and REGAL was always able to find a reasonable solution without any tuning.


  1. A. Giordana and F. Neri. Search-intensive concept induction. Evolutionary Computation Journal, 3(4):375-416, 1996.
  2. F. Neri and L. Saitta. An analysis of the universal suffrage selection operator. Evolutionary Computation Journal, 4(1):89-109, 1996.
  3. F. Neri and L. Saitta. Exploring the power of genetic search in learning symbolic classifiers. IEEE Trans. on Pattern Analysis and Machine Intelligence, 18:1135-1142, 1996.
