1.

Extraction of Meta-Knowledge to Restrict the Hypothesis Space for ILP Systems
- Eric McCreath; Arun Sharma
Many ILP systems, such as GOLEM, FOIL, and MIS, take advantage of user-supplied meta-knowledge to restrict the hypothesis space. This meta-knowledge can be in the form of type information about the arguments of the predicate being learned, or information about whether a certain argument of the predicate is functionally dependent on the other arguments (supplied as mode information). This meta-knowledge is explicitly supplied to an ILP system in addition to the data. The present paper argues that in many cases the meta-knowledge can be extracted directly from the raw data. Three algorithms are presented that...
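
One flavour of the meta-knowledge mentioned above, a functional dependency between arguments, can in principle be checked directly against ground facts. The sketch below is illustrative only (it is not one of the paper's three algorithms, and the predicate names are invented): an argument position is treated as functional if no two facts agree on the remaining arguments but disagree on it.

```python
# Illustrative sketch (not the paper's algorithm): testing whether one
# argument of a predicate is functionally determined by the others,
# using only the ground facts supplied as raw data.
def is_functional(facts, out_arg):
    """True if the argument at index `out_arg` is uniquely determined
    by the remaining arguments across all ground facts."""
    seen = {}
    for fact in facts:
        key = tuple(v for i, v in enumerate(fact) if i != out_arg)
        if seen.setdefault(key, fact[out_arg]) != fact[out_arg]:
            return False
    return True

# plus(X, Y, Z): the third argument is determined by the first two.
plus_facts = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 2)]
print(is_functional(plus_facts, 2))  # True

# lt(X, Y): the second argument is not determined by the first.
lt_facts = [(0, 1), (0, 2), (1, 2)]
print(is_functional(lt_facts, 1))  # False
```

A positive answer on finite data is of course only evidence for a dependency, not proof, which is why extracted meta-knowledge of this kind remains a heuristic restriction of the hypothesis space.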

2.

Discovery of First-Order Regularities in a Relational Database Using Offline Candidate Determination
- Irene Weber
In this paper, we present an algorithm for the discovery of first-order clauses holding in a relational database, in the framework of the nonmonotonic ILP setting [1]. The algorithm adopts the principle of the offline candidate determination algorithm used for mining association rules in large transaction databases [4]. Analogous to the measures used in mining association rules, we define a support and a confidence measure as acceptance criteria for discovered hypothesis clauses. The algorithm has been implemented in C with an interface to the relational database management system INGRES. We present and discuss the results of an experiment in...
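
The support and confidence measures carried over from association-rule mining have a simple shape. The sketch below transplants them to plain Python predicates over database rows purely for illustration; the paper itself evaluates first-order clauses against an INGRES database, and all names here are invented.

```python
# Hedged sketch of association-rule-style acceptance criteria for a rule
# "head :- body", evaluated over rows of a single table.
def support(rows, body, head):
    """Fraction of all rows satisfying both body and head."""
    return sum(1 for r in rows if body(r) and head(r)) / len(rows)

def confidence(rows, body, head):
    """Among rows satisfying the body, the fraction also satisfying the head."""
    covered = [r for r in rows if body(r)]
    return sum(1 for r in covered if head(r)) / len(covered) if covered else 0.0

rows = [{"x": 1, "y": 2}, {"x": 2, "y": 4}, {"x": 3, "y": 5}, {"x": 0, "y": 0}]
body = lambda r: r["x"] > 0
head = lambda r: r["y"] == 2 * r["x"]
print(support(rows, body, head))     # 0.5 (2 of 4 rows)
print(confidence(rows, body, head))  # 2/3 (2 of the 3 body-covering rows)
```

Support filters out rules that cover too little of the database; confidence filters out rules whose head fails too often where the body holds.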

3.

Learning Function-Free Horn Expressions
- Roni Khardon
The problem of learning universally quantified function-free first-order Horn expressions is studied. Several models of learning from equivalence and membership queries are considered, including the model where interpretations are examples (Learning from Interpretations), the model where clauses are examples (Learning from Entailment), models where extensional or intensional background knowledge is given to the learner (as done in Inductive Logic Programming), and the model where the reasoning performance of the learner, rather than identification, is of interest (Learning to Reason). We present learning algorithms for all these tasks for the class of universally quantified function-free Horn expressions. The...

4.

Bayesian Inductive Logic Programming
- Stephen Muggleton
Inductive Logic Programming (ILP) involves the construction of first-order definite clause theories from examples and background knowledge. Unlike both traditional Machine Learning and Computational Learning Theory, ILP is based on lockstep development of Theory, Implementations and Applications. ILP systems have had successful applications in the learning of structure-activity rules for drug design, semantic grammar rules, finite element mesh design rules and rules for the prediction of protein structure and mutagenic molecules. The strong applications in ILP can be contrasted with relatively weak PAC-learning results (even highly restricted forms of logic programs are known to be prediction-hard). It has recently been argued that the...

5.

Building Theories into Instantiation
- Alan M. Frisch; C. David Page, Jr.
Instantiation orderings over formulas (the relation of one formula being an instance of another) have long been central to the study of automated deduction and logic programming, and are of rapidly-growing importance in the study of database systems and machine learning. A variety of instantiation orderings are now in use, many of which incorporate some kind of background information in the form of a constraint theory. Even a casual examination of these instantiation orderings reveals that they are somehow related, but in exactly what way? This paper presents a general instantiation ordering of which all these orderings are special cases,...

6.

An Inductive Logic Programming Framework to Learn a Concept from Ambiguous Examples
- Dominique Bouthinon; Henry Soldano
We address a learning problem with the following peculiarity: we search for characteristic features common to a learning set of objects related to a target concept. In particular, we approach cases where the descriptions of objects are ambiguous: they represent several incompatible realities. Ambiguity arises because each description only contains indirect information from which assumptions can be derived about the object. We suppose here that a set of constraints allows the identification of "coherent" sub-descriptions inside each object. We formally study this problem, using an Inductive Logic Programming framework close to characteristic induction from interpretations. In particular,...

7.

Strongly Typed Inductive Concept Learning
- P. A. Flach; C. Giraud-Carrier; J. W. Lloyd
In this paper we argue that the use of a language with a type system, together with higher-order facilities and functions, provides a suitable basis for knowledge representation in inductive concept learning and, in particular, illuminates the relationship between attribute-value learning and inductive logic programming (ILP). Individuals are represented by closed terms: tuples of constants in the case of attribute-value learning; arbitrarily complex terms in the case of ILP. To illustrate the point, we take some learning tasks from the machine learning and ILP literature and represent them in Escher, a typed, higher-order, functional logic programming language being developed...

8.

Difference to Inference: Teaching logical and statistical reasoning through online interactivity
- Thomas E. Malloy
Difference to Inference is an online Java program simulating theory testing and falsification through research design and data collection in a game format. The program, based on cognitive and epistemological principles, is designed to support the learning of thinking skills underlying deductive and inductive logic and statistical reasoning. Difference to Inference has database connectivity so that game scores can be counted as part of course grades. Emphasizing the active nature of information processing, Posner and Osgood (1980) proposed that computers be used to train inquiry...

9.

Inductive Object-Oriented Logic Programming
- Erivan Alves De Andrade; Jacques Robin
Abstract. In many of its practical applications, such as natural language processing, automatic programming, expert systems, semantic web ontologies and knowledge discovery in databases, Inductive Logic Programming (ILP) is not used to substitute but rather to complement manual knowledge acquisition. This manual acquisition is increasingly done using hybrid languages integrating objects with rules or relations. Since using a common representation language for both manually encoded and ILP-learned knowledge is key to their seamless integration, this raises the issue of using such hybrid languages for induction. In this paper, we present Cigolf, an ILP system that uses the object-oriented logic language...

10.

Applying Inductive Logic Programming and Rule Relaxation for the Generation of Metadata
- Andreas D. Lattner; Jan D. Gehrke
Activities towards Knowledge Management have recently become popular in enterprises, with the aim of making all relevant information easily accessible. As one of the major problems encountered here is the cost of the manual acquisition of metadata, this task should be supported by some semi-automated generation of metadata. In this work we present an approach for applying Inductive Logic Programming to create metadata generation rules. These rules can later be applied to create values for attributes of information items. As learned rules might not be completely correct or might miss values for attributes of objects, we propose a rule relaxation algorithm while applying...

11.

Choice of Synthesis Systems
- Andreas Hirschberger; Martin Hofmann
The paper we present compares three systems for program synthesis: Adate, an approach through evolutionary computation; the inductive/abductive logic program synthesizer Dialogs-II; and the classification learner Atre, capable of simultaneously learning mutually dependent, recursive target predicates. It gives an overview of the functionality of all three systems, and evaluates their capabilities under equal premises. As a consequence, we propose to combine Adate's expressional power with Dialogs-II's search heuristic.

13.

Learning of Agents with Limited Resources
- Sławomir Nowaczyk
In our research we investigate a rational agent which consciously balances deliberation and acting, and uses learning to augment its reasoning. It creates several partial plans, uses past experience to choose the best one and, by executing it, gains new knowledge about the world. We analyse a possible application of Inductive Logic Programming to learn how to evaluate partial plans in a resource-constrained way. We also discuss how the ILP framework can generalise partial plans.

14.

On the Integration of Learning, Logical Deduction and Probabilistic inductive Inference
- J. Gerard Wolff
This paper introduces the conjecture that many kinds of cognition and computing may usefully be seen as a search for efficiency in information, where efficiency is defined in terms of Shannon's (1949) concept of redundancy in information. A prototype of a new kind of computing system which is based on this theory is described in outline. Examples from the prototype are presented showing how a search for efficiency in information may achieve autonomous inductive learning, logical deduction and probabilistic inductive inference. The nascent field of inductive logic programming has been defined by Muggleton (1990) as "the intersection of...

15.

Carcinogenesis predictions using ILP
- A. Srinivasan; R. D. King; S. H. Muggleton; M. J. E. Sternberg
Obtaining accurate structural alerts for the causes of chemical cancers is a problem of great scientific and humanitarian value. This paper follows up on earlier research that demonstrated the use of Inductive Logic Programming (ILP) for predictions for the related problem of mutagenic activity amongst nitroaromatic molecules. Here we are concerned with predicting carcinogenic activity in rodent bioassays using data from the U.S. National Toxicology Program conducted by the National Institute of Environmental Health Sciences. The 330 chemicals used here are significantly more diverse than those in the previous study, and form the basis for obtaining Structure-Activity Relationships (SARs) relating molecular structure...

16.

Stochastic Logic Programs
- Stephen Muggleton
One way to represent a machine learning algorithm's bias over the hypothesis and instance space is as a pair of probability distributions. This approach has been taken both within Bayesian learning schemes and the framework of U-learnability. However, it is not obvious how an Inductive Logic Programming (ILP) system should best be provided with a probability distribution. This paper extends the results of a previous paper by the author which introduced stochastic logic programs as a means of providing a structured definition of such a probability distribution. Stochastic logic programs are a generalisation of stochastic grammars. A stochastic logic program...
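
Since stochastic logic programs generalise stochastic grammars, the grammar special case gives the flavour of how such a distribution is used: each clause for a symbol carries a probability, and a derivation is built by sampling one clause at each expansion step. The toy sampler below covers only this stochastic-grammar case, and the rule set is invented for illustration, not taken from the paper.

```python
import random

# Toy stochastic grammar as a probability-labelled rule set (illustrative):
# S -> a S  with probability 0.5
# S -> ""   with probability 0.5
rules = {
    "S": [(0.5, ["a", "S"]), (0.5, [])],
}

def sample(symbol, rng):
    """Expand `symbol` by sampling one of its rules according to the
    clause probabilities; symbols with no rules are terminals."""
    if symbol not in rules:
        return [symbol]
    probs, bodies = zip(*rules[symbol])
    body = rng.choices(bodies, weights=probs)[0]
    out = []
    for s in body:
        out.extend(sample(s, rng))
    return out

rng = random.Random(0)
print("".join(sample("S", rng)))  # a (possibly empty) run of a's
```

The derivation terminates with probability 1 here because each step halts with probability 0.5; the geometric decay of string lengths is exactly the distribution the labelled clauses define.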

17.

Tidying up the Mess around the Subsumption Theorem in Inductive Logic Programming
- Shan-Hwei Nienhuys-Cheng; Ronald de Wolf
The subsumption theorem is an important theorem concerning resolution. Essentially, it says that if a set of clauses Σ logically implies a clause C, then either C is a tautology, or a clause D which subsumes C can be derived from Σ with resolution. It was originally proved in 1967 by Lee in [Lee67]. In Inductive Logic Programming, interest in this theorem has been increasing since its rediscovery by Bain and Muggleton [BM92]. It provides a quite natural "bridge" between subsumption and logical implication. Unfortunately, a correct formulation and proof of the subsumption theorem are not available. It is not clear...
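
The statement in the abstract can be written compactly as follows (the notation is ours, not the paper's):

```latex
% Subsumption theorem, as paraphrased in the abstract:
% \vdash_r = derivability by resolution, \succeq = subsumption.
\Sigma \models C
  \;\Longrightarrow\;
  C \text{ is a tautology}
  \;\lor\;
  \exists D \,\bigl( \Sigma \vdash_r D \,\wedge\, D \succeq C \bigr)
```

The paper's point is precisely that published formulations vary in which resolution variant and which direction of this implication they actually establish.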

18.

Distributed Representations for Terms in Hybrid Reasoning Systems
- Alessandro Sperduti; Antonina Starita; Christoph Goller
This paper is a study on LRAAM-based (Labeling Recursive Auto-Associative Memory) classification of symbolic recursive structures encoding terms. The results reported here have been obtained by combining an LRAAM network with an analog perceptron. The approach used was to interleave the development of representations (unsupervised learning of the LRAAM) with the learning of the classification task. In this way, the representations are optimized with respect to the classification task. The intended applications of our approach are hybrid (symbolic/connectionist) systems, where the connectionist part has to solve logic-oriented inductive learning tasks similar to the term-classification problems used in our experiments. Specifically,...

19.

The Subsumption Theorem in Inductive Logic Programming: Facts and Fallacies
- Shan-Hwei Nienhuys-Cheng; Ronald de Wolf
The subsumption theorem is an important theorem concerning resolution. Essentially, it says that if a set of clauses Σ logically implies a clause C, then either C is a tautology, or a clause D which subsumes C can be derived from Σ with resolution. It was originally proved in 1967 by Lee. In Inductive Logic Programming, interest in this theorem has been increasing since its independent rediscovery by Bain and Muggleton. It provides a quite natural "bridge" between subsumption and logical implication. Unfortunately, a correct formulation and proof of the subsumption theorem are not available. It is not clear which...

20.

Learning part of speech disambiguation rules using Inductive Logic Programming
- Nikolaj Lindberg; Martin Eineborg
A pilot study on inducing rules for part-of-speech tagging of unrestricted Swedish text is reported. Using the Progol machine learning system, Constraint Grammar-inspired rules were learnt from the part-of-speech tagged Stockholm-Umeå Corpus. Several thousand disambiguation rules discarding faulty readings of ambiguously tagged words were induced. When tested on unseen data, 97% of the words retained the correct reading after tagging. However, there were still ambiguities in the output after applying the tagging rules: on average, 1.15 tags/word. This paper reports an experiment in learning Constraint Grammar-style disambiguation rules for unrestricted Swedish...