1.

Knowledge Based Evolutionary Computing for Inductive Learning in First-Order Logic
- Federico Divina
Learning from examples in FOL, also known as Inductive Logic Programming (ILP) (Muggleton & Raedt, 1994), constitutes a central topic in Machine Learning, with relevant applications to problems in complex domains like natural language and molecular computational biology (Muggleton, 1999). Given a FOL...

2.

Detecting traffic problems with ILP
- Saso Dzeroski; Nico Jacobs; Martin Molina; Carlos Moure; Stephen Muggleton; Wim Van Laer
Expert systems for decision support have recently been successfully introduced in road transport management. These systems include knowledge on traffic problem detection and alleviation. The paper describes experiments in automated acquisition of knowledge on traffic problem detection. The task is to detect road sections where a problem has occurred (critical sections) from sensor data. It is necessary to use inductive logic programming (ILP) for this purpose, as relational background knowledge on the road network is essential. In this paper, we apply three state-of-the-art ILP systems to learn how to detect traffic problems and compare their performance to the performance...

3.

The Equivalence of the Subsumption Theorem and the Refutation-completeness for Unconstrained Resolution
- Shan-hwei Nienhuys-cheng; Ronald de Wolf
The subsumption theorem is an important theorem concerning resolution. Essentially, it says that a set of clauses \Sigma logically implies a clause C, iff C is a tautology, or a clause D which subsumes C can be derived from \Sigma with resolution. It was originally proved in 1967 by Lee in [Lee67]. In Inductive Logic Programming, interest in this theorem is increasing since its independent rediscovery by Bain and Muggleton [BM92]. It provides a quite natural "bridge" between subsumption and logical implication. Unfortunately, a correct formulation and proof of the subsumption theorem are not available. It is not clear which...
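The statement paraphrased in the abstract can be written compactly as follows (the notation is ours, not the paper's: \(\vdash_r\) for derivation by unconstrained resolution, \(\succeq\) for subsumption):

```latex
% Subsumption theorem, as described in the abstract above.
% \vdash_r : derivable by (unconstrained) resolution; \succeq : subsumes.
\Sigma \models C
\;\iff\;
C \text{ is a tautology, or } \exists D \text{ such that } \Sigma \vdash_r D \text{ and } D \succeq C
```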

4.

Using Inductive Logic Programming for Natural Language Processing
- James Cussens; Stephen Muggleton; Ashwin Srinivasan
We summarise recent work on using Inductive Logic Programming (ILP) for Natural Language Processing (NLP). ILP performs learning in a first-order logical setting, and is thus well-suited to induce over the various structured representations used in NLP. We present Stochastic Logic Programs (SLPs) and demonstrate their use in ILP when learning from positive examples only. We also give accounts of work on learning grammars from children's books and part-of-speech tagging. 1 Inductive Logic Programming and Progol By using computational logic as the representational mechanism for hypotheses and observations, Inductive Logic Programming (ILP) can overcome the two main limitations of classical...

5.

Generating Specialised Update Procedures Through Partial Deduction of the Ground Representation
- Michael Leuschel; Bern Martens
Integrity constraints are very useful in many contexts, such as deductive databases and abductive and inductive logic programming. However, fully testing the integrity constraints after each update or modification can be very expensive, and methods have been developed which simplify the integrity constraints. In this paper, we pursue the goal of writing this simplification procedure as a meta-program in logic programming and then using partial deduction to obtain specialised update procedures for certain update patterns. We argue that the ground representation has to be used to write this meta-program declaratively. However, contrary to what one might expect, current partial...

6.

Normal forms for Inductive Logic Programming
- Peter A. Flach (in: Nada Lavrac and Saso Dzeroski, eds., Inductive Logic Programming (ILP))
In this paper we study induction of unrestricted clausal theories from interpretations. First, we show that in the propositional case induction from complete evidence can be seen as an equivalence-preserving transformation from DNF to CNF. From this we conclude that induction is essentially a process of determining what is false in the domain of discourse. We then proceed by investigating dual normal forms for evidence and hypotheses in predicate logic. We define evidence normal form (ENF), which is Skolemised existential DNF under a Consistent Naming Assumption. Because ENF is incomplete, in the sense that it does not have the...

7.

Clausal Discovery
- Luc De Raedt; Luc Dehaspe
The clausal discovery engine Claudien is presented. Claudien is an inductive logic programming engine that fits in the knowledge discovery in databases and data mining paradigm as it discovers regularities that are valid in data. As such Claudien performs a novel induction task, which is called characteristic induction from closed observations, and which is related to existing formalizations of induction in logic. In characterising induction from closed observations, the regularities are represented by clausal theories, and the data using Herbrand interpretations. Claudien also employs a novel declarative bias mechanism to define the set of clauses that may appear in a...

8.

Relational Learning for NLP using Linear Threshold Elements
- Roni Khardon; Dan Roth; Leslie G. Valiant
We describe a coherent view of learning and reasoning with relational representations in the context of natural language processing. In particular, we discuss the Neuroidal Architecture, Inductive Logic Programming and the SNoW system explaining the relationships among these, and thereby offer an explanation of the theoretical basis for the SNoW system. We suggest that extensions of this system along the lines suggested by the theory may provide new levels of scalability and functionality. 1 Introduction The paper explores some aspects of relational knowledge representation and their learnability. While the discussion is to a large extent general it is made in...

9.

Applications of a Logical Discovery Engine
- Luc Dehaspe; Wim Van Laer; Luc De Raedt
The clausal discovery engine claudien is presented. claudien discovers regularities in data and is a representative of the inductive logic programming paradigm. As such, it represents data and regularities by means of first order clausal theories. Because the search space of clausal theories is larger than that of attribute value representation, claudien also accepts as input a declarative specification of the language bias, which determines the set of syntactically well-formed regularities. Whereas other papers on claudien focus on the semantics or logical problem specification of claudien, on the discovery algorithm, or the PAC-learning aspects, this paper wants to illustrate the...

10.

Predicate Invention and Utilisation
- Stephen Muggleton
Inductive Logic Programming (ILP) involves the synthesis of logic programs from examples. In terms of scientific theory formation ILP systems define observational predicates in terms of a set of theoretical predicates. However, certain basic theorems indicate that with an inadequate theoretical vocabulary this is not always possible. Predicate invention is the augmentation of a given theoretical vocabulary to allow finite axiomatisation of the observational predicates. New theoretical predicates need to be chosen from a well defined universe of such predicates. In this paper a partial order of utilisation is described over such a universe. This ordering is a special case...

11.

Loglinear Models for First-Order Probabilistic Reasoning
- James Cussens
Recent work on loglinear models in probabilistic constraint logic programming is applied to first-order probabilistic reasoning. Probabilities are defined directly on the proofs of atomic formulae, and by marginalisation on the atomic formulae themselves. We use Stochastic Logic Programs (SLPs) composed of labelled and unlabelled definite clauses to define the proof probabilities. We have a conservative extension of first-order reasoning, so that, for example, there is a one-one mapping between logical and random variables. We show how, in this framework, Inductive Logic Programming (ILP) can be used to induce the features of a loglinear model from data. We also compare...
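The core idea (probabilities on proofs, marginalisation onto atoms) can be sketched with a toy propositional SLP. Everything below is illustrative and not from the paper: the program, the labels, and the enumeration strategy are made up for the example.

```python
from itertools import product

# Toy stochastic logic program (SLP), propositional for simplicity.
# Each predicate maps to a list of (label, body) definite clauses;
# labels are the clause probabilities. This program is an invented example.
SLP = {
    "path":    [(0.6, ["edge_ab", "edge_bc"]),   # path <- edge_ab, edge_bc
                (0.4, ["edge_ac"])],             # path <- edge_ac
    "edge_ab": [(1.0, [])],
    "edge_bc": [(1.0, [])],
    "edge_ac": [(1.0, [])],
}

def proof_probs(atom):
    """Return one probability per proof of `atom`: the product of the
    labels of the clauses used in that proof."""
    probs = []
    for label, body in SLP[atom]:
        # Combine the proofs of the body atoms (Cartesian product).
        for combo in product(*(proof_probs(b) for b in body)):
            p = label
            for q in combo:
                p *= q
            probs.append(p)
    return probs

# Marginalisation: the probability of an atom is the sum over its proofs.
probs = proof_probs("path")       # one entry per proof
marginal = sum(probs)
```

Here `proof_probs("path")` yields one probability per proof of `path`, and summing them marginalises onto the atom itself, mirroring the two-level definition sketched in the abstract.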

12.

Contributions to Inductive Logic Programming
- R. M. de Wolf (Master's thesis, Bestuurlijke Informatica)
Contents: Preface; 1 What is Inductive Logic Programming?; 1.1 The importance of learning; 1.2 Inductive learning; 1.3 The problem setting for ILP; 1.4 Other problem settings...

13.

An intelligent search method using Inductive Logic Programming
- Nobuhiro Inuzuka; Hirohisa Seki; Hidenori Itoh
We propose a method that uses Inductive Logic Programming to provide heuristic functions for searching for goals when solving problems. The method takes as input solutions of a problem, or a history of search, together with a set of background knowledge on the problem. In a large class of problems, a problem is described as a set of states and a set of operators, and is solved by finding a series of operators. A solution, a series of operators that brings an initial state to a final state, is transformed into positive and negative examples of a relation "better-choice", which describes that an operator...
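The way such a "better-choice" relation could guide search can be sketched as follows. Everything here is illustrative: in the paper the relation would be a theory induced by an ILP system from examples, whereas below a hand-written rule stands in for it, and the toy problem (counting up to a goal with two operators) is invented for the example.

```python
def inc1(state):  # toy operator: advance by 1
    return state + 1

def inc3(state):  # toy operator: advance by 3
    return state + 3

def make_better_choice(goal):
    """Stand-in for the induced 'better-choice' relation: returns True
    if op_a is judged a better choice than op_b in `state`."""
    def better_choice(state, op_a, op_b):
        # Hand-written rule playing the role of the learned theory:
        # take the big step while it does not overshoot the goal.
        preferred = inc3 if goal - state >= 3 else inc1
        return op_a is preferred and op_b is not preferred
    return better_choice

def solve(start, goal, operators, better_choice, limit=50):
    """Greedy search: at each state, apply the operator that the
    (learned) relation ranks highest among the applicable ones."""
    state, plan = start, []
    while state != goal and len(plan) < limit:
        applicable = [op for op in operators if op(state) <= goal]
        if not applicable:
            return None
        best = applicable[0]
        for op in applicable[1:]:
            if better_choice(state, op, best):
                best = op
        plan.append(best.__name__)
        state = best(state)
    return plan if state == goal else None

plan = solve(0, 10, [inc1, inc3], make_better_choice(10))
```

With start 0 and goal 10, the relation prefers `inc3` until the goal is within reach, so the search finds a short operator sequence without exhaustive exploration.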

14.

Constructing a Legal Ontology from a General Ontology
- Masaki Kurematsu; Masayoshi Tada; Takahira Yamaguchi
This paper discusses how to construct a legal ontology from a general ontology that has already been developed. In the construction process, we must solve two hard issues. One is to localize legal contexts in a general ontology in order to match a legal ontology with a general ontology. The other is to identify bugs in the constructed legal ontology and repair them. We present a new method to match a legal concept with the most similar concept in a general ontology, and also two strategies to refine a legal ontology: a static analysis based on the comparison between...

15.

WiM: A Study on the Top-Down ILP Program
- Lubos Popelínsky; Olga Stepánková
In the area of the inductive synthesis of logic programs, the small number of available examples is crucial. We show that the classical MIS-like architecture can be adapted using techniques described in the ILP literature so that very good results are reached in comparison with other ILP systems. We describe the top-down ILP program WiM and the results obtained with it. WiM needs from 2 to 4 examples for most of the ILP benchmark predicates. Even though it is interactive, not more than one membership query is needed to obtain the correct target program. WiM has higher efficiency...

16.

Declarative Knowledge Discovery in Industrial Databases
- Stephen Muggleton
Industry is increasingly overwhelmed by large volumes of data. For example, the pharmaceutical industry generates vast quantities of data both internally, as a side-effect of screening tests and combinatorial chemistry, and externally, from sources such as the human genome project. Industry is also becoming predominantly knowledge-driven. Increased understanding not only improves products, but is also central in market assessment and strategic decision making. From a computer science point of view, the knowledge requirements within industry often give higher emphasis to "knowing that" (declarative or descriptive knowledge) than to "knowing how" (procedural or prescriptive knowledge). Mathematical logic has always been the preferred...

17.

Partial Deduction of the Ground Representation and its Application to Integrity Checking
- Michael Leuschel; Bern Martens
Integrity constraints are very useful in many contexts, such as deductive databases and abductive and inductive logic programming. However, fully testing the integrity constraints after each update or modification can be very expensive, and methods have been developed which simplify the integrity constraints. In this paper, we pursue the goal of writing this simplification procedure as a meta-program in logic programming and then using partial deduction to obtain pre-compiled integrity checks for certain update patterns. We argue that the ground representation has to be used to write this meta-program declaratively. We however also show that, contrary to what one...

18.

Using Meta-Languages for Learning
- Simon Anthony; Alan M. Frisch
This paper proposes the use of meta-languages to provide an effective representation scheme for Machine Learning. This proposal is supported both by a demonstration of the benefits that meta-languages can bring to Inductive Logic Programming, and by the successful and widespread use of meta-languages in other search-intensive areas of Artificial Intelligence, such as planning. 1 Introduction A typical Machine Learning setting involves a teacher selecting a target concept from a concept space, and providing a learner with a labelled training sample of examples of this concept. Each example is drawn from an example space, and is labelled...

19.

Cautious Induction in Inductive Logic Programming
- Simon Anthony; Alan M. Frisch
Many top-down Inductive Logic Programming systems use a greedy, covering approach to construct hypotheses. This paper presents an alternative, cautious approach, known as cautious induction. We conjecture that cautious induction can allow better hypotheses to be found, with respect to some hypothesis quality criteria. This conjecture is supported by the presentation of an algorithm called CILS, and with a complexity analysis and empirical comparison of CILS with the Progol system. The results are encouraging and demonstrate the applicability of cautious induction to problems with noisy datasets, and to problems which require large, complex hypotheses to be learnt. 1 Introduction...

20.

A Project to Build Background Knowledge into Refinement Operators for Inductive Logic Programming
- Alan M. Frisch
This paper describes a project whose goals are to develop and study refinement operators that take account of background knowledge. As such refinement operators are relative to a theory---the background knowledge---we refer to them as theory refinement operators. Since entailment between clauses is undecidable, refinement operators are often based on subsumption, a decidable approximation to entailment.