Showing resources 1 - 20 of 3,083

  1. Learning extended logic programs

    Katsumi Inoue
    This paper presents a method to generate nonmonotonic rules with exceptions from positive/negative examples and background knowledge in Inductive Logic Programming. We adopt extended logic programs as the form of programs to be learned, where two kinds of negation—negation as failure and classical negation—are effectively used in the presence of incomplete information. While default rules are generated as specializations of general rules that cover positive examples, exceptions to general rules are identified from negative examples and are then generalized to rules for cancellation of defaults. We implemented the learning system LELP based on the proposed method. In LELP, when...

  2. Relational data mining with inductive logic programming for link discovery

    Raymond J. Mooney; Prem Melville; Jude Shavlik; Inês De Castro Dutra; Vítor Santos Costa
    Link discovery (LD) is an important task in data mining for counter-terrorism and is the focus of DARPA’s Evidence Extraction and Link Discovery (EELD) research program. Link discovery concerns the identification of complex relational patterns that indicate potentially threatening activities in large amounts of relational data. Most data-mining methods assume data is in the form of a feature-vector (a single relational table) and cannot handle multi-relational data. Inductive logic programming is a form of relational data mining that discovers rules in first-order logic from multi-relational data. This paper discusses the application of ILP to learning patterns for link discovery.

  3. Learning to parse natural language database queries into logical form

    Cynthia A. Thompson; Raymond J. Mooney
    For most natural language processing tasks, a parser that maps sentences into a semantic representation is significantly more useful than a grammar or automaton that simply recognizes syntactically well-formed strings. This paper reviews our work on using inductive logic programming methods to learn deterministic shift-reduce parsers that translate natural language into a semantic representation. We focus on the task of mapping database queries directly into executable logical form. An overview of the system is presented, followed by recent experimental results on corpora of Spanish geography queries and English job-search queries.

  4. Declarative kernels

    Paolo Frasconi; Andrea Passerini; Stephen Muggleton; Huma Lodhi
    We introduce a declarative approach to kernel design based on background knowledge expressed in the form of logic programs. The theoretical foundation of declarative kernels is mereotopology, a general theory for studying parts and wholes and for defining topological relations among parts. Declarative kernels can be used to specify a broad class of kernels over relational data and represent a step towards bridging statistical learning and inductive logic programming. The flexibility and effectiveness of declarative kernels are demonstrated in a number of real-world problems.

  5. Combining Macro-Operators with Control Knowledge

    Rocío García-durán; O Fernández; Daniel Borrajo
    Abstract. Inductive Logic Programming (ILP) methods have proven to successfully acquire knowledge under very different learning paradigms, such as supervised and unsupervised learning or relational reinforcement learning. However, very little has been done on applying them to General Problem Solving (GPS). One of the ILP-based approaches applied to GPS is HAMLET. This method learns control rules (heuristics) for a nonlinear planner, PRODIGY4.0, which is integrated into the IPSS system; control rules are used as an effective guide when building the planning search tree. Other learning approaches applied to planning generate macro-operators, building high-level blocks of actions, but increasing the...

  6. Learning Declarative Bias

    Will Bridewell; Ljupčo Todorovski
    Abstract. In this paper, we introduce an inductive logic programming approach to learning declarative bias. The target learning task is inductive process modeling, which we briefly review. Next we discuss our approach to bias induction while emphasizing predicates that characterize the knowledge and models associated with the HIPM system. We then evaluate how the learned bias affects the space of model structures that HIPM considers and how well it generalizes to other search problems in the same domain. Results indicate that the bias reduces the size of the search space without removing the most accurate structures. In addition, our approach...

  7. Relational learning of pattern-match rules for information extraction

    Mary Elaine Califf; Raymond J. Mooney
    Information extraction systems process natural language documents and locate a specific set of relevant items. Given the recent success of empirical or corpus-based approaches in other areas of natural language processing, machine learning has the potential to significantly aid the development of these knowledge-intensive systems. This paper presents a system, Rapier, that takes pairs of documents and filled templates and induces pattern-match rules that directly extract fillers for the slots in the template. The learning algorithm incorporates techniques from several inductive logic programming systems and learns unbounded patterns that include constraints on the words and part-of-speech tags surrounding...

  8. An experimental comparison of genetic programming and inductive logic programming on learning recursive list functions

    Mary Elaine Califf; Raymond J. Mooney
    This paper experimentally compares three approaches to program induction: inductive logic programming (ILP), genetic programming (GP), and genetic logic programming (GLP) (a variant of GP for inducing Prolog programs). Each of these methods was used to induce four simple, recursive, list-manipulation functions. The results indicate that ILP is the most likely to induce a correct program from small sets of random examples, while GP is generally less accurate. GLP performs the worst, and is rarely able to induce a correct program. Interpretations of these results in terms of differences in search methods and inductive biases are presented.

  9. QA UdG-UPC System at TREC-12

    Marc Massot; Daniel Ferrés; Horacio Rodríguez
    This paper describes a prototype multilingual Q&A system that we designed to participate in the Q&A Track of TREC-12. The system answers with concrete responses, so we participated in the Q&A main task for factoid questions. The main areas of our system are: (1) Inductive Logic Programming to learn the question type, (2) Clustering of Named Entities to improve Information Retrieval, and (3) Semantic relations and EuroWordNet synsets to perform language-independent answer extraction.

  10. A Framework for Set-Oriented Computation in Inductive Logic Programming and its Application in Generalizing Inverse Entailment

    Héctor Corrada Bravo; Raghu Ramakrishnan; Vítor Santos Costa
    Abstract. We propose a new approach to Inductive Logic Programming that systematically exploits caching and offers a number of advantages over current systems. It avoids redundant computation, is more amenable to the use of set-oriented generation and evaluation of hypotheses, and allows relational DBMS technology to be more easily applied to ILP systems. Further, our approach opens up new avenues such as probabilistically scoring rules during search and the generation of probabilistic rules. As a first example of the benefits of our ILP framework, we propose a scheme for defining the hypothesis search space through Inverse Entailment using multiple example...

  11. Inducing Deterministic Prolog Parsers from Treebanks: A Machine Learning Approach

    This paper presents a method for constructing deterministic Prolog parsers from corpora of parsed sentences. Our approach uses recent machine learning methods for inducing Prolog rules from examples (inductive logic programming). We discuss several advantages of this method compared to recent statistical methods and present results on learning complete parsers from portions of the ATIS corpus.

  12. Induction as a Search Procedure

    Stasinos Konstantopoulos; Rui Camacho; Nuno A. Fonseca; Vítor Santos Costa
    This chapter introduces Inductive Logic Programming from the perspective of search algorithms in Computer Science. It first briefly considers the Version Spaces approach to induction, and then focuses on Inductive Logic Programming: from its formal definition and main techniques and strategies, to priors used to restrict the search space and optimized sequential, parallel, and stochastic algorithms. The authors hope that this presentation of the theory and applications of Inductive Logic Programming will help the reader understand the theoretical underpinnings of inductive reasoning, and also provide a helpful overview of the state of the art in the domain....

  13. Learning from Interpretations: A Rooted Kernel for Ordered Hypergraphs

    Gabriel Wachman; Roni Khardon
    The paper presents a kernel for learning from ordered hypergraphs, a formalization that captures relational data as used in Inductive Logic Programming (ILP). The kernel generalizes previous approaches to graph kernels in calculating similarity based on walks in the hypergraph. Experiments on challenging chemical datasets demonstrate that the kernel outperforms existing ILP methods, and is competitive with state-of-the-art graph kernels. The experiments also demonstrate that the encoding of graph data can affect performance dramatically, a fact that can be useful beyond kernel methods.

  14. Markov Logic Networks

    Matthew Richardson; Pedro Domingos
    Abstract. We propose a simple approach to combining first-order logic and probabilistic graphical models in a single representation. A Markov logic network (MLN) is a first-order knowledge base with a weight attached to each formula (or clause). Together with a set of constants representing objects in the domain, it specifies a ground Markov network containing one feature for each possible grounding of a first-order formula in the KB, with the corresponding weight. Inference in MLNs is performed by MCMC over the minimal subset of the ground network required for answering the query. Weights are efficiently learned from relational databases by...

  15. Grounding for model expansion in k-guarded formulas with inductive definitions

    Murray Patterson; Yongmei Liu; Eugenia Ternovska; Arvind Gupta
    Mitchell and Ternovska [2005] proposed a constraint programming framework based on classical logic extended with inductive definitions. They formulate a search problem as the problem of model expansion (MX), which is the problem of expanding a given structure with new relations so that it satisfies a given formula. Their long-term goal is to produce practical tools to solve combinatorial search problems, especially those in NP. In this framework, a problem is encoded in a logic, an instance of the problem is represented by a finite structure, and a solver generates solutions to the problem. This approach relies on propositionalisation of...

  16. Learning the structure of Markov logic networks

    Stanley Kok; Pedro Domingos
    Markov logic networks (MLNs) combine logic and probability by attaching weights to first-order clauses, and viewing these as templates for features of Markov networks. In this paper we develop an algorithm for learning the structure of MLNs from relational databases, combining ideas from inductive logic programming (ILP) and feature induction in Markov networks. The algorithm performs a beam or shortest-first search of the space of clauses, guided by a weighted pseudo-likelihood measure. This requires computing the optimal weights for each candidate structure, but we show how this can be done efficiently. The algorithm can be used to learn an MLN...

  17. Building Relational World Models for Reinforcement Learning

    Trevor Walker; Lisa Torrey; Jude Shavlik; Richard Maclin
    Abstract. Many reinforcement learning domains are highly relational. While traditional temporal-difference methods can be applied to these domains, they are limited in their capacity to exploit the relational nature of the domain. Our algorithm, AMBIL, constructs relational world models in the form of relational Markov decision processes (MDPs). AMBIL works backwards from collections of high-reward states, utilizing inductive logic programming to learn their preimage, logical definitions of the region of state space that leads to the high-reward states via some action. These learned preimages are chained together to form an MDP that abstractly represents the domain. AMBIL estimates the reward...

  18. Speeding up relational data mining by learning to estimate candidate hypothesis scores

    Frank Dimaio; Jude Shavlik
    The motivation behind multi-relational data mining is knowledge discovery in relational databases containing multiple related tables. One difficulty relational data mining faces is managing intractably large hypothesis spaces. We attempt to overcome this difficulty by first sampling the hypothesis space. We generate a small set of hypotheses, uniformly sampled from the space of candidate hypotheses, and evaluate this set on actual data. These hypotheses and their corresponding evaluation scores serve as training data in learning an approximate hypothesis evaluator. We use this approximate evaluation to quickly rate potential hypotheses without needing to score them on actual data. We test our...

  19. Relational data mining with inductive logic programming for link discovery

    Raymond J. Mooney; Prem Melville; Jude Shavlik; Inês De Castro Dutra; Vítor Santos Costa
    Link discovery (LD) is an important task in data mining for counter-terrorism and is the focus of DARPA’s Evidence Extraction and Link Discovery (EELD) research program. Link discovery concerns the identification of complex relational patterns that indicate potentially threatening activities in large amounts of relational data. Most data-mining methods assume data is in the form of a feature-vector (a single relational table) and cannot handle multi-relational data. Inductive logic programming is a form...

  20. Incremental Learning of Event Definitions with Inductive Logic Programming

    Nikos Katzouris; George Paliouras; Alexander Artikis
    Abstract Event Recognition systems rely on properly engineered knowledge bases of event definitions to infer occurrences of events in time. The manual development of such knowledge is a tedious and error-prone task, thus event-based applications may benefit from automated knowledge construction techniques, such as Inductive Logic Programming (ILP), which combines machine learning with the declarative and formal semantics of First-Order Logic. However, learning temporal logical formalisms, which are typically utilized by logic-based Event Recognition systems, is a challenging task, which most ILP systems cannot fully undertake. In addition, event-based data are usually massive and collected at different times and under...
