Showing resources 1 - 20 of 3,233

  1. A framework for inductive learning of typed-unification grammars

    Liviu Ciortuz
    LIGHT, the parsing system for typed-unification grammars [3], was recently extended so as to allow the automated learning of attribute/feature path values. Motivated by the logic design of these grammars [2], the learning strategy we adopted is inspired by Inductive Logic Programming [4]; we proceed by searching through hypothesis spaces generated by logic transformations of the input grammar. Two procedures, one for generalisation and the other for specialisation, are responsible for the creation of these hypothesis spaces.

  2. Relational macros for transfer in reinforcement learning

    Lisa Torrey; Jude Shavlik; Trevor Walker; Richard Maclin
    Abstract. We describe an application of inductive logic programming to transfer learning. Transfer learning is the use of knowledge learned in a source task to improve learning in a related target task. The tasks we work with are in reinforcement-learning domains. Our approach transfers relational macros, which are finite-state machines in which the transition conditions and the node actions are represented by first-order logical clauses. We use inductive logic programming to learn a macro that characterizes successful behavior in the source task, and then use the macro for decision-making in the early learning stages of the target task. Through experiments...
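The relational macros described above are finite-state machines whose transition conditions and node actions are first-order clauses. A minimal sketch of that idea (not the authors' code; the node names, the `dist_to_goal` observation, and the `cond_near_goal` clause are illustrative stand-ins for learned rules):

```python
# A relational macro as a finite-state machine: each node carries an
# action, and transitions fire when a learned rule-like condition holds
# of the current observation. All names here are hypothetical.

def cond_near_goal(obs):
    # Stand-in for a learned first-order clause, e.g.
    # transition(move, shoot) :- dist_to_goal(D), D < 5.
    return obs["dist_to_goal"] < 5

MACRO = {
    # node: (action, [(condition, next_node), ...])
    "move":  ("step_forward", [(cond_near_goal, "shoot")]),
    "shoot": ("shoot_goal",   []),
}

def run_macro(start, observations):
    """Follow the macro, emitting one action per observation."""
    node, actions = start, []
    for obs in observations:
        action, transitions = MACRO[node]
        actions.append(action)
        for cond, nxt in transitions:
            if cond(obs):
                node = nxt
                break
    return actions
```

In the transfer setting, such a macro learned on the source task would drive the agent's early action choices on the target task before ordinary reinforcement learning takes over.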

  3. Relative Relevance of Subsets of Agent’s Knowledge

    Sławomir Nowaczyk; Jacek Malec
    We study agents situated in partially observable environments who do not have the resources to create conformant plans. Instead, they create conditional plans which are partial, and learn from experience to choose the best of them for execution. Our agent employs an incomplete symbolic deduction system based on Active Logic and Situation Calculus for reasoning about actions and their consequences. An Inductive Logic Programming algorithm generalises observations and deduced knowledge in order to distinguish “bad” plans early, before the agent's computational resources are wasted on considering them. In this paper we present experiments which show that in order for learning...

  4. Inducing Differences among Documents using Aleph with Construction of Background Knowledge

    Chieko Nakabasami
    Abstract. This paper presents a case study in which an Inductive Logic Programming (ILP) technique is applied to natural language processing. Aleph, an ILP system, is used to induce differences among documents. A Case-Based Reasoning (CBR) system is proposed for the purpose of compiling the background knowledge inputted into Aleph. In the CBR system, lexical and syntactic information concerning words in close proximity to the target word(s) in training sentences is provided in order to infer new cases effectively. The compiled background knowledge is used to determine the semantic differences in documents that are written in natural language. Tentative experiments...

  5. Empirical Learning of Natural Language Processing Tasks

    Walter Daelemans; Antal Van Den Bosch; Ton Weijters
    Language learning has thus far not been a hot application for machine-learning (ML) research. This limited attention for work on empirical learning of language knowledge and behaviour from text and speech data seems unjustified. After all, it is becoming apparent that empirical learning of Natural Language Processing (NLP) can alleviate NLP's all-time main problem, viz. the knowledge acquisition bottleneck: empirical ML methods such as rule induction, top down induction of decision trees, lazy learning, inductive logic programming, and some types of neural network learning, seem to be excellently suited to automatically induce exactly that knowledge that is hard to...

  6. RELEVANT RULE DERIVATION FOR SEMANTIC QUERY OPTIMIZATION ABSTRACT

    Junping Sun; Nittaya Kerdprasop; Kittisak Kerdprasop; Nakorn Ratchasima Thailand
    Semantic query optimization in database systems has many advantages over conventional query optimization. The success of semantic query optimization depends on the set of relevant semantic rules available to the semantic query optimizer. The semantic query optimizer utilizes a set of available semantic rules to explore extra query optimization plans for the conventional query optimizer to choose from. Semantic rules represent the dynamic database state at an instantaneous time point. Finding such a set of relevant semantic rules can be very beneficial to support both semantic and conventional query optimizations. In this paper, we will show how to use inductive logic...

  7. Abductive Stochastic Logic Programs for Metabolic Network Inhibition Learning

    Jianzhong Chen; Stephen Muggleton; José Santos
    Abstract. We revisit an application developed originally using Inductive Logic Programming (ILP) by replacing the underlying Logic Program (LP) description with Stochastic Logic Programs (SLPs), one of the underlying Probabilistic ILP (PILP) frameworks. In both the ILP and PILP cases a mixture of abduction and induction is used. The abductive ILP approach used a variant of ILP for modelling inhibition in metabolic networks. The example data was derived from studies of the effects of toxins on rats using Nuclear Magnetic Resonance (NMR) time-trace analysis of their biofluids together with background knowledge representing a subset of the Kyoto Encyclopedia of Genes...

  8. Clp(bn): Constraint logic programming for probabilistic knowledge

    Vítor Santos Costa; James Cussens
    Abstract. In Datalog, missing values are represented by Skolem constants. More generally, in logic programming missing values, or existentially quantified variables, are represented by terms built from Skolem functors. The CLP(BN) language represents the joint probability distribution over missing values in a database or logic program by using constraints to represent Skolem functions. Algorithms from inductive logic programming (ILP) can be used with only minor modification to learn CLP(BN) programs. An implementation of CLP(BN) is publicly available as part of YAP Prolog at

  9. Discourse Parsing: Learning FOL Rules based on Rich Verb Semantic Representations to automatically label Rhetorical Relations

    Rajen Subba
    We report on our work to build a discourse parser (SemDP) that uses semantic features of sentences. We use an Inductive Logic Programming (ILP) system to exploit rich verb semantics of clauses to induce rules for discourse parsing. We demonstrate that ILP can be used to learn from highly structured natural language data and that the performance of a discourse parsing model that only uses semantic information is comparable to that of state-of-the-art syntactic discourse parsers.

  10. OF

    Chi Shen; Lutz Hamel; James Kowalski; James Baglama
    Concept learning is a branch of machine learning concerned with learning how to discriminate and categorize things based on positive and negative examples. More specifically, the learning algorithm induces a description of the concept (in some representation language) from a set of positive and negative facts. Inductive logic programming can be considered a subcategory of concept learning where the representation language is first-order logic and the induced descriptions are a set of statements in first-order logic. This problem can be viewed as a search over all possible sentences in the representation language to find those that correctly predict the given...
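The abstract above frames concept learning as a search over all sentences of a representation language for those that correctly predict the given examples. A minimal sketch of that search, using conjunctions of attribute=value tests as a propositional stand-in for the first-order clauses of ILP (the attribute names and examples below are illustrative):

```python
from itertools import combinations

# Concept learning as generate-and-test search through a hypothesis
# space. A hypothesis is a conjunction of attribute=value tests; it
# covers an example when every test holds.

def covers(hypothesis, example):
    return all(example.get(attr) == val for attr, val in hypothesis)

def learn(positives, negatives):
    """Return the smallest conjunction covering all positives and no negatives."""
    tests = sorted({(a, v) for ex in positives for a, v in ex.items()})
    for size in range(len(tests) + 1):
        for hyp in combinations(tests, size):
            if all(covers(hyp, p) for p in positives) and \
               not any(covers(hyp, n) for n in negatives):
                return hyp
    return None
```

ILP systems organize the same search over first-order clauses, using generality orderings such as theta-subsumption to prune the space rather than enumerating it exhaustively.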

  11. Breeding Algebraic Structures – An Evolutionary Approach to Inductive Equational Logic Programming

    Lutz Hamel
    Concept learning is the induction of a description from a set of examples. Inductive logic programming can be considered a special case of the general notion of concept learning specifically referring to the induction of first-order theories. Both concept learning and inductive logic programming can be seen as a search over all possible sentences in some representation language for sentences that correctly explain the examples and also generalize to other sentences that are part of that concept. In this paper we explore inductive logic programming with equational logic as the representation language and genetic programming as the underlying search paradigm....

  12. Hybrid Learning of Ontology Classes

    Jens Lehmann
    Abstract. Description logics have emerged as one of the most successful formalisms for knowledge representation and reasoning. They are now widely used as a basis for ontologies in the Semantic Web. To extend and analyse ontologies, automated methods for knowledge acquisition and mining are being sought. Despite its importance for knowledge engineers, the learning problem in description logics has not been investigated as deeply as its counterpart for logic programs. We propose the novel idea of applying evolutionary inspired methods to solve this task. In particular, we show how Genetic Programming can be applied to the learning problem in...

  13. Learning phonotactics using ILP

    Stasinos Konstantopoulos; Alfa-informatica Rijksuniversiteit Groningen
    This paper describes experiments on learning Dutch phonotactic rules using Inductive Logic Programming, a machine learning approach based on the notion of inverting resolution. Different ways of approaching the problem are experimented with, and compared against each other as well as with related work on the task. Further research is outlined.

  14. Prediction-Hardness of Acyclic Conjunctive Queries

    Kouichi Hirata
    A conjunctive query problem is the problem of determining whether or not a tuple belongs to the answer of a conjunctive query over a database. Here, a tuple, a conjunctive query and a database in relational database theory are regarded, in inductive logic programming terminology, as a ground atom, a nonrecursive function-free definite clause and a finite set of ground atoms, respectively. An acyclic conjunctive query problem is a conjunctive query problem with acyclicity. Concerning the acyclic conjunctive query problem, in this paper we present hardness results for predicting acyclic conjunctive queries from an instance with a j-database...
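The correspondence in the abstract (tuple as ground atom, conjunctive query as function-free definite clause, database as finite set of ground atoms) can be made concrete with a small sketch. A query answer is a substitution under which every body atom becomes a ground atom of the database; the `parent`/`grandparent` relations below are illustrative:

```python
from itertools import product

# Database: a finite set of ground atoms, each a tuple
# (predicate, arg1, arg2, ...).
DB = {("parent", "ann", "bob"), ("parent", "bob", "cal")}

def answer(query_body, variables, constants):
    """All substitutions making every body atom a ground atom of DB.

    Brute-force enumeration over the domain; real query evaluators
    use joins instead.
    """
    answers = set()
    for values in product(constants, repeat=len(variables)):
        theta = dict(zip(variables, values))
        ground = {tuple(theta.get(t, t) for t in atom) for atom in query_body}
        if ground <= DB:
            answers.add(values)
    return answers

# The clause  grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
body = [("parent", "X", "Y"), ("parent", "Y", "Z")]
```

The acyclicity condition studied in the paper restricts how variables may be shared across body atoms, which is what makes the query problem tractable.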

  15. Comparison of graph-based and logic-based multi-relational data mining

    Nikhil S. Ketkar
    The goal of this paper is to generate insights about the differences between graph-based and logic-based approaches to multi-relational data mining by performing a case study of the graph-based system Subdue and the inductive logic programming system CProgol. We identify three key factors for comparing graph-based and logic-based multi-relational data mining; namely, the ability to discover structurally large concepts, the ability to discover semantically complicated concepts and the ability to effectively utilize background knowledge. We perform an experimental comparison of Subdue and CProgol on the Mutagenesis domain and various artificially generated Bongard problems. Experimental results indicate that Subdue can significantly outperform...

  16. Programming by Demonstration: an Inductive Learning Formulation

    Tessa A. Lau; Daniel S. Weld
    Although Programming by Demonstration (PBD) has the potential to improve the productivity of unsophisticated users, previous PBD systems have used brittle, heuristic, domain-specific approaches to execution-trace generalization. In this paper we define two application-independent methods for performing generalization that are based on well-understood machine learning technology. TGenVS uses version-space generalization, and TGenFOIL is based on the FOIL inductive logic programming algorithm. We analyze each method both theoretically and empirically, arguing that TGenVS has lower sample complexity, but TGenFOIL can learn a much more interesting class of programs.
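Version-space generalization, the technique named in the abstract above, can be sketched minimally: fold positive examples (here, demonstrated action traces) into the most specific hypothesis that still covers them all, replacing disagreeing slots with a wildcard. This is only the specific-boundary half of candidate elimination, and the trace encoding is an assumption for illustration:

```python
# Minimal version-space sketch over fixed-length conjunctive
# hypotheses, where "?" matches any value. Not the paper's system;
# the action-trace tuples below are hypothetical.

def generalize(hypothesis, example):
    """Minimally generalize `hypothesis` so it also covers `example`."""
    return tuple(h if h == e else "?" for h, e in zip(hypothesis, example))

def most_specific(examples):
    """Fold positive examples into the most specific consistent hypothesis."""
    hyp = examples[0]
    for ex in examples[1:]:
        hyp = generalize(hyp, ex)
    return hyp
```

Given two demonstrations that differ only in the filename slot, the learner generalizes exactly that slot and keeps the rest of the trace concrete.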

  17. GOODMAN’S “NEW RIDDLE”

    Branden Fitelson
    Abstract. First, a brief historical trace of the developments in confirmation theory leading up to Goodman’s infamous “grue” paradox is presented. Then, Goodman’s argument is analyzed from both Hempelian and Bayesian perspectives. A guiding analogy is drawn between certain arguments against classical deductive logic, and Goodman’s “grue” argument against classical inductive logic. The upshot of this analogy is that the “New Riddle” is not as vexing as many commentators have claimed (especially, from a Bayesian inductive-logical point of view). Specifically, the analogy reveals an intimate connection between Goodman’s problem, and the “problem of old evidence”. Several other...

  18. Enriching Ontologies through Data

    Mahsa Chitsaz
    Abstract. Along with the vast usage of ontologies in different areas, non-standard reasoning tasks have started to emerge, such as concept learning, which aims to derive new concept definitions from given instance data of an ontology. This paper proposes new scalable approaches in light-weight description logics which rely on an inductive logic technique in support of an instance query answering system.

  19. Co-Logic Programming: Extending Logic Programming with Coinduction

    Luke Simon; Ajay Bansal; Ajay Mallya; Gopal Gupta
    Abstract. In this paper we present the theory and practice of co-logic programming (co-LP for brevity), a paradigm that combines both inductive and coinductive logic programming. Co-LP is a natural generalization of logic programming and coinductive logic programming, which in turn generalizes other extensions of logic programming, such as infinite trees, lazy predicates, and concurrent communicating predicates. Co-LP has applications to rational trees, verifying infinitary properties, lazy evaluation, concurrent LP, model checking, bisimilarity proofs, etc.

  20. Knowledge Discovery and Data Mining (KDD-2003)


    Multi-Relational Data Mining (MRDM) is the multi-disciplinary field dealing with knowledge discovery from relational databases consisting of multiple tables. Mining data which consists of complex/structured objects also falls within the scope of this field, since the normalized representation of such objects in a relational database requires multiple tables. The field aims at integrating results from existing fields such as inductive logic programming, KDD, machine learning and relational databases; producing new techniques for mining multi-relational data; and practical applications of such techniques. Typical data mining approaches look for patterns in a single relation of a database. For many applications, squeezing data...
