Classification by Discipline

UNESCO Nomenclature > (11) Logic > (1104) Inductive logic
(1104.01) Induction  (1104.02) Intuitionism
(1104.03) Probability  (1104.99) Other (specify)

Showing resources 1 - 20 of 2,619

1. Compression, Significance and Accuracy - Stephen Muggleton; Ashwin Srinivasan; Michael Bain
Inductive Logic Programming (ILP) involves learning relational concepts from examples and background knowledge. To date all ILP learning systems make use of tests inherited from propositional and decision tree learning for evaluating the significance of hypotheses. None of these significance tests take account of the relevance or utility of the background knowledge. In this paper we describe a method, called HP-compression, of evaluating the significance of a hypothesis based on the degree to which it allows compression of the observed data with respect to the background knowledge. This can be measured by comparing the lengths of the input and output...
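The significance test sketched in this abstract compares the length of the input (the examples) with the length of the output (the hypothesis plus whatever it fails to explain). As a rough illustration only, with invented encoding costs rather than the authors' actual scheme:

```python
# Illustrative sketch only (not the paper's measure): a toy MDL-style score in the
# spirit of "compression of the observed data with respect to the background
# knowledge", comparing the cost of listing the examples verbatim with the cost
# of the hypothesis plus its exceptions. All encoding costs here are invented.

def description_length(n_items: int, bits_per_item: float) -> float:
    """Crude code length: n_items symbols at a fixed cost each."""
    return n_items * bits_per_item

def compression(n_examples: int, n_clauses: int, n_exceptions: int,
                bits_per_example: float = 8.0, bits_per_clause: float = 16.0) -> float:
    """Positive values mean the hypothesis encodes the data more cheaply
    than listing the examples directly."""
    input_length = description_length(n_examples, bits_per_example)
    output_length = (description_length(n_clauses, bits_per_clause)
                     + description_length(n_exceptions, bits_per_example))
    return input_length - output_length

# A hypothesis of 2 clauses covering 100 examples with 3 exceptions compresses well:
print(compression(n_examples=100, n_clauses=2, n_exceptions=3))  # 744.0
```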

2. Inductive logic and empirical psychology - Nick Chater; Mike Oaksford; Ulrike Hahn; Evan Heit
An inductive logic is a system for reasoning that derives conclusions which are plausible or credible, but are nonetheless not certain. Inductive logic thus goes beyond the more familiar systems of deductive logic, in which the truth of the premises guarantees the truth of the conclusions. Thus, from "All people are mortal,"...

3. Achievements and prospects of learning word morphology with inductive logic programming - Dimitar Kazakov
kazakov@cs.york.ac.uk

4. Learning the past tense of English verbs using Inductive Logic Programming - Raymond J Mooney; Mary Elaine Cali
This paper presents results on using a new inductive logic programming method called Foidl to learn the past tense of English verbs. The past tense task has been widely studied in the context of the symbolic/connectionist debate. Previous papers have presented results using various neural-network and decision-tree learning methods. We have developed a technique for learning a special type of Prolog program called a first-order decision list, defined as an ordered list of clauses each ending in a cut. Foidl is based on Foil but employs intensional background knowledge and avoids the need for explicit negative...
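An "ordered list of clauses each ending in a cut" simply means the first clause whose condition applies decides the answer, so specific exceptions can be listed before a general default. A hypothetical Python analogue for the past-tense task follows (the rules and word lists are made up for illustration; they are not FOIDL's learned program):

```python
# Hypothetical sketch of a first-order decision list's control behaviour:
# ordered rules, first match wins, which is what a Prolog clause ending in a
# cut (!) achieves. The rules below are invented, not FOIDL output.

IRREGULARS = {"go": "went", "eat": "ate"}

RULES = [
    # (condition, action): exceptions first, the general rule last.
    (lambda w: w in IRREGULARS,   lambda w: IRREGULARS[w]),
    (lambda w: w.endswith("e"),   lambda w: w + "d"),
    (lambda w: True,              lambda w: w + "ed"),
]

def past_tense(word: str) -> str:
    for condition, action in RULES:
        if condition(word):          # the first applicable rule commits, like a cut
            return action(word)
    raise AssertionError("unreachable: the last rule always applies")

print(past_tense("go"), past_tense("bake"), past_tense("walk"))  # went baked walked
```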

5. Probabilistic inductive logic programming - Luc De Raedt; Kristian Kersting; K. U. Leuven
Probabilistic inductive logic programming, a.k.a. statistical relational learning, addresses one of the central questions of artificial intelligence: the integration of probabilistic reasoning with machine learning and first-order and relational logic representations. A rich variety of different formalisms and learning techniques have been developed. A unifying characterization of the underlying learning settings, however, is missing so far. In this chapter, we start from inductive logic programming and sketch how the inductive logic programming formalisms, settings and techniques can be extended to the statistical case. More precisely, we outline three classical settings for inductive logic programming, namely learning from entailment,...
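For orientation, the "learning from entailment" setting mentioned at the end is, in its usual formulation, the classical ILP task (the chapter then lifts such settings to the probabilistic case). Roughly: given background knowledge $B$ and example sets $E^{+}$, $E^{-}$, find a hypothesis $H$ such that

$$\forall e \in E^{+}:\ B \cup H \models e \qquad\text{and}\qquad \forall e \in E^{-}:\ B \cup H \not\models e.$$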

6. Lazy propositionalisation for relational learning - Erick Alphonse
A number of Inductive Logic Programming (ILP) systems have addressed the problem of learning First Order Logic (FOL) discriminant definitions by first reformulating the FOL learning problem into an attribute-value one and then applying efficient learning techniques dedicated to this simpler formalism. The complexity of such propositionalisation methods is now in the size of the reformulated problem, which is exponential when tackling non-determinate relational problems. We propose a method that selectively propositionalises the FOL training set by interleaving attribute-value reformulation and algebraic resolution. It avoids, as much as possible, the generation of reformulated examples which are not relevant w.r.t. the...
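Propositionalisation, as described here, turns each relational (first-order) example into a fixed-length attribute-value vector so that an ordinary propositional learner can be applied. A minimal sketch under invented names and hand-written features (a real system generates the features from the background knowledge; the paper's contribution is to do so lazily and selectively):

```python
# Illustrative propositionalisation sketch (names and features are invented):
# each relational example becomes a boolean feature vector, one bit per
# first-order test, after which any attribute-value learner can be applied.

# A relational example: facts about one molecule-like object.
example = {
    "atom": [("a1", "carbon"), ("a2", "oxygen")],
    "bond": [("a1", "a2")],
}

# Hand-written first-order features; a propositionalisation system would
# construct these from the background knowledge instead.
features = {
    "has_oxygen":   lambda ex: any(elem == "oxygen" for _, elem in ex["atom"]),
    "has_carbon":   lambda ex: any(elem == "carbon" for _, elem in ex["atom"]),
    "has_c_o_bond": lambda ex: any(
        {"carbon", "oxygen"} == {elem for a, elem in ex["atom"] if a in pair}
        for pair in ex["bond"]),
}

vector = {name: test(example) for name, test in features.items()}
print(vector)  # {'has_oxygen': True, 'has_carbon': True, 'has_c_o_bond': True}
```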

7. Induction of First-Order Decision Lists: Results on Learning the Past Tense of English Verbs - Raymond Mooney; Mary Elaine Califf
This paper presents a method for inducing logic programs from examples that learns a new class of concepts called first-order decision lists, defined as ordered lists of clauses each ending in a cut. The method, called Foidl, is based on Foil (Quinlan, 1990) but employs intensional background knowledge and avoids the need for explicit negative examples. It is particularly useful for problems that involve rules with specific exceptions, such as learning the past-tense of English verbs, a task widely studied in the context of the symbolic/connectionist debate. Foidl is able to learn concise, accurate programs for this problem from significantly...

8. Knowledge Discovery in Databases - An Inductive Logic Programming Approach - Katharina Morik
The need for learning from databases has increased along with their number and size. The new field of Knowledge Discovery in Databases (KDD) develops methods that discover relevant knowledge in very large databases. Machine learning, statistics, and database methodology contribute to this exciting field. In this paper, the discovery of knowledge in the form of Horn clauses is described. A case study of directly coupling an inductive logic programming (ILP) algorithm with a database system is presented. 1 Introduction Databases are used in almost all branches of industry and commerce. The aim of KDD is to discover rules hidden...

9. Least Generalizations and Greatest Specializations of Sets of Clauses - Shan-Hwei Nienhuys-Cheng; Ronald De Wolf
The main operations in Inductive Logic Programming (ILP) are generalization and specialization, which only make sense in a generality order. In ILP, the three most important generality orders are subsumption, implication and implication relative to background knowledge. The two languages used most often are languages of clauses and languages of only Horn clauses. This gives a total of six different ordered languages. In this paper, we give a systematic treatment of the existence or non-existence of least generalizations and greatest specializations of finite sets of clauses in each of these six ordered sets. We survey results already obtained by others...
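The generalization direction discussed here can be made concrete with Plotkin's least general generalization (lgg). Below is a minimal sketch for terms only, ignoring the clause-level and relative-to-background orders the paper actually studies:

```python
# Minimal sketch of Plotkin-style least general generalization (lgg) on terms.
# Terms are tuples ("f", arg1, ...) or constants; clause-level lgg and the
# implication/relative orders treated in the paper are out of scope here.

def lgg(s, t, variables=None):
    if variables is None:
        variables = {}
    # Compound terms with the same functor and arity generalize argument-wise.
    if (isinstance(s, tuple) and isinstance(t, tuple)
            and s[0] == t[0] and len(s) == len(t)):
        return (s[0],) + tuple(lgg(a, b, variables) for a, b in zip(s[1:], t[1:]))
    if s == t:
        return s
    # Distinct subterms map to a variable; the same pair always gets the same one.
    key = (s, t)
    if key not in variables:
        variables[key] = f"X{len(variables)}"
    return variables[key]

# lgg of p(a, f(a)) and p(b, f(b)) is p(X0, f(X0)):
print(lgg(("p", "a", ("f", "a")), ("p", "b", ("f", "b"))))
# ('p', 'X0', ('f', 'X0'))
```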

10. Inductive Learning of Characteristic Concept Descriptions - Werner Emde
This paper deals with the problem of learning characteristic concept descriptions from examples and describes a new generalization approach implemented in the system Cola-2. The approach tries to take advantage of the information which can be induced from descriptions of unclassified objects using a conceptual clustering algorithm. Experimental results in various real-world domains strongly support the hypothesis that the new approach delivers more correct (and possibly more comprehensible) concept descriptions than existing methods, if the induced concept descriptions are also used to classify objects which belong to concepts which were not present in the training data set. This paper describes...

11. Inverse Narrowing for the Induction of Functional Logic Programs - J. Hernandez-orallo; M.J. Ramirez-Quintana
We present a framework for the Induction of Functional Logic Programs (IFLP) from facts. This can be seen as an extension of the now consolidated field of Inductive Logic Programming (ILP). Inspired by the inverse resolution operator of ILP, we study the reversal of narrowing, the usual operational mechanism of Functional Logic Programming. We also generalize the selection criteria for guiding the search, including coherence criteria in addition to the MDL principle. A non-incremental learning algorithm and a more sophisticated incremental extension of it are presented. We discuss the advantages of IFLP over ILP, most of which are inherited...

12. Integrity Constraints in ILP using a Monte Carlo approach - Alipio Jorge; Pavel Brazdil
Many state-of-the-art ILP systems require large numbers of negative examples to avoid overgeneralization. This is a considerable disadvantage for many ILP applications, namely inductive program synthesis, where relatively small and sparse example sets are a more realistic scenario. Integrity constraints are first-order clauses that can play the role of negative examples in an inductive process. One integrity constraint can replace a long list of ground negative examples. However, checking the consistency of a program with a set of integrity constraints usually involves heavy theorem-proving. We propose an efficient constraint satisfaction algorithm that applies to a wide variety...
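The idea of a Monte Carlo consistency check can be illustrated very roughly: rather than proving that the learned program satisfies an integrity constraint for every instance, sample ground instances and look for a counterexample. The sketch below is only an illustration of that idea under invented predicates, not the paper's algorithm:

```python
# Illustrative Monte Carlo check (not the paper's algorithm): instead of proving
# that a learned program satisfies an integrity constraint for every instance,
# sample ground instances and look for a violation.
import random

def learned_even(n: int) -> bool:
    """Stand-in for a learned predicate; deliberately wrong for 7."""
    return n % 2 == 0 or n == 7

def constraint(n: int) -> bool:
    """Integrity constraint: nothing the program calls even may be odd."""
    return not (learned_even(n) and n % 2 == 1)

def monte_carlo_consistent(samples: int = 1000, seed: int = 0) -> bool:
    rng = random.Random(seed)
    for _ in range(samples):
        n = rng.randrange(0, 100)
        if not constraint(n):
            print(f"violation found for n={n}")
            return False
    return True  # no violation observed (probabilistic evidence only)

print(monte_carlo_consistent())
```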

13. Using Inductive Logic Programming for Natural Language Processing - James Cussens; Stephen Muggleton; Ashwin Srinivasan
We summarise recent work on using Inductive Logic Programming (ILP) for Natural Language Processing (NLP). ILP performs learning in a first-order logical setting, and is thus well-suited to induce over the various structured representations used in NLP. We present Stochastic Logic Programs (SLPs) and demonstrate their use in ILP when learning from positive examples only. We also give accounts of work on learning grammars from children's books and part-of-speech tagging. 1 Inductive Logic Programming and Progol By using computational logic as the representational mechanism for hypotheses and observations, Inductive Logic Programming (ILP) can overcome the two main limitations of classical...

14. Database Dependency Discovery: A Machine Learning Approach - Peter A. Flach; Iztok Savnik
The algorithms in this paper are designed such that they can easily be generalised to other kinds of dependencies. As in current approaches to computational induction such as inductive logic programming, we distinguish between top-down algorithms and bottom-up algorithms. In a top-down approach, hypotheses are generated in a systematic way and then tested against the given relation. In a bottom-up approach, the relation is inspected in order to see what dependencies it may satisfy or violate. We give algorithms for both approaches. Keywords: Induction, attribute dependency, database reverse engineering, data mining. 1. Introduction
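The bottom-up direction can be made concrete with the simplest case, a functional dependency: inspect the relation and check whether two rows ever agree on attribute X but disagree on attribute Y. An illustrative sketch with invented data, not the paper's algorithms:

```python
# Simple bottom-up dependency check (illustrative only): scan the relation and
# test whether attribute x functionally determines attribute y.

def satisfies_fd(rows, x, y):
    """True if no two rows agree on attribute x but differ on attribute y."""
    seen = {}
    for row in rows:
        key, value = row[x], row[y]
        if key in seen and seen[key] != value:
            return False
        seen[key] = value
    return True

employees = [
    {"dept": "sales", "city": "Leeds", "manager": "Ann"},
    {"dept": "sales", "city": "Hull",  "manager": "Ann"},
    {"dept": "it",    "city": "York",  "manager": "Bob"},
]

print(satisfies_fd(employees, "dept", "manager"))  # True:  dept -> manager holds
print(satisfies_fd(employees, "dept", "city"))     # False: same dept, different city
```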

15. Learning to Parse Database Queries Using Inductive Logic Programming - John M. Zelle; Raymond J. Mooney
This paper presents recent work using the Chill parser acquisition system to automate the construction of a natural-language interface for database queries. Chill treats parser acquisition as the learning of search-control rules within a logic program representing a shift-reduce parser and uses techniques from Inductive Logic Programming to learn relational control knowledge. Starting with a general framework for constructing a suitable logical form, Chill is able to train on a corpus comprising sentences paired with database queries and induce parsers that map subsequent sentences directly into executable queries. Experimental results with a complete database-query application for U.S. geography show that...

16. Proof Theory in Type Theory - Thierry Coquand
The negative translation provides a general way to make constructive sense of some non-effective reasoning. However, this method has some limitations. It does not work in the presence of the axiom of description/choice. In this note, we analyse the interaction of classical logic with generalised inductive definitions. In the metatheory, we allow generalised inductive definitions, but with an intuitionistic logic. Actually, we can take as our metatheory a system like Martin-Löf type theory. The starting motivation of this work was a comparison between the work of Tait/Schütte and the work of Lorenzen/Novikov on cut-elimination. From a constructive perspective, the...
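For orientation, one standard form of the negative translation referred to here is the Gödel-Gentzen double-negation translation (the note may use a variant). It maps each formula $A$ to a formula $A^{N}$ that is intuitionistically provable whenever $A$ is classically provable:

\begin{align*}
P^{N} &= \lnot\lnot P \quad (P \text{ atomic}), &
(A \land B)^{N} &= A^{N} \land B^{N}, \\
(A \lor B)^{N} &= \lnot(\lnot A^{N} \land \lnot B^{N}), &
(A \to B)^{N} &= A^{N} \to B^{N}, \\
(\forall x\, A)^{N} &= \forall x\, A^{N}, &
(\exists x\, A)^{N} &= \lnot \forall x\, \lnot A^{N}.
\end{align*}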

17. Inductive Learning of Normal Clauses - Christel Vrain; Lionel Martin
In this paper, we are interested in the induction of normal clauses. Several semantics have been proposed for normal programs; we consider here the well-founded semantics, based on a three-valued logic. We have generalized to this semantics the classical constraint: all the positive examples must be covered by the learned program and all the negative ones must be rejected. In the case of inductive learning, this constraint seems too strong, and we have defined a weaker criterion: we require that a positive (resp. negative) example is not considered False (resp. True) by the learned program. This study has...

18. Induction of Constraint Logic Programs - Lionel Martin; Christel Vrain
Inductive Logic Programming is mainly concerned with the problem of learning concept definitions from positive and negative examples of these concepts and background knowledge. Because of complexity problems, the underlying first-order language is often restricted to variables, predicates and constants. In this paper, we propose a new approach for learning logic programs containing function symbols other than constants. The underlying idea is to consider a domain that makes it possible to interpret the function symbols and to compute the interest of a given value for discriminating positive and negative examples. This is modelled in the framework of Constraint Logic Programming...

19. Predicate invention in inductive program synthesis - Pierre Flener
In Inductive Logic Programming, predicate invention is the process of introducing a hitherto unknown predicate, and its description, into the description of the currently learned predicate. This is only necessary when a finite axiomatization of the current predicate is otherwise impossible, in which case the description of the invented predicate is recursive. Necessary predicate invention is therefore a program synthesis task, and is thus best done by a synthesizer rather than by a general-purpose learner, because one can then re-use the vast body of knowledge about recursive algorithm design. Taking a schema-guided approach, I show how predicate invention...
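A loose functional analogue of why invention can matter: the target concept is defined via an auxiliary ("invented") helper, and it is the helper that carries the recursion. The reverse-with-accumulator example below is only a hypothetical illustration of that shape, not one of the paper's schemas:

```python
# Hypothetical illustration of the shape of predicate invention: the target
# concept delegates to an auxiliary, recursive helper. Not taken from the
# paper's schema-guided method.

def reverse(xs):
    return _rev_acc(xs, [])            # target concept delegates to the helper

def _rev_acc(xs, acc):                 # the "invented", recursive auxiliary
    if not xs:
        return acc
    return _rev_acc(xs[1:], [xs[0]] + acc)

print(reverse([1, 2, 3]))  # [3, 2, 1]
```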

20. Applying Data Mining Techniques to e-Learning Problems - Félix Castro; Alfredo Vellido; Àngela Nebot; Francisco Mugica
This chapter aims to provide an up-to-date snapshot of the current state of research and applications of Data Mining methods in e-learning. The cross-fertilization of both areas is still in its infancy, and even academic references are scarce on the ground, although some leading education-related publications are already beginning to pay attention to this new field. In order to offer a reasonable organization of the available bibliographic information according to different criteria, firstly, and from the Data Mining practitioner's point of view, references are organized according to the type of modeling techniques used, which include: Neural Networks, Genetic Algorithms,...
