Classification by Discipline

UNESCO Nomenclature > (11) Logic > (1104) Inductive logic
(1104.01) Induction (1104.02) Intuitionism
(1104.03) Probability (1104.99) Others (specify)

Showing resources 1 - 20 of 2,488

1. Learning Simple Phonotactics - Erik F. Tjong Kim Sang; John Nerbonne
The present paper compares stochastic learning (Hidden Markov Models), symbolic learning (Inductive Logic Programming), and connectionist learning (Simple Recurrent Networks using backpropagation) on a single, linguistically fairly simple task, that of learning enough phonotactics to distinguish words from non-words for a simplified set of Dutch, the monosyllables. The methods are all tested using 10% reserved data as well as a comparable number of randomly generated strings. Orthographic and phonetic representations are compared. The results indicate that while stochastic and symbolic methods have little difficulty with the task, connectionist methods do. 1 Introduction This paper describes a study of the...
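As a rough illustration of the task setup described above (hold out 10% of the words, add an equal number of randomly generated strings, and check whether a learned model separates the two), here is a hedged sketch in Python. It uses a plain character-bigram scorer as a stand-in for the three learners compared in the paper, and an invented handful of Dutch-looking words; none of this is the authors' data or code.

```python
import math
import random
from collections import Counter

def train_bigram(words):
    """Character-bigram counts with word-boundary markers."""
    counts, context = Counter(), Counter()
    for w in words:
        padded = "^" + w + "$"
        for a, b in zip(padded, padded[1:]):
            counts[(a, b)] += 1
            context[a] += 1
    return counts, context

def score(word, counts, context, vocab_size=30):
    """Add-one smoothed log-probability per character of a string."""
    padded = "^" + word + "$"
    logp = 0.0
    for a, b in zip(padded, padded[1:]):
        logp += math.log((counts[(a, b)] + 1) / (context[a] + vocab_size))
    return logp / (len(padded) - 1)

def random_string(alphabet, length):
    return "".join(random.choice(alphabet) for _ in range(length))

# Hypothetical word list standing in for the Dutch monosyllables.
words = ["straat", "klein", "groot", "fiets", "brood", "kaas", "huis", "markt"]
random.shuffle(words)
held_out = words[: max(1, len(words) // 10)]   # the "10% reserved data"
training = words[len(held_out):]

counts, context = train_bigram(training)
alphabet = sorted({c for w in training for c in w})
fakes = [random_string(alphabet, len(w)) for w in held_out]

# Real held-out words should score noticeably higher than random strings.
for w in held_out + fakes:
    print(f"{w:10s} {score(w, counts, context):6.2f}")
```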

2. Database Mining through Inductive Logic Programming - Himanshu Gupta; Iain Mclaren; Alfred Vella
Rapid growth in the automation of business transactions has led to an explosion in the size of databases. It has been realised for a long time that the data in these databases contains hidden information which needs to be extracted. Data mining is a step in this direction and aims to find potentially useful and non-trivial information from these databases in the form of patterns. As the size and complexity of these databases increase, the question that normally arises is "Are the existing data mining techniques efficient enough for large databases?" This paper addresses this issue and looks at an...

3. Agents Learning in a Three-Valued Logical Setting - Evelina Lamma; Fabrizio Riguzzi; Luís Moniz Pereira
We show that the adoption of a three-valued setting for inductive concept learning is particularly useful for learning in single and multiple agent systems. Distinguishing between what is true, what is false and what is unknown can be useful in situations where decisions have to be taken on the basis of scarce information. Such a situation occurs, for example, when an agent incrementally gathers information from the surrounding world and has to select its own actions on the basis of such acquired knowledge. In a three-valued setting, we learn a definition for both the target concept and its opposite, considering positive...
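The three-valued decision rule sketched in this abstract (true if the learned definition of the concept covers an example, false if the learned definition of its opposite does, unknown otherwise) can be illustrated in a few lines of Python. The rule representation and the toy "safe cell" concept below are assumptions of this sketch, not anything taken from the paper.

```python
# A minimal sketch of three-valued classification, assuming learned rule sets
# for a concept and for its opposite are given as Python predicates.
TRUE, FALSE, UNKNOWN = "true", "false", "unknown"

def classify(example, positive_rules, negative_rules):
    """Return true/false/unknown depending on which learned definition covers the example."""
    if any(rule(example) for rule in positive_rules):
        return TRUE
    if any(rule(example) for rule in negative_rules):
        return FALSE
    return UNKNOWN   # scarce information: neither definition applies

# Illustrative rules for a toy "safe_cell" concept in a grid-world agent.
positive_rules = [lambda e: e.get("visited") and not e.get("trap_seen")]
negative_rules = [lambda e: e.get("trap_seen")]

print(classify({"visited": True, "trap_seen": False}, positive_rules, negative_rules))   # true
print(classify({"visited": False, "trap_seen": True}, positive_rules, negative_rules))   # false
print(classify({"visited": False, "trap_seen": False}, positive_rules, negative_rules))  # unknown
```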

4. Higher Order Generalization - Jianguo Lu; Masateru Harao; Masami Hagiya
Generalization is a fundamental operation of inductive inference. While first order syntactic generalization (anti-unification) is well understood, its various extensions are needed in applications. This paper discusses syntactic higher order generalization in a higher order language 2[1]. Based on the application ordering, we proved the least general generalization exists and is unique up to renaming. An algorithm to compute the least general generalization is presented. Keywords: higher order logic, unification, anti-unification, generalization. 1 Introduction The meaning of the word generalization is so general that we can find its occurrences in almost every area of study. In computer science, especially in...
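For readers unfamiliar with the first-order baseline mentioned here, the following is a small sketch of Plotkin-style anti-unification (least general generalization) of first-order terms; it does not attempt the higher-order extension the paper develops, and the tuple-based term representation is an assumption of the sketch.

```python
# A minimal sketch of first-order anti-unification: terms are tuples
# (functor, args...) or plain strings for constants. Disagreeing subterm
# pairs are replaced by variables, with repeated pairs reusing the same variable.
from itertools import count

def anti_unify(s, t, table=None, fresh=None):
    if table is None:
        table, fresh = {}, count()
    # Identical terms generalize to themselves.
    if s == t:
        return s
    # Same functor and arity: generalize argument-wise.
    if (isinstance(s, tuple) and isinstance(t, tuple)
            and s[0] == t[0] and len(s) == len(t)):
        return (s[0],) + tuple(anti_unify(a, b, table, fresh)
                               for a, b in zip(s[1:], t[1:]))
    # Otherwise introduce (or reuse) a variable for this pair of subterms.
    if (s, t) not in table:
        table[(s, t)] = f"X{next(fresh)}"
    return table[(s, t)]

# lgg of f(a, g(a)) and f(b, g(b)) is f(X0, g(X0)).
print(anti_unify(("f", "a", ("g", "a")), ("f", "b", ("g", "b"))))
```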

5. A Multistrategy Approach to Relational Knowledge Discovery in Databases - Katharina Morik; Peter Brockhausen
When learning from very large databases, the reduction of complexity is of highest importance. Two extremes of making knowledge discovery in databases (KDD) feasible have been put forward. One extreme is to choose a very simple hypothesis language and so to be capable of very fast learning on real-world databases. The opposite extreme is to select a small data set and be capable of learning very expressive (first-order logic) hypotheses. A multistrategy approach allows us to combine most of the advantages and exclude most of the disadvantages. Simpler learning algorithms detect hierarchies that are used in order to structure the...

6. Multi Instance Neural Networks - Jan Ramon; Luc De Raedt
This paper is concerned with extending neural networks to multi-instance learning. In multi-instance learning, each example corresponds to a set of tuples in a single relation. Furthermore, examples are classified as positive if at least one tuple (i.e. at least one attribute-value pair) satisfies certain conditions. If none of the tuples satisfy the requirements, the example is classified as negative. We will study how to extend standard neural networks (and backpropagation) to multi-instance learning. It is clear that the multi-instance setting is more expressive than the attribute-value setting, but less expressive than e.g. relational learning or inductive logic programming....
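The bag-level convention described above (an example is positive iff at least one of its tuples satisfies the conditions) is easy to state in code. The sketch below lifts an arbitrary instance-level scorer to bags; the linear scorer and its parameters are illustrative stand-ins, not the networks studied in the paper.

```python
# A minimal sketch of the multi-instance convention: a bag (set of tuples) is
# positive iff at least one instance satisfies the concept. Taking the best
# instance score is one common way to lift an instance model to bags.
def instance_score(x, weights, bias):
    return sum(w * xi for w, xi in zip(weights, x)) + bias

def bag_label(bag, weights, bias, threshold=0.0):
    """Positive iff some instance scores above the threshold."""
    return any(instance_score(x, weights, bias) > threshold for x in bag)

weights, bias = [1.0, -1.0], -0.5           # illustrative parameters
positive_bag = [(0.1, 0.9), (1.5, 0.2)]     # second tuple satisfies the condition
negative_bag = [(0.1, 0.9), (0.2, 0.8)]     # no tuple does

print(bag_label(positive_bag, weights, bias))  # True
print(bag_label(negative_bag, weights, bias))  # False
```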

7. Executing Query Packs in ILP - Hendrik Blockeel; Luc Dehaspe; Jan Ramon; Bart Demoen; Gerda Janssens; Henk Vandecasteele
Inductive logic programming systems usually send large numbers of queries to a database. The lattice structure from which these queries are typically selected causes many of these queries to be highly similar. As a consequence, independent execution of all queries may involve a lot of redundant computation. We propose a mechanism for executing a hierarchically structured set of queries (a "query pack") through which a lot of redundancy in the computation is removed. We have incorporated our query pack execution mechanism in the ILP systems Tilde and Warmr by implementing a new Prolog engine ilProlog which provides support for pack...
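The redundancy the query-pack mechanism removes can be pictured with a toy example: queries sharing a prefix of goals are arranged in a tree and the shared prefix is evaluated only once. The sketch below does this for simple Python filters over a list of facts; the real mechanism operates on Prolog queries inside the ilProlog engine, so this is only an analogy.

```python
# A minimal sketch of the query-pack idea: shared query prefixes are evaluated
# once at a tree node, and only the surviving facts are passed to refinements.
class PackNode:
    def __init__(self, goal=None):
        self.goal = goal          # filter applied at this node (None at the root)
        self.children = []        # refinements sharing this prefix
        self.query_id = None      # set if a complete query ends here

def execute(node, facts, results):
    """Filter once at this node, then pass the surviving facts to all children."""
    surviving = facts if node.goal is None else [f for f in facts if node.goal(f)]
    if node.query_id is not None:
        results[node.query_id] = bool(surviving)   # query succeeds if anything survives
    for child in node.children:
        execute(child, surviving, results)

# Pack for three queries sharing the prefix "age > 30":
#   q1: age > 30 and smoker    q2: age > 30 and not smoker    q3: age > 30
facts = [{"age": 45, "smoker": True}, {"age": 25, "smoker": False}]
root = PackNode()
shared = PackNode(lambda f: f["age"] > 30)
shared.query_id = "q3"
q1 = PackNode(lambda f: f["smoker"])
q1.query_id = "q1"
q2 = PackNode(lambda f: not f["smoker"])
q2.query_id = "q2"
shared.children = [q1, q2]
root.children = [shared]

results = {}
execute(root, facts, results)
print(results)   # {'q3': True, 'q1': True, 'q2': False}
```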

8. Theory completion using Inverse Entailment - S. H. Muggleton; C. H. Bryant
The main real-world applications of Inductive Logic Programming (ILP) to date involve the "Observation Predicate Learning" (OPL) assumption, in which both the examples and hypotheses define the same predicate. However, in both scientific discovery and language learning potential applications exist in which OPL does not hold. OPL is ingrained within the theory and performance testing of Machine Learning. A general ILP technique called "Theory Completion using Inverse Entailment" (TCIE) is introduced which is applicable to non-OPL applications. TCIE is based on inverse entailment and is closely allied to abductive inference. The implementation of TCIE within Progol5.0 is described. The implementation...

9. Biases and their Effects in Inductive Logic Programming - Birgit Tausend
In inductive learning, the shift of the representation language of the hypotheses from attribute-value languages to the Horn clause logic used by Inductive Logic Programming systems accounts for a very complex hypothesis space. In order to reduce this complexity, most of these systems use biases. In this paper, we study the influence of these biases on the size of the hypothesis space. For this comparison, we first identify the basic constituents of which the biases are composed. Then, we discuss the restrictions set on the distribution of terms in the clause by the constituents of the bias. The effects of several constituents...

10. Learning Chomsky-like Grammars for Biological Sequence Families - Stephen Muggleton; C. H. Bryant; A Srinivasan
This paper presents a new method of measuring performance when positives are rare and investigates whether Chomsky-like grammar representations are useful for learning accurate comprehensible predictors of members of biological sequence families. The positive-only learning framework of the Inductive Logic Programming (ILP) system CProgol is used to generate a grammar for recognising a class of proteins known as human neuropeptide precursors (NPPs). As far as these authors are aware, this is both the first biological grammar learnt using ILP and the first real-world scientific application of the positive-only learning framework of CProgol. Performance is measured using both predictive accuracy and...

11. Scientific Discovery on Positive Data via Belief Revision - Eric Martin
Introduction Contemporary approaches to inductive logic divide into two schools distinguished by how each perceives the investment of a rational agent in a given hypothesis. For the ########### ######, investments are degrees-of-belief, scattered over all the hypotheses in play. For advocates of ###### ######## ######, an investment has a more absolute character, and is designated as "acceptance." The difference between belief and acceptance is thoroughly treated in [Cohen, 1992]. Suffice it here to say that acceptance can be just as provisional as belief, but is more qualitative. It represents deliberate selection of a particular hypothesis as the one to elaborate and test,...

12. Bias: Preference, Restriction or Core of Induction - Birgit Tausend
In this paper, we aim to give a more precise definition of bias in order to clarify different views. Since this definition involves the problem setting of inductive learning, we have to take into account this definition as well. A more precise definition of the bias can be used to declare, to adapt and to shift the bias of an inductive system. We demonstrate these advantages by declaring and shifting the bias for an Inductive Logic Programming system in MILES-CTL.

13. A New Declarative Bias for ILP: Construction Modes - Esra Erdem; Pierre Flener
Inductive logic programming (ILP) systems use some declarative bias to constrain the hypothesis space. We introduce a new declarative bias, called construction modes, capturing the required dataflow of a relation, and design a language for expressing such construction modes. Their semantics is captured via the notion of admissibility. Experiments with the ILP systems synapse and dialogs have established the usefulness of construction modes. Since the new bias is orthogonal to the existing search biases, it can be used in conjunction with the existing biases.

14. A system of logic, ratiocinative and inductive, being a connected view of the principles of evidence and the methods of scientific investigation. - Mill, John Stuart, 1806-1873.
At head of title: Sir John Lubbock's hundred books, 15.

15. A decision procedure for satisfiability in separation logic with inductive predicates - Brotherston, J; Fuhs, C; Pérez, JAN; Gorogiannis, N
We show that the satisfiability problem for the "symbolic heap" fragment of separation logic with general inductively defined predicates - which includes most fragments employed in program verification - is decidable. Our decision procedure is based on the computation of a certain fixed point from the definition of an inductive predicate, called its "base", that exactly characterises its satisfiability. A complexity analysis of our decision procedure shows that it runs, in the worst case, in exponential time. In fact, we show that the satisfiability problem for our inductive predicates is EXPTIME-complete, and becomes NP-complete when the maximum arity over all...
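The fixed-point flavour of the decision procedure can be conveyed with a deliberately crude sketch: iterate over the inductive definitions, marking a predicate once some rule for it relies only on predicates already marked. The paper's "base" records considerably richer information than the boolean used here, so treat this purely as an illustration of the iteration scheme, not of the actual algorithm.

```python
# A heavily simplified fixed-point iteration over inductive definitions:
# a predicate is marked satisfiable once some rule for it only calls
# predicates already marked satisfiable (a rule with no calls is a base case).
def satisfiable_predicates(definitions):
    """definitions: dict mapping predicate name -> list of rules,
    where each rule is the list of inductive predicates it calls."""
    sat = set()
    changed = True
    while changed:
        changed = False
        for pred, rules in definitions.items():
            if pred in sat:
                continue
            if any(all(call in sat for call in rule) for rule in rules):
                sat.add(pred)
                changed = True
    return sat

# Illustrative definitions: list segments have a base case; "absurd" only calls itself.
definitions = {
    "ls":     [[], ["ls"]],      # empty segment, or one cell followed by a segment
    "two_ls": [["ls", "ls"]],    # calls two satisfiable predicates
    "absurd": [["absurd"]],      # no base case, never satisfiable
}
print(satisfiable_predicates(definitions))   # {'ls', 'two_ls'}
```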

16. Meta-Interpretive Learning of Higher-Order Dyadic Datalog: Predicate Invention revisited - Stephen Muggleton; Dianhuan Lin
In recent years Predicate Invention has been underexplored within Inductive Logic Programming due to difficulties in formulating efficient search mechanisms. However, a recent paper demonstrated that both predicate invention and the learning of recursion can be efficiently implemented for regular and context-free grammars, by way of abduction with respect to a meta-interpreter. New predicate symbols are introduced as constants representing existentially quantified higher-order variables. In this paper we generalise the approach of Meta-Interpretive Learning (MIL) to that of learning higher-order dyadic datalog programs. We show that with an infinite signature the higher-order dyadic datalog class H²₂ has universal...

17. The elements of inductive logic, designed mainly for the use of students in the universities - Fowler, Thomas.
Mode of access: Internet.

18. The principles of science: a treatise on logic and scientific method - Jevons, William Stanley, 1835-1882.
Book I. Formal logic, deductive and inductive -- Book II. Number, variety and probability -- Book III. Methods of measurement -- Book IV. Inductive investigation -- Book V. Generalisation, analogy, and classification -- Book VI. Reflections on the results and limits of scientific method.

19. On the Implementation of an ILP System with Prolog - Nuno Fonseca; Vitor Santos Costa; Fernando Silva; Rui Camacho
Inductive Logic Programming (ILP) is a set of Machine Learning techniques that have been quite successful in knowledge discovery in relational domains. ILP systems implemented in Prolog are among the most successful. They challenge the limits of Prolog systems due to heavy usage of resources, such as database accesses and memory usage, and very long execution times. In this work we discuss the fundamental performance issues found in an ILP engine – the April system. Namely, we evaluate the impact of a fundamental technique, called coverage caching, that stores previous results in order to avoid recomputation. To...
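Coverage caching as described here amounts to memoising the (expensive) test of whether a clause covers an example. The sketch below shows the idea with a dictionary keyed by (clause, example); it is not April's implementation, and the toy coverage test is invented for the example.

```python
# A minimal sketch of coverage caching: remember, per (clause, example) pair,
# whether the clause covered the example, so repeated evaluations during the
# search do not redo the costly coverage test.
class CoverageCache:
    def __init__(self, covers_fn):
        self.covers_fn = covers_fn     # the expensive coverage test
        self.cache = {}
        self.misses = 0

    def covers(self, clause, example):
        key = (clause, example)
        if key not in self.cache:
            self.misses += 1
            self.cache[key] = self.covers_fn(clause, example)
        return self.cache[key]

# Illustrative "clauses" are thresholds and "examples" are numbers.
expensive_calls = []
def covers_fn(clause, example):
    expensive_calls.append((clause, example))   # stand-in for a costly Prolog call
    return example >= clause

cache = CoverageCache(covers_fn)
for _ in range(3):                              # the same tests repeated during search
    for clause in (1, 2):
        for example in (0, 1, 2):
            cache.covers(clause, example)
print(len(expensive_calls), cache.misses)       # 6 6: each distinct test ran only once
```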

20. On the Implementation of an ILP System with Prolog - Nuno Fonseca; Vitor Santos Costa; Fernando Silva; Rui Camacho
