
UNESCO Nomenclature > (11) Logic > (1104) Inductive logic
(1104.01) Induction (1104.02) Intuitionism (1104.03) Probability (1104.99) Others (specify)

Showing resources 1 - 20 of 2,530

1. ILP with Noise and Fixed Example Size: A Bayesian Approach - Eric Mccreath; Arun Sharma
Current inductive logic programming systems are limited in their handling of noise, as they employ a greedy covering approach to constructing the hypothesis one clause at a time. This approach also causes difficulty in learning recursive predicates. Additionally, many current systems have an implicit expectation that the cardinalities of the positive and negative examples reflect the "proportion" of the concept to the instance space. A framework for learning from noisy data and fixed example size is presented. A Bayesian heuristic for finding the most probable hypothesis in this general framework is derived. This approach evaluates a hypothesis as a whole...
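
As a rough illustration of what scoring "a hypothesis as a whole" can look like, the sketch below combines a size-based log-prior with a log-likelihood under a fixed label-noise rate. The noise model, the penalty weight, and all names are assumptions made for illustration, not the authors' actual derivation.

    import math

    def bayesian_score(hypothesis_size, covers, examples, noise_rate=0.1, size_penalty=0.5):
        """Log-posterior-style score for a whole hypothesis (illustrative only).

        covers(example) -> bool : does the hypothesis entail the example?
        examples                : list of (example, label) pairs, label True/False.
        """
        log_prior = -size_penalty * hypothesis_size   # smaller hypotheses are a priori more probable
        log_likelihood = 0.0
        for example, label in examples:
            agrees = covers(example) == label
            # with probability noise_rate the observed label disagrees with the hypothesis
            log_likelihood += math.log(1.0 - noise_rate if agrees else noise_rate)
        return log_prior + log_likelihood

The hypothesis with the highest score over all examples at once is preferred, rather than being grown greedily one clause at a time.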

2. Extraction of Meta-Knowledge to Restrict the Hypothesis Space for ILP Systems - Eric Mccreath; Arun Sharma
Many ILP systems, such as GOLEM, FOIL, and MIS, take advantage of user supplied meta-knowledge to restrict the hypothesis space. This meta-knowledge can be in the form of type information about arguments in the predicate being learned, or it can be information about whether a certain argument in the predicate is functionally dependent on the other arguments (supplied as mode information). This meta-knowledge is explicitly supplied to an ILP system in addition to the data. The present paper argues that in many cases the meta-knowledge can be extracted directly from the raw data. Three algorithms are presented that learn...
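
A minimal sketch of one piece of such extraction, assuming the raw data are ground facts given as tuples: checking whether an argument position is functionally determined by the remaining positions, which is the kind of information usually supplied as mode declarations. The data and function name are invented for illustration.

    def functional_dependencies(facts):
        """For each argument position i, check whether position i is
        functionally determined by the remaining positions (sketch only)."""
        arity = len(facts[0])
        result = {}
        for i in range(arity):
            seen, functional = {}, True
            for fact in facts:
                key = fact[:i] + fact[i + 1:]
                if key in seen and seen[key] != fact[i]:
                    functional = False
                    break
                seen[key] = fact[i]
            result[i] = functional
        return result

    # father(Parent, Child): Parent is determined by Child, but not vice versa
    print(functional_dependencies([("tom", "bob"), ("tom", "ann"), ("bob", "jim")]))
    # -> {0: True, 1: False}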

3. Machine Learning Techniques for Adaptive Logic-Based Multi-Agent Systems: A Preliminary Report - Eduardo Alonso; Daniel Kudenko
It is widely recognised in the agent community that one of the more important features of high-level agents is their capability to adapt and learn in dynamic, uncertain domains [12, 30]. A lot of work has recently been produced on this topic, particularly in the field of learning in multi-agent systems [1, 32, 33, 34, 42, 43]. It is, however, worth noticing that whereas some kind of logic is used to specify the (multi-)agents' architecture, mainly non-relational learning techniques such as reinforcement learning are applied. We think that these approaches are not well-suited to deal with the large...

4. Top-Down Pruning in Relational Learning - Johannes Fürnkranz
Pruning is an effective method for dealing with noise in Machine Learning. Recently, pruning algorithms, in particular Reduced Error Pruning, have also attracted interest in the field of Inductive Logic Programming. However, it has been shown that these methods can be very inefficient, because most of the time is wasted on generating clauses that explain noisy examples and subsequently pruning these clauses. We introduce a new method which searches for good theories in a top-down fashion to get a better starting point for the pruning algorithm. Experiments show that this approach can significantly lower the complexity of the...
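
The generic post-pruning idea referred to here is sketched below, assuming a learned theory is simply a list of clauses and that accuracy on a separate pruning set is the criterion; this is plain reduced error pruning, not the paper's top-down variant, and all names are illustrative.

    def reduced_error_prune(theory, accuracy, pruning_set):
        """Greedily drop whole clauses as long as accuracy on a held-out
        pruning set does not decrease.  accuracy(theory, data) -> float."""
        best = accuracy(theory, pruning_set)
        improved = True
        while improved and theory:
            improved = False
            for clause in list(theory):
                candidate = [c for c in theory if c is not clause]
                score = accuracy(candidate, pruning_set)
                if score >= best:
                    theory, best, improved = candidate, score, True
                    break
        return theory

The paper's contribution, as described in the abstract, is to start this loop from a better initial theory found by top-down search instead of first growing clauses that only explain noisy examples.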

5. Structural Regression Trees - Stefan Kramer
In many real-world domains the task of machine learning algorithms is to learn a theory predicting numerical values. In particular several standard test domains used in Inductive Logic Programming (ILP) are concerned with predicting numerical values from examples and relational and mostly non-determinate background knowledge. However, so far no ILP algorithm except one can predict numbers and cope with non-determinate background knowledge. (The only exception is a covering algorithm called FORS.) In this paper we present Structural Regression Trees (SRT), a new algorithm which can be applied to the above class of problems by integrating the statistical method of regression...
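
A minimal illustration of the kind of model involved, assuming a binary tree whose internal nodes apply a (possibly relational) test and whose leaves predict the mean target value of the training examples that reached them; this is a generic regression-tree sketch with invented names, not SRT itself.

    class RegressionTreeNode:
        """Internal nodes split on a (possibly relational) test; leaves
        predict the mean target value of the examples that reached them."""
        def __init__(self, test=None, yes=None, no=None, value=None):
            self.test, self.yes, self.no, self.value = test, yes, no, value

        def predict(self, example):
            if self.test is None:                 # leaf node
                return self.value
            branch = self.yes if self.test(example) else self.no
            return branch.predict(example)

    # a two-leaf tree: examples passing the test predict 4.2, the rest 1.3
    tree = RegressionTreeNode(test=lambda e: e["atoms"] > 10,
                              yes=RegressionTreeNode(value=4.2),
                              no=RegressionTreeNode(value=1.3))
    print(tree.predict({"atoms": 12}))            # -> 4.2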

6. Y = 2x Vs. Y = 3x - Alexei Stolboushkin; Damian Niwinski
We show that no formula of first order logic using linear ordering and the logical relation y = 2x can define the property that the size of a finite model is divisible by 3. This answers a long-standing question which may be of relevance to certain open problems in circuit complexity. Introduction Descriptive complexity theory originated with a fundamental result of Fagin [8] which characterized queries computable in nondeterministic polynomial time as classes of models of existential sentences of second order logic. Subsequently, the basic complexity classes L, NL, P, and PSPACE, were also tied to logical languages, particularly with...
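
Stated a little more formally (a paraphrase of the abstract, not the paper's exact wording, and assuming the standard setting of finite linearly ordered structures):

    % there is no FO sentence over < and the "doubling" relation that defines divisibility by 3
    \nexists\, \varphi \in \mathrm{FO}[<,\, D], \quad D(x,y) :\iff y = 2x, \quad \text{such that}
    \quad \forall\, \text{finite } \mathcal{A}: \;\; \mathcal{A} \models \varphi \iff |A| \equiv 0 \pmod{3}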

7. Learning for Semantic Interpretation: Scaling Up Without Dumbing Down - Raymond J. Mooney
Most recent research in learning approaches to natural language has studied fairly "low-level" tasks such as morphology, part-of-speech tagging, and syntactic parsing. However, I believe that logical approaches may have the most relevance and impact at the level of semantic interpretation, where a logical representation of sentence meaning is important and useful. We have explored the use of inductive logic programming for learning parsers that map natural-language database queries into executable logical form. This work goes against the growing trend in computational linguistics of focusing on shallow but broad-coverage natural language tasks ("scaling up by dumbing down") and instead...

8. Hogdalenverket: Applying ILP in an Industrial Setting - Robert Engels
When applying Inductive Logic Programming techniques in real-world settings, many problems will come up. The project was done in co-operation with Hogdalenverket, a heat and power plant burning household refuse in the Stockholm area, Sweden. The application problems with collecting data and the application of ILP techniques are discussed. Results of tests performed while using SPECTRE, an ILP algorithm developed at Stockholm University, are reported. These results show that the addition of background knowledge and the addition/retraction of parameters have a positive effect on the performance of the ILP techniques. After initial tests a knowledge acquisition stage was started. This resulted in knowledge about...

9. Application of Inductive Logic Programming for Learning ECG Waveforms - Gabriella Kokai; Zoltán Alexin; Tibor Gyimóthy
In this paper a learning system is presented which integrates an ECG waveform classifier (called PECG) with an interactive learner (called IMPUT). The PECG system is based on an attribute grammar specification of ECGs that has been transformed to Prolog. The IMPUT system combines the interactive debugging technique IDT with the unfolding algorithm introduced in SPECTRE. Using the IMPUT system we can effectively assist in preparing the correct description of the basic structures of ECG waveforms. The application of the system for learning ECG waveforms is demonstrated with the help of an example. Keywords: inductive logic programming, program...

10. Pharmacophore Discovery using the Inductive Logic Programming System Progol - Paul Finn; David Page; Ronny Kohavi; Foster Provost
This paper is a case study of a machine-aided knowledge discovery process within the general area of drug design. More specifically, the paper describes a sequence of experiments in which an Inductive Logic Programming (ILP) system is used for pharmacophore discovery. Within drug design, a pharmacophore is a description of the substructure of a ligand (a small molecule) which is responsible for medicinal activity. This medicinal activity is produced by interaction between the ligand and a binding site on a target protein. ILP was chosen by the domain expert (first author) at Pfizer since active molecules are most naturally...

11. Efficient Pruning Methods for Relational Learning - Johannes Fürnkranz
This thesis is concerned with efficient methods for achieving noise-tolerance in Machine Learning algorithms that are capable of using relational background knowledge. While classical algorithms are restricted to learning propositional concepts in the form of decision trees or decision lists, relational learning algorithms are able to include in the learning process not only knowledge about data attributes and values, but also knowledge about relations between the attributes. As these algorithms use a more powerful representation language --- they learn PROLOG programs for classification --- they are part of the recent field of Inductive Logic Programming, a new research area at the...

12. Subsumption and Refinement in Model Inference - Patrick R.J. van der Laag; Shan-Hwei Nienhuys-Cheng
In his famous Model Inference System, Shapiro [10] uses so-called refinement operators to replace overly general hypotheses by logically weaker ones. One of these refinement operators works in the search space of reduced first order sentences. In this article we show that, contrary to his claim, this operator is not complete for reduced sentences. We investigate the relations between subsumption and refinement as well as the role of a complexity measure. We present an inverse reduction algorithm which is used in a new refinement operator. This operator is complete for reduced sentences. Finally, we will relate our new refinement operator with...
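
For readers unfamiliar with the subsumption order these refinement operators work within: clause C theta-subsumes clause D if some substitution theta makes C.theta a subset of D. The sketch below is a small, assumption-laden check for function-free clauses, treating a clause as a plain list of literal tuples (head/body distinction ignored) and using the convention that terms starting with an uppercase letter are variables.

    def subsumes(c, d, theta=None):
        """Does a substitution exist mapping every literal of c onto some
        literal of d?  Literals are tuples like ('parent', 'X', 'Y')."""
        theta = dict(theta or {})
        if not c:
            return True
        lit, rest = c[0], c[1:]
        for target in d:
            if target[0] != lit[0] or len(target) != len(lit):
                continue
            binding, ok = dict(theta), True
            for term, value in zip(lit[1:], target[1:]):
                if term[0].isupper():                       # variable term
                    if binding.get(term, value) != value:
                        ok = False
                        break
                    binding[term] = value
                elif term != value:                         # constant mismatch
                    ok = False
                    break
            if ok and subsumes(rest, d, binding):
                return True
        return False

    # p(X,Y) :- q(X)  subsumes  p(a,b) :- q(a), r(b)
    print(subsumes([('p', 'X', 'Y'), ('q', 'X')],
                   [('p', 'a', 'b'), ('q', 'a'), ('r', 'b')]))   # -> True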

13. Object-oriented data modelling and rules: ILP meets databases - Lubos Popelinsky
In deductive object-oriented databases, both classes and attributes may be defined by rules. We will show how inductive logic programming can help in the synthesis of those rules. A new approach to object-oriented database modelling by means of inductive logic programming is introduced. Experimental results obtained with the WiM-D system are discussed.

14. Stochastic Propositionalization of Non-Determinate Background Knowledge - Stefan Kramer; Bernhard Pfahringer; Christoph Helma
Both propositional and relational learning algorithms require a good representation to perform well in practice. Usually such a representation is either engineered manually by domain experts or derived automatically by means of so-called constructive induction. Inductive Logic Programming (ILP) algorithms put somewhat less of a burden on the data engineering effort as they allow for a structured, relational representation of background knowledge. In chemical and engineering domains, common representational devices for graph-like structures are so-called non-determinate relations. Manually engineered features in such domains typically test for or count occurrences of specific substructures having specific properties. However, representations containing non-determinate relations...
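
The kind of manually engineered feature described here, counting occurrences of specific substructures, can be pictured with a toy propositionalization step. The bond encoding and patterns below are invented for illustration and say nothing about the paper's stochastic search.

    def propositionalize(molecule, patterns):
        """Turn a relational description into a fixed-length feature vector
        by counting how often each substructure pattern occurs."""
        return [sum(1 for bond in molecule if pattern(bond)) for pattern in patterns]

    # bonds encoded as (atom1, atom2, element1, element2), purely hypothetical
    molecule = [(1, 2, 'c', 'c'), (2, 3, 'c', 'o'), (3, 4, 'o', 'h')]
    patterns = [
        lambda b: b[2] == 'c' and b[3] == 'c',   # carbon-carbon bond
        lambda b: 'o' in (b[2], b[3]),           # bond involving oxygen
    ]
    print(propositionalize(molecule, patterns))   # -> [1, 2]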

15. Inductive Logic Programming: issues, results and the challenge of Learning Language in Logic - Stephen Muggleton
Inductive Logic Programming (ILP) is the area of AI which deals with the induction of hypothesised predicate definitions from examples and background knowledge. Logic programs are used as a single representation for examples, background knowledge and hypotheses. ILP is differentiated from most other forms of Machine Learning (ML) both by its use of an expressive representation language and its ability to make use of logically encoded background knowledge. This has allowed successful applications of ILP in areas such as molecular biology and natural language which both have rich sources of background knowledge and both benefit from the use of an...

16. Knowledge Acquisition From Complex Domains By Combining Inductive Learning and Theory Revision - Xiaolong Zhang; Masayuki Numao
In the process of knowledge acquisition, inductive learning and theory revision play important roles. Inductive learning is used to acquire new knowledge (theories) from training examples, and theory revision improves an initial theory with training examples. A theory preference criterion is critical in the processes of inductive learning and theory revision. A new system called knowar is developed by integrating inductive learning and theory revision. In addition, the theory preference criterion used in knowar is a combination of an MDL-based heuristic and the Laplace estimate. The system can be used to deal with complex problems. Empirical studies have confirmed that...
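
Of the two ingredients named for the preference criterion, the Laplace estimate has a standard closed form; how knowar actually weights it against the MDL term is not stated in this excerpt, so the combined score below is only an assumed illustration with invented names.

    def laplace_estimate(pos_covered, neg_covered):
        """Laplace-corrected accuracy of a clause: (p + 1) / (p + n + 2)."""
        return (pos_covered + 1) / (pos_covered + neg_covered + 2)

    def preference_score(theory_bits, pos_covered, neg_covered, weight=0.01):
        """Illustrative combination only: reward accurate clauses and
        penalize theories that need many bits to encode (MDL-style term)."""
        return laplace_estimate(pos_covered, neg_covered) - weight * theory_bits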

17. Inductive Constraint Logic and the Mutagenesis Problem - Hendrik Blockeel; Wim Van Laer; Luc De Raedt
A novel approach to learning first order logic formulae from positive and negative examples is incorporated in a system named ICL (Inductive Constraint Logic). In ICL, examples are viewed as interpretations which are true or false for the target theory, whereas in present inductive logic programming systems, examples are true and false ground facts (or clauses). Furthermore, ICL uses a clausal representation, which corresponds to a conjunctive normal form where each conjunct forms a constraint on positive examples, whereas classical learning techniques have concentrated on concept representations in disjunctive normal form. We present some experiments with this new system on...
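
The representational difference can be made concrete with toy data (all facts below are invented): in classical ILP an example is a single labelled ground fact, whereas in ICL an example is a whole interpretation, i.e. a set of facts describing one world, labelled true or false as a unit.

    # learning from entailment (classical ILP): examples are labelled ground facts
    positive_facts = [("daughter", "mary", "ann")]
    negative_facts = [("daughter", "tom", "ann")]

    # learning from interpretations (ICL-style): each example is a set of facts
    # describing one world, and the whole interpretation carries the label
    interpretation_1 = ({("parent", "ann", "mary"), ("female", "mary"),
                         ("daughter", "mary", "ann")}, True)
    interpretation_2 = ({("parent", "tom", "eve"), ("male", "eve"),
                         ("daughter", "eve", "tom")}, False)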

18. Separate-and-Conquer Rule Learning - Johannes Fürnkranz
This paper is a survey of inductive rule learning algorithms that use a separate-and-conquer strategy. This strategy can be traced back to the AQ learning system and still enjoys popularity as can be seen from its frequent use in inductive logic programming systems. We will put this wide variety of algorithms into a single framework and analyze them along three different dimensions, namely their search, language and overfitting avoidance biases. 1. Introduction In this paper we will give an overview of a large family of symbolic rule learning algorithms, the so-called separate-and-conquer or covering algorithms. All members of this family...
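
The covering strategy the survey is organised around can be written down in a few lines; learn_rule and covers are placeholders for whatever search and language bias a concrete system supplies, so this is a sketch of the general scheme rather than any particular algorithm.

    def separate_and_conquer(positives, negatives, learn_rule, covers):
        """Generic covering loop: learn one rule, remove the positives it
        covers, repeat until no positives remain or no progress is made."""
        theory, remaining = [], list(positives)
        while remaining:
            rule = learn_rule(remaining, negatives)             # the "conquer" step
            covered = [e for e in remaining if covers(rule, e)]
            if not covered:                                     # no progress: stop
                break
            theory.append(rule)
            remaining = [e for e in remaining if e not in covered]   # "separate"
        return theory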

19. On the Expressive Power of Counting - Stéphane Grumbach; Christophe Tollu
We investigate the expressive power of various extensions of first-order, inductive, and infinitary logic with counting quantifiers. We consider in particular a LOGSPACE extension of first-order logic, and a PTIME extension of fixpoint logic with counters. Counting is a fundamental tool of algorithms. It is essential in the case of unordered structures. Our aim is to understand the expressive power gained with a limited counting ability. We consider two problems: (i) unnested counters, and (ii) counters with no free variables. We prove a hierarchy result based on the arity of the counters under the first restriction. The proof is based...
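
For reference, the usual semantics of a counting quantifier (standard textbook notation, not necessarily the exact formalism used in the paper):

    % "at least k elements satisfy phi"
    \mathcal{A} \models \exists^{\ge k} x\, \varphi(x) \;\iff\; |\{\, a \in A : \mathcal{A} \models \varphi[a] \,\}| \ge k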

20. Avoiding noise fitting in a FOIL-like learning algorithm - Johannes Fürnkranz
The research reported in this paper describes Fossil, an ILP system that uses a search heuristic based on statistical correlation. This algorithm implements a new method for learning useful concepts in the presence of noise. In contrast to Foil's stopping criterion, which allows theories to grow in complexity as the size of the training sets increases, we propose a new stopping criterion that is independent of the number of training examples. Instead, Fossil's stopping criterion depends on a search heuristic that estimates the utility of literals on a uniform scale. 1 Introduction In this paper we introduce an Inductive Logic...
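
A sketch of a correlation-based literal heuristic on a uniform scale, assuming examples come as (example, label) pairs and covers tests whether the candidate literal holds for an example; the cutoff mentioned in the comment and all names are illustrative, not Fossil's actual definitions.

    import math

    def correlation_heuristic(covers, examples):
        """Pearson correlation between 'literal covers example' (0/1) and
        'example is positive' (0/1); a fixed cutoff on this value can serve
        as a size-independent stopping criterion."""
        xs = [1.0 if covers(e) else 0.0 for e, _ in examples]
        ys = [1.0 if label else 0.0 for _, label in examples]
        n = len(examples)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        var = math.sqrt(sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys))
        return cov / var if var else 0.0

    # a literal is only accepted if its correlation exceeds a fixed cutoff
    # (for example 0.3), regardless of how many training examples there are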
