Wednesday, September 24, 2014

UNESCO Nomenclature > (57) Linguistics

Showing resources 181 - 200 of 49,527

181. Distributed Parsing With HPSG Grammars - Abdel Kader Diagne; Walter Kasper; Hans-Ulrich Krieger (Deutsches Forschungszentrum für Künstliche Intelligenz GmbH)
Unification-based theories of grammar allow for an integration of different levels of linguistic description in the common framework of typed feature structures. Dependencies among the levels are expressed by coreferences. Though highly attractive theoretically, using such codescriptions for analysis creates efficiency problems. We present an approach to a modular use of codescriptions on the syntactic and semantic level. Grammatical analysis is performed by tightly coupled parsers running in tandem, each using only designated parts of the grammatical description. In the paper we describe the partitioning of grammatical information between the parsers and present performance results. Acknowledgements. We...

182. A Fuzzy Perceptron as a Generic Model for Neuro-Fuzzy Approaches - Detlef Nauck
This paper presents a fuzzy perceptron as a generic model of multilayer fuzzy neural networks (neural fuzzy systems). This model is intended to ease the comparison of the different neuro-fuzzy approaches known from the literature. A fuzzy perceptron is not a fuzzification of a common neural network architecture, and it is not our intention to enhance neural learning algorithms with fuzzy methods. The idea of the fuzzy perceptron is to provide an architecture that can be initialized with prior knowledge and that can be trained using neural learning methods. The training is carried out in such a...
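The central idea, a rule base initialized from prior knowledge and then tuned with neural-style gradient descent, can be sketched in a few lines. This is a minimal illustration, not the paper's formal fuzzy perceptron; the Gaussian memberships, the two expert rules, and the learning rate are all assumptions made for the example:

```python
import math

def gauss(x, c, s):
    """Gaussian membership function with centre c and width s."""
    return math.exp(-((x - c) ** 2) / (2 * s ** 2))

def predict(x, rules):
    """Rule-base output: membership-weighted average of rule consequents."""
    den = sum(gauss(x, c, s) for (c, s, _) in rules)
    return sum(gauss(x, c, s) * out for (c, s, out) in rules) / den

# Prior knowledge initialises the network: "low" inputs map to 0,
# "high" inputs map to 1 (hypothetical expert rules).
rules = [(0.0, 0.5, 0.0), (1.0, 0.5, 1.0)]

# Neural-style training: gradient descent on the rule consequents only.
data = [(0.1, 0.0), (0.5, 0.5), (0.9, 1.0)]
lr = 0.5
for _ in range(200):
    for x, t in data:
        y = predict(x, rules)
        den = sum(gauss(x, c, s) for (c, s, _) in rules)
        rules = [(c, s, out - lr * (y - t) * gauss(x, c, s) / den)
                 for (c, s, out) in rules]
```

The point of the sketch is the workflow: the initial rule base is readable prior knowledge, and the subsequent training step is ordinary gradient descent, exactly as for a neural network.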

183. A compositional treatment of polysemous arguments in Categorial Grammar - Anne-Marie Mineur; Paul Buitelaar
We discuss an extension of the standard logical rules (functional application and abstraction) in Categorial Grammar (CG) to deal with some specific cases of polysemy. We borrow from Generative Lexicon theory, which proposes the mechanism of coercion alongside a rich nominal lexical-semantic structure called the qualia structure. In a previous paper we introduced coercion into the framework of sign-based Categorial Grammar and investigated its impact on traditional Fregean compositionality. In this paper we elaborate on this idea, working mostly towards the introduction of a new semantic dimension. Where in current versions of sign-based Categorial Grammar only...
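The coercion mechanism borrowed from Generative Lexicon theory can be illustrated with a toy sketch: when a functor that selects for an event receives an object instead, the object's telic quale (the event it is conventionally used for) supplies the missing event. The lexical entries and the representation are invented for the example, not taken from the paper:

```python
# Toy qualia structures (Generative Lexicon style): "telic" names the
# event an object is conventionally used for. Entries are hypothetical.
QUALIA = {
    "book":  {"type": "object", "word": "book", "telic": "read"},
    "movie": {"type": "object", "word": "movie", "telic": "watch"},
}

def apply_fn(functor, arg):
    """Functional application extended with coercion: a functor that
    wants an event but receives an object shifts the object to the
    event named by its telic role."""
    wanted, body = functor
    if arg["type"] == wanted:
        return body(arg)
    if wanted == "event" and "telic" in arg:
        return body({"type": "event", "pred": arg["telic"],
                     "obj": arg["word"]})
    raise TypeError("no coercion available")

# "begin" selects for an event argument.
begin = ("event", lambda e: f"begin({e['pred']}({e['obj']}))")
```

So "begin a book" is interpreted as beginning the reading of the book, which is the kind of polysemous argument the paper treats compositionally.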

184. Constructing Fuzzy Models with Linguistic Integrity - AFRELI Algorithm - Jairo J. Espinosa; Joos Vandewalle
We present an algorithm to extract rules relating input-output data. The rules are created in the environment of fuzzy systems. The concept of linguistic integrity is discussed and used as a framework for an algorithm for rule extraction (AFRELI). The algorithm is complemented by the FuZion algorithm, created to merge consecutive membership functions and guarantee the distinguishability between fuzzy sets on each domain. Keywords: fuzzy modeling, function approximation, knowledge extraction, data mining. I. Introduction Mathematical models are powerful tools for representing natural phenomena in a systematic way. They open the possibility of studying the behavior...
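The distinguishability idea behind merging consecutive membership functions can be sketched simply: if two neighbouring fuzzy sets sit too close together on a domain, replace them with a single set. This is only a caricature of FuZion (a real merge would also reshape the membership functions), with an invented closeness criterion on the set centres:

```python
def merge_close(centers, min_gap):
    """FuZion-style sketch: merge consecutive membership-function
    centres that are closer than min_gap, replacing each cluster by
    its mean, so the fuzzy sets on the domain stay distinguishable."""
    ordered = sorted(centers)
    merged, cluster = [], [ordered[0]]
    for c in ordered[1:]:
        if c - cluster[-1] < min_gap:
            cluster.append(c)
        else:
            merged.append(sum(cluster) / len(cluster))
            cluster = [c]
    merged.append(sum(cluster) / len(cluster))
    return merged
```

For example, centres at 0.0 and 0.1 collapse to one set at 0.05 when the minimum gap is 0.2, while well-separated sets are left untouched.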

185. A Database Interface for File Update - Serge Abiteboul; Sophie Cluet; Tova Milo
In this paper, we consider how structured data stored in files can be updated using database update languages. The interest of using database languages to manipulate files is twofold. First, it opens database systems to external data. This concerns data residing in files or data transiting on communication channels, possibly coming from other databases [2]. Secondly, it provides high-level query/update facilities to systems that usually rely on very primitive linguistic support. (See [6] for recent work in this direction.) Similar motivations appear in [4, 5, 7, 8, 11, 12, 13, 14, 15, 17, 19, 20, 21]. In a previous...

186. Semantic-Oriented Chart Parsing with Defaults - Thomas Stürmer
We present a computational model of incremental, interactive text analysis. The model is based on an active chart and supports interleaved syntactic and semantic processing. It can handle intra- and intermodular constraints without forcing the use of the same formalism for the description of syntactic and semantic knowledge. An essential part of the model is the set of defaults that guide the analysis algorithm toward the most plausible solution. We argue that the resulting computation can be understood as semantic-oriented parsing. We also show how our model can be abstracted into an NL understanding system architecture,...

187. A Proposal for Improving the Accuracy of Linguistic Modeling - O. Cordón; F. Herrera
Nowadays, Linguistic Modeling is considered one of the most important areas of application for Fuzzy Logic. Descriptive Mamdani-type Fuzzy Rule-Based Systems (FRBSs), the ones used to perform this task, provide a human-readable description of the model in the form of linguistic rules, which is a desirable characteristic in many problems. Unfortunately, interpretability and accuracy are contradictory requirements in the field of modeling. For this reason, in many cases the linguistic model designed is not sufficiently accurate and has to be discarded and replaced by less interpretable but more accurate models (fuzzy models generated...

188. Optimization Under Fuzzy Linguistic Rule Constraints - Christer Carlsson; Robert Fullér; Silvio Giove
Suppose we are given a mathematical programming problem in which the functional relationship between the decision variables and the objective function is not completely known. Our knowledge base consists of a block of fuzzy if-then rules, where the antecedent part of each rule contains some linguistic values of the decision variables and the consequent part is a linear combination of the crisp values of the decision variables. We suggest the use of the Takagi-Sugeno fuzzy reasoning method to determine the crisp functional relationship between the objective function and the decision variables, and solve the resulting (usually nonlinear) programming problem to...
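The Takagi-Sugeno step described above, turning linguistic antecedents plus linear consequents into one crisp function, is standard and can be sketched directly. The rule base below (one decision variable, two rules) is a made-up example, not the paper's:

```python
def tri(l, m, r):
    """Triangular membership function on [l, r] peaking at m."""
    def mu(x):
        if x <= l or x >= r:
            return 0.0
        return (x - l) / (m - l) if x <= m else (r - x) / (r - m)
    return mu

def ts_output(x, rules):
    """Takagi-Sugeno inference: the crisp output is the membership-
    weighted average of the rules' linear consequents a*x + b."""
    weights = [(mu(x), a, b) for (mu, a, b) in rules]
    total = sum(w for w, _, _ in weights)
    if total == 0:
        raise ValueError("no rule fires at x")
    return sum(w * (a * x + b) for w, a, b in weights) / total

# Hypothetical rule base: "x is low -> f = 1 - x", "x is high -> f = 2x".
rules = [(tri(-1.0, 0.0, 1.0), -1.0, 1.0),
         (tri(0.0, 1.0, 2.0),  2.0, 0.0)]
```

The resulting `ts_output` is the crisp (generally nonlinear) function that would then be handed to a nonlinear programming solver.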

189. Behavior-based Language Generation for Believable Agents - A. Bryan Loyall; Joseph Bates
We are studying how to create believable agents that perform actions and use natural language in interactive, animated, real-time worlds. We have extended Hap, our behavior-based architecture for believable non-linguistic agents, to better support natural language text generation. These extensions allow us to tightly integrate generation with other aspects of the agent, including action, perception, inference and emotion. We describe our approach, and show how it leads to agents with properties we believe important for believability, such as: using language and action together to accomplish communication goals; using perception to help make linguistic choices; varying generated text according to emotional...

190. A Modular Calculus for Module Systems - Davide Ancona; Elena Zucca
We present a simple and powerful calculus of modules supporting mutual recursion and higher-order features. The calculus can encode a large variety of existing mechanisms for combining software components, including parameterized modules like ML functors, extension with overriding from object-oriented programming, mixin modules, and extra-linguistic mechanisms like those provided by a linker. As usual, we first present an untyped version of our calculus and then a type system, which is proved sound w.r.t. the reduction semantics; moreover, we give a translation of other primitive calculi. Introduction: Considerable effort has recently been invested in studying theoretical foundations and...
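The family of combination mechanisms the calculus unifies can be mimicked with modules as plain name-to-component maps; this is only a loose analogy to the formal calculus, with all names invented for the example:

```python
def combine(*modules):
    """Module sum with overriding: on a name clash the later module's
    component wins, like method overriding in OO programming or a
    linker choosing among duplicate symbols."""
    out = {}
    for m in modules:
        out.update(m)
    return out

base  = {"greet": lambda: "hello", "version": lambda: 1}
mixin = {"greet": lambda: "hi"}          # overrides base's greet

linked = combine(base, mixin)

def loud(m):
    """An ML-functor-style parameterised module: a function mapping a
    module to an extended module."""
    return combine(m, {"shout": lambda: m["greet"]().upper()})
```

Seen this way, mixins, overriding, and functors are all instances of one combination operation over named components, which is the intuition the calculus makes precise.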

191. Objecthood: An event structure perspective - Beth Levin
Since transitive verbs necessarily have objects, a challenge for theories of transitivity is how to deal with the just-mentioned problems involving the semantic correlates of objecthood. In this paper I revisit these issues from a novel perspective, showing that the notion `object' of a transitive verb can be fruitfully explored in the context of recent work on the structure and representation of verb meaning and the licensing of arguments. Much recent research has converged on the notion `event' as an important organizing notion in the linguistic representation of meaning, and the grammatically relevant component of a representation of verb...

192. Extracting Semantic Features for Aspectual Meanings from a Syntactic Representation Using Neural Networks - Gabriele Scheler
The main point of this paper is to show how we can extract semantic features describing aspectual meanings from a syntactic representation. The goal is to translate English to Russian aspectual categories. This is realized by a specialized language processing module based on the concept of vertical modularity. The results of supervised learning of syntactic-semantic correspondences using standard back-propagation show that both learning and generalization to new patterns are successful. Furthermore, the correct generation of Russian aspect from the automatically created semantic representations is demonstrated. 1 Introduction A common goal of theoretical linguistic work as well as machine...

193. Engineering of KR-Based Support Systems for Conceptual Modelling & Analysis - Ernesto Compatangelo; Francesco M. Donini; Giovanni Rumolo
Most present intelligent knowledge-management environments for conceptual modelling and analysis suffer, in our opinion, from mixing two representation levels: (1) a conceptual level, where domain-specific concepts are represented (e.g., data and processes in Data-Flow Diagrams); (2) an epistemological level, where inferences are drawn from domain-independent linguistic constructors (e.g., concepts and roles in Description Logics). We propose an engineering approach to the development of new systems in which the two levels are represented separately and linked either by the protodl methodology or by concept emulators. We exemplify our approach by modelling Data-Flow Diagrams at level (1) and by using...

194. Speech Recognition Using Dynamical Model of Speech Production - Ken-ichi Iso
We propose a speech recognition method based on the dynamical model of speech production. The model consists of an articulator and its control command sequences. The latter has linguistic information of speech and the former has the articulatory information which determines transformation from linguistic intentions to speech signals. This separation makes our speech recognition model more controllable. It provides new approaches to speaker adaptation and to coarticulation modeling. The effectiveness of the proposed model was examined by speaker-dependent letter recognition experiments. Visiting Scientist from C & C Information Technology Research Laboratories, NEC Corporation, 4-1-1 Miyazaki, Miyamae-ku, Kawasaki 216, JAPAN Keywords:...

195. Linguistics, Logic, and Finite Trees - Patrick Blackburn; Wilfried Meyer-viol
A modal logic is developed to deal with finite ordered binary trees as they are used in (computational) linguistics. A modal language is introduced with operators for the `mother of', `first daughter of' and `second daughter of' relations together with their transitive reflexive closures. The relevant class of tree models is defined and three linguistic applications of this language are discussed: context-free grammars, command relations, and trees decorated with feature structures. An axiomatic proof system is given for which completeness is shown with respect to the class of finite ordered binary trees. A number of decidability results follow....
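The three basic relations and their closures are easy to make concrete on a small tree. This is an informal model-theoretic sketch (nodes as integers, a hypothetical five-node tree), not the paper's axiomatics:

```python
# A finite ordered binary tree: node -> (first daughter, second daughter).
TREE = {0: (1, 2), 1: (3, 4)}

def first(n):  return [TREE[n][0]] if n in TREE else []
def second(n): return [TREE[n][1]] if n in TREE else []
def mother(n): return [m for m, ds in TREE.items() if n in ds]

def daughters(n):
    return first(n) + second(n)

def star(rel, n):
    """Transitive reflexive closure of a relation: all nodes reachable
    from n by zero or more rel-steps. The closure modalities of the
    language quantify over exactly these sets."""
    seen, frontier = {n}, [n]
    while frontier:
        new = [t for f in frontier for t in rel(f) if t not in seen]
        seen.update(new)
        frontier = new
    return seen
```

A diamond formula like <daughter*>p then holds at a node n whenever p holds somewhere in `star(daughters, n)`, i.e. in the subtree n dominates.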

196. Mother of Perl - Christine Doran; Michael Niv; Breck Baldwin; Jeffrey C. Reynar; B. Srinivas
We present Mother of Perl, a pattern description language developed for use in information extraction. Patterns are described in mop by left-to-right enumeration of components, with each component specified at the appropriate level of descriptive granularity. The patterns are compiled into Perl scripts, which perform back-tracking search on the input text. mop also allows for rapid integration of a variety of analytical modules, such as part-of-speech taggers and parsers. 1 Introduction Information extraction (IE) is the task of processing large volumes of texts in order to extract the information necessary to fill a predefined template or to populate a database....
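The compilation idea, a left-to-right enumeration of components at mixed granularity turned into one backtracking pattern, can be sketched with Python's `re` standing in for the generated Perl. The component kinds and word classes below are invented for the illustration; mop's real inventory is richer:

```python
import re

# Hypothetical word classes for the example.
CLASSES = {"TITLE": r"(?:Mr|Ms|Dr)\.", "CAP": r"[A-Z][a-z]+"}

def compile_pattern(components):
    """Compile a left-to-right list of components into one regex,
    standing in for mop's compilation into backtracking Perl scripts.
    Components: ("lit", word), ("class", name), ("gap", max_tokens)."""
    parts = []
    for kind, val in components:
        if kind == "lit":
            parts.append(re.escape(val) + r"\s*")
        elif kind == "class":
            parts.append(CLASSES[val] + r"\s*")
        elif kind == "gap":                  # up to val skipped tokens
            parts.append(r"(?:\S+\s+){0,%d}" % val)
    return re.compile("".join(parts))
```

Each component is written at its own level of granularity (an exact word, a class, or a bounded gap), and the compiled pattern searches the text with ordinary regex backtracking.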

197. Morphological Ambiguity Reduction Using Subsumption Relation in Korean - Jae-Hoon Kim; Byung-Gyu Jang; Gil Chang Kim; Jungyun Seo
In Korean morphological analysis, one reason for morphological over-analysis is the lack of ordering restrictions in the connectivity-information table often used to encode morphotactics. To alleviate this problem, we use a subsumption relation as a kind of linguistic knowledge. In this paper, we define the subsumption relation and propose a method for reducing morphological ambiguities using it. Our experiment shows very promising results. We expect that the results can also benefit probabilistic part-of-speech tagging systems, which have difficulty incorporating linguistic knowledge. 1 Introduction In agglutinative languages such as Korean, morphological analysis plays an important...
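The filtering pattern, dropping candidate analyses that stand in a subsumption relation to another candidate, can be sketched abstractly. The feature-dict representation and the specific relation below are illustrative assumptions; the paper defines its subsumption over Korean morphotactics:

```python
def subsumes(a, b):
    """a subsumes b if every feature constraint in a also holds in b
    (a is the more general analysis). Illustrative definition only."""
    return a.items() <= b.items()

def reduce_ambiguity(analyses):
    """Discard every analysis strictly subsumed by another candidate,
    keeping only the most general ones."""
    return [a for a in analyses
            if not any(b != a and subsumes(b, a) for b in analyses)]

# Hypothetical competing analyses of one Korean word form.
a_general = {"stem": "ga", "pos": "V"}
a_over    = {"stem": "ga", "pos": "V", "ending": "da"}
```

The ambiguity set shrinks before any probabilistic tagging is attempted, which is how the linguistic knowledge feeds into a downstream tagger.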

198. Efficient Parameterizable Type Expansion for Typed Feature Formalisms - Ulrich Schäfer; Hans-Ulrich Krieger
Over the last few years, constraint-based grammar formalisms have become the predominant paradigm in natural language processing and computational linguistics. From the viewpoint of computer science, typed feature structures can be seen as record-like data structures that allow the representation of linguistic knowledge in a uniform fashion. Type expansion is an operation that makes the idiosyncratic and inherited constraints defined on a typed feature structure explicit and thus determines its satisfiability. We describe an efficient expansion algorithm that takes care of recursive type definitions and permits the exploration of different expansion strategies through the use of control knowledge. This...
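The expansion operation itself, making inherited constraints explicit, is easy to illustrate on a toy type hierarchy. The hierarchy below is invented, and a real implementation must also control recursive type definitions, which this sketch omits:

```python
# Hypothetical type hierarchy: type -> (supertype, own constraints).
HIERARCHY = {
    "sign": (None,   {"phon": "string"}),
    "word": ("sign", {"pos": "atom"}),
    "verb": ("word", {"pos": "verb", "subcat": "list"}),
}

def expand(t):
    """Make the idiosyncratic and inherited constraints of type t
    explicit, with subtype constraints overriding supertype ones."""
    parent, own = HIERARCHY[t]
    inherited = expand(parent) if parent else {}
    return {**inherited, **own}
```

After expansion, all constraints on a type are local to the structure, so its satisfiability can be checked without walking the hierarchy again.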

199. Fuzzy Modeling with Linguistic Integrity - Jairo J. Espinosa; Joos Vandewalle
This paper presents an algorithm to build a fuzzy relational model from input-output data. The paper discusses the trade-off between linguistic integrity and accuracy and proposes an algorithm for rule extraction (AFRELI). The algorithm uses a routine named FuZion to merge consecutive membership functions and guarantee the distinguishability between the fuzzy sets on each domain. I. Introduction The problem of modeling and identification demands the use of multiple sources of information. The main sources of information about a system are a physical description of its components, expert knowledge about its behaviour, and input-output data collected during operation....

200. Component-Based Handprint Segmentation Using Adaptive Writing Style Model - Michael D. Garris
Building upon the utility of connected components, NIST has designed a new character segmentor based on statistically modeling the style of a person's handwriting. Simple spatial features (the thickness of the pen stroke and the height of the handwriting) capture the characteristics of a particular writer's style of handprint, enabling the new method to maintain a traditional character-level segmentation philosophy without the integration of recognition or the use of oversegmentation and linguistic postprocessing. Estimates for stroke width and character height are used to compute aspect ratio and standard stroke count features that adapt to the writer's style at the field...
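The adaptive-style idea, estimating simple spatial statistics from a writer's connected components and using them to drive segmentation decisions, can be sketched as follows. The bounding-box representation and the 1.5x threshold are assumptions for the illustration, not NIST's actual model:

```python
from statistics import median

def writing_style(boxes):
    """Adapt to a writer: estimate the typical character height as the
    median connected-component height, then flag components whose
    width exceeds 1.5x that estimate as likely merged characters that
    the segmentor should split. Thresholds are illustrative."""
    h = median(height for _, height in boxes)
    flagged = [i for i, (w, _) in enumerate(boxes) if w > 1.5 * h]
    return h, flagged
```

Because the height estimate comes from the writer's own components, the same rule adapts to large and small handwriting without recognition feedback or oversegmentation.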
