Collection resources

UW-SIG Publications (312 resources)

The Structural Informatics Group (SIG) is an interdisciplinary team of computer scientists, engineers, and biologists. It is part of the Department of Biological Structure and the Division of Biomedical and Health Informatics, Department of Medical Education and Biomedical Informatics, with strong ties to the Department of Computer Science and Engineering. SIG's emphasis is on developing methods for representing, managing, visualizing, and using information about the physical organization of the body.

Status = Published

Showing resources 1 - 20 of 216

  1. A lightweight freezer management system for small laboratories

    In clinical studies, researchers must often maintain a freezer inventory of biosamples. Existing software packages, designed to track and manage freezer inventory, are not always suitable for small laboratories. We present a lightweight, low-cost alternative that is more appropriate for small studies with limited resources.

  2. Lightweight Data Integration Frameworks for Clinical Research

    Shaker, Ronald; Hertzenberg, Xenia; Brinkley, James
    Research data from a single clinical study is often spread across multiple applications and systems. We present a reusable, lightweight, secure framework for automatically integrating and querying study data from heterogeneous sources in order to answer routine, operational questions for researchers.

  3. vSPARQL: A View Definition Language for the Semantic Web

    Shaw, Marianne; Detwiler, Landon T; Noy, N. F.; Brinkley, James F; Suciu, Dan
    Translational medicine applications would like to leverage the biological and biomedical ontologies, vocabularies, and data sets available on the semantic web. We present a general solution for RDF information set reuse inspired by database views. Our view definition language, vSPARQL, allows applications to specify the exact content that they are interested in and how that content should be restructured or modified. Applications can access relevant content by querying against these view definitions. We evaluate the expressivity of our approach by defining views for practical use cases and comparing our view definition language to existing query languages.
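    As an illustrative sketch only, a database-style view over RDF can be approximated by materializing the result of a SPARQL CONSTRUCT query, here with Python and rdflib; vSPARQL itself adds further view-definition features beyond this. The file name, root class URI, and query below are hypothetical.

    from rdflib import Graph, URIRef

    source = Graph()
    source.parse("anatomy_subset.owl", format="xml")  # hypothetical local copy of a source ontology

    VIEW_QUERY = """
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    CONSTRUCT { ?part rdfs:label ?label }
    WHERE {
      ?part rdfs:subClassOf* ?root ;
            rdfs:label ?label .
    }
    """

    # Materialize the view: the CONSTRUCT result is itself an RDF graph that
    # applications can query instead of the full source ontology.
    root = URIRef("http://example.org/Heart")  # hypothetical class of interest
    view = Graph()
    for triple in source.query(VIEW_QUERY, initBindings={"root": root}):
        view.add(triple)

    print(len(view), "triples in the materialized view")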

  4. The Value of Value Sets

    Wynden, Rob; Solbrig, Harold; Tu, Samson; Brinkley, James F
    A common definition of value set will be provided and fully characterized relative to its proposed uses. We will describe, compare, and contrast several approaches to specifying and referencing value sets in a stable manner over time. The term “value set”, although ubiquitous within biomedical informatics, has no common definition and has yet to be fully described in a formal manner. It is essential for the design and launch of new ontologies, biomedical informatics applications, and data sharing environments that a common and well-understood definition of “value set” is provided. It is also essential that options and trade-offs be...

  5. Distributed Queries for Quality Control Checks in Clinical Trials

    Nichols, Nolan; Detwiler, Landon T; Franklin, Joshua D; Brinkley, James F
    Operational Quality Control (QC) checks are standard practice in clinical trials and ensure ongoing compliance with the study protocol, standard operating procedures (SOPs) and Good Clinical Practice (GCP). We present a method for defining QC checks as distributed queries over case report forms (CRF) and clinical imaging data sources. Our distributed query system can integrate time-sensitive information in order to populate QC checks that can facilitate discrepancy resolution workflow in clinical trials.
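    A minimal sketch of the kind of QC check described above, assuming two hypothetical data sources (an EDC system and an imaging archive); the fetch functions and field names are invented for illustration.

    from typing import Dict, List

    def fetch_crf_visits() -> List[Dict]:
        # Placeholder for a query against the EDC system.
        return [{"subject_id": "S001", "visit": "baseline"},
                {"subject_id": "S002", "visit": "baseline"}]

    def fetch_imaging_sessions() -> List[Dict]:
        # Placeholder for a query against the imaging archive.
        return [{"subject_id": "S001", "visit": "baseline"},
                {"subject_id": "S003", "visit": "baseline"}]

    def qc_missing_crf() -> List[Dict]:
        """Flag imaging sessions that lack a corresponding CRF entry."""
        crf_keys = {(r["subject_id"], r["visit"]) for r in fetch_crf_visits()}
        return [s for s in fetch_imaging_sessions()
                if (s["subject_id"], s["visit"]) not in crf_keys]

    print(qc_missing_crf())  # -> [{'subject_id': 'S003', 'visit': 'baseline'}]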

  6. Value Sets via Ontology Views

    Detwiler, Landon T; Brinkley, James F
    We present a method for defining value sets as queries over ontologies (ontology views), and a mechanism for evaluating such queries. In particular we demonstrate an approach utilizing reusable template queries and parameterized URLs. We illustrate this method using an example from the Ontology of Clinical Research (OCRe).
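    A minimal sketch of the template-query idea, assuming a value set is defined as the transitive subclass closure of a parameterized root class; the ontology file and root URI are hypothetical, and rdflib stands in for the actual view machinery.

    from rdflib import Graph, URIRef
    from rdflib.namespace import RDFS

    ontology = Graph()
    ontology.parse("ocre_subset.owl", format="xml")  # hypothetical OCRe extract

    def value_set(root_uri: str) -> set:
        """Return the root class plus all of its transitive subclasses."""
        return set(ontology.transitive_subjects(RDFS.subClassOf, URIRef(root_uri)))

    # A parameterized URL such as ...?rootClass=<uri> could map directly onto
    # this template, giving each value set a stable, reusable address.
    members = value_set("http://example.org/ocre#StudyDesign")  # hypothetical root
    print(len(members), "concepts in the value set")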

  7. A Partnership Approach for Electronic Data Capture in Small-Scale Clinical Trials

    Franklin, Joshua D; Guidry, Alicia F; Brinkley, James F
    The data collection process for clinical trials can be tedious and error-prone, and even a barrier to initiating small-scale studies. Electronic Data Capture (EDC) software can meet the need for faster and more reliable collection of data, but these informatics solutions can also be difficult for researchers to set up. Establishing a full-featured commercial Clinical Trials Management System (CTMS) ecosystem is not realistic due to current institutional resource constraints. As an alternative solution, our Biomedical Informatics core (BMI) provided the technical expertise to pilot each EDC system in partnership with research teams and performed a qualitative evaluation...

  8. Guide to Low Cost Electronic Data Capture Systems for Clinical Trials

    Oldenkamp, Paul
    There are many ways to use computer systems to record the information generated during the course of clinical trials. Electronic Data Capture (EDC) systems range from simple, basic functionality to sophisticated and complex specialty systems. The costs of these systems also vary, from very expensive proprietary products to a recent trend of Open Source software that is distributed without a license fee. Traditionally, academic projects have made use of existing software resources like spreadsheets and Microsoft Access databases. This guide will present information on the low cost options using existing software or Open...

  9. Intelligent Queries over BIRN Data using the Foundational Model of Anatomy and a Distributed Query-Based Data Integration System

    Brinkley, James F; Turner, Jessica A; Detwiler, Landon T; Mejino, Jose L V; Martone, Maryanne E; Rubin, Daniel L
    We demonstrate the usefulness of the Foundational Model of Anatomy (FMA) ontology in reconciling different neuroanatomical parcellation schemes in order to facilitate automatic annotation and “intelligent” querying and visualization over a large multisite fMRI study of schizophrenic versus normal controls.

  10. Ontology View Query Management

    Detwiler, Landon T; Shaw, Marianne; Brinkley, James F
    Like views in relational databases, ontology views are expressed as queries, but over source ontologies rather than tables. To enhance the reusability of such views, we are constructing a view Query Manager application. The Query Manager allows queries to be edited, executed, and stored for reuse. View queries are discoverable by searching the Query Manager's metadata catalog. The Query Manager also supports the storage of materialized view results upon which further queries may be issued.
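    A minimal sketch, under assumed requirements, of a catalog like the Query Manager described above: stored view queries with searchable metadata and an optional materialized result. All names are illustrative.

    from dataclasses import dataclass
    from typing import Dict, Optional

    @dataclass
    class StoredQuery:
        name: str
        text: str                     # the view query itself
        metadata: Dict[str, str]      # e.g. {"source": "FMA", "topic": "lymphatics"}
        materialized: Optional[object] = None  # cached view result, if any

    class QueryCatalog:
        def __init__(self) -> None:
            self._queries: Dict[str, StoredQuery] = {}

        def save(self, q: StoredQuery) -> None:
            self._queries[q.name] = q

        def search(self, **criteria: str):
            """Return stored queries whose metadata matches every given key/value."""
            return [q for q in self._queries.values()
                    if all(q.metadata.get(k) == v for k, v in criteria.items())]

    catalog = QueryCatalog()
    catalog.save(StoredQuery("lymph_nodes_view", "CONSTRUCT { ... } WHERE { ... }",
                             {"source": "FMA", "topic": "lymphatics"}))
    print([q.name for q in catalog.search(source="FMA")])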

  11. Enabling RadLex with the Foundational Model of Anatomy Ontology to Organize and Integrate Neuro-imaging Data

    Mejino, Jose L V; Detwiler, Landon T; Turner, Jessica A; Martone, Maryann E; Rubin, Daniel L; Brinkley, James F
    In this study we focused on empowering RadLex with an ontological framework and additional content derived from the Foundational Model of Anatomy Ontology, thereby providing RadLex with the facility to correlate the different standards used in annotating neuroradiological image data. The objective of this work is to promote data sharing, data harmonization and interoperability between disparate neuroradiological labeling systems.

  12. Representing Neural Connectivity in the Foundational Model of Anatomy Ontology

    Nichols, Nolan; Perlmutter, Aaron; Mejino, Jose L V; Brinkley, James F
    Our current effort focuses on representing connectivity relationships between gray and white matter structures in the Foundational Model of Anatomy Ontology (FMA). There are a number of terms used that imply either structural or functional relationship, but the semantics have not been made formally explicit in the ontology. In this work we propose a set of definitions to disambiguate and clarify the terminologies describing the types of connectivity relationships that exist between gray and white matter structures at different levels of granularity.

  13. Evaluation of probabilistic and logical inference for a SNP annotation system

    Shen, Terry H; Detwiler, Landon T; Tarczy-Hornoch, Peter; Cadag, Eithon; Carlson, Christopher S
    Genome wide association studies (GWAS) are an important approach to understanding the genetic mechanisms behind human diseases. Single nucleotide polymorphisms (SNPs) are the predominant markers used in genome wide association studies, and the ability to predict which SNPs are likely to be functional is important for both a priori and a posteriori analyses of GWA studies. This article describes the design, implementation and evaluation of a family of systems for the purpose of identifying SNPs that may cause a change in phenotypic outcomes. The methods described in this article characterize the feasibility of combinations of logical and probabilistic inference with...

  14. Application of neuroanatomical ontologies for neuroimaging data annotation

    Turner, Jessica A; Mejino, Jose L V; Brinkley, James F; Detwiler, Landon T; Lee, Hyo Jong; Martone, Maryann E; Rubin, Daniel L
    The annotation of functional neuroimaging results for data sharing and re-use is particularly challenging, due to the diversity of terminologies of neuroanatomical structures and cortical parcellation schemes. To address this challenge, we extended the Foundational Model of Anatomy Ontology (FMA) to include cytoarchitectural labels, Brodmann area labels, and a morphological cortical labeling scheme (e.g., the part of Brodmann area 6 in the left precentral gyrus). This representation was also used to augment the neuroanatomical axis of RadLex, the ontology for clinical imaging. The resulting neuroanatomical ontology contains explicit relationships indicating which brain regions are “part of” which other regions, across cytoarchitectural...

  15. Integration of multi-scale biosimulation models via light-weight semantics

    Gennari, John H; Neal, Maxwell L; Carlson, Brian E; Cook, Daniel L
    Currently, biosimulation researchers use a variety of computational environments and languages to model biological processes. Ideally, researchers should be able to semi-automatically merge models to more effectively build larger, multi-scale models. However, current modeling methods do not capture the underlying semantics of these models sufficiently to support this type of model construction. In this paper, we both propose a general approach to solve this problem, and we provide a specific example that demonstrates the benefits of our methodology. In particular, we describe three biosimulation models: (1) a cardio-vascular fluid dynamics model, (2) a model of heart rate...

  16. Advances in semantic representation for multiscale biosimulation: a case study in merging models

    Neal, Maxwell L; Gennari, John H; Arts, Theo; Cook, Daniel L
    As a case-study of biosimulation model integration, we describe our experiences applying the SemSim methodology to integrate independently-developed, multiscale models of cardiac circulation. In particular, we have integrated the CircAdapt model (written by T. Arts for MATLAB) of an adapting vascular segment with a cardiovascular system model (written by M. Neal for JSim). We report on three results from the model integration experience. First, models should be explicit about simulations that occur on different time scales. Second, data structures and naming conventions used to represent model variables may not translate across simulation languages. Finally, identifying the dependencies among model variables...

  17. Content-specific auditing of a large scale anatomy ontology

    Kalet, Ira; Mejino, Jose L V; Wang, Vania; Whipple, Mark; Brinkley, James F
    Biomedical ontologies are envisioned to be usable in a range of research and clinical applications. The requirements for such uses include formal consistency, adequacy of coverage, and possibly other domain specific constraints. In this report we describe a case study that illustrates how application specific requirements may be used to identify modeling problems as well as data entry errors in ontology building and evolution. We have begun a project to use the UW Foundational Model of Anatomy (FMA) in a clinical application in radiation therapy planning. This application focuses mainly (but not exclusively) on the representation of the lymphatic system...

  18. Relationship auditing of the FMA ontology

    Gu, Huanying; Wei, Duo; Mejino, Jose L V; Elhanan, Gai
    The Foundational Model of Anatomy (FMA) ontology is a domain reference ontology based on a disciplined modeling approach. Due to its large size, semantic complexity and manual data entry process, errors and inconsistencies are unavoidable and might remain within the FMA structure without detection. In this paper, we present computable methods to highlight candidate concepts for various relationship assignment errors. The process starts with locating structures formed by transitive structural relationships (part_of, tributary_of, branch_of) and examining their assignments in the context of the IS-A hierarchy. The algorithms were designed to detect five major categories of possible incorrect relationship assignments:...
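    A toy sketch of one plausible check of this kind (not the paper's actual algorithms): follow a transitive structural relationship such as part_of and flag concepts whose IS-A assignment collides with it. The miniature ontology is invented.

    part_of = {            # child -> parent along the structural relationship
        "Left atrium": "Heart",
        "Heart": "Thorax",
    }
    is_a = {               # concept -> its IS-A parent
        "Left atrium": "Cardiac chamber",
        "Heart": "Organ",
        "Thorax": "Body part",
        "Cardiac chamber": "Organ part",
    }

    def part_of_ancestors(concept):
        """All ancestors of a concept along the transitive part_of chain."""
        ancestors = []
        while concept in part_of:
            concept = part_of[concept]
            ancestors.append(concept)
        return ancestors

    def audit():
        """Flag concepts whose IS-A parent also appears among their part_of
        ancestors -- a candidate assignment error in this toy check."""
        return [(c, a) for c in part_of
                for a in part_of_ancestors(c) if is_a.get(c) == a]

    print(audit())  # [] for this consistent toy example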

  19. Lightweight distributed XML-based integration of translational data

    Franklin, Joshua D; Detwiler, Landon T; Brinkley, James F
    A distributed XQuery engine sends sub queries to separate XML data sources, and then combines the results into a single XML composite result. The system is lightweight in that it is very simple to add a new data source. An illustrative example is given for integrating data from an electronic data capture (EDC) system and a separate specimen management system.
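    An illustrative sketch of the composition step only (not the distributed XQuery engine itself): XML fragments returned by two sources are wrapped into one composite result document. The fragments are inlined here; in the real system they would come from sub-queries sent to each source.

    import xml.etree.ElementTree as ET

    edc_xml = "<subjects><subject id='S001'><consent>yes</consent></subject></subjects>"
    lims_xml = "<specimens><specimen subject='S001' type='serum'/></specimens>"

    composite = ET.Element("result")
    composite.append(ET.fromstring(edc_xml))   # data from the EDC system
    composite.append(ET.fromstring(lims_xml))  # data from the specimen manager

    print(ET.tostring(composite, encoding="unicode"))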

  20. Post-Coordinating Orthogonal Ontologies for Data Annotation

    Detwiler, Landon T; Shaw, Marianne; Cook, Daniel L; Gennari, John H; Brinkley, James F
    Using an extended SPARQL syntax and engine, known as vSPARQL, we demonstrate a method for joining concepts from orthogonal reference ontologies to form new concepts on-the-fly for data annotation. We use Skolem functions to produce unique references for each new data annotation instance.
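    A minimal sketch of the Skolem-function idea: derive a single, reproducible identifier for a post-coordinated concept from the URIs of its components, so repeated annotations of the same combination resolve to one reference. The namespace and URIs are hypothetical.

    import hashlib

    SKOLEM_NS = "http://example.org/skolem/"  # hypothetical namespace

    def skolem_iri(*component_uris: str) -> str:
        """Same inputs always yield the same IRI, so repeated annotations of one
        post-coordinated concept resolve to a single reference."""
        digest = hashlib.sha256("|".join(sorted(component_uris)).encode()).hexdigest()
        return SKOLEM_NS + digest[:16]

    print(skolem_iri("http://example.org/anatomy#Heart",        # hypothetical URIs
                     "http://example.org/physiology#BloodFlow"))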
