Collection resources

ETD at Indian Institute of Science (3,494 resources)

Repository of Theses and Dissertations of Indian Institute of Science, Bangalore, India. The repository has been developed to capture, disseminate and preserve research theses of Indian Institute of Science.

Supercomputer Education and Research Centre (SERC)

Showing resources 1 - 20 of 55

  1. Approximate Nearest Neighbour Field Computation and Applications

    Avinash Ramakanth, S
    Approximate Nearest-Neighbour Field (ANNF) maps between two related images are commonly used by the computer vision and graphics community for image editing, completion, retargeting and denoising. In this work we generalize ANNF computation to unrelated image pairs. For accurate ANNF map computation we propose Feature Match, in which low-dimensional features approximate image patches along with global colour adaptation. Unlike existing approaches, the proposed algorithm does not assume any relation between image pairs and thus generalises ANNF maps to any unrelated image pairs. This generalization enables the ANNF approach to handle a wider range of vision applications more efficiently. The following is...
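
    A minimal brute-force sketch of what an ANNF map is: for every patch of one image, find the most similar patch in another image and record the correspondence. This only illustrates the concept; it is not the Feature Match algorithm of the thesis, and the images, patch size and stride below are arbitrary.

```python
# Brute-force approximate nearest-neighbour field (ANNF) illustration.
import numpy as np

def annf_brute_force(img_a, img_b, patch=8, stride=8):
    """For each patch of img_a, return the (row, col) of its nearest patch in img_b."""
    def patches(img):
        coords, feats = [], []
        for r in range(0, img.shape[0] - patch + 1, stride):
            for c in range(0, img.shape[1] - patch + 1, stride):
                coords.append((r, c))
                feats.append(img[r:r + patch, c:c + patch].ravel())
        return coords, np.array(feats, dtype=np.float32)

    coords_a, feats_a = patches(img_a)
    coords_b, feats_b = patches(img_b)
    # Pairwise squared distances between all patches of A and all patches of B.
    d2 = ((feats_a[:, None, :] - feats_b[None, :, :]) ** 2).sum(-1)
    nearest = d2.argmin(axis=1)
    return {coords_a[i]: coords_b[j] for i, j in enumerate(nearest)}

# Toy usage on two random grayscale images.
rng = np.random.default_rng(0)
field = annf_brute_force(rng.random((64, 64)), rng.random((64, 64)))
```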

  2. Motion Based Event Analysis

    Biswas, Sovan
    Motion is an important cue in videos that captures the dynamics of moving objects. It helps in effective analysis of various event related tasks such as human action recognition, anomaly detection, tracking, crowd behavior analysis, traffic monitoring, etc. Generally, accurate motion information is computed using various optical flow estimation techniques. On the other hand, coarse motion information is readily available in the form of motion vectors in compressed videos. Utilizing these encoded motion vectors reduces the computational burden involved in flow estimation and enables rapid analysis of video streams. In this work, the focus is on analyzing motion patterns, retrieved...
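
    For context, a hedged sketch of the "accurate" side of the trade-off described above: dense optical flow computed with OpenCV's Farneback estimator. The thesis instead reuses the coarse motion vectors already present in compressed video; that codec-specific decoding step is not shown, and the input file name is a placeholder.

```python
# Dense optical flow with OpenCV (the expensive alternative to compressed-domain motion vectors).
import cv2

cap = cv2.VideoCapture("video.mp4")          # hypothetical input file
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # flow[y, x] = (dx, dy): per-pixel motion between consecutive frames.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    prev_gray = gray
cap.release()
```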

  3. Fast Identification of Structured P2P Botnets Using Community Detection Algorithms

    Venkatesh, Bharath
    Botnets are a global problem, and effective botnet detection requires cooperation of large Internet Service Providers, allowing near global visibility of traffic that can be exploited to detect them. The global visibility comes with huge challenges, especially in the amount of data that has to be analysed. To handle such large volumes of data, a robust and effective detection method is the need of the hour and it must rely primarily on a reduced or abstracted form of data such as a graph of hosts, with the presence of an edge between two hosts if there is any data communication...
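
    A small sketch of the graph abstraction described above: hosts as nodes, an edge whenever any communication is observed, followed by a community detection pass. The host names and the particular algorithm (greedy modularity from networkx) are illustrative choices, not the detection method evaluated in the thesis.

```python
# Host communication graph plus a community detection pass.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

edges = [("hostA", "hostB"), ("hostB", "hostC"), ("hostA", "hostC"),
         ("hostD", "hostE"), ("hostE", "hostF"), ("hostD", "hostF"),
         ("hostC", "hostD")]                 # hypothetical flow records
G = nx.Graph(edges)

for i, community in enumerate(greedy_modularity_communities(G)):
    print(f"community {i}: {sorted(community)}")
```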

  4. Cooperative Execution of OpenCL Programs on Multiple Heterogeneous Devices

    Pandit, Prasanna Vasant
    Computing systems have become heterogeneous with the increasing prevalence of multi-core CPUs, Graphics Processing Units (GPU) and other accelerators in them. OpenCL has emerged as an attractive programming framework for heterogeneous systems. However, utilizing multiple devices in OpenCL is a challenge as it requires the programmer to explicitly map data and computation to each device. Utilizing multiple devices simultaneously to speed up execution of a kernel is even more complex, as the relative execution time of the kernel on different devices can vary significantly. Also, after each kernel execution, a coherent version of the data needs to be...
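
    As a starting point for the multi-device setting described above, a hedged sketch using pyopencl: enumerate every device on every platform and give each its own context and command queue. The actual partitioning of a kernel's work across devices, which is the problem the thesis addresses, is not shown.

```python
# Enumerate OpenCL devices and prepare one context/queue per device.
import pyopencl as cl

queues = []
for platform in cl.get_platforms():
    for device in platform.get_devices():
        ctx = cl.Context(devices=[device])
        queues.append((device.name, ctx, cl.CommandQueue(ctx)))

for name, _, _ in queues:
    print("available device:", name)
```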

  5. Timing-Driven Routing in VLSI Physical Design Under Uncertainty

    Samanta, Radhamanjari
    The multi-net Global Routing Problem (GRP) in VLSI physical design is the problem of routing a set of nets subject to limited resources and delay constraints. Various state-of-the-art routers are available, but their main focus is to optimize the wire length and minimize the overflow. However, optimizing wire length does not necessarily meet the timing constraints at the sink nodes. Also, in modern nanometer-scale VLSI processes the consideration of process variations is a necessity for ensuring reasonable yield at the fab. In this work, we try to find a fundamental strategy to address the timing-driven Steiner tree construction (i.e.,...
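
    For illustration of the underlying combinatorial object only, a sketch of a Steiner tree connecting a net's pins over a routing grid, using networkx's approximation routine. It ignores the timing constraints, congestion (overflow) and process variation that the thesis incorporates; the grid size and pin locations are made up.

```python
# Plain Steiner-tree approximation over a toy routing grid.
import networkx as nx
from networkx.algorithms.approximation import steiner_tree

grid = nx.grid_2d_graph(8, 8)                 # hypothetical routing grid
pins = [(0, 0), (7, 2), (3, 7)]               # hypothetical pin locations of one net
tree = steiner_tree(grid, pins)
print(sorted(tree.edges()))
```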

  6. Communication Structure and Mixing Patterns in Complex Networks

    Choudhury, Sudip Hazra
    Real world systems like biological, social, technological, infrastructural and many others can be modeled as networks. The field of network science aims to study these complex networks and understand their structure and dynamics. A common feature of networks across domains is the distribution of the degree of the nodes according to a power-law (scale invariance). As a consequence of this skewness, the high degree nodes dominate the properties of these networks. The rich-club phenomenon is observed when the high degree or the rich nodes of the network prefer to connect amongst themselves. In the first part, the thesis investigates the rich-club...
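
    A short sketch of the rich-club measurement mentioned above: the rich-club coefficient phi(k) is the density of links among nodes of degree greater than k. A Barabási–Albert graph stands in for a scale-free network; this is an illustration, not the thesis' analysis pipeline.

```python
# Rich-club coefficient of a synthetic scale-free network.
import networkx as nx

G = nx.barabasi_albert_graph(n=1000, m=3, seed=42)
phi = nx.rich_club_coefficient(G, normalized=False)   # phi(k) for each degree k
for k in sorted(phi)[-5:]:
    print(f"phi({k}) = {phi[k]:.3f}")
```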

  7. Stochastic Chemical Kinetics : A Study on hTREK1 Potassium Channel

    Metri, Vishal
    Chemical reactions involving a small number of reacting molecules are noisy processes. They are simulated using stochastic simulation algorithms like the Gillespie SSA, which are valid when the reaction environment is well-mixed. This is not the case for reactions occurring on biological media like cell membranes, where alternative simulation methods have to be used to account for the crowded nature of the reacting environment. Ion channels, which are membrane proteins controlling the flow of ions into and out of the cell, offer excellent single molecule conditions to test stochastic simulation schemes in crowded biological media. Single molecule reactions are of great...
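
    A minimal Gillespie direct-method (SSA) sketch for a two-state channel C <-> O, the well-mixed setting the abstract starts from. The rate constants and molecule counts are illustrative only, and this does not reproduce the crowded-membrane methods studied in the thesis.

```python
# Gillespie direct method for a two-state channel: closed <-> open.
import numpy as np

def gillespie_two_state(n_closed=50, n_open=0, k_open=1.0, k_close=0.5, t_end=10.0):
    rng = np.random.default_rng(1)
    t, trace = 0.0, [(0.0, n_open)]
    while t < t_end:
        a1, a2 = k_open * n_closed, k_close * n_open   # reaction propensities
        a0 = a1 + a2
        if a0 == 0:
            break
        t += rng.exponential(1.0 / a0)                 # waiting time to next reaction
        if rng.random() < a1 / a0:                     # pick which reaction fires
            n_closed, n_open = n_closed - 1, n_open + 1
        else:
            n_closed, n_open = n_closed + 1, n_open - 1
        trace.append((t, n_open))
    return trace

print(gillespie_two_state()[-1])    # final (time, open-channel count)
```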

  8. Analysis of Molecular Dynamics Trajectories of Proteins Performed using Different Forcefields and Identification of Mobile Segments

    Katagi, Gurunath M
    The selection of the forcefield is a crucial issue in any MD related work, and there is no clear indication as to which of the many available forcefields is the best for protein analysis. Many recent literature surveys indicate that MD work may be hindered by two limitations, namely conformational sampling and the forcefields used (inaccuracies in the potential energy function may bias the simulation toward incorrect conformations). However, advances in computing infrastructures and in the theoretical and computational aspects of MD have paved the way to carry out sampling on a sufficiently long time scale, putting a need for the accuracies in...
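
    A hedged sketch of one common way to flag "mobile segments" in a trajectory: per-residue root-mean-square fluctuation (RMSF) about the mean structure, computed here from a synthetic coordinate array. The thesis' own trajectory analyses and forcefield comparisons are not reproduced.

```python
# Per-residue RMSF from a toy trajectory of shape (n_frames, n_residues, 3).
import numpy as np

rng = np.random.default_rng(0)
coords = rng.normal(size=(200, 50, 3))          # hypothetical CA coordinates
mean_structure = coords.mean(axis=0)
rmsf = np.sqrt(((coords - mean_structure) ** 2).sum(axis=2).mean(axis=0))

# Flag residues fluctuating well above the average as "mobile".
mobile = np.where(rmsf > rmsf.mean() + rmsf.std())[0]
print("residues flagged as mobile:", mobile)
```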

  9. Polymorphic ASIC : For Video Decoding

    Adarsha Rao, S J
    Video applications are becoming ubiquitous in recent times due to an explosion in the number of devices with video capture and display capabilities. Traditionally, video applications are implemented on a variety of devices, with each device targeting a specific application. However, advances in technology have created a need to support multiple applications from a single device like a smart phone or tablet. Such convergence of applications necessitates support for interoperability among various applications, scalable performance to meet the requirements of different applications, and a high degree of reconfigurability to accommodate rapid evolution in application features. In addition, low power consumption...

  10. Automated Selection of Hyper-Parameters in Diffuse Optical Tomographic Image Reconstruction

    Jayaprakash, *
    Diffuse optical tomography is a promising imaging modality that provides functional information of the soft biological tissues, with prime imaging applications including breast and brain tissue in-vivo. This modality uses near infrared light (600 nm - 900 nm) as the probing media, giving it the advantage of being a non-ionizing imaging modality. The image reconstruction problem in diffuse optical tomography is typically posed as a least-squares problem that minimizes the difference between experimental and modeled data with respect to optical properties. This problem is non-linear and ill-posed, due to multiple scattering of the near infrared light in the biological tissues, leading to infinitely many possible solutions. The...
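
    A generic illustration of automated hyper-parameter choice for a linear, Tikhonov-regularized reconstruction x_lambda = argmin ||y - J x||^2 + lambda ||x||^2, with lambda picked by generalized cross-validation (GCV). The forward model J, the target x and the noise level are synthetic, and GCV here is one standard criterion rather than the specific selection scheme developed in the thesis.

```python
# Tikhonov-regularised least squares with lambda chosen by GCV.
import numpy as np

rng = np.random.default_rng(0)
J = rng.normal(size=(60, 120))                  # synthetic ill-posed sensitivity matrix
x_true = np.zeros(120); x_true[30:40] = 1.0
y = J @ x_true + 0.05 * rng.normal(size=60)     # noisy synthetic measurements

def gcv_score(lam):
    JtJ = J.T @ J + lam * np.eye(J.shape[1])
    x = np.linalg.solve(JtJ, J.T @ y)
    influence = J @ np.linalg.solve(JtJ, J.T)   # "hat" matrix of the regularised fit
    resid = np.linalg.norm(y - J @ x) ** 2
    return resid / (J.shape[0] - np.trace(influence)) ** 2

lams = np.logspace(-4, 2, 25)
best = min(lams, key=gcv_score)
print("lambda chosen by GCV:", best)
```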

  11. Development and Validation of Analytical Models for Diffuse Fluorescence Spectroscopy/Imaging in Regular Geometries

    Ayyalasomayajula, Kalyan Ram
    New advances in computational modeling and instrumentation in the past decade have enabled the use of electromagnetic radiation for non-invasive monitoring of the physiological state of biological tissues. Near infrared (NIR) light, with a wavelength range of 600 nm - 1000 nm, has been the main contender in these emerging molecular imaging modalities. Assessment of the accurate pathological condition of the tissue under investigation relies on the contrast in the molecular images, where the endogenous contrast may not be sufficient in these scenarios. Fluorescence (exogenous) contrast agents have been deployed to overcome these difficulties, where the preferential uptake by the...

  12. Adaptive Fault Tolerance Strategies for Large Scale Systems

    George, Cijo
    Exascale systems of the future are predicted to have a mean time between node failures (MTBF) of less than one hour. At such a low MTBF, the number of processors available for execution of a long running application can vary widely throughout the execution of the application. Employing traditional fault tolerance strategies like periodic checkpointing in these highly dynamic environments may not be effective because of the high number of application failures, resulting in a large amount of work lost due to rollbacks, apart from the increased recovery overheads. In this context, it is highly necessary to have fault tolerance strategies that can...
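
    For scale, the classical Young/Daly estimate for the periodic-checkpointing baseline referred to above: the near-optimal interval between checkpoints is roughly sqrt(2 * C * MTBF), where C is the checkpoint cost. The numbers are illustrative; the thesis proposes adaptive strategies rather than a fixed interval.

```python
# Young/Daly checkpoint-interval estimate for periodic checkpointing.
import math

def young_daly_interval(checkpoint_cost_s, mtbf_s):
    return math.sqrt(2.0 * checkpoint_cost_s * mtbf_s)

# e.g. a 60 s checkpoint on a system with a 1 hour MTBF
print(f"{young_daly_interval(60, 3600):.0f} s between checkpoints")
```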

  13. Low Overhead Soft Error Mitigation Methodologies

    Prasanth, V
    CMOS technology scaling is bringing new challenges to the designers in the form of new failure modes. The challenges include long term reliability failures and particle strike induced random failures. Studies have shown that increasingly, the largest contributor to the device reliability failures will be soft errors. Due to reliability concerns, the adoption of soft error mitigation techniques is on the increase. As the soft error mitigation techniques are increasingly adopted, the area and performance overhead incurred in their implementation also becomes pertinent. This thesis addresses the problem of providing low cost soft error mitigation. The main contributions of this...

  14. Development of Novel Reconstruction Methods Based on l1--Minimization for Near Infrared Diffuse Optical Tomography

    Shaw, Calvin B
    Diffuse optical tomography uses near infrared (NIR) light as the probing media to recover the distributions of tissue optical properties. It has the potential to become an adjunct imaging modality for breast and brain imaging that is capable of providing functional information of the tissue under investigation. As NIR light propagation in the tissue is dominated by scattering, the image reconstruction problem (inverse problem) tends to be non-linear and ill-posed, requiring the usage of advanced computational methods to compensate for this. Traditional image reconstruction methods in diffuse optical tomography employ l2-norm based regularization, which is known to remove high frequency noise in...
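
    A minimal ISTA (iterative soft-thresholding) sketch for the generic l1-regularized problem min_x 0.5*||y - A x||^2 + lam*||x||_1, the form of sparse recovery the abstract refers to. A, y and lam are synthetic, and this is not the thesis' specific reconstruction algorithm.

```python
# ISTA for l1-regularised least squares (sparse recovery).
import numpy as np

def ista(A, y, lam=0.1, n_iter=200):
    L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)                  # gradient of 0.5*||y - Ax||^2
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 100))
x_true = np.zeros(100); x_true[[5, 37, 80]] = [1.0, -2.0, 1.5]
x_hat = ista(A, A @ x_true, lam=0.05)
print(np.flatnonzero(np.abs(x_hat) > 0.5))        # recovered support
```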

  15. A Runtime Framework for Regular and Irregular Message-Driven Parallel Applications on GPU Systems

    Rengasamy, Vasudevan
    The effective use of GPUs for accelerating applications depends on a number of factors including effective asynchronous use of heterogeneous resources, reducing data transfer between CPU and GPU, increasing occupancy of GPU kernels, overlapping data transfers with computations, reducing GPU idling and kernel optimizations. Overcoming these challenges requires considerable effort on the part of the application developers. Most optimization strategies are often proposed and tuned specifically for individual applications. Message-driven executions with over-decomposition of tasks constitute an important model for parallel programming and provide multiple benefits including communication-computation overlap and reduced idling on resources. Charm++ is one such message-driven language...

  16. Malware Analysis using Profile Hidden Markov Models and Intrusion Detection in a Stream Learning Setting

    Saradha, R
    In the last decade, a lot of machine learning and data mining based approaches have been used in the areas of intrusion detection, malware detection and classification, and traffic analysis. In the area of malware analysis, static binary analysis techniques have become increasingly difficult with the code obfuscation methods and code packing employed when writing the malware. Behavior-based analysis techniques are being used in large malware analysis systems for this reason. In prior art, a number of clustering and classification techniques have been used to classify malware samples into families and also to identify new malware families,...

  17. Development of Next Generation Image Reconstruction Algorithms for Diffuse Optical and Photoacoustic Tomography

    Jaya Prakash, *
    Biomedical optical imaging is capable of providing functional information of the soft biological tissues, whose applications include imaging large tissues, such as breast and brain in-vivo. Biomedical optical imaging uses near infrared light (600 nm - 900 nm) as the probing media, giving an added advantage of being a non-ionizing imaging modality. The tomographic technologies for imaging large tissues encompass diffuse optical tomography and photoacoustic tomography. Traditional image reconstruction methods in diffuse optical tomography employ an l2-norm based regularization, which is known to remove high frequency noise in the reconstructed images and make them appear smooth. Hence sparsity based image reconstruction has been deployed for diffuse optical tomography; these sparse recovery methods utilize the lp-norm based regularization in the estimation problem with 0 ≤...

  18. Study of RCS from Aerodynamic Flow using Parallel Volume-Surface Integral Equation

    Padhy, Venkat Prasad
    Estimation of the Radar Cross Section of large inhomogeneous scattering objects such as composite aircraft, ships and biological bodies at high frequencies has posed a large computational challenge. The detection of scattering from wake vortices, leading to the detection and possible identification of low observable aircraft, also demands the development of computationally efficient and rigorous numerical techniques. Amongst the various methods deployed in Computational Electromagnetics, the Method of Moments predicts the electromagnetic characteristics accurately. The Method of Moments is a rigorous method, combined with an array of modeling techniques such as triangular patch, cubical cell and tetrahedral modeling. The Method of Moments has become...

  19. Development of Sparse Recovery Based Optimized Diffuse Optical and Photoacoustic Image Reconstruction Methods

    Shaw, Calvin B
    Diffuse optical tomography uses near infrared (NIR) light as the probing media to recover the distributions of tissue optical properties, with an ability to provide functional information of the tissue under investigation. As NIR light propagation in the tissue is dominated by scattering, the image reconstruction problem (inverse problem) is non-linear and ill-posed, requiring the usage of advanced computational methods to compensate for this. The diffuse optical image reconstruction problem is always rank-deficient, where finding the independent measurements among the available measurements becomes a challenging problem. Knowing these independent measurements will help in designing better data acquisition set-ups and lowering the costs associated with...
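
    One generic way to gauge how many independent measurements a linear sensitivity (Jacobian) matrix actually provides is its numerical rank from the singular value spectrum; a sketch follows. The Jacobian here is synthetic and low-rank by construction, and this is not the measurement-selection method developed in the thesis.

```python
# Numerical rank of a synthetic Jacobian as a proxy for independent measurements.
import numpy as np

rng = np.random.default_rng(0)
J = rng.normal(size=(200, 20)) @ rng.normal(size=(20, 500))   # rank ~ 20 by construction
s = np.linalg.svd(J, compute_uv=False)
numerical_rank = int((s > s[0] * 1e-6).sum())                 # relative singular-value cutoff
print("independent measurements (numerical rank):", numerical_rank)
```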

  20. Variance of Difference as Distance Like Measure in Time Series Microarray Data Clustering

    Mukhopadhyay, Sayan
    Our intention is to find similarity among the time series expressions of the genes in microarray experiments. It is hypothesized that at a given time point the concentration of one gene’s mRNA is directly affected by the concentration of another gene’s mRNA, and may have biological significance. We define the dissimilarity between two time-series data sets as the variance of the Euclidean distances at each time point. The large number of gene expressions makes the calculation of the variance of distance at each point computationally expensive and therefore challenging in terms of execution time. For this reason we use an autoregressive model which...
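
    A direct sketch of the dissimilarity described above: the variance, over time points, of the point-wise distances between two expression time series. The series below are synthetic, and the autoregressive speed-up mentioned in the abstract is not shown.

```python
# Variance-of-difference dissimilarity between two gene expression time series.
import numpy as np

def variance_of_difference(x, y):
    # Point-wise distance at each time point, then its variance across time.
    return np.var(np.abs(np.asarray(x) - np.asarray(y)))

gene_a = [0.1, 0.4, 0.9, 1.2, 1.0]
gene_b = [0.2, 0.5, 1.1, 1.1, 0.9]
print(variance_of_difference(gene_a, gene_b))
```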
