
Raghavendra, G S
Many methods of analyzing both the physical and chemical behavior of proteins require information about their structure and stability. Various other parameters, such as energy functions, solvation, hydrophobic/hydrophilic effects, surface areas, and volumes, also play an important part in such analysis. The contribution of cavities to these parameters is very important. Existing methods to compute and measure cavities are limited by the inherent inaccuracies in the acquisition of data through X-ray crystallography and by uncertainties in the computation of atomic radii. We present a topological framework that enables robust computation and visualization of these structures. Given a fixed...

Thakkar, Jay
Unreliable communication channels are a practical reality, and they add to the complexity of protocol design and verification. In this work, we consider noisy channels that can corrupt messages. We present an approach to model and verify protocols that combine error detection and error control to provide reliable communication over noisy channels. We call these protocols retransmission protocols, as they achieve reliable communication through repeated retransmission of messages. These protocols typically use cyclic redundancy checks and sliding window protocols for error detection and control, respectively. We propose models of these protocols as regular transducers operating on bit strings. Deterministic streaming string...
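For readers unfamiliar with the mechanics, the error-detection step these protocols rely on is plain polynomial long division over GF(2). The sketch below is a minimal illustration of a CRC computation on bit strings, not the transducer model developed in the work; the generator polynomial is a textbook toy:

```python
def crc_remainder(bits, poly):
    """CRC of a bit string via polynomial long division over GF(2).

    bits: message as a string of '0'/'1' characters.
    poly: generator polynomial, including its leading '1' bit.
    Returns the remainder (the checksum) as a bit string.
    """
    width = len(poly) - 1
    # Append `width` zero bits, then repeatedly XOR the divisor in.
    dividend = list(bits + "0" * width)
    for i in range(len(bits)):
        if dividend[i] == "1":
            for j, p in enumerate(poly):
                dividend[i + j] = "1" if dividend[i + j] != p else "0"
    return "".join(dividend[-width:])
```

The sender appends the remainder to the message; the receiver recomputes it and compares. A corrupted bit changes the remainder, which is what triggers the retransmission.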

Rajendraprasad, Deepak
This thesis touches upon three different topics in graph theory, namely, rainbow colouring, product dimension, and boxicity.
Rainbow colouring. An edge colouring of a graph is called a rainbow colouring if every pair of vertices is connected by at least one path in which no two edges are coloured the same. The rainbow connection number of a graph is the minimum number of colours required to rainbow colour it. In this thesis, we give upper bounds on the rainbow connection number based on graph invariants such as minimum degree, vertex connectivity, and radius. We also give some computational complexity results for special graph classes.
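For concreteness, the defining condition can be verified by brute force on small graphs. This hypothetical helper searches over (vertex, used-colour-set) states and is exponential in the number of colours, so it is an illustration rather than a practical algorithm:

```python
from itertools import combinations

def is_rainbow_connected(n, coloured_edges):
    """Check whether an edge colouring is a rainbow colouring.

    n: number of vertices (labelled 0..n-1).
    coloured_edges: dict mapping an edge (u, v) to its colour.
    """
    adj = {v: [] for v in range(n)}
    for (u, v), c in coloured_edges.items():
        adj[u].append((v, c))
        adj[v].append((u, c))

    def rainbow_path(s, t):
        # DFS over (vertex, frozenset of colours used so far).
        stack, seen = [(s, frozenset())], set()
        while stack:
            v, used = stack.pop()
            if v == t:
                return True
            if (v, used) in seen:
                continue
            seen.add((v, used))
            for w, c in adj[v]:
                if c not in used:
                    stack.append((w, used | {c}))
        return False

    return all(rainbow_path(s, t) for s, t in combinations(range(n), 2))
```

On the 4-cycle, for example, two colours already suffice, so its rainbow connection number is 2, while a single colour fails for the diagonally opposite pairs.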
Product dimension...

Chethan Kamath, H
An identity-based cryptosystem (IBC) is a public-key system in which the public key can be any arbitrary string, such as an email address. The notion was introduced by Shamir with the primary goal of simplifying certificate management. An identity-based signature (IBS) is the identity-based counterpart of a digital signature.
In the first (and primary) part of the work, we take a closer look at an IBS due to Galindo and Garcia (GG-IBS, for short). GG-IBS is derived through a simple and elegant concatenation of two Schnorr signatures and, importantly, does not rely on pairings. Its security is established through two algorithms...
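Since GG-IBS is built by concatenating Schnorr signatures, a toy Schnorr instance may help fix ideas. The parameters below are far too small to be secure and are for illustration only; the real scheme works in a group of large prime order:

```python
import hashlib

# Subgroup of order Q = 11 inside Z_23^*: G = 2 has order 11 mod 23.
P, Q, G = 23, 11, 2  # toy parameters, insecure by design

def H(r, m):
    # Hash the commitment and message down to a challenge exponent.
    digest = hashlib.sha256(f"{r}|{m}".encode()).hexdigest()
    return int(digest, 16) % Q

def keygen(x):
    """Secret key x in Z_Q, public key y = G^x mod P."""
    return x % Q, pow(G, x, P)

def sign(x, m, k):
    r = pow(G, k, P)      # commitment (k is the per-signature nonce)
    e = H(r, m)           # challenge
    s = (k + e * x) % Q   # response
    return e, s

def verify(y, m, sig):
    e, s = sig
    # Recover r' = G^s * y^(-e) = G^(k + ex) * G^(-ex) = G^k = r.
    r = (pow(G, s, P) * pow(pow(y, e, P), -1, P)) % P
    return H(r, m) == e
```

Verification succeeds because the exponents cancel algebraically; GG-IBS chains two such signatures so that the identity binds the user's key to the authority's key.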

Jayaram, Sampath
Given a program with assertions, the assertion checking problem asks whether there is an execution of the program that violates one of the assertions. One approach to this problem is to explore different paths towards assertion violations and to learn “blocking” conditions whenever a path is blocked from reaching a violation. The Synergy algorithm of Gulavani et al. [FSE 2006] and the Lazy Annotation algorithm of McMillan [CAV 2010] are two recent algorithms that follow this approach to assertion checking. Each technique has its own advantages.
Synergy uses concrete tests, which are very cheap compared to theorem prover calls....

Ravindar, Archana
Estimating program worst-case execution time (WCET) is an important problem in the domain of real-time and embedded systems that are deadline-centric. If the WCET of a program is found to exceed the deadline, the program is either recoded or the target architecture is modified to meet the deadline. Predominantly, there exist three broad approaches to estimating WCET: static WCET analysis, hybrid measurement-based analysis, and statistical WCET analysis. Though measurement-based analyzers benefit from knowledge of run-time behavior, the amount of instrumentation remains a concern.
This thesis proposes a CPI-centric WCET analyzer that estimates WCET as a product of worst case...
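The abstract is truncated here, but a common CPI-based formulation (an assumption on our part, not a quote from the thesis) estimates WCET as worst-case cycles-per-instruction times instruction count, converted to time by the clock frequency:

```python
def wcet_estimate(worst_case_cpi, instruction_count, clock_hz):
    """WCET in seconds under an assumed CPI-based model:
    cycles = worst-case CPI * instruction count; time = cycles / frequency.
    """
    return worst_case_cpi * instruction_count / clock_hz

# e.g. 1.8 cycles/instruction over 2 million instructions at 400 MHz:
# 1.8 * 2e6 / 4e8 = 0.009 s, i.e. a 9 ms worst-case bound.
```

The appeal of such a factorization is that CPI captures the micro-architectural variability, so instrumenting for CPI alone needs far less measurement than path-level timing.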

Margoor, Amogh
The problem addressed in this thesis is sound, scalable, demand-driven null-dereference verification for Java programs via over-approximated weakest-preconditions analysis. The base version of this analysis was described in a previous publication; in this thesis, we focus primarily on two major optimizations that we have incorporated, which allow longer program paths to be traversed more efficiently and hence increase the precision of the approach. The first optimization bypasses certain expensive-to-analyze constructs, such as virtual calls with too many possible targets, by directly transferring dataflow facts from points after the construct to points before it along def-use edges...
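To illustrate the flavour of backward weakest-precondition reasoning, here is a deliberately tiny model that handles only copy assignments; the thesis analysis handles far richer Java semantics, so this is purely a sketch:

```python
def wp_nonnull(var, stmts):
    """Weakest precondition for `var != null` after executing `stmts`.

    stmts: list of copy assignments (lhs, rhs), executed in order.
    Walking backwards, each assignment lhs := rhs substitutes rhs
    for lhs in the condition being propagated.
    """
    v = var
    for lhs, rhs in reversed(stmts):
        if lhs == v:
            v = rhs
    return f"{v} != null"
```

For `p = q; r = p;` followed by a dereference of `r`, the condition propagated back to the entry is `q != null`: the verifier then asks whether any caller can violate it.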

Kumar, Kanuj
Microscopic analysis of biological structures can be significantly enhanced by representing the object of study as a three-dimensional entity. Assisting neurobiologists in investigating the molecular mechanisms involved in neurite formation requires an adequate visual model, or at least some measurable data. Reconstruction aids such analysis: it helps gain insight into the morphological variation observed in each class of neurons and supports simulations of neuronal behavior. To perform the reconstruction, biologists today have to rely on time-consuming manual or semi-manual methods, which either do not exhibit robustness against noise of...

Kumar, Vikas
Private set intersection (PSI) is a two-party protocol where both parties possess a private set and, at the end of the protocol, one party (the client) learns the intersection while the other party (the server) learns nothing. Motivated by some interesting practical applications, several provably secure and efficient PSI protocols have appeared in the literature in the recent past. Some of the proposed solutions are secure in the honest-but-curious (HbC) model while the others are secure in the (stronger) malicious model. Security in the latter is traditionally achieved by following the classical approach of attaching a zero knowledge proof of knowledge (ZKPoK) (and/or using...
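For orientation, the classical Diffie-Hellman-style PSI in the HbC model can be sketched as follows: both parties blind hashed elements with secret exponents, and the commutativity of exponentiation lets the client match the doubly blinded values. The modulus below is a toy and no security is claimed:

```python
import hashlib

P = 2**127 - 1  # a Mersenne prime; far too small for real use

def h(x):
    """Hash a string into the multiplicative group mod P."""
    return int.from_bytes(hashlib.sha256(x.encode()).digest(), "big") % P

def dh_psi(client_set, server_set, a, b):
    """Honest-but-curious DH-based PSI sketch.

    a: client's secret exponent; b: server's secret exponent.
    Client sends h(x)^a; server returns (h(x)^a)^b plus its own h(y)^b;
    the client raises the latter to a and compares h(.)^(ab) values.
    """
    client_blinded = {pow(h(x), a, P): x for x in client_set}              # client -> server
    double_blinded = {pow(v, b, P): x for v, x in client_blinded.items()}  # server -> client
    server_blinded = {pow(pow(h(y), b, P), a, P) for y in server_set}      # h(y)^b raised to a
    return {x for v, x in double_blinded.items() if v in server_blinded}
```

Under HbC the server sees only blinded values; making this robust against malicious deviation is exactly where the ZKPoK machinery mentioned above enters.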

Pananilath, Irshad Muhammed
The Lattice-Boltzmann method (LBM), a promising new particle-based simulation technique for complex and multiscale fluid flows, has seen tremendous adoption in computational fluid dynamics in recent years. Even with a state-of-the-art LBM solver such as Palabos, a user still has to write the program manually using the library-supplied primitives. We propose an automated code generator for a class of LBM computations, with the objective of achieving high performance on modern architectures.
Tiling is a very important loop transformation used to improve the performance of stencil computations by exploiting locality and parallelism. In the first part of this work, we explore diamond tiling, a...
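For readers new to tiling, simple rectangular tiling (not the diamond tiling studied here) already shows the essential transformation: the iteration space is traversed block by block so that each block's working set fits in cache, without changing the computed result:

```python
def transpose_naive(A):
    """Reference untiled transpose of a square matrix."""
    n = len(A)
    return [[A[i][j] for i in range(n)] for j in range(n)]

def transpose_tiled(A, T=2):
    """Blocked transpose: identical result, tile-by-tile traversal."""
    n = len(A)
    B = [[0] * n for _ in range(n)]
    for ii in range(0, n, T):                      # loop over tile origins
        for jj in range(0, n, T):
            for i in range(ii, min(ii + T, n)):    # loops within one tile
                for j in range(jj, min(jj + T, n)):
                    B[j][i] = A[i][j]
    return B
```

Stencils add the complication that tiles carry dependences across time steps, which is what shapes such as diamonds are designed to satisfy while still exposing parallelism.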

Goyal, Prachi
The classical graph edge coloring problem deals with coloring the edges of a given graph with the minimum number of colors such that no two adjacent edges get the same color. In the following work, we look at the other end of the spectrum, wherein our goal is to maximize the number of colors used for coloring the edges of the graph under some vertex-specific constraints.
We deal with the MAXIMUM EDGE COLORING problem, which is defined as follows: for an integer q ≥ 2 and a graph G, the goal is to...

Bhattacharya, Suparna
Large flexible software systems tend to incur “bloat”, defined here as the runtime overhead induced by the accumulation of excess functionality and objects. Removing bloat is hard, as these overheads are a side-effect of the same trends that have fuelled software growth. Even defining and measuring bloat is non-trivial, as software doesn’t come with built-in labels indicating which portions of computation are necessary for a given application and which lead to bloat. Much progress has been made in novel analysis tools that aid (human experts in) the process of finding bloat by highlighting signs of excessive activity and data...

Lakshmanan, K
In many optimization problems, the relationship between the objective and the parameters is not known. The objective function itself may be stochastic, such as a long-run average over some random cost samples. In such cases, finding the gradient of the objective is not possible, and it is in this setting that stochastic approximation algorithms are used. These algorithms use estimates of the gradient and are stochastic in nature. Among gradient estimation techniques, Simultaneous Perturbation Stochastic Approximation (SPSA) and the Smoothed Functional (SF) scheme are widely used. In this thesis, we propose a novel multi-timescale quasi-Newton-based smoothed functional (QNSF) algorithm for...
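The SPSA estimator mentioned above perturbs all coordinates at once with a random ±1 vector, so one gradient estimate costs just two function evaluations regardless of dimension. This is a generic sketch of the estimator, not the QNSF scheme of the thesis:

```python
def spsa_gradient(f, theta, delta, c):
    """Two-measurement SPSA gradient estimate at theta.

    delta: perturbation vector of +/-1 entries (Rademacher in practice).
    c: perturbation size.
    Estimate_i = (f(theta + c*delta) - f(theta - c*delta)) / (2 * c * delta_i)
    """
    plus = [t + c * d for t, d in zip(theta, delta)]
    minus = [t - c * d for t, d in zip(theta, delta)]
    diff = (f(plus) - f(minus)) / (2.0 * c)
    return [diff / d for d in delta]
```

A single estimate is biased by cross-terms (for f(θ) = θ₀² + 2θ₁² at [1, 1] with delta [1, -1] it comes out to about [-2, 2] against the true gradient [2, 4]), but averaging over random delta recovers the gradient in expectation, which is what the stochastic approximation iteration exploits.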

Bhaskaracharya, Somashekaracharya G
Efficient memory usage is crucial for data-intensive applications, as a smaller memory footprint ensures better cache performance and allows one to run a larger problem size given a fixed amount of main memory. The solutions found by existing techniques for automatic storage optimization for arrays in loop nests, which minimize the storage requirements of the arrays, are often far from good or optimal and can even miss nearly all storage optimization potential. In this work, we present a new automatic storage optimization framework and techniques that can be used to achieve intra-array as well as inter-array storage reuse within...
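As a flavour of the kind of reuse such frameworks derive automatically: when only the last two time steps of a stencil are live, a time-indexed array can be contracted to two buffers via a modulo mapping. This hand-written example contrasts the two storage schemes:

```python
def smooth_full(x, steps):
    """3-point smoothing keeping every time step: O(steps * n) storage."""
    n = len(x)
    grid = [list(x)]
    for _ in range(steps):
        prev = grid[-1]
        grid.append([prev[i] if i in (0, n - 1)
                     else (prev[i - 1] + prev[i] + prev[i + 1]) / 3.0
                     for i in range(n)])
    return grid[-1]

def smooth_contracted(x, steps):
    """Same computation with two buffers: storage contracted via t mod 2."""
    n = len(x)
    bufs = [list(x), list(x)]
    for t in range(steps):
        src, dst = bufs[t % 2], bufs[(t + 1) % 2]
        for i in range(n):
            dst[i] = src[i] if i in (0, n - 1) \
                else (src[i - 1] + src[i] + src[i + 1]) / 3.0
    return bufs[steps % 2]
```

The contracted version performs the identical floating-point operations in the identical order, so the results match bit for bit while the storage drops from (steps+1)·n cells to 2·n.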

Harish, D
Level sets are extensively used for the visualization of scalar fields. The Reeb graph of a scalar function tracks the evolution of the topology of its level sets; it is obtained by mapping each connected component of a level set to a point. The Reeb graph and its loop-free version, called the contour tree, serve as an effective user interface for selecting meaningful level sets and for designing transfer functions for volume rendering. The Reeb graph also finds several other applications in the field of scientific visualization.
In this thesis, we focus on designing algorithms for efficiently computing the Reeb graph of...
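One half of the standard contour-tree construction, the join tree of the sublevel sets, can be computed with a union-find sweep over vertices in increasing function value. This is a minimal sketch on a graph with distinct vertex values, not one of the algorithms developed in the thesis:

```python
def sublevel_events(values, edges):
    """Sweep vertices by increasing value, tracking sublevel-set components.

    Emits ("minimum", v) when v starts a new component and ("join", v)
    when v merges two or more components. Assumes distinct values.
    """
    n = len(values)
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    events = []
    for v in sorted(range(n), key=lambda w: values[w]):
        roots = {find(u) for u in adj[v] if values[u] < values[v]}
        if not roots:
            events.append(("minimum", v))
        elif len(roots) > 1:
            events.append(("join", v))
        for r in roots:
            parent[r] = v  # attach merged components to the sweep front
    return events
```

Running the same sweep on -f gives the split tree; combining the two trees yields the contour tree, and handling loops in the level-set topology is what makes the general Reeb graph harder.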

Kommaraju, Ananda Varadhan
Green computing techniques aim to reduce the power footprint of modern embedded devices, with particular emphasis on processors, the power hotspots of these devices. In this thesis, we propose compiler-driven and profile-driven optimizations that reduce power consumption in a modern embedded processor. We show that these optimizations reduce power consumption in functional units and memory subsystems with very low performance loss. We present three new techniques to reduce power consumption in processors, namely, transition-aware scheduling, leakage reduction in data caches using criticality analysis, and dynamic power reduction in data caches using locality analysis of data regions.
A novel...

Saneem Ahmed, C G
The problem of feature selection is critical in several areas of machine learning and data analysis, such as cancer classification using gene expression data and text categorization. In this work, we consider feature selection for supervised learning problems, where one wishes to select a small set of features that facilitate learning a good prediction model in the reduced feature space. Our interest is primarily in filter methods, which select features independently of the learning algorithm to be used and are generally faster than other types of feature selection algorithms. Many common filter methods for feature...
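A canonical filter of this kind scores each feature against the labels, here with Pearson correlation, and keeps the top k, entirely independently of the downstream learner. This is an illustrative baseline, not a method from the thesis:

```python
import math

def correlation_filter(X, y, k):
    """Select the k features most correlated (in absolute value) with y.

    X: list of samples, each a list of feature values; y: labels.
    Returns the selected feature indices in ascending order.
    """
    n, d = len(X), len(X[0])

    def abs_pearson(j):
        col = [row[j] for row in X]
        mx, my = sum(col) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(col, y))
        sx = math.sqrt(sum((a - mx) ** 2 for a in col))
        sy = math.sqrt(sum((b - my) ** 2 for b in y))
        return abs(cov / (sx * sy)) if sx and sy else 0.0

    ranked = sorted(range(d), key=abs_pearson, reverse=True)
    return sorted(ranked[:k])
```

Such univariate scoring is cheap but blind to feature interactions and redundancy, which is precisely the gap that more refined filter criteria aim to close.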

Bharadwaj, Subramanya B V
In this thesis we consider extensions and generalizations of some classical problems in Combinatorial Geometry. Our work is an offshoot of four such classical problems, in which a fundamental assumption is that the underlying point set is R2. Two fundamental themes entwining the problems considered in this thesis are: “What happens if we assume that the underlying point set is finite?” and “What happens if we assume that the underlying point set has a special structure?”. Let P ⊂ R2 be a finite set of points in general position. It is reasonable to expect that if P is large then certain ‘patterns’...

Ashok, Pradeesha
A range space (P, S) consists of a set P of n elements and a collection S = {S1,...,Sm} of subsets of P, referred to as ranges. A hitting set for this range space is a subset H of P such that every Si in S contains at least one element of H. The hitting set problem has been studied for many geometric range spaces, where P is a set of n points in Rd and the ranges are defined by the intersection of geometric objects with P. The algorithmic question of finding the minimum-sized hitting set for...
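The classical greedy algorithm, which repeatedly picks the point contained in the most not-yet-hit ranges, gives an O(log m) approximation for general range spaces and is a useful baseline against which the geometric variants are measured:

```python
def greedy_hitting_set(points, ranges):
    """Greedy O(log m)-approximate hitting set.

    points: iterable of candidate elements; ranges: list of sets over points.
    Repeatedly adds the point hitting the most remaining ranges.
    """
    points = list(points)
    remaining = [set(r) for r in ranges]
    hitting = []
    while remaining:
        best = max(points, key=lambda p: sum(p in r for r in remaining))
        if not any(best in r for r in remaining):
            break  # some range contains no candidate point; cannot be hit
        hitting.append(best)
        remaining = [r for r in remaining if best not in r]
    return hitting
```

For geometric ranges (half-planes, disks, rectangles) the bounded VC dimension often allows much better approximation factors than this generic log m bound.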

Chandan, G
Scientific applications that operate on large data sets require a huge amount of computation power and memory. These applications are typically run on High Performance Computing (HPC) systems consisting of multiple compute nodes connected over a network interconnect such as InfiniBand. Each compute node has its own memory and does not share an address space with other nodes. A significant amount of work has been done in the past two decades on parallelizing for distributed-memory architectures, a majority of it in developing compiler technologies such as High Performance Fortran (HPF) and partitioned global address space (PGAS) languages. However, several...