My research interests are broadly in information theory, communications, statistical inference and learning. A list of my publications can be found here.


Here is a sample of my current work:

Communication and Compression via Sparse Linear Regression.

Codes based on high-dimensional linear regression were recently introduced for communication over channels with additive Gaussian noise. We have developed a low-complexity, capacity-achieving decoder for these codes based on "approximate message passing" (AMP).
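
As background, here is a minimal sketch of how a sparse regression codeword is formed: the message picks one column of a Gaussian design matrix per section, and the codeword is the corresponding sum of columns. The block length, section sizes, and flat power allocation below are illustrative choices, not the parameters from the papers.

    import numpy as np

    # Sketch of sparse regression (SPARC) encoding over an AWGN channel.
    # The codebook is an n x (M*L) matrix A with i.i.d. N(0, 1/n) entries;
    # beta has exactly one non-zero entry in each of its L sections of length M.
    n, M, L = 256, 32, 16                     # block length, section size, sections
    rng = np.random.default_rng(0)
    A = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, M * L))

    P = 1.0                                   # average codeword power
    message = rng.integers(0, M, size=L)      # one symbol per section
    beta = np.zeros(M * L)
    beta[np.arange(L) * M + message] = np.sqrt(n * P / L)  # flat power allocation

    codeword = A @ beta                       # rate R = L*log2(M)/n bits/transmission
    received = codeword + rng.normal(size=n)  # unit-variance additive Gaussian noise

The AMP decoder iteratively estimates beta from the received vector; the capacity-achieving versions of the code use a non-flat power allocation across sections.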


  • NEW: ISIT'16 tutorial on Sparse Regression Codes: handout + slides
  • "Capacity-achieving Sparse Superposition Codes with Approximate Message Passing Decoding" [PDF] [Slides from ITA '15]

We have also designed codes for lossy data compression using the sparse regression framework. These codes attain the optimal compression rate for Gaussian sources, i.e., the rate-distortion function, with computationally efficient encoding and decoding algorithms. Furthermore, the source and channel codes constructed above can be combined to yield fast, rate-efficient codes for several canonical models in network information theory.
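
For reference, the benchmark these codes attain is the Gaussian rate-distortion function: for an i.i.d. Gaussian source of variance σ² compressed to mean-squared distortion D,

    R(D) = \frac{1}{2} \log_2 \frac{\sigma^2}{D} \quad \text{bits per sample}, \qquad 0 < D \le \sigma^2.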


  • Slides from talk: "Compression and Communication via Sparse Regression", SPCOM, IISc Bangalore, July 2014.
  • "Lossy Compression via Sparse Linear Regression: Computationally Efficient Encoding and Decoding", IEEE Transactions on Information Theory, vol. 60, no. 6, pp. 3265-3278, June 2014. [PDF] [Slides from ITA '13]
  • "Lossy Compression via Sparse Linear Regression: Performance under Minimum-distance Encoding", IEEE Transactions on Information Theory, vol. 60, no. 6, pp. 3254-3264, June 2014. [PDF] [Slides from ISIT '12]
  • "Sparse Regression Codes for Multi-terminal Source and Channel Coding", Allerton 2012. [PDF] [Slides]


Analysis of Approximate Message Passing.

The work on decoding sparse regression codes has also led to non-asymptotic results on the performance of AMP algorithms in the more general setting of high-dimensional regression.
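
For context, here is a minimal sketch of the AMP iteration for the linear model y = Aβ + noise with a soft-thresholding denoiser. The threshold rule and problem sizes are illustrative assumptions; the setting analyzed in the paper is more general.

    import numpy as np

    def soft_threshold(x, t):
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    rng = np.random.default_rng(1)
    n, N, k = 250, 500, 25                    # measurements, dimension, sparsity
    A = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, N))
    beta = np.zeros(N)
    beta[rng.choice(N, k, replace=False)] = rng.normal(0.0, 1.0, k)
    y = A @ beta + 0.01 * rng.normal(size=n)

    x, z = np.zeros(N), y.copy()
    for _ in range(30):
        tau = np.sqrt(z @ z / n)              # estimate of the effective noise level
        x_new = soft_threshold(x + A.T @ z, tau)
        # The last term below is the Onsager correction, which keeps the
        # effective noise approximately Gaussian at every iteration and is
        # what distinguishes AMP from plain iterative thresholding.
        z = y - A @ x_new + (z / n) * np.count_nonzero(x_new)
        x = x_new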

  • "Finite Sample Analysis of Approximate Message Passing" [PDF]


Shrinkage Estimation in High Dimensions.

Consider the problem of estimating a high-dimensional vector of parameters θ from a single noisy observation. Shrinkage estimators, which shrink the data towards a target vector or subspace, are known to dominate the simple maximum-likelihood (ML) estimator for this problem. However, the gains over ML are substantial only when θ happens to lie close to the target subspace. This raises the question: how do we design estimators that give significant risk reduction over ML for a wide range of θ?
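
For intuition, the classical James-Stein estimator (which shrinks towards the origin) makes this trade-off explicit: with y ~ N(θ, σ²I_p) and p ≥ 3,

    \hat{\theta}_{JS} = \left(1 - \frac{(p-2)\sigma^2}{\|y\|^2}\right) y,
    \qquad
    \mathbb{E}\,\|\hat{\theta}_{JS} - \theta\|^2 = p\sigma^2 - (p-2)^2 \sigma^4\, \mathbb{E}\!\left[\frac{1}{\|y\|^2}\right].

The second term, the savings over the ML risk pσ², is substantial only when ‖θ‖ is small, i.e., when θ lies near the shrinkage target.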

In this work, we infer the clustering structure of θ from the data and use it to design target subspaces tailored to θ. As the dimension grows, the resulting shrinkage estimators give substantial risk reduction over ML for a wide range of θ.
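
As a crude illustration of shrinking towards a data-dependent target, the sketch below applies a positive-part James-Stein estimator that shrinks towards the grand mean. The function name, noise level, and two-cluster θ are hypothetical; the estimators in the paper construct richer, cluster-based targets.

    import numpy as np

    def js_towards_mean(y, sigma2):
        # Positive-part James-Stein shrinkage towards the grand-mean line.
        p = y.size
        target = np.full(p, y.mean())             # projection onto the all-ones direction
        resid = y - target
        shrink = 1.0 - (p - 3) * sigma2 / (resid @ resid)
        return target + max(shrink, 0.0) * resid  # positive-part shrinkage factor

    rng = np.random.default_rng(2)
    theta = np.concatenate([np.full(250, 2.0), np.full(250, -2.0)])  # two clusters
    y = theta + rng.normal(size=theta.size)
    est = js_towards_mean(y, sigma2=1.0)

On this two-cluster theta the grand-mean target gives little gain over ML, which is precisely the regime that motivates targets adapted to the clustering structure of theta.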


  • "Cluster-seeking James-Stein Estimators" [PDF] [Slides from talk at ITA '16]


Please click here for a description of other projects I have worked on, including coding for insertion and deletion models, feedback in multi-user channels, and rewritable storage channels.