My research interests are broadly in information theory, communications, statistical learning and inference. A list of my publications can be found here. Some topics I have recently worked on are:

A description of some past projects can be found here.

Sparse Regression Codes

Sparse regression codes for communication

Efficient codes based on sparse linear regression have recently been proposed for communication over channels with additive Gaussian noise. We have developed a low-complexity, capacity-achieving decoder for these codes based on "approximate message passing" (AMP) techniques.
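To illustrate the AMP machinery, here is a generic AMP iteration for the linear model y = Aβ + w with a soft-threshold denoiser. This is a sketch only: the SPARC decoder in the papers below uses a section-wise Bayes-optimal denoiser, and the function name and parameters here are illustrative.

```python
import numpy as np

def amp_soft_threshold(A, y, theta=0.5, n_iter=25):
    """Generic AMP iteration for y = A @ beta + noise.

    Uses a soft-threshold denoiser for illustration; the SPARC decoder
    in the references uses a section-wise denoiser instead.
    """
    m, n = A.shape
    beta = np.zeros(n)
    z = y.copy()
    for _ in range(n_iter):
        s = beta + A.T @ z                       # effective observation of beta
        beta = np.sign(s) * np.maximum(np.abs(s) - theta, 0.0)
        # Onsager correction: residual plus (fraction of active coords) * old residual
        z = y - A @ beta + (np.count_nonzero(beta) / m) * z
    return beta
```

The key difference from naive iterative thresholding is the Onsager correction term added to the residual, which keeps the effective observation s approximately Gaussian around the true β across iterations.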

  • Survey article in Information Theory Society newsletter [PDF]
  • ISIT'16 tutorial on Sparse Regression Codes: handout + slides
  • "Capacity-achieving Sparse Superposition Codes with Approximate Message Passing Decoding", IEEE Transactions on Information Theory, vol. 63, no. 3, pp. 1476-1500, March 2017. [PDF]
  • C. Rush and R. Venkataramanan, "The error probability of sparse superposition codes with Approximate Message Passing decoding" [PDF]
  • "Techniques for improving the finite length performance of sparse superposition codes", IEEE Transactions on Communications, vol. 66, no. 3, pp. 905-917, March 2018. [PDF]

Sparse regression codes for compression

Using the sparse regression framework, we have also designed efficient algorithms for lossy data compression which attain the optimal compression rate (the rate-distortion function) for Gaussian sources. Furthermore, the source and channel codes constructed above can be combined to obtain rate-optimal codes for several canonical models in network information theory.

  • "Lossy Compression via Sparse Linear Regression: Computationally Efficient Encoding and Decoding", IEEE Transactions on Information Theory, vol. 60, no. 6, pp. 3265-3278, June 2014. [PDF] [Slides from ITA '13]
  • "Lossy Compression via Sparse Linear Regression: Performance under Minimum-distance Encoding", IEEE Transactions on Information Theory, vol. 60, no. 6, pp. 3254-3264, June 2014. [PDF] [Slides from ISIT '12]
  • "The Rate-Distortion Function and Excess-Distortion Exponent of Sparse Regression Codes with Optimal Encoding", IEEE Transactions on Information Theory, vol. 63, no. 8, pp. 5228-5243, August 2017. [PDF] [Slides from ISIT '14]
  • "Sparse Regression Codes for Multi-terminal Source and Channel Coding", Allerton 2012. [PDF] [Slides]

High-dimensional statistical estimation

Approximate Message Passing (AMP) algorithms

Recent work in this direction includes AMP algorithms for low-rank matrix estimation and non-asymptotic guarantees on the performance of AMP.

  • "Estimation of Low-Rank Matrices via Approximate Message Passing" [PDF]
  • "Finite Sample Analysis of Approximate Message Passing Algorithms", IEEE Transactions on Information Theory, 2018. [PDF]

Shrinkage estimation in high dimensions

Consider the problem of estimating a high-dimensional vector of parameters θ from a noisy observation. Shrinkage estimators (that shrink the data towards a target vector or subspace) are known to dominate the simple maximum-likelihood (ML) estimator for this problem. However, the gains over ML are substantial only when θ happens to lie close to the target subspace. This leads to the question: how do we design estimators that give significant risk reduction over ML for a wide range of θ?

In this work, we infer the clustering structure of θ from the data and use it to design target subspaces tailored to θ. As the dimension grows, the resulting shrinkage estimators give substantial risk reduction over ML for a wide range of θ. We have also used these ideas to design effective estimators for the setting where θ is known to be sparse but no other prior information is available.
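As background, the classical positive-part James-Stein estimator shrinking toward a fixed target vector (the simplest case of the target subspaces discussed above) can be sketched as follows; the function name and defaults are illustrative:

```python
import numpy as np

def james_stein(y, target=None, sigma2=1.0):
    """Positive-part James-Stein estimate of theta from y ~ N(theta, sigma2*I).

    Shrinks y toward a fixed target vector (the origin by default); dominates
    the ML estimator (y itself) in dimensions >= 3.
    """
    p = y.size
    t = np.zeros(p) if target is None else target
    r = y - t
    shrink = max(0.0, 1.0 - (p - 2) * sigma2 / float(r @ r))
    return t + shrink * r
```

When θ lies near the target, the shrinkage factor is close to zero and the risk is far below the ML risk of pσ²; far from the target, the factor approaches one and the estimator falls back to ML. That dependence on the target is exactly the limitation the cluster-seeking constructions above address.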

  • "Cluster-seeking James-Stein Estimators", IEEE Transactions on Information Theory, vol. 64, no. 2, pp. 853-874, February 2018. [PDF] [Slides from talk at ITA '16]
  • "Empirical Bayes estimators for high-dimensional sparse vectors" [PDF]

Converse bounds for high-dimensional estimation

In this work, we obtain new lower bounds on the minimax risk of inference problems such as density estimation, active learning, and compressed sensing. These bounds give tighter "strong converse" results, in contrast to the weak converses provided by Fano's inequality.

  • "A strong converse bound for multiple hypothesis testing, with applications to high-dimensional estimation", Electronic Journal of Statistics, Vol. 12, No. 1, pp. 1126-1149, 2018. [PDF] [Slides from talk at IZS '18]

Coding for deletion and insertion channels

Low-complexity codes for deletions and insertions

The problem of synchronization from deletions and insertions is important in several applications, such as file sharing, online editing, and distributed storage. We have designed low-complexity rate-efficient codes for various edit models with deletions and insertions.
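For context, the classical Varshamov-Tenengolts (VT) construction is the standard starting point for single-deletion codes; the multilayer and segmented codes below build richer schemes on related ideas. A VT code corrects one deletion using a simple weighted checksum. A sketch, with illustrative function names:

```python
def is_vt_codeword(x, a=0):
    """Membership in the VT(a) code: sum of i * x_i (1-indexed) = a mod (n+1)."""
    n = len(x)
    return sum((i + 1) * b for i, b in enumerate(x)) % (n + 1) == a

def vt_decode(y, n, a=0):
    """Recover a length-n VT(a) codeword from y, which has one bit deleted."""
    w = sum(y)                                             # weight of received word
    s = (a - sum((i + 1) * b for i, b in enumerate(y))) % (n + 1)
    if s <= w:
        # a 0 was deleted: reinsert it with exactly s ones to its right
        idx, ones = len(y), 0
        while ones < s:
            idx -= 1
            ones += y[idx]
        return y[:idx] + [0] + y[idx:]
    # a 1 was deleted: reinsert it with exactly s - w - 1 zeros to its left
    idx, zeros = 0, 0
    while zeros < s - w - 1:
        zeros += 1 - y[idx]
        idx += 1
    return y[:idx] + [1] + y[idx:]
```

The checksum deficiency s pins down both the value of the deleted bit (compare s to the received weight w) and enough positional information to reinsert it, since inserting within a run of identical bits yields the same word.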

  • "Multilayer codes for synchronization from deletions", ITW 2017. [PDF]
  • "Coding for segmented edit channels", IEEE Transactions on Information Theory, vol. 64, no. 4, pp. 3086-3098, April 2018. [PDF]
  • "Low-Complexity Interactive Algorithms for Synchronization from Deletions, Insertions, and Substitutions", IEEE Transactions on Information Theory, vol. 61, no. 10, pp. 5670-5689, October 2015. [PDF]
  • Slides from talk at the Banff Workshop on Interactive Information Theory, Jan. 2012.

Capacity bounds for deletion and insertion channels

In this work, we obtained new achievable rates (capacity lower bounds) for deletion channels, insertion channels, and insertion-deletion channels.

  • "Achievable Rates for Channels with Deletions and Insertions", IEEE Transactions on Information Theory, vol. 59, no. 11, pp. 6990-7013, November 2013. [PDF]
  • Slides from talk at ITW, Seville, Sep. 2013.