This webpage describes the research outputs from the project Sparse Regression Codes funded by a Marie Curie Career Integration Grant (Grant Agreement Number 631489 under FP7-PEOPLE-2013-CIG).

Sparse regression codes for communication

Efficient codes based on sparse linear regression have recently been proposed for communication over channels with additive Gaussian noise. We have developed a low-complexity, capacity-achieving decoder for these codes based on approximate message passing (AMP) techniques.

  • Survey article in Information Theory Society newsletter [PDF]
  • ISIT'16 tutorial on Sparse Regression Codes: handout + slides
  • "Capacity-achieving Sparse Superposition Codes with Approximate Message Passing Decoding", IEEE Transactions on Information Theory, vol. 63, no. 3, pp. 1476-1500, March 2017. [PDF]
  • C. Rush and R. Venkataramanan, "The error probability of sparse superposition codes with Approximate Message Passing decoding" [PDF]
  • "Techniques for improving the finite length performance of sparse superposition codes", IEEE Transactions on Communications, vol. 66, no. 3, pp. 905-917, March 2018. [PDF]
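
As an illustration of the decoding scheme described above, here is a minimal numpy sketch of AMP decoding for a sparse regression code with a flat power allocation. All parameters (section sizes, rate, SNR) are toy choices for illustration, not the values studied in the papers.

```python
import numpy as np

# Minimal sketch of AMP decoding for a sparse regression code (SPARC).
# Parameters below are illustrative toy choices, not from the papers.

rng = np.random.default_rng(0)
L, M = 32, 64            # L sections with M columns each
n = 1280                 # codeword length; rate R = L*log2(M)/n = 0.15 bits
P, sigma2 = 1.0, 0.25    # average codeword power and noise variance

# Design matrix with i.i.d. N(0, 1/n) entries
A = rng.normal(0.0, 1.0/np.sqrt(n), size=(n, L*M))

# Message: one nonzero entry per section (flat power allocation)
idx = rng.integers(0, M, size=L) + np.arange(L)*M
beta = np.zeros(L*M)
beta[idx] = np.sqrt(n*P/L)

y = A @ beta + rng.normal(0.0, np.sqrt(sigma2), size=n)

# AMP decoding with the section-wise posterior-mean ("softmax") denoiser
c = np.sqrt(n*P/L)
beta_t = np.zeros(L*M)
z = y.copy()
tau2 = sigma2 + P        # state-evolution variance at iteration 0
for _ in range(25):
    s = (beta_t + A.T @ z).reshape(L, M) * (c/tau2)
    s -= s.max(axis=1, keepdims=True)            # for numerical stability
    w = np.exp(s)
    w /= w.sum(axis=1, keepdims=True)
    beta_new = (c*w).ravel()                     # posterior-mean estimate
    # Residual with the Onsager correction term
    z = y - A @ beta_new + (z/tau2)*(P - np.sum(beta_new**2)/n)
    tau2 = np.sum(z**2)/n                        # empirical variance estimate
    beta_t = beta_new

# Hard decision: largest entry in each section
est = np.argmax(beta_t.reshape(L, M), axis=1) + np.arange(L)*M
print("fraction of sections decoded correctly:", np.mean(est == idx))
```

At this rate, well below capacity, the estimated noise variance tau2 shrinks towards sigma2 across iterations and the decoder recovers essentially every section.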

Sparse regression codes for compression

Using the sparse regression framework, we have also designed efficient algorithms for lossy data compression which attain the optimal compression rate (the rate-distortion function) for Gaussian sources. Furthermore, the source and channel codes constructed above can be combined to obtain rate-optimal codes for several canonical models in network information theory.

  • "The Rate-Distortion Function and Excess-Distortion Exponent of Sparse Regression Codes with Optimal Encoding", IEEE Transactions on Information Theory, vol. 63, no. 8, pp. 5228-5243, August 2017. [PDF] [Slides]

Approximate Message Passing (AMP) algorithms

Recent work in this direction includes AMP algorithms for low-rank matrix estimation and non-asymptotic guarantees on the performance of AMP.

  • "Finite Sample Analysis of Approximate Message Passing Algorithms", IEEE Transactions on Information Theory, 2018. [PDF]
  • "Estimation of Low-Rank Matrices via Approximate Message Passing" [PDF]
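
As a toy illustration of AMP for low-rank estimation, the sketch below runs an AMP iteration with a tanh denoiser on a rank-one spiked Wigner model with a ±1 signal. This is a standard test setting; the model, denoiser, and parameters here are illustrative choices, not taken from the papers.

```python
import numpy as np

# Toy rank-one spiked Wigner model: Y = (lam/n) x x^T + W/sqrt(n), with a
# +/-1 signal x. tanh(lam*u) is the posterior-mean denoiser for this prior
# under the state-evolution fixed-point relation. Illustrative sketch only.

rng = np.random.default_rng(2)
n, lam = 1500, 2.0
x = rng.choice([-1.0, 1.0], size=n)
W = rng.normal(size=(n, n))
W = (W + W.T) / np.sqrt(2)                    # symmetric (GOE-like) noise
Y = (lam/n) * np.outer(x, x) + W/np.sqrt(n)

# Spectral initialization: top eigenvector, rescaled to norm sqrt(n)
u = np.linalg.eigh(Y)[1][:, -1] * np.sqrt(n)

f_prev = np.zeros(n)
for _ in range(15):
    f = np.tanh(lam * u)                      # componentwise denoiser
    b = lam * np.mean(1.0 - f**2)             # Onsager coefficient (1/n) sum f'
    u, f_prev = Y @ f - b * f_prev, f

overlap = abs(np.dot(np.sign(u), x)) / n      # agreement with +/- x
print("overlap with the planted signal:", overlap)
```

Above the spectral threshold (here lam = 2 > 1), the AMP iterates improve on the spectral initialization and align closely with the planted signal, up to a global sign.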

Shrinkage estimation in high dimensions

Consider the problem of estimating a high-dimensional vector of parameters θ from a noisy observation. Shrinkage estimators (that shrink the data towards a target vector or subspace) are known to dominate the simple maximum-likelihood (ML) estimator for this problem. However, the gains over ML are substantial only when θ happens to lie close to the target subspace. This leads to the question: how do we design estimators that give significant risk reduction over ML for a wide range of θ?

In this work, we infer the clustering structure of θ from the data, and use it to design target subspaces tailored to θ. As the dimension grows, these shrinkage estimators give substantial risk reduction over ML for a wide range of θ. We have also used these ideas to design good estimators for the setting where it is known that θ is sparse, but no other prior information is available.

  • "Cluster-seeking James-Stein Estimators", IEEE Transactions on Information Theory, vol. 64, no. 2, pp. 853-874, February 2018. [PDF] [Slides from talk at ITA '16]
  • "Empirical Bayes estimators for high-dimensional sparse vectors" [PDF]
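
The risk reduction of shrinkage over ML can be seen in a few lines of numpy. This sketch compares the ML estimator (the observation itself) with the classical positive-part James-Stein estimator shrinking towards the origin; a fixed target, not the data-driven subspaces constructed in the papers above. The dimension, noise level, and the distribution of θ are arbitrary illustrative choices.

```python
import numpy as np

# Monte Carlo comparison of the ML estimator with the positive-part
# James-Stein estimator shrinking towards the origin. All parameters
# below are arbitrary illustrative choices.

rng = np.random.default_rng(1)
p, sigma2, trials = 200, 1.0, 500
theta = rng.normal(0.0, 0.5, size=p)     # true parameter, close to the origin

mse_ml = mse_js = 0.0
for _ in range(trials):
    y = theta + rng.normal(0.0, np.sqrt(sigma2), size=p)   # observation
    # positive-part James-Stein shrinkage factor
    shrink = max(0.0, 1.0 - (p - 2)*sigma2/np.dot(y, y))
    mse_ml += np.sum((y - theta)**2)/p
    mse_js += np.sum((shrink*y - theta)**2)/p
mse_ml /= trials
mse_js /= trials
print(f"ML risk: {mse_ml:.3f}   James-Stein risk: {mse_js:.3f}")
```

Because θ here lies close to the shrinkage target (the origin), the James-Stein risk is far below the ML risk of roughly sigma2; moving θ far from the target shrinks this gap, which is precisely the issue the cluster-seeking estimators address.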

Converse bounds for high-dimensional estimation

In this work, we obtain new lower bounds on the minimax risk of inference problems such as density estimation, active learning, and compressed sensing. In contrast to the weak converses obtained from Fano's inequality, these bounds yield sharper "strong converse" results.

  • "A strong converse bound for multiple hypothesis testing, with applications to high-dimensional estimation", Electronic Journal of Statistics, Vol. 12, No. 1, pp. 1126-1149, 2018. [PDF] [Slides from talk at IZS '18]