Traditionally, signal processing is performed in two main stages: acquisition, where the signal is sampled according to the Nyquist-Shannon theorem, and compression, where the acquired signal is represented by a small number of samples by exploiting the structure and redundancy in the data. Compressive sensing is an emerging paradigm showing that these two stages can be combined into one: signals can be acquired using fewer samples than the Nyquist rate requires.
According to the theory of compressive sensing, a signal can be reconstructed from a much smaller number of linear measurements than the number of signal coefficients, provided that the signal has a sparse representation (or approximation) in some domain. The compressive sensing acquisition system can be modeled as

y = Aw + n,
where y is the M x 1 observed signal, w is the N x 1 vector of coefficients of the original sparse signal, and n is additive noise. The M x N matrix A represents the acquisition process; it is generally chosen to be a random matrix, that is, each column of A is randomly sampled from a probability distribution. In compressive sensing, M is much smaller than N, so this system cannot be solved directly, and reconstruction methods are applied to recover w.
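The acquisition model above can be simulated in a few lines of NumPy. This is an illustrative sketch, not the authors' code; the sizes N, M, K, the Gaussian choice of A, and the noise level are example assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Example sizes (assumptions): M measurements of an N-dimensional
# signal with only K nonzero coefficients, M << N.
N, M, K = 256, 64, 8

# Sparse coefficient vector w: K nonzero entries at random positions.
w = np.zeros(N)
support = rng.choice(N, size=K, replace=False)
w[support] = rng.standard_normal(K)

# Random Gaussian measurement matrix A (a common choice in CS),
# scaled so its columns have roughly unit norm.
A = rng.standard_normal((M, N)) / np.sqrt(M)

# Noisy linear measurements: y = A w + n.
noise_std = 0.01
y = A @ w + noise_std * rng.standard_normal(M)

print(y.shape)
```

Note that y has only M = 64 entries, far fewer than the N = 256 coefficients of w, which is why reconstruction must exploit the sparsity of w.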
In our work, we formulate the CS reconstruction problem from a Bayesian perspective. We have developed several reconstruction methods within a Bayesian framework, exploring the use of Laplace and non-convex sparsity priors. These signal priors are well suited to compressive sensing reconstruction because they strongly enforce sparsity in the unknown signal. We demonstrate with experimental results that the proposed algorithms provide performance competitive with state-of-the-art methods while requiring no user intervention.
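To make the connection between sparsity priors and reconstruction concrete: under a Laplace prior on w and Gaussian noise, the MAP estimate reduces to l1-regularized least squares, which a standard iterative shrinkage-thresholding algorithm (ISTA) can solve. This sketch is a generic textbook baseline, not the Bayesian algorithms developed in this work; the regularization weight and iteration count are assumptions.

```python
import numpy as np

def soft_threshold(x, t):
    """Soft-thresholding operator, the proximal map of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam, n_iter=1000):
    """Solve min_w 0.5 * ||y - A w||^2 + lam * ||w||_1 by ISTA.

    This objective is the negative log-posterior under a Laplace
    prior on w and additive Gaussian noise.
    """
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the gradient
    w = np.zeros(A.shape[1])
    for _ in range(n_iter):
        w = soft_threshold(w + A.T @ (y - A @ w) / L, lam / L)
    return w

# Synthetic demo: recover an 8-sparse signal from 64 measurements.
rng = np.random.default_rng(1)
N, M, K = 256, 64, 8
w_true = np.zeros(N)
w_true[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
A = rng.standard_normal((M, N)) / np.sqrt(M)
y = A @ w_true  # noiseless measurements for simplicity

w_hat = ista(A, y, lam=0.01)
print(np.linalg.norm(w_hat - w_true) / np.linalg.norm(w_true))
```

The printed relative error is small even though the system y = Aw is heavily underdetermined, illustrating why sparsity-enforcing priors make CS reconstruction possible.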
Please check the corresponding project pages for more details about the algorithms.