Hyperparameter Selection for Group-Sparse Regression: A Probabilistic Approach

Research output: Contribution to journal › Article › peer-review



This work analyzes the effects of different choices of the hyper-, or regularization, parameter on support recovery in LASSO-like sparse and group-sparse regression problems. The hyperparameter implicitly selects the model order of the solution and is typically set using cross-validation (CV). This may be computationally prohibitive for large-scale problems, and CV also often overestimates the model order, as it optimizes for prediction error rather than support recovery. In this work, we propose a probabilistic approach to selecting the hyperparameter by quantifying the type I error (false positive rate) using extreme value analysis. From Monte Carlo simulations, one may draw inference on the upper tail of the distribution of the spurious parameter estimates, and the regularization level may then be selected for a specified false positive rate. By solving the square-root group-LASSO problem, the choice of hyperparameter becomes independent of the noise variance. Furthermore, the effects on the false positive rate caused by collinearity in the dictionary are discussed, along with ways of circumventing them. The proposed method is compared to other hyperparameter-selection methods in terms of support recovery, false positive rate, false negative rate, and computational complexity. Simulated data illustrate how the proposed method outperforms CV and comparable methods in both computational complexity and support recovery.
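The selection procedure outlined in the abstract — characterize the upper tail of the spurious (noise-only) parameter estimates by Monte Carlo simulation and extreme value analysis, then pick the regularization level that meets a target false positive rate — can be illustrated with a minimal sketch. This is not the paper's exact algorithm: the Gaussian dictionary, the use of the simple correlations |Xᵀe| as spurious statistics, the Gumbel fit, and all variable names are illustrative assumptions.

```python
import numpy as np
from scipy.stats import gumbel_r

rng = np.random.default_rng(0)
n, p, n_mc = 100, 50, 500          # samples, dictionary atoms, Monte Carlo runs

# Illustrative dictionary with unit-norm columns (assumption, not from the paper).
X = rng.standard_normal((n, p))
X /= np.linalg.norm(X, axis=0)

# Monte Carlo under the null hypothesis: pure noise, no true support.
# The spurious statistic is the largest absolute correlation between the
# dictionary and the (normalized) noise; normalizing the noise removes the
# dependence on the noise variance, mirroring the scale invariance the
# abstract attributes to the square-root group-LASSO formulation.
maxima = np.empty(n_mc)
for k in range(n_mc):
    e = rng.standard_normal(n)
    e /= np.linalg.norm(e)
    maxima[k] = np.max(np.abs(X.T @ e))

# Extreme value analysis: fit a Gumbel law to the Monte Carlo maxima and
# read off the regularization level at the desired type I error rate.
loc, scale = gumbel_r.fit(maxima)
alpha = 0.05                        # target false positive rate
lam = gumbel_r.ppf(1 - alpha, loc=loc, scale=scale)
```

Any dictionary atom whose correlation with the residual exceeds `lam` would then be admitted to the support with (approximately) the specified false positive rate; for grouped variables, the maxima would instead be taken over group-wise norms.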
Original language: English
Pages (from-to): 107-118
Journal: Signal Processing
Publication status: Published - 2018

Subject classification (UKÄ)

  • Signal Processing
  • Probability Theory and Statistics

