Project Details
Description
Matrix factorization is an integral component of many dimensionality reduction methods. The most common example is Principal Component Analysis (PCA), where a low-dimensional distribution that explains the variation in high-dimensional data is computed through a low-rank approximation of the covariance matrix. PCA is a core statistical tool with applications in many branches of science where data analysis is involved.

In the proposed project we will study optimization problems where matrix factorization is part of the objective. While computing a low-rank approximation of any given matrix is straightforward, it is still largely an open question how to perform inference with these models due to the complexity of the objective function. Early approaches to matrix recovery used convex relaxations, but these turned out to be too weak to give accurate results in many applications. We are interested in developing strong relaxations that can be combined with other model-based and learned priors to enable accurate matrix recovery with efficient algorithms. Recent results on non-convex formulations have demonstrated a potential for significant improvements while still maintaining optimality guarantees. Our overall goal is to understand when this is possible, by studying the optimization landscape, and to design reliable and efficient algorithms that converge to the right solution independently of initialization.
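To make the connection between PCA and low-rank approximation concrete, here is a minimal sketch using NumPy: the best rank-k approximation of a (centered) data matrix is obtained by truncating its SVD, and the PCA scores are the projections onto the top-k right singular vectors. The synthetic data and the choice k = 3 are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic data: 200 samples in R^10 lying near a rank-3 subspace (assumption).
X = rng.standard_normal((200, 3)) @ rng.standard_normal((3, 10))
X += 0.01 * rng.standard_normal(X.shape)

Xc = X - X.mean(axis=0)            # center the data
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

k = 3
X_k = (U[:, :k] * s[:k]) @ Vt[:k]  # best rank-k approximation (Eckart-Young)
scores = Xc @ Vt[:k].T             # PCA scores: projection onto top-k components

# Relative approximation error is small because the data is nearly rank 3.
err = np.linalg.norm(Xc - X_k) / np.linalg.norm(Xc)
print(err)
```

This illustrates why the factorization step itself is easy: the truncated SVD gives the globally optimal rank-k approximation in the Frobenius norm. The difficulty the project targets arises when the low-rank term is only one part of a more complex objective.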
| Status | Active |
| --- | --- |
| Effective start/end date | 2024/01/01 → 2028/12/31 |
Funding
- Swedish Research Council