Program for the working group on low-rank approximation

within the Dolomites Research Week on Approximation, Alba di Canazei, Italy, September 8–13, 2013.

Monday

  • 11:00-12:00: Mario Sznaier, Convex relaxations of semi-algebraic rank minimization problems arising in systems identification and machine learning
    Abstract: This talk discusses convex relaxations of semi-algebraic rank minimization problems arising in three closely related fields: identification of switched systems, manifold embedding of dynamic data and semi-supervised learning. As we will show in the talk, all of these problems are equivalent to minimizing the rank of a matrix that depends polynomially on the data. While in principle this is a hard non-convex problem, the use of moment-based ideas allows it to be reduced to a structured principal component decomposition: decomposing a matrix as a sum of structured low-rank and sparse components. In turn, this problem can be solved very efficiently using Augmented Lagrangian methods. In particular, we will discuss the use of the Alternating Direction Method of Multipliers (ADMM), which in this case requires performing only a sequence of singular value decomposition and thresholding operations (a small illustrative sketch of this thresholding step follows Monday's listing). Finally, we will cover some recent results on Atomic Norm minimization that provide a computationally attractive alternative to Interior Point and Augmented Lagrangian methods. In the second portion of the talk we will present a practical application of these results to problems that involve extracting actionable information from very large data streams, including genomic and video data segmentation, contextually abnormal activity detection and tracking in crowded scenarios.

  • 12:00-12:30: Sándor Kolumbán, Decreasing complexity of SDP relaxations for polynomial optimization by forcing low-rank constraints
    Abstract: The size of SDP relaxations corresponding to polynomial optimization problems increases combinatorially with the degree of the relaxation. Forcing low-rank constraints on the moment matrices involved in the LMI constraints opens the possibility of reducing the size of the SDP involved. This may lead to better computational performance compared with other relaxations.

  • 17:00-17:45: Ivan Markovsky, Low-rank approximation problems in system identification
    Abstract: The notion of linear time-invariant model complexity is quantified and related to the rank of a mosaic Hankel matrix constructed from a set of trajectories of the system (a toy numerical illustration of this rank property is given after Monday's listing). This fact makes mosaic Hankel structured low-rank approximation (SLRA) a powerful tool for linear time-invariant system identification. It has been shown that errors-in-variables and output error identification problems with known noise covariance structure can be solved by generic SLRA methods. The SLRA approach has attractive numerical properties and can deal with exact and missing data values. Finally, we present work in progress on solving autoregressive moving-average exogenous identification problems by SLRA.

  • 17:45-18:30: Konstantin Usevich, Adjusted least squares estimator for algebraic hypersurface fitting
    Abstract: The problem of algebraic hypersurface fitting is considered in an estimation framework: given a set of noisy data points, estimate the coefficients of a corresponding implicit multivariate polynomial equation. The proposed adjusted least squares estimator accounts for the bias present in the ordinary least squares estimator. An algorithm for computing the correction for an arbitrary support of the polynomial equation is presented and applied to the related problem of subspace clustering.
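
A minimal sketch of the singular value thresholding step referred to in Mario Sznaier's abstract above: the proximal operator of the nuclear norm shrinks the singular values of a matrix, and it is the low-rank building block of an ADMM-based low-rank-plus-sparse decomposition. The function name, sizes and parameters below are illustrative assumptions, not material from the talk.

```python
import numpy as np

def svd_soft_threshold(M, tau):
    """Proximal operator of tau * (nuclear norm): shrink the singular
    values of M by tau and rebuild the matrix."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)      # soft-threshold the singular values
    return U @ np.diag(s_shrunk) @ Vt

# Toy usage: one shrinkage step applied to a noisy low-rank data matrix.
rng = np.random.default_rng(0)
D = rng.standard_normal((50, 8)) @ rng.standard_normal((8, 40))   # rank-8 data
L = svd_soft_threshold(D + 0.01 * rng.standard_normal(D.shape), tau=1.0)
print(np.linalg.matrix_rank(L, tol=1e-6))    # small: the noise directions are shrunk to zero
```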
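
To make the Hankel rank property in Ivan Markovsky's abstract concrete, here is a toy numerical check (not taken from the talk): a trajectory of a second-order autonomous linear time-invariant system gives a Hankel matrix of rank two. The system coefficients and matrix sizes are arbitrary choices for illustration.

```python
import numpy as np
from scipy.linalg import hankel

# Trajectory of a 2nd-order autonomous LTI system:
# y(t) = 1.5*y(t-1) - 0.7*y(t-2)
T = 50
y = np.zeros(T)
y[0], y[1] = 1.0, 0.5
for t in range(2, T):
    y[t] = 1.5 * y[t - 1] - 0.7 * y[t - 2]

# Hankel matrix with 5 rows built from the trajectory: H[i, j] = y[i + j].
H = hankel(y[:5], y[4:])
print(H.shape, np.linalg.matrix_rank(H, tol=1e-8))   # rank 2, equal to the system order
```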

Tuesday

  • 11:00-12:00: Lieven De Lathauwer, Numerical approximation of higher-order tensors: An introduction
    Abstract: We give an overview of basic tensor decompositions and explain their general use. We pay special attention to the use of tensor decompositions for approximation purposes and discuss numerical strategies for tensor approximation, an area in which important progress has recently been made. It has been recognized that the tensor product structure allows very efficient storage and handling of the Jacobian and (approximate) Hessian of the cost function. On the other hand, multilinearity allows global optimization in (scaled) line and plane search. Although there are many possibilities for decomposition symmetry and factor structure, these can be handled conveniently. We demonstrate the algorithms using Tensorlab, our recently published MATLAB toolbox for tensors and tensor computations (a minimal NumPy illustration of a basic truncated higher-order SVD follows Tuesday's listing). (Joint work with Laurent Sorber and Marc Van Barel)

  • 12:00-12:30: Mariya Ishteva, Low multilinear rank approximation of tensors by structured matrix low-rank approximation
    Abstract: We first consider symmetric higher-order tensors and discuss a novel way to perform symmetric low multilinear rank approximation. Instead of representing the approximation as a product of factors with reduced dimensions, we make use of the fact that a rank-deficient matrix has a nontrivial null space (kernel). We show how to reformulate the tensor approximation problem as a structured matrix low-rank approximation problem. In addition, by imposing a constraint on the kernel matrix of the approximation, our approach becomes applicable to general (non-symmetric) tensors. Finally, we deal with affinely structured tensors and obtain a (locally) best low multilinear rank approximation with the same structure.

  • 17:00-17:30: Michael Steinlechner, Low-rank tensor completion by Riemannian optimization
    Abstract: In tensor completion, the goal is to fill in the missing entries of a partially known tensor under a low-rank constraint. We propose a new algorithm that applies Riemannian optimization techniques on the manifold of tensors of fixed multilinear rank. More specifically, a variant of the nonlinear conjugate gradient method is developed. With particular attention paid to an efficient implementation, our algorithm scales linearly in the size of the tensor. Examples with synthetic data demonstrate good recovery even if the vast majority of the entries are unknown. We illustrate the use of the developed algorithm for the recovery of multidimensional images and for the approximation of multivariate functions. This is joint work with D. Kressner and B. Vandereycken.

  • 17:30-18:00: Bamdev Mishra, Manopt: A Matlab toolbox for optimization on manifolds

  • 18:00-18:30: Augustin Lefèvre, Informed source separation: A case study for low-rank approximation problems
    Abstract: Single-channel source separation is a challenging problem in audio signal processing, with potential interest for tasks such as transcription or speech recognition in complex environments. We will first discuss how nonnegative matrix factorization (NMF) can be used to perform single-channel source separation (a minimal sketch of plain NMF follows Tuesday's listing). It is now widely accepted that additional constraints are needed to recover "meaningful" solutions, hence the notion of informed source separation. In this talk, we will present an optimization problem that is close to NMF and allows user-specified constraints to be incorporated in a generic way, thanks to the framework of convex optimization. Experiments on audio signals illustrate our contribution and allow us to compare its benefits relative to NMF.
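
As a concrete companion to Lieven De Lathauwer's introductory talk, the sketch below computes a truncated higher-order SVD, one standard way to obtain a low multilinear rank approximation. It uses plain NumPy rather than Tensorlab, and the tensor sizes and ranks are arbitrary illustrative choices.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: the mode-n fibers become the columns."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def truncated_hosvd(T, ranks):
    """Low multilinear rank approximation via the truncated higher-order SVD."""
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        factors.append(U[:, :r])                      # dominant mode-n subspace
    core = T
    for mode, U in enumerate(factors):                # core = T x_n U_n^T for every mode
        core = np.moveaxis(np.tensordot(U.T, core, axes=(1, mode)), 0, mode)
    return core, factors

# Toy usage: a 20x20x20 tensor of exact multilinear rank (3, 3, 3) is recovered exactly.
rng = np.random.default_rng(1)
T = np.einsum('ir,jr,kr->ijk', *(rng.standard_normal((20, 3)) for _ in range(3)))
core, factors = truncated_hosvd(T, (3, 3, 3))
approx = core
for mode, U in enumerate(factors):                    # rebuild: core x_n U_n for every mode
    approx = np.moveaxis(np.tensordot(U, approx, axes=(1, mode)), 0, mode)
print(np.linalg.norm(T - approx) / np.linalg.norm(T))   # close to zero
```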
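
For readers less familiar with NMF itself (the starting point of Augustin Lefèvre's talk), here is a minimal sketch of the classical Lee-Seung multiplicative updates for the Euclidean cost; the informed, constraint-aware formulation presented in the talk is not reproduced here, and all names, sizes and parameters are illustrative.

```python
import numpy as np

def nmf(V, rank, n_iter=200, eps=1e-10):
    """Nonnegative matrix factorization V ~= W @ H via Lee-Seung
    multiplicative updates for the Euclidean cost."""
    rng = np.random.default_rng(0)
    n, m = V.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update activations
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update dictionary
    return W, H

# Toy usage: a nonnegative 100x200 "spectrogram-like" matrix of rank 5.
rng = np.random.default_rng(2)
V = rng.random((100, 5)) @ rng.random((5, 200))
W, H = nmf(V, rank=5)
print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))   # small reconstruction error
```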

Thursday

  • 16:30-17:15: Marco Signoretto, Tensor estimation in reproducing kernel Hilbert spaces with applications
    Abstract: Recently there has been increasing interest in the cross-fertilization of ideas coming from (convex) optimization, kernel methods and tensor-based techniques. In this talk we discuss the estimation of tensors from data in the unifying framework of reproducing kernel Hilbert spaces (RKHSs). The methodology is based on a multilinear generalization of spectral penalties and relies on a novel representer theorem. When the RKHS is defined on the Cartesian product of finite sets, our problem formulation specializes to existing matrix and tensor completion problems. Additionally, the approach leads to the extension of kernel-based frameworks based on operator estimation, including multi-task learning, collaborative filtering and regularization networks. We illustrate applications and highlight the advantages with respect to existing learning techniques.

  • 17:15-18:00: Francesco Dinuzzo, Low rank output kernels for multi-task learning problems
    Abstract: Low Rank Output Kernel Learning (LROKL) is a kernel-based nonlinear generalization of Reduced Rank Regression that allows multiple estimation tasks to be solved simultaneously while revealing a shared low-dimensional subspace capturing inter-task relationships. This talk will discuss several alternative formulations of LROKL, numerical strategies to train the associated model, and applications to a variety of multi-task learning problems.

  • 18:00-18:30: Bamdev Mishra, Fixed-rank optimization on Riemannian quotient manifolds
    Abstract: Recently, the problem of low-rank matrix completion has attracted a lot of attention, and consequently many algorithms based on fixed-rank optimization have been proposed. Motivated by this, we are interested in three popular fixed-rank factorizations that have been used extensively in the optimization community.
    The search space of each of the three fixed-rank factorizations will be shown to admit a quotient structure; in particular, it will be given a more general Riemannian quotient structure (a small numerical illustration of the underlying invariance is given after this abstract). This structure embodies a product of well-known manifolds, and the Riemannian framework allows us to exploit this and to systematically develop the tools necessary for optimization. Along the way, two important concepts, symmetry and scaling, will be emphasized. A short discussion of the specific choice of the Riemannian metric and the trade-off between geometry and numerics will be presented.
    The resulting class of algorithms, while maintaining complete generality, competes effectively with other state-of-the-art algorithms for the low-rank matrix completion problem on a number of problem instances.
    I will also present the new optimization toolbox Manopt, which has been specifically designed for optimization on manifolds. The toolbox provides a unique platform to experiment with various manifolds and their geometries and comes bundled with a number of optimization schemes. Optimizing a cost function on a specific manifold can be implemented almost painlessly.
    Finally, I will present a few open questions and give an outlook on future research in this direction.
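
As a small numerical illustration of the invariance that motivates the quotient geometry in Bamdev Mishra's abstract: for the factorization X = G H^T, multiplying G on the right by any invertible M and H by the inverse transpose of M leaves X unchanged, so optimization effectively takes place on a quotient of the factor space. The check below is a generic sketch, not code from the talk or from Manopt.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, r = 30, 20, 4

# A rank-r factorization X = G @ H.T ...
G = rng.standard_normal((m, r))
H = rng.standard_normal((n, r))
X = G @ H.T

# ... is invariant under (G, H) -> (G @ M, H @ inv(M).T) for any invertible M,
# which is exactly the symmetry a Riemannian quotient structure factors out.
M = rng.standard_normal((r, r)) + 5 * np.eye(r)   # well-conditioned invertible matrix
G2 = G @ M
H2 = H @ np.linalg.inv(M).T
print(np.linalg.norm(X - G2 @ H2.T))   # ~ 0: same matrix, different factors
```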