Monday
11:00–12:00: Mario Sznaier, Convex relaxations of semialgebraic rank minimization problems arising in systems identification and machine learning Abstract: This talk discusses convex relaxations of semialgebraic rank minimization problems arising in three closely related fields: identification of switched systems, manifold embedding of dynamic data, and semi-supervised learning. As we will show in the talk, all of these problems are equivalent to minimizing the rank of a matrix that depends polynomially on the data. While in principle this is a hard nonconvex problem, the use of moment-based ideas allows for reducing it to a structured principal component decomposition: decomposing a matrix as a sum of structured low-rank and sparse components. In turn, this problem can be solved very efficiently using Augmented Lagrangian methods. In particular, we will discuss the use of the Alternating Direction Method of Multipliers (ADMM), which in this case requires performing only a sequence of singular value decomposition and thresholding operations. Finally, we will cover some recent results on Atomic Norm minimization that provide a computationally attractive alternative to Interior Point and Augmented Lagrangian methods. In the second portion of the talk we will present practical applications of these results to problems that involve extracting actionable information from very large data streams, including genomic and video data segmentation, contextually abnormal activity detection, and tracking in crowded scenarios.
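The low-rank-plus-sparse decomposition solved by ADMM, as described in the abstract, can be sketched as follows. This is a generic robust-PCA-style illustration, not code from the talk; the function names and the fixed penalty parameters `lam` and `mu` are illustrative choices. Note that each iteration costs exactly one SVD and one entrywise thresholding:

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: shrink the singular values of M by tau."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(M, tau):
    """Entrywise soft thresholding (proximal operator of the l1 norm)."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def low_rank_plus_sparse(D, lam=None, mu=1.0, n_iter=200):
    """Decompose D into a low-rank L plus a sparse S by ADMM.

    Each iteration performs one SVD (the L update) and one entrywise
    thresholding (the S update), matching the per-iteration cost
    described in the abstract. lam and mu are illustrative defaults.
    """
    if lam is None:
        lam = 1.0 / np.sqrt(max(D.shape))
    S = np.zeros_like(D)
    Y = np.zeros_like(D)  # scaled dual variable
    for _ in range(n_iter):
        L = svt(D - S + Y / mu, 1.0 / mu)
        S = soft(D - L + Y / mu, lam / mu)
        Y = Y + mu * (D - L - S)
    return L, S
```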
12:00–12:30: Kolumbán Sándor, Decreasing the complexity of SDP relaxations for polynomial optimization by forcing low-rank constraints Abstract: The size of the SDP relaxations corresponding to polynomial optimization problems increases combinatorially with the degree of the relaxation. Forcing low-rank constraints on the moment matrices involved in the LMI constraints opens up the possibility of reducing the size of the involved SDP. This may lead to better computational performance compared to other relaxations.
17:00–17:45: Ivan Markovsky, Low-rank approximation problems in system identification Abstract: The notion of linear time-invariant model complexity is quantified and related to the rank of a mosaic Hankel matrix constructed from a set of trajectories of the system. This fact makes mosaic Hankel structured low-rank approximation (SLRA) a powerful tool for linear time-invariant system identification. It was shown in the past that errors-in-variables and output error identification problems with known noise covariance structure can be solved by generic SLRA methods. The SLRA approach has attractive numerical properties and can deal with exact and missing data values. Finally, we present work in progress on solving autoregressive moving-average exogenous (ARMAX) identification problems by SLRA.
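The link between model complexity and Hankel matrix rank mentioned in the abstract is easy to verify numerically. The sketch below is a generic illustration, not code from the talk: it builds a Hankel matrix from a trajectory of a second-order autonomous linear time-invariant system (with arbitrarily chosen stable coefficients) and checks that its rank equals the system order:

```python
import numpy as np

def hankel_matrix(w, L):
    """Hankel matrix with L rows built from a scalar trajectory w."""
    T = len(w)
    return np.array([[w[i + j] for j in range(T - L + 1)] for i in range(L)])

# Trajectory of the autonomous LTI system y(t) = 1.5*y(t-1) - 0.7*y(t-2),
# which has order 2 (the coefficients are an arbitrary stable example).
y = [1.0, 0.5]
for t in range(2, 30):
    y.append(1.5 * y[-1] - 0.7 * y[-2])

H = hankel_matrix(np.array(y), 5)
# Every row of H obeys the same second-order recurrence, so the rank of
# the Hankel matrix equals the system order:
print(np.linalg.matrix_rank(H, tol=1e-8))  # prints 2
```

For noisy data the Hankel matrix becomes full rank, which is exactly why identification is posed as structured low-rank approximation: find the nearest matrix that is both Hankel-structured and rank-deficient.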
17:45–18:30: Konstantin Usevich, Adjusted least squares estimator for algebraic hypersurface fitting Abstract: The problem of algebraic hypersurface fitting is considered in an estimation framework: given a set of noisy data points, estimate the coefficients of a corresponding implicit multivariate polynomial equation. The proposed adjusted least squares estimator accounts for the bias present in the ordinary least squares estimator. An algorithm for computing the correction for an arbitrary support of the polynomial equation is presented and applied to the related problem of subspace clustering.
Tuesday
11:00–12:00: Lieven De Lathauwer, Numerical approximation of higher-order tensors: An introduction Abstract: We give an overview of basic tensor decompositions and explain their general use. We pay special attention to the use of tensor decompositions for approximation purposes. We discuss numerical strategies for tensor approximation, where important progress has recently been made. It has been recognized that the tensor product structure allows very efficient storage and handling of the Jacobian and (approximate) Hessian of the cost function. On the other hand, multilinearity allows global optimization in (scaled) line and plane search. Although there are many possible combinations of decomposition, symmetry and factor structure, these can be handled conveniently. We demonstrate the algorithms using Tensorlab, our recently published MATLAB toolbox for tensors and tensor computations. (Joint work with Laurent Sorber and Marc Van Barel)
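As a minimal illustration of the multilinear machinery such toolboxes build on, the sketch below computes a truncated higher-order SVD: one matrix SVD per unfolding gives the factor matrices, and projecting the tensor onto them gives the core. This is a generic NumPy sketch with illustrative function names, not Tensorlab code:

```python
import numpy as np

def unfold(T, mode):
    """Mode-n matricization: the mode-n fibers of T become columns."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_product(T, M, mode):
    """Multiply the tensor T by the matrix M along the given mode."""
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def hosvd(T, ranks):
    """Truncated higher-order SVD: factor matrices from the leading left
    singular vectors of each unfolding, plus the resulting core tensor."""
    U = [np.linalg.svd(unfold(T, n), full_matrices=False)[0][:, :r]
         for n, r in enumerate(ranks)]
    core = T
    for n, Un in enumerate(U):
        core = mode_product(core, Un.T, n)
    return core, U

def reconstruct(core, U):
    """Expand a core tensor back through the factor matrices."""
    T = core
    for n, Un in enumerate(U):
        T = mode_product(T, Un, n)
    return T
```

For a tensor whose multilinear rank does not exceed the truncation ranks, this reconstruction is exact; otherwise it gives a quasi-optimal low multilinear rank approximation that the iterative methods discussed in the talk can refine.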
12:00–12:30: Mariya Ishteva, Low multilinear rank approximation of tensors by structured matrix low-rank approximation Abstract: We first consider symmetric higher-order tensors and discuss a novel way to perform symmetric low multilinear rank approximation. Instead of representing the approximation as a product of factors with reduced dimensions, we make use of the fact that a matrix is rank deficient if it has a nonzero null space (nonzero kernel). We show how to reformulate the tensor approximation problem as a structured matrix low-rank approximation. In addition, by imposing a constraint on the kernel matrix of the approximation, our approach is applicable to general (nonsymmetric) tensors. Finally, we deal with affinely structured tensors and obtain a (locally) best low multilinear rank approximation with the same structure.
17:00–17:30: Michael Steinlechner, Low-rank tensor completion by Riemannian optimization Abstract: In tensor completion, the goal is to fill in missing entries of a partially known tensor under a low-rank constraint. We propose a new algorithm that applies Riemannian optimization techniques on the manifold of tensors of fixed multilinear rank. More specifically, a variant of the nonlinear conjugate gradient method is developed. Paying particular attention to an efficient implementation, our algorithm scales linearly in the size of the tensor. Examples with synthetic data demonstrate good recovery even if the vast majority of the entries are unknown. We illustrate the use of the developed algorithm for the recovery of multidimensional images and for the approximation of multivariate functions. This is joint work with D. Kressner and B. Vandereycken.
17:30–18:00: Bamdev Mishra, Manopt: A Matlab toolbox for optimization on manifolds
18:00–18:30: Augustin Lefèvre, Informed source separation: A case study for low-rank approximation problems Abstract: Single-channel source separation is a challenging problem in audio signal processing, with potential interest for tasks such as transcription or speech recognition in complex environments. We will first discuss how nonnegative matrix factorization (NMF) can be used to perform single-channel source separation. It is now widely accepted that additional constraints are needed to recover "meaningful" solutions, hence the notion of informed source separation. In this talk, we will present an optimization problem that is close to NMF and makes it possible to incorporate user-specified constraints in a generic way, within the framework of convex optimization. Experiments on audio signals illustrate our contribution and allow us to compare its benefits relative to NMF.
Thursday
16:30–17:15: Marco Signoretto, Tensor estimation in reproducing kernel Hilbert spaces with applications Abstract: Recently there has been an increasing interest in the cross-fertilization of ideas coming from (convex) optimization, kernel methods and tensor-based techniques. In this talk we discuss the estimation of tensors from data in the unifying framework of reproducing kernel Hilbert spaces (RKHSs). The methodology is based on a multilinear generalization of spectral penalties and relies on a novel representer theorem. When the RKHS is defined on the Cartesian product of finite sets, our problem formulation specializes into existing matrix and tensor completion problems. Additionally, the approach leads to the extension of kernel-based frameworks based on operator estimation. This includes multitask learning, collaborative filtering and regularization networks. We illustrate applications and highlight the advantages with respect to existing learning techniques.
17:15–18:00: Francesco Dinuzzo, Low-rank output kernels for multitask learning problems Abstract: Low Rank Output Kernel Learning (LROKL) is a kernel-based nonlinear generalization of Reduced Rank Regression that makes it possible to simultaneously solve multiple estimation tasks and reveal a shared low-dimensional subspace capturing inter-task relationships. This talk will discuss several alternative formulations of LROKL, numerical strategies to train the associated model, and applications to a variety of multitask learning problems.
18:00–18:30: Bamdev Mishra, Fixed-rank optimization on Riemannian quotient manifolds Abstract: Recently, the problem of low-rank matrix completion has attracted a lot of attention and, consequently, many algorithms for fixed-rank optimization have been proposed. Motivated by this, we are interested in three popular fixed-rank factorizations that have been used extensively in the optimization community. The search spaces of all three fixed-rank factorizations will be shown to admit a quotient structure; in particular, each will be given a more general Riemannian quotient structure. This structure embodies a product of well-known manifolds, and the Riemannian framework will allow us to exploit this and to systematically develop the tools necessary for optimization. As we move on, two important concepts, symmetry and scaling, will be emphasized. A short discussion on the specific choice of the Riemannian metric and the trade-off between geometry and numerics will be presented. The resulting class of algorithms, while maintaining complete generality, competes effectively with other state-of-the-art algorithms for the low-rank matrix completion problem on a number of problem instances. I will also present the new optimization toolbox Manopt, which has been specifically designed for optimization on manifolds. The toolbox provides a unique platform to experiment with various manifolds and their geometries and comes bundled with a number of optimization schemes. Optimizing a cost function on a specific manifold can be implemented almost painlessly. Finally, I will present a few open questions and give an outlook on future research in this direction.
