[1]  P. Dreesen, M. Ishteva, and J. Schoukens. Recovering Wiener-Hammerstein nonlinear state-space models using linear algebra. In Proc. of the 17th IFAC Symposium on System Identification (SYSID 2015), Beijing, China, 2015. [ bib ] 
[2]  P. Dreesen, M. Ishteva, and J. Schoukens. On the full and block-decoupling of nonlinear functions. In PAMM - Proceedings of Applied Mathematics and Mechanics, volume 15, pages 739-742, 2015. [ bib  DOI  http ] 
[3] 
I. Markovsky and R. Pintelon.
Consistent estimation of autonomous linear time-invariant systems
from multiple experiments.
In Proc. of the Conference on Noise and Vibration
Engineering (ISMA), pages 3265-3268, Leuven, Belgium, September 2014.
[ bib 
pdf ]
Operational modal analysis from impulse response data can alternatively be viewed as identification of a stable autonomous linear time-invariant system. For example, earthquake response data of civil engineering structures and impulsive excitation of bridges lead to this problem. Identification from a single experiment, however, does not yield a consistent estimator in the output error setting due to the exponential decay of the noise-free signal. Using data from multiple experiments, on the other hand, is not straightforward because of the need to match the initial conditions in the repeated experiments. Consequently, we consider identification from arbitrary initial conditions and show that consistent estimation is possible in this case. The computational method proposed in the paper is based on analytic elimination of the initial conditions (nuisance parameter) and local optimization over the remaining (model) parameters. It is implemented in a ready-to-use software package, available from http://slra.github.io/software.html

[4] 
M. Ishteva and I. Markovsky.
Tensor low multilinear rank approximation by structured matrix
low-rank approximation.
In Proc. of the 21st International Symposium on
Mathematical Theory of Networks and Systems (MTNS 2014), pages 1808-1812,
Groningen, The Netherlands, July 2014.
[ bib 
DOI 
pdf ]
Low-rank approximations play an important role in systems theory and signal processing. The problems of model reduction and system identification can be posed and solved as a low-rank approximation problem for structured matrices. On the other hand, signal source separation, nonlinear system identification, and multidimensional signal processing and data analysis can be approached with powerful methods of low-rank tensor factorizations. The proposed invited session is focused on theoretical and algorithmic aspects of matrix and tensor low-rank approximations, with applications in system theory and multidimensional signal processing.

[5] 
K. Usevich.
Decomposing multivariate polynomials with structured low-rank matrix
completion.
In Proc. of the 21st International Symposium on Mathematical
Theory of Networks and Systems, pages 1826-1833, 2014.
[ bib ]
We focus on numerical methods for decomposing a multivariate polynomial as a sum of univariate polynomials in linear forms. The main tool is a recent result on the correspondence between the Waring rank of a homogeneous polynomial and the rank of a partially known quasi-Hankel matrix constructed from the coefficients of the polynomial. Based on this correspondence, we show that the original decomposition problem can be reformulated as structured low-rank matrix completion (or as structured low-rank approximation in the case of approximate decomposition). We construct algorithms for the polynomial decomposition problem. In the case of bivariate polynomials, we provide an extension of the well-known Sylvester algorithm for binary forms.

[6] 
A. Van Mulders, L. Vanbeylen, and K. Usevich.
Identification of a block-structured model with several sources of
nonlinearity.
In Proc. of the 14th European Control Conference, pages
1717-1722, 2014.
[ bib 
DOI ]
This paper focuses on a state-space based approach for the identification of a rather general nonlinear block-structured model. The model has several Single-Input Single-Output (SISO) static polynomial nonlinearities connected to a Multiple-Input Multiple-Output (MIMO) dynamic part. The presented method extends and improves prior work, in which at most two nonlinearities could be identified. The location of the nonlinearities and their relation to other parts of the model do not have to be known beforehand: the method is a black-box approach, in which no states, internal signals, or structural properties need to be measured or known. The first step is to estimate a partly structured polynomial (nonlinear) state-space model from input-output measurements. Secondly, an algebraic approach is used to split the dynamics and the nonlinearities by decomposing the multivariate polynomial coefficients.

[7] 
I. Markovsky.
Approximate identification with missing data.
In Proc. of the 52nd IEEE Conference on Decision and
Control, pages 156-161, Florence, Italy, December 2013.
[ bib 
DOI 
pdf 
software ]
Linear time-invariant system identification is considered in the behavioral setting. Nonstandard features of the problem are the specification of missing and exact variables and identification from multiple time series of different lengths. The problem is equivalent to mosaic Hankel structured low-rank approximation with an element-wise weighted cost function. Zero/infinite weights are assigned to the missing/exact data points. The problem is in general nonconvex. A solution method based on local optimization is outlined and compared with alternative methods on simulation examples. In a stochastic setting, the problem corresponds to errors-in-variables identification. A modification of the generic problem is presented that is a deterministic equivalent of the classical ARMAX identification. The modification is also a mosaic Hankel structured low-rank approximation problem. Keywords: system identification; behavioral approach; missing data; mosaic Hankel matrix; low-rank approximation 
[8] 
I. Markovsky.
Exact identification with missing data.
In Proc. of the 52nd IEEE Conference on Decision and
Control, pages 151-155, Florence, Italy, 2013.
[ bib 
DOI 
pdf 
software ]
The paper presents initial results on a subspace method for exact identification of a linear time-invariant system from data with missing values. The identification problem with missing data is equivalent to a Hankel structured low-rank matrix completion problem. The novel idea is to search systematically for, and use effectively, completely specified submatrices of the incomplete Hankel matrix constructed from the given data. Nontrivial kernels of the rank-deficient completely specified submatrices carry information about the to-be-identified system. Combining this information into a full model of the identified system is a greatest common divisor computation problem. The developed subspace method has linear computational complexity in the number of data points and is therefore an attractive alternative to more expensive methods based on the nuclear norm heuristic. Keywords: subspace system identification, missing data, low-rank matrix completion, nuclear norm, realization. 
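The kernel idea behind this abstract can be illustrated with a minimal sketch: for an autonomous LTI system, the Hankel matrix of the output is rank-deficient, and its left kernel gives the system's characteristic polynomial. The second-order system, signal length, and all names below are illustrative choices, not taken from the paper.

```python
import numpy as np

def hankel(w, L):
    """L x (len(w)-L+1) Hankel matrix of the signal w."""
    return np.array([w[i:i + len(w) - L + 1] for i in range(L)])

# Simulate an autonomous second-order system: y(t) = 1.5 y(t-1) - 0.7 y(t-2)
y = [1.0, 0.5]
for _ in range(18):
    y.append(1.5 * y[-1] - 0.7 * y[-2])
y = np.array(y)

H = hankel(y, 3)               # order-2 system => rank(H) = 2, not 3
_, s, Vt = np.linalg.svd(H.T)  # left kernel of H = null space of H.T
k = Vt[-1] / Vt[-1][-1]        # kernel vector, normalized: [0.7, -1.5, 1]
```

The normalized kernel vector recovers the difference-equation coefficients; with missing values, only completely specified submatrices of `H` would be available, which is where the paper's systematic search comes in.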
[9] 
M. Ishteva, L. Song, and H. Park.
Unfolding latent tree structures using 4th order tensors.
In Proc. of the International Conference on Machine Learning
(ICML), 2013.
[ bib 
DOI 
pdf 
.html ]
Discovering the structures of latent variable models whose conditional independence structures are trees is an important yet challenging learning task. Existing approaches for this task often require the unknown number of hidden states as an input. In this paper, we propose a quartet-based approach which is agnostic to this number. The key contribution is a novel rank characterization of the tensor associated with the marginal distribution of a quartet. This characterization allows us to design a nuclear norm based test for resolving quartet relations. We then use the quartet test as a subroutine in a divide-and-conquer algorithm for recovering the latent tree structure. We also derive the conditions under which the algorithm is consistent and its error probability decays exponentially with increasing sample size. We demonstrate that the proposed approach compares favorably to alternatives. In a real-world stock dataset, it also discovers meaningful groupings of variables, and produces a model that fits the data better. Keywords: latent tree graphical model, structure learning, quartet test, tensor, low rank, nuclear norm 
[10] 
L. Song, M. Ishteva, A. Parikh, E. Xing, and H. Park.
Hierarchical tensor decomposition of latent tree graphical models.
In Proc. of the International Conference on Machine Learning (ICML), 2013.
[ bib 
DOI 
pdf 
.html ]
We approach the problem of estimating the parameters of a latent tree graphical model from a hierarchical tensor decomposition point of view. In this new view, the marginal probability table of the observed variables is treated as a tensor, and we show that: (i) the latent variables induce low rank structures in various matricizations of the tensor; (ii) this collection of low rank matricizations induces a hierarchical low rank decomposition of the tensor. We further derive an optimization problem for estimating (alternative) parameters of a latent tree graphical model, allowing us to represent the marginal probability table of the observed variables in a compact and robust way. The optimization problem aims to find the best hierarchical low rank approximation of a tensor in Frobenius norm. Keywords: parameter estimation, latent tree graphical model, hierarchical tensor decomposition, low rank 
[11] 
K. Usevich.
Improved initial approximation for errors-in-variables system
identification.
In Proc. of the 20th Mediterranean Conference on Control and
Automation, pages 198-203, Barcelona, Spain, July 2012.
[ bib 
DOI ]
Errors-in-variables system identification can be posed and solved as a Hankel structured low-rank approximation problem. In this paper, different estimates based on suboptimal low-rank approximations are considered. The estimates are shown to have almost the same efficiency and to lead to the same minimum when supplied as an initial approximation to a local optimization solver for the structured low-rank approximation problem. It is also shown that increasing the Hankel matrix window length improves the suboptimal estimates for autonomous systems but does not improve them for systems with inputs.

[12] 
I. Markovsky.
Dynamical systems and control mindstorms.
In Proc. of the 20th Mediterranean Conference on Control
and Automation, pages 54-59, Barcelona, Spain, 2012.
[ bib 
DOI 
pdf ]
An unorthodox programme for teaching systems and control is developed and tested at the School of Electronics and Computer Science of the University of Southampton. The employed teaching methods are motivated by Moore's method and S. Papert's book “Mindstorms: children, computers, and powerful ideas”. The teaching is shifted from lecture instruction to independent work on computer-based projects and physical models. Our experience shows that involvement with projects is more effective in stimulating curiosity about systems and control concepts and in achieving understanding of these concepts. The programme consists of two parts: 1) analytical and computational exercises, using Matlab/Octave, and 2) laboratory exercises, using programmable Lego Mindstorms models. Both activities cut across several disciplines (physics, mathematics, and computer programming) as well as the subject of the programme: systems and control theory.

[13] 
K. Usevich and I. Markovsky.
Structured low-rank approximation as a rational function
minimization.
In Proc. of the 16th IFAC Symposium on System
Identification, pages 722-727, Brussels, 2012.
[ bib 
DOI 
pdf ]
Many problems of system identification, model reduction, and signal processing can be posed and solved as a structured low-rank approximation (SLRA) problem. In this paper, a reformulation of SLRA as the minimization of a multivariate rational function is considered. Using two different parametrizations, we show that the problem reduces to optimization over a compact manifold or to a set of optimization problems over bounded domains of Euclidean space. We review methods of polynomial algebra for global optimization of the rational cost function.

[14] 
I. Markovsky.
How effective is the nuclear norm heuristic in solving data
approximation problems?
In Proc. of the 16th IFAC Symposium on System
Identification, pages 316-321, Brussels, 2012.
[ bib 
DOI 
pdf 
software ]
The question in the title is answered empirically by solving instances of three classical problems: fitting a straight line to data, fitting a real exponent to data, and system identification in the errors-in-variables setting. The results show that the nuclear norm heuristic performs worse than alternative problem-dependent methods: ordinary and total least squares, Kung's method, and subspace identification. In the line fitting and exponential fitting problems, the globally optimal solution is known analytically, so the suboptimality of the heuristic methods can be quantified.
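A standard building block of nuclear norm methods, such as those evaluated above, is singular value thresholding, the proximal operator of the nuclear norm. The following sketch illustrates only this building block, not the paper's experiments; the data matrix and threshold value are made-up examples.

```python
import numpy as np

def svt(D, tau):
    """Singular value thresholding: the proximal operator of
    tau * (nuclear norm), evaluated at the matrix D."""
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# A rank-1 matrix perturbed by small deterministic noise is full rank...
rng = np.random.default_rng(0)
D = np.outer([1.0, 2.0, 3.0], np.ones(4)) + 0.01 * rng.standard_normal((3, 4))

# ...but shrinking the singular values recovers a low-rank approximation.
Dhat = svt(D, 0.5)
```

Shrinking all singular values uniformly is exactly what makes the heuristic convex and also what biases its estimates, which is one way to read the paper's negative empirical findings.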

This file was generated by bibtex2html 1.97.