[1] 
K. Usevich and I. Markovsky.
Variable projection methods for approximate (greatest) common divisor
computations.
Theoretical Computer Science, 2016.
[ bib 
pdf 
http ]
We consider the problem of finding for a given N-tuple of polynomials the closest N-tuple that has a common divisor of degree at least d. An extended weighted Euclidean seminorm of the coefficients is used as a measure of closeness. Two equivalent formulations of the problem are considered: (i) direct optimization over common divisors and cofactors, and (ii) Sylvester low-rank approximation. We use the duality between least-squares and least-norm problems to show that (i) and (ii) are closely related to mosaic Hankel low-rank approximation. This allows us to apply recent results on complexity and accuracy of computations for mosaic Hankel low-rank approximation. We develop optimization methods based on the variable projection principle. These methods have linear complexity in the degrees of the polynomials if either d is small or d is of the same order as the degrees of the polynomials. We provide a software implementation that is based on a software package for structured low-rank approximation.
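The exact (noise-free) link between common divisors and rank that the paper's approximation problem relaxes can be seen on the Sylvester matrix: its rank deficiency equals the GCD degree. A minimal sketch in numpy (not the paper's software; polynomials and coefficients are illustrative):

```python
import numpy as np

def sylvester(p, q):
    """Sylvester matrix of polynomials p, q (coefficients in descending order)."""
    m, n = len(p) - 1, len(q) - 1          # degrees of p and q
    S = np.zeros((m + n, m + n))
    for i in range(n):                      # n shifted copies of p
        S[i, i:i + m + 1] = p
    for i in range(m):                      # m shifted copies of q
        S[n + i, i:i + n + 1] = q
    return S

# p = (x - 1)(x + 2) and q = (x - 1)(x - 3) share the factor (x - 1)
p = [1, 1, -2]
q = [1, -4, 3]
S = sylvester(p, q)
d = S.shape[0] - np.linalg.matrix_rank(S)   # GCD degree = rank deficiency of S
```

With noisy coefficients the rank deficiency is lost, which is why the paper approximates the Sylvester matrix by a nearby rank-deficient one instead of computing its rank.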

[2] 
I. Markovsky.
On the most powerful unfalsified model for data with missing values.
Systems & Control Letters, 2016.
[ bib 
DOI 
pdf 
software ]
The notion of the most powerful unfalsified model plays a key role in system identification. Since its introduction in the mid '80s, many methods have been developed for its numerical computation. All currently existing methods, however, assume that the given data is a complete trajectory of the system. Motivated by the practical issues of data corruption due to failing sensors, transmission lines, or storage devices, we study the problem of computing the most powerful unfalsified model from data with missing values. We do not make assumptions about the nature or pattern of the missing values apart from the basic one that they are a part of a trajectory of a linear time-invariant system. The identification problem with missing data is equivalent to a Hankel structured low-rank matrix completion problem. The method proposed constructs rank deficient complete submatrices of the incomplete Hankel matrix. Under specified conditions the kernels of the submatrices form a nonminimal kernel representation of the data generating system. The final step of the algorithm is reduction of the nonminimal kernel representation to a minimal one. Apart from its practical relevance in identification, missing data is a useful concept in systems and control. Classic problems, such as simulation, filtering, and tracking control can be viewed as missing data estimation problems for a given system. The corresponding identification problems with missing data are “data-driven” equivalents of the classical simulation, filtering, and tracking control problems. Keywords: most powerful unfalsified model, exact system identification, subspace methods, missing data, low-rank matrix completion. 
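The complete-data case underlying this paper can be sketched in a few lines: an exact trajectory of a linear time-invariant system makes the Hankel matrix rank deficient, and a kernel vector recovers the difference equation. A toy first-order example (the paper's contribution, handling missing values, is not shown here):

```python
import numpy as np

# An exact trajectory of the first-order system y(t+1) = 0.5 * y(t)
y = 0.5 ** np.arange(8)

# Hankel matrix with 2 rows; exactness of the data makes it rank deficient
H = np.array([y[:-1], y[1:]])
assert np.linalg.matrix_rank(H) == 1

# A left kernel vector of H recovers the system's difference equation:
# r[0]*y(t) + r[1]*y(t+1) = 0 for all t, i.e., r is proportional to [1, -2]
U, s, Vt = np.linalg.svd(H)
r = U[:, -1]
```

When samples of y are missing, rows of H are incomplete, and the paper builds complete rank-deficient submatrices instead.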
[3] 
I. Markovsky.
Comparison of adaptive and model-free methods for dynamic
measurement.
IEEE Signal Proc. Letters, 22(8):1094–1097, 2015.
[ bib 
DOI 
pdf 
software ]
Dynamic measurement aims to improve the speed and accuracy characteristics of measurement devices by signal processing. State-of-the-art dynamic measurement methods are model-based adaptive methods, i.e., 1) they estimate model parameters in real time and 2) based on the identified model perform model-based signal processing. The proposed model-free method belongs to the class of the subspace identification methods. It computes the quantity of interest directly, without an explicit parameter estimation step. This allows efficient computation as well as applicability to general high-order multivariable processes. Keywords: subspace methods, total least squares, adaptive filtering, model-free signal processing. 
[4] 
I. Markovsky and R. Pintelon.
Identification of linear time-invariant systems from multiple
experiments.
IEEE Trans. Signal Process., 63(13):3549–3554, 2015.
[ bib 
DOI 
pdf ]
A standard assumption for consistent estimation in the errors-in-variables setting is persistency of excitation of the noise-free input signal. We relax this assumption by considering data from multiple experiments. Consistency is obtained asymptotically as the number of experiments tends to infinity. The main theoretical and algorithmic difficulties are related to the growing number of to-be-estimated initial conditions. The method proposed in the paper is based on analytic elimination of the initial conditions and optimization over the remaining parameters. The resulting estimator is consistent; however, achieving asymptotic efficiency remains an open problem. Keywords: maximum likelihood system identification, sum-of-damped-exponentials modeling, consistency, structured low-rank approximation 
[5] 
K. Usevich and I. Markovsky.
Adjusted least squares fitting of algebraic hypersurfaces.
Linear Algebra Appl., 2015.
[ bib 
DOI 
pdf ]
We consider the problem of fitting a set of points in Euclidean space by an algebraic hypersurface. We assume that points on a true hypersurface, described by a polynomial equation, are corrupted by zero-mean independent Gaussian noise, and we estimate the coefficients of the true polynomial equation. The adjusted least squares estimator accounts for the bias present in the ordinary least squares estimator. The adjusted least squares estimator is based on constructing a quasi-Hankel matrix, which is a bias-corrected matrix of moments. For the case of unknown noise variance, the estimator is defined as a solution of a polynomial eigenvalue problem. In this paper, we present new results on invariance properties of the adjusted least squares estimator and an improved algorithm for computing the estimator for an arbitrary set of monomials in the polynomial equation. Keywords: hypersurface fitting; curve fitting; statistical estimation; errors-in-variables; quasi-Hankel matrix; Hermite polynomials; affine invariance; subspace clustering 
[6] 
I. Markovsky.
An application of system identification in metrology.
Control Engineering Practice, 43:85–93, 2015.
[ bib 
DOI 
pdf 
software ]
Dynamic measurement refers to problems in metrology aiming at modification of characteristics of measurement devices by signal processing. Prototypical dynamic measurement problems, used in the paper as illustrative examples, are speeding up thermometers and weight scales. The paper presents a system theoretic formalization of the dynamic measurement problem as an input estimation problem for a system with step input. If the process dynamics is a priori known, the dynamic measurement problem is equivalent to a state estimation problem and can be solved efficiently in the linear time-invariant case by the Kalman filter. In the case of unknown process dynamics, the dynamic measurement problem can be solved by adaptive filtering methods. A topic of current research, called data-driven dynamic measurement, is development of methods that compute the measurement quantity without explicitly identifying the process dynamics. Keywords: system identification, behavioral approach, Kalman filtering, metrology, reproducible research 
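The known-dynamics case admits a very simple noise-free sketch: model the sensor as a first-order linear time-invariant system and invert one step of its dynamics to read off the step input long before the output settles. A toy example with an assumed pole a (in practice the data is noisy, which is where the Kalman filter of the paper comes in):

```python
import numpy as np

a, u_true = 0.9, 37.0            # assumed slow sensor pole and true measurand
y = [0.0]
for _ in range(5):               # only 5 samples: far from steady state
    y.append(a * y[-1] + (1 - a) * u_true)

# With known dynamics, two consecutive samples determine the step input
u_est = (y[2] - a * y[1]) / (1 - a)
```

The data-driven methods mentioned in the abstract aim for the same answer without knowing a in advance.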
[7] 
N. Golyandina, A. Korobeynikov, A. Shlemov, and K. Usevich.
Multivariate and 2D extensions of singular spectrum analysis with
the Rssa Package.
Journal of Statistical Software, 67(2), 2015.
[ bib 
DOI ]
Implementation of multivariate and 2D extensions of singular spectrum analysis (SSA) by means of the R package Rssa is considered. The extensions include MSSA for simultaneous analysis and forecasting of several time series and 2D-SSA for analysis of digital images. A new extension of 2D-SSA analysis called shaped 2D-SSA is introduced for analysis of images of arbitrary shape, not necessarily rectangular. It is shown that implementation of shaped 2D-SSA can serve as a basis for implementation of MSSA and other generalizations. Efficient implementation of operations with Hankel and Hankel-block-Hankel matrices through the fast Fourier transform is suggested. Examples with code fragments in R, which explain the methodology and demonstrate the proper use of Rssa, are presented. Keywords: analysis, decomposition, forecasting, image processing, R package, singular spectrum analysis, time series 
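The basic 1D SSA step that all these extensions generalize — embed the series in a Hankel trajectory matrix and decompose it by the SVD — can be sketched outside R as well. A numpy toy (the window length and series are illustrative; Rssa itself is not used):

```python
import numpy as np

# Basic 1D SSA step: embed the series in a Hankel matrix and inspect its SVD
t = np.arange(20)
series = 0.8 ** t                     # single exponential => Hankel matrix has rank 1

L = 6                                  # window length (illustrative choice)
H = np.array([series[i:i + L] for i in range(len(series) - L + 1)]).T
s = np.linalg.svd(H, compute_uv=False)
# one dominant singular value: the series is a single elementary component
```

MSSA stacks the trajectory matrices of several series, and 2D-SSA replaces H by a Hankel-block-Hankel matrix built from image patches.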
[8] 
P. Dreesen, M. Ishteva, and J. Schoukens.
Decoupling multivariate polynomials using first-order information and
tensor decompositions.
SIAM J. Matrix Anal. Appl., 36(2):864–879, 2015.
[ bib 
DOI 
pdf ] 
[9] 
K. Usevich and I. Markovsky.
Variable projection for affinely structured low-rank approximation in
weighted 2-norms.
J. Comput. Appl. Math., 272:430–448, 2014.
[ bib 
DOI 
pdf 
software 
http ]
The structured low-rank approximation problem for general affine structures, weighted 2-norms and fixed elements is considered. The variable projection principle is used to reduce the dimensionality of the optimization problem. Algorithms for evaluation of the cost function, the gradient and an approximation of the Hessian are developed. For m×n mosaic Hankel matrices the algorithms have complexity O(m^{2}n). Keywords: Structured low-rank approximation; Variable projection; Mosaic Hankel matrices; Weighted 2-norm; Fixed elements; Computational complexity 
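The variable projection principle is easiest to see on a scalar separable problem: eliminate the linearly-entering parameter in closed form and optimize only over the nonlinear one. A minimal sketch (an exponential fit with an illustrative grid search, not the structured least-norm projection the paper develops):

```python
import numpy as np

# Separable model y(t) ~ c * exp(-lam * t): eliminate the linear parameter c
# analytically and search only over the nonlinear parameter lam.
t = np.arange(10, dtype=float)
y = 2.0 * np.exp(-0.5 * t)

def projected_cost(lam):
    g = np.exp(-lam * t)
    c = (g @ y) / (g @ g)          # optimal c for this lam, in closed form
    return np.sum((y - c * g) ** 2)

lams = np.linspace(0.1, 1.0, 91)   # grid with step 0.01 (contains 0.5)
best = lams[np.argmin([projected_cost(l) for l in lams])]
```

In the paper the projection step is a structured weighted least-norm problem, and the reduced cost is minimized by local optimization rather than a grid.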
[10] 
I. Markovsky and K. Usevich.
Software for weighted structured low-rank approximation.
J. Comput. Appl. Math., 256:278–292, 2014.
[ bib 
DOI 
pdf 
software 
.html ]
A software package is presented that computes locally optimal solutions to lowrank approximation problems with the following features:

[11] 
I. Markovsky.
Recent progress on variable projection methods for structured
low-rank approximation.
Signal Processing, 96, Part B:406–419, 2014.
[ bib 
DOI 
pdf 
software 
.html ]
Rank deficiency of a data matrix is equivalent to the existence of an exact linear model for the data. For the purpose of linear static modeling, the matrix is unstructured and the corresponding modeling problem is approximation of the matrix by another matrix of lower rank. In the context of linear time-invariant dynamic models, the appropriate data matrix is Hankel and the corresponding modeling problem becomes structured low-rank approximation. Low-rank approximation has applications in system identification, signal processing, machine learning, and computer algebra, where different types of structure and constraints occur. This paper gives an overview of recent progress in efficient local optimization algorithms for solving weighted mosaic-Hankel structured low-rank approximation problems. In addition, the data matrix may have missing elements and elements may be specified as exact. The described algorithms are implemented in a publicly available software package. Their application to system identification, approximate common divisor, and data-driven simulation problems is described in this paper and is illustrated by reproducible simulation examples. As a data modeling paradigm, the low-rank approximation setting is closely related to the behavioral approach in systems and control, total least squares, errors-in-variables modeling, principal component analysis, and rank minimization.
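For contrast with the structured problems the paper surveys, the unstructured case has a closed-form solution: the best rank-r approximation is the truncated SVD (Eckart–Young). A minimal sketch with an illustrative matrix:

```python
import numpy as np

# Unstructured low-rank approximation: truncate the SVD (Eckart-Young).
# Structure constraints (e.g., Hankel) rule out this closed form and
# require the local optimization methods surveyed in the paper.
A = np.array([[3.0, 0, 0], [0, 2.0, 0], [0, 0, 1.0]])
U, s, Vt = np.linalg.svd(A)
A1 = s[0] * np.outer(U[:, 0], Vt[0])    # best rank-1 approximation of A

err = np.linalg.norm(A - A1, "fro")     # equals sqrt(s[1]**2 + s[2]**2)
```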

[12] 
K. Usevich and I. Markovsky.
Optimization on a Grassmann manifold with application to system
identification.
Automatica, 50:1656–1662, 2014.
[ bib 
DOI 
pdf 
software 
.html ]
In this paper, we consider the problem of optimization of a cost function on a Grassmann manifold. This problem appears in system identification in the behavioral setting, which is a structured low-rank approximation problem. We develop a new method for local optimization on the Grassmann manifold with switching coordinate charts. This method reduces the optimization problem on the manifold to an optimization problem in a bounded domain of a Euclidean space. Our experiments show that this method is competitive with state-of-the-art retraction-based methods. Compared to retraction-based methods, the proposed method makes it easy to incorporate an arbitrary optimization method for solving the optimization subproblem in the Euclidean space. Keywords: system identification, overparameterized models, Grassmann manifold, coordinate charts, structured low-rank approximation, optimization 
[13] 
I. Markovsky, J. Goos, K. Usevich, and R. Pintelon.
Realization and identification of autonomous linear periodically
time-varying systems.
Automatica, 50:1632–1640, 2014.
[ bib 
DOI 
pdf 
software ]
Subsampling of a linear periodically time-varying system results in a collection of linear time-invariant systems with common poles. This key fact, known as “lifting”, is used in a two-step realization method. The first step is the realization of the time-invariant dynamics (the lifted system). Computationally, this step is a rank-revealing factorization of a block-Hankel matrix. The second step derives a state space representation of the periodic time-varying system. It is shown that no extra computations are required in the second step. The computational complexity of the overall method is therefore equal to the complexity for the realization of the lifted system. A modification of the realization method is proposed, which makes the complexity independent of the parameter variation period. Replacing the rank-revealing factorization in the realization algorithm by structured low-rank approximation yields a maximum likelihood identification method. Existing methods for structured low-rank approximation are used to efficiently identify linear periodically time-varying systems. These methods can deal with missing data. Keywords: linear periodically time-varying systems, lifting, realization, Kung's algorithm, Hankel low-rank approximation, maximum likelihood estimation. 
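The lifting fact itself fits in a few lines: subsampling a scalar periodic recursion by its period gives time-invariant subsequences sharing the pole a0*a1. A toy period-2 example (coefficients are illustrative; the realization algorithm of the paper is not shown):

```python
import numpy as np

# "Lifting": subsampling a periodically time-varying recursion gives
# time-invariant subsystems with the common pole a0*a1.
a = [0.5, -0.8]                       # period-2 coefficient of y(t+1) = a(t mod 2) * y(t)
y = [1.0]
for t in range(12):
    y.append(a[t % 2] * y[t])
y = np.array(y)

even = y[0::2]                        # each subsampled sequence is LTI:
odd = y[1::2]                         # y(t+2) = (a0*a1) * y(t)
assert np.allclose(even[1:], a[0] * a[1] * even[:-1])
assert np.allclose(odd[1:], a[0] * a[1] * odd[:-1])
```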
[14] 
M. Ishteva, K. Usevich, and I. Markovsky.
Factorization approach to structured low-rank approximation with
applications.
SIAM J. Matrix Anal. Appl., 35(3):1180–1204, 2014.
[ bib 
DOI 
pdf 
software ]
We consider the problem of approximating an affinely structured matrix, for example a Hankel matrix, by a low-rank matrix with the same structure. This problem occurs in system identification, signal processing, and computer algebra, among others. We impose the low rank by modeling the approximation as a product of two factors with reduced dimension. The structure of the low-rank model is enforced by introducing a regularization term in the objective function. The proposed local optimization algorithm is able to solve the weighted structured low-rank approximation problem, as well as to deal with the cases of missing or fixed elements. In contrast to approaches based on kernel representations (in the linear algebraic sense), the proposed algorithm is designed to address the case of small targeted rank. We compare it to existing approaches on numerical examples of system identification, the approximate greatest common divisor problem, and symmetric tensor decomposition and demonstrate its consistently good performance. Keywords: low-rank approximation, affine structure, regularization, system identification, approximate greatest common divisor 
[15] 
S. Rhode, K. Usevich, I. Markovsky, and F. Gauterin.
A recursive restricted total least-squares algorithm.
IEEE Trans. Signal Process., 62(21):5652–5662, 2014.
[ bib 
DOI 
pdf 
software ]
We show that the generalized total least squares problem with a singular noise covariance matrix is equivalent to the restricted total least squares problem and propose a recursive method for its numerical solution. The method is based on the generalized inverse iteration. The estimation error covariance matrix and the estimated augmented correction are also characterized and computed recursively. The algorithm is cheap to compute and is suitable for online implementation. Simulation results in least squares, data least squares, total least squares, and restricted total least squares noise scenarios show fast convergence of the parameter estimates to their optimal values obtained by corresponding batch algorithms. Keywords: total least squares, generalized total least squares, restricted total least squares (RTLS), recursive estimation, subspace tracking, system identification 
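The batch computation that this recursive algorithm generalizes is classical total least squares via the SVD: the solution is read off the right singular vector of the augmented matrix [A b] with the smallest singular value. A minimal sketch on consistent (noise-free) illustrative data, where TLS coincides with the exact solution:

```python
import numpy as np

# Classical batch TLS: right singular vector of [A b] for the smallest
# singular value gives the solution (after normalizing its last entry).
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
x_true = np.array([2.0, -1.0])
b = A @ x_true                         # consistent data: TLS equals the exact solution

V = np.linalg.svd(np.hstack([A, b[:, None]]))[2].T
x_tls = -V[:2, 2] / V[2, 2]
```

The paper's method replaces this batch SVD by generalized inverse iteration, updated recursively as new rows of data arrive.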
[16] 
S. De Marchi and K. Usevich.
On certain multivariate Vandermonde determinants whose variables
separate.
Linear Algebra and Its Applications, 449:17–27, 2014.
[ bib 
DOI ]
We prove that for almost square tensor product grids and certain sets of bivariate polynomials the Vandermonde determinant can be factored into a product of univariate Vandermonde determinants. This result generalizes the conjecture [Lemma 1, L. Bos et al. (2009), Dolomites Research Notes on Approximation, 2:1–15]. As a special case, we apply the result to Padua and Padua-like points. Keywords: Multivariate Vandermonde determinant; Padua points; Tensor product grid; Polynomial matrices; Semiseparable matrices 
[17] 
R. Kannan, M. Ishteva, and H. Park.
Bounded matrix factorization for recommender system.
Knowledge and Information Systems, 39(3):491–511, 2014.
[ bib 
DOI 
pdf 
http ]
Matrix factorization has been widely utilized as a latent factor model for solving the recommender system problem using collaborative filtering. For a recommender system, all the ratings in the rating matrix are bounded within a predetermined range. In this paper, we propose a new improved matrix factorization approach for such a rating matrix, called Bounded Matrix Factorization (BMF), which imposes a lower and an upper bound on every estimated missing element of the rating matrix. We present an efficient algorithm to solve BMF based on the block coordinate descent method. We show that our algorithm is scalable for large matrices with missing elements on multicore systems with low memory. We present substantial experimental results illustrating that the proposed method outperforms the state-of-the-art algorithms for recommender systems such as stochastic gradient descent, alternating least squares with regularization, SVD++, and Bias-SVD on real-world datasets such as Jester, Movielens, Book crossing, Online dating, and Netflix. Keywords: Low-rank approximation; Recommender systems; Bound constraints; Matrix factorization; Block coordinate descent method; Scalable algorithm 
[18] 
I. Markovsky and K. Usevich.
Structured low-rank approximation with missing data.
SIAM J. Matrix Anal. Appl., 34(2):814–830, 2013.
[ bib 
DOI 
pdf 
software 
.html ]
We consider low-rank approximation of affinely structured matrices with missing elements. The method proposed is based on reformulation of the problem as inner and outer optimization. The inner minimization is a singular linear least-norm problem and admits an analytic solution. The outer problem is a nonlinear least squares problem and is solved by local optimization methods: minimization subject to quadratic equality constraints and unconstrained minimization with regularized cost function. The method is generalized to weighted low-rank approximation with missing values and is illustrated on approximate low-rank matrix completion, system identification, and data-driven simulation problems. An extended version of the paper is a literate program, implementing the method and reproducing the presented results. Keywords: low-rank approximation, missing data, variable projection, system identification, approximate matrix completion. 
[19] 
I. Markovsky.
A software package for system identification in the behavioral
setting.
Control Engineering Practice, 21(10):1422–1436, 2013.
[ bib 
DOI 
pdf 
software 
.html ]
An identification problem with no a priori separation of the variables into inputs and outputs and a representation-invariant approximation criterion is considered. The model class consists of linear time-invariant systems of bounded complexity, and the approximation criterion is the minimum of a weighted 2-norm distance between the given time series and a time series that is consistent with the model. The problem is equivalent to, and is solved as, a mosaic-Hankel structured low-rank approximation problem. Software implementing the approach is developed and tested on benchmark problems. Additional nonstandard features of the software are specification of exact and missing variables and identification from multiple experiments. Keywords: system identification; model reduction; behavioral approach; missing data; low-rank approximation; total least squares; reproducible research; DAISY. 
[20] 
M. Ishteva, P.A. Absil, and P. Van Dooren.
Jacobi algorithm for the best low multilinear rank approximation of
symmetric tensors.
SIAM J. Matrix Anal. Appl., 34(2):651–672, 2013.
[ bib 
DOI 
pdf 
http ]
The problem discussed in this paper is the symmetric best low multilinear rank approximation of third-order symmetric tensors. We propose an algorithm based on Jacobi rotations, for which symmetry is preserved at each iteration. Two numerical examples are provided indicating the need for such algorithms. An important part of the paper consists of proving that our algorithm converges to stationary points of the objective function. This can be considered an advantage of the proposed algorithm over existing symmetry-preserving algorithms in the literature. Keywords: multilinear algebra, higher-order tensor, rank reduction, singular value decomposition, Jacobi rotation. 
[21] 
F. Le, I. Markovsky, C. Freeman, and E. Rogers.
Recursive identification of Hammerstein systems with application to
electrically stimulated muscle.
Control Engineering Practice, 20(4):386–396, 2012.
[ bib 
DOI 
pdf ]
Two methods for recursive identification of Hammerstein systems are presented. In the first method, the recursive least squares algorithm is applied to an overparameterized representation of the Hammerstein model, and a rank-1 approximation is used to recover the linear and nonlinear parameters from the estimated overparameterized representation. In the second method, the linear and nonlinear parameters are recursively estimated in an alternating manner. Numerical examples with simulated data and experimental data from human muscles show the superiority of the second method. Keywords: recursive identification; Hammerstein system; muscle model; functional electrical stimulation. 
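The rank-1 recovery step of the first method can be sketched directly: the overparameterized coefficient matrix is an outer product of the linear-part and nonlinearity coefficients, so a best rank-1 approximation (here exact, with illustrative coefficient vectors) separates them up to scaling:

```python
import numpy as np

# Overparameterization couples the linear (b) and nonlinear (a) parameters
# in the matrix Theta = outer(b, a); a rank-1 approximation separates them
# again, up to the usual scaling ambiguity.
a = np.array([1.0, 0.5, -0.3])        # nonlinearity coefficients (illustrative)
b = np.array([2.0, -1.0])             # linear-part coefficients (illustrative)
Theta = np.outer(b, a)

U, s, Vt = np.linalg.svd(Theta)
b_est = U[:, 0] * s[0]                # recovered factors: b_est ~ alpha*b,
a_est = Vt[0]                         # a_est ~ a/alpha for some alpha != 0
```

With an estimated (noisy) Theta, the same SVD step gives the best rank-1 fit rather than an exact factorization.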
[22] 
I. Markovsky.
On the complex least squares problem with constrained phase.
SIAM J. Matrix Anal. Appl., 32(3):987–992, 2011.
[ bib 
DOI 
pdf 
software ]
The problem of solving approximately in the least squares sense an overdetermined linear system of equations with complex valued coefficients is considered, where the elements of the solution vector are constrained to have the same phase. A direct solution to this problem is given in [Linear Algebra and Its Applications, Vol. 433, pp. 1719–1721]. An alternative direct solution that reduces the problem to a generalized eigenvalue problem is derived in this paper. The new solution is related to generalized low-rank matrix approximation and makes it possible to use existing robust and efficient algorithms. Keywords: Linear system of equations, Phase constraint, Low-rank approximation, Total least squares. 
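The problem statement can be illustrated by brute force: for each candidate common phase phi, the remaining problem in the real vector z is an ordinary least squares problem on the stacked real and imaginary parts. A toy grid-search sketch with illustrative data (the paper's contribution is the direct generalized-eigenvalue solution, not this search):

```python
import numpy as np

# min over phi and real z of || A * exp(1j*phi) * z - b ||
A = np.array([[1 + 1j, 2.0], [0.5j, 1 - 1j], [1.0, 3 + 0.5j]])
z_true = np.array([1.0, -2.0])
phi_true = 0.7
b = A @ (np.exp(1j * phi_true) * z_true)   # exact data for this toy example

def residual(phi):
    M = np.exp(1j * phi) * A
    Ms = np.vstack([M.real, M.imag])       # stack real and imaginary parts:
    bs = np.concatenate([b.real, b.imag])  # a real least squares problem in z
    z = np.linalg.lstsq(Ms, bs, rcond=None)[0]
    return np.linalg.norm(Ms @ z - bs)

phis = np.linspace(0, np.pi, 315)          # phase is identifiable modulo pi
phi_best = phis[np.argmin([residual(p) for p in phis])]
```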
This file was generated by bibtex2html 1.97.