Package Modelica.​Math.​Matrices
Library of functions operating on matrices

Information

Library content

This library provides functions operating on matrices. Below, the functions are ordered according to categories and a typical call of the respective function is shown. Most functions are solely an interface to the external LAPACK library.

Note: A' is a shorthand notation for transpose(A).

Basic Information

Linear Equations

Matrix Factorizations

Matrix Properties

Matrix Exponentials

Matrix Equations

Matrix Manipulation

See also

Vectors

Extends from Modelica.​Icons.​Package (Icon for standard packages).

Package Contents

Name                  Description
balance               Return a balanced form of matrix A to improve the condition of A
balanceABC            Return a balanced form of a system [A,B;C,0] to improve its condition by a state transformation
cholesky              Return the Cholesky factorization of a symmetric positive definite matrix
conditionNumber       Return the condition number norm(A)*norm(inv(A)) of a matrix A
continuousLyapunov    Return solution X of the continuous-time Lyapunov equation X*A + A'*X = C
continuousRiccati     Return solution X of the continuous-time algebraic Riccati equation A'*X + X*A - X*B*inv(R)*B'*X + Q = 0 (care)
continuousSylvester   Return solution X of the continuous-time Sylvester equation A*X + X*B = C
det                   Return determinant of a matrix (computed by LU decomposition; try to avoid det(..))
discreteLyapunov      Return solution X of the discrete-time Lyapunov equation A'*X*A + sgn*X = C
discreteRiccati       Return solution of discrete-time algebraic Riccati equation A'*X*A - X - A'*X*B*inv(R + B'*X*B)*B'*X*A + Q = 0 (dare)
discreteSylvester     Return solution of the discrete-time Sylvester equation A*X*B + sgn*X = C
eigenValueMatrix      Return real valued block diagonal matrix J of eigenvalues of matrix A (A=V*J*Vinv)
eigenValues           Return eigenvalues for a real, nonsymmetric matrix in a Real representation
equalityLeastSquares  Solve a linear equality constrained least squares problem
Examples              Examples demonstrating the usage of the Math.Matrices functions
exp                   Return the exponential of a matrix by adaptive Taylor series expansion with scaling and balancing
flipLeftRight         Flip the columns of a matrix in left/right direction
flipUpDown            Flip the rows of a matrix in up/down direction
frobeniusNorm         Return the Frobenius norm of a matrix
hessenberg            Return upper Hessenberg form of a matrix
integralExp           Return the exponential and the integral of the exponential of a matrix
integralExpT          Return the exponential, the integral of the exponential, and time-weighted integral of the exponential of a matrix
inv                   Return inverse of a matrix (try to avoid inv(..))
isEqual               Compare whether two Real matrices are identical
LAPACK                Interface to LAPACK library (should usually not directly be used but only indirectly via Modelica.Math.Matrices)
leastSquares          Solve linear equation A*x = b (exactly if possible, or otherwise in a least square sense; A may be non-square and may be rank deficient)
leastSquares2         Solve linear equation A*X = B (exactly if possible, or otherwise in a least square sense; A may be non-square and may be rank deficient)
LU                    LU decomposition of square or rectangular matrix
LU_solve              Solve real system of linear equations P*L*U*x=b with a b vector and an LU decomposition (from LU(..))
LU_solve2             Solve real system of linear equations P*L*U*X=B with a B matrix and an LU decomposition (from LU(..))
norm                  Return the p-norm of a matrix
nullSpace             Return the orthonormal nullspace of a matrix
QR                    Return the QR decomposition of a square matrix with optional column pivoting (A(:,p) = Q*R)
rank                  Return rank of a rectangular matrix (computed with singular values)
rcond                 Return the reciprocal condition number of a matrix
realSchur             Return the real Schur form (rsf) S of a square matrix A, A=QZ*S*QZ'
singularValues        Return singular values and left and right singular vectors
solve                 Solve real system of linear equations A*x=b with a b vector (Gaussian elimination with partial pivoting)
solve2                Solve real system of linear equations A*X=B with a B matrix (Gaussian elimination with partial pivoting)
sort                  Sort the rows or columns of a matrix in ascending or descending order
toString              Convert a matrix into its string representation
trace                 Return the trace of matrix A, i.e., the sum of the diagonal elements
Utilities             Utility functions that should not be directly utilized by the user

Function Modelica.​Math.​Matrices.​toString
Convert a matrix into its string representation

Information

Syntax

Matrices.toString(A);
Matrices.toString(A, name="", significantDigits=6);

Description

The function call "Matrices.toString(A)" returns the string representation of matrix A. With the optional arguments "name" and "significantDigits", a name and the number of the digits are defined. The default values of name and significantDigits are "" and 6 respectively. If name=="" then the prefix "<name> =" is left out.

Example

  A = [2.12, -4.34; -2.56, -1.67];

  toString(A);
  // = "
  //      2.12   -4.34
  //     -2.56   -1.67";

  toString(A,"A",1);
  // = "A =
  //         2     -4
  //        -3     -2"

See also

Vectors.toString

Extends from Modelica.​Icons.​Function (Icon for functions).

Inputs

Type     Name               Description
Real     M[:,:]             Real matrix
String   name               Independent variable name used for printing
Integer  significantDigits  Number of significant digits that are shown

Outputs

Type     Name  Description
String   s     String expression of matrix M

Function Modelica.​Math.​Matrices.​isEqual
Compare whether two Real matrices are identical

Information

Syntax

Matrices.isEqual(M1, M2);
Matrices.isEqual(M1, M2, eps=0);

Description

The function call "Matrices.isEqual(M1, M2)" returns true, if the two Real matrices M1 and M2 have the same dimensions and the same elements. Otherwise the function returns false. Two elements e1 and e2 of the two matrices are checked on equality by the test "abs(e1-e2) ≤ eps", where "eps" can be provided as third argument of the function. Default is "eps = 0".

Example

  Real A1[2,2] = [1,2; 3,4];
  Real A2[3,2] = [1,2; 3,4; 5,6];
  Real A3[2,2] = [1,2; 3,4.0001];
  Boolean result;
algorithm
  result := Matrices.isEqual(A1,A2);     // = false
  result := Matrices.isEqual(A1,A3);     // = false
  result := Matrices.isEqual(A1,A1);     // = true
  result := Matrices.isEqual(A1,A3,0.1); // = true

See also

Vectors.isEqual, Strings.isEqual

Extends from Modelica.​Icons.​Function (Icon for functions).

Inputs

Type     Name     Description
Real     M1[:,:]  First matrix
Real     M2[:,:]  Second matrix (may have different size as M1)
Real     eps      Two elements e1 and e2 of the two matrices are identical if abs(e1-e2) <= eps

Outputs

Type     Name    Description
Boolean  result  = true, if matrices have the same size and the same elements

Function Modelica.​Math.​Matrices.​solve
Solve real system of linear equations A*x=b with a b vector (Gaussian elimination with partial pivoting)

Information

Syntax

Matrices.solve(A,b);

Description

This function call returns the solution x of the linear system of equations

A*x = b

If a unique solution x does not exist (since A is singular), an assertion is triggered. If this is not desired, use Matrices.leastSquares instead and check the uniqueness of the solution with the return argument rank (a unique solution is computed if rank = size(A,1)).

Note, the solution is computed with the LAPACK function "dgesv", i.e., by Gaussian elimination with partial pivoting.

Example

  Real A[3,3] = [1,2,3;
                 3,4,5;
                 2,1,4];
  Real b[3] = {10,22,12};
  Real x[3];
algorithm
  x := Matrices.solve(A,b);  // x = {3,2,1}

See also

Matrices.LU, Matrices.LU_solve, Matrices.leastSquares.

Extends from Modelica.​Icons.​Function (Icon for functions).

Inputs

Type  Name             Description
Real  A[:,size(A, 1)]  Matrix A of A*x = b
Real  b[size(A, 1)]    Vector b of A*x = b

Outputs

Type  Name           Description
Real  x[size(b, 1)]  Vector x such that A*x = b

Function Modelica.​Math.​Matrices.​solve2
Solve real system of linear equations A*X=B with a B matrix (Gaussian elimination with partial pivoting)

Information

Syntax

Matrices.solve2(A,B);

Description

This function call returns the solution X of the linear system of equations

A*X = B

If a unique solution X does not exist (since A is singular), an assertion is triggered. If this is not desired, use Matrices.leastSquares2 instead and check the uniqueness of the solution with the return argument rank (a unique solution is computed if rank = size(A,1)).

Note, the solution is computed with the LAPACK function "dgesv", i.e., by Gaussian elimination with partial pivoting.

Example

  Real A[3,3] = [1,2,3;
                 3,4,5;
                 2,1,4];
  Real B[3,2] = [10, 20;
                 22, 44;
                 12, 24];
  Real X[3,2];
algorithm
  X := Matrices.solve2(A, B);  /* X = [3, 6;
                                       2, 4;
                                       1, 2] */

See also

Matrices.LU, Matrices.LU_solve2, Matrices.leastSquares2.

Extends from Modelica.​Icons.​Function (Icon for functions).

Inputs

Type  Name             Description
Real  A[:,size(A, 1)]  Matrix A of A*X = B
Real  B[size(A, 1),:]  Matrix B of A*X = B

Outputs

Type  Name                      Description
Real  X[size(B, 1),size(B, 2)]  Matrix X such that A*X = B

Function Modelica.​Math.​Matrices.​leastSquares
Solve linear equation A*x = b (exactly if possible, or otherwise in a least square sense; A may be non-square and may be rank deficient)

Information

Syntax

x = Matrices.leastSquares(A,b);

Description

Returns a solution of equation A*x = b in a least square sense (A may be rank deficient):

  minimize | A*x - b |

Several different cases can be distinguished (note, rank is an output argument of this function):

size(A,1) = size(A,2):

A solution is returned for a regular as well as a singular matrix A. If A is singular, x is computed such that |A*x - b| is minimized.

size(A,1) > size(A,2):

The equation A*x = b has no unique solution. The solution x is selected such that |A*x - b| is as small as possible. If rank = size(A,2), this minimum norm solution is unique. If rank < size(A,2), there are infinitely many solutions leading to the same minimum value of |A*x - b|. From these, the one with the minimum norm |x| is selected. This gives a unique solution that minimizes both |A*x - b| and |x|.

size(A,1) < size(A,2):

The equation A*x = b is underdetermined. From its solutions, the one with the minimum norm |x| is returned.

Note, the solution is computed with the LAPACK function "dgelsy", i.e., QR or LQ factorization of A with column pivoting.

Algorithmic details

The function first computes a QR factorization with column pivoting:

      A * P = Q * [ R11 R12 ]
                  [  0  R22 ]

with R11 defined as the largest leading submatrix whose estimated condition number is less than 1/rcond. The order of R11, rank, is the effective rank of A.

Then, R22 is considered to be negligible, and R12 is annihilated by orthogonal transformations from the right, arriving at the complete orthogonal factorization:

     A * P = Q * [ T11 0 ] * Z
                 [  0  0 ]

The minimum-norm solution is then

     x = P * Z' [ inv(T11)*Q1'*b ]
                [        0       ]

where Q1 consists of the first "rank" columns of Q.
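
Example

A small sketch for an overdetermined system; the result in the comment is hand-computed for this data and only indicates the expected least squares solution:

  Real A[3,2] = [1, 0;
                 0, 1;
                 1, 1];
  Real b[3] = {1, 1, 1};
  Real x[2];
  Integer rankA;
algorithm
  (x, rankA) := Matrices.leastSquares(A, b);  // x = {0.667, 0.667}, rankA = 2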

See also

Matrices.leastSquares2 (same as leastSquares, but with a right hand side matrix),
Matrices.solve (for square, regular matrices A)

Extends from Modelica.​Icons.​Function (Icon for functions).

Inputs

Type     Name           Description
Real     A[:,:]         Matrix A
Real     b[size(A, 1)]  Vector b
Real     rcond          Reciprocal condition number to estimate the rank of A

Outputs

Type     Name           Description
Real     x[size(A, 2)]  Vector x such that min|A*x-b|^2 if size(A,1) >= size(A,2) or min|x|^2 and A*x=b, if size(A,1) < size(A,2)
Integer  rank           Rank of A

Function Modelica.​Math.​Matrices.​leastSquares2
Solve linear equation A*X = B (exactly if possible, or otherwise in a least square sense; A may be non-square and may be rank deficient)

Information

Syntax

X = Matrices.leastSquares2(A,B);

Description

Returns a solution of equation A*X = B in a least square sense (A may be rank deficient):

  minimize | A*X - B |

Several different cases can be distinguished (note, rank is an output argument of this function):

size(A,1) = size(A,2):

A solution is returned for a regular as well as a singular matrix A. If A is singular, X is computed such that |A*X - B| is minimized.

size(A,1) > size(A,2):

The equation A*X = B has no unique solution. The solution X is selected such that |A*X - B| is as small as possible. If rank = size(A,2), this minimum norm solution is unique. If rank < size(A,2), there are infinitely many solutions leading to the same minimum value of |A*X - B|. From these, the one with the minimum norm |X| is selected. This gives a unique solution that minimizes both |A*X - B| and |X|.

size(A,1) < size(A,2):

The equation A*X = B is underdetermined. From its solutions, the one with the minimum norm |X| is returned.

Note, the solution is computed with the LAPACK function "dgelsy", i.e., QR or LQ factorization of A with column pivoting.

Algorithmic details

The function first computes a QR factorization with column pivoting:

      A * P = Q * [ R11 R12 ]
                  [  0  R22 ]

with R11 defined as the largest leading submatrix whose estimated condition number is less than 1/rcond. The order of R11, rank, is the effective rank of A.

Then, R22 is considered to be negligible, and R12 is annihilated by orthogonal transformations from the right, arriving at the complete orthogonal factorization:

     A * P = Q * [ T11 0 ] * Z
                 [  0  0 ]

The minimum-norm solution is then

     X = P * Z' [ inv(T11)*Q1'*B ]
                [        0       ]

where Q1 consists of the first "rank" columns of Q.
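
Example

A small sketch with a right hand side matrix; the values in the comments are hand-computed for this data and only indicate the expected result:

  Real A[3,2] = [1, 0;
                 0, 1;
                 1, 1];
  Real B[3,2] = [1, 2;
                 1, 2;
                 1, 2];
  Real X[2,2];
  Integer rankA;
algorithm
  (X, rankA) := Matrices.leastSquares2(A, B);  /* X = [0.667, 1.333;
                                                       0.667, 1.333], rankA = 2 */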

See also

Matrices.leastSquares (same as leastSquares2, but with a right hand side vector),
Matrices.solve2 (for square, regular matrices A)

Extends from Modelica.​Icons.​Function (Icon for functions).

Inputs

Type     Name                      Description
Real     A[:,:]                    Matrix A
Real     B[size(A, 1),:]           Matrix B
Real     rcond                     Reciprocal condition number to estimate rank of A

Outputs

Type     Name                       Description
Real     X[size(A, 2),size(B, 2)]   Matrix X such that min|A*X-B|^2 if size(A,1) >= size(A,2) or min|X|^2 and A*X=B, if size(A,1) < size(A,2)
Integer  rank                       Rank of A

Function Modelica.​Math.​Matrices.​equalityLeastSquares
Solve a linear equality constrained least squares problem

Information

Syntax

x = Matrices.equalityLeastSquares(A,a,B,b);

Description

This function returns the solution x of the linear equality-constrained least squares problem:

min|A*x - a|^2 over x, subject to B*x = b

It is required that the dimensions of A and B fulfill the following relationship:

size(B,1) ≤ size(A,2) ≤ size(A,1) + size(B,1)

Note, the solution is computed with the LAPACK function "dgglse" using the generalized RQ factorization under the assumptions that B has full row rank (= size(B,1)) and the matrix [A;B] has full column rank (= size(A,2)). In this case, the problem has a unique solution.
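
Example

A minimal sketch that projects the unconstrained optimum onto the constraint x[1] + x[2] = 0; the result in the comment is hand-computed for this data:

  Real A[2,2] = [1, 0;
                 0, 1];
  Real a[2] = {1, 2};
  Real B[1,2] = [1, 1];
  Real b[1] = {0};
  Real x[2];
algorithm
  x := Matrices.equalityLeastSquares(A, a, B, b);  // x = {-0.5, 0.5}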

Extends from Modelica.​Icons.​Function (Icon for functions).

Inputs

Type  Name             Description
Real  A[:,:]           Minimize |A*x - a|^2
Real  a[size(A, 1)]
Real  B[:,size(A, 2)]  subject to B*x=b
Real  b[size(B, 1)]

Outputs

Type  Name           Description
Real  x[size(A, 2)]  solution vector

Function Modelica.​Math.​Matrices.​LU
LU decomposition of square or rectangular matrix

Information

Syntax

(LU, pivots)       = Matrices.LU(A);
(LU, pivots, info) = Matrices.LU(A);

Description

This function call returns the LU decomposition of a "Real[m,n]" matrix A, i.e.,

P*L*U = A

where P is a permutation matrix (implicitly defined by vector pivots), L is a lower triangular matrix with unit diagonal elements (lower trapezoidal if m > n), and U is an upper triangular matrix (upper trapezoidal if m < n). Matrices L and U are stored in the returned matrix LU (the diagonal of L is not stored). With the companion function Matrices.LU_solve, this decomposition can be used to solve linear systems (P*L*U)*x = b with different right hand side vectors b. If a linear system of equations with just one right hand side vector b shall be solved, it is more convenient to just use the function Matrices.solve.

The optional third (Integer) output argument has the following meaning:

info = 0: successful exit
info > 0: if info = i, U[i,i] is exactly zero. The factorization has been completed, but the factor U is exactly singular, and division by zero will occur if it is used to solve a system of equations.

The LU factorization is computed with the LAPACK function "dgetrf", i.e., by Gaussian elimination using partial pivoting with row interchanges. Vector "pivots" are the pivot indices, i.e., for 1 ≤ i ≤ min(m,n), row i of matrix A was interchanged with row pivots[i].

Example

  Real A[3,3] = [1,2,3;
                 3,4,5;
                 2,1,4];
  Real b1[3] = {10,22,12};
  Real b2[3] = { 7,13,10};
  Real    LU[3,3];
  Integer pivots[3];
  Real    x1[3];
  Real    x2[3];
algorithm
  (LU, pivots) := Matrices.LU(A);
  x1 := Matrices.LU_solve(LU, pivots, b1);  // x1 = {3,2,1}
  x2 := Matrices.LU_solve(LU, pivots, b2);  // x2 = {1,0,2}

See also

Matrices.LU_solve, Matrices.solve,

Extends from Modelica.​Icons.​Function (Icon for functions).

Inputs

Type     Name                                 Description
Real     A[:,:]                               Square or rectangular matrix

Outputs

Type     Name                                 Description
Real     LU[size(A, 1),size(A, 2)]            L,U factors (used with LU_solve(..))
Integer  pivots[min(size(A, 1), size(A, 2))]  pivot indices (used with LU_solve(..))
Integer  info                                 Information

Function Modelica.​Math.​Matrices.​LU_solve
Solve real system of linear equations P*L*U*x=b with a b vector and an LU decomposition (from LU(..))

Information

Syntax

Matrices.LU_solve(LU, pivots, b);

Description

This function call returns the solution x of the linear systems of equations

P*L*U*x = b;

where P is a permutation matrix (implicitly defined by vector pivots), L is a lower triangular matrix with unit diagonal elements (lower trapezoidal if m > n), and U is an upper triangular matrix (upper trapezoidal if m < n). The matrices of this decomposition are computed with function Matrices.LU that returns arguments LU and pivots used as input arguments of Matrices.LU_solve. With Matrices.LU and Matrices.LU_solve it is possible to efficiently solve linear systems with different right hand side vectors. If a linear system of equations with just one right hand side vector shall be solved, it is more convenient to just use the function Matrices.solve.

If a unique solution x does not exist (since the LU decomposition is singular), an exception is raised.

The LU factorization is computed with the LAPACK function "dgetrf", i.e., by Gaussian elimination using partial pivoting with row interchanges. Vector "pivots" are the pivot indices, i.e., for 1 ≤ i ≤ min(m,n), row i of matrix A was interchanged with row pivots[i].

Example

  Real A[3,3] = [1,2,3;
                 3,4,5;
                 2,1,4];
  Real b1[3] = {10,22,12};
  Real b2[3] = { 7,13,10};
  Real    LU[3,3];
  Integer pivots[3];
  Real    x1[3];
  Real    x2[3];
algorithm
  (LU, pivots) := Matrices.LU(A);
  x1 := Matrices.LU_solve(LU, pivots, b1);  // x1 = {3,2,1}
  x2 := Matrices.LU_solve(LU, pivots, b2);  // x2 = {1,0,2}

See also

Matrices.LU, Matrices.solve,

Extends from Modelica.​Icons.​Function (Icon for functions).

Inputs

Type     Name                  Description
Real     LU[:,size(LU, 1)]     L,U factors of Matrices.LU(..) for a square matrix
Integer  pivots[size(LU, 1)]   Pivots indices of Matrices.LU(..)
Real     b[size(LU, 1)]        Right hand side vector of P*L*U*x=b

Outputs

Type     Name           Description
Real     x[size(b, 1)]  Solution vector such that P*L*U*x = b

Function Modelica.​Math.​Matrices.​LU_solve2
Solve real system of linear equations P*L*U*X=B with a B matrix and an LU decomposition (from LU(..))

Information

Syntax

Matrices.LU_solve2(LU, pivots, B);

Description

This function call returns the solution X of the linear systems of equations

P*L*U*X = B;

where P is a permutation matrix (implicitly defined by vector pivots), L is a lower triangular matrix with unit diagonal elements (lower trapezoidal if m > n), and U is an upper triangular matrix (upper trapezoidal if m < n). The matrices of this decomposition are computed with function Matrices.LU that returns arguments LU and pivots used as input arguments of Matrices.LU_solve2. With Matrices.LU and Matrices.LU_solve2 it is possible to efficiently solve linear systems with different right hand side matrices. If a linear system of equations with just one right hand side matrix shall be solved, it is more convenient to just use the function Matrices.solve2.

If a unique solution X does not exist (since the LU decomposition is singular), an exception is raised.

The LU factorization is computed with the LAPACK function "dgetrf", i.e., by Gaussian elimination using partial pivoting with row interchanges. Vector "pivots" are the pivot indices, i.e., for 1 ≤ i ≤ min(m,n), row i of matrix A was interchanged with row pivots[i].

Example

  Real A[3,3] = [1,2,3;
                 3,4,5;
                 2,1,4];
  Real B1[3,2] = [10, 20;
                  22, 44;
                  12, 24];
  Real B2[3,2] = [ 7, 14;
                  13, 26;
                  10, 20];
  Real    LU[3,3];
  Integer pivots[3];
  Real    X1[3,2];
  Real    X2[3,2];
algorithm
  (LU, pivots) := Matrices.LU(A);
  X1 := Matrices.LU_solve2(LU, pivots, B1);  /* X1 = [3, 6;
                                                      2, 4;
                                                      1, 2] */
  X2 := Matrices.LU_solve2(LU, pivots, B2);  /* X2 = [1, 2;
                                                      0, 0;
                                                      2, 4] */

See also

Matrices.LU, Matrices.solve2,

Extends from Modelica.​Icons.​Function (Icon for functions).

Inputs

Type     Name                  Description
Real     LU[:,size(LU, 1)]     L,U factors of Matrices.LU(..) for a square matrix
Integer  pivots[size(LU, 1)]   Pivots indices of Matrices.LU(..)
Real     B[size(LU, 1),:]      Right hand side matrix of P*L*U*X=B

Outputs

Type     Name                       Description
Real     X[size(B, 1),size(B, 2)]   Solution matrix such that P*L*U*X = B

Function Modelica.​Math.​Matrices.​eigenValues
Return eigenvalues for a real, nonsymmetric matrix in a Real representation

Information

Syntax

                eigenvalues = Matrices.eigenValues(A);
(eigenvalues, eigenvectors) = Matrices.eigenValues(A);

Description

This function call returns the eigenvalues and optionally the (right) eigenvectors of a square matrix A. The first column of "eigenvalues" contains the real and the second column contains the imaginary part of the eigenvalues. If the i-th eigenvalue has no imaginary part, then eigenvectors[:,i] is the corresponding real eigenvector. If the i-th eigenvalue has an imaginary part, then eigenvalues[i+1,:] is the conjugate complex eigenvalue and eigenvectors[:,i] is the real and eigenvectors[:,i+1] is the imaginary part of the eigenvector of the i-th eigenvalue. With function Matrices.eigenValueMatrix, a real block diagonal matrix is constructed from the eigenvalues such that

A = eigenvectors * eigenValueMatrix(eigenvalues) * inv(eigenvectors)

provided the eigenvector matrix "eigenvectors" can be inverted (an inversion is possible, if all eigenvalues are different; in some cases, an inversion is also possible if some eigenvalues are the same).

Example

  Real A[3,3] = [1,2,3;
                 3,4,5;
                 2,1,4];
  Real eval[3,2];
algorithm
  eval := Matrices.eigenValues(A);  // eval = [-0.618, 0;
                                    //          8.0  , 0;
                                    //          1.618, 0];

i.e., matrix A has the 3 real eigenvalues -0.618, 8, 1.618.

See also

Matrices.eigenValueMatrix, Matrices.singularValues

Extends from Modelica.​Icons.​Function (Icon for functions).

Inputs

Type  Name                                  Description
Real  A[:,size(A, 1)]                       Matrix

Outputs

Type  Name                                  Description
Real  eigenvalues[size(A, 1),2]             Eigenvalues of matrix A (Re: first column, Im: second column)
Real  eigenvectors[size(A, 1),size(A, 2)]   Real-valued eigenvector matrix

Function Modelica.​Math.​Matrices.​eigenValueMatrix
Return real valued block diagonal matrix J of eigenvalues of matrix A (A=V*J*Vinv)

Information

Syntax

Matrices.eigenValueMatrix(eigenvalues);

Description

The function call returns a block diagonal matrix J from the two-column matrix eigenvalues (computed by function Matrices.eigenValues). Matrix eigenvalues must have the real part of the eigenvalues in the first column and the imaginary part in the second column. If an eigenvalue i has a vanishing imaginary part, then J[i,i] = eigenvalues[i,1], i.e., the diagonal element of J is the real eigenvalue. Otherwise, eigenvalue i and conjugate complex eigenvalue i+1 are used to construct a 2 by 2 diagonal block of J:

  J[i  , i]   := eigenvalues[i,1];
  J[i  , i+1] := eigenvalues[i,2];
  J[i+1, i]   := eigenvalues[i+1,2];
  J[i+1, i+1] := eigenvalues[i+1,1];
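
Example

A sketch that combines eigenValues and eigenValueMatrix for a matrix with one complex conjugate eigenvalue pair; the values in the comments are hand-computed (the eigenvalues of A are -1 ± j, and the pair is listed with the positive imaginary part first, following the LAPACK convention):

  Real A[2,2] = [ 0,  1;
                 -2, -2];
  Real ev[2,2];
  Real J[2,2];
algorithm
  ev := Matrices.eigenValues(A);       // ev = [-1,  1;
                                       //       -1, -1]
  J  := Matrices.eigenValueMatrix(ev); // J  = [-1,  1;
                                       //       -1, -1]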

See also

Matrices.eigenValues

Extends from Modelica.​Icons.​Function (Icon for functions).

Inputs

Type  Name                                          Description
Real  eigenValues[:,2]                              Eigen values from function eigenValues(..) (Re: first column, Im: second column)

Outputs

Type  Name                                          Description
Real  J[size(eigenValues, 1),size(eigenValues, 1)]  Real valued block diagonal matrix with eigen values (Re: 1x1 block, Im: 2x2 block)

Function Modelica.​Math.​Matrices.​singularValues
Return singular values and left and right singular vectors

Information

Syntax

         sigma = Matrices.singularValues(A);
(sigma, U, VT) = Matrices.singularValues(A);

Description

This function computes the singular values and optionally the singular vectors of matrix A. Basically the singular value decomposition of A is computed, i.e.,

A = U*Sigma*VT

where U and V = transpose(VT) are orthogonal matrices (transpose(U)*U = I, transpose(V)*V = I). Sigma = [diagonal(si), zeros(n,m-n)] if n = size(A,1) ≤ m = size(A,2), or Sigma = [diagonal(si); zeros(n-m,m)] if n > m. Sigma has the same size as matrix A, with the nonnegative singular values on its diagonal in decreasing order and all other elements zero (s1 is the largest element). The function returns the singular values si in vector sigma and the orthogonal matrices in U and VT.

Example

  A = [1, 2,  3,  4;
       3, 4,  5, -2;
      -1, 2, -3,  5];
  (sigma, U, VT) = singularValues(A);
  results in:
     sigma = {8.33, 6.94, 2.31};
  i.e.
     Sigma = [8.33,    0,    0, 0;
                 0, 6.94,    0, 0;
                 0,    0, 2.31, 0]

See also

Matrices.eigenValues

Extends from Modelica.​Icons.​Function (Icon for functions).

Inputs

Type  Name                                Description
Real  A[:,:]                              Matrix

Outputs

Type  Name                                Description
Real  sigma[min(size(A, 1), size(A, 2))]  Singular values
Real  U[size(A, 1),size(A, 1)]            Left orthogonal matrix
Real  VT[size(A, 2),size(A, 2)]           Transposed right orthogonal matrix

Function Modelica.​Math.​Matrices.​QR
Return the QR decomposition of a square matrix with optional column pivoting (A(:,p) = Q*R)

Information

Syntax

(Q,R,p) = Matrices.QR(A);

Description

This function returns the QR decomposition of a rectangular matrix A (the number of columns of A must be less than or equal to the number of rows):

Q*R = A[:,p]

where Q is a rectangular matrix that has orthonormal columns and has the same size as A (transpose(Q)*Q = I), R is a square, upper triangular matrix and p is a permutation vector. Due to the column pivoting, the diagonal elements of R are sorted by decreasing magnitude, i.e., abs(R[1,1]) ≥ abs(R[2,2]) ≥ ...

This means that if abs(R[i,i]) ≤ ε then abs(R[j,k]) ≤ ε for j ≥ i, i.e., the i-th row up to the last row of R have small elements and can be treated as being zero. This allows one to, e.g., estimate the row-rank of R (which is the same as the row-rank of A). Furthermore, R can be partitioned in two parts

   A[:,p] = Q * [R1, R2;
                 0,  0]

where R1 is a regular, upper triangular matrix.

Note, the solution is computed with the LAPACK functions "dgeqpf" and "dorgqr", i.e., by Householder transformations with column pivoting. If Q is not needed, the function may be called as: (,R,p) = QR(A).

Example

  Real A[3,3] = [1,2,3;
                 3,4,5;
                 2,1,4];
  Real R[3,3];
algorithm
  (,R) := Matrices.QR(A);  // R = [-7.07.., -4.24.., -3.67..;
                           //       0     , -1.73.., -0.23..;
                           //       0     ,  0     ,  0.65..];

Extends from Modelica.​Icons.​Function (Icon for functions).

Inputs

Type     Name                       Description
Real     A[:,:]                     Rectangular matrix with size(A,1) >= size(A,2)
Boolean  pivoting                   True if column pivoting is performed. True is default

Outputs

Type     Name                       Description
Real     Q[size(A, 1),size(A, 2)]   Rectangular matrix with orthonormal columns such that Q*R=A[:,p]
Real     R[size(A, 2),size(A, 2)]   Square upper triangular matrix
Integer  p[size(A, 2)]              Column permutation vector

Function Modelica.​Math.​Matrices.​hessenberg
Return upper Hessenberg form of a matrix

Information

Syntax

         H = Matrices.hessenberg(A);
    (H, U) = Matrices.hessenberg(A);
 

Description

Function hessenberg computes the Hessenberg matrix H of matrix A as well as the orthogonal transformation matrix U that holds H = U'*A*U. The Hessenberg form of a matrix is computed by repeated Householder similarity transformations. The elementary reflectors and the corresponding scalar factors are provided by function "Utilities.toUpperHessenberg()". The transformation matrix U is then computed by LAPACK.dorghr.

Example

 A  = [1, 2,  3;
       6, 5,  4;
       1, 0,  0];

 (H, U) = hessenberg(A);

  results in:

 H = [1.0,  -2.466,  2.630;
     -6.083, 5.514, -3.081;
      0.0,   0.919, -0.514]

 U = [1.0,    0.0,      0.0;
      0.0,   -0.9864,  -0.1644;
      0.0,   -0.1644,   0.9864]

  and therefore,

 U*H*transpose(U) = [1.0, 2.0, 3.0;
                     6.0, 5.0, 4.0;
                     1.0, 0.0, 0.0]

See also

Matrices.Utilities.toUpperHessenberg

Extends from Modelica.​Icons.​Function (Icon for functions).

Inputs

Type  Name                       Description
Real  A[:,size(A, 1)]            Square matrix A

Outputs

Type  Name                       Description
Real  H[size(A, 1),size(A, 2)]   Hessenberg form of A
Real  U[size(A, 1),size(A, 2)]   Transformation matrix

Function Modelica.​Math.​Matrices.​realSchur
Return the real Schur form (rsf) S of a square matrix A, A=QZ*S*QZ'

Information

Syntax

                            S = Matrices.realSchur(A);
(S, QZ, alphaReal, alphaImag) = Matrices.realSchur(A);

Description

Function realSchur calculates the real Schur form of a real square matrix A, i.e.

 A = QZ*S*transpose(QZ)

with the real nxn matrices S and QZ. S is a block upper triangular matrix with 1x1 and 2x2 blocks on the diagonal, and QZ is an orthogonal matrix. The 1x1 blocks contain the real eigenvalues of A. The 2x2 blocks [s11, s12; s21, s11] represent the complex conjugate pairs of eigenvalues, where the real parts of the eigenvalues are the diagonal elements (s11) and the imaginary parts are the positive and negative square roots of the negative product of the two elements s12 and s21 (imag = +-sqrt(-s12*s21)).

The calculation in LAPACK.dgees is performed stepwise, i.e., using the internal methods of balancing and scaling of dgees.

Example

   Real A[3,3] = [1, 2, 3; 4, 5, 6; 7, 8, 9];
   Real T[3,3];
   Real Z[3,3];
   Real alphaReal[3];
   Real alphaImag[3];

algorithm
  (T, Z, alphaReal, alphaImag):=Modelica.Math.Matrices.realSchur(A);
//   T = [16.12, 4.9,   1.59E-015;
//        0,    -1.12, -1.12E-015;
//        0,     0,    -1.30E-015]
//   Z = [-0.23,  -0.88,   0.41;
//        -0.52,  -0.24,  -0.82;
//        -0.82,   0.4,    0.41]
//alphaReal = {16.12, -1.12, -1.32E-015}
//alphaImag = {0, 0, 0}

See also

Math.Matrices.Utilities.reorderRSF

Extends from Modelica.​Icons.​Function (Icon for functions).

Inputs

Type  Name                       Description
Real  A[:,size(A, 1)]            Square matrix

Outputs

Type  Name                       Description
Real  S[size(A, 1),size(A, 2)]   Real Schur form of A
Real  QZ[size(A, 1),size(A, 2)]  Schur vector Matrix
Real  alphaReal[size(A, 1)]      Real part of eigenvalue=alphaReal+i*alphaImag
Real  alphaImag[size(A, 1)]      Imaginary part of eigenvalue=alphaReal+i*alphaImag

Function Modelica.​Math.​Matrices.​cholesky
Return the Cholesky factorization of a symmetric positive definite matrix

Information

Syntax

         H = Matrices.cholesky(A);
         H = Matrices.cholesky(A, upper=true);
 

Description

Function cholesky computes the Cholesky factorization of a real symmetric positive definite matrix A. The optional Boolean input "upper" specifies whether the upper or the lower triangular matrix is returned, i.e.

 A = H'*H   if upper is true (H is upper triangular)
 A = H*H'   if upper is false (H is lower triangular)

The computation is performed by LAPACK.dpotrf.

Example

  A  = [1, 0,  0;
        6, 5,  0;
        1, -2,  2];
  S = A*transpose(A);

  H = Matrices.cholesky(S);

  results in:

  H = [1.0,  6.0,  1.0;
       0.0,  5.0, -2.0;
       0.0,  0.0,  2.0]

  with

  transpose(H)*H = [1.0,  6.0,   1;
                    6.0, 61.0,  -4.0;
                    1.0, -4.0,   9.0] //=S

Extends from Modelica.​Icons.​Function (Icon for functions).

Inputs

Type     Name                       Description
Real     A[:,size(A, 1)]            Symmetric positive definite matrix
Boolean  upper                      True if the right cholesky factor (upper triangle) should be returned

Outputs

Type     Name                       Description
Real     H[size(A, 1),size(A, 2)]   Cholesky factor U (upper=true) or L (upper=false) for A = U'*U or A = L*L'

Function Modelica.​Math.​Matrices.​balance
Return a balanced form of matrix A to improve the condition of A

Information

Syntax

(D,B) = Matrices.balance(A);

Description

This function returns a vector D, such that B=inv(diagonal(D))*A*diagonal(D) has a better condition than matrix A, i.e., conditionNumber(B) ≤ conditionNumber(A). The elements of D are integer powers of 2, which means that this function does not introduce round-off errors. Balancing attempts to make the norm of each row of B equal to the norm of the respective column.

Balancing is used to minimize roundoff errors induced through large matrix calculations like Taylor-series approximation or computation of eigenvalues.

Example

       - A = [1, 10,  1000; 0.01,  0,  10; 0.005,  0.01,  10]
       - Matrices.norm(A, 1);
         = 1020.0
       - (T,B)=Matrices.balance(A)
       - T
         = {256, 16, 0.5}
       - B
         =  [1,     0.625,   1.953125;
             0.16,  0,       0.3125;
             2.56,  0.32,   10.0]
       - Matrices.norm(B, 1);
         = 12.265625

The algorithm is taken from

H. D. Joos, G. Grübel:
RASP'91 Regulator Analysis and Synthesis Programs
DLR - Control Systems Group 1991

which is based on the balance function from EISPACK.

Extends from Modelica.​Icons.​Function (Icon for functions).

Inputs

Type  Name                       Description
Real  A[:,size(A, 1)]

Outputs

Type  Name                       Description
Real  D[size(A, 1)]              diagonal(D)=T is transformation matrix, such that B = inv(T)*A*T has smaller condition as A
Real  B[size(A, 1),size(A, 1)]   Balanced matrix (= inv(diagonal(D))*A*diagonal(D) )

Function Modelica.​Math.​Matrices.​balanceABC
Return a balanced form of a system [A,B;C,0] to improve its condition by a state transformation

Information

Syntax

(scale,As,Bs,Cs) = Matrices.balanceABC(A,B,C);
(scale,As,Bs)    = Matrices.balanceABC(A,B);
(scale,As,,Cs)   = Matrices.balanceABC(A,C=C);

Description

This function returns a vector scale, such that with T=diagonal(scale) system matrix S_scale

             |inv(T)*A*T, inv(T)*B|
   S_scale = |                    |
             |       C*T,     0   |

has a better condition than system matrix S

       |A, B|
   S = |    |
       |C, 0|

that is, conditionNumber(S_scale) ≤ conditionNumber(S). The elements of vector scale are integer powers of 2, which means that this function does not introduce round-off errors.

Balancing a linear dynamic system in state space form

  der(x) = A*x + B*u
      y  = C*x + D*u

means to find a state transformation x_new = T*x = diagonal(scale)*x so that the transformed system is better suited for numerical algorithms.

Example

import Modelica.Math.Matrices;

A = [1, -10,  1000; 0.01,  0,  10; 0.005,  -0.01,  10];
B = [100, 10; 1,0; -0.003, 1];
C = [-0.5, 1, 100];

(scale, As, Bs, Cs) := Matrices.balanceABC(A,B,C);
T    = diagonal(scale);
Diff = [Matrices.inv(T)*A*T, Matrices.inv(T)*B;
        C*T, zeros(1,2)] - [As, Bs; Cs, zeros(1,2)];
err  = Matrices.norm(Diff);

-> Results in:
scale = {16, 1, 0.0625}
norm(A)  = 1000.15, norm(B)  = 100.504, norm(C)  = 100.006
norm(As) = 10.8738, norm(Bs) = 16.0136, norm(Cs) = 10.2011
err = 0

The algorithm is taken from

H. D. Joos, G. Grübel:
RASP'91 Regulator Analysis and Synthesis Programs
DLR - Control Systems Group 1991

which is based on the balance function from EISPACK.

Extends from Modelica.​Icons.​Function (Icon for functions).

Inputs

Type  Name                        Description
Real  A[:,size(A, 1)]             System matrix A
Real  B[size(A, 1),:]             System matrix B (need not be present)
Real  C[:,size(A, 1)]             System matrix C (need not be present)

Outputs

Type  Name                        Description
Real  scale[size(A, 1)]           diagonal(scale)=T is such that [inv(T)*A*T, inv(T)*B; C*T, 0] has smaller condition as [A,B;C,0]
Real  As[size(A, 1),size(A, 1)]   Balanced matrix A (= inv(T)*A*T )
Real  Bs[size(A, 1),size(B, 2)]   Balanced matrix B (= inv(T)*B )
Real  Cs[size(C, 1),size(A, 1)]   Balanced matrix C (= C*T )

Function Modelica.​Math.​Matrices.​trace
Return the trace of matrix A, i.e., the sum of the diagonal elements

Information

Syntax

  r = Matrices.trace(A);

Description

This function computes the trace, i.e., the sum of the elements in the diagonal of matrix A.

Example

  A = [1, 3;
       2, 1];
  r = trace(A);

  results in:

  r = 2.0

Extends from Modelica.​Icons.​Function (Icon for functions).

Inputs

Type  Name             Description
Real  A[:,size(A, 1)]  Square matrix A

Outputs

Type  Name    Description
Real  result  Trace of A

Function Modelica.​Math.​Matrices.​det
Return determinant of a matrix (computed by LU decomposition; try to avoid det(..))

Information

Syntax

result = Matrices.det(A);

Description

This function returns the determinant "result" of matrix A, computed by an LU decomposition with row pivoting. For details about determinants, see http://en.wikipedia.org/wiki/Determinant. Usually, this function should never be used, because there are nearly always better numerical algorithms than computing the determinant. For example, use Matrices.rank to check whether a matrix is singular and Matrices.solve to solve a linear system of equations, instead of basing these operations on det(..).
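
Example

A minimal call sketch; the determinant in the comment is hand-computed for this matrix:

  Real A[3,3] = [1,2,3;
                 3,4,5;
                 2,1,4];
  Real d;
algorithm
  d := Matrices.det(A);  // d = -8.0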

See also

Matrices.rank, Matrices.solve

Extends from Modelica.​Icons.​Function (Icon for functions).

Inputs

Type  Name             Description
Real  A[:,size(A, 1)]

Outputs

Type  Name    Description
Real  result  Determinant of matrix A

Function Modelica.​Math.​Matrices.​inv
Return inverse of a matrix (try to avoid inv(..))

Information

Syntax

invA = Matrices.inv(A);

Description

This function returns the inverse of matrix A, i.e., A*inv(A) = identity(size(A,1)), computed by an LU decomposition with row pivoting. Usually, this function should not be used, because there are nearly always better numerical algorithms than computing the inverse directly. Example:

Use x = Matrices.solve(A,b) to solve the linear equation A*x = b, instead of computing the solution by x = inv(A)*b, because this is much more efficient and much more reliable.
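
A corresponding sketch that contrasts the discouraged use of inv with the recommended call to solve; the result in the comments is hand-computed for this data:

  Real A[3,3] = [1,2,3;
                 3,4,5;
                 2,1,4];
  Real b[3] = {10,22,12};
  Real x[3];
  Real invA[3,3];
algorithm
  x    := Matrices.solve(A, b);   // preferred: x = {3,2,1}
  invA := Matrices.inv(A);
  x    := invA*b;                 // discouraged: same result, but less efficient and less reliable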

See also

Matrices.solve Matrices.solve2

Extends from Modelica.​Icons.​Function (Icon for functions).

Inputs

Type  Name                          Description
Real  A[:,size(A, 1)]

Outputs

Type  Name                          Description
Real  invA[size(A, 1),size(A, 2)]   Inverse of matrix A

Function Modelica.​Math.​Matrices.​rank
Return rank of a rectangular matrix (computed with singular values)

Information

Syntax

result = Matrices.rank(A);
result = Matrices.rank(A,eps=0);

Description

This function returns the rank of a square or rectangular matrix A, computed by singular value decomposition. For details about the rank of a matrix, see http://en.wikipedia.org/wiki/Matrix_rank. To be more precise, the rank is returned as the number of singular values of A that are larger than eps (if the input eps > 0 is given) or larger than max(size(A))*norm(A)*Modelica.Constants.eps otherwise.
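
Example

A sketch with a full rank and a rank deficient matrix; the ranks in the comments are hand-computed:

  Real A1[3,3] = [1,2,3;
                  3,4,5;
                  2,1,4];
  Real A2[2,2] = [1,2;
                  2,4];
  Integer r1;
  Integer r2;
algorithm
  r1 := Matrices.rank(A1);  // r1 = 3 (A1 is regular, det(A1) = -8)
  r2 := Matrices.rank(A2);  // r2 = 1 (second row is twice the first row)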

See also

Matrices.rcond.

Extends from Modelica.​Icons.​Function (Icon for functions).

Inputs

Type     Name    Description
Real     A[:,:]  Matrix
Real     eps     If eps > 0, the singular values are checked against eps; otherwise eps=max(size(A))*norm(A)*Modelica.Constants.eps is used

Outputs

Type     Name    Description
Integer  result  Rank of matrix A

Function Modelica.​Math.​Matrices.​conditionNumber
Return the condition number norm(A)*norm(inv(A)) of a matrix A

Information

Syntax

r = Matrices.conditionNumber(A);

Description

This function calculates the condition number (norm(A) * norm(inv(A))) of a general real matrix A, in either the 1-norm, 2-norm or the infinity-norm. In the case of 2-norm the result is the ratio of the largest to the smallest singular value of A. For more details, see http://en.wikipedia.org/wiki/Condition_number.

Example

  A = [1, 2;
       2, 1];
  r = conditionNumber(A);

  results in:

  r = 3.0

See also

Matrices.rcond

Extends from Modelica.​Icons.​Function (Icon for functions).

Inputs

Type  Name    Description
Real  A[:,:]  Input matrix
Real  p       Type of p-norm (only allowed: 1, 2 or Modelica.Constants.inf)

Outputs

Type  Name    Description
Real  result  Condition number of matrix A

Function Modelica.​Math.​Matrices.​rcond
Return the reciprocal condition number of a matrix

Information

Syntax

r = Matrices.rcond(A);

Description

This function estimates the reciprocal of the condition number (norm(A) * norm(inv(A))) of a general real matrix A, in either the 1-norm or the infinity-norm, using the LAPACK function DGECON. If rcond(A) is near 1.0, A is well conditioned; if rcond(A) is near zero, A is ill conditioned.

Example

  A = [1, 2;
       2, 1];
  r = rcond(A);

  results in:

  r = 0.3333

See also

Matrices.conditionNumber

Extends from Modelica.​Icons.​Function (Icon for functions).

Inputs

Type     Name             Description
Real     A[:,size(A, 1)]  Square real matrix
Boolean  inf              Is true if infinity norm is used and false for 1-norm

Outputs

Type     Name   Description
Real     rcond  Reciprocal condition number of A
Integer  info   Information

Function Modelica.​Math.​Matrices.​norm
Return the p-norm of a matrix

Information

Syntax

Matrices.norm(A);
Matrices.norm(A, p=2);

Description

The function call "Matrices.norm(A)" returns the 2-norm of matrix A, i.e., the largest singular value of A.
The function call "Matrices.norm(A, p)" returns the p-norm of matrix A. The only allowed values for p are

Note, for any matrices A1, A2 the following inequality holds:

Matrices.norm(A1+A2,p) ≤ Matrices.norm(A1,p) + Matrices.norm(A2,p)

Note, for any matrix A and vector v the following inequality holds:

Vectors.norm(A*v,p) ≤ Matrices.norm(A,p)*Vectors.norm(A,p)
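
Example

A sketch of the three supported norms for a small non-symmetric matrix; the values in the comments are hand-computed (the 2-norm is rounded):

  Real A[2,2] = [ 1, 2;
                 -3, 4];
  Real r1, r2, rInf;
algorithm
  r1   := Matrices.norm(A, 1);                       // r1   = 6.0  (largest column sum)
  r2   := Matrices.norm(A);                          // r2   = 5.12 (largest singular value)
  rInf := Matrices.norm(A, Modelica.Constants.inf);  // rInf = 7.0  (largest row sum)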

See also

Matrices.frobeniusNorm

Extends from Modelica.​Icons.​Function (Icon for functions).

Inputs

Type  Name    Description
Real  A[:,:]  Input matrix
Real  p       Type of p-norm (only allowed: 1, 2 or Modelica.Constants.inf)

Outputs

Type  Name    Description
Real  result  p-norm of matrix A

Function Modelica.​Math.​Matrices.​frobeniusNorm
Return the Frobenius norm of a matrix

Information

Syntax

  r = Matrices.frobeniusNorm(A);

Description

This function computes the Frobenius norm of a general real matrix A, i.e., the square root of the sum of the squares of all elements.

Example

  A = [1, 2;
       2, 1];
  r = frobeniusNorm(A);

  results in:

  r = 3.162;

See also

Matrices.norm

Extends from Modelica.​Icons.​Function (Icon for functions).

Inputs

Type  Name    Description
Real  A[:,:]  Input matrix

Outputs

Type  Name    Description
Real  result  Frobenius norm of matrix A

Function Modelica.​Math.​Matrices.​nullSpace
Return the orthonormal nullspace of a matrix

Information

Syntax

           Z = Matrices.nullspace(A);
(Z, nullity) = Matrices.nullspace(A);

Description

This function calculates an orthonormal basis Z=[z_1, z_2, ...] of the nullspace of a matrix A, i.e., A*z_i=0.

The nullspace is obtained by the SVD method. That is, matrix A is decomposed into the matrices S, U, V:

 A = U*S*transpose(V)

with the orthonormal matrices U and V and the matrix S with

 S = [S1, 0]
 S1 = [diag(s); 0]

and the singular values s={s1, s2, ..., sr} of A with r=rank(A). Note that S has the same size as A. Since U and V are orthonormal, we may write

 transpose(U)*A*V = [S1, 0].

Matrix S1 obviously has full column rank, and therefore the last n-r columns of matrix V (n is the number of columns of A or S) span the nullspace of A.

The nullity of matrix A is the dimension of the nullspace of A. In view of the above, the nullity is given by

 nullity = n - r

with

 n = number of columns of matrix A
 r = rank(A)

Example

  A = [1, 2,  3, 1;
       3, 4,  5, 2;
      -1, 2, -3, 3];
  (Z, nullity) = nullspace(A);

  results in:

  Z=[0.1715;
    -0.686;
     0.1715;
     0.686]

  nullity = 1

See also

Matrices.singularValues

Extends from Modelica.​Icons.​Function (Icon for functions).

Inputs

Type     Name              Description
Real     A[:,:]            Input matrix

Outputs

Type     Name              Description
Real     Z[size(A, 2),:]   Orthonormal nullspace of matrix A
Integer  nullity           Nullity, i.e., the dimension of the nullspace

Function Modelica.​Math.​Matrices.​exp
Return the exponential of a matrix by adaptive Taylor series expansion with scaling and balancing

Information

Syntax

phi = Matrices.exp(A);
phi = Matrices.exp(A,T=1);

Description

This function computes the exponential e^(AT) of matrix A, i.e.

                            (AT)^2   (AT)^3
      Φ = e^(AT) = I + AT + ------ + ------ + ....
                              2!       3!

where e=2.71828..., A is an n x n matrix with real elements and T is a real number, e.g., the sampling time. A may be singular. With the exponential of a matrix it is, e.g., possible to compute the solution of a linear system of differential equations

    der(x) = A*x   ->   x(t0 + T) = e^(AT)*x(t0)

Algorithmic details:

The algorithm is taken from

H. D. Joos, G. Grübel:
RASP'91 Regulator Analysis and Synthesis Programs
DLR - Control Systems Group 1991

The following steps are performed to calculate the exponential of A:

  1. Matrix A is balanced
    (= is transformed with a diagonal matrix D, such that inv(D)*A*D has a smaller condition than A).
  2. The scalar T is divided by a power of 2 (2^k) such that norm( inv(D)*A*D*T/2^k ) < 0.5. Note that (1) and (2) are implemented such that no round-off errors are introduced.
  3. The matrix from (2) is approximated by explicitly performing the Taylor series expansion with a variable number of terms. Truncation occurs if a new term no longer contributes to the value of Φ from the previous iteration.
  4. The resulting matrix is transformed back, by reverting the steps of (2) and (1).

In several sources it is not recommended to use Taylor series expansion to calculate the exponential of a matrix, such as in 'C.B. Moler and C.F. Van Loan: Nineteen dubious ways to compute the exponential of a matrix. SIAM Review 20, pp. 801-836, 1979' or in the documentation of m-file expm2 in MATLAB version 6 (http://www.mathworks.com) where it is stated that 'As a practical numerical method, this is often slow and inaccurate'. These statements are valid for a direct implementation of the Taylor series expansion, but not for the implementation variant used in this function.
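
Example

A sketch for a nilpotent matrix (a double integrator), for which the Taylor series terminates and the result in the comment is exact:

  Real A[2,2] = [0, 1;
                 0, 0];
  Real phi[2,2];
algorithm
  phi := Matrices.exp(A, 2);  // phi = [1, 2;
                              //        0, 1]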

Extends from Modelica.​Icons.​Function (Icon for functions).

Inputs

Type  Name                          Description
Real  A[:,size(A, 1)]
Real  T

Outputs

Type  Name                          Description
Real  phi[size(A, 1),size(A, 1)]    = exp(A*T)

Function Modelica.​Math.​Matrices.​integralExp
Return the exponential and the integral of the exponential of a matrix

Information

Syntax

(phi,gamma) = Matrices.integralExp(A,B);
(phi,gamma) = Matrices.integralExp(A,B,T=1);

Description

This function computes the exponential phi = e^(AT) of matrix A and the integral gamma = integral(phi*dt)*B.

The function uses a Taylor series expansion with Balancing and scaling/squaring to approximate the integral Ψ of the matrix exponential Φ=e^(AT):

                                 AT^2   A^2 * T^3          A^k * T^(k+1)
        Ψ = int(e^(As))ds = IT + ---- + --------- + ... + --------------
                                  2!        3!                (k+1)!

Φ is calculated through Φ = I + A*Ψ, so A may be singular. Γ is simply Ψ*B.

The algorithm runs in the following steps:

  1. Balancing
  2. Scaling
  3. Taylor series expansion
  4. Re-scaling
  5. Re-Balancing

Balancing puts the bad condition of a square matrix A into a diagonal transformation matrix D. This reduces the effort of the following calculations. Afterwards the result has to be re-balanced by the transformation D*Atransf*inv(D).
Scaling halves T k times until the norm of A*T is less than 0.5. This guarantees minimum rounding errors in the following series expansion. The re-scaling is based on the equation exp(A*2T) = exp(AT)^2. The needed re-scaling formula for Ψ thus becomes:

         Φ = Φ'*Φ'
   I + A*Ψ = I + 2A*Ψ' + A^2*Ψ'^2
         Ψ = A*Ψ'^2 + 2*Ψ'

where Ψ' is the scaled result from the series expansion and Ψ is the re-scaled matrix.

The function is normally used to discretize a state-space system as the zero-order-hold equivalent:

      x(k+1) = Φ*x(k) + Γ*u(k)
        y(k) = C*x(k) + D*u(k)

The zero-order-hold sampling, also known as step-invariant method, gives exact values of the state variables, under the assumption that the control signal u is constant between the sampling instants. Zero-order-hold sampling is described in

K. J. Åström, B. Wittenmark:
Computer Controlled Systems - Theory and Design
Third Edition, p. 32
Syntax:
      (phi,gamma) = Matrices.integralExp(A,B,T)
                       A,phi: [n,n] square matrices
                     B,gamma: [n,m] input and output matrices
                           T: scalar, e.g., sampling time

The Algorithm to calculate psi is taken from

H. D. Joos, G. Grübel:
RASP'91 Regulator Analysis and Synthesis Programs
DLR - Control Systems Group 1991
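
Example

A sketch that discretizes a double integrator with sample time T = 1; since A is nilpotent, the values in the comments are exact and hand-computed:

  Real A[2,2] = [0, 1;
                 0, 0];
  Real B[2,1] = [0;
                 1];
  Real phi[2,2];
  Real gamma[2,1];
algorithm
  (phi, gamma) := Matrices.integralExp(A, B, 1);
  // phi   = [1, 1;
  //          0, 1]
  // gamma = [0.5;
  //          1.0]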

Extends from Modelica.​Icons.​Function (Icon for functions).

Inputs

Type  Name                           Description
Real  A[:,size(A, 1)]
Real  B[size(A, 1),:]
Real  T

Outputs

Type  Name                           Description
Real  phi[size(A, 1),size(A, 1)]     = exp(A*T)
Real  gamma[size(A, 1),size(B, 2)]   = integral(phi)*B

Function Modelica.​Math.​Matrices.​integralExpT
Return the exponential, the integral of the exponential, and time-weighted integral of the exponential of a matrix

Information

Syntax

(phi,gamma,gamma1) = Matrices.integralExpT(A,B);
(phi,gamma,gamma1) = Matrices.integralExpT(A,B,T=1);

Description

This function computes the exponential phi = e^(AT) of matrix A, the integral gamma = integral(phi*dt)*B, and the time-weighted integral gamma1 = integral((T-t)*exp(A*t)*dt)*B, where A is a square (n,n) matrix and B, gamma, and gamma1 are (n,m) matrices.

The function calculates the matrices phi,gamma,gamma1 through the equation:

                                 [ A B 0 ]
[phi gamma gamma1] = [I 0 0]*exp([ 0 0 I ]*T)
                                 [ 0 0 0 ]

The matrices define the discretized first-order-hold equivalent of a state-space system:

      x(k+1) = phi*x(k) + gamma*u(k) + gamma1/T*(u(k+1) - u(k))

The first-order-hold sampling, also known as ramp-invariant method, gives smoother control signals than the ZOH equivalent. First-order-hold sampling is, e.g., described in

K. J. Åström, B. Wittenmark:
Computer Controlled Systems - Theory and Design
Third Edition, p. 256
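
Example

A sketch for the same double integrator with sample time T = 1 (first-order-hold discretization); since A is nilpotent, the values in the comments are exact and hand-computed:

  Real A[2,2] = [0, 1;
                 0, 0];
  Real B[2,1] = [0;
                 1];
  Real phi[2,2];
  Real gamma[2,1];
  Real gamma1[2,1];
algorithm
  (phi, gamma, gamma1) := Matrices.integralExpT(A, B, 1);
  // phi    = [1, 1;
  //           0, 1]
  // gamma  = [0.5;
  //           1.0]
  // gamma1 = [0.1667;
  //           0.5]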

Extends from Modelica.​Icons.​Function (Icon for functions).

Inputs

Type  Name                            Description
Real  A[:,size(A, 1)]
Real  B[size(A, 1),:]
Real  T

Outputs

Type  Name                            Description
Real  phi[size(A, 1),size(A, 1)]      = exp(A*T)
Real  gamma[size(A, 1),size(B, 2)]    = integral(phi)*B
Real  gamma1[size(A, 1),size(B, 2)]   = integral((T-t)*exp(A*t))*B

Function Modelica.​Math.​Matrices.​continuousLyapunov
Return solution X of the continuous-time Lyapunov equation X*A + A'*X = C

Information

Syntax

         X = Matrices.continuousLyapunov(A, C);
         X = Matrices.continuousLyapunov(A, C, ATisSchur, eps);

Description

This function computes the solution X of the continuous-time Lyapunov equation

 X*A + A'*X = C

using the Schur method for Lyapunov equations proposed by Bartels and Stewart [1].

In a nutshell, the problem is reduced to the corresponding problem

 Y*R' + R*Y = D

where R=U'*A'*U is the real Schur form of A', and D=U'*C*U and Y=U'*X*U are the corresponding transformations of C and X. This problem is solved sequentially for the 1x1 or 2x2 Schur blocks by exploiting the block triangular form of R. Finally the solution of the original problem is recovered as X=U*Y*U'.
The Boolean input "ATisSchur" indicates that the transformation to Schur form is to be omitted because A' is already in real Schur form.

References

  [1] Bartels, R.H. and Stewart G.W.
      Algorithm 432: Solution of the matrix equation AX + XB = C.
      Comm. ACM., Vol. 15, pp. 820-826, 1972.

Example

  A = [1, 2,  3,  4;
       3, 4,  5, -2;
      -1, 2, -3, -5;
       0, 2,  0,  6];

  C =  [-2, 3, 1, 0;
        -6, 8, 0, 1;
         2, 3, 4, 5;
        0, -2, 0, 0];

  X = continuousLyapunov(A, C);

  results in:

  X = [1.633, -0.761,  0.575, -0.656;
      -1.158,  1.216,  0.047,  0.343;
      -1.066, -0.052, -0.916,  1.61;
      -2.473,  0.717, -0.986,  1.48]

See also

Matrices.continuousSylvester, Matrices.discreteLyapunov

Extends from Modelica.​Icons.​Function (Icon for functions).

Inputs

Type     Name                       Description
Real     A[:,size(A, 1)]            Square matrix A in X*A + A'*X = C
Real     C[size(A, 1),size(A, 2)]   Square matrix C in X*A + A'*X = C
Boolean  ATisSchur                  True if transpose(A) has already real Schur form
Real     eps                        Tolerance eps

Outputs

Type     Name                       Description
Real     X[size(A, 1),size(A, 2)]   Solution X of the Lyapunov equation X*A + A'*X = C

Function Modelica.​Math.​Matrices.​continuousSylvester
Return solution X of the continuous-time Sylvester equation A*X + X*B = C

Information

Syntax

         X = Matrices.continuousSylvester(A, B, C);
         X = Matrices.continuousSylvester(A, B, C, AisSchur, BisSchur);

Description

Function continuousSylvester computes the solution X of the continuous-time Sylvester equation

 A*X + X*B = C.

using the Schur method for Sylvester equations proposed by Bartels and Stewart [1].

In a nutshell, the problem is reduced to the corresponding problem

 S*Y + Y*T = D.

where S=U'*A*U is the real Schur form of A, T=V'*B*V is the real Schur form of B, and D=U'*C*V and Y=U'*X*V are the corresponding transformations of C and X. This problem is solved sequentially by exploiting the block triangular form of S and T. Finally the solution of the original problem is recovered as X=U*Y*V'.
The Boolean inputs "AisSchur" and "BisSchur" indicate that one or both of the transformations to Schur form are to be omitted because A and/or B are already in real Schur form.

The function applies LAPACK-routine DTRSYL. See LAPACK.dtrsyl for more information.

References

  [1] Bartels, R.H. and Stewart G.W.
      Algorithm 432: Solution of the matrix equation AX + XB = C.
      Comm. ACM., Vol. 15, pp. 820-826, 1972.

Example

  A = [17.0,   24.0,   1.0,   8.0,   15.0 ;
       23.0,    5.0,   7.0,  14.0,   16.0 ;
        0.0,    6.0,  13.0,  20.0,   22.0;
        0.0,    0.0,  19.0,  21.0,    3.0 ;
        0.0,    0.0,   0.0,   2.0,    9.0];

  B =  [8.0, 1.0, 6.0;
        0.0, 5.0, 7.0;
        0.0, 9.0, 2.0];

  C = [62.0,  -12.0, 26.0;
       59.0,  -10.0, 31.0;
       70.0,  -6.0,   9.0;
       35.0,  31.0,  -7.0;
       36.0, -15.0,   7.0];

  X = continuousSylvester(A, B, C);

  results in:

  X = [0.0,  0.0,  1.0;
       1.0,  0.0,  0.0;
       0.0,  1.0,  0.0;
       1.0,  1.0, -1.0;
       2.0, -2.0,  1.0];

See also

Matrices.discreteSylvester, Matrices.continuousLyapunov

Extends from Modelica.​Icons.​Function (Icon for functions).

Inputs

Type     Name                       Description
Real     A[:,:]                     Square matrix A
Real     B[:,:]                     Square matrix B
Real     C[size(A, 1),size(B, 2)]   Matrix C
Boolean  AisSchur                   True if A has already real Schur form
Boolean  BisSchur                   True if B has already real Schur form

Outputs

Type     Name                       Description
Real     X[size(A, 1),size(B, 2)]   Solution of the continuous Sylvester equation

Function Modelica.​Math.​Matrices.​continuousRiccati
Return solution X of the continuous-time algebraic Riccati equation A'*X + X*A - X*B*inv(R)*B'*X + Q = 0 (care)

Information

Syntax

                                X = Matrices.continuousRiccati(A, B, R, Q);
        (X, alphaReal, alphaImag) = Matrices.continuousRiccati(A, B, R, Q, true);

Description

Function continuousRiccati computes the solution X of the continuous-time algebraic Riccati equation

 A'*X + X*A - X*G*X + Q = 0

with G = B*inv(R)*B' using the Schur vector approach proposed by Laub [1].

It is assumed that Q is symmetric and positive semidefinite and R is symmetric, nonsingular and positive definite, (A,B) is stabilizable and (A,Q) is detectable.

These assumptions are not checked in this function!

The assumptions guarantee that the Hamiltonian matrix

H = [A, -G; -Q, -A']

has no pure imaginary eigenvalue and can be put to an ordered real Schur form

U'*H*U = S = [S11, S12; 0, S22]

with orthogonal similarity transformation U. S is ordered in such a way that S11 contains the n stable eigenvalues of the closed loop system with system matrix A - B*inv(R)*B'*X. If U is partitioned to

U = [U11, U12; U21, U22]

with dimensions according to S, the solution X is calculated by

X*U11 = U21.

With optional input refinement=true a subsequent iterative refinement based on Newton's method with exact line search is applied. See continuousRiccatiIterative for more information.

References

  [1] Laub, A.J.
      A Schur Method for Solving Algebraic Riccati equations.
      IEEE Trans. Auto. Contr., AC-24, pp. 913-921, 1979.

Example

  A = [0.0, 1.0;
       0.0, 0.0];

  B = [0.0;
       1.0];

  R = [1.0];

  Q = [1.0, 0.0;
       0.0, 2.0];

  X = continuousRiccati(A, B, R, Q);

  results in:

  X = [2.0, 1.0;
       1.0, 2.0];

See also

Matrices.Utilities.continuousRiccatiIterative, Matrices.discreteRiccati

Extends from Modelica.​Icons.​Function (Icon for functions).

Inputs

Type      Name                       Description
Real      A[:,size(A, 1)]            Square matrix A in CARE
Real      B[size(A, 1),:]            Matrix B in CARE
Real      R[size(B, 2),size(B, 2)]   Matrix R in CARE
Real      Q[size(A, 1),size(A, 1)]   Matrix Q in CARE
Boolean   refine                     True for subsequent refinement

Outputs

Type      Name                        Description
Real      X[size(A, 1),size(A, 2)]    Stabilizing solution of CARE
Real      alphaReal[2 * size(A, 1)]   Real parts of eigenvalue=alphaReal+i*alphaImag
Real      alphaImag[2 * size(A, 1)]   Imaginary parts of eigenvalue=alphaReal+i*alphaImag

Function Modelica.​Math.​Matrices.​discreteLyapunov
Return solution X of the discrete-time Lyapunov equation A'*X*A + sgn*X = C

Information

Syntax

         X = Matrices.discreteLyapunov(A, C);
         X = Matrices.discreteLyapunov(A, C, ATisSchur, sgn, eps);

Description

This function computes the solution X of the discrete-time Lyapunov equation

 A'*X*A + sgn*X = C

where sgn = 1 or sgn = -1. For sgn = -1, the discrete Lyapunov equation is a special case of the Stein equation:

 A*X*B - X + Q = 0.

The algorithm uses the Schur method for Lyapunov equations proposed by Bartels and Stewart [1].

In a nutshell, the problem is reduced to the corresponding problem

 R*Y*R' + sgn*Y = D

where R=U'*A'*U is the real Schur form of A', and D=U'*C*U and Y=U'*X*U are the corresponding transformations of C and X. This problem is solved sequentially by exploiting the block triangular form of R. Finally, the solution of the original problem is recovered as X=U*Y*U'.
The Boolean input "ATisSchur" indicates that the transformation to real Schur form can be omitted if A' already has real Schur form.
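
A quick way to validate a result is to substitute it back into the equation. The following sketch is illustrative only (the name discreteLyapunovResidual is not part of the library) and returns the residual norm of A'*X*A + sgn*X - C:

  function discreteLyapunovResidual "Illustrative residual norm of A'*X*A + sgn*X - C"
    input Real A[:, size(A, 1)] "Square matrix A";
    input Real C[size(A, 1), size(A, 2)] "Square matrix C";
    input Integer sgn = 1 "Sign in A'*X*A + sgn*X = C";
    output Real r "Frobenius norm of the residual";
  protected
    Real X[size(A, 1), size(A, 2)];
  algorithm
    X := Modelica.Math.Matrices.discreteLyapunov(A, C, sgn=sgn);
    r := Modelica.Math.Matrices.frobeniusNorm(transpose(A)*X*A + sgn*X - C);
  end discreteLyapunovResidual;

For the example below (sgn=-1), r is expected to be close to the machine precision.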

References

  [1] Bartels, R.H. and Stewart, G.W.
      Algorithm 432: Solution of the matrix equation AX + XB = C.
      Comm. ACM, Vol. 15, pp. 820-826, 1972.

Example

  A = [1, 2,  3,  4;
       3, 4,  5, -2;
      -1, 2, -3, -5;
       0, 2,  0,  6];

  C =  [-2,  3, 1, 0;
        -6,  8, 0, 1;
         2,  3, 4, 5;
         0, -2, 0, 0];

  X = discreteLyapunov(A, C, sgn=-1);

  results in:

  X  = [7.5735,   -3.1426,  2.7205, -2.5958;
       -2.6105,    1.2384, -0.9232,  0.9632;
        6.6090,   -2.6775,  2.6415, -2.6928;
       -0.3572,    0.2298,  0.0533, -0.27410];

See also

Matrices.discreteSylvester, Matrices.continuousLyapunov

Extends from Modelica.​Icons.​Function (Icon for functions).

Inputs

Type      Name                       Description
Real      A[:,size(A, 1)]            Square matrix A in A'*X*A + sgn*X = C
Real      C[size(A, 1),size(A, 2)]   Square matrix C in A'*X*A + sgn*X = C
Boolean   ATisSchur                  True if transpose(A) already has real Schur form
Integer   sgn                        Specifies the sign in A'*X*A + sgn*X = C
Real      eps                        Tolerance eps

Outputs

Type      Name                       Description
Real      X[size(A, 1),size(A, 2)]   Solution X of the Lyapunov equation A'*X*A + sgn*X = C

Function Modelica.​Math.​Matrices.​discreteSylvester
Return solution of the discrete-time Sylvester equation A*X*B + sgn*X = C

Information

Syntax

         X = Matrices.discreteSylvester(A, B, C);
         X = Matrices.discreteSylvester(A, B, C, AisHess, BTisSchur, sgn, eps);

Description

Function discreteSylvester computes the solution X of the discrete-time Sylvester equation

 A*X*B + sgn*X = C

where sgn = 1 or sgn = -1. The algorithm applies the Hessenberg-Schur method proposed by Golub et al. [1]. For sgn = -1, the discrete Sylvester equation is also known as the Stein equation:

 A*X*B - X + Q = 0.

In a nutshell, the problem is reduced to the corresponding problem

 H*Y*S' + sgn*Y = F

where H=U'*A*U is the upper Hessenberg form of A, S=V'*B'*V is the real Schur form of B', and F=U'*C*V and Y=U'*X*V are the corresponding transformations of C and X. This problem is solved sequentially by exploiting the specific forms of S and H. Finally, the solution of the original problem is recovered as X=U*Y*V'.
The Boolean inputs "AisHess" and "BTisSchur" indicate that the transformation to Hessenberg form or to real Schur form, respectively, can be omitted if A already has Hessenberg form and/or B' already has real Schur form.
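
As for the other matrix equations, the result can be checked by substituting it back into the equation. The sketch below is illustrative only (the name discreteSylvesterResidual is not part of the library):

  function discreteSylvesterResidual "Illustrative residual norm of A*X*B + sgn*X - C"
    input Real A[:, size(A, 1)] "Square matrix A";
    input Real B[:, size(B, 1)] "Square matrix B";
    input Real C[size(A, 2), size(B, 1)] "Matrix C";
    input Integer sgn = 1 "Sign in A*X*B + sgn*X = C";
    output Real r "Frobenius norm of the residual";
  protected
    Real X[size(A, 2), size(B, 1)];
  algorithm
    X := Modelica.Math.Matrices.discreteSylvester(A, B, C, sgn=sgn);
    r := Modelica.Math.Matrices.frobeniusNorm(A*X*B + sgn*X - C);
  end discreteSylvesterResidual;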

References

  [1] Golub, G.H., Nash, S. and Van Loan, C.F.
      A Hessenberg-Schur method for the problem AX + XB = C.
      IEEE Transactions on Automatic Control, AC-24, no. 6, pp. 909-913, 1979.

Example

  A = [1.0,   2.0,   3.0;
       6.0,   7.0,   8.0;
       9.0,   2.0,   3.0];

  B = [7.0,   2.0,   3.0;
       2.0,   1.0,   2.0;
       3.0,   4.0,   1.0];

  C = [271.0,   135.0,   147.0;
       923.0,   494.0,   482.0;
       578.0,   383.0,   287.0];

  X = discreteSylvester(A, B, C);

  results in:

  X = [2.0,   3.0,   6.0;
       4.0,   7.0,   1.0;
       5.0,   3.0,   2.0];

See also

Matrices.continuousSylvester, Matrices.discreteLyapunov

Extends from Modelica.​Icons.​Function (Icon for functions).

Inputs

Type      Name                       Description
Real      A[:,size(A, 1)]            Square matrix A in A*X*B + sgn*X = C
Real      B[:,size(B, 1)]            Square matrix B in A*X*B + sgn*X = C
Real      C[size(A, 2),size(B, 1)]   Rectangular matrix C in A*X*B + sgn*X = C
Boolean   AisHess                    True if A already has Hessenberg form
Boolean   BTisSchur                  True if B' already has real Schur form
Integer   sgn                        Specifies the sign in A*X*B + sgn*X = C
Real      eps                        Tolerance

Outputs

Type      Name                       Description
Real      X[size(A, 2),size(B, 1)]   Solution X of the discrete Sylvester equation A*X*B + sgn*X = C

Function Modelica.​Math.​Matrices.​discreteRiccati
Return solution of discrete-time algebraic Riccati equation A'*X*A - X - A'*X*B*inv(R + B'*X*B)*B'*X*A + Q = 0 (dare)

Information

Syntax

                                 X = Matrices.discreteRiccati(A, B, R, Q);
         (X, alphaReal, alphaImag) = Matrices.discreteRiccati(A, B, R, Q, true);

Description

Function discreteRiccati computes the solution X of the discrete-time algebraic Riccati equation

 A'*X*A - X - A'*X*B*inv(R + B'*X*B)*B'*X*A + Q = 0

using the Schur vector approach proposed by Laub [1].

It is assumed that Q is symmetric and positive semidefinite, R is symmetric and positive definite (and hence nonsingular), (A,B) is stabilizable, and (A,Q) is detectable. With this method, A also has to be invertible.

These assumptions are not checked in this function!

The assumptions guarantee that the Hamiltonian matrix

H = [A + G*T*Q, -G*T; -T*Q, T]

with

T = inv(A')

and

G = B*inv(R)*B'

has no eigenvalues on the unit circle and can be transformed to an ordered real Schur form

U'*H*U = S = [S11, S12; 0, S22]

with an orthogonal similarity transformation U. S is ordered in such a way that S11 contains the n stable eigenvalues of the closed-loop system with system matrix

A - B*inv(R + B'*X*B)*B'*X*A

If U is partitioned as

U = [U11, U12; U21, U22]

according to S, the solution X is obtained by solving

X*U11 = U21.
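
Again, a simple plausibility check for the returned X is the residual of the discrete-time Riccati equation. The sketch below is illustrative only (the name dareResidual is not part of the library):

  function dareResidual "Illustrative residual norm of the DARE"
    input Real A[:, size(A, 1)] "Square matrix A in DARE";
    input Real B[size(A, 1), :] "Matrix B in DARE";
    input Real R[size(B, 2), size(B, 2)] "Matrix R in DARE";
    input Real Q[size(A, 1), size(A, 1)] "Matrix Q in DARE";
    output Real r "Frobenius norm of the DARE residual";
  protected
    Real X[size(A, 1), size(A, 1)];
  algorithm
    X := Modelica.Math.Matrices.discreteRiccati(A, B, R, Q);
    r := Modelica.Math.Matrices.frobeniusNorm(
           transpose(A)*X*A - X
           - transpose(A)*X*B*Modelica.Math.Matrices.inv(R + transpose(B)*X*B)*transpose(B)*X*A
           + Q);
  end dareResidual;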

References

  [1] Laub, A.J.
      A Schur Method for Solving Algebraic Riccati equations.
      IEEE Trans. Auto. Contr., AC-24, pp. 913-921, 1979.

Example

  A = [ 4.0,  3.0;
       -4.5, -3.5];

  B = [ 1.0;
       -1.0];

  R = [1.0];

  Q = [9.0, 6.0;
       6.0, 4.0];

  X = discreteRiccati(A, B, R, Q);

  results in:

  X = [14.5623, 9.7082;
        9.7082, 6.4721];

See also

Matrices.continuousRiccati

Extends from Modelica.​Icons.​Function (Icon for functions).

Inputs

Type      Name                       Description
Real      A[:,size(A, 1)]            Square matrix A in DARE
Real      B[size(A, 1),:]            Matrix B in DARE
Real      R[size(B, 2),size(B, 2)]   Matrix R in DARE
Real      Q[size(A, 1),size(A, 1)]   Matrix Q in DARE
Boolean   refine                     True for subsequent refinement

Outputs

Type      Name                        Description
Real      X[size(A, 1),size(A, 2)]    Solution X of the discrete-time algebraic Riccati equation
Real      alphaReal[2 * size(A, 1)]   Real parts of eigenvalue=alphaReal+i*alphaImag
Real      alphaImag[2 * size(A, 1)]   Imaginary parts of eigenvalue=alphaReal+i*alphaImag

Function Modelica.​Math.​Matrices.​sort
Sort the rows or columns of a matrix in ascending or descending order

Information

Syntax

           sorted_M = Matrices.sort(M);
(sorted_M, indices) = Matrices.sort(M, sortRows=true, ascending=true);

Description

Function sort(..) sorts the rows of a Real matrix M in ascending order and returns the result in sorted_M. If the optional argument "sortRows" is false, the columns of the matrix are sorted. If the optional argument "ascending" is false, the rows or columns are sorted in descending order. In the optional second output argument, the indices of the sorted rows or columns with respect to the original matrix are given, such that

   sorted_M = if sortRows then M[indices,:] else M[:,indices];

Example

  (M2, i2) := Matrices.sort([2, 1,  0;
                             2, 0, -1]);
       -> M2 = [2, 0, -1;
                2, 1, 0 ];
          i2 = {2,1};
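
The example above sorts rows (the default). A minimal sketch for column sorting in descending order, illustrative only (the function name sortColumnsDescending is not part of the library), could look as follows:

  function sortColumnsDescending "Sort the columns of M in descending order"
    input Real M[:, :] "Matrix to be sorted";
    output Real sorted_M[size(M, 1), size(M, 2)] "Sorted matrix";
    output Integer indices[size(M, 2)] "Column indices such that sorted_M = M[:, indices]";
  algorithm
    (sorted_M, indices) := Modelica.Math.Matrices.sort(M, sortRows=false, ascending=false);
    // By construction, sorted_M and M[:, indices] are identical
    assert(Modelica.Math.Matrices.isEqual(sorted_M, M[:, indices]),
      "Index relation of Matrices.sort violated");
  end sortColumnsDescending;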

Extends from Modelica.​Icons.​Function (Icon for functions).

Inputs

Type      Name        Description
Real      M[:,:]      Matrix to be sorted
Boolean   sortRows    = true if rows are sorted, otherwise columns
Boolean   ascending   = true if ascending order, otherwise descending order

Outputs

Type      Name                                                   Description
Real      sorted_M[size(M, 1),size(M, 2)]                        Sorted matrix
Integer   indices[if sortRows then size(M, 1) else size(M, 2)]   sorted_M = if sortRows then M[indices,:] else M[:,indices]

Function Modelica.​Math.​Matrices.​flipLeftRight
Flip the columns of a matrix in left/right direction

Information

Syntax

         A_flr = Matrices.flipLeftRight(A);

Description

Function flipLeftRight computes from matrix A a matrix A_flr with reversed column order, i.e., A_flr[:,i] = A[:,n-i+1], i = 1, ..., n, where n is the number of columns of A.
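
Equivalently, the same result can be written with a reversed column index range; this one-line sketch is illustrative only, assuming the tool accepts descending index ranges as subscripts:

  A_flr = A[:, size(A, 2):-1:1];  // reverse the column order directly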

Example

  A = [1, 2,  3;
       3, 4,  5;
      -1, 2, -3];

  A_flr = flipLeftRight(A);

  results in:

  A_flr = [3, 2,  1;
           5, 4,  3;
          -3, 2, -1]

See also

Matrices.flipUpDown

Extends from Modelica.​Icons.​Function (Icon for functions).

Inputs

Type      Name     Description
Real      A[:,:]   Matrix to be flipped

Outputs

Type      Name                           Description
Real      Aflip[size(A, 1),size(A, 2)]   Flipped matrix

Function Modelica.​Math.​Matrices.​flipUpDown
Flip the rows of a matrix in up/down direction

Information

Syntax

         A_fud = Matrices.flipUpDown(A);

Description

Function flipUpDown computes from matrix A a matrix A_fud with reversed row order, i.e., A_fud[i,:] = A[n-i+1,:], i = 1, ..., n, where n is the number of rows of A.
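
As for flipLeftRight, an equivalent one-line sketch uses a reversed row index range (illustrative only, assuming the tool accepts descending index ranges as subscripts):

  A_fud = A[size(A, 1):-1:1, :];  // reverse the row order directly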

Example

  A = [1, 2,  3;
       3, 4,  5;
      -1, 2, -3];

  A_fud = flipUpDown(A);

  results in:

  A_fud  = [-1, 2, -3;
             3, 4,  5;
             1, 2,  3]

See also

Matrices.flipLeftRight

Extends from Modelica.​Icons.​Function (Icon for functions).

Inputs

Type      Name     Description
Real      A[:,:]   Matrix to be flipped

Outputs

Type      Name                           Description
Real      Aflip[size(A, 1),size(A, 2)]   Flipped matrix