Linear Algebra

In addition to (and as part of) its support for multi-dimensional arrays, Julia provides native implementations of many common and useful linear algebra operations which can be loaded with using LinearAlgebra. Basic operations, such as tr, det, and inv are all supported:

  julia> A = [1 2 3; 4 1 6; 7 8 1]
  3×3 Matrix{Int64}:
   1  2  3
   4  1  6
   7  8  1

  julia> tr(A)
  3

  julia> det(A)
  104.0

  julia> inv(A)
  3×3 Matrix{Float64}:
   -0.451923   0.211538    0.0865385
    0.365385  -0.192308    0.0576923
    0.240385   0.0576923  -0.0673077

As well as other useful operations, such as finding eigenvalues or eigenvectors:

  julia> A = [-4. -17.; 2. 2.]
  2×2 Matrix{Float64}:
   -4.0  -17.0
    2.0    2.0

  julia> eigvals(A)
  2-element Vector{ComplexF64}:
   -1.0 - 5.0im
   -1.0 + 5.0im

  julia> eigvecs(A)
  2×2 Matrix{ComplexF64}:
    0.945905-0.0im        0.945905+0.0im
   -0.166924+0.278207im  -0.166924-0.278207im

In addition, Julia provides many factorizations which can be used to speed up problems such as linear solve or matrix exponentiation by pre-factorizing a matrix into a form more amenable (for performance or memory reasons) to the problem. See the documentation on factorize for more information. As an example:

  julia> A = [1.5 2 -4; 3 -1 -6; -10 2.3 4]
  3×3 Matrix{Float64}:
     1.5   2.0  -4.0
     3.0  -1.0  -6.0
   -10.0   2.3   4.0

  julia> factorize(A)
  LU{Float64, Matrix{Float64}}
  L factor:
  3×3 Matrix{Float64}:
    1.0    0.0       0.0
   -0.15   1.0       0.0
   -0.3   -0.132196  1.0
  U factor:
  3×3 Matrix{Float64}:
   -10.0  2.3     4.0
     0.0  2.345  -3.4
     0.0  0.0    -5.24947

Since A is not Hermitian, symmetric, triangular, tridiagonal, or bidiagonal, an LU factorization may be the best we can do. Compare with:

  julia> B = [1.5 2 -4; 2 -1 -3; -4 -3 5]
  3×3 Matrix{Float64}:
    1.5   2.0  -4.0
    2.0  -1.0  -3.0
   -4.0  -3.0   5.0

  julia> factorize(B)
  BunchKaufman{Float64, Matrix{Float64}}
  D factor:
  3×3 Tridiagonal{Float64, Vector{Float64}}:
   -1.64286   0.0    ⋅
    0.0      -2.8   0.0
     ⋅        0.0   5.0
  U factor:
  3×3 UnitUpperTriangular{Float64, Matrix{Float64}}:
   1.0  0.142857  -0.8
    ⋅   1.0       -0.6
    ⋅    ⋅         1.0
  permutation:
  3-element Vector{Int64}:
   1
   2
   3

Here, Julia was able to detect that B is in fact symmetric, and used a more appropriate factorization. Often it’s possible to write more efficient code for a matrix that is known to have certain properties e.g. it is symmetric, or tridiagonal. Julia provides some special types so that you can “tag” matrices as having these properties. For instance:

  julia> B = [1.5 2 -4; 2 -1 -3; -4 -3 5]
  3×3 Matrix{Float64}:
    1.5   2.0  -4.0
    2.0  -1.0  -3.0
   -4.0  -3.0   5.0

  julia> sB = Symmetric(B)
  3×3 Symmetric{Float64, Matrix{Float64}}:
    1.5   2.0  -4.0
    2.0  -1.0  -3.0
   -4.0  -3.0   5.0

sB has been tagged as a matrix that’s (real) symmetric, so for later operations we might perform on it, such as eigenfactorization or computing matrix-vector products, efficiencies can be found by only referencing half of it. For example:

  julia> B = [1.5 2 -4; 2 -1 -3; -4 -3 5]
  3×3 Matrix{Float64}:
    1.5   2.0  -4.0
    2.0  -1.0  -3.0
   -4.0  -3.0   5.0

  julia> sB = Symmetric(B)
  3×3 Symmetric{Float64, Matrix{Float64}}:
    1.5   2.0  -4.0
    2.0  -1.0  -3.0
   -4.0  -3.0   5.0

  julia> x = [1; 2; 3]
  3-element Vector{Int64}:
   1
   2
   3

  julia> sB\x
  3-element Vector{Float64}:
   -1.7391304347826084
   -1.1086956521739126
   -1.4565217391304346

The \ operation here performs the linear solve. The left-division operator is quite powerful, and it is easy to write compact, readable code that is flexible enough to solve all sorts of systems of linear equations.
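As a small illustration (the matrices and vectors below are our own, not from the manual), the same operator adapts to the structure of its left-hand side:

  using LinearAlgebra

  A = [2.0 1.0; 1.0 3.0]
  b = [1.0, 2.0]

  x = A \ b                   # general square matrix: solved via an LU factorization
  @assert A * x ≈ b

  U = UpperTriangular(A)
  y = U \ b                   # triangular tag: solved directly by back substitution
  @assert U * y ≈ b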

Special matrices

Matrices with special symmetries and structures arise often in linear algebra and are frequently associated with various matrix factorizations. Julia features a rich collection of special matrix types, which allow for fast computation with specialized routines developed for particular matrix types.

The following tables summarize the types of special matrices that have been implemented in Julia, as well as whether hooks to various optimized methods for them in LAPACK are available.

| Type | Description |
|:-----|:------------|
| Symmetric | Symmetric matrix |
| Hermitian | Hermitian matrix |
| UpperTriangular | Upper triangular matrix |
| UnitUpperTriangular | Upper triangular matrix with unit diagonal |
| LowerTriangular | Lower triangular matrix |
| UnitLowerTriangular | Lower triangular matrix with unit diagonal |
| UpperHessenberg | Upper Hessenberg matrix |
| Tridiagonal | Tridiagonal matrix |
| SymTridiagonal | Symmetric tridiagonal matrix |
| Bidiagonal | Upper/lower bidiagonal matrix |
| Diagonal | Diagonal matrix |
| UniformScaling | Uniform scaling operator |

Elementary operations

| Matrix type | + | - | * | \ | Other functions with optimized methods |
|:------------|:--|:--|:--|:--|:---------------------------------------|
| Symmetric | | | | MV | inv, sqrt, exp |
| Hermitian | | | | MV | inv, sqrt, exp |
| UpperTriangular | | | MV | MV | inv, det |
| UnitUpperTriangular | | | MV | MV | inv, det |
| LowerTriangular | | | MV | MV | inv, det |
| UnitLowerTriangular | | | MV | MV | inv, det |
| UpperHessenberg | | | | MM | inv, det |
| SymTridiagonal | M | M | MS | MV | eigmax, eigmin |
| Tridiagonal | M | M | MS | MV | |
| Bidiagonal | M | M | MS | MV | |
| Diagonal | M | M | MV | MV | inv, det, logdet, / |
| UniformScaling | M | M | MVS | MVS | / |

Legend:

| Key | Description |
|:----|:------------|
| M (matrix) | An optimized method for matrix-matrix operations is available |
| V (vector) | An optimized method for matrix-vector operations is available |
| S (scalar) | An optimized method for matrix-scalar operations is available |

Matrix factorizations

| Matrix type | LAPACK | eigen | eigvals | eigvecs | svd | svdvals |
|:------------|:-------|:------|:--------|:--------|:----|:--------|
| Symmetric | SY | | ARI | | | |
| Hermitian | HE | | ARI | | | |
| UpperTriangular | TR | A | A | A | | |
| UnitUpperTriangular | TR | A | A | A | | |
| LowerTriangular | TR | A | A | A | | |
| UnitLowerTriangular | TR | A | A | A | | |
| SymTridiagonal | ST | A | ARI | AV | | |
| Tridiagonal | GT | | | | | |
| Bidiagonal | BD | | | | A | A |
| Diagonal | DI | | A | | | |

Legend:

| Key | Description | Example |
|:----|:------------|:--------|
| A (all) | An optimized method to find all the characteristic values and/or vectors is available | e.g. eigvals(M) |
| R (range) | An optimized method to find the il-th through the ih-th characteristic values is available | eigvals(M, il, ih) |
| I (interval) | An optimized method to find the characteristic values in the interval [vl, vh] is available | eigvals(M, vl, vh) |
| V (vectors) | An optimized method to find the characteristic vectors corresponding to the characteristic values x=[x1, x2,…] is available | eigvecs(M, x) |

The uniform scaling operator

A UniformScaling operator represents a scalar times the identity operator, λ*I. The identity operator I is defined as a constant and is an instance of UniformScaling. The size of these operators is generic and matches the other matrix in the binary operations +, -, * and \. For A+I and A-I this means that A must be square. Multiplication with the identity operator I is a no-op (except for checking that the scaling factor is one) and therefore almost without overhead.

To see the UniformScaling operator in action:

  julia> U = UniformScaling(2);

  julia> a = [1 2; 3 4]
  2×2 Matrix{Int64}:
   1  2
   3  4

  julia> a + U
  2×2 Matrix{Int64}:
   3  2
   3  6

  julia> a * U
  2×2 Matrix{Int64}:
   2  4
   6  8

  julia> [a U]
  2×4 Matrix{Int64}:
   1  2  2  0
   3  4  0  2

  julia> b = [1 2 3; 4 5 6]
  2×3 Matrix{Int64}:
   1  2  3
   4  5  6

  julia> b - U
  ERROR: DimensionMismatch("matrix is not square: dimensions are (2, 3)")
  Stacktrace:
  [...]

If you need to solve many systems of the form (A+μI)x = b for the same A and different μ, it might be beneficial to first compute the Hessenberg factorization F of A via the hessenberg function. Given F, Julia employs an efficient algorithm for (F+μ*I) \ b (equivalent to (A+μ*I) \ b) and related operations like determinants.
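A minimal sketch of this pattern (the matrix, right-hand side, and shifts here are arbitrary illustrative values):

  using LinearAlgebra

  A = [4.0 1.0 0.0; 1.0 3.0 1.0; 0.0 1.0 2.0]
  b = [1.0, 0.0, 1.0]
  F = hessenberg(A)              # O(n^3) reduction, done once

  for μ in (0.5, 1.0, 2.0)
      x = (F + μ*I) \ b          # each shifted solve reuses F and costs only O(n^2)
      @assert (A + μ*I) * x ≈ b
  end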

Matrix factorizations

Matrix factorizations (a.k.a. matrix decompositions) compute the factorization of a matrix into a product of matrices, and are one of the central concepts in linear algebra.

The following table summarizes the types of matrix factorizations that have been implemented in Julia. Details of their associated methods can be found in the Standard functions section of the Linear Algebra documentation.

| Type | Description |
|:-----|:------------|
| BunchKaufman | Bunch-Kaufman factorization |
| Cholesky | Cholesky factorization |
| CholeskyPivoted | Pivoted Cholesky factorization |
| LDLt | LDL(T) factorization |
| LU | LU factorization |
| QR | QR factorization |
| QRCompactWY | Compact WY form of the QR factorization |
| QRPivoted | Pivoted QR factorization |
| LQ | QR factorization of transpose(A) |
| Hessenberg | Hessenberg decomposition |
| Eigen | Spectral decomposition |
| GeneralizedEigen | Generalized spectral decomposition |
| SVD | Singular value decomposition |
| GeneralizedSVD | Generalized SVD |
| Schur | Schur decomposition |
| GeneralizedSchur | Generalized Schur decomposition |

Standard functions

Linear algebra functions in Julia are largely implemented by calling functions from LAPACK. Sparse matrix factorizations call functions from SuiteSparse. Other sparse solvers are available as Julia packages.

Base.:* — Method

  *(A::AbstractMatrix, B::AbstractMatrix)

Matrix multiplication.

Examples

  julia> [1 1; 0 1] * [1 0; 1 1]
  2×2 Matrix{Int64}:
   2  1
   1  1


Base.:\ — Method

  \(A, B)

Matrix division using a polyalgorithm. For input matrices A and B, the result X is such that A*X == B when A is square. The solver that is used depends upon the structure of A. If A is upper or lower triangular (or diagonal), no factorization of A is required and the system is solved with either forward or backward substitution. For non-triangular square matrices, an LU factorization is used.

For rectangular A the result is the minimum-norm least squares solution computed by a pivoted QR factorization of A and a rank estimate of A based on the R factor.
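For example (values chosen for illustration), a rectangular system returns the least-squares solution, whose residual is orthogonal to the column space of A:

  using LinearAlgebra

  A = [1.0 1.0; 1.0 2.0; 1.0 3.0]   # overdetermined 3×2 system
  b = [1.0, 2.0, 2.0]
  x = A \ b                          # least-squares solution via pivoted QR
  @assert isapprox(A' * (A * x - b), zeros(2); atol = 1e-10)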

When A is sparse, a similar polyalgorithm is used. For indefinite matrices, the LDLt factorization does not use pivoting during the numerical factorization and therefore the procedure can fail even for invertible matrices.

See also: factorize, pinv.

Examples

  julia> A = [1 0; 1 -2]; B = [32; -4];

  julia> X = A \ B
  2-element Vector{Float64}:
   32.0
   18.0

  julia> A * X == B
  true


LinearAlgebra.SingularException — Type

  SingularException

Exception thrown when the input matrix has one or more zero-valued eigenvalues, and is not invertible. A linear solve involving such a matrix cannot be computed. The info field indicates the location of (one of) the singular value(s).


LinearAlgebra.PosDefException — Type

  PosDefException

Exception thrown when the input matrix was not positive definite. Some linear algebra functions and factorizations are only applicable to positive definite matrices. The info field indicates the location of (one of) the eigenvalue(s) which is (are) less than/equal to 0.


LinearAlgebra.ZeroPivotException — Type

  ZeroPivotException <: Exception

Exception thrown when a matrix factorization/solve encounters a zero in a pivot (diagonal) position and cannot proceed. This may not mean that the matrix is singular: it may be fruitful to switch to a different factorization such as pivoted LU that can re-order variables to eliminate spurious zero pivots. The info field indicates the location of (one of) the zero pivot(s).


LinearAlgebra.dot — Function

  dot(x, y)
  x ⋅ y

Compute the dot product between two vectors. For complex vectors, the first vector is conjugated.

dot also works on arbitrary iterable objects, including arrays of any dimension, as long as dot is defined on the elements.

dot is semantically equivalent to sum(dot(vx,vy) for (vx,vy) in zip(x, y)), with the added restriction that the arguments must have equal lengths.

x ⋅ y (where ⋅ can be typed by tab-completing \cdot in the REPL) is a synonym for dot(x, y).

Examples

  julia> dot([1; 1], [2; 3])
  5

  julia> dot([im; im], [1; 1])
  0 - 2im

  julia> dot(1:5, 2:6)
  70

  julia> x = fill(2., (5,5));

  julia> y = fill(3., (5,5));

  julia> dot(x, y)
  150.0


LinearAlgebra.dot — Method

  dot(x, A, y)

Compute the generalized dot product dot(x, A*y) between two vectors x and y, without storing the intermediate result of A*y. As for the two-argument dot(_,_), this acts recursively. Moreover, for complex vectors, the first vector is conjugated.

Julia 1.4

Three-argument dot requires at least Julia 1.4.

Examples

  julia> dot([1; 1], [1 2; 3 4], [2; 3])
  26

  julia> dot(1:5, reshape(1:25, 5, 5), 2:6)
  4850

  julia> ⋅(1:5, reshape(1:25, 5, 5), 2:6) == dot(1:5, reshape(1:25, 5, 5), 2:6)
  true


LinearAlgebra.cross — Function

  cross(x, y)
  ×(x,y)

Compute the cross product of two 3-vectors.

Examples

  julia> a = [0;1;0]
  3-element Vector{Int64}:
   0
   1
   0

  julia> b = [0;0;1]
  3-element Vector{Int64}:
   0
   0
   1

  julia> cross(a,b)
  3-element Vector{Int64}:
   1
   0
   0


LinearAlgebra.factorize — Function

  factorize(A)

Compute a convenient factorization of A, based upon the type of the input matrix. factorize checks A to see if it is symmetric/triangular/etc. if A is passed as a generic matrix. factorize checks every element of A to verify/rule out each property. It will short-circuit as soon as it can rule out symmetry/triangular structure. The return value can be reused for efficient solving of multiple systems. For example: A=factorize(A); x=A\b; y=A\C.

| Properties of A | Type of factorization |
|:----------------|:----------------------|
| Positive-definite | Cholesky (see cholesky) |
| Dense Symmetric/Hermitian | Bunch-Kaufman (see bunchkaufman) |
| Sparse Symmetric/Hermitian | LDLt (see ldlt) |
| Triangular | Triangular |
| Diagonal | Diagonal |
| Bidiagonal | Bidiagonal |
| Tridiagonal | LU (see lu) |
| Symmetric real tridiagonal | LDLt (see ldlt) |
| General square | LU (see lu) |
| General non-square | QR (see qr) |

If factorize is called on a Hermitian positive-definite matrix, for instance, then factorize will return a Cholesky factorization.

Examples

  julia> A = Array(Bidiagonal(fill(1.0, (5, 5)), :U))
  5×5 Matrix{Float64}:
   1.0  1.0  0.0  0.0  0.0
   0.0  1.0  1.0  0.0  0.0
   0.0  0.0  1.0  1.0  0.0
   0.0  0.0  0.0  1.0  1.0
   0.0  0.0  0.0  0.0  1.0

  julia> factorize(A) # factorize will check to see that A is already factorized
  5×5 Bidiagonal{Float64, Vector{Float64}}:
   1.0  1.0   ⋅    ⋅    ⋅
    ⋅   1.0  1.0   ⋅    ⋅
    ⋅    ⋅   1.0  1.0   ⋅
    ⋅    ⋅    ⋅   1.0  1.0
    ⋅    ⋅    ⋅    ⋅   1.0

This returns a 5×5 Bidiagonal{Float64}, which can now be passed to other linear algebra functions (e.g. eigensolvers) which will use specialized methods for Bidiagonal types.


LinearAlgebra.Diagonal — Type

  Diagonal(V::AbstractVector)

Construct a matrix with V as its diagonal.

See also diag, diagm.

Examples

  julia> Diagonal([1, 10, 100])
  3×3 Diagonal{Int64, Vector{Int64}}:
   1   ⋅    ⋅
   ⋅  10    ⋅
   ⋅   ⋅  100

  julia> diagm([7, 13])
  2×2 Matrix{Int64}:
   7   0
   0  13


  Diagonal(A::AbstractMatrix)

Construct a matrix from the diagonal of A.

Examples

  julia> A = permutedims(reshape(1:15, 5, 3))
  3×5 Matrix{Int64}:
    1   2   3   4   5
    6   7   8   9  10
   11  12  13  14  15

  julia> Diagonal(A)
  3×3 Diagonal{Int64, Vector{Int64}}:
   1  ⋅   ⋅
   ⋅  7   ⋅
   ⋅  ⋅  13

  julia> diag(A, 2)
  3-element Vector{Int64}:
    3
    9
   15


  Diagonal{T}(undef, n)

Construct an uninitialized Diagonal{T} of length n. See undef.


LinearAlgebra.Bidiagonal — Type

  Bidiagonal(dv::V, ev::V, uplo::Symbol) where V <: AbstractVector

Constructs an upper (uplo=:U) or lower (uplo=:L) bidiagonal matrix using the given diagonal (dv) and off-diagonal (ev) vectors. The result is of type Bidiagonal and provides efficient specialized linear solvers, but may be converted into a regular matrix with convert(Array, _) (or Array(_) for short). The length of ev must be one less than the length of dv.

Examples

  julia> dv = [1, 2, 3, 4]
  4-element Vector{Int64}:
   1
   2
   3
   4

  julia> ev = [7, 8, 9]
  3-element Vector{Int64}:
   7
   8
   9

  julia> Bu = Bidiagonal(dv, ev, :U) # ev is on the first superdiagonal
  4×4 Bidiagonal{Int64, Vector{Int64}}:
   1  7  ⋅  ⋅
   ⋅  2  8  ⋅
   ⋅  ⋅  3  9
   ⋅  ⋅  ⋅  4

  julia> Bl = Bidiagonal(dv, ev, :L) # ev is on the first subdiagonal
  4×4 Bidiagonal{Int64, Vector{Int64}}:
   1  ⋅  ⋅  ⋅
   7  2  ⋅  ⋅
   ⋅  8  3  ⋅
   ⋅  ⋅  9  4


  Bidiagonal(A, uplo::Symbol)

Construct a Bidiagonal matrix from the main diagonal of A and its first super- (if uplo=:U) or sub-diagonal (if uplo=:L).

Examples

  julia> A = [1 1 1 1; 2 2 2 2; 3 3 3 3; 4 4 4 4]
  4×4 Matrix{Int64}:
   1  1  1  1
   2  2  2  2
   3  3  3  3
   4  4  4  4

  julia> Bidiagonal(A, :U) # contains the main diagonal and first superdiagonal of A
  4×4 Bidiagonal{Int64, Vector{Int64}}:
   1  1  ⋅  ⋅
   ⋅  2  2  ⋅
   ⋅  ⋅  3  3
   ⋅  ⋅  ⋅  4

  julia> Bidiagonal(A, :L) # contains the main diagonal and first subdiagonal of A
  4×4 Bidiagonal{Int64, Vector{Int64}}:
   1  ⋅  ⋅  ⋅
   2  2  ⋅  ⋅
   ⋅  3  3  ⋅
   ⋅  ⋅  4  4


LinearAlgebra.SymTridiagonal — Type

  SymTridiagonal(dv::V, ev::V) where V <: AbstractVector

Construct a symmetric tridiagonal matrix from the diagonal (dv) and first sub/super-diagonal (ev), respectively. The result is of type SymTridiagonal and provides efficient specialized eigensolvers, but may be converted into a regular matrix with convert(Array, _) (or Array(_) for short).

For SymTridiagonal block matrices, the elements of dv are symmetrized. The argument ev is interpreted as the superdiagonal. Blocks from the subdiagonal are the (materialized) transposes of the corresponding superdiagonal blocks.

Examples

  julia> dv = [1, 2, 3, 4]
  4-element Vector{Int64}:
   1
   2
   3
   4

  julia> ev = [7, 8, 9]
  3-element Vector{Int64}:
   7
   8
   9

  julia> SymTridiagonal(dv, ev)
  4×4 SymTridiagonal{Int64, Vector{Int64}}:
   1  7  ⋅  ⋅
   7  2  8  ⋅
   ⋅  8  3  9
   ⋅  ⋅  9  4

  julia> A = SymTridiagonal(fill([1 2; 3 4], 3), fill([1 2; 3 4], 2));

  julia> A[1,1]
  2×2 Symmetric{Int64, Matrix{Int64}}:
   1  2
   2  4

  julia> A[1,2]
  2×2 Matrix{Int64}:
   1  2
   3  4

  julia> A[2,1]
  2×2 Matrix{Int64}:
   1  3
   2  4


  SymTridiagonal(A::AbstractMatrix)

Construct a symmetric tridiagonal matrix from the diagonal and first superdiagonal of the symmetric matrix A.

Examples

  julia> A = [1 2 3; 2 4 5; 3 5 6]
  3×3 Matrix{Int64}:
   1  2  3
   2  4  5
   3  5  6

  julia> SymTridiagonal(A)
  3×3 SymTridiagonal{Int64, Vector{Int64}}:
   1  2  ⋅
   2  4  5
   ⋅  5  6

  julia> B = reshape([[1 2; 2 3], [1 2; 3 4], [1 3; 2 4], [1 2; 2 3]], 2, 2);

  julia> SymTridiagonal(B)
  2×2 SymTridiagonal{Matrix{Int64}, Vector{Matrix{Int64}}}:
   [1 2; 2 3]  [1 3; 2 4]
   [1 2; 3 4]  [1 2; 2 3]


LinearAlgebra.Tridiagonal — Type

  Tridiagonal(dl::V, d::V, du::V) where V <: AbstractVector

Construct a tridiagonal matrix from the first subdiagonal, diagonal, and first superdiagonal, respectively. The result is of type Tridiagonal and provides efficient specialized linear solvers, but may be converted into a regular matrix with convert(Array, _) (or Array(_) for short). The lengths of dl and du must be one less than the length of d.

Examples

  julia> dl = [1, 2, 3];

  julia> du = [4, 5, 6];

  julia> d = [7, 8, 9, 0];

  julia> Tridiagonal(dl, d, du)
  4×4 Tridiagonal{Int64, Vector{Int64}}:
   7  4  ⋅  ⋅
   1  8  5  ⋅
   ⋅  2  9  6
   ⋅  ⋅  3  0


  Tridiagonal(A)

Construct a tridiagonal matrix from the first sub-diagonal, diagonal and first super-diagonal of the matrix A.

Examples

  julia> A = [1 2 3 4; 1 2 3 4; 1 2 3 4; 1 2 3 4]
  4×4 Matrix{Int64}:
   1  2  3  4
   1  2  3  4
   1  2  3  4
   1  2  3  4

  julia> Tridiagonal(A)
  4×4 Tridiagonal{Int64, Vector{Int64}}:
   1  2  ⋅  ⋅
   1  2  3  ⋅
   ⋅  2  3  4
   ⋅  ⋅  3  4


LinearAlgebra.Symmetric — Type

  Symmetric(A, uplo=:U)

Construct a Symmetric view of the upper (if uplo = :U) or lower (if uplo = :L) triangle of the matrix A.

Examples

  julia> A = [1 0 2 0 3; 0 4 0 5 0; 6 0 7 0 8; 0 9 0 1 0; 2 0 3 0 4]
  5×5 Matrix{Int64}:
   1  0  2  0  3
   0  4  0  5  0
   6  0  7  0  8
   0  9  0  1  0
   2  0  3  0  4

  julia> Supper = Symmetric(A)
  5×5 Symmetric{Int64, Matrix{Int64}}:
   1  0  2  0  3
   0  4  0  5  0
   2  0  7  0  8
   0  5  0  1  0
   3  0  8  0  4

  julia> Slower = Symmetric(A, :L)
  5×5 Symmetric{Int64, Matrix{Int64}}:
   1  0  6  0  2
   0  4  0  9  0
   6  0  7  0  3
   0  9  0  1  0
   2  0  3  0  4

Note that Supper will not be equal to Slower unless A is itself symmetric (e.g. if A == transpose(A)).


LinearAlgebra.Hermitian — Type

  Hermitian(A, uplo=:U)

Construct a Hermitian view of the upper (if uplo = :U) or lower (if uplo = :L) triangle of the matrix A.

Examples

  julia> A = [1 0 2+2im 0 3-3im; 0 4 0 5 0; 6-6im 0 7 0 8+8im; 0 9 0 1 0; 2+2im 0 3-3im 0 4];

  julia> Hupper = Hermitian(A)
  5×5 Hermitian{Complex{Int64}, Matrix{Complex{Int64}}}:
   1+0im  0+0im  2+2im  0+0im  3-3im
   0+0im  4+0im  0+0im  5+0im  0+0im
   2-2im  0+0im  7+0im  0+0im  8+8im
   0+0im  5+0im  0+0im  1+0im  0+0im
   3+3im  0+0im  8-8im  0+0im  4+0im

  julia> Hlower = Hermitian(A, :L)
  5×5 Hermitian{Complex{Int64}, Matrix{Complex{Int64}}}:
   1+0im  0+0im  6+6im  0+0im  2-2im
   0+0im  4+0im  0+0im  9+0im  0+0im
   6-6im  0+0im  7+0im  0+0im  3+3im
   0+0im  9+0im  0+0im  1+0im  0+0im
   2+2im  0+0im  3-3im  0+0im  4+0im

Note that Hupper will not be equal to Hlower unless A is itself Hermitian (e.g. if A == adjoint(A)).

All non-real parts of the diagonal will be ignored.

  julia> Hermitian(fill(complex(1,1), 1, 1)) == fill(1, 1, 1)
  true


LinearAlgebra.LowerTriangular — Type

  LowerTriangular(A::AbstractMatrix)

Construct a LowerTriangular view of the matrix A.

Examples

  julia> A = [1.0 2.0 3.0; 4.0 5.0 6.0; 7.0 8.0 9.0]
  3×3 Matrix{Float64}:
   1.0  2.0  3.0
   4.0  5.0  6.0
   7.0  8.0  9.0

  julia> LowerTriangular(A)
  3×3 LowerTriangular{Float64, Matrix{Float64}}:
   1.0   ⋅    ⋅
   4.0  5.0   ⋅
   7.0  8.0  9.0


LinearAlgebra.UpperTriangular — Type

  UpperTriangular(A::AbstractMatrix)

Construct an UpperTriangular view of the matrix A.

Examples

  julia> A = [1.0 2.0 3.0; 4.0 5.0 6.0; 7.0 8.0 9.0]
  3×3 Matrix{Float64}:
   1.0  2.0  3.0
   4.0  5.0  6.0
   7.0  8.0  9.0

  julia> UpperTriangular(A)
  3×3 UpperTriangular{Float64, Matrix{Float64}}:
   1.0  2.0  3.0
    ⋅   5.0  6.0
    ⋅    ⋅   9.0


LinearAlgebra.UnitLowerTriangular — Type

  UnitLowerTriangular(A::AbstractMatrix)

Construct a UnitLowerTriangular view of the matrix A. Such a view has the oneunit of the eltype of A on its diagonal.

Examples

  julia> A = [1.0 2.0 3.0; 4.0 5.0 6.0; 7.0 8.0 9.0]
  3×3 Matrix{Float64}:
   1.0  2.0  3.0
   4.0  5.0  6.0
   7.0  8.0  9.0

  julia> UnitLowerTriangular(A)
  3×3 UnitLowerTriangular{Float64, Matrix{Float64}}:
   1.0   ⋅    ⋅
   4.0  1.0   ⋅
   7.0  8.0  1.0


LinearAlgebra.UnitUpperTriangular — Type

  UnitUpperTriangular(A::AbstractMatrix)

Construct a UnitUpperTriangular view of the matrix A. Such a view has the oneunit of the eltype of A on its diagonal.

Examples

  julia> A = [1.0 2.0 3.0; 4.0 5.0 6.0; 7.0 8.0 9.0]
  3×3 Matrix{Float64}:
   1.0  2.0  3.0
   4.0  5.0  6.0
   7.0  8.0  9.0

  julia> UnitUpperTriangular(A)
  3×3 UnitUpperTriangular{Float64, Matrix{Float64}}:
   1.0  2.0  3.0
    ⋅   1.0  6.0
    ⋅    ⋅   1.0


LinearAlgebra.UpperHessenberg — Type

  UpperHessenberg(A::AbstractMatrix)

Construct an UpperHessenberg view of the matrix A. Entries of A below the first subdiagonal are ignored.

Efficient algorithms are implemented for H \ b, det(H), and similar.

See also the hessenberg function to factor any matrix into a similar upper-Hessenberg matrix.

If F::Hessenberg is the factorization object, the unitary matrix can be accessed with F.Q and the Hessenberg matrix with F.H. When Q is extracted, the resulting type is the HessenbergQ object, and may be converted to a regular matrix with convert(Array, _) (or Array(_) for short).

Iterating the decomposition produces the factors F.Q and F.H.
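A brief sketch of extracting and checking the factors (the matrix here is an arbitrary illustrative value):

  using LinearAlgebra

  A = [4.0 2.0 1.0; 3.0 1.0 2.0; 5.0 2.0 3.0]
  F = hessenberg(A)
  Q, H = F                # iteration yields F.Q and F.H
  @assert Q * H * Q' ≈ A  # A is unitarily similar to the upper-Hessenberg H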

Examples

  julia> A = [1 2 3 4; 5 6 7 8; 9 10 11 12; 13 14 15 16]
  4×4 Matrix{Int64}:
    1   2   3   4
    5   6   7   8
    9  10  11  12
   13  14  15  16

  julia> UpperHessenberg(A)
  4×4 UpperHessenberg{Int64, Matrix{Int64}}:
   1   2   3   4
   5   6   7   8
   ⋅  10  11  12
   ⋅   ⋅  15  16


LinearAlgebra.UniformScaling — Type

  UniformScaling{T<:Number}

Generically sized uniform scaling operator defined as a scalar times the identity operator, λ*I. Although without an explicit size, it acts similarly to a matrix in many cases and includes support for some indexing. See also I.

Julia 1.6

Indexing using ranges is available as of Julia 1.6.

Examples

  julia> J = UniformScaling(2.)
  UniformScaling{Float64}
  2.0*I

  julia> A = [1. 2.; 3. 4.]
  2×2 Matrix{Float64}:
   1.0  2.0
   3.0  4.0

  julia> J*A
  2×2 Matrix{Float64}:
   2.0  4.0
   6.0  8.0

  julia> J[1:2, 1:2]
  2×2 Matrix{Float64}:
   2.0  0.0
   0.0  2.0


LinearAlgebra.I — Constant

  I

An object of type UniformScaling, representing an identity matrix of any size.

Examples

  julia> fill(1, (5,6)) * I == fill(1, (5,6))
  true

  julia> [1 2im 3; 1im 2 3] * I
  2×3 Matrix{Complex{Int64}}:
   1+0im  0+2im  3+0im
   0+1im  2+0im  3+0im


LinearAlgebra.UniformScaling — Method

  (I::UniformScaling)(n::Integer)

Construct a Diagonal matrix from a UniformScaling.

Julia 1.2

This method is available as of Julia 1.2.

Examples

  julia> I(3)
  3×3 Diagonal{Bool, Vector{Bool}}:
   1  ⋅  ⋅
   ⋅  1  ⋅
   ⋅  ⋅  1

  julia> (0.7*I)(3)
  3×3 Diagonal{Float64, Vector{Float64}}:
   0.7   ⋅    ⋅
    ⋅   0.7   ⋅
    ⋅    ⋅   0.7


LinearAlgebra.Factorization — Type

  LinearAlgebra.Factorization

Abstract type for matrix factorizations a.k.a. matrix decompositions. See online documentation for a list of available matrix factorizations.


LinearAlgebra.LU — Type

  LU <: Factorization

Matrix factorization type of the LU factorization of a square matrix A. This is the return type of lu, the corresponding matrix factorization function.

The individual components of the factorization F::LU can be accessed via getproperty:

| Component | Description |
|:----------|:------------|
| F.L | L (unit lower triangular) part of LU |
| F.U | U (upper triangular) part of LU |
| F.p | (right) permutation Vector |
| F.P | (right) permutation Matrix |

Iterating the factorization produces the components F.L, F.U, and F.p.

Examples

  julia> A = [4 3; 6 3]
  2×2 Matrix{Int64}:
   4  3
   6  3

  julia> F = lu(A)
  LU{Float64, Matrix{Float64}}
  L factor:
  2×2 Matrix{Float64}:
   1.0       0.0
   0.666667  1.0
  U factor:
  2×2 Matrix{Float64}:
   6.0  3.0
   0.0  1.0

  julia> F.L * F.U == A[F.p, :]
  true

  julia> l, u, p = lu(A); # destructuring via iteration

  julia> l == F.L && u == F.U && p == F.p
  true


LinearAlgebra.lu — Function

  lu(A, pivot = RowMaximum(); check = true) -> F::LU

Compute the LU factorization of A.

When check = true, an error is thrown if the decomposition fails. When check = false, responsibility for checking the decomposition’s validity (via issuccess) lies with the user.

In most cases, if A is a subtype S of AbstractMatrix{T} with an element type T supporting +, -, * and /, the return type is LU{T,S{T}}. If pivoting is chosen (default) the element type should also support abs and <. Pivoting can be turned off by passing pivot = NoPivot().

The individual components of the factorization F can be accessed via getproperty:

| Component | Description |
|:----------|:------------|
| F.L | L (lower triangular) part of LU |
| F.U | U (upper triangular) part of LU |
| F.p | (right) permutation Vector |
| F.P | (right) permutation Matrix |

Iterating the factorization produces the components F.L, F.U, and F.p.

The relationship between F and A is

F.L*F.U == A[F.p, :]

F further supports the following functions:

| Supported function | LU | LU{T,Tridiagonal{T}} |
|:-------------------|:---|:---------------------|
| / | ✓ | |
| \ | ✓ | ✓ |
| inv | ✓ | ✓ |
| det | ✓ | ✓ |
| logdet | ✓ | ✓ |
| logabsdet | ✓ | ✓ |
| size | ✓ | ✓ |

Examples

  julia> A = [4 3; 6 3]
  2×2 Matrix{Int64}:
   4  3
   6  3

  julia> F = lu(A)
  LU{Float64, Matrix{Float64}}
  L factor:
  2×2 Matrix{Float64}:
   1.0       0.0
   0.666667  1.0
  U factor:
  2×2 Matrix{Float64}:
   6.0  3.0
   0.0  1.0

  julia> F.L * F.U == A[F.p, :]
  true

  julia> l, u, p = lu(A); # destructuring via iteration

  julia> l == F.L && u == F.U && p == F.p
  true


  lu(A::SparseMatrixCSC; check = true) -> F::UmfpackLU

Compute the LU factorization of a sparse matrix A.

For sparse A with real or complex element type, the return type of F is UmfpackLU{Tv, Ti}, with Tv = Float64 or ComplexF64 respectively and Ti is an integer type (Int32 or Int64).

When check = true, an error is thrown if the decomposition fails. When check = false, responsibility for checking the decomposition’s validity (via issuccess) lies with the user.

The individual components of the factorization F can be accessed by indexing:

| Component | Description |
|:----------|:------------|
| L | L (lower triangular) part of LU |
| U | U (upper triangular) part of LU |
| p | right permutation Vector |
| q | left permutation Vector |
| Rs | Vector of scaling factors |
| : | (L,U,p,q,Rs) components |

The relation between F and A is

F.L*F.U == (F.Rs .* A)[F.p, F.q]
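A small check of this relation (illustrative sparse matrix of our own choosing):

  using SparseArrays, LinearAlgebra

  A = sparse([2.0 1.0; 4.0 3.0])
  F = lu(A)
  @assert F.L * F.U ≈ (F.Rs .* A)[F.p, F.q]  # row scaling, then row/column permutations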

F further supports the following functions: \, cond, and det.

Note

lu(A::SparseMatrixCSC) uses the UMFPACK library that is part of SuiteSparse. As this library only supports sparse matrices with Float64 or ComplexF64 elements, lu converts A into a copy that is of type SparseMatrixCSC{Float64} or SparseMatrixCSC{ComplexF64} as appropriate.


LinearAlgebra.lu! — Function

  lu!(A, pivot = RowMaximum(); check = true) -> LU

lu! is the same as lu, but saves space by overwriting the input A, instead of creating a copy. An InexactError exception is thrown if the factorization produces a number not representable by the element type of A, e.g. for integer types.

Examples

  julia> A = [4. 3.; 6. 3.]
  2×2 Matrix{Float64}:
   4.0  3.0
   6.0  3.0

  julia> F = lu!(A)
  LU{Float64, Matrix{Float64}}
  L factor:
  2×2 Matrix{Float64}:
   1.0       0.0
   0.666667  1.0
  U factor:
  2×2 Matrix{Float64}:
   6.0  3.0
   0.0  1.0

  julia> iA = [4 3; 6 3]
  2×2 Matrix{Int64}:
   4  3
   6  3

  julia> lu!(iA)
  ERROR: InexactError: Int64(0.6666666666666666)
  Stacktrace:
  [...]


  lu!(F::UmfpackLU, A::SparseMatrixCSC; check=true) -> F::UmfpackLU

Compute the LU factorization of a sparse matrix A, reusing the symbolic factorization of an already existing LU factorization stored in F. The sparse matrix A must have an identical nonzero pattern as the matrix used to create the LU factorization F, otherwise an error is thrown.

When check = true, an error is thrown if the decomposition fails. When check = false, responsibility for checking the decomposition’s validity (via issuccess) lies with the user.

Note

lu!(F::UmfpackLU, A::SparseMatrixCSC) uses the UMFPACK library that is part of SuiteSparse. As this library only supports sparse matrices with Float64 or ComplexF64 elements, lu! converts A into a copy that is of type SparseMatrixCSC{Float64} or SparseMatrixCSC{ComplexF64} as appropriate.

Julia 1.5

lu! for UmfpackLU requires at least Julia 1.5.

Examples

  julia> A = sparse(Float64[1.0 2.0; 0.0 3.0]);

  julia> F = lu(A);

  julia> B = sparse(Float64[1.0 1.0; 0.0 1.0]);

  julia> lu!(F, B);

  julia> F \ ones(2)
  2-element Vector{Float64}:
   0.0
   1.0


LinearAlgebra.Cholesky — Type

  Cholesky <: Factorization

Matrix factorization type of the Cholesky factorization of a dense symmetric/Hermitian positive definite matrix A. This is the return type of cholesky, the corresponding matrix factorization function.

The triangular Cholesky factor can be obtained from the factorization F::Cholesky via F.L and F.U, where A ≈ F.U' * F.U ≈ F.L * F.L'.

The following functions are available for Cholesky objects: size, \, inv, det, logdet and isposdef.

Iterating the decomposition produces the components L and U.

Examples

  julia> A = [4. 12. -16.; 12. 37. -43.; -16. -43. 98.]
  3×3 Matrix{Float64}:
     4.0   12.0  -16.0
    12.0   37.0  -43.0
   -16.0  -43.0   98.0

  julia> C = cholesky(A)
  Cholesky{Float64, Matrix{Float64}}
  U factor:
  3×3 UpperTriangular{Float64, Matrix{Float64}}:
   2.0  6.0  -8.0
    ⋅   1.0   5.0
    ⋅    ⋅    3.0

  julia> C.U
  3×3 UpperTriangular{Float64, Matrix{Float64}}:
   2.0  6.0  -8.0
    ⋅   1.0   5.0
    ⋅    ⋅    3.0

  julia> C.L
  3×3 LowerTriangular{Float64, Matrix{Float64}}:
    2.0   ⋅    ⋅
    6.0  1.0   ⋅
   -8.0  5.0  3.0

  julia> C.L * C.U == A
  true

  julia> l, u = C; # destructuring via iteration

  julia> l == C.L && u == C.U
  true


LinearAlgebra.CholeskyPivoted — Type

  CholeskyPivoted

Matrix factorization type of the pivoted Cholesky factorization of a dense symmetric/Hermitian positive semi-definite matrix A. This is the return type of cholesky(_, Val(true)), the corresponding matrix factorization function.

The triangular Cholesky factor can be obtained from the factorization F::CholeskyPivoted via F.L and F.U, and the permutation via F.p, where A[F.p, F.p] ≈ Ur' * Ur ≈ Lr * Lr' with Ur = F.U[1:F.rank, :] and Lr = F.L[:, 1:F.rank], or alternatively A ≈ Up' * Up ≈ Lp * Lp' with Up = F.U[1:F.rank, invperm(F.p)] and Lp = F.L[invperm(F.p), 1:F.rank].

The following functions are available for CholeskyPivoted objects: size, \, inv, det, and rank.

Iterating the decomposition produces the components L and U.

Examples

  julia> X = [1.0, 2.0, 3.0, 4.0];

  julia> A = X * X';

  julia> C = cholesky(A, Val(true), check = false)
  CholeskyPivoted{Float64, Matrix{Float64}}
  U factor with rank 1:
  4×4 UpperTriangular{Float64, Matrix{Float64}}:
   4.0  2.0  3.0  1.0
    ⋅   0.0  6.0  2.0
    ⋅    ⋅   9.0  3.0
    ⋅    ⋅    ⋅   1.0
  permutation:
  4-element Vector{Int64}:
   4
   2
   3
   1

  julia> C.U[1:C.rank, :]' * C.U[1:C.rank, :] ≈ A[C.p, C.p]
  true

  julia> l, u = C; # destructuring via iteration

  julia> l == C.L && u == C.U
  true


LinearAlgebra.cholesky — Function

  cholesky(A, Val(false); check = true) -> Cholesky

Compute the Cholesky factorization of a dense symmetric positive definite matrix A and return a Cholesky factorization. The matrix A can either be a Symmetric or Hermitian StridedMatrix or a perfectly symmetric or Hermitian StridedMatrix.

The triangular Cholesky factor can be obtained from the factorization F via F.L and F.U, where A ≈ F.U' * F.U ≈ F.L * F.L'.

The following functions are available for Cholesky objects: size, \, inv, det, logdet and isposdef.

If you have a matrix A that is slightly non-Hermitian due to roundoff errors in its construction, wrap it in Hermitian(A) before passing it to cholesky in order to treat it as perfectly Hermitian.

When check = true, an error is thrown if the decomposition fails. When check = false, responsibility for checking the decomposition’s validity (via issuccess) lies with the user.

Examples

  julia> A = [4. 12. -16.; 12. 37. -43.; -16. -43. 98.]
  3×3 Matrix{Float64}:
     4.0   12.0  -16.0
    12.0   37.0  -43.0
   -16.0  -43.0   98.0

  julia> C = cholesky(A)
  Cholesky{Float64, Matrix{Float64}}
  U factor:
  3×3 UpperTriangular{Float64, Matrix{Float64}}:
   2.0  6.0  -8.0
    ⋅   1.0   5.0
    ⋅    ⋅    3.0

  julia> C.U
  3×3 UpperTriangular{Float64, Matrix{Float64}}:
   2.0  6.0  -8.0
    ⋅   1.0   5.0
    ⋅    ⋅    3.0

  julia> C.L
  3×3 LowerTriangular{Float64, Matrix{Float64}}:
    2.0   ⋅    ⋅
    6.0  1.0   ⋅
   -8.0  5.0  3.0

  julia> C.L * C.U == A
  true


  cholesky(A, Val(true); tol = 0.0, check = true) -> CholeskyPivoted

Compute the pivoted Cholesky factorization of a dense symmetric positive semi-definite matrix A and return a CholeskyPivoted factorization. The matrix A can either be a Symmetric or Hermitian StridedMatrix or a perfectly symmetric or Hermitian StridedMatrix.

The triangular Cholesky factor can be obtained from the factorization F via F.L and F.U, and the permutation via F.p, where A[F.p, F.p] ≈ Ur' * Ur ≈ Lr * Lr' with Ur = F.U[1:F.rank, :] and Lr = F.L[:, 1:F.rank], or alternatively A ≈ Up' * Up ≈ Lp * Lp' with Up = F.U[1:F.rank, invperm(F.p)] and Lp = F.L[invperm(F.p), 1:F.rank].

The following functions are available for CholeskyPivoted objects: size, \, inv, det, and rank.

The argument tol determines the tolerance for determining the rank. For negative values, the tolerance is the machine precision.

If you have a matrix A that is slightly non-Hermitian due to roundoff errors in its construction, wrap it in Hermitian(A) before passing it to cholesky in order to treat it as perfectly Hermitian.

When check = true, an error is thrown if the decomposition fails. When check = false, responsibility for checking the decomposition’s validity (via issuccess) lies with the user.

Examples

  julia> X = [1.0, 2.0, 3.0, 4.0];

  julia> A = X * X';

  julia> C = cholesky(A, Val(true), check = false)
  CholeskyPivoted{Float64, Matrix{Float64}}
  U factor with rank 1:
  4×4 UpperTriangular{Float64, Matrix{Float64}}:
   4.0  2.0  3.0  1.0
    ⋅   0.0  6.0  2.0
    ⋅    ⋅   9.0  3.0
    ⋅    ⋅    ⋅   1.0
  permutation:
  4-element Vector{Int64}:
   4
   2
   3
   1

  julia> C.U[1:C.rank, :]' * C.U[1:C.rank, :] ≈ A[C.p, C.p]
  true

  julia> l, u = C; # destructuring via iteration

  julia> l == C.L && u == C.U
  true


  cholesky(A::SparseMatrixCSC; shift = 0.0, check = true, perm = nothing) -> CHOLMOD.Factor

Compute the Cholesky factorization of a sparse positive definite matrix A. A must be a SparseMatrixCSC or a Symmetric/Hermitian view of a SparseMatrixCSC. Note that even if A doesn’t have the type tag, it must still be symmetric or Hermitian. If perm is not given, a fill-reducing permutation is used. F = cholesky(A) is most frequently used to solve systems of equations with F\b, but also the methods diag, det, and logdet are defined for F. You can also extract individual factors from F, using F.L. However, since pivoting is on by default, the factorization is internally represented as A == P'*L*L'*P with a permutation matrix P; using just L without accounting for P will give incorrect answers. To include the effects of permutation, it’s typically preferable to extract “combined” factors like PtL = F.PtL (the equivalent of P'*L) and LtP = F.UP (the equivalent of L'*P).

When check = true, an error is thrown if the decomposition fails. When check = false, responsibility for checking the decomposition’s validity (via issuccess) lies with the user.

Setting the optional shift keyword argument computes the factorization of A+shift*I instead of A. If the perm argument is provided, it should be a permutation of 1:size(A,1) giving the ordering to use (instead of CHOLMOD’s default AMD ordering).

Examples

In the following example, the fill-reducing permutation used is [3, 2, 1]. If perm is set to 1:3 to enforce no permutation, the number of nonzero elements in the factor is 6.

  julia> A = [2 1 1; 1 2 0; 1 0 2]
  3×3 Matrix{Int64}:
   2  1  1
   1  2  0
   1  0  2

  julia> C = cholesky(sparse(A))
  SuiteSparse.CHOLMOD.Factor{Float64}
  type:    LLt
  method:  simplicial
  maxnnz:  5
  nnz:     5
  success: true

  julia> C.p
  3-element Vector{Int64}:
   3
   2
   1

  julia> L = sparse(C.L);

  julia> Matrix(L)
  3×3 Matrix{Float64}:
   1.41421   0.0       0.0
   0.0       1.41421   0.0
   0.707107  0.707107  1.0

  julia> L * L' ≈ A[C.p, C.p]
  true

  julia> P = sparse(1:3, C.p, ones(3))
  3×3 SparseMatrixCSC{Float64, Int64} with 3 stored entries:
    ⋅    ⋅   1.0
    ⋅   1.0   ⋅
   1.0   ⋅    ⋅

  julia> P' * L * L' * P ≈ A
  true

  julia> C = cholesky(sparse(A), perm=1:3)
  SuiteSparse.CHOLMOD.Factor{Float64}
  type:    LLt
  method:  simplicial
  maxnnz:  6
  nnz:     6
  success: true

  julia> L = sparse(C.L);

  julia> Matrix(L)
  3×3 Matrix{Float64}:
   1.41421    0.0       0.0
   0.707107   1.22474   0.0
   0.707107  -0.408248  1.1547

  julia> L * L' ≈ A
  true

Note

This method uses the CHOLMOD library from SuiteSparse, which only supports doubles or complex doubles. Input matrices not of those element types will be converted to SparseMatrixCSC{Float64} or SparseMatrixCSC{ComplexF64} as appropriate.

Many other functions from CHOLMOD are wrapped but not exported from the Base.SparseArrays.CHOLMOD module.


LinearAlgebra.cholesky! — Function

  cholesky!(A::StridedMatrix, Val(false); check = true) -> Cholesky

The same as cholesky, but saves space by overwriting the input A, instead of creating a copy. An InexactError exception is thrown if the factorization produces a number not representable by the element type of A, e.g. for integer types.

Examples

  julia> A = [1 2; 2 50]
  2×2 Matrix{Int64}:
   1   2
   2  50

  julia> cholesky!(A)
  ERROR: InexactError: Int64(6.782329983125268)
  Stacktrace:
  [...]


  cholesky!(A::StridedMatrix, Val(true); tol = 0.0, check = true) -> CholeskyPivoted

The same as cholesky, but saves space by overwriting the input A, instead of creating a copy. An InexactError exception is thrown if the factorization produces a number not representable by the element type of A, e.g. for integer types.


  cholesky!(F::CHOLMOD.Factor, A::SparseMatrixCSC; shift = 0.0, check = true) -> CHOLMOD.Factor

Compute the Cholesky ($LL'$) factorization of A, reusing the symbolic factorization F. A must be a SparseMatrixCSC or a Symmetric/Hermitian view of a SparseMatrixCSC. Note that even if A doesn’t have the type tag, it must still be symmetric or Hermitian.

See also cholesky.

Note

This method uses the CHOLMOD library from SuiteSparse, which only supports doubles or complex doubles. Input matrices not of those element types will be converted to SparseMatrixCSC{Float64} or SparseMatrixCSC{ComplexF64} as appropriate.


LinearAlgebra.lowrankupdate — Function

  lowrankupdate(C::Cholesky, v::AbstractVector) -> CC::Cholesky

Update a Cholesky factorization C with the vector v. If A = C.U'C.U then CC = cholesky(C.U'C.U + v*v') but the computation of CC only uses O(n^2) operations.
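For instance (values ours, for illustration), updating with a vector matches refactorizing from scratch:

  using LinearAlgebra

  A = [4.0 2.0; 2.0 3.0]             # positive definite
  C = cholesky(A)
  v = [1.0, 0.5]
  CC = lowrankupdate(C, v)
  @assert CC.U ≈ cholesky(A + v*v').U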


LinearAlgebra.lowrankdowndate — Function

  lowrankdowndate(C::Cholesky, v::AbstractVector) -> CC::Cholesky

Downdate a Cholesky factorization C with the vector v. If A = C.U'C.U then CC = cholesky(C.U'C.U - v*v') but the computation of CC only uses O(n^2) operations.
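A mirror-image sketch of the update example above (values ours; A is built so that A - v*v' stays positive definite):

  using LinearAlgebra

  v = [1.0, 0.5]
  A = [4.0 2.0; 2.0 3.0] + v*v'
  C = cholesky(A)
  CC = lowrankdowndate(C, v)
  @assert CC.U ≈ cholesky(A - v*v').U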


LinearAlgebra.lowrankupdate! — Function

  lowrankupdate!(C::Cholesky, v::AbstractVector) -> CC::Cholesky

Update a Cholesky factorization C with the vector v. If A = C.U'C.U then CC = cholesky(C.U'C.U + v*v') but the computation of CC only uses O(n^2) operations. The input factorization C is updated in place such that on exit C == CC. The vector v is destroyed during the computation.


LinearAlgebra.lowrankdowndate! — Function

  lowrankdowndate!(C::Cholesky, v::AbstractVector) -> CC::Cholesky

Downdate a Cholesky factorization C with the vector v. If A = C.U'C.U then CC = cholesky(C.U'C.U - v*v') but the computation of CC only uses O(n^2) operations. The input factorization C is updated in place such that on exit C == CC. The vector v is destroyed during the computation.


LinearAlgebra.LDLt — Type

  LDLt <: Factorization

Matrix factorization type of the LDLt factorization of a real SymTridiagonal matrix S such that S = L*Diagonal(d)*L', where L is a UnitLowerTriangular matrix and d is a vector. The main use of an LDLt factorization F = ldlt(S) is to solve the linear system of equations Sx = b with F\b. This is the return type of ldlt, the corresponding matrix factorization function.

The individual components of the factorization F::LDLt can be accessed via getproperty:

| Component | Description |
|:----------|:------------|
| F.L | L (unit lower triangular) part of LDLt |
| F.D | D (diagonal) part of LDLt |
| F.Lt | Lt (unit upper triangular) part of LDLt |
| F.d | diagonal values of D as a Vector |

Examples

  julia> S = SymTridiagonal([3., 4., 5.], [1., 2.])
  3×3 SymTridiagonal{Float64, Vector{Float64}}:
   3.0  1.0   ⋅
   1.0  4.0  2.0
    ⋅   2.0  5.0

  julia> F = ldlt(S)
  LDLt{Float64, SymTridiagonal{Float64, Vector{Float64}}}
  L factor:
  3×3 UnitLowerTriangular{Float64, SymTridiagonal{Float64, Vector{Float64}}}:
   1.0        ⋅         ⋅
   0.333333  1.0        ⋅
   0.0       0.545455  1.0
  D factor:
  3×3 Diagonal{Float64, Vector{Float64}}:
   3.0   ⋅        ⋅
    ⋅   3.66667   ⋅
    ⋅    ⋅       3.90909


LinearAlgebra.ldlt — Function

  ldlt(S::SymTridiagonal) -> LDLt

Compute an LDLt factorization of the real symmetric tridiagonal matrix S such that S = L*Diagonal(d)*L' where L is a unit lower triangular matrix and d is a vector. The main use of an LDLt factorization F = ldlt(S) is to solve the linear system of equations Sx = b with F\b.

Examples

  julia> S = SymTridiagonal([3., 4., 5.], [1., 2.])
  3×3 SymTridiagonal{Float64, Vector{Float64}}:
   3.0  1.0   ⋅
   1.0  4.0  2.0
    ⋅   2.0  5.0

  julia> ldltS = ldlt(S);

  julia> b = [6., 7., 8.];

  julia> ldltS \ b
  3-element Vector{Float64}:
   1.7906976744186047
   0.627906976744186
   1.3488372093023255

  julia> S \ b
  3-element Vector{Float64}:
   1.7906976744186047
   0.627906976744186
   1.3488372093023255


  ldlt(A::SparseMatrixCSC; shift = 0.0, check = true, perm=nothing) -> CHOLMOD.Factor

Compute the $LDL'$ factorization of a sparse matrix A. A must be a SparseMatrixCSC or a Symmetric/Hermitian view of a SparseMatrixCSC. Note that even if A doesn’t have the type tag, it must still be symmetric or Hermitian. A fill-reducing permutation is used. F = ldlt(A) is most frequently used to solve systems of equations A*x = b with F\b. The returned factorization object F also supports the methods diag, det, logdet, and inv. You can extract individual factors from F using F.L. However, since pivoting is on by default, the factorization is internally represented as A == P'*L*D*L'*P with a permutation matrix P; using just L without accounting for P will give incorrect answers. To include the effects of permutation, it is typically preferable to extract “combined” factors like PtL = F.PtL (the equivalent of P'*L) and LtP = F.UP (the equivalent of L'*P). The complete list of supported factors is :L, :PtL, :D, :UP, :U, :LD, :DU, :PtLD, :DUP.

When check = true, an error is thrown if the decomposition fails. When check = false, responsibility for checking the decomposition’s validity (via issuccess) lies with the user.

Setting the optional shift keyword argument computes the factorization of A+shift*I instead of A. If the perm argument is provided, it should be a permutation of 1:size(A,1) giving the ordering to use (instead of CHOLMOD’s default AMD ordering).
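A minimal usage sketch (the matrix here is an arbitrary symmetric example of our own):

  using SparseArrays, LinearAlgebra

  S = sparse([2.0 1.0 0.0; 1.0 2.0 1.0; 0.0 1.0 2.0])
  F = ldlt(S)          # S is symmetric, so no Symmetric wrapper is required
  b = [1.0, 2.0, 3.0]
  x = F \ b
  @assert S * x ≈ b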

Note

This method uses the CHOLMOD library from SuiteSparse, which only supports doubles or complex doubles. Input matrices not of those element types will be converted to SparseMatrixCSC{Float64} or SparseMatrixCSC{ComplexF64} as appropriate.

Many other functions from CHOLMOD are wrapped but not exported from the Base.SparseArrays.CHOLMOD module.


LinearAlgebra.ldlt! — Function

  ldlt!(S::SymTridiagonal) -> LDLt

Same as ldlt, but saves space by overwriting the input S, instead of creating a copy.

Examples

  julia> S = SymTridiagonal([3., 4., 5.], [1., 2.])
  3×3 SymTridiagonal{Float64, Vector{Float64}}:
   3.0  1.0   ⋅
   1.0  4.0  2.0
    ⋅   2.0  5.0

  julia> ldltS = ldlt!(S);

  julia> ldltS === S
  false

  julia> S
  3×3 SymTridiagonal{Float64, Vector{Float64}}:
   3.0       0.333333   ⋅
   0.333333  3.66667   0.545455
    ⋅        0.545455  3.90909


  ldlt!(F::CHOLMOD.Factor, A::SparseMatrixCSC; shift = 0.0, check = true) -> CHOLMOD.Factor

Compute the $LDL'$ factorization of A, reusing the symbolic factorization F. A must be a SparseMatrixCSC or a Symmetric/Hermitian view of a SparseMatrixCSC. Note that even if A doesn’t have the type tag, it must still be symmetric or Hermitian.

See also ldlt.

Note

This method uses the CHOLMOD library from SuiteSparse, which only supports doubles or complex doubles. Input matrices not of those element types will be converted to SparseMatrixCSC{Float64} or SparseMatrixCSC{ComplexF64} as appropriate.


LinearAlgebra.QR — Type

  QR <: Factorization

A QR matrix factorization stored in a packed format, typically obtained from qr. If $A$ is an m×n matrix, then

\[A = Q R\]

where $Q$ is an orthogonal/unitary matrix and $R$ is upper triangular. The matrix $Q$ is stored as a sequence of Householder reflectors $v_i$ and coefficients $\tau_i$ where:

\[Q = \prod_{i=1}^{\min(m,n)} (I - \tau_i v_i v_i^T).\]

Iterating the decomposition produces the components Q and R.

The object has two fields:

  • factors is an m×n matrix.

    • The upper triangular part contains the elements of $R$, that is R = triu(F.factors) for a QR object F.

    • The subdiagonal part contains the reflectors $v_i$ stored in a packed format where $v_i$ is the $i$th column of the matrix V = I + tril(F.factors, -1).

  • τ is a vector of length min(m,n) containing the coefficients $\tau_i$.
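To see the packed layout directly, one can force the plain QR type by using a non-BLAS element type (a sketch; the matrix and the slicing of factors are illustrative assumptions based on the field description above):

  using LinearAlgebra

  A = BigFloat[1 2; 3 4; 5 6]
  F = qr(A)                        # BigFloat is not a BLAS type, so F isa QR
  @assert F isa LinearAlgebra.QR
  R = triu(F.factors[1:2, :])      # R lives in the upper triangle of factors
  @assert R ≈ F.R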


LinearAlgebra.QRCompactWY — Type

  QRCompactWY <: Factorization

A QR matrix factorization stored in a compact blocked format, typically obtained from qr. If $A$ is an m×n matrix, then

\[A = Q R\]

where $Q$ is an orthogonal/unitary matrix and $R$ is upper triangular. It is similar to the QR format except that the orthogonal/unitary matrix $Q$ is stored in Compact WY format [Schreiber1989]. For the block size $n_b$, it is stored as an m×n lower trapezoidal matrix $V$ and a matrix $T = (T_1 \; T_2 \; \ldots \; T_{b-1} \; T_b')$ composed of $b = \lceil \min(m,n) / n_b \rceil$ upper triangular matrices $T_j$ of size $n_b$×$n_b$ ($j = 1, \ldots, b-1$) and an upper trapezoidal $n_b$×$\min(m,n) - (b-1) n_b$ matrix $T_b'$ ($j=b$) whose upper square part, denoted $T_b$, satisfies

\[Q = \prod_{i=1}^{\min(m,n)} (I - \tau_i v_i v_i^T) = \prod_{j=1}^{b} (I - V_j T_j V_j^T)\]

such that $v_i$ is the $i$th column of $V$, $\tau_i$ is the $i$th element of [diag(T_1); diag(T_2); …; diag(T_b)], and $(V_1 \; V_2 \; \ldots \; V_b)$ is the left m×min(m, n) block of $V$. When constructed using qr, the block size is given by $n_b = \min(m, n, 36)$.

Iterating the decomposition produces the components Q and R.

The object has two fields:

  • factors, as in the QR type, is an m×n matrix.

    • The upper triangular part contains the elements of $R$, that is R = triu(F.factors) for a QR object F.

    • The subdiagonal part contains the reflectors $v_i$ stored in a packed format such that V = I + tril(F.factors, -1).

  • T is a $n_b$-by-$\min(m,n)$ matrix as described above. The subdiagonal elements for each triangular matrix $T_j$ are ignored.

Note

This format should not be confused with the older WY representation [Bischof1987].


LinearAlgebra.QRPivoted — Type

  QRPivoted <: Factorization

A QR matrix factorization with column pivoting in a packed format, typically obtained from qr. If $A$ is an m×n matrix, then

\[A P = Q R\]

where $P$ is a permutation matrix, $Q$ is an orthogonal/unitary matrix and $R$ is upper triangular. The matrix $Q$ is stored as a sequence of Householder reflectors:

\[Q = \prod_{i=1}^{\min(m,n)} (I - \tau_i v_i v_i^T).\]

Iterating the decomposition produces the components Q, R, and p.

The object has three fields:

  • factors is an m×n matrix.

    • The upper triangular part contains the elements of $R$, that is R = triu(F.factors) for a QR object F.

    • The subdiagonal part contains the reflectors $v_i$ stored in a packed format where $v_i$ is the $i$th column of the matrix V = I + tril(F.factors, -1).

  • τ is a vector of length min(m,n) containing the coefficients $\tau_i$.

  • jpvt is an integer vector of length n corresponding to the permutation $P$.


LinearAlgebra.qr — Function

  qr(A, pivot = NoPivot(); blocksize) -> F

Compute the QR factorization of the matrix A: an orthogonal (or unitary if A is complex-valued) matrix Q, and an upper triangular matrix R such that

\[A = Q R\]

The returned object F stores the factorization in a packed format:

  • if pivot == ColumnNorm() then F is a QRPivoted object,

  • otherwise if the element type of A is a BLAS type (Float32, Float64, ComplexF32 or ComplexF64), then F is a QRCompactWY object,

  • otherwise F is a QR object.

The individual components of the decomposition F can be retrieved via property accessors:

  • F.Q: the orthogonal/unitary matrix Q
  • F.R: the upper triangular matrix R
  • F.p: the permutation vector of the pivot (QRPivoted only)
  • F.P: the permutation matrix of the pivot (QRPivoted only)

Iterating the decomposition produces the components Q, R, and if extant p.

The following functions are available for the QR objects: inv, size, and \. When A is rectangular, \ will return a least squares solution and if the solution is not unique, the one with smallest norm is returned. When A is not full rank, factorization with (column) pivoting is required to obtain a minimum norm solution.

Multiplication with respect to either full/square or non-full/square Q is allowed, i.e. both F.Q*F.R and F.Q*A are supported. A Q matrix can be converted into a regular matrix with Matrix. This operation returns the “thin” Q factor, i.e., if A is m×n with m>=n, then Matrix(F.Q) yields an m×n matrix with orthonormal columns. To retrieve the “full” Q factor, an m×m orthogonal matrix, use F.Q*Matrix(I,m,m). If m<=n, then Matrix(F.Q) yields an m×m orthogonal matrix.
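For example (a sketch with an arbitrary 4×2 matrix):

  using LinearAlgebra

  A = [1.0 2.0; 3.0 4.0; 5.0 6.0; 7.0 8.0]
  F = qr(A)
  Qthin = Matrix(F.Q)             # 4×2 "thin" factor with orthonormal columns
  Qfull = F.Q * Matrix(I, 4, 4)   # 4×4 "full" orthogonal factor
  @assert size(Qthin) == (4, 2) && size(Qfull) == (4, 4)
  @assert Qthin' * Qthin ≈ Matrix(I, 2, 2)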

The block size for QR decomposition can be specified by keyword argument blocksize :: Integer when pivot == NoPivot() and A isa StridedMatrix{<:BlasFloat}. It is ignored when blocksize > minimum(size(A)). See QRCompactWY.

Julia 1.4

The blocksize keyword argument requires Julia 1.4 or later.

Examples

  julia> A = [3.0 -6.0; 4.0 -8.0; 0.0 1.0]
  3×2 Matrix{Float64}:
   3.0  -6.0
   4.0  -8.0
   0.0   1.0

  julia> F = qr(A)
  LinearAlgebra.QRCompactWY{Float64, Matrix{Float64}}
  Q factor:
  3×3 LinearAlgebra.QRCompactWYQ{Float64, Matrix{Float64}}:
   -0.6   0.0   0.8
   -0.8   0.0  -0.6
    0.0  -1.0   0.0
  R factor:
  2×2 Matrix{Float64}:
   -5.0  10.0
    0.0  -1.0

  julia> F.Q * F.R == A
  true

Note

qr returns multiple types because LAPACK uses several representations that minimize the memory storage requirements of products of Householder elementary reflectors, so that the Q and R matrices can be stored compactly rather than as two separate dense matrices.


  qr(A::SparseMatrixCSC; tol=_default_tol(A), ordering=ORDERING_DEFAULT) -> QRSparse

Compute the QR factorization of a sparse matrix A. Fill-reducing row and column permutations are used such that F.R = F.Q'*A[F.prow,F.pcol]. The main application of this type is to solve least squares or underdetermined problems with \. The function calls the C library SPQR.

Note

qr(A::SparseMatrixCSC) uses the SPQR library that is part of SuiteSparse. As this library only supports sparse matrices with Float64 or ComplexF64 elements, as of Julia v1.4 qr converts A into a copy that is of type SparseMatrixCSC{Float64} or SparseMatrixCSC{ComplexF64} as appropriate.

Examples

  julia> A = sparse([1,2,3,4], [1,1,2,2], [1.0,1.0,1.0,1.0])
  4×2 SparseMatrixCSC{Float64, Int64} with 4 stored entries:
   1.0   ⋅
   1.0   ⋅
    ⋅   1.0
    ⋅   1.0

  julia> qr(A)
  SuiteSparse.SPQR.QRSparse{Float64, Int64}
  Q factor:
  4×4 SuiteSparse.SPQR.QRSparseQ{Float64, Int64}:
   -0.707107   0.0        0.0       -0.707107
    0.0       -0.707107  -0.707107   0.0
    0.0       -0.707107   0.707107   0.0
   -0.707107   0.0        0.0        0.707107
  R factor:
  2×2 SparseMatrixCSC{Float64, Int64} with 2 stored entries:
   -1.41421    ⋅
     ⋅       -1.41421
  Row permutation:
  4-element Vector{Int64}:
   1
   3
   4
   2
  Column permutation:
  2-element Vector{Int64}:
   1
   2


LinearAlgebra.qr! — Function

  qr!(A, pivot = NoPivot(); blocksize)

qr! is the same as qr when A is a subtype of StridedMatrix, but saves space by overwriting the input A, instead of creating a copy. An InexactError exception is thrown if the factorization produces a number not representable by the element type of A, e.g. for integer types.

Julia 1.4

The blocksize keyword argument requires Julia 1.4 or later.

Examples

  julia> a = [1. 2.; 3. 4.]
  2×2 Matrix{Float64}:
   1.0  2.0
   3.0  4.0

  julia> qr!(a)
  LinearAlgebra.QRCompactWY{Float64, Matrix{Float64}}
  Q factor:
  2×2 LinearAlgebra.QRCompactWYQ{Float64, Matrix{Float64}}:
   -0.316228  -0.948683
   -0.948683   0.316228
  R factor:
  2×2 Matrix{Float64}:
   -3.16228  -4.42719
    0.0      -0.632456

  julia> a = [1 2; 3 4]
  2×2 Matrix{Int64}:
   1  2
   3  4

  julia> qr!(a)
  ERROR: InexactError: Int64(3.1622776601683795)
  Stacktrace:
  [...]

source

LinearAlgebra.LQ — Type

  1. LQ <: Factorization

Matrix factorization type of the LQ factorization of a matrix A. The LQ decomposition is the QR decomposition of transpose(A). This is the return type of lq, the corresponding matrix factorization function.

If S::LQ is the factorization object, the lower triangular component can be obtained via S.L, and the orthogonal/unitary component via S.Q, such that A ≈ S.L*S.Q.

Iterating the decomposition produces the components S.L and S.Q.

Examples

  1. julia> A = [5. 7.; -2. -4.]
  2. 2×2 Matrix{Float64}:
  3. 5.0 7.0
  4. -2.0 -4.0
  5. julia> S = lq(A)
  6. LQ{Float64, Matrix{Float64}}
  7. L factor:
  8. 2×2 Matrix{Float64}:
  9. -8.60233 0.0
  10. 4.41741 -0.697486
  11. Q factor:
  12. 2×2 LinearAlgebra.LQPackedQ{Float64, Matrix{Float64}}:
  13. -0.581238 -0.813733
  14. -0.813733 0.581238
  15. julia> S.L * S.Q
  16. 2×2 Matrix{Float64}:
  17. 5.0 7.0
  18. -2.0 -4.0
  19. julia> l, q = S; # destructuring via iteration
  20. julia> l == S.L && q == S.Q
  21. true

source

LinearAlgebra.lq — Function

  1. lq(A) -> S::LQ

Compute the LQ decomposition of A. The decomposition’s lower triangular component can be obtained from the LQ object S via S.L, and the orthogonal/unitary component via S.Q, such that A ≈ S.L*S.Q.

Iterating the decomposition produces the components S.L and S.Q.

The LQ decomposition is the QR decomposition of transpose(A), and it is useful in order to compute the minimum-norm solution lq(A) \ b to an underdetermined system of equations (A has more columns than rows, but has full row rank).
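
A minimal sketch of such an underdetermined solve (the 2×3 full-row-rank matrix is an arbitrary illustrative choice):

  1. julia> A = [1.0 2.0 3.0; 4.0 5.0 6.0];
  2. julia> b = [1.0, 2.0];
  3. julia> x = lq(A) \ b; # minimum-norm solution
  4. julia> A * x ≈ b
  5. true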

Examples

  1. julia> A = [5. 7.; -2. -4.]
  2. 2×2 Matrix{Float64}:
  3. 5.0 7.0
  4. -2.0 -4.0
  5. julia> S = lq(A)
  6. LQ{Float64, Matrix{Float64}}
  7. L factor:
  8. 2×2 Matrix{Float64}:
  9. -8.60233 0.0
  10. 4.41741 -0.697486
  11. Q factor:
  12. 2×2 LinearAlgebra.LQPackedQ{Float64, Matrix{Float64}}:
  13. -0.581238 -0.813733
  14. -0.813733 0.581238
  15. julia> S.L * S.Q
  16. 2×2 Matrix{Float64}:
  17. 5.0 7.0
  18. -2.0 -4.0
  19. julia> l, q = S; # destructuring via iteration
  20. julia> l == S.L && q == S.Q
  21. true

source

LinearAlgebra.lq! — Function

  1. lq!(A) -> LQ

Compute the LQ factorization of A, using the input matrix as a workspace. See also lq.

source

LinearAlgebra.BunchKaufman — Type

  1. BunchKaufman <: Factorization

Matrix factorization type of the Bunch-Kaufman factorization of a symmetric or Hermitian matrix A as P'UDU'P or P'LDL'P, depending on whether the upper (the default) or the lower triangle is stored in A. If A is complex symmetric then U' and L' denote the unconjugated transposes, i.e. transpose(U) and transpose(L), respectively. This is the return type of bunchkaufman, the corresponding matrix factorization function.

If S::BunchKaufman is the factorization object, the components can be obtained via S.D, S.U or S.L as appropriate given S.uplo, and S.p.

Iterating the decomposition produces the components S.D, S.U or S.L as appropriate given S.uplo, and S.p.

Examples

  1. julia> A = [1 2; 2 3]
  2. 2×2 Matrix{Int64}:
  3. 1 2
  4. 2 3
  5. julia> S = bunchkaufman(A) # A gets wrapped internally by Symmetric(A)
  6. BunchKaufman{Float64, Matrix{Float64}}
  7. D factor:
  8. 2×2 Tridiagonal{Float64, Vector{Float64}}:
  9. -0.333333 0.0
  10. 0.0 3.0
  11. U factor:
  12. 2×2 UnitUpperTriangular{Float64, Matrix{Float64}}:
  13. 1.0 0.666667
  14.  ⋅   1.0
  15. permutation:
  16. 2-element Vector{Int64}:
  17. 1
  18. 2
  19. julia> d, u, p = S; # destructuring via iteration
  20. julia> d == S.D && u == S.U && p == S.p
  21. true
  22. julia> S = bunchkaufman(Symmetric(A, :L))
  23. BunchKaufman{Float64, Matrix{Float64}}
  24. D factor:
  25. 2×2 Tridiagonal{Float64, Vector{Float64}}:
  26. 3.0 0.0
  27. 0.0 -0.333333
  28. L factor:
  29. 2×2 UnitLowerTriangular{Float64, Matrix{Float64}}:
  30. 1.0       ⋅
  31. 0.666667 1.0
  32. permutation:
  33. 2-element Vector{Int64}:
  34. 2
  35. 1

source

LinearAlgebra.bunchkaufman — Function

  1. bunchkaufman(A, rook::Bool=false; check = true) -> S::BunchKaufman

Compute the Bunch-Kaufman [Bunch1977] factorization of a symmetric or Hermitian matrix A as P'*U*D*U'*P or P'*L*D*L'*P, depending on which triangle is stored in A, and return a BunchKaufman object. Note that if A is complex symmetric then U' and L' denote the unconjugated transposes, i.e. transpose(U) and transpose(L).

Iterating the decomposition produces the components S.D, S.U or S.L as appropriate given S.uplo, and S.p.

If rook is true, rook pivoting is used; if rook is false, it is not.

When check = true, an error is thrown if the decomposition fails. When check = false, responsibility for checking the decomposition’s validity (via issuccess) lies with the user.

The following functions are available for BunchKaufman objects: size, \, inv, issymmetric, ishermitian, getindex.

Examples

  1. julia> A = [1 2; 2 3]
  2. 2×2 Matrix{Int64}:
  3. 1 2
  4. 2 3
  5. julia> S = bunchkaufman(A) # A gets wrapped internally by Symmetric(A)
  6. BunchKaufman{Float64, Matrix{Float64}}
  7. D factor:
  8. 2×2 Tridiagonal{Float64, Vector{Float64}}:
  9. -0.333333 0.0
  10. 0.0 3.0
  11. U factor:
  12. 2×2 UnitUpperTriangular{Float64, Matrix{Float64}}:
  13. 1.0 0.666667
  14.  ⋅   1.0
  15. permutation:
  16. 2-element Vector{Int64}:
  17. 1
  18. 2
  19. julia> d, u, p = S; # destructuring via iteration
  20. julia> d == S.D && u == S.U && p == S.p
  21. true
  22. julia> S.U*S.D*S.U' - S.P*A*S.P'
  23. 2×2 Matrix{Float64}:
  24. 0.0 0.0
  25. 0.0 0.0
  26. julia> S = bunchkaufman(Symmetric(A, :L))
  27. BunchKaufman{Float64, Matrix{Float64}}
  28. D factor:
  29. 2×2 Tridiagonal{Float64, Vector{Float64}}:
  30. 3.0 0.0
  31. 0.0 -0.333333
  32. L factor:
  33. 2×2 UnitLowerTriangular{Float64, Matrix{Float64}}:
  34. 1.0       ⋅
  35. 0.666667 1.0
  36. permutation:
  37. 2-element Vector{Int64}:
  38. 2
  39. 1
  40. julia> S.L*S.D*S.L' - A[S.p, S.p]
  41. 2×2 Matrix{Float64}:
  42. 0.0 0.0
  43. 0.0 0.0

source

LinearAlgebra.bunchkaufman! — Function

  1. bunchkaufman!(A, rook::Bool=false; check = true) -> BunchKaufman

bunchkaufman! is the same as bunchkaufman, but saves space by overwriting the input A, instead of creating a copy.

source

LinearAlgebra.Eigen — Type

  1. Eigen <: Factorization

Matrix factorization type of the eigenvalue/spectral decomposition of a square matrix A. This is the return type of eigen, the corresponding matrix factorization function.

If F::Eigen is the factorization object, the eigenvalues can be obtained via F.values and the eigenvectors as the columns of the matrix F.vectors. (The kth eigenvector can be obtained from the slice F.vectors[:, k].)

Iterating the decomposition produces the components F.values and F.vectors.

Examples

  1. julia> F = eigen([1.0 0.0 0.0; 0.0 3.0 0.0; 0.0 0.0 18.0])
  2. Eigen{Float64, Float64, Matrix{Float64}, Vector{Float64}}
  3. values:
  4. 3-element Vector{Float64}:
  5. 1.0
  6. 3.0
  7. 18.0
  8. vectors:
  9. 3×3 Matrix{Float64}:
  10. 1.0 0.0 0.0
  11. 0.0 1.0 0.0
  12. 0.0 0.0 1.0
  13. julia> F.values
  14. 3-element Vector{Float64}:
  15. 1.0
  16. 3.0
  17. 18.0
  18. julia> F.vectors
  19. 3×3 Matrix{Float64}:
  20. 1.0 0.0 0.0
  21. 0.0 1.0 0.0
  22. 0.0 0.0 1.0
  23. julia> vals, vecs = F; # destructuring via iteration
  24. julia> vals == F.values && vecs == F.vectors
  25. true

source

LinearAlgebra.GeneralizedEigen — Type

  1. GeneralizedEigen <: Factorization

Matrix factorization type of the generalized eigenvalue/spectral decomposition of A and B. This is the return type of eigen, the corresponding matrix factorization function, when called with two matrix arguments.

If F::GeneralizedEigen is the factorization object, the eigenvalues can be obtained via F.values and the eigenvectors as the columns of the matrix F.vectors. (The kth eigenvector can be obtained from the slice F.vectors[:, k].)

Iterating the decomposition produces the components F.values and F.vectors.

Examples

  1. julia> A = [1 0; 0 -1]
  2. 2×2 Matrix{Int64}:
  3. 1 0
  4. 0 -1
  5. julia> B = [0 1; 1 0]
  6. 2×2 Matrix{Int64}:
  7. 0 1
  8. 1 0
  9. julia> F = eigen(A, B)
  10. GeneralizedEigen{ComplexF64, ComplexF64, Matrix{ComplexF64}, Vector{ComplexF64}}
  11. values:
  12. 2-element Vector{ComplexF64}:
  13. 0.0 - 1.0im
  14. 0.0 + 1.0im
  15. vectors:
  16. 2×2 Matrix{ComplexF64}:
  17. 0.0+1.0im 0.0-1.0im
  18. -1.0+0.0im -1.0-0.0im
  19. julia> F.values
  20. 2-element Vector{ComplexF64}:
  21. 0.0 - 1.0im
  22. 0.0 + 1.0im
  23. julia> F.vectors
  24. 2×2 Matrix{ComplexF64}:
  25. 0.0+1.0im 0.0-1.0im
  26. -1.0+0.0im -1.0-0.0im
  27. julia> vals, vecs = F; # destructuring via iteration
  28. julia> vals == F.values && vecs == F.vectors
  29. true

source

LinearAlgebra.eigvals — Function

  1. eigvals(A; permute::Bool=true, scale::Bool=true, sortby) -> values

Return the eigenvalues of A.

For general non-symmetric matrices it is possible to specify how the matrix is balanced before the eigenvalue calculation. The permute, scale, and sortby keywords are the same as for eigen!.

Examples

  1. julia> diag_matrix = [1 0; 0 4]
  2. 2×2 Matrix{Int64}:
  3. 1 0
  4. 0 4
  5. julia> eigvals(diag_matrix)
  6. 2-element Vector{Float64}:
  7. 1.0
  8. 4.0
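
The sortby keyword is not shown above; a minimal sketch that sorts the eigenvalues of a non-symmetric matrix in descending order (the matrix is an arbitrary illustrative choice):

  1. julia> eigvals([3 1; 0 1]; sortby = λ -> -λ)
  2. 2-element Vector{Float64}:
  3. 3.0
  4. 1.0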

source

For a scalar input, eigvals will return a scalar.

Example

  1. julia> eigvals(-2)
  2. -2

source

  1. eigvals(A, B) -> values

Computes the generalized eigenvalues of A and B.

Examples

  1. julia> A = [1 0; 0 -1]
  2. 2×2 Matrix{Int64}:
  3. 1 0
  4. 0 -1
  5. julia> B = [0 1; 1 0]
  6. 2×2 Matrix{Int64}:
  7. 0 1
  8. 1 0
  9. julia> eigvals(A,B)
  10. 2-element Vector{ComplexF64}:
  11. 0.0 - 1.0im
  12. 0.0 + 1.0im

source

  1. eigvals(A::Union{SymTridiagonal, Hermitian, Symmetric}, irange::UnitRange) -> values

Returns the eigenvalues of A. It is possible to calculate only a subset of the eigenvalues by specifying a UnitRange irange covering indices of the sorted eigenvalues, e.g. the 2nd to 8th eigenvalues.

Examples

  1. julia> A = SymTridiagonal([1.; 2.; 1.], [2.; 3.])
  2. 3×3 SymTridiagonal{Float64, Vector{Float64}}:
  3. 1.0 2.0  ⋅
  4. 2.0 2.0 3.0
  5.  ⋅  3.0 1.0
  6. julia> eigvals(A, 2:2)
  7. 1-element Vector{Float64}:
  8. 0.9999999999999996
  9. julia> eigvals(A)
  10. 3-element Vector{Float64}:
  11. -2.1400549446402604
  12. 1.0000000000000002
  13. 5.140054944640259

source

  1. eigvals(A::Union{SymTridiagonal, Hermitian, Symmetric}, vl::Real, vu::Real) -> values

Returns the eigenvalues of A. It is possible to calculate only a subset of the eigenvalues by specifying a pair vl and vu for the lower and upper boundaries of the eigenvalues.

Examples

  1. julia> A = SymTridiagonal([1.; 2.; 1.], [2.; 3.])
  2. 3×3 SymTridiagonal{Float64, Vector{Float64}}:
  3. 1.0 2.0  ⋅
  4. 2.0 2.0 3.0
  5.  ⋅  3.0 1.0
  6. julia> eigvals(A, -1, 2)
  7. 1-element Vector{Float64}:
  8. 1.0000000000000009
  9. julia> eigvals(A)
  10. 3-element Vector{Float64}:
  11. -2.1400549446402604
  12. 1.0000000000000002
  13. 5.140054944640259

source

LinearAlgebra.eigvals! — Function

  1. eigvals!(A; permute::Bool=true, scale::Bool=true, sortby) -> values

Same as eigvals, but saves space by overwriting the input A, instead of creating a copy. The permute, scale, and sortby keywords are the same as for eigen.

Note

The input matrix A will not contain its eigenvalues after eigvals! is called on it - A is used as a workspace.

Examples

  1. julia> A = [1. 2.; 3. 4.]
  2. 2×2 Matrix{Float64}:
  3. 1.0 2.0
  4. 3.0 4.0
  5. julia> eigvals!(A)
  6. 2-element Vector{Float64}:
  7. -0.3722813232690143
  8. 5.372281323269014
  9. julia> A
  10. 2×2 Matrix{Float64}:
  11. -0.372281 -1.0
  12. 0.0 5.37228

source

  1. eigvals!(A, B; sortby) -> values

Same as eigvals, but saves space by overwriting the input A (and B), instead of creating copies.

Note

The input matrices A and B will not contain their eigenvalues after eigvals! is called. They are used as workspaces.

Examples

  1. julia> A = [1. 0.; 0. -1.]
  2. 2×2 Matrix{Float64}:
  3. 1.0 0.0
  4. 0.0 -1.0
  5. julia> B = [0. 1.; 1. 0.]
  6. 2×2 Matrix{Float64}:
  7. 0.0 1.0
  8. 1.0 0.0
  9. julia> eigvals!(A, B)
  10. 2-element Vector{ComplexF64}:
  11. 0.0 - 1.0im
  12. 0.0 + 1.0im
  13. julia> A
  14. 2×2 Matrix{Float64}:
  15. -0.0 -1.0
  16. 1.0 -0.0
  17. julia> B
  18. 2×2 Matrix{Float64}:
  19. 1.0 0.0
  20. 0.0 1.0

source

  1. eigvals!(A::Union{SymTridiagonal, Hermitian, Symmetric}, irange::UnitRange) -> values

Same as eigvals, but saves space by overwriting the input A, instead of creating a copy. irange is a range of eigenvalue indices to search for - for instance, the 2nd to 8th eigenvalues.

source

  1. eigvals!(A::Union{SymTridiagonal, Hermitian, Symmetric}, vl::Real, vu::Real) -> values

Same as eigvals, but saves space by overwriting the input A, instead of creating a copy. vl is the lower bound of the interval to search for eigenvalues, and vu is the upper bound.

source

LinearAlgebra.eigmax — Function

  1. eigmax(A; permute::Bool=true, scale::Bool=true)

Return the largest eigenvalue of A. The option permute=true permutes the matrix to become closer to upper triangular, and scale=true scales the matrix by its diagonal elements to make rows and columns more equal in norm. Note that if the eigenvalues of A are complex, this method will fail, since complex numbers cannot be sorted.

Examples

  1. julia> A = [0 im; -im 0]
  2. 2×2 Matrix{Complex{Int64}}:
  3. 0+0im 0+1im
  4. 0-1im 0+0im
  5. julia> eigmax(A)
  6. 1.0
  7. julia> A = [0 im; -1 0]
  8. 2×2 Matrix{Complex{Int64}}:
  9. 0+0im 0+1im
  10. -1+0im 0+0im
  11. julia> eigmax(A)
  12. ERROR: DomainError with Complex{Int64}[0+0im 0+1im; -1+0im 0+0im]:
  13. `A` cannot have complex eigenvalues.
  14. Stacktrace:
  15. [...]

source

LinearAlgebra.eigmin — Function

  1. eigmin(A; permute::Bool=true, scale::Bool=true)

Return the smallest eigenvalue of A. The option permute=true permutes the matrix to become closer to upper triangular, and scale=true scales the matrix by its diagonal elements to make rows and columns more equal in norm. Note that if the eigenvalues of A are complex, this method will fail, since complex numbers cannot be sorted.

Examples

  1. julia> A = [0 im; -im 0]
  2. 2×2 Matrix{Complex{Int64}}:
  3. 0+0im 0+1im
  4. 0-1im 0+0im
  5. julia> eigmin(A)
  6. -1.0
  7. julia> A = [0 im; -1 0]
  8. 2×2 Matrix{Complex{Int64}}:
  9. 0+0im 0+1im
  10. -1+0im 0+0im
  11. julia> eigmin(A)
  12. ERROR: DomainError with Complex{Int64}[0+0im 0+1im; -1+0im 0+0im]:
  13. `A` cannot have complex eigenvalues.
  14. Stacktrace:
  15. [...]

source

LinearAlgebra.eigvecs — Function

  1. eigvecs(A::SymTridiagonal[, eigvals]) -> Matrix

Return a matrix M whose columns are the eigenvectors of A. (The kth eigenvector can be obtained from the slice M[:, k].)

If the optional vector of eigenvalues eigvals is specified, eigvecs returns the specific corresponding eigenvectors.

Examples

  1. julia> A = SymTridiagonal([1.; 2.; 1.], [2.; 3.])
  2. 3×3 SymTridiagonal{Float64, Vector{Float64}}:
  3. 1.0 2.0  ⋅
  4. 2.0 2.0 3.0
  5.  ⋅  3.0 1.0
  6. julia> eigvals(A)
  7. 3-element Vector{Float64}:
  8. -2.1400549446402604
  9. 1.0000000000000002
  10. 5.140054944640259
  11. julia> eigvecs(A)
  12. 3×3 Matrix{Float64}:
  13. 0.418304 -0.83205 0.364299
  14. -0.656749 -7.39009e-16 0.754109
  15. 0.627457 0.5547 0.546448
  16. julia> eigvecs(A, [1.])
  17. 3×1 Matrix{Float64}:
  18. 0.8320502943378438
  19. 4.263514128092366e-17
  20. -0.5547001962252291

source

  1. eigvecs(A; permute::Bool=true, scale::Bool=true, `sortby`) -> Matrix

Return a matrix M whose columns are the eigenvectors of A. (The kth eigenvector can be obtained from the slice M[:, k].) The permute, scale, and sortby keywords are the same as for eigen.

Examples

  1. julia> eigvecs([1.0 0.0 0.0; 0.0 3.0 0.0; 0.0 0.0 18.0])
  2. 3×3 Matrix{Float64}:
  3. 1.0 0.0 0.0
  4. 0.0 1.0 0.0
  5. 0.0 0.0 1.0

source

  1. eigvecs(A, B) -> Matrix

Return a matrix M whose columns are the generalized eigenvectors of A and B. (The kth eigenvector can be obtained from the slice M[:, k].)

Examples

  1. julia> A = [1 0; 0 -1]
  2. 2×2 Matrix{Int64}:
  3. 1 0
  4. 0 -1
  5. julia> B = [0 1; 1 0]
  6. 2×2 Matrix{Int64}:
  7. 0 1
  8. 1 0
  9. julia> eigvecs(A, B)
  10. 2×2 Matrix{ComplexF64}:
  11. 0.0+1.0im 0.0-1.0im
  12. -1.0+0.0im -1.0-0.0im

source

LinearAlgebra.eigen — Function

  1. eigen(A; permute::Bool=true, scale::Bool=true, sortby) -> Eigen

Computes the eigenvalue decomposition of A, returning an Eigen factorization object F which contains the eigenvalues in F.values and the eigenvectors in the columns of the matrix F.vectors. (The kth eigenvector can be obtained from the slice F.vectors[:, k].)

Iterating the decomposition produces the components F.values and F.vectors.

The following functions are available for Eigen objects: inv, det, and isposdef.

For general nonsymmetric matrices it is possible to specify how the matrix is balanced before the eigenvector calculation. The option permute=true permutes the matrix to become closer to upper triangular, and scale=true scales the matrix by its diagonal elements to make rows and columns more equal in norm. The default is true for both options.

By default, the eigenvalues and vectors are sorted lexicographically by (real(λ),imag(λ)). A different comparison function by(λ) can be passed to sortby, or you can pass sortby=nothing to leave the eigenvalues in an arbitrary order. Some special matrix types (e.g. Diagonal or SymTridiagonal) may implement their own sorting convention and not accept a sortby keyword.
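
Since the columns of F.vectors diagonalize A when A is diagonalizable, the factorization can be checked by reconstruction; a quick sketch:

  1. julia> A = [2.0 1.0; 1.0 2.0];
  2. julia> F = eigen(A);
  3. julia> A ≈ F.vectors * Diagonal(F.values) * inv(F.vectors)
  4. true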

Examples

  1. julia> F = eigen([1.0 0.0 0.0; 0.0 3.0 0.0; 0.0 0.0 18.0])
  2. Eigen{Float64, Float64, Matrix{Float64}, Vector{Float64}}
  3. values:
  4. 3-element Vector{Float64}:
  5. 1.0
  6. 3.0
  7. 18.0
  8. vectors:
  9. 3×3 Matrix{Float64}:
  10. 1.0 0.0 0.0
  11. 0.0 1.0 0.0
  12. 0.0 0.0 1.0
  13. julia> F.values
  14. 3-element Vector{Float64}:
  15. 1.0
  16. 3.0
  17. 18.0
  18. julia> F.vectors
  19. 3×3 Matrix{Float64}:
  20. 1.0 0.0 0.0
  21. 0.0 1.0 0.0
  22. 0.0 0.0 1.0
  23. julia> vals, vecs = F; # destructuring via iteration
  24. julia> vals == F.values && vecs == F.vectors
  25. true

source

  1. eigen(A, B) -> GeneralizedEigen

Computes the generalized eigenvalue decomposition of A and B, returning a GeneralizedEigen factorization object F which contains the generalized eigenvalues in F.values and the generalized eigenvectors in the columns of the matrix F.vectors. (The kth generalized eigenvector can be obtained from the slice F.vectors[:, k].)

Iterating the decomposition produces the components F.values and F.vectors.

Any keyword arguments passed to eigen are passed through to the lower-level eigen! function.

Examples

  1. julia> A = [1 0; 0 -1]
  2. 2×2 Matrix{Int64}:
  3. 1 0
  4. 0 -1
  5. julia> B = [0 1; 1 0]
  6. 2×2 Matrix{Int64}:
  7. 0 1
  8. 1 0
  9. julia> F = eigen(A, B);
  10. julia> F.values
  11. 2-element Vector{ComplexF64}:
  12. 0.0 - 1.0im
  13. 0.0 + 1.0im
  14. julia> F.vectors
  15. 2×2 Matrix{ComplexF64}:
  16. 0.0+1.0im 0.0-1.0im
  17. -1.0+0.0im -1.0-0.0im
  18. julia> vals, vecs = F; # destructuring via iteration
  19. julia> vals == F.values && vecs == F.vectors
  20. true

source

  1. eigen(A::Union{SymTridiagonal, Hermitian, Symmetric}, irange::UnitRange) -> Eigen

Computes the eigenvalue decomposition of A, returning an Eigen factorization object F which contains the eigenvalues in F.values and the eigenvectors in the columns of the matrix F.vectors. (The kth eigenvector can be obtained from the slice F.vectors[:, k].)

Iterating the decomposition produces the components F.values and F.vectors.

The following functions are available for Eigen objects: inv, det, and isposdef.

The UnitRange irange specifies indices of the sorted eigenvalues to search for.

Note

If irange is not 1:n, where n is the dimension of A, then the returned factorization will be a truncated factorization.

source

  1. eigen(A::Union{SymTridiagonal, Hermitian, Symmetric}, vl::Real, vu::Real) -> Eigen

Computes the eigenvalue decomposition of A, returning an Eigen factorization object F which contains the eigenvalues in F.values and the eigenvectors in the columns of the matrix F.vectors. (The kth eigenvector can be obtained from the slice F.vectors[:, k].)

Iterating the decomposition produces the components F.values and F.vectors.

The following functions are available for Eigen objects: inv, det, and isposdef.

vl is the lower bound of the window of eigenvalues to search for, and vu is the upper bound.

Note

If [vl, vu] does not contain all eigenvalues of A, then the returned factorization will be a truncated factorization.

source

LinearAlgebra.eigen! — Function

  1. eigen!(A, [B]; permute, scale, sortby)

Same as eigen, but saves space by overwriting the input A (and B), instead of creating a copy.

source

LinearAlgebra.Hessenberg — Type

  1. Hessenberg <: Factorization

A Hessenberg object represents the Hessenberg factorization QHQ' of a square matrix, or a shift Q(H+μI)Q' thereof, which is produced by the hessenberg function.

source

LinearAlgebra.hessenberg — Function

  1. hessenberg(A) -> Hessenberg

Compute the Hessenberg decomposition of A and return a Hessenberg object. If F is the factorization object, the unitary matrix can be accessed with F.Q (of type LinearAlgebra.HessenbergQ) and the Hessenberg matrix with F.H (of type UpperHessenberg), either of which may be converted to a regular matrix with Matrix(F.H) or Matrix(F.Q).

If A is Hermitian or real-Symmetric, then the Hessenberg decomposition produces a real-symmetric tridiagonal matrix and F.H is of type SymTridiagonal.

Note that the shifted factorization A+μI = Q (H+μI) Q' can be constructed efficiently by F + μ*I using the UniformScaling object I, which creates a new Hessenberg object with shared storage and a modified shift. The shift of a given F is obtained by F.μ. This is useful because multiple shifted solves (F + μ*I) \ b (for different μ and/or b) can be performed efficiently once F is created.
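
A minimal sketch of such a shifted solve, reusing the matrix from the examples below (the shift 2 and the right-hand side are arbitrary illustrative choices):

  1. julia> A = [4. 9. 7.; 4. 4. 1.; 4. 3. 2.];
  2. julia> F = hessenberg(A);
  3. julia> b = [1.0, 2.0, 3.0];
  4. julia> (F + 2I) \ b ≈ (A + 2I) \ b
  5. true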

Iterating the decomposition produces the factors F.Q, F.H, F.μ.

Examples

  1. julia> A = [4. 9. 7.; 4. 4. 1.; 4. 3. 2.]
  2. 3×3 Matrix{Float64}:
  3. 4.0 9.0 7.0
  4. 4.0 4.0 1.0
  5. 4.0 3.0 2.0
  6. julia> F = hessenberg(A)
  7. Hessenberg{Float64, UpperHessenberg{Float64, Matrix{Float64}}, Matrix{Float64}, Vector{Float64}, Bool}
  8. Q factor:
  9. 3×3 LinearAlgebra.HessenbergQ{Float64, Matrix{Float64}, Vector{Float64}, false}:
  10. 1.0 0.0 0.0
  11. 0.0 -0.707107 -0.707107
  12. 0.0 -0.707107 0.707107
  13. H factor:
  14. 3×3 UpperHessenberg{Float64, Matrix{Float64}}:
  15. 4.0 -11.3137 -1.41421
  16. -5.65685 5.0 2.0
  17.  ⋅  -1.0444e-15 1.0
  18. julia> F.Q * F.H * F.Q'
  19. 3×3 Matrix{Float64}:
  20. 4.0 9.0 7.0
  21. 4.0 4.0 1.0
  22. 4.0 3.0 2.0
  23. julia> q, h = F; # destructuring via iteration
  24. julia> q == F.Q && h == F.H
  25. true

source

LinearAlgebra.hessenberg! — Function

  1. hessenberg!(A) -> Hessenberg

hessenberg! is the same as hessenberg, but saves space by overwriting the input A, instead of creating a copy.

source

LinearAlgebra.Schur — Type

  1. Schur <: Factorization

Matrix factorization type of the Schur factorization of a matrix A. This is the return type of schur(_), the corresponding matrix factorization function.

If F::Schur is the factorization object, the (quasi) triangular Schur factor can be obtained via either F.Schur or F.T and the orthogonal/unitary Schur vectors via F.vectors or F.Z such that A = F.vectors * F.Schur * F.vectors'. The eigenvalues of A can be obtained with F.values.

Iterating the decomposition produces the components F.T, F.Z, and F.values.

Examples

  1. julia> A = [5. 7.; -2. -4.]
  2. 2×2 Matrix{Float64}:
  3. 5.0 7.0
  4. -2.0 -4.0
  5. julia> F = schur(A)
  6. Schur{Float64, Matrix{Float64}}
  7. T factor:
  8. 2×2 Matrix{Float64}:
  9. 3.0 9.0
  10. 0.0 -2.0
  11. Z factor:
  12. 2×2 Matrix{Float64}:
  13. 0.961524 0.274721
  14. -0.274721 0.961524
  15. eigenvalues:
  16. 2-element Vector{Float64}:
  17. 3.0
  18. -2.0
  19. julia> F.vectors * F.Schur * F.vectors'
  20. 2×2 Matrix{Float64}:
  21. 5.0 7.0
  22. -2.0 -4.0
  23. julia> t, z, vals = F; # destructuring via iteration
  24. julia> t == F.T && z == F.Z && vals == F.values
  25. true

source

LinearAlgebra.GeneralizedSchur — Type

  1. GeneralizedSchur <: Factorization

Matrix factorization type of the generalized Schur factorization of two matrices A and B. This is the return type of schur(_, _), the corresponding matrix factorization function.

If F::GeneralizedSchur is the factorization object, the (quasi) triangular Schur factors can be obtained via F.S and F.T, the left unitary/orthogonal Schur vectors via F.left or F.Q, and the right unitary/orthogonal Schur vectors can be obtained with F.right or F.Z such that A=F.left*F.S*F.right' and B=F.left*F.T*F.right'. The generalized eigenvalues of A and B can be obtained with F.α./F.β.

Iterating the decomposition produces the components F.S, F.T, F.Q, F.Z, F.α, and F.β.

source

LinearAlgebra.schur — Function

  1. schur(A::StridedMatrix) -> F::Schur

Computes the Schur factorization of the matrix A. The (quasi) triangular Schur factor can be obtained from the Schur object F with either F.Schur or F.T and the orthogonal/unitary Schur vectors can be obtained with F.vectors or F.Z such that A = F.vectors * F.Schur * F.vectors'. The eigenvalues of A can be obtained with F.values.

For real A, the Schur factorization is “quasitriangular”, which means that it is upper-triangular except with 2×2 diagonal blocks for any conjugate pair of complex eigenvalues; this allows the factorization to be purely real even when there are complex eigenvalues. To obtain the (complex) purely upper-triangular Schur factorization from a real quasitriangular factorization, you can use Schur{Complex}(schur(A)).
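
A minimal sketch of that conversion (the rotation-like 2×2 matrix is an arbitrary choice with a conjugate pair of eigenvalues, so its real Schur factor contains a 2×2 block):

  1. julia> F = schur([0. 1.; -1. 0.]);
  2. julia> istriu(F.T) # real quasitriangular form has a 2×2 block
  3. false
  4. julia> istriu(Schur{Complex}(F).T) # complex form is upper triangular
  5. true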

Iterating the decomposition produces the components F.T, F.Z, and F.values.

Examples

  1. julia> A = [5. 7.; -2. -4.]
  2. 2×2 Matrix{Float64}:
  3. 5.0 7.0
  4. -2.0 -4.0
  5. julia> F = schur(A)
  6. Schur{Float64, Matrix{Float64}}
  7. T factor:
  8. 2×2 Matrix{Float64}:
  9. 3.0 9.0
  10. 0.0 -2.0
  11. Z factor:
  12. 2×2 Matrix{Float64}:
  13. 0.961524 0.274721
  14. -0.274721 0.961524
  15. eigenvalues:
  16. 2-element Vector{Float64}:
  17. 3.0
  18. -2.0
  19. julia> F.vectors * F.Schur * F.vectors'
  20. 2×2 Matrix{Float64}:
  21. 5.0 7.0
  22. -2.0 -4.0
  23. julia> t, z, vals = F; # destructuring via iteration
  24. julia> t == F.T && z == F.Z && vals == F.values
  25. true

source

  1. schur(A::StridedMatrix, B::StridedMatrix) -> F::GeneralizedSchur

Computes the Generalized Schur (or QZ) factorization of the matrices A and B. The (quasi) triangular Schur factors can be obtained from the Schur object F with F.S and F.T, the left unitary/orthogonal Schur vectors can be obtained with F.left or F.Q and the right unitary/orthogonal Schur vectors can be obtained with F.right or F.Z such that A=F.left*F.S*F.right' and B=F.left*F.T*F.right'. The generalized eigenvalues of A and B can be obtained with F.α./F.β.

Iterating the decomposition produces the components F.S, F.T, F.Q, F.Z, F.α, and F.β.

source

LinearAlgebra.schur! — Function

  1. schur!(A::StridedMatrix) -> F::Schur

Same as schur but uses the input argument A as workspace.

Examples

  1. julia> A = [5. 7.; -2. -4.]
  2. 2×2 Matrix{Float64}:
  3. 5.0 7.0
  4. -2.0 -4.0
  5. julia> F = schur!(A)
  6. Schur{Float64, Matrix{Float64}}
  7. T factor:
  8. 2×2 Matrix{Float64}:
  9. 3.0 9.0
  10. 0.0 -2.0
  11. Z factor:
  12. 2×2 Matrix{Float64}:
  13. 0.961524 0.274721
  14. -0.274721 0.961524
  15. eigenvalues:
  16. 2-element Vector{Float64}:
  17. 3.0
  18. -2.0
  19. julia> A
  20. 2×2 Matrix{Float64}:
  21. 3.0 9.0
  22. 0.0 -2.0

source

  1. schur!(A::StridedMatrix, B::StridedMatrix) -> F::GeneralizedSchur

Same as schur but uses the input matrices A and B as workspace.

source

LinearAlgebra.ordschur — Function

  1. ordschur(F::Schur, select::Union{Vector{Bool},BitVector}) -> F::Schur

Reorders the Schur factorization F of a matrix A = Z*T*Z' according to the logical array select returning the reordered factorization F object. The selected eigenvalues appear in the leading diagonal of F.Schur and the corresponding leading columns of F.vectors form an orthogonal/unitary basis of the corresponding right invariant subspace. In the real case, a complex conjugate pair of eigenvalues must be either both included or both excluded via select.
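
A minimal sketch, reusing the 2×2 matrix from the schur examples (eigenvalues 3.0 and -2.0) and selecting the second eigenvalue to lead:

  1. julia> F = schur([5. 7.; -2. -4.]);
  2. julia> Fo = ordschur(F, [false, true]);
  3. julia> Fo.values ≈ [-2.0, 3.0]
  4. true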

source

  1. ordschur(F::GeneralizedSchur, select::Union{Vector{Bool},BitVector}) -> F::GeneralizedSchur

Reorders the Generalized Schur factorization F of a matrix pair (A, B) = (Q*S*Z', Q*T*Z') according to the logical array select and returns a GeneralizedSchur object F. The selected eigenvalues appear in the leading diagonal of both F.S and F.T, and the left and right orthogonal/unitary Schur vectors are also reordered such that (A, B) = F.Q*(F.S, F.T)*F.Z' still holds and the generalized eigenvalues of A and B can still be obtained with F.α./F.β.

source

LinearAlgebra.ordschur! — Function

  1. ordschur!(F::Schur, select::Union{Vector{Bool},BitVector}) -> F::Schur

Same as ordschur but overwrites the factorization F.

source

  1. ordschur!(F::GeneralizedSchur, select::Union{Vector{Bool},BitVector}) -> F::GeneralizedSchur

Same as ordschur but overwrites the factorization F.

source

LinearAlgebra.SVD — Type

  1. SVD <: Factorization

Matrix factorization type of the singular value decomposition (SVD) of a matrix A. This is the return type of svd(_), the corresponding matrix factorization function.

If F::SVD is the factorization object, U, S, V and Vt can be obtained via F.U, F.S, F.V and F.Vt, such that A = U * Diagonal(S) * Vt. The singular values in S are sorted in descending order.

Iterating the decomposition produces the components U, S, and V.

Examples

  1. julia> A = [1. 0. 0. 0. 2.; 0. 0. 3. 0. 0.; 0. 0. 0. 0. 0.; 0. 2. 0. 0. 0.]
  2. 4×5 Matrix{Float64}:
  3. 1.0 0.0 0.0 0.0 2.0
  4. 0.0 0.0 3.0 0.0 0.0
  5. 0.0 0.0 0.0 0.0 0.0
  6. 0.0 2.0 0.0 0.0 0.0
  7. julia> F = svd(A)
  8. SVD{Float64, Float64, Matrix{Float64}}
  9. U factor:
  10. 4×4 Matrix{Float64}:
  11. 0.0 1.0 0.0 0.0
  12. 1.0 0.0 0.0 0.0
  13. 0.0 0.0 0.0 -1.0
  14. 0.0 0.0 1.0 0.0
  15. singular values:
  16. 4-element Vector{Float64}:
  17. 3.0
  18. 2.23606797749979
  19. 2.0
  20. 0.0
  21. Vt factor:
  22. 4×5 Matrix{Float64}:
  23. -0.0 0.0 1.0 -0.0 0.0
  24. 0.447214 0.0 0.0 0.0 0.894427
  25. -0.0 1.0 0.0 -0.0 0.0
  26. 0.0 0.0 0.0 1.0 0.0
  27. julia> F.U * Diagonal(F.S) * F.Vt
  28. 4×5 Matrix{Float64}:
  29. 1.0 0.0 0.0 0.0 2.0
  30. 0.0 0.0 3.0 0.0 0.0
  31. 0.0 0.0 0.0 0.0 0.0
  32. 0.0 2.0 0.0 0.0 0.0
  33. julia> u, s, v = F; # destructuring via iteration
  34. julia> u == F.U && s == F.S && v == F.V
  35. true

source

LinearAlgebra.GeneralizedSVD — Type

  1. GeneralizedSVD <: Factorization

Matrix factorization type of the generalized singular value decomposition (SVD) of two matrices A and B, such that A = F.U*F.D1*F.R0*F.Q' and B = F.V*F.D2*F.R0*F.Q'. This is the return type of svd(_, _), the corresponding matrix factorization function.

For an M-by-N matrix A and P-by-N matrix B,

  • U is a M-by-M orthogonal matrix,
  • V is a P-by-P orthogonal matrix,
  • Q is a N-by-N orthogonal matrix,
  • D1 is a M-by-(K+L) diagonal matrix with 1s in the first K entries,
  • D2 is a P-by-(K+L) matrix whose top right L-by-L block is diagonal,
  • R0 is a (K+L)-by-N matrix whose rightmost (K+L)-by-(K+L) block is nonsingular upper block triangular,

K+L is the effective numerical rank of the matrix [A; B].

Iterating the decomposition produces the components U, V, Q, D1, D2, and R0.

The entries of F.D1 and F.D2 are related, as explained in the LAPACK documentation for the generalized SVD and the xGGSVD3 routine which is called underneath (in LAPACK 3.6.0 and newer).

Examples

  1. julia> A = [1. 0.; 0. -1.]
  2. 2×2 Matrix{Float64}:
  3. 1.0 0.0
  4. 0.0 -1.0
  5. julia> B = [0. 1.; 1. 0.]
  6. 2×2 Matrix{Float64}:
  7. 0.0 1.0
  8. 1.0 0.0
  9. julia> F = svd(A, B)
  10. GeneralizedSVD{Float64, Matrix{Float64}}
  11. U factor:
  12. 2×2 Matrix{Float64}:
  13. 1.0 0.0
  14. 0.0 1.0
  15. V factor:
  16. 2×2 Matrix{Float64}:
  17. -0.0 -1.0
  18. 1.0 0.0
  19. Q factor:
  20. 2×2 Matrix{Float64}:
  21. 1.0 0.0
  22. 0.0 1.0
  23. D1 factor:
  24. 2×2 SparseArrays.SparseMatrixCSC{Float64, Int64} with 2 stored entries:
  25. 0.707107   ⋅
  26.  ⋅        0.707107
  27. D2 factor:
  28. 2×2 SparseArrays.SparseMatrixCSC{Float64, Int64} with 2 stored entries:
  29. 0.707107   ⋅
  30.  ⋅        0.707107
  31. R0 factor:
  32. 2×2 Matrix{Float64}:
  33. 1.41421 0.0
  34. 0.0 -1.41421
  35. julia> F.U*F.D1*F.R0*F.Q'
  36. 2×2 Matrix{Float64}:
  37. 1.0 0.0
  38. 0.0 -1.0
  39. julia> F.V*F.D2*F.R0*F.Q'
  40. 2×2 Matrix{Float64}:
  41. 0.0 1.0
  42. 1.0 0.0

source

LinearAlgebra.svd — Function

  1. svd(A; full::Bool = false, alg::Algorithm = default_svd_alg(A)) -> SVD

Compute the singular value decomposition (SVD) of A and return an SVD object.

U, S, V and Vt can be obtained from the factorization F with F.U, F.S, F.V and F.Vt, such that A = U * Diagonal(S) * Vt. The algorithm produces Vt and hence Vt is more efficient to extract than V. The singular values in S are sorted in descending order.

Iterating the decomposition produces the components U, S, and V.

If full = false (default), a “thin” SVD is returned. For an $M \times N$ matrix A, in the full factorization U is $M \times M$ and V is $N \times N$, while in the thin factorization U is $M \times K$ and V is $N \times K$, where $K = \min(M,N)$ is the number of singular values.
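
A quick sketch of the resulting sizes for a 4×3 input:

  1. julia> A = rand(4, 3);
  2. julia> size(svd(A).U) # thin (default)
  3. (4, 3)
  4. julia> size(svd(A; full = true).U)
  5. (4, 4)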

If alg = DivideAndConquer() a divide-and-conquer algorithm is used to calculate the SVD. Another (typically slower but more accurate) option is alg = QRIteration().

Julia 1.3

The alg keyword argument requires Julia 1.3 or later.

Examples

  1. julia> A = rand(4,3);
  2. julia> F = svd(A); # Store the Factorization Object
  3. julia> A ≈ F.U * Diagonal(F.S) * F.Vt
  4. true
  5. julia> U, S, V = F; # destructuring via iteration
  6. julia> A ≈ U * Diagonal(S) * V'
  7. true
  8. julia> Uonly, = svd(A); # Store U only
  9. julia> Uonly == U
  10. true

source

  1. svd(A, B) -> GeneralizedSVD

Compute the generalized SVD of A and B, returning a GeneralizedSVD factorization object F such that [A;B] = [F.U * F.D1; F.V * F.D2] * F.R0 * F.Q'

  • U is a M-by-M orthogonal matrix,
  • V is a P-by-P orthogonal matrix,
  • Q is a N-by-N orthogonal matrix,
  • D1 is a M-by-(K+L) diagonal matrix with 1s in the first K entries,
  • D2 is a P-by-(K+L) matrix whose top right L-by-L block is diagonal,
  • R0 is a (K+L)-by-N matrix whose rightmost (K+L)-by-(K+L) block is nonsingular upper block triangular,

K+L is the effective numerical rank of the matrix [A; B].

Iterating the decomposition produces the components U, V, Q, D1, D2, and R0.

The generalized SVD is used in applications such as when one wants to compare how much belongs to A vs. how much belongs to B, as in human vs yeast genome, or signal vs noise, or between clusters vs within clusters. (See Edelman and Wang for discussion: https://arxiv.org/abs/1901.00485)

It decomposes [A; B] into [UC; VS]H, where [UC; VS] is a natural orthogonal basis for the column space of [A; B], and H = RQ' is a natural non-orthogonal basis for the rowspace of [A;B], where the top rows are most closely attributed to the A matrix, and the bottom to the B matrix. The multi-cosine/sine matrices C and S provide a multi-measure of how much A vs how much B, and U and V provide directions in which these are measured.

Examples

  1. julia> A = randn(3,2); B=randn(4,2);
  2. julia> F = svd(A, B);
  3. julia> U,V,Q,C,S,R = F;
  4. julia> H = R*Q';
  5. julia> [A; B] ≈ [U*C; V*S]*H
  6. true
  7. julia> [A; B] ≈ [F.U*F.D1; F.V*F.D2]*F.R0*F.Q'
  8. true
  9. julia> Uonly, = svd(A,B);
  10. julia> U == Uonly
  11. true

source

LinearAlgebra.svd! — Function

  1. svd!(A; full::Bool = false, alg::Algorithm = default_svd_alg(A)) -> SVD

svd! is the same as svd, but saves space by overwriting the input A, instead of creating a copy. See documentation of svd for details.

source

  1. svd!(A, B) -> GeneralizedSVD

svd! is the same as svd, but modifies the arguments A and B in-place, instead of making copies. See documentation of svd for details.

source

LinearAlgebra.svdvals — Function

  1. svdvals(A)

Return the singular values of A in descending order.

Examples

  1. julia> A = [1. 0. 0. 0. 2.; 0. 0. 3. 0. 0.; 0. 0. 0. 0. 0.; 0. 2. 0. 0. 0.]
  2. 4×5 Matrix{Float64}:
  3. 1.0 0.0 0.0 0.0 2.0
  4. 0.0 0.0 3.0 0.0 0.0
  5. 0.0 0.0 0.0 0.0 0.0
  6. 0.0 2.0 0.0 0.0 0.0
  7. julia> svdvals(A)
  8. 4-element Vector{Float64}:
  9. 3.0
  10. 2.23606797749979
  11. 2.0
  12. 0.0

source

  1. svdvals(A, B)

Return the generalized singular values from the generalized singular value decomposition of A and B. See also svd.

Examples

  1. julia> A = [1. 0.; 0. -1.]
  2. 2×2 Matrix{Float64}:
  3. 1.0 0.0
  4. 0.0 -1.0
  5. julia> B = [0. 1.; 1. 0.]
  6. 2×2 Matrix{Float64}:
  7. 0.0 1.0
  8. 1.0 0.0
  9. julia> svdvals(A, B)
  10. 2-element Vector{Float64}:
  11. 1.0
  12. 1.0

source

LinearAlgebra.svdvals! — Function

  1. svdvals!(A)

Return the singular values of A, saving space by overwriting the input. See also svdvals and svd.

source

  1. svdvals!(A, B)

Return the generalized singular values from the generalized singular value decomposition of A and B, saving space by overwriting A and B. See also svd and svdvals.

source

LinearAlgebra.Givens — Type

  1. LinearAlgebra.Givens(i1,i2,c,s) -> G

A Givens rotation linear operator. The fields c and s represent the cosine and sine of the rotation angle, respectively. The Givens type supports left multiplication G*A and conjugated transpose right multiplication A*G'. The type doesn’t have a size and can therefore be multiplied with matrices of arbitrary size as long as i2<=size(A,2) for G*A or i2<=size(A,1) for A*G'.

See also givens.

source

LinearAlgebra.givens — Function

  1. givens(f::T, g::T, i1::Integer, i2::Integer) where {T} -> (G::Givens, r::T)

Computes the Givens rotation G and scalar r such that for any vector x where

  1. x[i1] = f
  2. x[i2] = g

the result of the multiplication

  1. y = G*x

has the property that

  1. y[i1] = r
  2. y[i2] = 0
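
A minimal sketch that zeroes the second entry of a vector (the values 3 and 4 are arbitrary; r is their hypotenuse, 5.0):

  1. julia> G, r = givens(3.0, 4.0, 1, 2);
  2. julia> y = G * [3.0, 4.0];
  3. julia> y[1] ≈ r ≈ 5.0
  4. true
  5. julia> abs(y[2]) < 1e-15
  6. true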

See also LinearAlgebra.Givens.

source

  1. givens(A::AbstractArray, i1::Integer, i2::Integer, j::Integer) -> (G::Givens, r)

Computes the Givens rotation G and scalar r such that the result of the multiplication

  1. B = G*A

has the property that

  1. B[i1,j] = r
  2. B[i2,j] = 0

See also LinearAlgebra.Givens.

source

  1. givens(x::AbstractVector, i1::Integer, i2::Integer) -> (G::Givens, r)

Computes the Givens rotation G and scalar r such that the result of the multiplication

  1. B = G*x

has the property that

  1. B[i1] = r
  2. B[i2] = 0

See also LinearAlgebra.Givens.

source

LinearAlgebra.triu — Function

  1. triu(M)

Upper triangle of a matrix.

Examples

  1. julia> a = fill(1.0, (4,4))
  2. 4×4 Matrix{Float64}:
  3. 1.0 1.0 1.0 1.0
  4. 1.0 1.0 1.0 1.0
  5. 1.0 1.0 1.0 1.0
  6. 1.0 1.0 1.0 1.0
  7. julia> triu(a)
  8. 4×4 Matrix{Float64}:
  9. 1.0 1.0 1.0 1.0
  10. 0.0 1.0 1.0 1.0
  11. 0.0 0.0 1.0 1.0
  12. 0.0 0.0 0.0 1.0

source

  1. triu(M, k::Integer)

Returns the upper triangle of M starting from the kth superdiagonal.

Examples

  1. julia> a = fill(1.0, (4,4))
  2. 4×4 Matrix{Float64}:
  3. 1.0 1.0 1.0 1.0
  4. 1.0 1.0 1.0 1.0
  5. 1.0 1.0 1.0 1.0
  6. 1.0 1.0 1.0 1.0
  7. julia> triu(a,3)
  8. 4×4 Matrix{Float64}:
  9. 0.0 0.0 0.0 1.0
  10. 0.0 0.0 0.0 0.0
  11. 0.0 0.0 0.0 0.0
  12. 0.0 0.0 0.0 0.0
  13. julia> triu(a,-3)
  14. 4×4 Matrix{Float64}:
  15. 1.0 1.0 1.0 1.0
  16. 1.0 1.0 1.0 1.0
  17. 1.0 1.0 1.0 1.0
  18. 1.0 1.0 1.0 1.0

source

LinearAlgebra.triu! — Function

  1. triu!(M)

Upper triangle of a matrix, overwriting M in the process. See also triu.

source

  1. triu!(M, k::Integer)

Return the upper triangle of M starting from the kth superdiagonal, overwriting M in the process.

Examples

  1. julia> M = [1 2 3 4 5; 1 2 3 4 5; 1 2 3 4 5; 1 2 3 4 5; 1 2 3 4 5]
  2. 5×5 Matrix{Int64}:
  3. 1 2 3 4 5
  4. 1 2 3 4 5
  5. 1 2 3 4 5
  6. 1 2 3 4 5
  7. 1 2 3 4 5
  8. julia> triu!(M, 1)
  9. 5×5 Matrix{Int64}:
  10. 0 2 3 4 5
  11. 0 0 3 4 5
  12. 0 0 0 4 5
  13. 0 0 0 0 5
  14. 0 0 0 0 0

source

LinearAlgebra.tril — Function

  1. tril(M)

Lower triangle of a matrix.

Examples

  1. julia> a = fill(1.0, (4,4))
  2. 4×4 Matrix{Float64}:
  3. 1.0 1.0 1.0 1.0
  4. 1.0 1.0 1.0 1.0
  5. 1.0 1.0 1.0 1.0
  6. 1.0 1.0 1.0 1.0
  7. julia> tril(a)
  8. 4×4 Matrix{Float64}:
  9. 1.0 0.0 0.0 0.0
  10. 1.0 1.0 0.0 0.0
  11. 1.0 1.0 1.0 0.0
  12. 1.0 1.0 1.0 1.0

source

  1. tril(M, k::Integer)

Returns the lower triangle of M starting from the kth superdiagonal.

Examples

  1. julia> a = fill(1.0, (4,4))
  2. 4×4 Matrix{Float64}:
  3. 1.0 1.0 1.0 1.0
  4. 1.0 1.0 1.0 1.0
  5. 1.0 1.0 1.0 1.0
  6. 1.0 1.0 1.0 1.0
  7. julia> tril(a,3)
  8. 4×4 Matrix{Float64}:
  9. 1.0 1.0 1.0 1.0
  10. 1.0 1.0 1.0 1.0
  11. 1.0 1.0 1.0 1.0
  12. 1.0 1.0 1.0 1.0
  13. julia> tril(a,-3)
  14. 4×4 Matrix{Float64}:
  15. 0.0 0.0 0.0 0.0
  16. 0.0 0.0 0.0 0.0
  17. 0.0 0.0 0.0 0.0
  18. 1.0 0.0 0.0 0.0

source

LinearAlgebra.tril! — Function

  1. tril!(M)

Lower triangle of a matrix, overwriting M in the process. See also tril.

source

  1. tril!(M, k::Integer)

Return the lower triangle of M starting from the kth superdiagonal, overwriting M in the process.

Examples

  1. julia> M = [1 2 3 4 5; 1 2 3 4 5; 1 2 3 4 5; 1 2 3 4 5; 1 2 3 4 5]
  2. 5×5 Matrix{Int64}:
  3. 1 2 3 4 5
  4. 1 2 3 4 5
  5. 1 2 3 4 5
  6. 1 2 3 4 5
  7. 1 2 3 4 5
  8. julia> tril!(M, 2)
  9. 5×5 Matrix{Int64}:
  10. 1 2 3 0 0
  11. 1 2 3 4 0
  12. 1 2 3 4 5
  13. 1 2 3 4 5
  14. 1 2 3 4 5

source

LinearAlgebra.diagind — Function

  1. diagind(M, k::Integer=0)

An AbstractRange giving the indices of the kth diagonal of the matrix M.

See also: diag, diagm, Diagonal.

Examples

  1. julia> A = [1 2 3; 4 5 6; 7 8 9]
  2. 3×3 Matrix{Int64}:
  3. 1 2 3
  4. 4 5 6
  5. 7 8 9
  6. julia> diagind(A,-1)
  7. 2:4:6

source

LinearAlgebra.diag — Function

  1. diag(M, k::Integer=0)

The kth diagonal of a matrix, as a vector.

See also diagm, diagind, Diagonal, isdiag.

Examples

  1. julia> A = [1 2 3; 4 5 6; 7 8 9]
  2. 3×3 Matrix{Int64}:
  3. 1 2 3
  4. 4 5 6
  5. 7 8 9
  6. julia> diag(A,1)
  7. 2-element Vector{Int64}:
  8. 2
  9. 6

source

LinearAlgebra.diagm — Function

  1. diagm(kv::Pair{<:Integer,<:AbstractVector}...)
  2. diagm(m::Integer, n::Integer, kv::Pair{<:Integer,<:AbstractVector}...)

Construct a matrix from Pairs of diagonals and vectors. Vector kv.second will be placed on the kv.first diagonal. By default the matrix is square and its size is inferred from kv, but a non-square size m×n (padded with zeros as needed) can be specified by passing m,n as the first arguments.

diagm constructs a full matrix; if you want storage-efficient versions with fast arithmetic, see Diagonal, Bidiagonal, Tridiagonal and SymTridiagonal.

Examples

  1. julia> diagm(1 => [1,2,3])
  2. 4×4 Matrix{Int64}:
  3. 0 1 0 0
  4. 0 0 2 0
  5. 0 0 0 3
  6. 0 0 0 0
  7. julia> diagm(1 => [1,2,3], -1 => [4,5])
  8. 4×4 Matrix{Int64}:
  9. 0 1 0 0
  10. 4 0 2 0
  11. 0 5 0 3
  12. 0 0 0 0

source

  1. diagm(v::AbstractVector)
  2. diagm(m::Integer, n::Integer, v::AbstractVector)

Construct a matrix with elements of the vector as diagonal elements. By default, the matrix is square and its size is given by length(v), but a non-square size m×n can be specified by passing m,n as the first arguments.

Examples

  1. julia> diagm([1,2,3])
  2. 3×3 Matrix{Int64}:
  3. 1 0 0
  4. 0 2 0
  5. 0 0 3

source

LinearAlgebra.rank — Function

  1. rank(A::AbstractMatrix; atol::Real=0, rtol::Real=atol>0 ? 0 : n*ϵ)
  2. rank(A::AbstractMatrix, rtol::Real)

Compute the rank of a matrix by counting how many singular values of A have magnitude greater than max(atol, rtol*σ₁) where σ₁ is A's largest singular value. atol and rtol are the absolute and relative tolerances, respectively. The default relative tolerance is n*ϵ, where n is the size of the smallest dimension of A, and ϵ is the eps of the element type of A.

Julia 1.1

The atol and rtol keyword arguments require at least Julia 1.1. In Julia 1.0 rtol is available as a positional argument, but this will be deprecated in Julia 2.0.

Examples

  1. julia> rank(Matrix(I, 3, 3))
  2. 3
  3. julia> rank(diagm(0 => [1, 0, 2]))
  4. 2
  5. julia> rank(diagm(0 => [1, 0.001, 2]), rtol=0.1)
  6. 2
  7. julia> rank(diagm(0 => [1, 0.001, 2]), rtol=0.00001)
  8. 3
  9. julia> rank(diagm(0 => [1, 0.001, 2]), atol=1.5)
  10. 1

source

LinearAlgebra.norm — Function

  1. norm(A, p::Real=2)

For any iterable container A (including arrays of any dimension) of numbers (or any element type for which norm is defined), compute the p-norm (defaulting to p=2) as if A were a vector of the corresponding length.

The p-norm is defined as

\[\|A\|_p = \left( \sum_{i=1}^n | a_i | ^p \right)^{1/p}\]

with $a_i$ the entries of $A$, $| a_i |$ the norm of $a_i$, and $n$ the length of $A$. Since the p-norm is computed using the norms of the entries of A, the p-norm of a vector of vectors is not compatible with the interpretation of it as a block vector in general if p != 2.

p can assume any numeric value (even though not all values produce a mathematically valid vector norm). In particular, norm(A, Inf) returns the largest value in abs.(A), whereas norm(A, -Inf) returns the smallest. If A is a matrix and p=2, then this is equivalent to the Frobenius norm.

The second argument p is not necessarily a part of the interface for norm, i.e. a custom type may only implement norm(A) without a second argument.

Use opnorm to compute the operator norm of a matrix.

Examples

  1. julia> v = [3, -2, 6]
  2. 3-element Vector{Int64}:
  3. 3
  4. -2
  5. 6
  6. julia> norm(v)
  7. 7.0
  8. julia> norm(v, 1)
  9. 11.0
  10. julia> norm(v, Inf)
  11. 6.0
  12. julia> norm([1 2 3; 4 5 6; 7 8 9])
  13. 16.881943016134134
  14. julia> norm([1 2 3 4 5 6 7 8 9])
  15. 16.881943016134134
  16. julia> norm(1:9)
  17. 16.881943016134134
  18. julia> norm(hcat(v,v), 1) == norm(vcat(v,v), 1) != norm([v,v], 1)
  19. true
  20. julia> norm(hcat(v,v), 2) == norm(vcat(v,v), 2) == norm([v,v], 2)
  21. true
  22. julia> norm(hcat(v,v), Inf) == norm(vcat(v,v), Inf) != norm([v,v], Inf)
  23. true

source

  1. norm(x::Number, p::Real=2)

For numbers, return $\left( |x|^p \right)^{1/p}$.

Examples

  1. julia> norm(2, 1)
  2. 2.0
  3. julia> norm(-2, 1)
  4. 2.0
  5. julia> norm(2, 2)
  6. 2.0
  7. julia> norm(-2, 2)
  8. 2.0
  9. julia> norm(2, Inf)
  10. 2.0
  11. julia> norm(-2, Inf)
  12. 2.0

source

LinearAlgebra.opnorm — Function

  1. opnorm(A::AbstractMatrix, p::Real=2)

Compute the operator norm (or matrix norm) induced by the vector p-norm, where valid values of p are 1, 2, or Inf. (Note that for sparse matrices, p=2 is currently not implemented.) Use norm to compute the Frobenius norm.

When p=1, the operator norm is the maximum absolute column sum of A:

\[\|A\|_1 = \max_{1 ≤ j ≤ n} \sum_{i=1}^m | a_{ij} |\]

with $a_{ij}$ the entries of $A$, and $m$ and $n$ its dimensions.

When p=2, the operator norm is the spectral norm, equal to the largest singular value of A.

When p=Inf, the operator norm is the maximum absolute row sum of A:

\[\|A\|_\infty = \max_{1 ≤ i ≤ m} \sum _{j=1}^n | a_{ij} |\]

Examples

  1. julia> A = [1 -2 -3; 2 3 -1]
  2. 2×3 Matrix{Int64}:
  3. 1 -2 -3
  4. 2 3 -1
  5. julia> opnorm(A, Inf)
  6. 6.0
  7. julia> opnorm(A, 1)
  8. 5.0

source

  1. opnorm(x::Number, p::Real=2)

For numbers, return $\left( |x|^p \right)^{1/p}$. This is equivalent to norm.

source

  1. opnorm(A::Adjoint{<:Any,<:AbstractVector}, q::Real=2)
  2. opnorm(A::Transpose{<:Any,<:AbstractVector}, q::Real=2)

For Adjoint/Transpose-wrapped vectors, return the operator $q$-norm of A, which is equivalent to the p-norm with value p = q/(q-1). They coincide at p = q = 2. Use norm to compute the p norm of A as a vector.

The difference in norm between a vector space and its dual arises to preserve the relationship between duality and the dot product, and the result is consistent with the operator p-norm of a 1 × n matrix.

Examples

  1. julia> v = [1; im];
  2. julia> vc = v';
  3. julia> opnorm(vc, 1)
  4. 1.0
  5. julia> norm(vc, 1)
  6. 2.0
  7. julia> norm(v, 1)
  8. 2.0
  9. julia> opnorm(vc, 2)
  10. 1.4142135623730951
  11. julia> norm(vc, 2)
  12. 1.4142135623730951
  13. julia> norm(v, 2)
  14. 1.4142135623730951
  15. julia> opnorm(vc, Inf)
  16. 2.0
  17. julia> norm(vc, Inf)
  18. 1.0
  19. julia> norm(v, Inf)
  20. 1.0

source

LinearAlgebra.normalize! — Function

  1. normalize!(a::AbstractArray, p::Real=2)

Normalize the array a in-place so that its p-norm equals unity, i.e. norm(a, p) == 1. See also normalize and norm.
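
A quick sketch on a 3-4-5 vector:

  1. julia> a = [3.0, 4.0];
  2. julia> normalize!(a);
  3. julia> a
  4. 2-element Vector{Float64}:
  5. 0.6
  6. 0.8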

source

LinearAlgebra.normalize — Function

  1. normalize(a::AbstractArray, p::Real=2)

Normalize the array a so that its p-norm equals unity, i.e. norm(a, p) == 1. See also normalize! and norm.

Examples

  1. julia> a = [1,2,4];
  2. julia> b = normalize(a)
  3. 3-element Vector{Float64}:
  4. 0.2182178902359924
  5. 0.4364357804719848
  6. 0.8728715609439696
  7. julia> norm(b)
  8. 1.0
  9. julia> c = normalize(a, 1)
  10. 3-element Vector{Float64}:
  11. 0.14285714285714285
  12. 0.2857142857142857
  13. 0.5714285714285714
  14. julia> norm(c, 1)
  15. 1.0
  16. julia> a = [1 2 4 ; 1 2 4]
  17. 2×3 Matrix{Int64}:
  18. 1 2 4
  19. 1 2 4
  20. julia> norm(a)
  21. 6.48074069840786
  22. julia> normalize(a)
  23. 2×3 Matrix{Float64}:
  24. 0.154303 0.308607 0.617213
  25. 0.154303 0.308607 0.617213

source

LinearAlgebra.cond — Function

  1. cond(M, p::Real=2)

Condition number of the matrix M, computed using the operator p-norm. Valid values for p are 1, 2 (default), or Inf.
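
A quick sketch on a diagonal matrix, where the 2-norm condition number is the ratio of the largest to the smallest singular value:

  1. julia> cond([1.0 0.0; 0.0 0.5])
  2. 2.0
  3. julia> cond([1.0 0.0; 0.0 0.5], 1)
  4. 2.0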

source

LinearAlgebra.condskeel — Function

  1. condskeel(M, [x, p::Real=Inf])

\[\kappa_S(M, p) = \left\Vert \left\vert M \right\vert \left\vert M^{-1} \right\vert \right\Vert_p \\ \kappa_S(M, x, p) = \frac{\left\Vert \left\vert M \right\vert \left\vert M^{-1} \right\vert \left\vert x \right\vert \right\Vert_p}{\left \Vert x \right \Vert_p}\]

Skeel condition number $\kappa_S$ of the matrix M, optionally with respect to the vector x, as computed using the operator p-norm. $\left\vert M \right\vert$ denotes the matrix of (entry wise) absolute values of $M$; $\left\vert M \right\vert_{ij} = \left\vert M_{ij} \right\vert$. Valid values for p are 1, 2 and Inf (default).

This quantity is also known in the literature as the Bauer condition number, relative condition number, or componentwise relative condition number.
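
A quick sketch: for any nonsingular diagonal matrix the product of the entrywise absolute values of M and M⁻¹ is the identity, so the Skeel condition number is 1 regardless of scaling:

  1. julia> condskeel([1.0 0.0; 0.0 1e6])
  2. 1.0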

source

LinearAlgebra.tr — Function

  1. tr(M)

Matrix trace. Sums the diagonal elements of M.

Examples

  1. julia> A = [1 2; 3 4]
  2. 2×2 Matrix{Int64}:
  3. 1 2
  4. 3 4
  5. julia> tr(A)
  6. 5

source

LinearAlgebra.det — Function

  1. det(M)

Matrix determinant.

See also: logdet and logabsdet.

Examples

  1. julia> M = [1 0; 2 2]
  2. 2×2 Matrix{Int64}:
  3. 1 0
  4. 2 2
  5. julia> det(M)
  6. 2.0

source

LinearAlgebra.logdet — Function

  1. logdet(M)

Log of matrix determinant. Equivalent to log(det(M)), but may provide increased accuracy and/or speed.

Examples

  1. julia> M = [1 0; 2 2]
  2. 2×2 Matrix{Int64}:
  3. 1 0
  4. 2 2
  5. julia> logdet(M)
  6. 0.6931471805599453
  7. julia> logdet(Matrix(I, 3, 3))
  8. 0.0

source

LinearAlgebra.logabsdet — Function

  1. logabsdet(M)

Log of absolute value of matrix determinant. Equivalent to (log(abs(det(M))), sign(det(M))), but may provide increased accuracy and/or speed.

Examples

  1. julia> A = [-1. 0.; 0. 1.]
  2. 2×2 Matrix{Float64}:
  3. -1.0 0.0
  4. 0.0 1.0
  5. julia> det(A)
  6. -1.0
  7. julia> logabsdet(A)
  8. (0.0, -1.0)
  9. julia> B = [2. 0.; 0. 1.]
  10. 2×2 Matrix{Float64}:
  11. 2.0 0.0
  12. 0.0 1.0
  13. julia> det(B)
  14. 2.0
  15. julia> logabsdet(B)
  16. (0.6931471805599453, 1.0)

source

Base.inv — Method

  1. inv(M)

Matrix inverse. Computes matrix N such that M * N = I, where I is the identity matrix. Computed by solving the left-division N = M \ I.

Examples

  1. julia> M = [2 5; 1 3]
  2. 2×2 Matrix{Int64}:
  3. 2 5
  4. 1 3
  5. julia> N = inv(M)
  6. 2×2 Matrix{Float64}:
  7. 3.0 -5.0
  8. -1.0 2.0
  9. julia> M*N == N*M == Matrix(I, 2, 2)
  10. true

source

LinearAlgebra.pinv — Function

  1. pinv(M; atol::Real=0, rtol::Real=atol>0 ? 0 : n*ϵ)
  2. pinv(M, rtol::Real) = pinv(M; rtol=rtol) # to be deprecated in Julia 2.0

Computes the Moore-Penrose pseudoinverse.

For matrices M with floating point elements, it is convenient to compute the pseudoinverse by inverting only singular values greater than max(atol, rtol*σ₁) where σ₁ is the largest singular value of M.

The optimal choice of absolute (atol) and relative tolerance (rtol) varies both with the value of M and the intended application of the pseudoinverse. The default relative tolerance is n*ϵ, where n is the size of the smallest dimension of M, and ϵ is the eps of the element type of M.

For inverting dense ill-conditioned matrices in a least-squares sense, rtol = sqrt(eps(real(float(one(eltype(M)))))) is recommended.

For more information, see [issue8859], [B96], [S84], [KY88].

Examples

  1. julia> M = [1.5 1.3; 1.2 1.9]
  2. 2×2 Matrix{Float64}:
  3. 1.5 1.3
  4. 1.2 1.9
  5. julia> N = pinv(M)
  6. 2×2 Matrix{Float64}:
  7. 1.47287 -1.00775
  8. -0.930233 1.16279
  9. julia> M * N
  10. 2×2 Matrix{Float64}:
  11. 1.0 -2.22045e-16
  12. 4.44089e-16 1.0

source

LinearAlgebra.nullspace — Function

  1. nullspace(M; atol::Real=0, rtol::Real=atol>0 ? 0 : n*ϵ)
  2. nullspace(M, rtol::Real) = nullspace(M; rtol=rtol) # to be deprecated in Julia 2.0

Computes a basis for the nullspace of M by including the singular vectors of M whose singular values have magnitudes greater than max(atol, rtol*σ₁), where σ₁ is M‘s largest singular value.

By default, the relative tolerance rtol is n*ϵ, where n is the size of the smallest dimension of M, and ϵ is the eps of the element type of M.

Examples

  1. julia> M = [1 0 0; 0 1 0; 0 0 0]
  2. 3×3 Matrix{Int64}:
  3. 1 0 0
  4. 0 1 0
  5. 0 0 0
  6. julia> nullspace(M)
  7. 3×1 Matrix{Float64}:
  8. 0.0
  9. 0.0
  10. 1.0
  11. julia> nullspace(M, rtol=3)
  12. 3×3 Matrix{Float64}:
  13. 0.0 1.0 0.0
  14. 1.0 0.0 0.0
  15. 0.0 0.0 1.0
  16. julia> nullspace(M, atol=0.95)
  17. 3×1 Matrix{Float64}:
  18. 0.0
  19. 0.0
  20. 1.0

source

Base.kron — Function

  1. kron(A, B)

Kronecker tensor product of two vectors or two matrices.

For real vectors v and w, the Kronecker product is related to the outer product by kron(v,w) == vec(w * transpose(v)) or w * transpose(v) == reshape(kron(v,w), (length(w), length(v))). Note how the ordering of v and w differs on the left and right of these expressions (due to column-major storage). For complex vectors, the outer product w * v' also differs by conjugation of v.

Examples

  1. julia> A = [1 2; 3 4]
  2. 2×2 Matrix{Int64}:
  3. 1 2
  4. 3 4
  5. julia> B = [im 1; 1 -im]
  6. 2×2 Matrix{Complex{Int64}}:
  7. 0+1im 1+0im
  8. 1+0im 0-1im
  9. julia> kron(A, B)
  10. 4×4 Matrix{Complex{Int64}}:
  11. 0+1im 1+0im 0+2im 2+0im
  12. 1+0im 0-1im 2+0im 0-2im
  13. 0+3im 3+0im 0+4im 4+0im
  14. 3+0im 0-3im 4+0im 0-4im
  15. julia> v = [1, 2]; w = [3, 4, 5];
  16. julia> w*transpose(v)
  17. 3×2 Matrix{Int64}:
  18. 3 6
  19. 4 8
  20. 5 10
  21. julia> reshape(kron(v,w), (length(w), length(v)))
  22. 3×2 Matrix{Int64}:
  23. 3 6
  24. 4 8
  25. 5 10

source

Base.kron! — Function

  1. kron!(C, A, B)

kron! is the in-place version of kron. Computes kron(A, B) and stores the result in C, overwriting the existing value of C.

Tip

Bounds checking can be disabled by @inbounds, but you need to take care of the shape of C, A, B yourself.

Julia 1.6

This function requires Julia 1.6 or later.
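
A minimal usage sketch (C must already have the shape of kron(A, B), here 4×4):

  1. julia> A = [1 2; 3 4]; B = [0 1; 1 0];
  2. julia> C = Matrix{Int}(undef, 4, 4);
  3. julia> kron!(C, A, B);
  4. julia> C == kron(A, B)
  5. true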

source

Base.exp — Method

  1. exp(A::AbstractMatrix)

Compute the matrix exponential of A, defined by

\[e^A = \sum_{n=0}^{\infty} \frac{A^n}{n!}.\]

For symmetric or Hermitian A, an eigendecomposition (eigen) is used, otherwise the scaling and squaring algorithm (see [H05]) is chosen.

Examples

  1. julia> A = Matrix(1.0I, 2, 2)
  2. 2×2 Matrix{Float64}:
  3. 1.0 0.0
  4. 0.0 1.0
  5. julia> exp(A)
  6. 2×2 Matrix{Float64}:
  7. 2.71828 0.0
  8. 0.0 2.71828

source

Base.cis — Method

  1. cis(A::AbstractMatrix)

Compute $\exp(i A)$ for a square matrix $A$.

Julia 1.7

Support for using cis with matrices was added in Julia 1.7.

Examples

  1. julia> cis([π 0; 0 π]) ≈ -I
  2. true

source

Base.:^ — Method

  1. ^(A::AbstractMatrix, p::Number)

Matrix power, equivalent to $\exp(p\log(A))$

Examples

  1. julia> [1 2; 0 3]^3
  2. 2×2 Matrix{Int64}:
  3. 1 26
  4. 0 27

source

Base.:^ — Method

  1. ^(b::Number, A::AbstractMatrix)

Matrix exponential, equivalent to $\exp(\log(b)A)$.

Julia 1.1

Support for raising Irrational numbers (like ℯ) to a matrix was added in Julia 1.1.

Examples

  1. julia> 2^[1 2; 0 3]
  2. 2×2 Matrix{Float64}:
  3. 2.0 6.0
  4. 0.0 8.0
  5. julia> ℯ^[1 2; 0 3]
  6. 2×2 Matrix{Float64}:
  7. 2.71828 17.3673
  8. 0.0 20.0855

source

Base.log — Method

  1. log(A::StridedMatrix)

If A has no negative real eigenvalue, compute the principal matrix logarithm of A, i.e. the unique matrix $X$ such that $e^X = A$ and $-\pi < Im(\lambda) < \pi$ for all the eigenvalues $\lambda$ of $X$. If A has nonpositive eigenvalues, a nonprincipal matrix function is returned whenever possible.

If A is symmetric or Hermitian, its eigendecomposition (eigen) is used, if A is triangular an improved version of the inverse scaling and squaring method is employed (see [AH12] and [AHR13]). If A is real with no negative eigenvalues, then the real Schur form is computed. Otherwise, the complex Schur form is computed. Then the upper (quasi-)triangular algorithm in [AHR13] is used on the upper (quasi-)triangular factor.

Examples

  1. julia> A = Matrix(2.7182818*I, 2, 2)
  2. 2×2 Matrix{Float64}:
  3. 2.71828 0.0
  4. 0.0 2.71828
  5. julia> log(A)
  6. 2×2 Matrix{Float64}:
  7. 1.0 0.0
  8. 0.0 1.0

source

Base.sqrt — Method

  1. sqrt(A::AbstractMatrix)

If A has no negative real eigenvalues, compute the principal matrix square root of A, that is the unique matrix $X$ with eigenvalues having positive real part such that $X^2 = A$. Otherwise, a nonprincipal square root is returned.

If A is real-symmetric or Hermitian, its eigendecomposition (eigen) is used to compute the square root. For such matrices, eigenvalues λ that appear to be slightly negative due to roundoff errors are treated as if they were zero. More precisely, matrices with all eigenvalues ≥ -rtol*(max |λ|) are treated as semidefinite (yielding a Hermitian square root), with negative eigenvalues taken to be zero. rtol is a keyword argument to sqrt (in the Hermitian/real-symmetric case only) that defaults to machine precision scaled by size(A,1).
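
As a hedged sketch of the rtol keyword described above (assuming a Julia version that accepts it for Hermitian/real-symmetric input), a symmetric matrix with a roundoff-scale negative eigenvalue is treated as semidefinite once rtol is loosened:

  1. julia> A = Symmetric([1.0 0.0; 0.0 -1e-14]); # one eigenvalue is a tiny, roundoff-scale negative
  2. julia> sqrt(A; rtol = 1e-10) ≈ [1.0 0.0; 0.0 0.0] # the negative eigenvalue is taken to be zero
  3. true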

Otherwise, the square root is determined by means of the Björck-Hammarling method [BH83], which computes the complex Schur form (schur) and then the complex square root of the triangular factor. If a real square root exists, then an extension of this method [H87] that computes the real Schur form and then the real square root of the quasi-triangular factor is instead used.

Examples

  1. julia> A = [4 0; 0 4]
  2. 2×2 Matrix{Int64}:
  3. 4 0
  4. 0 4
  5. julia> sqrt(A)
  6. 2×2 Matrix{Float64}:
  7. 2.0 0.0
  8. 0.0 2.0

source

Base.cos — Method

  1. cos(A::AbstractMatrix)

Compute the matrix cosine of a square matrix A.

If A is symmetric or Hermitian, its eigendecomposition (eigen) is used to compute the cosine. Otherwise, the cosine is determined by calling exp.

Examples

  1. julia> cos(fill(1.0, (2,2)))
  2. 2×2 Matrix{Float64}:
  3. 0.291927 -0.708073
  4. -0.708073 0.291927

source

Base.sin — Method

  1. sin(A::AbstractMatrix)

Compute the matrix sine of a square matrix A.

If A is symmetric or Hermitian, its eigendecomposition (eigen) is used to compute the sine. Otherwise, the sine is determined by calling exp.

Examples

  1. julia> sin(fill(1.0, (2,2)))
  2. 2×2 Matrix{Float64}:
  3. 0.454649 0.454649
  4. 0.454649 0.454649

source

Base.Math.sincos — Method

  1. sincos(A::AbstractMatrix)

Compute the matrix sine and cosine of a square matrix A.

Examples

  1. julia> S, C = sincos(fill(1.0, (2,2)));
  2. julia> S
  3. 2×2 Matrix{Float64}:
  4. 0.454649 0.454649
  5. 0.454649 0.454649
  6. julia> C
  7. 2×2 Matrix{Float64}:
  8. 0.291927 -0.708073
  9. -0.708073 0.291927

source

Base.tan — Method

  1. tan(A::AbstractMatrix)

Compute the matrix tangent of a square matrix A.

If A is symmetric or Hermitian, its eigendecomposition (eigen) is used to compute the tangent. Otherwise, the tangent is determined by calling exp.

Examples

  1. julia> tan(fill(1.0, (2,2)))
  2. 2×2 Matrix{Float64}:
  3. -1.09252 -1.09252
  4. -1.09252 -1.09252

source

Base.Math.sec — Method

  1. sec(A::AbstractMatrix)

Compute the matrix secant of a square matrix A.

source

Base.Math.csc — Method

  1. csc(A::AbstractMatrix)

Compute the matrix cosecant of a square matrix A.

source

Base.Math.cot — Method

  1. cot(A::AbstractMatrix)

Compute the matrix cotangent of a square matrix A.

source

Base.cosh — Method

  1. cosh(A::AbstractMatrix)

Compute the matrix hyperbolic cosine of a square matrix A.

source

Base.sinh — Method

  1. sinh(A::AbstractMatrix)

Compute the matrix hyperbolic sine of a square matrix A.

source

Base.tanh — Method

  1. tanh(A::AbstractMatrix)

Compute the matrix hyperbolic tangent of a square matrix A.

source

Base.Math.sech — Method

  1. sech(A::AbstractMatrix)

Compute the matrix hyperbolic secant of square matrix A.

source

Base.Math.csch — Method

  1. csch(A::AbstractMatrix)

Compute the matrix hyperbolic cosecant of square matrix A.

source

Base.Math.coth — Method

  1. coth(A::AbstractMatrix)

Compute the matrix hyperbolic cotangent of square matrix A.

source

Base.acos — Method

  1. acos(A::AbstractMatrix)

Compute the inverse matrix cosine of a square matrix A.

If A is symmetric or Hermitian, its eigendecomposition (eigen) is used to compute the inverse cosine. Otherwise, the inverse cosine is determined by using log and sqrt. For the theory and logarithmic formulas used to compute this function, see [AH16_1].

Examples

  1. julia> acos(cos([0.5 0.1; -0.2 0.3]))
  2. 2×2 Matrix{ComplexF64}:
  3. 0.5-8.32667e-17im 0.1+0.0im
  4. -0.2+2.63678e-16im 0.3-3.46945e-16im

source

Base.asin — Method

  1. asin(A::AbstractMatrix)

Compute the inverse matrix sine of a square matrix A.

If A is symmetric or Hermitian, its eigendecomposition (eigen) is used to compute the inverse sine. Otherwise, the inverse sine is determined by using log and sqrt. For the theory and logarithmic formulas used to compute this function, see [AH16_2].

Examples

  1. julia> asin(sin([0.5 0.1; -0.2 0.3]))
  2. 2×2 Matrix{ComplexF64}:
  3. 0.5-4.16334e-17im 0.1-5.55112e-17im
  4. -0.2+9.71445e-17im 0.3-1.249e-16im

source

Base.atan — Method

  1. atan(A::AbstractMatrix)

Compute the inverse matrix tangent of a square matrix A.

If A is symmetric or Hermitian, its eigendecomposition (eigen) is used to compute the inverse tangent. Otherwise, the inverse tangent is determined by using log. For the theory and logarithmic formulas used to compute this function, see [AH16_3].

Examples

  1. julia> atan(tan([0.5 0.1; -0.2 0.3]))
  2. 2×2 Matrix{ComplexF64}:
  3. 0.5+1.38778e-17im 0.1-2.77556e-17im
  4. -0.2+6.93889e-17im 0.3-4.16334e-17im

source

Base.Math.asec — Method

  1. asec(A::AbstractMatrix)

Compute the inverse matrix secant of A.

source

Base.Math.acsc — Method

  1. acsc(A::AbstractMatrix)

Compute the inverse matrix cosecant of A.

source

Base.Math.acot — Method

  1. acot(A::AbstractMatrix)

Compute the inverse matrix cotangent of A.

source

Base.acosh — Method

  1. acosh(A::AbstractMatrix)

Compute the inverse hyperbolic matrix cosine of a square matrix A. For the theory and logarithmic formulas used to compute this function, see [AH16_4].

source

Base.asinh — Method

  1. asinh(A::AbstractMatrix)

Compute the inverse hyperbolic matrix sine of a square matrix A. For the theory and logarithmic formulas used to compute this function, see [AH16_5].

source

Base.atanh — Method

  1. atanh(A::AbstractMatrix)

Compute the inverse hyperbolic matrix tangent of a square matrix A. For the theory and logarithmic formulas used to compute this function, see [AH16_6].

source

Base.Math.asech — Method

  1. asech(A::AbstractMatrix)

Compute the inverse matrix hyperbolic secant of A.

source

Base.Math.acsch — Method

  1. acsch(A::AbstractMatrix)

Compute the inverse matrix hyperbolic cosecant of A.

source

Base.Math.acoth — Method

  1. acoth(A::AbstractMatrix)

Compute the inverse matrix hyperbolic cotangent of A.

source

LinearAlgebra.lyap — Function

  1. lyap(A, C)

Computes the solution X to the continuous Lyapunov equation AX + XA' + C = 0, where no eigenvalue of A has a zero real part and no two eigenvalues are negative complex conjugates of each other.

Examples

  1. julia> A = [3. 4.; 5. 6]
  2. 2×2 Matrix{Float64}:
  3. 3.0 4.0
  4. 5.0 6.0
  5. julia> B = [1. 1.; 1. 2.]
  6. 2×2 Matrix{Float64}:
  7. 1.0 1.0
  8. 1.0 2.0
  9. julia> X = lyap(A, B)
  10. 2×2 Matrix{Float64}:
  11. 0.5 -0.5
  12. -0.5 0.25
  13. julia> A*X + X*A' + B
  14. 2×2 Matrix{Float64}:
  15. 0.0 6.66134e-16
  16. 6.66134e-16 8.88178e-16

source

LinearAlgebra.sylvester — Function

  1. sylvester(A, B, C)

Computes the solution X to the Sylvester equation AX + XB + C = 0, where A, B and C have compatible dimensions and A and -B have no eigenvalues with equal real part.

Examples

  1. julia> A = [3. 4.; 5. 6]
  2. 2×2 Matrix{Float64}:
  3. 3.0 4.0
  4. 5.0 6.0
  5. julia> B = [1. 1.; 1. 2.]
  6. 2×2 Matrix{Float64}:
  7. 1.0 1.0
  8. 1.0 2.0
  9. julia> C = [1. 2.; -2. 1]
  10. 2×2 Matrix{Float64}:
  11. 1.0 2.0
  12. -2.0 1.0
  13. julia> X = sylvester(A, B, C)
  14. 2×2 Matrix{Float64}:
  15. -4.46667 1.93333
  16. 3.73333 -1.8
  17. julia> A*X + X*B + C
  18. 2×2 Matrix{Float64}:
  19. 2.66454e-15 1.77636e-15
  20. -3.77476e-15 4.44089e-16

source

LinearAlgebra.issuccess — Function

  1. issuccess(F::Factorization)

Test that a factorization of a matrix succeeded.

Julia 1.6

issuccess(::CholeskyPivoted) requires Julia 1.6 or later.

  1. julia> F = cholesky([1 0; 0 1]);
  2. julia> LinearAlgebra.issuccess(F)
  3. true
  4. julia> F = lu([1 0; 0 0]; check = false);
  5. julia> LinearAlgebra.issuccess(F)
  6. false

source

LinearAlgebra.issymmetric — Function

  1. issymmetric(A) -> Bool

Test whether a matrix is symmetric.

Examples

  1. julia> a = [1 2; 2 -1]
  2. 2×2 Matrix{Int64}:
  3. 1 2
  4. 2 -1
  5. julia> issymmetric(a)
  6. true
  7. julia> b = [1 im; -im 1]
  8. 2×2 Matrix{Complex{Int64}}:
  9. 1+0im 0+1im
  10. 0-1im 1+0im
  11. julia> issymmetric(b)
  12. false

source

LinearAlgebra.isposdef — Function

  1. isposdef(A) -> Bool

Test whether a matrix is positive definite (and Hermitian) by trying to perform a Cholesky factorization of A.

See also isposdef!, cholesky.

Examples

  1. julia> A = [1 2; 2 50]
  2. 2×2 Matrix{Int64}:
  3. 1 2
  4. 2 50
  5. julia> isposdef(A)
  6. true

source

LinearAlgebra.isposdef! — Function

  1. isposdef!(A) -> Bool

Test whether a matrix is positive definite (and Hermitian) by trying to perform a Cholesky factorization of A, overwriting A in the process. See also isposdef.

Examples

  1. julia> A = [1. 2.; 2. 50.];
  2. julia> isposdef!(A)
  3. true
  4. julia> A
  5. 2×2 Matrix{Float64}:
  6. 1.0 2.0
  7. 2.0 6.78233

source

LinearAlgebra.istril — Function

  1. istril(A::AbstractMatrix, k::Integer = 0) -> Bool

Test whether A is lower triangular starting from the kth superdiagonal.

Examples

  1. julia> a = [1 2; 2 -1]
  2. 2×2 Matrix{Int64}:
  3. 1 2
  4. 2 -1
  5. julia> istril(a)
  6. false
  7. julia> istril(a, 1)
  8. true
  9. julia> b = [1 0; -im -1]
  10. 2×2 Matrix{Complex{Int64}}:
  11. 1+0im 0+0im
  12. 0-1im -1+0im
  13. julia> istril(b)
  14. true
  15. julia> istril(b, -1)
  16. false

source

LinearAlgebra.istriu — Function

  1. istriu(A::AbstractMatrix, k::Integer = 0) -> Bool

Test whether A is upper triangular starting from the kth superdiagonal.

Examples

  1. julia> a = [1 2; 2 -1]
  2. 2×2 Matrix{Int64}:
  3. 1 2
  4. 2 -1
  5. julia> istriu(a)
  6. false
  7. julia> istriu(a, -1)
  8. true
  9. julia> b = [1 im; 0 -1]
  10. 2×2 Matrix{Complex{Int64}}:
  11. 1+0im 0+1im
  12. 0+0im -1+0im
  13. julia> istriu(b)
  14. true
  15. julia> istriu(b, 1)
  16. false

source

LinearAlgebra.isdiag — Function

  1. isdiag(A) -> Bool

Test whether a matrix is diagonal.

Examples

  1. julia> a = [1 2; 2 -1]
  2. 2×2 Matrix{Int64}:
  3. 1 2
  4. 2 -1
  5. julia> isdiag(a)
  6. false
  7. julia> b = [im 0; 0 -im]
  8. 2×2 Matrix{Complex{Int64}}:
  9. 0+1im 0+0im
  10. 0+0im 0-1im
  11. julia> isdiag(b)
  12. true

source

LinearAlgebra.ishermitian — Function

  1. ishermitian(A) -> Bool

Test whether a matrix is Hermitian.

Examples

  1. julia> a = [1 2; 2 -1]
  2. 2×2 Matrix{Int64}:
  3. 1 2
  4. 2 -1
  5. julia> ishermitian(a)
  6. true
  7. julia> b = [1 im; -im 1]
  8. 2×2 Matrix{Complex{Int64}}:
  9. 1+0im 0+1im
  10. 0-1im 1+0im
  11. julia> ishermitian(b)
  12. true

source

Base.transpose — Function

  1. transpose(A)

Lazy transpose. Mutating the returned object should appropriately mutate A. Often, but not always, yields Transpose(A), where Transpose is a lazy transpose wrapper. Note that this operation is recursive.

This operation is intended for linear algebra usage - for general data manipulation see permutedims, which is non-recursive.

Examples

  1. julia> A = [3+2im 9+2im; 8+7im 4+6im]
  2. 2×2 Matrix{Complex{Int64}}:
  3. 3+2im 9+2im
  4. 8+7im 4+6im
  5. julia> transpose(A)
  6. 2×2 transpose(::Matrix{Complex{Int64}}) with eltype Complex{Int64}:
  7. 3+2im 8+7im
  8. 9+2im 4+6im

source

LinearAlgebra.transpose! — Function

  1. transpose!(dest,src)

Transpose array src and store the result in the preallocated array dest, which should have a size corresponding to (size(src,2),size(src,1)). No in-place transposition is supported and unexpected results will happen if src and dest have overlapping memory regions.

Examples

  1. julia> A = [3+2im 9+2im; 8+7im 4+6im]
  2. 2×2 Matrix{Complex{Int64}}:
  3. 3+2im 9+2im
  4. 8+7im 4+6im
  5. julia> B = zeros(Complex{Int64}, 2, 2)
  6. 2×2 Matrix{Complex{Int64}}:
  7. 0+0im 0+0im
  8. 0+0im 0+0im
  9. julia> transpose!(B, A);
  10. julia> B
  11. 2×2 Matrix{Complex{Int64}}:
  12. 3+2im 8+7im
  13. 9+2im 4+6im
  14. julia> A
  15. 2×2 Matrix{Complex{Int64}}:
  16. 3+2im 9+2im
  17. 8+7im 4+6im

source

LinearAlgebra.Transpose — Type

  1. Transpose

Lazy wrapper type for a transpose view of the underlying linear algebra object, usually an AbstractVector/AbstractMatrix, but also some Factorization, for instance. Usually, the Transpose constructor should not be called directly, use transpose instead. To materialize the view use copy.

This type is intended for linear algebra usage - for general data manipulation see permutedims.

Examples

  1. julia> A = [3+2im 9+2im; 8+7im 4+6im]
  2. 2×2 Matrix{Complex{Int64}}:
  3. 3+2im 9+2im
  4. 8+7im 4+6im
  5. julia> transpose(A)
  6. 2×2 transpose(::Matrix{Complex{Int64}}) with eltype Complex{Int64}:
  7. 3+2im 8+7im
  8. 9+2im 4+6im

source

Base.adjoint — Function

  1. A'
  2. adjoint(A)

Lazy adjoint (conjugate transposition). Note that adjoint is applied recursively to elements.

For number types, adjoint returns the complex conjugate, and therefore it is equivalent to the identity function for real numbers.

This operation is intended for linear algebra usage - for general data manipulation see permutedims.

Examples

  1. julia> A = [3+2im 9+2im; 8+7im 4+6im]
  2. 2×2 Matrix{Complex{Int64}}:
  3. 3+2im 9+2im
  4. 8+7im 4+6im
  5. julia> adjoint(A)
  6. 2×2 adjoint(::Matrix{Complex{Int64}}) with eltype Complex{Int64}:
  7. 3-2im 8-7im
  8. 9-2im 4-6im
  9. julia> x = [3, 4im]
  10. 2-element Vector{Complex{Int64}}:
  11. 3 + 0im
  12. 0 + 4im
  13. julia> x'x
  14. 25 + 0im

source

LinearAlgebra.adjoint! — Function

  1. adjoint!(dest,src)

Conjugate transpose array src and store the result in the preallocated array dest, which should have a size corresponding to (size(src,2),size(src,1)). No in-place transposition is supported and unexpected results will happen if src and dest have overlapping memory regions.

Examples

  1. julia> A = [3+2im 9+2im; 8+7im 4+6im]
  2. 2×2 Matrix{Complex{Int64}}:
  3. 3+2im 9+2im
  4. 8+7im 4+6im
  5. julia> B = zeros(Complex{Int64}, 2, 2)
  6. 2×2 Matrix{Complex{Int64}}:
  7. 0+0im 0+0im
  8. 0+0im 0+0im
  9. julia> adjoint!(B, A);
  10. julia> B
  11. 2×2 Matrix{Complex{Int64}}:
  12. 3-2im 8-7im
  13. 9-2im 4-6im
  14. julia> A
  15. 2×2 Matrix{Complex{Int64}}:
  16. 3+2im 9+2im
  17. 8+7im 4+6im

source

LinearAlgebra.Adjoint — Type

  1. Adjoint

Lazy wrapper type for an adjoint view of the underlying linear algebra object, usually an AbstractVector/AbstractMatrix, but also some Factorization, for instance. Usually, the Adjoint constructor should not be called directly, use adjoint instead. To materialize the view use copy.

This type is intended for linear algebra usage - for general data manipulation see permutedims.

Examples

  1. julia> A = [3+2im 9+2im; 8+7im 4+6im]
  2. 2×2 Matrix{Complex{Int64}}:
  3. 3+2im 9+2im
  4. 8+7im 4+6im
  5. julia> adjoint(A)
  6. 2×2 adjoint(::Matrix{Complex{Int64}}) with eltype Complex{Int64}:
  7. 3-2im 8-7im
  8. 9-2im 4-6im

source

Base.copy — Method

  1. copy(A::Transpose)
  2. copy(A::Adjoint)

Eagerly evaluate the lazy matrix transpose/adjoint. Note that the transposition is applied recursively to elements.

This operation is intended for linear algebra usage - for general data manipulation see permutedims, which is non-recursive.

Examples

  1. julia> A = [1 2im; -3im 4]
  2. 2×2 Matrix{Complex{Int64}}:
  3. 1+0im 0+2im
  4. 0-3im 4+0im
  5. julia> T = transpose(A)
  6. 2×2 transpose(::Matrix{Complex{Int64}}) with eltype Complex{Int64}:
  7. 1+0im 0-3im
  8. 0+2im 4+0im
  9. julia> copy(T)
  10. 2×2 Matrix{Complex{Int64}}:
  11. 1+0im 0-3im
  12. 0+2im 4+0im

source

LinearAlgebra.stride1 — Function

  1. stride1(A) -> Int

Return the distance between successive array elements in dimension 1 in units of element size.

Examples

  1. julia> A = [1,2,3,4]
  2. 4-element Vector{Int64}:
  3. 1
  4. 2
  5. 3
  6. 4
  7. julia> LinearAlgebra.stride1(A)
  8. 1
  9. julia> B = view(A, 2:2:4)
  10. 2-element view(::Vector{Int64}, 2:2:4) with eltype Int64:
  11. 2
  12. 4
  13. julia> LinearAlgebra.stride1(B)
  14. 2

source

LinearAlgebra.checksquare — Function

  1. LinearAlgebra.checksquare(A)

Check that a matrix is square, then return its common dimension. For multiple arguments, return a vector.

Examples

  1. julia> A = fill(1, (4,4)); B = fill(1, (5,5));
  2. julia> LinearAlgebra.checksquare(A, B)
  3. 2-element Vector{Int64}:
  4. 4
  5. 5

source

LinearAlgebra.peakflops — Function

  1. LinearAlgebra.peakflops(n::Integer=2000; parallel::Bool=false)

peakflops computes the peak flop rate of the computer by using double precision gemm!. By default, if no arguments are specified, it multiplies a matrix of size n x n, where n = 2000. If the underlying BLAS is using multiple threads, higher flop rates are realized. The number of BLAS threads can be set with BLAS.set_num_threads(n).

If the keyword argument parallel is set to true, peakflops is run in parallel on all the worker processors. The flop rate of the entire parallel computer is returned. When running in parallel, only 1 BLAS thread is used. The argument n still refers to the size of the problem that is solved on each processor.

Julia 1.1

This function requires at least Julia 1.1. In Julia 1.0 it is available from the standard library InteractiveUtils.

source

Low-level matrix operations

In many cases there are in-place versions of matrix operations that allow you to supply a pre-allocated output vector or matrix. This is useful when optimizing critical code in order to avoid the overhead of repeated allocations. These in-place operations are suffixed with ! below (e.g. mul!) according to the usual Julia convention.

LinearAlgebra.mul! — Function

  1. mul!(Y, A, B) -> Y

Calculates the matrix-matrix or matrix-vector product $AB$ and stores the result in Y, overwriting the existing value of Y. Note that Y must not be aliased with either A or B.

Examples

  1. julia> A=[1.0 2.0; 3.0 4.0]; B=[1.0 1.0; 1.0 1.0]; Y = similar(B); mul!(Y, A, B);
  2. julia> Y
  3. 2×2 Matrix{Float64}:
  4. 3.0 3.0
  5. 7.0 7.0

Implementation

For custom matrix and vector types, it is recommended to implement 5-argument mul! rather than implementing 3-argument mul! directly if possible.
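
For instance, a minimal sketch of that recommendation (ScaledIdentity and its method are hypothetical, not part of LinearAlgebra): a custom operator representing c*I that supports the in-place multiply-add by defining only the five-argument method.

  1. julia> using LinearAlgebra
  2. julia> struct ScaledIdentity; c::Float64; end # hypothetical operator representing c*I
  3. julia> function LinearAlgebra.mul!(Y::AbstractVector, A::ScaledIdentity, x::AbstractVector, α::Number, β::Number)
  4. Y .= A.c .* x .* α .+ Y .* β # fused in-place multiply-add; simplified sketch assuming finite entries in Y
  5. end;
  6. julia> mul!(ones(2), ScaledIdentity(2.0), [1.0, 2.0], 10.0, 1.0) # 2.0*[1,2]*10 + [1,1]*1
  7. 2-element Vector{Float64}:
  8. 21.0
  9. 41.0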

source

  1. mul!(C, A, B, α, β) -> C

Combined in-place matrix-matrix or matrix-vector multiply-add $A B α + C β$. The result is stored in C by overwriting it. Note that C must not be aliased with either A or B.

Julia 1.3

Five-argument mul! requires at least Julia 1.3.

Examples

  1. julia> A=[1.0 2.0; 3.0 4.0]; B=[1.0 1.0; 1.0 1.0]; C=[1.0 2.0; 3.0 4.0];
  2. julia> mul!(C, A, B, 100.0, 10.0) === C
  3. true
  4. julia> C
  5. 2×2 Matrix{Float64}:
  6. 310.0 320.0
  7. 730.0 740.0

source

LinearAlgebra.lmul! — Function

  1. lmul!(a::Number, B::AbstractArray)

Scale an array B by a scalar a overwriting B in-place. Use rmul! to multiply scalar from right. The scaling operation respects the semantics of the multiplication * between a and an element of B. In particular, this also applies to multiplication involving non-finite numbers such as NaN and ±Inf.

Julia 1.1

Prior to Julia 1.1, NaN and ±Inf entries in B were treated inconsistently.

Examples

  1. julia> B = [1 2; 3 4]
  2. 2×2 Matrix{Int64}:
  3. 1 2
  4. 3 4
  5. julia> lmul!(2, B)
  6. 2×2 Matrix{Int64}:
  7. 2 4
  8. 6 8
  9. julia> lmul!(0.0, [Inf])
  10. 1-element Vector{Float64}:
  11. NaN

source

  1. lmul!(A, B)

Calculate the matrix-matrix product $AB$, overwriting B, and return the result. Here, A must be of special matrix type, like, e.g., Diagonal, UpperTriangular or LowerTriangular, or of some orthogonal type, see QR.

Examples

  1. julia> B = [0 1; 1 0];
  2. julia> A = LinearAlgebra.UpperTriangular([1 2; 0 3]);
  3. julia> LinearAlgebra.lmul!(A, B);
  4. julia> B
  5. 2×2 Matrix{Int64}:
  6. 2 1
  7. 3 0
  8. julia> B = [1.0 2.0; 3.0 4.0];
  9. julia> F = qr([0 1; -1 0]);
  10. julia> lmul!(F.Q, B)
  11. 2×2 Matrix{Float64}:
  12. 3.0 4.0
  13. 1.0 2.0

source

LinearAlgebra.rmul! — Function

  1. rmul!(A::AbstractArray, b::Number)

Scale an array A by a scalar b overwriting A in-place. Use lmul! to multiply scalar from left. The scaling operation respects the semantics of the multiplication * between an element of A and b. In particular, this also applies to multiplication involving non-finite numbers such as NaN and ±Inf.

Julia 1.1

Prior to Julia 1.1, NaN and ±Inf entries in A were treated inconsistently.

Examples

  1. julia> A = [1 2; 3 4]
  2. 2×2 Matrix{Int64}:
  3. 1 2
  4. 3 4
  5. julia> rmul!(A, 2)
  6. 2×2 Matrix{Int64}:
  7. 2 4
  8. 6 8
  9. julia> rmul!([NaN], 0.0)
  10. 1-element Vector{Float64}:
  11. NaN

source

  1. rmul!(A, B)

Calculate the matrix-matrix product $AB$, overwriting A, and return the result. Here, B must be of special matrix type, like, e.g., Diagonal, UpperTriangular or LowerTriangular, or of some orthogonal type, see QR.

Examples

  1. julia> A = [0 1; 1 0];
  2. julia> B = LinearAlgebra.UpperTriangular([1 2; 0 3]);
  3. julia> LinearAlgebra.rmul!(A, B);
  4. julia> A
  5. 2×2 Matrix{Int64}:
  6. 0 3
  7. 1 2
  8. julia> A = [1.0 2.0; 3.0 4.0];
  9. julia> F = qr([0 1; -1 0]);
  10. julia> rmul!(A, F.Q)
  11. 2×2 Matrix{Float64}:
  12. 2.0 1.0
  13. 4.0 3.0

source

LinearAlgebra.ldiv! — Function

  1. ldiv!(Y, A, B) -> Y

Compute A \ B in-place and store the result in Y, returning the result.

The argument A should not be a matrix. Rather, instead of matrices it should be a factorization object (e.g. produced by factorize or cholesky). The reason for this is that factorization itself is both expensive and typically allocates memory (although it can also be done in-place via, e.g., lu!), and performance-critical situations requiring ldiv! usually also require fine-grained control over the factorization of A.

Examples

  1. julia> A = [1 2.2 4; 3.1 0.2 3; 4 1 2];
  2. julia> X = [1; 2.5; 3];
  3. julia> Y = zero(X);
  4. julia> ldiv!(Y, qr(A), X);
  5. julia> Y
  6. 3-element Vector{Float64}:
  7. 0.7128099173553719
  8. -0.051652892561983806
  9. 0.10020661157024781
  10. julia> A\X
  11. 3-element Vector{Float64}:
  12. 0.7128099173553719
  13. -0.05165289256198342
  14. 0.1002066115702479

source

  1. ldiv!(A, B)

Compute A \ B in-place and overwriting B to store the result.

The argument A should not be a matrix. Rather, instead of matrices it should be a factorization object (e.g. produced by factorize or cholesky). The reason for this is that factorization itself is both expensive and typically allocates memory (although it can also be done in-place via, e.g., lu!), and performance-critical situations requiring ldiv! usually also require fine-grained control over the factorization of A.

Examples

  1. julia> A = [1 2.2 4; 3.1 0.2 3; 4 1 2];
  2. julia> X = [1; 2.5; 3];
  3. julia> Y = copy(X);
  4. julia> ldiv!(qr(A), X);
  5. julia> X
  6. 3-element Vector{Float64}:
  7. 0.7128099173553719
  8. -0.051652892561983806
  9. 0.10020661157024781
  10. julia> A\Y
  11. 3-element Vector{Float64}:
  12. 0.7128099173553719
  13. -0.05165289256198342
  14. 0.1002066115702479

source

  1. ldiv!(a::Number, B::AbstractArray)

Divide each entry in an array B by a scalar a overwriting B in-place. Use rdiv! to divide scalar from right.

Examples

  1. julia> B = [1.0 2.0; 3.0 4.0]
  2. 2×2 Matrix{Float64}:
  3. 1.0 2.0
  4. 3.0 4.0
  5. julia> ldiv!(2.0, B)
  6. 2×2 Matrix{Float64}:
  7. 0.5 1.0
  8. 1.5 2.0

source

LinearAlgebra.rdiv! — Function

  1. rdiv!(A, B)

Compute A / B in-place and overwriting A to store the result.

The argument B should not be a matrix. Rather, instead of matrices it should be a factorization object (e.g. produced by factorize or cholesky). The reason for this is that factorization itself is both expensive and typically allocates memory (although it can also be done in-place via, e.g., lu!), and performance-critical situations requiring rdiv! usually also require fine-grained control over the factorization of B.
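
For illustration, a hedged sketch (assuming rdiv! methods for Cholesky factorizations, as in recent Julia versions): computing A / B in-place from a pre-computed factorization of B.

  1. julia> A = [1.0 2.0; 3.0 4.0];
  2. julia> F = cholesky([4.0 0.0; 0.0 2.0]); # B is symmetric positive definite
  3. julia> rdiv!(A, F); # A now holds A / B
  4. julia> A
  5. 2×2 Matrix{Float64}:
  6. 0.25 1.0
  7. 0.75 2.0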

source

  1. rdiv!(A::AbstractArray, b::Number)

Divide each entry in an array A by a scalar b overwriting A in-place. Use ldiv! to divide scalar from left.

Examples

  1. julia> A = [1.0 2.0; 3.0 4.0]
  2. 2×2 Matrix{Float64}:
  3. 1.0 2.0
  4. 3.0 4.0
  5. julia> rdiv!(A, 2.0)
  6. 2×2 Matrix{Float64}:
  7. 0.5 1.0
  8. 1.5 2.0

source

BLAS functions

In Julia (as in much of scientific computation), dense linear-algebra operations are based on the LAPACK library, which in turn is built on top of basic linear-algebra building-blocks known as the BLAS. There are highly optimized implementations of BLAS available for every computer architecture, and sometimes in high-performance linear algebra routines it is useful to call the BLAS functions directly.

LinearAlgebra.BLAS provides wrappers for some of the BLAS functions. Those BLAS functions that overwrite one of the input arrays have names ending in '!'. Usually, a BLAS function has four methods defined, for Float64, Float32, ComplexF64, and ComplexF32 arrays.

BLAS character arguments

Many BLAS functions accept arguments that determine whether to transpose an argument (trans), which triangle of a matrix to reference (uplo or ul), whether the diagonal of a triangular matrix can be assumed to be all ones (dA), or which side of a matrix multiplication the input argument belongs on (side). The possibilities are:

Multiplication order

side   Meaning
'L'    The argument goes on the left side of a matrix-matrix operation.
'R'    The argument goes on the right side of a matrix-matrix operation.

Triangle referencing

uplo/ul   Meaning
'U'       Only the upper triangle of the matrix will be used.
'L'       Only the lower triangle of the matrix will be used.

Transposition operation

trans/tX   Meaning
'N'        The input matrix X is not transposed or conjugated.
'T'        The input matrix X will be transposed.
'C'        The input matrix X will be conjugated and transposed.

Unit diagonal

diag/dX   Meaning
'N'       The diagonal values of the matrix X will be read.
'U'       The diagonal of the matrix X is assumed to be all ones.
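
To see the trans argument in action, here is a minimal sketch using gemv (documented below): 'N' applies A and 'T' applies transpose(A).

  1. julia> A = [1.0 2.0; 3.0 4.0]; x = [1.0, 1.0];
  2. julia> BLAS.gemv('N', A, x)
  3. 2-element Vector{Float64}:
  4. 3.0
  5. 7.0
  6. julia> BLAS.gemv('T', A, x)
  7. 2-element Vector{Float64}:
  8. 4.0
  9. 6.0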

LinearAlgebra.BLAS — Module

Interface to BLAS subroutines.

source

LinearAlgebra.BLAS.dot — Function

  1. dot(n, X, incx, Y, incy)

Dot product of two vectors consisting of n elements of array X with stride incx and n elements of array Y with stride incy.

Examples

  1. julia> BLAS.dot(10, fill(1.0, 10), 1, fill(1.0, 20), 2)
  2. 10.0

source

LinearAlgebra.BLAS.dotu — Function

  1. dotu(n, X, incx, Y, incy)

Dot function for two complex vectors consisting of n elements of array X with stride incx and n elements of array Y with stride incy.

Examples

  1. julia> BLAS.dotu(10, fill(1.0im, 10), 1, fill(1.0+im, 20), 2)
  2. -10.0 + 10.0im

source

LinearAlgebra.BLAS.dotc — Function

  1. dotc(n, X, incx, U, incy)

Dot function for two complex vectors, consisting of n elements of array X with stride incx and n elements of array U with stride incy, conjugating the first vector.

Examples

  1. julia> BLAS.dotc(10, fill(1.0im, 10), 1, fill(1.0+im, 20), 2)
  2. 10.0 - 10.0im

source

LinearAlgebra.BLAS.blascopy! — Function

  1. blascopy!(n, X, incx, Y, incy)

Copy n elements of array X with stride incx to array Y with stride incy. Returns Y.
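
A minimal sketch (not from the docstring): copying three elements into every other slot of a longer destination.

  1. julia> x = fill(1.0, 4); y = zeros(6);
  2. julia> BLAS.blascopy!(3, x, 1, y, 2)
  3. 6-element Vector{Float64}:
  4. 1.0
  5. 0.0
  6. 1.0
  7. 0.0
  8. 1.0
  9. 0.0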

source

LinearAlgebra.BLAS.nrm2 — Function

  1. nrm2(n, X, incx)

2-norm of a vector consisting of n elements of array X with stride incx.

Examples

  1. julia> BLAS.nrm2(4, fill(1.0, 8), 2)
  2. 2.0
  3. julia> BLAS.nrm2(1, fill(1.0, 8), 2)
  4. 1.0

source

LinearAlgebra.BLAS.asum — Function

  1. asum(n, X, incx)

Sum of the magnitudes of the first n elements of array X with stride incx.

For a real array, the magnitude is the absolute value. For a complex array, the magnitude is the sum of the absolute value of the real part and the absolute value of the imaginary part.

Examples

  1. julia> BLAS.asum(5, fill(1.0im, 10), 2)
  2. 5.0
  3. julia> BLAS.asum(2, fill(1.0im, 10), 5)
  4. 2.0

source

LinearAlgebra.axpy! — Function

  1. axpy!(a, X, Y)

Overwrite Y with X*a + Y, where a is a scalar. Return Y.

Examples

  1. julia> x = [1; 2; 3];
  2. julia> y = [4; 5; 6];
  3. julia> BLAS.axpy!(2, x, y)
  4. 3-element Vector{Int64}:
  5. 6
  6. 9
  7. 12

source

LinearAlgebra.axpby! — Function

  1. axpby!(a, X, b, Y)

Overwrite Y with X*a + Y*b, where a and b are scalars. Return Y.

Examples

  1. julia> x = [1., 2, 3];
  2. julia> y = [4., 5, 6];
  3. julia> BLAS.axpby!(2., x, 3., y)
  4. 3-element Vector{Float64}:
  5. 14.0
  6. 19.0
  7. 24.0

source

LinearAlgebra.BLAS.scal! — Function

  1. scal!(n, a, X, incx)
  2. scal!(a, X)

Overwrite X with a*X for the first n elements of array X with stride incx. Returns X.

If n and incx are not provided, length(X) and stride(X,1) are used.
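
A minimal sketch (not from the docstring): scaling three strided elements in-place.

  1. julia> x = fill(1.0, 6);
  2. julia> BLAS.scal!(3, 2.0, x, 2) # scales x[1], x[3], x[5]
  3. 6-element Vector{Float64}:
  4. 2.0
  5. 1.0
  6. 2.0
  7. 1.0
  8. 2.0
  9. 1.0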

source

LinearAlgebra.BLAS.scal — Function

  1. scal(n, a, X, incx)
  2. scal(a, X)

Return X scaled by a for the first n elements of array X with stride incx.

If n and incx are not provided, length(X) and stride(X,1) are used.

source

LinearAlgebra.BLAS.iamax — Function

  1. iamax(n, dx, incx)
  2. iamax(dx)

Find the index of the element of dx with the maximum absolute value. n is the length of dx, and incx is the stride. If n and incx are not provided, they assume default values of n=length(dx) and incx=stride1(dx).
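
A minimal sketch (not from the docstring):

  1. julia> BLAS.iamax([1.0, -5.0, 3.0]) # |-5.0| is the largest magnitude
  2. 2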

source

LinearAlgebra.BLAS.ger! — Function

  1. ger!(alpha, x, y, A)

Rank-1 update of the matrix A with vectors x and y as alpha*x*y' + A.
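
A minimal sketch (not from the docstring): accumulating the outer product x*y' into A.

  1. julia> A = zeros(2, 2); x = [1.0, 2.0]; y = [3.0, 4.0];
  2. julia> BLAS.ger!(1.0, x, y, A)
  3. 2×2 Matrix{Float64}:
  4. 3.0 4.0
  5. 6.0 8.0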

source

LinearAlgebra.BLAS.syr! — Function

  1. syr!(uplo, alpha, x, A)

Rank-1 update of the symmetric matrix A with vector x as alpha*x*transpose(x) + A. uplo controls which triangle of A is updated. Returns A.

source

LinearAlgebra.BLAS.syrk! — Function

  1. syrk!(uplo, trans, alpha, A, beta, C)

Rank-k update of the symmetric matrix C as alpha*A*transpose(A) + beta*C or alpha*transpose(A)*A + beta*C according to trans. Only the uplo triangle of C is used. Returns C.

source

LinearAlgebra.BLAS.syrk — Function

  1. syrk(uplo, trans, alpha, A)

Returns either the upper triangle or the lower triangle of A, according to uplo, of alpha*A*transpose(A) or alpha*transpose(A)*A, according to trans.

source

LinearAlgebra.BLAS.syr2k! — Function

  1. syr2k!(uplo, trans, alpha, A, B, beta, C)

Rank-2k update of the symmetric matrix C as alpha*A*transpose(B) + alpha*B*transpose(A) + beta*C or alpha*transpose(A)*B + alpha*transpose(B)*A + beta*C according to trans. Only the uplo triangle of C is used. Returns C.

source

LinearAlgebra.BLAS.syr2k — Function

  1. syr2k(uplo, trans, alpha, A, B)

Returns the uplo triangle of alpha*A*transpose(B) + alpha*B*transpose(A) or alpha*transpose(A)*B + alpha*transpose(B)*A, according to trans.

source

  1. syr2k(uplo, trans, A, B)

Returns the uplo triangle of A*transpose(B) + B*transpose(A) or transpose(A)*B + transpose(B)*A, according to trans.

source

LinearAlgebra.BLAS.her! — Function

  1. her!(uplo, alpha, x, A)

Methods for complex arrays only. Rank-1 update of the Hermitian matrix A with vector x as alpha*x*x' + A. uplo controls which triangle of A is updated. Returns A.

source

LinearAlgebra.BLAS.herk! — Function

  1. herk!(uplo, trans, alpha, A, beta, C)

Methods for complex arrays only. Rank-k update of the Hermitian matrix C as alpha*A*A' + beta*C or alpha*A'*A + beta*C according to trans. Only the uplo triangle of C is updated. Returns C.

source

LinearAlgebra.BLAS.herk — Function

  1. herk(uplo, trans, alpha, A)

Methods for complex arrays only. Returns the uplo triangle of alpha*A*A' or alpha*A'*A, according to trans.

source

LinearAlgebra.BLAS.her2k! — Function

  1. her2k!(uplo, trans, alpha, A, B, beta, C)

Rank-2k update of the Hermitian matrix C as alpha*A*B' + alpha*B*A' + beta*C or alpha*A'*B + alpha*B'*A + beta*C according to trans. The scalar beta has to be real. Only the uplo triangle of C is used. Returns C.

source

LinearAlgebra.BLAS.her2k — Function

  1. her2k(uplo, trans, alpha, A, B)

Returns the uplo triangle of alpha*A*B' + alpha*B*A' or alpha*A'*B + alpha*B'*A, according to trans.

source

  1. her2k(uplo, trans, A, B)

Returns the uplo triangle of A*B' + B*A' or A'*B + B'*A, according to trans.

source

LinearAlgebra.BLAS.gbmv! — Function

  1. gbmv!(trans, m, kl, ku, alpha, A, x, beta, y)

Update vector y as alpha*A*x + beta*y or alpha*A'*x + beta*y according to trans. The matrix A is a general band matrix of dimension m by size(A,2) with kl sub-diagonals and ku super-diagonals. alpha and beta are scalars. Return the updated y.

source

LinearAlgebra.BLAS.gbmv — Function

  1. gbmv(trans, m, kl, ku, alpha, A, x)

Return alpha*A*x or alpha*A'*x according to trans. The matrix A is a general band matrix of dimension m by size(A,2) with kl sub-diagonals and ku super-diagonals, and alpha is a scalar.

source

LinearAlgebra.BLAS.sbmv! — Function

  1. sbmv!(uplo, k, alpha, A, x, beta, y)

Update vector y as alpha*A*x + beta*y where A is a symmetric band matrix of order size(A,2) with k super-diagonals stored in the argument A. The storage layout for A is described in the reference BLAS module, level-2 BLAS at http://www.netlib.org/lapack/explore-html/. Only the uplo triangle of A is used.

Return the updated y.

source

LinearAlgebra.BLAS.sbmv — Method

  1. sbmv(uplo, k, alpha, A, x)

Return alpha*A*x where A is a symmetric band matrix of order size(A,2) with k super-diagonals stored in the argument A. Only the uplo triangle of A is used.

source

LinearAlgebra.BLAS.sbmv — Method

  1. sbmv(uplo, k, A, x)

Return A*x where A is a symmetric band matrix of order size(A,2) with k super-diagonals stored in the argument A. Only the uplo triangle of A is used.

source

LinearAlgebra.BLAS.gemm! — Function

  1. gemm!(tA, tB, alpha, A, B, beta, C)

Update C as alpha*A*B + beta*C or the other three variants according to tA and tB. Return the updated C.
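
A minimal sketch (not from the docstring), with alpha = 1 and beta = 0 so that C simply receives A*B:

  1. julia> A = [1.0 2.0; 3.0 4.0]; B = [0.0 1.0; 1.0 0.0]; C = zeros(2, 2);
  2. julia> BLAS.gemm!('N', 'N', 1.0, A, B, 0.0, C)
  3. 2×2 Matrix{Float64}:
  4. 2.0 1.0
  5. 4.0 3.0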

source

LinearAlgebra.BLAS.gemm — Method

  1. gemm(tA, tB, alpha, A, B)

Return alpha*A*B or the other three variants according to tA and tB.

source

LinearAlgebra.BLAS.gemm — Method

  1. gemm(tA, tB, A, B)

Return A*B or the other three variants according to tA and tB.

source

LinearAlgebra.BLAS.gemv! — Function

  1. gemv!(tA, alpha, A, x, beta, y)

Update the vector y as alpha*A*x + beta*y or alpha*A'x + beta*y according to tA. alpha and beta are scalars. Return the updated y.

source

LinearAlgebra.BLAS.gemv — Method

  1. gemv(tA, alpha, A, x)

Return alpha*A*x or alpha*A'x according to tA. alpha is a scalar.

source

LinearAlgebra.BLAS.gemv — Method

  1. gemv(tA, A, x)

Return A*x or A'x according to tA.

source

LinearAlgebra.BLAS.symm! — Function

  1. symm!(side, ul, alpha, A, B, beta, C)

Update C as alpha*A*B + beta*C or alpha*B*A + beta*C according to side. A is assumed to be symmetric. Only the ul triangle of A is used. Return the updated C.

source

LinearAlgebra.BLAS.symm — Method

  1. symm(side, ul, alpha, A, B)

Return alpha*A*B or alpha*B*A according to side. A is assumed to be symmetric. Only the ul triangle of A is used.

source

LinearAlgebra.BLAS.symm — Method

  1. symm(side, ul, A, B)

Return A*B or B*A according to side. A is assumed to be symmetric. Only the ul triangle of A is used.

source

LinearAlgebra.BLAS.symv! — Function

  1. symv!(ul, alpha, A, x, beta, y)

Update the vector y as alpha*A*x + beta*y. A is assumed to be symmetric. Only the ul triangle of A is used. alpha and beta are scalars. Return the updated y.
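
A minimal sketch (not from the docstring): only the upper triangle of A is referenced.

  1. julia> A = [1.0 2.0; 2.0 3.0]; x = [1.0, 1.0]; y = zeros(2);
  2. julia> BLAS.symv!('U', 1.0, A, x, 0.0, y)
  3. 2-element Vector{Float64}:
  4. 3.0
  5. 5.0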

source

LinearAlgebra.BLAS.symv — Method

  1. symv(ul, alpha, A, x)

Return alpha*A*x. A is assumed to be symmetric. Only the ul triangle of A is used. alpha is a scalar.

source

LinearAlgebra.BLAS.symv — Method

  1. symv(ul, A, x)

Return A*x. A is assumed to be symmetric. Only the ul triangle of A is used.

source

LinearAlgebra.BLAS.hemm! — Function

  1. hemm!(side, ul, alpha, A, B, beta, C)

Update C as alpha*A*B + beta*C or alpha*B*A + beta*C according to side. A is assumed to be Hermitian. Only the ul triangle of A is used. Return the updated C.

source

LinearAlgebra.BLAS.hemm — Method

  1. hemm(side, ul, alpha, A, B)

Return alpha*A*B or alpha*B*A according to side. A is assumed to be Hermitian. Only the ul triangle of A is used.

source

LinearAlgebra.BLAS.hemm — Method

  1. hemm(side, ul, A, B)

Return A*B or B*A according to side. A is assumed to be Hermitian. Only the ul triangle of A is used.

source

LinearAlgebra.BLAS.hemv! — Function

  1. hemv!(ul, alpha, A, x, beta, y)

Update the vector y as alpha*A*x + beta*y. A is assumed to be Hermitian. Only the ul triangle of A is used. alpha and beta are scalars. Return the updated y.

source

LinearAlgebra.BLAS.hemv — Method

  1. hemv(ul, alpha, A, x)

Return alpha*A*x. A is assumed to be Hermitian. Only the ul triangle of A is used. alpha is a scalar.

source

LinearAlgebra.BLAS.hemv — Method

  1. hemv(ul, A, x)

Return A*x. A is assumed to be Hermitian. Only the ul triangle of A is used.

source

LinearAlgebra.BLAS.trmm! — Function

  1. trmm!(side, ul, tA, dA, alpha, A, B)

Update B as alpha*A*B or one of the other three variants determined by side and tA. Only the ul triangle of A is used. dA determines if the diagonal values are read or are assumed to be all ones. Returns the updated B.

source

LinearAlgebra.BLAS.trmm — Function

  1. trmm(side, ul, tA, dA, alpha, A, B)

Returns alpha*A*B or one of the other three variants determined by side and tA. Only the ul triangle of A is used. dA determines if the diagonal values are read or are assumed to be all ones.

source

LinearAlgebra.BLAS.trsm! — Function

  1. trsm!(side, ul, tA, dA, alpha, A, B)

Overwrite B with the solution to A*X = alpha*B or one of the other three variants determined by side and tA. Only the ul triangle of A is used. dA determines if the diagonal values are read or are assumed to be all ones. Returns the updated B.

source

LinearAlgebra.BLAS.trsm — Function

  1. trsm(side, ul, tA, dA, alpha, A, B)

Return the solution to A*X = alpha*B or one of the other three variants determined by side and tA. Only the ul triangle of A is used. dA determines if the diagonal values are read or are assumed to be all ones.

source

LinearAlgebra.BLAS.trmv! — Function

  1. trmv!(ul, tA, dA, A, b)

Return op(A)*b, where op is determined by tA. Only the ul triangle of A is used. dA determines if the diagonal values are read or are assumed to be all ones. The multiplication occurs in-place on b.

source

LinearAlgebra.BLAS.trmv — Function

  1. trmv(ul, tA, dA, A, b)

Return op(A)*b, where op is determined by tA. Only the ul triangle of A is used. dA determines if the diagonal values are read or are assumed to be all ones.

source

LinearAlgebra.BLAS.trsv! — Function

  1. trsv!(ul, tA, dA, A, b)

Overwrite b with the solution to A*x = b or one of the other two variants determined by tA and ul. dA determines if the diagonal values are read or are assumed to be all ones. Return the updated b.
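
A minimal sketch (not from the docstring): back-substitution with an upper triangular A.

  1. julia> A = [2.0 1.0; 0.0 4.0]; b = [3.0, 8.0];
  2. julia> BLAS.trsv!('U', 'N', 'N', A, b) # solves A*x = b in-place
  3. 2-element Vector{Float64}:
  4. 0.5
  5. 2.0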

source

LinearAlgebra.BLAS.trsv — Function

  1. trsv(ul, tA, dA, A, b)

Return the solution to A*x = b or one of the other two variants determined by tA and ul. dA determines if the diagonal values are read or are assumed to be all ones.

source

LinearAlgebra.BLAS.set_num_threads — Function

  1. set_num_threads(n::Integer)
  2. set_num_threads(::Nothing)

Set the number of threads the BLAS library should use equal to n::Integer.

Also accepts nothing, in which case julia tries to guess the default number of threads. Passing nothing is discouraged and mainly exists for historical reasons.
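
A minimal sketch (not from the docstring; get_num_threads requires Julia 1.6, see below):

  1. julia> BLAS.set_num_threads(4)
  2. julia> BLAS.get_num_threads()
  3. 4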

source

LinearAlgebra.BLAS.get_num_threads — Function

  1. get_num_threads()

Get the number of threads the BLAS library is using.

Julia 1.6

get_num_threads requires at least Julia 1.6.

source

LAPACK functions

LinearAlgebra.LAPACK provides wrappers for some of the LAPACK functions for linear algebra. Those functions that overwrite one of the input arrays have names ending in '!'.

Usually a function has 4 methods defined, one each for Float64, Float32, ComplexF64 and ComplexF32 arrays.

Note that the LAPACK API provided by Julia can and will change in the future. Since this API is not user-facing, there is no commitment to support/deprecate this specific set of functions in future releases.

LinearAlgebra.LAPACK — Module

Interfaces to LAPACK subroutines.

source

LinearAlgebra.LAPACK.gbtrf! — Function

  1. gbtrf!(kl, ku, m, AB) -> (AB, ipiv)

Compute the LU factorization of a banded matrix AB. kl is the first subdiagonal containing a nonzero band, ku is the last superdiagonal containing one, and m is the first dimension of the matrix AB. Returns the LU factorization in-place and ipiv, the vector of pivots used.

source

LinearAlgebra.LAPACK.gbtrs! — Function

  1. gbtrs!(trans, kl, ku, m, AB, ipiv, B)

Solve the equation AB * X = B. trans determines the orientation of AB. It may be N (no transpose), T (transpose), or C (conjugate transpose). kl is the first subdiagonal containing a nonzero band, ku is the last superdiagonal containing one, and m is the first dimension of the matrix AB. ipiv is the vector of pivots returned from gbtrf!. Returns the vector or matrix X, overwriting B in-place.

source

LinearAlgebra.LAPACK.gebal! — Function

  1. gebal!(job, A) -> (ilo, ihi, scale)

Balance the matrix A before computing its eigensystem or Schur factorization. job can be one of N (A will not be permuted or scaled), P (A will only be permuted), S (A will only be scaled), or B (A will be both permuted and scaled). Modifies A in-place and returns ilo, ihi, and scale. If permuting was turned on, A[i,j] = 0 if j > i and 1 < j < ilo or j > ihi. scale contains information about the scaling/permutations performed.

source

LinearAlgebra.LAPACK.gebak! — Function

  1. gebak!(job, side, ilo, ihi, scale, V)

Transform the eigenvectors V of a matrix balanced using gebal! to the unscaled/unpermuted eigenvectors of the original matrix. Modifies V in-place. side can be L (left eigenvectors are transformed) or R (right eigenvectors are transformed).

source

LinearAlgebra.LAPACK.gebrd! — Function

  1. gebrd!(A) -> (A, d, e, tauq, taup)

Reduce A in-place to bidiagonal form A = QBP'. Returns A, containing the bidiagonal matrix B; d, containing the diagonal elements of B; e, containing the off-diagonal elements of B; tauq, containing the elementary reflectors representing Q; and taup, containing the elementary reflectors representing P.

source

LinearAlgebra.LAPACK.gelqf! — Function

  1. gelqf!(A, tau)

Compute the LQ factorization of A, A = LQ. tau contains scalars which parameterize the elementary reflectors of the factorization. tau must have length greater than or equal to the smallest dimension of A.

Returns A and tau modified in-place.

source

  1. gelqf!(A) -> (A, tau)

Compute the LQ factorization of A, A = LQ.

Returns A, modified in-place, and tau, which contains scalars which parameterize the elementary reflectors of the factorization.

source

LinearAlgebra.LAPACK.geqlf! — Function

  1. geqlf!(A, tau)

Compute the QL factorization of A, A = QL. tau contains scalars which parameterize the elementary reflectors of the factorization. tau must have length greater than or equal to the smallest dimension of A.

Returns A and tau modified in-place.

source

  1. geqlf!(A) -> (A, tau)

Compute the QL factorization of A, A = QL.

Returns A, modified in-place, and tau, which contains scalars which parameterize the elementary reflectors of the factorization.

source

LinearAlgebra.LAPACK.geqrf! — Function

  1. geqrf!(A, tau)

Compute the QR factorization of A, A = QR. tau contains scalars which parameterize the elementary reflectors of the factorization. tau must have length greater than or equal to the smallest dimension of A.

Returns A and tau modified in-place.

source

  1. geqrf!(A) -> (A, tau)

Compute the QR factorization of A, A = QR.

Returns A, modified in-place, and tau, which contains scalars which parameterize the elementary reflectors of the factorization.

source

LinearAlgebra.LAPACK.geqp3! — Function

  1. geqp3!(A, [jpvt, tau]) -> (A, tau, jpvt)

Compute the pivoted QR factorization of A, AP = QR using BLAS level 3. P is a pivoting matrix, represented by jpvt. tau stores the elementary reflectors. The arguments jpvt and tau are optional and allow for passing preallocated arrays. When passed, jpvt must have length greater than or equal to n if A is an (m x n) matrix and tau must have length greater than or equal to the smallest dimension of A.

A, jpvt, and tau are modified in-place.

source

LinearAlgebra.LAPACK.gerqf! — Function

  1. gerqf!(A, tau)

Compute the RQ factorization of A, A = RQ. tau contains scalars which parameterize the elementary reflectors of the factorization. tau must have length greater than or equal to the smallest dimension of A.

Returns A and tau modified in-place.

source

  1. gerqf!(A) -> (A, tau)

Compute the RQ factorization of A, A = RQ.

Returns A, modified in-place, and tau, which contains scalars which parameterize the elementary reflectors of the factorization.

source

LinearAlgebra.LAPACK.geqrt! — Function

  1. geqrt!(A, T)

Compute the blocked QR factorization of A, A = QR. T contains upper triangular block reflectors which parameterize the elementary reflectors of the factorization. The first dimension of T sets the block size and it must be between 1 and n. The second dimension of T must equal the smallest dimension of A.

Returns A and T modified in-place.

source

  1. geqrt!(A, nb) -> (A, T)

Compute the blocked QR factorization of A, A = QR. nb sets the block size and it must be between 1 and n, the second dimension of A.

Returns A, modified in-place, and T, which contains upper triangular block reflectors which parameterize the elementary reflectors of the factorization.

source

LinearAlgebra.LAPACK.geqrt3! — Function

  1. geqrt3!(A, T)

Recursively computes the blocked QR factorization of A, A = QR. T contains upper triangular block reflectors which parameterize the elementary reflectors of the factorization. The first dimension of T sets the block size and it must be between 1 and n. The second dimension of T must equal the smallest dimension of A.

Returns A and T modified in-place.

source

  1. geqrt3!(A) -> (A, T)

Recursively computes the blocked QR factorization of A, A = QR.

Returns A, modified in-place, and T, which contains upper triangular block reflectors which parameterize the elementary reflectors of the factorization.

source

LinearAlgebra.LAPACK.getrf! — Function

  1. getrf!(A) -> (A, ipiv, info)

Compute the pivoted LU factorization of A, A = LU.

Returns A, modified in-place, ipiv, the pivoting information, and an info code which indicates success (info = 0), a singular value in U (info = i, in which case U[i,i] is singular), or an error code (info < 0).

source

LinearAlgebra.LAPACK.tzrzf! — Function

  1. tzrzf!(A) -> (A, tau)

Transforms the upper trapezoidal matrix A to upper triangular form in-place. Returns A and tau, the scalar parameters for the elementary reflectors of the transformation.

source

LinearAlgebra.LAPACK.ormrz! — Function

  1. ormrz!(side, trans, A, tau, C)

Multiplies the matrix C by Q from the transformation supplied by tzrzf!. Depending on side or trans the multiplication can be left-sided (side = L, Q*C) or right-sided (side = R, C*Q) and Q can be unmodified (trans = N), transposed (trans = T), or conjugate transposed (trans = C). Returns matrix C which is modified in-place with the result of the multiplication.

source

LinearAlgebra.LAPACK.gels! — Function

  1. gels!(trans, A, B) -> (F, B, ssr)

Solves the linear equation A * X = B, transpose(A) * X = B, or adjoint(A) * X = B using a QR or LQ factorization. Modifies the matrix/vector B in place with the solution. A is overwritten with its QR or LQ factorization. trans may be one of N (no modification), T (transpose), or C (conjugate transpose). gels! searches for the minimum norm/least squares solution. A may be under- or over-determined. The solution is returned in B.

source

LinearAlgebra.LAPACK.gesv! — Function

  1. gesv!(A, B) -> (B, A, ipiv)

Solves the linear equation A * X = B where A is a square matrix using the LU factorization of A. A is overwritten with its LU factorization and B is overwritten with the solution X. ipiv contains the pivoting information for the LU factorization of A.
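
A minimal sketch (not from the docstring; the matrix is ad hoc):

  1. julia> A = [2.0 0.0; 0.0 4.0]; B = [2.0, 8.0];
  2. julia> LinearAlgebra.LAPACK.gesv!(A, B); # A now holds the LU factors, B the solution
  3. julia> B
  4. 2-element Vector{Float64}:
  5. 1.0
  6. 2.0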

source

LinearAlgebra.LAPACK.getrs! — Function

  1. getrs!(trans, A, ipiv, B)

Solves the linear equation A * X = B, transpose(A) * X = B, or adjoint(A) * X = B for square A. Modifies the matrix/vector B in place with the solution. A is the LU factorization from getrf!, with ipiv the pivoting information. trans may be one of N (no modification), T (transpose), or C (conjugate transpose).
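
A minimal sketch (not from the docstring) chaining getrf! and getrs!; the result matches the ordinary backslash solve.

  1. julia> A = [1.0 2.0; 3.0 4.0];
  2. julia> Af, ipiv, info = LinearAlgebra.LAPACK.getrf!(copy(A));
  3. julia> b = [5.0, 11.0];
  4. julia> LinearAlgebra.LAPACK.getrs!('N', Af, ipiv, b) ≈ A \ [5.0, 11.0]
  5. true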

source

LinearAlgebra.LAPACK.getri! — Function

  1. getri!(A, ipiv)

Computes the inverse of A, using its LU factorization found by getrf!. ipiv is the pivot information output and A contains the LU factorization of getrf!. A is overwritten with its inverse.

source

LinearAlgebra.LAPACK.gesvx! — Function

  1. gesvx!(fact, trans, A, AF, ipiv, equed, R, C, B) -> (X, equed, R, C, B, rcond, ferr, berr, work)

Solves the linear equation A * X = B (trans = N), transpose(A) * X = B (trans = T), or adjoint(A) * X = B (trans = C) using the LU factorization of A. fact may be E, in which case A will be equilibrated and copied to AF; F, in which case AF and ipiv from a previous LU factorization are inputs; or N, in which case A will be copied to AF and then factored. If fact = F, equed may be N, meaning A has not been equilibrated; R, meaning A was multiplied by Diagonal(R) from the left; C, meaning A was multiplied by Diagonal(C) from the right; or B, meaning A was multiplied by Diagonal(R) from the left and Diagonal(C) from the right. If fact = F and equed = R or B the elements of R must all be positive. If fact = F and equed = C or B the elements of C must all be positive.

Returns the solution X; equed, which is an output if fact is not N, and describes the equilibration that was performed; R, the row equilibration diagonal; C, the column equilibration diagonal; B, which may be overwritten with its equilibrated form Diagonal(R)*B (if trans = N and equed = R,B) or Diagonal(C)*B (if trans = T,C and equed = C,B); rcond, the reciprocal condition number of A after equilibrating; ferr, the forward error bound for each solution vector in X; berr, the componentwise relative backward error for each solution vector in X; and work, the reciprocal pivot growth factor.

source

  1. gesvx!(A, B)

The no-equilibration, no-transpose simplification of gesvx!.

source

LinearAlgebra.LAPACK.gelsd! — Function

  1. gelsd!(A, B, rcond) -> (B, rnk)

Computes the least norm solution of A * X = B by finding the SVD factorization of A, then dividing-and-conquering the problem. B is overwritten with the solution X. Singular values below rcond will be treated as zero. Returns the solution in B and the effective rank of A in rnk.

source

LinearAlgebra.LAPACK.gelsy! — Function

  1. gelsy!(A, B, rcond) -> (B, rnk)

Computes the least norm solution of A * X = B by finding the full QR factorization of A, then dividing-and-conquering the problem. B is overwritten with the solution X. Singular values below rcond will be treated as zero. Returns the solution in B and the effective rank of A in rnk.

source

LinearAlgebra.LAPACK.gglse! — Function

  1. gglse!(A, c, B, d) -> (X,res)

Solves the equation A * x = c, where x is subject to the equality constraint B * x = d, by minimizing ||c - A*x||^2. Returns X and the residual sum-of-squares.

source

LinearAlgebra.LAPACK.geev! — Function

  1. geev!(jobvl, jobvr, A) -> (W, VL, VR)

Finds the eigensystem of A. If jobvl = N, the left eigenvectors of A aren’t computed. If jobvr = N, the right eigenvectors of A aren’t computed. If jobvl = V or jobvr = V, the corresponding eigenvectors are computed. Returns the eigenvalues in W, the right eigenvectors in VR, and the left eigenvectors in VL.

source

LinearAlgebra.LAPACK.gesdd! — Function

  1. gesdd!(job, A) -> (U, S, VT)

Finds the singular value decomposition of A, A = U * S * V', using a divide and conquer approach. If job = A, all the columns of U and the rows of V' are computed. If job = N, no columns of U or rows of V' are computed. If job = O, A is overwritten with the columns of (thin) U and the rows of (thin) V'. If job = S, the columns of (thin) U and the rows of (thin) V' are computed and returned separately.

source

LinearAlgebra.LAPACK.gesvd! — Function

  1. gesvd!(jobu, jobvt, A) -> (U, S, VT)

Finds the singular value decomposition of A, A = U * S * V'. If jobu = A, all the columns of U are computed. If jobvt = A all the rows of V' are computed. If jobu = N, no columns of U are computed. If jobvt = N no rows of V' are computed. If jobu = O, A is overwritten with the columns of (thin) U. If jobvt = O, A is overwritten with the rows of (thin) V'. If jobu = S, the columns of (thin) U are computed and returned separately. If jobvt = S the rows of (thin) V' are computed and returned separately. jobu and jobvt can’t both be O.

Returns U, S, and Vt, where S are the singular values of A.

source

LinearAlgebra.LAPACK.ggsvd! — Function

  1. ggsvd!(jobu, jobv, jobq, A, B) -> (U, V, Q, alpha, beta, k, l, R)

Finds the generalized singular value decomposition of A and B, U'*A*Q = D1*R and V'*B*Q = D2*R. D1 has alpha on its diagonal and D2 has beta on its diagonal. If jobu = U, the orthogonal/unitary matrix U is computed. If jobv = V the orthogonal/unitary matrix V is computed. If jobq = Q, the orthogonal/unitary matrix Q is computed. If jobu, jobv or jobq is N, that matrix is not computed. This function is only available in LAPACK versions prior to 3.6.0.

source

LinearAlgebra.LAPACK.ggsvd3! — Function

  1. ggsvd3!(jobu, jobv, jobq, A, B) -> (U, V, Q, alpha, beta, k, l, R)

Finds the generalized singular value decomposition of A and B, U'*A*Q = D1*R and V'*B*Q = D2*R. D1 has alpha on its diagonal and D2 has beta on its diagonal. If jobu = U, the orthogonal/unitary matrix U is computed. If jobv = V the orthogonal/unitary matrix V is computed. If jobq = Q, the orthogonal/unitary matrix Q is computed. If jobu, jobv, or jobq is N, that matrix is not computed. This function requires LAPACK 3.6.0.

source

LinearAlgebra.LAPACK.geevx! — Function

  1. geevx!(balanc, jobvl, jobvr, sense, A) -> (A, w, VL, VR, ilo, ihi, scale, abnrm, rconde, rcondv)

Finds the eigensystem of A with matrix balancing. If jobvl = N, the left eigenvectors of A aren’t computed. If jobvr = N, the right eigenvectors of A aren’t computed. If jobvl = V or jobvr = V, the corresponding eigenvectors are computed. If balanc = N, no balancing is performed. If balanc = P, A is permuted but not scaled. If balanc = S, A is scaled but not permuted. If balanc = B, A is permuted and scaled. If sense = N, no reciprocal condition numbers are computed. If sense = E, reciprocal condition numbers are computed for the eigenvalues only. If sense = V, reciprocal condition numbers are computed for the right eigenvectors only. If sense = B, reciprocal condition numbers are computed for the eigenvalues and the right eigenvectors. If sense = E or B, the right and left eigenvectors must be computed.

source

LinearAlgebra.LAPACK.ggev! — Function

ggev!(jobvl, jobvr, A, B) -> (alpha, beta, vl, vr)

Finds the generalized eigendecomposition of A and B. If jobvl = N, the left eigenvectors aren’t computed. If jobvr = N, the right eigenvectors aren’t computed. If jobvl = V or jobvr = V, the corresponding eigenvectors are computed.

source

LinearAlgebra.LAPACK.gtsv! — Function

gtsv!(dl, d, du, B)

Solves the equation A * X = B where A is a tridiagonal matrix with dl on the subdiagonal, d on the diagonal, and du on the superdiagonal.

Overwrites B with the solution X and returns it.
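
For example, with a small made-up tridiagonal system (all four arguments are overwritten, hence the copies):

  using LinearAlgebra

  dl = [1.0, 1.0]          # subdiagonal
  d  = [4.0, 4.0, 4.0]     # diagonal
  du = [2.0, 2.0]          # superdiagonal
  T  = Tridiagonal(dl, d, du)
  b  = [1.0, 2.0, 3.0]
  x = LAPACK.gtsv!(copy(dl), copy(d), copy(du), copy(b))
  @assert T * x ≈ b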

source

LinearAlgebra.LAPACK.gttrf! — Function

gttrf!(dl, d, du) -> (dl, d, du, du2, ipiv)

Finds the LU factorization of a tridiagonal matrix with dl on the subdiagonal, d on the diagonal, and du on the superdiagonal.

Modifies dl, d, and du in-place and returns them and the second superdiagonal du2 and the pivoting vector ipiv.

source

LinearAlgebra.LAPACK.gttrs! — Function

gttrs!(trans, dl, d, du, du2, ipiv, B)

Solves the equation A * X = B (trans = N), transpose(A) * X = B (trans = T), or adjoint(A) * X = B (trans = C) using the LU factorization computed by gttrf!. B is overwritten with the solution X.
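
A sketch pairing gttrf! and gttrs!: factorize once, then solve (made-up data):

  using LinearAlgebra

  dl = [1.0, 1.0]; d = [4.0, 4.0, 4.0]; du = [2.0, 2.0]
  T = Tridiagonal(copy(dl), copy(d), copy(du))
  # factorize in place, then reuse the factorization for the solve
  dlf, df, duf, du2, ipiv = LAPACK.gttrf!(dl, d, du)
  b = T * [1.0, 2.0, 3.0]
  x = LAPACK.gttrs!('N', dlf, df, duf, du2, ipiv, b)
  @assert x ≈ [1.0, 2.0, 3.0]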

source

LinearAlgebra.LAPACK.orglq! — Function

orglq!(A, tau, k = length(tau))

Explicitly finds the matrix Q of an LQ factorization after calling gelqf! on A. Uses the output of gelqf!. A is overwritten by Q.

source

LinearAlgebra.LAPACK.orgqr! — Function

orgqr!(A, tau, k = length(tau))

Explicitly finds the matrix Q of a QR factorization after calling geqrf! on A. Uses the output of geqrf!. A is overwritten by Q.
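
For example, pairing geqrf! and orgqr! to materialize the thin Q factor of a made-up 3×2 matrix:

  using LinearAlgebra

  A = [1.0 2.0; 3.0 4.0; 5.0 6.0]
  F, tau = LAPACK.geqrf!(copy(A))     # R in the upper triangle, reflectors below
  R = triu(F[1:2, 1:2])
  Q = LAPACK.orgqr!(F, tau)           # expand the reflectors into the 3×2 Q
  @assert Q * R ≈ A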

source

LinearAlgebra.LAPACK.orgql! — Function

orgql!(A, tau, k = length(tau))

Explicitly finds the matrix Q of a QL factorization after calling geqlf! on A. Uses the output of geqlf!. A is overwritten by Q.

source

LinearAlgebra.LAPACK.orgrq! — Function

orgrq!(A, tau, k = length(tau))

Explicitly finds the matrix Q of an RQ factorization after calling gerqf! on A. Uses the output of gerqf!. A is overwritten by Q.

source

LinearAlgebra.LAPACK.ormlq! — Function

ormlq!(side, trans, A, tau, C)

Computes Q * C (trans = N), transpose(Q) * C (trans = T), adjoint(Q) * C (trans = C) for side = L or the equivalent right-sided multiplication for side = R using Q from an LQ factorization of A computed using gelqf!. C is overwritten.

source

LinearAlgebra.LAPACK.ormqr! — Function

ormqr!(side, trans, A, tau, C)

Computes Q * C (trans = N), transpose(Q) * C (trans = T), adjoint(Q) * C (trans = C) for side = L or the equivalent right-sided multiplication for side = R using Q from a QR factorization of A computed using geqrf!. C is overwritten.
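
For example, applying Q' to a vector without ever forming Q (orgqr! is used here only to check the result; made-up data):

  using LinearAlgebra

  A = [1.0 2.0; 3.0 4.0; 5.0 6.0]
  F, tau = LAPACK.geqrf!(copy(A))
  c = [1.0, 1.0, 1.0]
  # compute Q' * c in place; Q itself is never formed
  Qtc = LAPACK.ormqr!('L', 'T', F, tau, copy(c))
  Q = LAPACK.orgqr!(copy(F), tau)          # thin Q, for checking only
  @assert Qtc[1:2] ≈ Q' * c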

source

LinearAlgebra.LAPACK.ormql! — Function

ormql!(side, trans, A, tau, C)

Computes Q * C (trans = N), transpose(Q) * C (trans = T), adjoint(Q) * C (trans = C) for side = L or the equivalent right-sided multiplication for side = R using Q from a QL factorization of A computed using geqlf!. C is overwritten.

source

LinearAlgebra.LAPACK.ormrq! — Function

ormrq!(side, trans, A, tau, C)

Computes Q * C (trans = N), transpose(Q) * C (trans = T), adjoint(Q) * C (trans = C) for side = L or the equivalent right-sided multiplication for side = R using Q from an RQ factorization of A computed using gerqf!. C is overwritten.

source

LinearAlgebra.LAPACK.gemqrt! — Function

gemqrt!(side, trans, V, T, C)

Computes Q * C (trans = N), transpose(Q) * C (trans = T), adjoint(Q) * C (trans = C) for side = L or the equivalent right-sided multiplication for side = R using Q from a QR factorization of A computed using geqrt!. C is overwritten.

source

LinearAlgebra.LAPACK.posv! — Function

posv!(uplo, A, B) -> (A, B)

Finds the solution to A * X = B where A is a symmetric or Hermitian positive definite matrix. If uplo = U the upper Cholesky decomposition of A is computed. If uplo = L the lower Cholesky decomposition of A is computed. A is overwritten by its Cholesky decomposition. B is overwritten with the solution X.
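
For example, with a small made-up positive definite system:

  using LinearAlgebra

  A = [4.0 1.0; 1.0 3.0]               # symmetric positive definite
  b = [1.0, 2.0]
  _, x = LAPACK.posv!('U', copy(A), copy(b))
  @assert A * x ≈ b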

source

LinearAlgebra.LAPACK.potrf! — Function

potrf!(uplo, A)

Computes the Cholesky (upper if uplo = U, lower if uplo = L) decomposition of positive-definite matrix A. A is overwritten and returned with an info code.

source

LinearAlgebra.LAPACK.potri! — Function

potri!(uplo, A)

Computes the inverse of positive-definite matrix A after calling potrf! to find its (upper if uplo = U, lower if uplo = L) Cholesky decomposition.

A is overwritten by its inverse and returned.

source

LinearAlgebra.LAPACK.potrs! — Function

potrs!(uplo, A, B)

Finds the solution to A * X = B where A is a symmetric or Hermitian positive definite matrix whose Cholesky decomposition was computed by potrf!. If uplo = U the upper Cholesky decomposition of A was computed. If uplo = L the lower Cholesky decomposition of A was computed. B is overwritten with the solution X.
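
A sketch of the potrf!/potrs! pair: factorize once, then solve (here against the identity, i.e. X = inv(A); made-up data):

  using LinearAlgebra

  A = [4.0 1.0; 1.0 3.0]
  Af, info = LAPACK.potrf!('U', copy(A))   # upper Cholesky factor in Af
  @assert info == 0                        # 0 means the factorization succeeded
  B = [1.0 0.0; 0.0 1.0]
  X = LAPACK.potrs!('U', Af, copy(B))      # reuses the factorization
  @assert A * X ≈ B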

source

LinearAlgebra.LAPACK.pstrf! — Function

pstrf!(uplo, A, tol) -> (A, piv, rank, info)

Computes the (upper if uplo = U, lower if uplo = L) pivoted Cholesky decomposition of positive-definite matrix A with a user-set tolerance tol. A is overwritten by its Cholesky decomposition.

Returns A, the pivots piv, the rank of A, and an info code. If info = 0, the factorization succeeded. If info = i > 0, then A is indefinite or rank-deficient.

source

LinearAlgebra.LAPACK.ptsv! — Function

ptsv!(D, E, B)

Solves A * X = B for positive-definite tridiagonal A. D is the diagonal of A and E is the off-diagonal. B is overwritten with the solution X and returned.
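
For example (D and E are overwritten, so the checking matrix is built from copies):

  using LinearAlgebra

  D = [4.0, 4.0, 4.0]                  # diagonal
  E = [1.0, 1.0]                       # off-diagonal
  T = SymTridiagonal(copy(D), copy(E))
  b = [1.0, 2.0, 3.0]
  x = LAPACK.ptsv!(D, E, copy(b))
  @assert T * x ≈ b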

source

LinearAlgebra.LAPACK.pttrf! — Function

pttrf!(D, E)

Computes the LDLt factorization of a positive-definite tridiagonal matrix with D as diagonal and E as off-diagonal. D and E are overwritten and returned.

source

LinearAlgebra.LAPACK.pttrs! — Function

pttrs!(D, E, B)

Solves A * X = B for positive-definite tridiagonal A with diagonal D and off-diagonal E after computing A’s LDLt factorization using pttrf!. B is overwritten with the solution X.

source

LinearAlgebra.LAPACK.trtri! — Function

trtri!(uplo, diag, A)

Finds the inverse of (upper if uplo = U, lower if uplo = L) triangular matrix A. If diag = N, A has non-unit diagonal elements. If diag = U, all diagonal elements of A are one. A is overwritten with its inverse.

source

LinearAlgebra.LAPACK.trtrs! — Function

trtrs!(uplo, trans, diag, A, B)

Solves A * X = B (trans = N), transpose(A) * X = B (trans = T), or adjoint(A) * X = B (trans = C) for (upper if uplo = U, lower if uplo = L) triangular matrix A. If diag = N, A has non-unit diagonal elements. If diag = U, all diagonal elements of A are one. B is overwritten with the solution X.
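
For example, solving against a made-up upper triangular matrix with non-unit diagonal:

  using LinearAlgebra

  A = [2.0 1.0; 0.0 3.0]
  b = [3.0, 6.0]
  x = LAPACK.trtrs!('U', 'N', 'N', A, copy(b))
  @assert A * x ≈ b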

source

LinearAlgebra.LAPACK.trcon! — Function

trcon!(norm, uplo, diag, A)

Finds the reciprocal condition number of (upper if uplo = U, lower if uplo = L) triangular matrix A. If diag = N, A has non-unit diagonal elements. If diag = U, all diagonal elements of A are one. If norm = I, the condition number is found in the infinity norm. If norm = O or 1, the condition number is found in the one norm.

source

LinearAlgebra.LAPACK.trevc! — Function

trevc!(side, howmny, select, T, VL = similar(T), VR = similar(T))

Finds the eigensystem of an upper triangular matrix T. If side = R, the right eigenvectors are computed. If side = L, the left eigenvectors are computed. If side = B, both sets are computed. If howmny = A, all eigenvectors are found. If howmny = B, all eigenvectors are found and backtransformed using VL and VR. If howmny = S, only the eigenvectors corresponding to the values in select are computed.

source

LinearAlgebra.LAPACK.trrfs! — Function

trrfs!(uplo, trans, diag, A, B, X, Ferr, Berr) -> (Ferr, Berr)

Estimates the error in the solution to A * X = B (trans = N), transpose(A) * X = B (trans = T), or adjoint(A) * X = B (trans = C) after computing X using trtrs!. If uplo = U, A is upper triangular. If uplo = L, A is lower triangular. If diag = N, A has non-unit diagonal elements. If diag = U, all diagonal elements of A are one. Ferr and Berr are optional inputs. Ferr is the component-wise forward error and Berr is the component-wise backward error.

source

LinearAlgebra.LAPACK.stev! — Function

stev!(job, dv, ev) -> (dv, Zmat)

Computes the eigensystem for a symmetric tridiagonal matrix with dv as diagonal and ev as off-diagonal. If job = N only the eigenvalues are found and returned in dv. If job = V then the eigenvectors are also found and returned in Zmat.
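
For example (made-up data; the columns of Zmat are the eigenvectors):

  using LinearAlgebra

  dv = [2.0, 2.0, 2.0]                 # diagonal
  ev = [1.0, 1.0]                      # off-diagonal
  w, Z = LAPACK.stev!('V', copy(dv), copy(ev))
  T = SymTridiagonal([2.0, 2.0, 2.0], [1.0, 1.0])
  @assert T * Z ≈ Z * Diagonal(w)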

source

LinearAlgebra.LAPACK.stebz! — Function

stebz!(range, order, vl, vu, il, iu, abstol, dv, ev) -> (dv, iblock, isplit)

Computes the eigenvalues for a symmetric tridiagonal matrix with dv as diagonal and ev as off-diagonal. If range = A, all the eigenvalues are found. If range = V, the eigenvalues in the half-open interval (vl, vu] are found. If range = I, the eigenvalues with indices between il and iu are found. If order = B, eigenvalues are ordered within a block. If order = E, they are ordered across all the blocks. abstol can be set as a tolerance for convergence.

source

LinearAlgebra.LAPACK.stegr! — Function

stegr!(jobz, range, dv, ev, vl, vu, il, iu) -> (w, Z)

Computes the eigenvalues (jobz = N) or eigenvalues and eigenvectors (jobz = V) for a symmetric tridiagonal matrix with dv as diagonal and ev as off-diagonal. If range = A, all the eigenvalues are found. If range = V, the eigenvalues in the half-open interval (vl, vu] are found. If range = I, the eigenvalues with indices between il and iu are found. The eigenvalues are returned in w and the eigenvectors in Z.

source

LinearAlgebra.LAPACK.stein! — Function

stein!(dv, ev_in, w_in, iblock_in, isplit_in)

Computes the eigenvectors for a symmetric tridiagonal matrix with dv as diagonal and ev_in as off-diagonal. w_in specifies the input eigenvalues for which to find corresponding eigenvectors. iblock_in specifies the submatrices corresponding to the eigenvalues in w_in. isplit_in specifies the splitting points between the submatrix blocks.

source

LinearAlgebra.LAPACK.syconv! — Function

syconv!(uplo, A, ipiv) -> (A, work)

Converts a symmetric matrix A (which has been factorized into a triangular matrix) into two matrices L and D. If uplo = U, A is upper triangular. If uplo = L, it is lower triangular. ipiv is the pivot vector from the triangular factorization. A is overwritten by L and D.

source

LinearAlgebra.LAPACK.sysv! — Function

sysv!(uplo, A, B) -> (B, A, ipiv)

Finds the solution to A * X = B for symmetric matrix A. If uplo = U, the upper half of A is stored. If uplo = L, the lower half is stored. B is overwritten by the solution X. A is overwritten by its Bunch-Kaufman factorization. ipiv contains pivoting information about the factorization.
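
For example, with a made-up symmetric indefinite system:

  using LinearAlgebra

  A = [1.5 2.0; 2.0 -1.0]              # symmetric but indefinite
  b = [1.0, 2.0]
  x, _, _ = LAPACK.sysv!('U', copy(A), copy(b))
  @assert A * x ≈ b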

source

LinearAlgebra.LAPACK.sytrf! — Function

sytrf!(uplo, A) -> (A, ipiv, info)

Computes the Bunch-Kaufman factorization of a symmetric matrix A. If uplo = U, the upper half of A is stored. If uplo = L, the lower half is stored.

Returns A, overwritten by the factorization, a pivot vector ipiv, and the error code info which is a non-negative integer. If info is positive the matrix is singular and the diagonal part of the factorization is exactly zero at position info.

source

LinearAlgebra.LAPACK.sytri! — Function

sytri!(uplo, A, ipiv)

Computes the inverse of a symmetric matrix A using the results of sytrf!. If uplo = U, the upper half of A is stored. If uplo = L, the lower half is stored. A is overwritten by its inverse.

source

LinearAlgebra.LAPACK.sytrs! — Function

sytrs!(uplo, A, ipiv, B)

Solves the equation A * X = B for a symmetric matrix A using the results of sytrf!. If uplo = U, the upper half of A is stored. If uplo = L, the lower half is stored. B is overwritten by the solution X.

source

LinearAlgebra.LAPACK.hesv! — Function

hesv!(uplo, A, B) -> (B, A, ipiv)

Finds the solution to A * X = B for Hermitian matrix A. If uplo = U, the upper half of A is stored. If uplo = L, the lower half is stored. B is overwritten by the solution X. A is overwritten by its Bunch-Kaufman factorization. ipiv contains pivoting information about the factorization.

source

LinearAlgebra.LAPACK.hetrf! — Function

hetrf!(uplo, A) -> (A, ipiv, info)

Computes the Bunch-Kaufman factorization of a Hermitian matrix A. If uplo = U, the upper half of A is stored. If uplo = L, the lower half is stored.

Returns A, overwritten by the factorization, a pivot vector ipiv, and the error code info which is a non-negative integer. If info is positive the matrix is singular and the diagonal part of the factorization is exactly zero at position info.

source

LinearAlgebra.LAPACK.hetri! — Function

hetri!(uplo, A, ipiv)

Computes the inverse of a Hermitian matrix A using the results of hetrf!. If uplo = U, the upper half of A is stored. If uplo = L, the lower half is stored. A is overwritten by its inverse.

source

LinearAlgebra.LAPACK.hetrs! — Function

hetrs!(uplo, A, ipiv, B)

Solves the equation A * X = B for a Hermitian matrix A using the results of hetrf!. If uplo = U, the upper half of A is stored. If uplo = L, the lower half is stored. B is overwritten by the solution X.

source

LinearAlgebra.LAPACK.syev! — Function

syev!(jobz, uplo, A)

Finds the eigenvalues (jobz = N) or eigenvalues and eigenvectors (jobz = V) of a symmetric matrix A. If uplo = U, the upper triangle of A is used. If uplo = L, the lower triangle of A is used.
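
A sketch, assuming (as in the current LinearAlgebra sources) that jobz = V returns the eigenvalues together with A overwritten by the eigenvectors:

  using LinearAlgebra

  A = [2.0 1.0; 1.0 3.0]
  # assumption: syev! returns (w, V) when jobz = 'V'
  w, V = LAPACK.syev!('V', 'U', copy(A))
  @assert A * V ≈ V * Diagonal(w)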

source

LinearAlgebra.LAPACK.syevr! — Function

syevr!(jobz, range, uplo, A, vl, vu, il, iu, abstol) -> (W, Z)

Finds the eigenvalues (jobz = N) or eigenvalues and eigenvectors (jobz = V) of a symmetric matrix A. If uplo = U, the upper triangle of A is used. If uplo = L, the lower triangle of A is used. If range = A, all the eigenvalues are found. If range = V, the eigenvalues in the half-open interval (vl, vu] are found. If range = I, the eigenvalues with indices between il and iu are found. abstol can be set as a tolerance for convergence.

The eigenvalues are returned in W and the eigenvectors in Z.

source

LinearAlgebra.LAPACK.sygvd! — Function

sygvd!(itype, jobz, uplo, A, B) -> (w, A, B)

Finds the generalized eigenvalues (jobz = N) or eigenvalues and eigenvectors (jobz = V) of a symmetric matrix A and symmetric positive-definite matrix B. If uplo = U, the upper triangles of A and B are used. If uplo = L, the lower triangles of A and B are used. If itype = 1, the problem to solve is A * x = lambda * B * x. If itype = 2, the problem to solve is A * B * x = lambda * x. If itype = 3, the problem to solve is B * A * x = lambda * x.

source

LinearAlgebra.LAPACK.bdsqr! — Function

bdsqr!(uplo, d, e_, Vt, U, C) -> (d, Vt, U, C)

Computes the singular value decomposition of a bidiagonal matrix with d on the diagonal and e_ on the off-diagonal. If uplo = U, e_ is the superdiagonal. If uplo = L, e_ is the subdiagonal. Can optionally also compute the product Q' * C.

Returns the singular values in d, and the matrix C overwritten with Q' * C.

source

LinearAlgebra.LAPACK.bdsdc! — Function

bdsdc!(uplo, compq, d, e_) -> (d, e, u, vt, q, iq)

Computes the singular value decomposition of a bidiagonal matrix with d on the diagonal and e_ on the off-diagonal using a divide and conquer method. If uplo = U, e_ is the superdiagonal. If uplo = L, e_ is the subdiagonal. If compq = N, only the singular values are found. If compq = I, the singular values and vectors are found. If compq = P, the singular values and vectors are found in compact form. Only works for real types.

Returns the singular values in d, and if compq = P, the compact singular vectors in iq.

source

LinearAlgebra.LAPACK.gecon! — Function

gecon!(normtype, A, anorm)

Finds the reciprocal condition number of matrix A. If normtype = I, the condition number is found in the infinity norm. If normtype = O or 1, the condition number is found in the one norm. A must be the result of getrf! and anorm is the norm of A in the relevant norm.
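
For example (the norm of the unfactorized A must be computed before getrf! overwrites it; made-up data):

  using LinearAlgebra

  A = [4.0 2.0; 1.0 3.0]
  anorm = opnorm(A, 1)                     # one norm of the unfactorized A
  Af, ipiv, info = LAPACK.getrf!(copy(A))  # LU factorization
  rcond = LAPACK.gecon!('1', Af, anorm)    # estimate in the one norm
  @assert 0 < rcond <= 1                   # 1 / rcond estimates cond(A, 1)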

source

LinearAlgebra.LAPACK.gehrd! — Function

gehrd!(ilo, ihi, A) -> (A, tau)

Converts a matrix A to Hessenberg form. If A is balanced with gebal! then ilo and ihi are the outputs of gebal!. Otherwise they should be ilo = 1 and ihi = size(A,2). tau contains the elementary reflectors of the factorization.

source

LinearAlgebra.LAPACK.orghr! — Function

orghr!(ilo, ihi, A, tau)

Explicitly finds Q, the orthogonal/unitary matrix from gehrd!. ilo, ihi, A, and tau must correspond to the input/output to gehrd!.
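
A sketch pairing gehrd! and orghr! to recover both the Hessenberg matrix H and Q, so that A = Q * H * Q' (made-up data):

  using LinearAlgebra

  A = [4.0 1.0 2.0; 2.0 3.0 1.0; 1.0 2.0 5.0]
  F, tau = LAPACK.gehrd!(1, 3, copy(A))   # packed Hessenberg form
  H = triu(F, -1)                         # the Hessenberg matrix itself
  Q = LAPACK.orghr!(1, 3, F, tau)         # expand the reflectors into Q
  @assert Q * H * Q' ≈ A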

source

LinearAlgebra.LAPACK.gees! — Function

gees!(jobvs, A) -> (A, vs, w)

Computes the eigenvalues (jobvs = N) or the eigenvalues and Schur vectors (jobvs = V) of matrix A. A is overwritten by its Schur form.

Returns A, vs containing the Schur vectors, and w, containing the eigenvalues.
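
For example, with a made-up matrix whose eigenvalues are real, so the Schur form is genuinely triangular:

  using LinearAlgebra

  A = [4.0 1.0; 2.0 3.0]
  T, Z, w = LAPACK.gees!('V', copy(A))
  @assert Z * T * Z' ≈ A                 # the (real) Schur decomposition
  @assert sort(w) ≈ sort(eigvals(A))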

source

LinearAlgebra.LAPACK.gges! — Function

gges!(jobvsl, jobvsr, A, B) -> (A, B, alpha, beta, vsl, vsr)

Computes the generalized eigenvalues, generalized Schur form, left Schur vectors (jobvsl = V), or right Schur vectors (jobvsr = V) of A and B.

The generalized eigenvalues are returned in alpha and beta. The left Schur vectors are returned in vsl and the right Schur vectors are returned in vsr.

source

LinearAlgebra.LAPACK.trexc! — Function

trexc!(compq, ifst, ilst, T, Q) -> (T, Q)
trexc!(ifst, ilst, T, Q) -> (T, Q)

Reorder the Schur factorization T of a matrix, such that the diagonal block of T with row index ifst is moved to row index ilst. If compq = V, the Schur vectors Q are reordered. If compq = N they are not modified. The 4-arg method calls the 5-arg method with compq = V.

source

LinearAlgebra.LAPACK.trsen! — Function

trsen!(job, compq, select, T, Q) -> (T, Q, w, s, sep)
trsen!(select, T, Q) -> (T, Q, w, s, sep)

Reorder the Schur factorization of a matrix and optionally finds reciprocal condition numbers. If job = N, no condition numbers are found. If job = E, only the condition number for this cluster of eigenvalues is found. If job = V, only the condition number for the invariant subspace is found. If job = B then the condition numbers for the cluster and subspace are found. If compq = V the Schur vectors Q are updated. If compq = N the Schur vectors are not modified. select determines which eigenvalues are in the cluster. The 3-arg method calls the 5-arg method with job = N and compq = V.

Returns T, Q, reordered eigenvalues in w, the condition number of the cluster of eigenvalues s, and the condition number of the invariant subspace sep.

source

LinearAlgebra.LAPACK.tgsen! — Function

tgsen!(select, S, T, Q, Z) -> (S, T, alpha, beta, Q, Z)

Reorders the vectors of a generalized Schur decomposition. select specifies the eigenvalues in each cluster.

source

LinearAlgebra.LAPACK.trsyl! — Function

trsyl!(transa, transb, A, B, C, isgn=1) -> (C, scale)

Solves the Sylvester matrix equation A * X +/- X * B = scale*C where A and B are both quasi-upper triangular. If transa = N, A is not modified. If transa = T, A is transposed. If transa = C, A is conjugate transposed. Similarly for transb and B. If isgn = 1, the equation A * X + X * B = scale * C is solved. If isgn = -1, the equation A * X - X * B = scale * C is solved.

Returns X (overwriting C) and scale.
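
For example, solving a small made-up Sylvester equation A * X + X * B = scale * C:

  using LinearAlgebra

  A = [1.0 2.0; 0.0 3.0]                # (quasi-)upper triangular
  B = [4.0 1.0; 0.0 5.0]
  C = [1.0 0.0; 0.0 1.0]
  X, scale = LAPACK.trsyl!('N', 'N', A, B, copy(C))
  @assert A * X + X * B ≈ scale * C     # C is unchanged, since a copy was passed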

source
