
Julia linear algebra


May 14, 2021 Julia


Table of contents


Linear algebra

Matrix decomposition

Matrix decomposition (factorization) is the process of breaking a matrix down into a product of several matrices, and it is a core concept in linear algebra.

The following table summarizes the matrix factorization types implemented in Julia. For the corresponding functions, refer to the Linear Algebra chapter of the standard library documentation.

Cholesky         Cholesky factorization
CholeskyPivoted  Pivoted Cholesky factorization
LU               LU factorization
LUTridiagonal    LU factorization of tridiagonal matrices
UmfpackLU        LU factorization of sparse matrices (computed by UMFPACK)
QR               QR factorization
QRCompactWY      Compact WY form of the QR factorization
QRPivoted        Pivoted QR factorization
Hessenberg       Hessenberg decomposition
Eigen            Eigendecomposition
SVD              Singular value decomposition
GeneralizedSVD   Generalized singular value decomposition
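
As a rough sketch (not part of the original table), these factorization objects are produced by the lowercase functions of the LinearAlgebra standard library in Julia 1.x; the sparse example additionally assumes SparseArrays is available:

    using LinearAlgebra, SparseArrays

    A = [4.0 2.0; 2.0 3.0]          # symmetric positive definite

    F   = cholesky(A)               # Cholesky; F.L * F.U ≈ A
    Flu = lu(A)                     # LU with pivoting; Flu.L, Flu.U, Flu.p
    Fqr = qr(A)                     # QR; returns a QRCompactWY object
    Fe  = eigen(A)                  # eigendecomposition; Fe.values, Fe.vectors
    Fs  = svd(A)                    # SVD; Fs.U, Fs.S, Fs.Vt

    S   = sparse([4.0 1.0; 1.0 3.0])
    Fsp = lu(S)                     # sparse LU via UMFPACK (an UmfpackLU object)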

Special matrices

Matrices with special symmetries and structures arise often in linear algebra and are frequently associated with matrix factorizations. Julia has a rich set of special matrix types built in, which allow specific operations on these matrices to be performed quickly.

The following table summarizes the special matrix types in Julia; several of them have operations that are optimized in LAPACK.

Hermitian        Hermitian matrix
Triangular       Upper/lower triangular matrix
Tridiagonal      Tridiagonal matrix
SymTridiagonal   Symmetric tridiagonal matrix
Bidiagonal       Upper/lower bidiagonal matrix
Diagonal         Diagonal matrix
UniformScaling   Uniform scaling operator (a scalar times the identity)
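
A minimal construction sketch, assuming Julia 1.x, where the triangular type is split into UpperTriangular and LowerTriangular:

    using LinearAlgebra

    H  = Hermitian([1.0 2.0+im; 2.0-im 3.0])                    # Hermitian view of a matrix
    U  = UpperTriangular([1.0 2.0; 0.0 3.0])                    # upper triangular
    T  = Tridiagonal([1.0, 2.0], [3.0, 4.0, 5.0], [6.0, 7.0])   # sub-, main and super-diagonal
    St = SymTridiagonal([1.0, 2.0, 3.0], [4.0, 5.0])            # diagonal and off-diagonal
    B  = Bidiagonal([1.0, 2.0, 3.0], [4.0, 5.0], :U)            # upper bidiagonal
    D  = Diagonal([1.0, 2.0, 3.0])                              # diagonal matrix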

Basic operations

Matrix type      +  -  *    \    Other optimized functions
Hermitian                   MV   inv, sqrtm, expm
Triangular             MV   MV   inv, det
SymTridiagonal   M  M  MS   MV   eigmax, eigmin
Tridiagonal      M  M  MS   MV
Bidiagonal       M  M  MS   MV
Diagonal         M  M  MV   MV   inv, det, logdet, /
UniformScaling   M  M  MVS  MVS  /

Legend:

M   An optimized matrix-matrix operation is available
V   An optimized matrix-vector operation is available
S   An optimized matrix-scalar operation is available
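
For illustration only (not from the original article), a few of these optimized methods in use, assuming Julia 1.x:

    using LinearAlgebra

    T = SymTridiagonal([2.0, 2.0, 2.0], [-1.0, -1.0])
    b = [1.0, 0.0, 1.0]

    x = T \ b              # optimized tridiagonal solve
    eigmax(T)              # optimized largest eigenvalue for SymTridiagonal

    D = Diagonal([1.0, 2.0, 4.0])
    inv(D)                 # optimized inverse: just the reciprocals of the diagonal
    logdet(D)              # optimized log-determinant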

Matrix decomposition

Matrix type      LAPACK  eig  eigvals  eigvecs  svd  svdvals
Hermitian        HE            ABC
Triangular       TR
SymTridiagonal   ST      A    ABC      AD
Tridiagonal      GT
Bidiagonal       BD                             A    A
Diagonal         DI           A

Legend:

A   An optimized method for finding all the eigenvalues and/or eigenvectors is available, e.g. eigvals(M)
B   An optimized method for finding the il-th through ih-th eigenvalues is available, e.g. eigvals(M, il, ih)
C   An optimized method for finding the eigenvalues in the interval [vl, vh] is available, e.g. eigvals(M, vl, vh)
D   An optimized method for finding the eigenvectors corresponding to the eigenvalues x = [x1, x2, ...] is available, e.g. eigvecs(M, x)
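
A short sketch of these calls for a SymTridiagonal matrix; this assumes Julia 1.x, where the index form of eigvals takes a range rather than two separate indices:

    using LinearAlgebra

    M = SymTridiagonal([2.0, 2.0, 2.0, 2.0], [-1.0, -1.0, -1.0])

    eigvals(M)             # all eigenvalues (A)
    eigvals(M, 1:2)        # the two smallest eigenvalues (B)
    eigvals(M, 0.0, 2.5)   # eigenvalues between 0.0 and 2.5 (C)

    λ = eigvals(M, 1:1)
    eigvecs(M, λ)          # eigenvectors for the given eigenvalues (D)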

The uniform scaling operator

A UniformScaling operator represents a scalar multiple of the identity operator, λ*I. The identity operator I is defined as a constant and is an instance of UniformScaling. The size of these operators is generic: they match the other matrix in the binary operations +, -, * and \. For A+I and A-I this means that A must be square. Multiplication with the identity operator I is essentially a no-op (apart from checking that the scaling factor is one), so it adds almost no overhead.
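
A minimal usage sketch, assuming Julia 1.x with the LinearAlgebra standard library loaded:

    using LinearAlgebra

    A = [1.0 2.0; 3.0 4.0]

    A + I                        # adds 1 to each diagonal entry; A must be square for A ± I
    A - 2I                       # 2I is a UniformScaling with λ = 2
    2I * A                       # behaves like 2 * A
    x = (A + 0.1I) \ [1.0, 2.0]  # shift the diagonal before a solve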