In linear algebra, the generalized singular value decomposition (GSVD) is the name of two different techniques based on the singular value decomposition (SVD). The two versions differ because one version decomposes two matrices (somewhat like the higher-order or tensor SVD) and the other version uses a set of constraints imposed on the left and right singular vectors of a single-matrix SVD.
First version: two-matrix decomposition
The generalized singular value decomposition (GSVD) is a matrix decomposition on a pair of matrices which generalizes the singular value decomposition. It was introduced by Van Loan[1] in 1976 and later developed by Paige and Saunders;[2] the latter version is described here. In contrast to the SVD, the GSVD simultaneously decomposes a pair of matrices with the same number of columns. The SVD and the GSVD, as well as some other possible generalizations of the SVD,[3][4][5] are extensively used in the study of the conditioning and regularization of linear systems with respect to quadratic semi-norms. In the following, let $\mathbb{F}=\mathbb{R}$ or $\mathbb{F}=\mathbb{C}$.
The generalized singular value decomposition of matrices $A_1\in\mathbb{F}^{m_1\times n}$ and $A_2\in\mathbb{F}^{m_2\times n}$ is
$$\begin{aligned}A_1&=U_1\Sigma_1[W^*D,0_D]Q^*,\\A_2&=U_2\Sigma_2[W^*D,0_D]Q^*,\end{aligned}$$
where

- $U_1\in\mathbb{F}^{m_1\times m_1}$ is unitary,
- $U_2\in\mathbb{F}^{m_2\times m_2}$ is unitary,
- $Q\in\mathbb{F}^{n\times n}$ is unitary,
- $W\in\mathbb{F}^{k\times k}$ is unitary,
- $D\in\mathbb{R}^{k\times k}$ is real diagonal with positive diagonal, and contains the non-zero singular values of $C=\begin{bmatrix}A_1\\A_2\end{bmatrix}$ in decreasing order,
- $0_D=0\in\mathbb{R}^{k\times(n-k)}$,
- $\Sigma_1=\lceil I_A,S_1,0_A\rfloor\in\mathbb{R}^{m_1\times k}$ is real non-negative block-diagonal, where $S_1=\lceil\alpha_{r+1},\dots,\alpha_{r+s}\rfloor$ with $1>\alpha_{r+1}\geq\cdots\geq\alpha_{r+s}>0$, $I_A=I_r$, and $0_A=0\in\mathbb{R}^{(m_1-r-s)\times(k-r-s)}$,
- $\Sigma_2=\lceil 0_B,S_2,I_B\rfloor\in\mathbb{R}^{m_2\times k}$ is real non-negative block-diagonal, where $S_2=\lceil\beta_{r+1},\dots,\beta_{r+s}\rfloor$ with $0<\beta_{r+1}\leq\cdots\leq\beta_{r+s}<1$, $I_B=I_{k-r-s}$, and $0_B=0\in\mathbb{R}^{(m_2-k+r)\times r}$,
- $\Sigma_1^*\Sigma_1=\lceil\alpha_1^2,\dots,\alpha_k^2\rfloor$,
- $\Sigma_2^*\Sigma_2=\lceil\beta_1^2,\dots,\beta_k^2\rfloor$,
- $\Sigma_1^*\Sigma_1+\Sigma_2^*\Sigma_2=I_k$,
- $k=\operatorname{rank}(C)$.
We denote $\alpha_1=\cdots=\alpha_r=1$, $\alpha_{r+s+1}=\cdots=\alpha_k=0$, $\beta_1=\cdots=\beta_r=0$, and $\beta_{r+s+1}=\cdots=\beta_k=1$. While $\Sigma_1$ is diagonal, $\Sigma_2$ is not always diagonal, because of the leading rectangular zero matrix; instead $\Sigma_2$ is "bottom-right-diagonal".
There are many variations of the GSVD. These variations are related to the fact that it is always possible to insert $EE^*=I$ to the left of $Q^*$, where $E\in\mathbb{F}^{n\times n}$ is an arbitrary unitary matrix. We denote

- $X=([W^*D,0_D]Q^*)^*$,
- $X^*=[0,R]\hat{Q}^*$, where $R\in\mathbb{F}^{k\times k}$ is upper-triangular and invertible, and $\hat{Q}\in\mathbb{F}^{n\times n}$ is unitary; such matrices exist by the RQ decomposition,
- $Y=W^*D$; then $Y$ is invertible.

Here are some variations of the GSVD:
- MATLAB (gsvd):
$$\begin{aligned}A_1&=U_1\Sigma_1X^*,\\A_2&=U_2\Sigma_2X^*.\end{aligned}$$
- LAPACK (LA_GGSVD):
$$\begin{aligned}A_1&=U_1\Sigma_1[0,R]\hat{Q}^*,\\A_2&=U_2\Sigma_2[0,R]\hat{Q}^*.\end{aligned}$$
- Simplified:
$$\begin{aligned}A_1&=U_1\Sigma_1[Y,0_D]Q^*,\\A_2&=U_2\Sigma_2[Y,0_D]Q^*.\end{aligned}$$
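As a concrete illustration, the simplified variation can be computed numerically by following the constructive proof given later in this article: take the thin SVD of the stacked matrix $C$, then an SVD of the top block of its left singular vectors. The NumPy sketch below assumes real matrices and the generic full-rank case ($k=n$, $r=0$, all $0<\alpha_i<1$); it is an illustration of the definitions, not a numerically robust GSVD implementation (LAPACK's dedicated GSVD routines should be preferred in practice).

```python
import numpy as np

rng = np.random.default_rng(0)
m1, m2, n = 6, 5, 4
A1 = rng.standard_normal((m1, n))
A2 = rng.standard_normal((m2, n))

# Thin SVD of the stacked matrix C = [A1; A2]; generically k = rank(C) = n.
C = np.vstack([A1, A2])
P1, d, Qh = np.linalg.svd(C, full_matrices=False)
D = np.diag(d)                       # k x k, positive diagonal, decreasing
P11, P21 = P1[:m1], P1[m1:]          # split the left singular vectors

# SVD of the top block gives U1, the alpha_i (decreasing), and W.
U1, alpha, Wh = np.linalg.svd(P11, full_matrices=False)
beta = np.sqrt(1.0 - alpha**2)       # from Sigma1* Sigma1 + Sigma2* Sigma2 = I
U2 = (P21 @ Wh.T) / beta             # normalize columns of P21 W (all beta_i > 0)

# Simplified variation: A_i = U_i Sigma_i [Y, 0_D] Q* with Y = W* D (here k = n).
Y = Wh @ D
assert np.allclose(A1, U1 @ np.diag(alpha) @ Y @ Qh)
assert np.allclose(A2, U2 @ np.diag(beta) @ Y @ Qh)
assert np.allclose(U2.T @ U2, np.eye(n))   # U2 has orthonormal columns
```

The generalized singular value pairs (defined in the next section) are then the $(\alpha_i,\beta_i)$.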
Generalized singular values
A generalized singular value of $A_1$ and $A_2$ is a pair $(a,b)\in\mathbb{R}^2$ such that
$$\begin{aligned}\lim_{\delta\to 0}\det(b^2A_1^*A_1-a^2A_2^*A_2+\delta I_n)/\det(\delta I_{n-k})&=0,\\a^2+b^2&=1,\\a,b&\geq 0.\end{aligned}$$
We have

- $A_iA_j^*=U_i\Sigma_iYY^*\Sigma_j^*U_j^*$
- $A_i^*A_j=Q\begin{bmatrix}Y^*\Sigma_i^*\Sigma_jY&0\\0&0\end{bmatrix}Q^*=Q_1Y^*\Sigma_i^*\Sigma_jYQ_1^*$
By these properties we can show that the generalized singular values are exactly the pairs $(\alpha_i,\beta_i)$. We have
$$\begin{aligned}&\det(b^2A_1^*A_1-a^2A_2^*A_2+\delta I_n)\\={}&\det(b^2A_1^*A_1-a^2A_2^*A_2+\delta QQ^*)\\={}&\det\left(Q\begin{bmatrix}Y^*(b^2\Sigma_1^*\Sigma_1-a^2\Sigma_2^*\Sigma_2)Y+\delta I_k&0\\0&\delta I_{n-k}\end{bmatrix}Q^*\right)\\={}&\det(\delta I_{n-k})\det(Y^*(b^2\Sigma_1^*\Sigma_1-a^2\Sigma_2^*\Sigma_2)Y+\delta I_k).\end{aligned}$$
Therefore
$$\begin{aligned}&\lim_{\delta\to 0}\det(b^2A_1^*A_1-a^2A_2^*A_2+\delta I_n)/\det(\delta I_{n-k})\\={}&\lim_{\delta\to 0}\det(Y^*(b^2\Sigma_1^*\Sigma_1-a^2\Sigma_2^*\Sigma_2)Y+\delta I_k)\\={}&\det(Y^*(b^2\Sigma_1^*\Sigma_1-a^2\Sigma_2^*\Sigma_2)Y)\\={}&|\det(Y)|^2\prod_{i=1}^{k}(b^2\alpha_i^2-a^2\beta_i^2).\end{aligned}$$
This expression is zero exactly when $a=\alpha_i$ and $b=\beta_i$ for some $i$.
In [2], the generalized singular values are claimed to be those which solve $\det(b^2A_1^*A_1-a^2A_2^*A_2)=0$. However, this claim holds only when $k=n$, since otherwise the determinant is zero for every pair $(a,b)\in\mathbb{R}^2$; this can be seen by substituting $\delta=0$ above.
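When $k=n$ the determinant condition can be checked numerically. In the sketch below (NumPy/SciPy, real matrices, $A_2$ square and well-conditioned), the ratios $\alpha_i^2/\beta_i^2$ are recovered as the generalized eigenvalues of the symmetric-definite pencil $(A_1^*A_1,A_2^*A_2)$; at each recovered pair $(a,b)=(\alpha_i,\beta_i)$ the matrix $b^2A_1^*A_1-a^2A_2^*A_2$ is indeed singular. This is only an illustration, not a recommended algorithm for computing generalized singular values.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
n = 4
A1 = rng.standard_normal((6, n))
A2 = rng.standard_normal((n, n)) + 3 * np.eye(n)   # square, well-conditioned: k = n

# lambda_i = alpha_i^2 / beta_i^2 are the generalized eigenvalues of the
# symmetric-definite pencil (A1^T A1, A2^T A2).
lam = eigh(A1.T @ A1, A2.T @ A2, eigvals_only=True)
alpha = np.sqrt(lam / (1 + lam))
beta = np.sqrt(1 / (1 + lam))                      # enforces alpha^2 + beta^2 = 1

# At each pair (a, b) = (alpha_i, beta_i), b^2 A1^T A1 - a^2 A2^T A2 is singular.
for a, b in zip(alpha, beta):
    M = b**2 * A1.T @ A1 - a**2 * A2.T @ A2
    s = np.linalg.svd(M, compute_uv=False)
    assert s[-1] < 1e-8 * s[0]                     # smallest singular value ~ 0
```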
Generalized inverse
Define $E^+=E^{-1}$ for any invertible matrix $E\in\mathbb{F}^{n\times n}$, $0^+=0^*$ for any zero matrix $0\in\mathbb{F}^{m\times n}$, and $\lceil E_1,E_2\rfloor^+=\lceil E_1^+,E_2^+\rfloor$ for any block-diagonal matrix. Then define
$$A_i^+=Q\begin{bmatrix}Y^{-1}\\0\end{bmatrix}\Sigma_i^+U_i^*.$$
It can be shown that $A_i^+$ as defined here is a generalized inverse of $A_i$; in particular a $\{1,2,3\}$-inverse of $A_i$. Since it does not in general satisfy $(A_i^+A_i)^*=A_i^+A_i$, this is not the Moore–Penrose inverse; otherwise we could derive $(AB)^+=B^+A^+$ for any choice of matrices, which holds only for certain classes of matrices.
Suppose $Q=\begin{bmatrix}Q_1&Q_2\end{bmatrix}$, where $Q_1\in\mathbb{F}^{n\times k}$ and $Q_2\in\mathbb{F}^{n\times(n-k)}$. This generalized inverse has the following properties:
- $\Sigma_1^+=\lceil I_A,S_1^{-1},0_A^T\rfloor$
- $\Sigma_2^+=\lceil 0_B^T,S_2^{-1},I_B\rfloor$
- $\Sigma_1\Sigma_1^+=\lceil I,I,0\rfloor$
- $\Sigma_2\Sigma_2^+=\lceil 0,I,I\rfloor$
- $\Sigma_1\Sigma_2^+=\lceil 0,S_1S_2^{-1},0\rfloor$
- $\Sigma_1^+\Sigma_2=\lceil 0,S_1^{-1}S_2,0\rfloor$
- $A_iA_j^+=U_i\Sigma_i\Sigma_j^+U_j^*$
- $A_i^+A_j=Q\begin{bmatrix}Y^{-1}\Sigma_i^+\Sigma_jY&0\\0&0\end{bmatrix}Q^*=Q_1Y^{-1}\Sigma_i^+\Sigma_jYQ_1^*$
A generalized singular ratio of $A_1$ and $A_2$ is $\sigma_i=\alpha_i\beta_i^+$. By the above properties, $A_1A_2^+=U_1\Sigma_1\Sigma_2^+U_2^*$. Note that $\Sigma_1\Sigma_2^+=\lceil 0,S_1S_2^{-1},0\rfloor$ is diagonal and, ignoring the leading zeros, contains the singular ratios in decreasing order. If $A_2$ is invertible, then $\Sigma_1\Sigma_2^+$ has no leading zeros, the generalized singular ratios are the singular values of the matrix $A_1A_2^+=A_1A_2^{-1}$, and $U_1$ and $U_2$ are its matrices of singular vectors. In fact, computing the SVD of $A_1A_2^{-1}$ is one of the motivations for the GSVD, as "forming $AB^{-1}$ and finding its SVD can lead to unnecessary and large numerical errors when $B$ is ill-conditioned for solution of equations".[2] Hence the sometimes used name "quotient SVD", although this is not the only reason for using the GSVD. If $A_2$ is not invertible, then $U_1\Sigma_1\Sigma_2^+U_2^*$ is still the SVD of $A_1A_2^+$ if we relax the requirement of having the singular values in decreasing order. Alternatively, a decreasing-order SVD can be found by moving the leading zeros to the back: $U_1\Sigma_1\Sigma_2^+U_2^*=(U_1P_1)P_1^*\Sigma_1\Sigma_2^+P_2(P_2^*U_2^*)$, where $P_1$ and $P_2$ are appropriate permutation matrices. Since rank equals the number of non-zero singular values, $\operatorname{rank}(A_1A_2^+)=s$.
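The $\{1,2,3\}$-inverse properties can be checked numerically. The NumPy sketch below (real matrices, generic full-rank case with $k=n$ and all $\alpha_i,\beta_i>0$) rebuilds the decomposition via the construction described below and verifies the first three Penrose conditions. In this particular full-rank case the fourth condition happens to hold as well, so $A_i^+$ coincides with the Moore–Penrose inverse, although it does not in general.

```python
import numpy as np

rng = np.random.default_rng(2)
m1, m2, n = 6, 5, 4
A1 = rng.standard_normal((m1, n))
A2 = rng.standard_normal((m2, n))

# GSVD via the construction (generic case: k = n, all alpha_i, beta_i > 0).
C = np.vstack([A1, A2])
P1, d, Qh = np.linalg.svd(C, full_matrices=False)
P11, P21 = P1[:m1], P1[m1:]
U1, alpha, Wh = np.linalg.svd(P11, full_matrices=False)
beta = np.sqrt(1.0 - alpha**2)
U2 = (P21 @ Wh.T) / beta
Y = Wh @ np.diag(d)                            # Y = W* D, invertible
Yinv = np.linalg.inv(Y)

# A_i^+ = Q1 Y^{-1} Sigma_i^+ U_i^*; here Sigma_i^+ = diag(1/alpha), diag(1/beta).
A1p = Qh.T @ Yinv @ np.diag(1.0 / alpha) @ U1.T
A2p = Qh.T @ Yinv @ np.diag(1.0 / beta) @ U2.T

for A, Ap in ((A1, A1p), (A2, A2p)):
    assert np.allclose(A @ Ap @ A, A)          # Penrose condition (1)
    assert np.allclose(Ap @ A @ Ap, Ap)        # Penrose condition (2)
    assert np.allclose((A @ Ap).T, A @ Ap)     # Penrose condition (3)
```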
The decomposition above can be constructed as follows. Let

- $C=P\lceil D,0\rfloor Q^*$ be the SVD of $C=\begin{bmatrix}A_1\\A_2\end{bmatrix}$, where $P\in\mathbb{F}^{(m_1+m_2)\times(m_1+m_2)}$ is unitary, and $Q$ and $D$ are as described,
- $P=[P_1,P_2]$, where $P_1\in\mathbb{F}^{(m_1+m_2)\times k}$ and $P_2\in\mathbb{F}^{(m_1+m_2)\times(n-k)}$,
- $P_1=\begin{bmatrix}P_{11}\\P_{21}\end{bmatrix}$, where $P_{11}\in\mathbb{F}^{m_1\times k}$ and $P_{21}\in\mathbb{F}^{m_2\times k}$,
- $P_{11}=U_1\Sigma_1W^*$ by the SVD of $P_{11}$, where $U_1$, $\Sigma_1$ and $W$ are as described,
- $P_{21}W=U_2\Sigma_2$ by a decomposition similar to a QR decomposition, where $U_2$ and $\Sigma_2$ are as described.
Then
$$\begin{aligned}C&=P\lceil D,0\rfloor Q^*\\&=[P_1D,0]Q^*\\&=\begin{bmatrix}U_1\Sigma_1W^*D&0\\U_2\Sigma_2W^*D&0\end{bmatrix}Q^*\\&=\begin{bmatrix}U_1\Sigma_1[W^*D,0]Q^*\\U_2\Sigma_2[W^*D,0]Q^*\end{bmatrix}.\end{aligned}$$
We also have
$$\begin{bmatrix}U_1^*&0\\0&U_2^*\end{bmatrix}P_1W=\begin{bmatrix}\Sigma_1\\\Sigma_2\end{bmatrix}.$$
Therefore
$$\Sigma_1^*\Sigma_1+\Sigma_2^*\Sigma_2=\begin{bmatrix}\Sigma_1\\\Sigma_2\end{bmatrix}^*\begin{bmatrix}\Sigma_1\\\Sigma_2\end{bmatrix}=W^*P_1^*\begin{bmatrix}U_1&0\\0&U_2\end{bmatrix}\begin{bmatrix}U_1^*&0\\0&U_2^*\end{bmatrix}P_1W=I.$$
Since $P_1$ has orthonormal columns, $\|P_1\|_2\leq 1$. Therefore
$$\|\Sigma_1\|_2=\|U_1^*P_{11}W\|_2=\|P_{11}\|_2\leq\|P_1\|_2\leq 1.$$
We also have, for each $x\in\mathbb{R}^k$ such that $\|x\|_2=1$,
$$\|P_{21}x\|_2^2\leq\|P_{11}x\|_2^2+\|P_{21}x\|_2^2=\|P_1x\|_2^2\leq 1.$$
Therefore $\|P_{21}\|_2\leq 1$, and
$$\|\Sigma_2\|_2=\|U_2^*P_{21}W\|_2=\|P_{21}\|_2\leq 1.$$
The GSVD, formulated as a comparative spectral decomposition,[6] has been successfully applied to signal processing and data science, e.g., in genomic signal processing.[7][8][9] These applications inspired several additional comparative spectral decompositions, i.e., the higher-order GSVD (HO GSVD)[10] and the tensor GSVD,[11][12] multi-tensor generalizations of the SVD invented to simultaneously identify the similarities and dissimilarities among, and to create a single coherent model from, data of any type, number, and dimension. The GSVD has also found application in estimating the spectral decompositions of linear operators when the eigenfunctions are parameterized with a linear model, i.e., a reproducing kernel Hilbert space.[13]
Second version: weighted single-matrix decomposition
The weighted version of the generalized singular value decomposition (GSVD) is a constrained matrix decomposition with constraints imposed on the left and right singular vectors of the singular value decomposition.[14][15][16] This form of the GSVD is a direct extension of the SVD. Given an m×n real or complex matrix M, it is the decomposition
$$M=U\Sigma V^*,$$
where
$$U^*W_uU=V^*W_vV=I,$$
where $I$ is the identity matrix and where $U$ and $V$ are orthonormal with respect to their constraints ($W_u$ and $W_v$, respectively). Additionally, $W_u$ and $W_v$ are positive-definite matrices (often diagonal matrices of weights). This form of the GSVD is the core of certain techniques, such as generalized principal component analysis and correspondence analysis.
The weighted form of the GSVD is called as such because, with the correct selection of weights, it generalizes many techniques (such as multidimensional scaling and linear discriminant analysis).[17]
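With diagonal positive weights, one common way to compute this decomposition is to take the plain SVD of the rescaled matrix $W_u^{1/2}MW_v^{1/2}$ and absorb the weight square roots back into the singular vectors. A minimal NumPy sketch (real matrix, diagonal weights assumed):

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 5, 4
M = rng.standard_normal((m, n))
wu = rng.uniform(0.5, 2.0, m)        # diagonal of W_u (positive definite)
wv = rng.uniform(0.5, 2.0, n)        # diagonal of W_v (positive definite)

# Plain SVD of the weighted matrix W_u^{1/2} M W_v^{1/2}
Mt = np.sqrt(wu)[:, None] * M * np.sqrt(wv)
Ut, s, Vth = np.linalg.svd(Mt, full_matrices=False)

# Absorb the weights: U = W_u^{-1/2} Ut, V = W_v^{-1/2} Vt
U = Ut / np.sqrt(wu)[:, None]
V = Vth.T / np.sqrt(wv)[:, None]

Wu, Wv = np.diag(wu), np.diag(wv)
assert np.allclose(M, U @ np.diag(s) @ V.T)   # M = U Sigma V*
assert np.allclose(U.T @ Wu @ U, np.eye(n))   # U* W_u U = I
assert np.allclose(V.T @ Wv @ V, np.eye(n))   # V* W_v V = I
```

With $W_u=I$ and $W_v=I$ this reduces to the ordinary SVD.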
References

^ Van Loan CF (1976). "Generalizing the Singular Value Decomposition". SIAM J. Numer. Anal. 13 (1): 76–83. Bibcode:1976SJNA...13...76V. doi:10.1137/0713009.
^ a b c Paige CC, Saunders MA (1981). "Towards a Generalized Singular Value Decomposition". SIAM J. Numer. Anal. 18 (3): 398–405. Bibcode:1981SJNA...18..398P. doi:10.1137/0718026.
^ Hansen PC (1997). Rank-Deficient and Discrete Ill-Posed Problems: Numerical Aspects of Linear Inversion. SIAM Monographs on Mathematical Modeling and Computation. ISBN 0-89871-403-6.
^ de Moor BL, Golub GH (1989). "Generalized Singular Value Decompositions: A Proposal for a Standard Nomenclature" (PDF).
^ de Moor BL, Zha H (1991). "A tree of generalizations of the ordinary singular value decomposition". Linear Algebra and Its Applications. 147: 469–500. doi:10.1016/0024-3795(91)90243-P.
^ Alter O, Brown PO, Botstein D (March 2003). "Generalized singular value decomposition for comparative analysis of genome-scale expression data sets of two different organisms". Proceedings of the National Academy of Sciences of the United States of America. 100 (6): 3351–6. Bibcode:2003PNAS..100.3351A. doi:10.1073/pnas.0530258100. PMC 152296. PMID 12631705.
^ Lee CH, Alpert BO, Sankaranarayanan P, Alter O (January 2012). "GSVD comparison of patient-matched normal and tumor aCGH profiles reveals global copy-number alterations predicting glioblastoma multiforme survival". PLOS ONE. 7 (1): e30098. Bibcode:2012PLoSO...730098L. doi:10.1371/journal.pone.0030098. PMC 3264559. PMID 22291905.
^ Aiello KA, Ponnapalli SP, Alter O (September 2018). "Mathematically universal and biologically consistent astrocytoma genotype encodes for transformation and predicts survival phenotype". APL Bioengineering. 2 (3): 031909. doi:10.1063/1.5037882. PMC 6215493. PMID 30397684.
^ Ponnapalli SP, Bradley MW, Devine K, Bowen J, Coppens SE, Leraas KM, Milash BA, Li F, Luo H, Qiu S, Wu K, Yang H, Wittwer CT, Palmer CA, Jensen RL, Gastier-Foster JM, Hanson HA, Barnholtz-Sloan JS, Alter O (May 2020). "Retrospective Clinical Trial Experimentally Validates Glioblastoma Genome-Wide Pattern of DNA Copy-Number Alterations Predictor of Survival". APL Bioengineering. 4 (2): 026106. doi:10.1063/1.5142559. PMC 7229984. PMID 32478280.
^ Ponnapalli SP, Saunders MA, Van Loan CF, Alter O (December 2011). "A higher-order generalized singular value decomposition for comparison of global mRNA expression from multiple organisms". PLOS ONE. 6 (12): e28072. Bibcode:2011PLoSO...628072P. doi:10.1371/journal.pone.0028072. PMC 3245232. PMID 22216090.
^ Sankaranarayanan P, Schomay TE, Aiello KA, Alter O (April 2015). "Tensor GSVD of patient- and platform-matched tumor and normal DNA copy-number profiles uncovers chromosome arm-wide patterns of tumor-exclusive platform-consistent alterations encoding for cell transformation and predicting ovarian cancer survival". PLOS ONE. 10 (4): e0121396. Bibcode:2015PLoSO..1021396S. doi:10.1371/journal.pone.0121396. PMC 4398562. PMID 25875127.
^ Bradley MW, Aiello KA, Ponnapalli SP, Hanson HA, Alter O (September 2019). "GSVD- and tensor GSVD-uncovered patterns of DNA copy-number alterations predict adenocarcinomas survival in general and in response to platinum". APL Bioengineering. 3 (3): 036104. doi:10.1063/1.5099268. PMC 6701977. PMID 31463421.
^ Cabannes V, Pillaud-Vivien L, Bach F, Rudi A (2021). "Overcoming the curse of dimensionality with Laplacian regularization in semi-supervised learning". arXiv:2009.04324 [stat.ML].
^ Jolliffe IT (2002). Principal Component Analysis. Springer Series in Statistics (2nd ed.). NY: Springer. ISBN 978-0-387-95442-4.
^ Greenacre M (1983). Theory and Applications of Correspondence Analysis. London: Academic Press. ISBN 978-0-12-299050-2.
^ Abdi H, Williams LJ (2010). "Principal component analysis". Wiley Interdisciplinary Reviews: Computational Statistics. 2 (4): 433–459. doi:10.1002/wics.101. S2CID 122379222.
^ Abdi H (2007). "Singular Value Decomposition (SVD) and Generalized Singular Value Decomposition (GSVD)". In Salkind NJ (ed.). Encyclopedia of Measurement and Statistics. Thousand Oaks (CA): Sage. pp. 907–912.

Further reading

Golub G, Van Loan C (1996). Matrix Computations (Third ed.). Baltimore: Johns Hopkins University Press. ISBN 0-8018-5414-8.
LAPACK manual [1]