Computing Outer Inverses Using Complete Orthogonal Factorizations

Several full-rank representations of the A^(2)_{T,S} inverse of a given constant complex matrix, based on various complete orthogonal factorizations, are introduced. In particular, we introduce a full-rank representation based on the Singular Value Decomposition (SVD), as well as one based on a combination of the QR decomposition and the SVD of the triangular matrix produced by the QR decomposition. Furthermore, representations based on reductions to a bidiagonal form are defined; these are applicable to real full row rank matrices. Illustrative numerical examples, as well as an extensive numerical study comparing the three introduced methods, are presented.


Introduction
Computation of generalized inverses by means of various matrix decompositions has been extensively investigated in the scientific literature.
A fast method for computing the Moore-Penrose inverse A^† based on the QR decomposition of the matrix A is introduced in (Katsikis et al., 2011). The QR decomposition is assumed to be defined as in Theorem 3.3.11 in (Watkins, 2002), and its extension to complex matrices is taken from (Goodall, 1993). An extension of the representation introduced in (Katsikis et al., 2011) to the set of outer inverses with prescribed range and null space is presented in (Stanimirović et al., 2012b). Symbolic computation of A^(2)_{T,S} inverses on the basis of the QDR decomposition of the matrix W is presented in (Stanimirović et al., 2012). The canonical form of the DMP inverse A^D A A^† of a square matrix A is presented in (Malik & Thome, 2014); this representation is based on the Hartwig-Spindelböck matrix decomposition. The authors of (Issa & Bdair, 2014) introduced new bounds for the zeros of polynomials by using the QR and LU decompositions of the corresponding companion matrices. Recently, representations of {2, 4} and {2, 3} generalized inverses based on the SVD were considered in (Shaini & Hoxha, 2013).
In the present paper we develop several numerical algorithms for computing A^(2)_{T,S} inverses. These algorithms are based on the full-rank representation of an appropriately chosen n × m matrix W arising from various complete orthogonal factorizations of W.
Our second intention is to examine properties of the introduced methods. For this reason, the paper presents a large number of numerical experiments in which the introduced methods are compared with each other.
The paper is organized as follows. The second section surveys some useful basic notions and notation concerning generalized inverses. In the third section we derive two numerical algorithms for computing outer inverses; these algorithms are based on the SVD of an appropriately chosen matrix W as well as on the successive application of the QR decomposition and the SVD. Representations based on bidiagonal forms, applicable to real full row rank matrices, are introduced in Section 4. Numerical examples on various test matrices are presented in the last section.

Preliminaries
Following the usual notation, by R^{m×n}_r (resp. C^{m×n}_r) we denote the set of all real (resp. complex) m × n matrices of rank r. By I we denote the unit matrix of an appropriate order. Furthermore, A^T, R(A), rank(A) and N(A) denote the transpose, the range, the rank and the null space of A ∈ R^{m×n}.
For A ∈ C^{m×n}, consider the following matrix equations in X:

(1) AXA = A,  (2) XAX = X,  (3) (AX)^* = AX,  (4) (XA)^* = XA.

In the case m = n we also consider the equations

(5) AX = XA,  (1^k) A^{k+1}X = A^k.

For a sequence S of elements from the set {1, 2, 3, 4, 5, 1^k}, the set of matrices obeying the equations labeled by the numbers collected in S is denoted by A{S}. A matrix from A{S} is called an S-inverse of A. The matrix X = A^† is the Moore-Penrose inverse of A, satisfying equations (1)-(4). The Drazin inverse A^D is the unique {1^k, 2, 5}-inverse of A, where k = ind(A) is the index of A. The group inverse A^# is the unique {1, 2, 5}-inverse of A, and it exists if and only if ind(A) = 1. In the case ind(A) = 1, the Drazin inverse of A is equal to the group inverse A^# of A.
The rank of a generalized inverse X is important, and it is convenient to consider the subset A{i, j, k}_s of A{i, j, k}, consisting of {i, j, k}-inverses of rank s (see Ben-Israel & Greville, 2003).
The outer generalized inverse of A with prescribed range T and null space S is denoted by A^(2)_{T,S}. Outer generalized inverses with prescribed range and null space are of special importance in matrix theory. The {2}-inverses have applications in constructing iterative methods for solving nonlinear equations (Ben-Israel & Greville, 2003; Nashed, 1993) as well as in statistics (Getson & Hsuan, 1988; Husen & Langenberg, 1985). In particular, outer inverses play an important role in stable approximations of ill-posed problems and in linear and nonlinear problems involving rank-deficient generalized inverses (Nashed, 1976; Zheng & Bapat, 2004). On the other hand, it is well known that the Moore-Penrose inverse A^† and the weighted Moore-Penrose inverse A^†_{M,N}, the Drazin inverse A^D and the group inverse A^#, as well as the Bott-Duffin inverse A^{(-1)}_{(L)} and the generalized Bott-Duffin inverse A^{(†)}_{(L)}, can be presented by a unified approach, as generalized inverses A^(2)_{T,S} for appropriate choices of T and S. For example, the following is valid for a rectangular matrix A (Ben-Israel & Greville, 2003):

A^† = A^(2)_{R(A^*), N(A^*)},  A^†_{M,N} = A^(2)_{R(Ã), N(Ã)},

where M, N are positive definite matrices of appropriate orders and Ã = N^{-1} A^T M. For a given square matrix A with k = ind(A), the next identities are satisfied (Ben-Israel & Greville, 2003; Chen, 1990; Wang et al., 2004):

A^D = A^(2)_{R(A^k), N(A^k)},  A^# = A^(2)_{R(A), N(A)}.

In the next proposition we restate the full-rank representation of {2}-inverses with prescribed range and null space from (Shen & Cheng, 2007).
Proposition 1 (Shen & Cheng, 2007) Let A ∈ C^{m×n}_r, let T be a subspace of C^n of dimension s ≤ r, and let S be a subspace of C^m of dimension m − s. In addition, suppose that W ∈ C^{n×m} satisfies R(W) = T, N(W) = S. Let W have an arbitrary full-rank factorization, that is, W = FG. If A has a {2}-inverse A^(2)_{T,S}, then:
(1) GAF is an invertible matrix;
(2) A^(2)_{T,S} = F(GAF)^{-1}G.
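The representation of Proposition 1 can be sketched numerically. The following Python snippet is an illustration only (not from the paper); the matrices F and G are randomly generated stand-ins for a full-rank factorization W = FG:

```python
import numpy as np

def outer_inverse_from_full_rank(A, F, G):
    """Outer inverse with range R(F) and null space N(G), via the
    representation X = F (G A F)^{-1} G of Proposition 1.
    Assumes G A F is invertible."""
    core = G @ A @ F
    return F @ np.linalg.solve(core, G)

# Small illustration: A is 3x4, W = F G would have rank 2.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
F = rng.standard_normal((4, 2))   # full column rank (almost surely)
G = rng.standard_normal((2, 3))   # full row rank (almost surely)
X = outer_inverse_from_full_rank(A, F, G)

# X is a {2}-inverse of A: X A X = X, with rank(X) = rank(W) = 2.
assert np.allclose(X @ A @ X, X)
assert np.linalg.matrix_rank(X) == 2
```

Note that X A X = X holds exactly by construction, since X A X = F(GAF)^{-1}(GAF)(GAF)^{-1}G = X.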

Representations Based on QRD and SVD
In this section, it is assumed that A ∈ C^{m×n}_r is the input matrix and W ∈ C^{n×m}_s, 0 < s ≤ r, is an arbitrary but fixed matrix. A complete orthogonal factorization of W is defined by

W = U [ T 0 ; 0 0 ] V^*,  (3)

where T is a square nonsingular matrix of dimension s × s, s = rank(W), and U, V are unitary. It is assumed that the zero blocks in positions (1, 2) and (2, 2) of the block matrix in (3) may vanish. In order to simplify computations and derive a full-rank representation of W, it is necessary to consider partitions of the matrices U and V into appropriate blocks

U = [ U_s  U' ],  V = [ V_s  V' ],  (4)

where U_s and V_s denote the first s columns of U and V, respectively. Then

W = U_s T V_s^* = (U_s T)(V_s^*)

is a full-rank factorization of W obtained from the complete orthogonal factorization (3).
In this section we prove that the QR factorization and the SVD are particular appearances of the complete orthogonal factorization. Also, in Section 4 we show that a complete orthogonal factorization of W can be derived from its reduction to a bidiagonal form.
Suppose that the QR factorization of W is of the form

W P = Q R,  (5)

where P is a permutation matrix and

R = [ R_11  R_12 ; 0  0 ]  (6)

is an upper trapezoidal matrix. Assume that Q is partitioned as Q = [ Q_s  Q' ], where Q_s consists of the first s columns of the matrix Q, and that R is partitioned as in (6), with R_11 ∈ C^{s×s} nonsingular.
The idea of calculating the Moore-Penrose inverse using the QR decomposition originated in (Katsikis et al., 2011). A generalization of this result to the set of outer inverses was introduced in (Stanimirović et al., 2012b):

A^(2)_{R(W),N(W)} = Q_s ([R_11 R_12] P^* A Q_s)^{-1} [R_11 R_12] P^*.  (7)

The QR factorization (5), with R of the form (6), is not a complete orthogonal factorization unless R_12 = 0. Therefore, (7) is not derived from a complete orthogonal factorization. However, R_12 can be eliminated by applying further orthogonal (or unitary) transformations from the right to the upper trapezoidal matrix [R_11 R_12]:

[R_11 R_12] Z = [T_11 0].

This gives the complete orthogonal factorization of W:

W = Q [ T_11 0 ; 0 0 ] (PZ)^*.  (8)

Therefore, (8) is the complete orthogonal factorization of W arising from (5), where the block T from (3) is equal to T_11. Furthermore, W = Q_s T_11 (PZ)^* is the full-rank factorization of W (called the QR full-rank factorization). The following representation can be derived as an analogy of (7).
Corollary 1 Let A ∈ C^{m×n}_r be a given matrix and let W ∈ C^{n×m}_s, s ≤ r, be selected. Assume that (8) is the complete orthogonal factorization of W. If A has a {2}-inverse A^(2)_{R(W),N(W)}, then T_11 (PZ)^* A Q_s is an invertible matrix and

A^(2)_{R(W),N(W)} = Q_s (T_11 (PZ)^* A Q_s)^{-1} T_11 (PZ)^*.  (9)

Suppose that the SVD of W ∈ C^{n×m}_s is of the general form

W = U Σ V^*,  (10)

where

Σ = [ Σ_s 0 ; 0 0 ],  Σ_s = diag(σ_1, ..., σ_s),  σ_1 ≥ ... ≥ σ_s > 0.  (11)

The SVD is a complete orthogonal factorization in which the block T is the diagonal matrix with the singular values of W on the main diagonal. It is known that the SVD (10) can be presented in the more efficient form

W = U_s Σ_s V_s^*,  (12)

where U_s and V_s consist of the first s columns of U and V and are left orthogonal (i.e. U_s^* U_s = V_s^* V_s = I_s). Further, discarding small singular values in Σ_s, it is possible to use the Truncated Singular Value Decomposition (TSVD) of W and generate a full-rank factorization of its approximation W^(t), defined by

W^(t) = U_t Σ_t V_t^* = U_t (Σ_t V_t^*),  t ≤ s.  (13)

Immediately from Proposition 1 and the full-rank decomposition (13) we obtain the following representation of outer inverses.
Lemma 1 Let A ∈ C^{m×n}_r be a given matrix and let W ∈ C^{n×m}_s, s ≤ r, be selected. If (13) is the full-rank factorization of W^(t), then the following is valid:

A^(2)_{R(W^(t)),N(W^(t))} = U_t (Σ_t V_t^* A U_t)^{-1} Σ_t V_t^*.  (14)

Algorithm 1 defines the method for computing the outer inverse of A based on the SVD full-rank decomposition of W.
Algorithm 1 Computing the A^(2)_{T,S} inverse of the matrix A using (14). (Algorithm SVDATS2)
Require: The matrix A of dimensions m × n and of rank r.
1: Choose an arbitrary but fixed n × m matrix W of rank s ≤ r.
2: Compute the SVD full-rank decomposition of the matrix W^(t) in the form (13).
3: Solve the matrix equation Σ_t V_t^* A U_t X = Σ_t V_t^* with respect to the unknown matrix X.
4: Compute the output A^(2)_{T,S} = U_t X.

We also define representations based on a combination of the QR decomposition of A and the SVD of the triangular matrix R. This representation is known from (Moor, 1991, 1992). In applications where m ≫ n, it is often a good idea to use the SVD of the triangular factor that appears in the QR decomposition of A (see Moor, 1991):

A = Q_A R_A = Q_A (U_R Σ V^*),  (15)

where R_A = U_R Σ V^* is the SVD of the triangular factor. The full-rank factorization (15) of A gives us the idea of applying the SVD of the triangular factor arising from the QR decomposition of W ∈ R^{n×m}_s, 0 < s ≤ r:

W P = Q_W R_W.  (16)

Later, one can use the following truncated form of (16):

W^(t) = Q_W U_t Σ_t V_t^* P^* = (Q_W U_t)(Σ_t V_t^* P^*),  (17)

where R_W ≈ U_t Σ_t V_t^* is the TSVD of R_W, so that (17) is a full-rank factorization of W^(t). Therefore, the induced outer inverse of A, with range R(Q_W U_t) and null space N(V_t^* P^*), is defined by

A^(2)_{R(W^(t)),N(W^(t))} = Q_W U_t (Σ_t V_t^* P^* A Q_W U_t)^{-1} Σ_t V_t^* P^*.  (18)

Algorithm 2 defines the method for computing the outer inverse of A defined in (18).
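Algorithm 1 (SVDATS2) can be sketched in Python. This is an illustrative reimplementation, not the authors' Mathematica code; the truncation level t is exposed as a parameter:

```python
import numpy as np

def svd_ats2(A, W, t):
    """Sketch of Algorithm SVDATS2: an outer inverse of A is built from
    the truncated SVD of W, i.e. from the full-rank factorization
    W^(t) = U_t (Sigma_t V_t^*)."""
    U, sigma, Vh = np.linalg.svd(W, full_matrices=False)
    F = U[:, :t]                       # U_t
    G = sigma[:t, None] * Vh[:t, :]    # Sigma_t V_t^*
    return F @ np.linalg.solve(G @ A @ F, G)

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 7))
W = rng.standard_normal((7, 3)) @ rng.standard_normal((3, 5))  # rank 3
X = svd_ats2(A, W, t=3)
assert np.allclose(X @ A @ X, X)       # X is a {2}-inverse of A
```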
Algorithm 2 Computing the A^(2)_{T,S} inverse of the matrix A using the full-rank representation (18). (Algorithm QRSVDATS2)
Require: The matrix A of dimensions m × n and of rank r.
1: Choose an arbitrary but fixed n × m matrix W of rank s ≤ r.
2: Compute the QR decomposition of the matrix W in the form (16).
3: Compute the TSVD of the matrix R_W and derive the factorization (17) of W^(t).
4: Solve the matrix equation Σ_t V_t^* P^* A Q_W U_t X = Σ_t V_t^* P^* with respect to the unknown matrix X.
5: Compute the output A^(2)_{T,S} = Q_W U_t X.
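A Python sketch of Algorithm 2 (QRSVDATS2) follows; it is illustrative only, and the column pivoting P used in the paper is omitted for brevity:

```python
import numpy as np

def qrsvd_ats2(A, W, t):
    """Sketch of Algorithm QRSVDATS2: QR-decompose W, then take the
    truncated SVD of the triangular factor R and form the outer
    inverse from the resulting full-rank factorization of W^(t)."""
    Q, R = np.linalg.qr(W)                     # W = Q R (no pivoting here)
    U, sigma, Vh = np.linalg.svd(R)
    F = Q @ U[:, :t]                           # Q_W U_t
    G = sigma[:t, None] * Vh[:t, :]            # Sigma_t V_t^*
    return F @ np.linalg.solve(G @ A @ F, G)

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 7))
W = rng.standard_normal((7, 2)) @ rng.standard_normal((2, 5))  # rank 2
X = qrsvd_ats2(A, W, t=2)
assert np.allclose(X @ A @ X, X)
```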
Algorithm 3 Computing A^† of a full-column-rank matrix A.
(Algorithm V (Bidiag1) from (Smoktunowicz & Wróbel, 2012))
Require: The matrix A of dimensions m × n and of rank n.
1: Use the Golub-Kahan algorithm to reduce A to a bidiagonal form A = U B V^T, where U ∈ R^{m×n} is left orthogonal (i.e. U^T U = I_n), V ∈ R^{n×n} is orthogonal and B ∈ R^{n×n} is bidiagonal (i.e. its nonzero elements can be located on the main diagonal or on the superdiagonal only).
2: Solve the equation B Y = U^T for Y by back substitution.
3: Compute the output A^† = V Y.

The second algorithm from (Smoktunowicz & Wróbel, 2012) is restated in Algorithm 4.
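The Bidiag1 idea can be illustrated in Python. NumPy exposes no public Golub-Kahan routine, so in this sketch the SVD stands in for the bidiagonal reduction (a diagonal B is a special case of a bidiagonal matrix); it is an illustration of the pattern "reduce, solve B Y = U^T, multiply", not the authors' algorithm:

```python
import numpy as np

def pinv_via_bidiagonal(A):
    """Sketch of the Bidiag1 pattern for a full-column-rank A:
    reduce A = U B V^T with U left orthogonal and B square invertible,
    solve B Y = U^T, and return A^+ = V Y.  Here B = diag(b) from the
    SVD plays the role of the bidiagonal factor."""
    U, b, Vh = np.linalg.svd(A, full_matrices=False)  # B = diag(b)
    Y = U.T / b[:, None]                              # solves B Y = U^T
    return Vh.T @ Y                                   # A^+ = V Y

rng = np.random.default_rng(3)
A = rng.standard_normal((6, 4))                       # rank 4 a.s.
X = pinv_via_bidiagonal(A)
assert np.allclose(X, np.linalg.pinv(A))
```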
Algorithm 4 Computing A^† of a full-column-rank matrix A.
A complete orthogonal factorization of W can be obtained starting from its bidiagonal form. The introduced generalizations of Algorithm 3 and Algorithm 4 are applicable to real full row rank matrices. If A is of order m × n and of rank m ≤ n, the introduced algorithms generate outer inverses of A of rank m, i.e. elements of the set A{2}_m. The bidiagonal form of the matrix W ∈ R^{n×m}_m, defined in (Ralha, 2003), is used to compute outer inverses of A. As stated in (Smoktunowicz & Wróbel, 2012), the assumption rank(W) = m guarantees the invertibility of B. This means that the bidiagonal reduction of W is a complete orthogonal factorization of W. In the sequel we define a method to obtain a complete orthogonal factorization of W ∈ R^{n×m}_m starting from its bidiagonal form.
Proposition 2 (Ralha, 2003) Let W ∈ R^{n×m} satisfy n ≥ m = rank(W). There exist an orthogonal U ∈ R^{n×n}, an orthogonal V ∈ R^{m×m} and an upper bidiagonal matrix

[ B ; 0 ] ∈ R^{n×m},  (19)

where the block B ∈ R^{m×m} is bidiagonal, such that the following factorization of W holds:

W = U [ B ; 0 ] V^T.  (20)

Theorem 1 Let A ∈ R^{m×n}_m be a given matrix satisfying n ≥ m and let W ∈ R^{n×m}_m be a selected but fixed matrix. Let (20) be the bidiagonal form of W. Let U be partitioned into blocks

U = [ U_m  U' ],  (21)

where U_m denotes the first m columns of U. Then

A^(2)_{R(W),N(W)} = U_m (B V^T A U_m)^{-1} B V^T.  (22)

Using the representation (22) from Theorem 1 we are in a position to state Algorithm 5.
Algorithm 5 Computing the A^(2)_{T,S} inverse of A ∈ R^{m×n} using (22). (Algorithm Bidiag1ATS2)
Require: The matrix A of dimensions m × n and of rank m.
1: Choose an arbitrary but fixed n × m matrix W of rank m.
2: Compute the factorization of the matrix W in the form (20).
3: Solve the matrix equation B V^T A U_m X = B V^T with respect to the unknown matrix X.
4: Compute the output A^(2)_{T,S} = U_m X.

In order to generalize Algorithm 4, in the sequel we generalize the idea of the reduction to bidiagonal form in two stages. For W ∈ R^{n×m}_m, the generalization is justified in the case when n is much larger than m. A QR decomposition of W ∈ R^{n×m}_m is performed in the first stage:

W = Q R = Q_m R_m,  (23)

where R_m ∈ R^{m×m} is upper triangular and Q_m denotes the matrix constructed from the first m columns of Q. The second stage requires the reduction of the relatively small matrix R_m to a bidiagonal form:

R_m = Ũ B Ṽ^T.  (24)

All matrices included in the decomposition (24) are of order m × m; Ũ and Ṽ are orthogonal and B is bidiagonal. Decompositions (23) and (24) imply the following representation of outer inverses of A.
Theorem 2 Let A ∈ R^{m×n}_m be given and let W ∈ R^{n×m}_m be a chosen matrix. If (23) is a QR decomposition of W and (24) is a bidiagonal form of R_m, then

A^(2)_{R(W),N(W)} = Q_m Ũ (B Ṽ^T A Q_m Ũ)^{-1} B Ṽ^T.  (25)

Proof. Since

W = Q_m R_m = Q_m Ũ B Ṽ^T = (Q_m Ũ)(B Ṽ^T)  (26)

is a full-rank factorization of W, the proof immediately follows from the representation of outer inverses with prescribed range and null space from (Shen & Cheng, 2007).
Algorithm 6 Computing the A^(2)_{T,S} inverse of the matrix A ∈ R^{m×n} using (25). (Algorithm Bidiag2ATS2)
Require: The matrix A of dimensions m × n and of rank m.
1: Choose an arbitrary but fixed n × m matrix W of rank m.
2: Compute the factorization of the matrix W in the form (26).
3: Solve the matrix equation B Ṽ^T A Q_m Ũ X = B Ṽ^T with respect to the unknown matrix X.
4: Compute the output A^(2)_{T,S} = Q_m Ũ X.
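The two-stage construction of Algorithm 6 can be sketched in Python. As before, this is an illustration: the SVD of R_m stands in for the bidiagonal reduction (24), its diagonal factor being a special bidiagonal matrix:

```python
import numpy as np

def bidiag2_ats2(A, W):
    """Sketch of Algorithm Bidiag2ATS2 for A of full row rank m:
    stage 1 is the thin QR factorization W = Q_m R_m, stage 2 reduces
    R_m = U~ B V~^T; then solve B V~^T A Q_m U~ X = B V~^T and
    return Q_m U~ X."""
    Qm, Rm = np.linalg.qr(W)                    # stage 1: thin QR
    Ut, b, Vt_h = np.linalg.svd(Rm)             # stage 2: R_m = U~ B V~^T
    B = np.diag(b)
    lhs = B @ Vt_h @ A @ Qm @ Ut                # B V~^T A Q_m U~
    X = np.linalg.solve(lhs, B @ Vt_h)          # solve for X
    return Qm @ Ut @ X                          # A^(2) = Q_m U~ X

rng = np.random.default_rng(4)
A = rng.standard_normal((3, 6))                 # full row rank m = 3
W = rng.standard_normal((6, 3))                 # rank 3
X = bidiag2_ats2(A, W)
assert np.allclose(X @ A @ X, X)                # {2}-inverse
assert np.allclose(A @ X @ A, A)                # rank(X) = rank(A), so also a {1}-inverse
```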
The truncated SVD of order 2 of the matrix R is defined by the ordered triple (U_2, Σ_2, V_2), and the outer generalized inverse defined in (18) is computed from it. In the continuation of this section we give a series of examples in which we study the behavior of the introduced representations.

Example 2
In this example we compare algorithms for computing outer inverses based on three different factorizations:
- Algorithm QRATS2, based on the QR decomposition from (Stanimirović et al., 2012b);
- Algorithm QRSVDATS2, based on the representation (18); and
- Algorithm SVDATS2, based on the SVD from (Shaini & Hoxha, 2013).
The following code in the programming package Mathematica is applied:
(* n=100 or n=200 or n=300 or n=400 or n=500 or n=600 or n=700 or n=800 *)
The smallest values in the tables are written in bold font.
In Table 1 and Table 2 we arrange the results obtained by applying the above Mathematica code to the test function S[n] from (Zielke, 1986), in the case a = 1. The dimensions n of the test matrices are n = 100, 200, 300, 400, 500, 600, 700, 800. For each matrix S[n] we generate 50 random matrices W of rank 2. Two criteria are used for testing the algorithms: precision and CPU time. Let us denote by N^QR_k, N^QRSVD_k and N^SVD_k the results generated by applying the QRD, the combination of the QRD and the SVD of R, and the TSVD, respectively, for the kth generated matrix W_k, k = 1, ..., 50. Columns 2, 3 and 4 in Table 1 contain the sums of the generated norms

S^QR_n = Σ_{k=1}^{50} N^QR_k,  S^QRSVD_n = Σ_{k=1}^{50} N^QRSVD_k,  S^SVD_n = Σ_{k=1}^{50} N^SVD_k.

The performances of the compared methods are tested using the so-called performance profile, introduced in (Dolan & Moré, 2002). The underlying performance metric is defined by the accuracy and the CPU time. Following the notation of (Dolan & Moré, 2002), the number of solvers is n_s = 3 (the solvers are S^QR_n, S^QRSVD_n, S^SVD_n) and the number of numerical experiments is n_p = 8 (each exploiting 50 randomly generated matrices W and the accordingly generated outer inverses). By i_{p,s} we denote the accuracy achieved by applying the method s to the problem p. The quantity

r_{p,s} = i_{p,s} / min{ i_{p,s} : 1 ≤ s ≤ n_s }

is called the performance ratio. Finally, the performance of the solver s is defined by the following cumulative distribution function:

ρ_s(τ) = (1/n_p) |{ p ∈ P : r_{p,s} ≤ τ }|,

where τ ∈ R and P represents the set of problems.
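The performance-profile construction can be sketched as follows. The cost data below are hypothetical; in the paper the underlying metric is the achieved accuracy or the CPU time:

```python
import numpy as np

def performance_profiles(T):
    """Performance profiles in the sense of Dolan & More: T[p, s] is
    the cost (e.g. error norm or CPU time) of solver s on problem p.
    Returns rho(tau, s) = fraction of problems on which solver s is
    within a factor tau of the best solver."""
    T = np.asarray(T, dtype=float)
    ratios = T / T.min(axis=1, keepdims=True)     # performance ratios r_{p,s}
    def rho(tau, s):
        return np.mean(ratios[:, s] <= tau)
    return rho

# Hypothetical costs for n_p = 4 problems and n_s = 3 solvers.
costs = [[1.0, 2.0, 4.0],
         [3.0, 1.0, 2.0],
         [1.0, 1.5, 2.0],
         [2.0, 1.0, 1.0]]
rho = performance_profiles(costs)
assert rho(1.0, 0) == 0.5   # solver 0 is best on half of the problems
assert rho(4.0, 2) == 1.0   # solver 2 is within a factor 4 everywhere
```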
Figure 1 shows the performance profiles for S^QR_n, S^QRSVD_n and S^SVD_n with respect to accuracy, using the data arranged in Table 1. It is observable from Figure 1 that the SVDATS2 and QRSVDATS2 methods show better performance than QRATS2. This means that Algorithm SVDATS2 has the highest probability of being the optimal solver with respect to numerical accuracy. Also, Algorithm QRSVDATS2 is better than Algorithm QRATS2.
Let us denote by T^QR_k, T^QRSVD_k and T^SVD_k the CPU times spent by applying the QRD, the combination of the QRD and the SVD of R, and the TSVD, respectively, for a selected n and the kth generated matrix W_k, k = 1, ..., 50. Columns 2, 3 and 4 in Table 2 contain the corresponding sums of CPU times. Figure 2 leads to the same conclusion as Figure 1.

Example 3
In this example we compare the same algorithms as in Example 2 on the numerical computation of the Moore-Penrose inverse. Instead of randomly generated matrices W we use the choice W = A^T, for which the computed outer inverse coincides with A^†. The dimensions n are equal to n = 10, 30, 50, 70, 90, 110, 130, 150, 170, 190.
The results are arranged in Table 3. Let us denote by N^QR, N^QRSVD and N^SVD the results generated by applying the QRD, the combination of the QRD and the SVD of R, and the TSVD, respectively. Columns 2, 3 and 4 in Table 3 contain the corresponding matrix norms. From Table 3 it is possible to conclude the following:
1) The method QRATS2 is the worst solver.
2) The method QRSVDATS2 reaches the best numerical precision. The only exception is observed in the case n = 70.

Example 4
In this example we continue the comparison of the same algorithms for the numerical computation of outer inverses. We again use the matrix A = S[n] and randomly generated matrices W of rank rank(W) = n/2. These matrices can be generated by the next Mathematica code: The results are arranged in Table 4. The dimensions n are equal to n = 10, 30, 50, 70, 90, 110, 130, 150, 170, 190. Let us denote by N^QR, N^QRSVD and N^SVD the results generated by applying the QRD, the combination of the QRD and the SVD of R, and the TSVD, respectively. Columns 2, 3 and 4 in Table 4 contain the values of the corresponding matrix norms. Comparing the results included in Table 4, it is possible to conclude the following:
1) The method QRATS2 is again the worst solver regarding numerical precision.
2) The methods QRSVDATS2 and SVDATS2 reach much better numerical precision.
Example 5 Finally, the same algorithms are compared on the numerical computation of outer inverses of the Läuchli matrix. The Läuchli matrix is an (n + 1) × n matrix defined by (see Higham, 1988)

L(n, μ) = [ e^T ; μ I_n ],  e = (1, ..., 1)^T ∈ R^n.

In this example, the matrix A is equal to A = L(n, 0.2). We again use randomly generated matrices W satisfying rank(W) = n/2. The results are arranged in Table 5. Comparing the results included in Table 5, it is possible to conclude the following.
1) The method QRATS2 attains the smallest error norm twice, in the first two cases.
2) The methods QRSVDATS2 and SVDATS2 generate results with much better numerical precision; each of them attains the smallest error norm six times.
3) QRATS2 is the best solver for small dimensions n, QRSVDATS2 for medium values, and SVDATS2 for the largest dimensions.
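The Läuchli test matrix used here (a row of ones on top of μ I_n) can be constructed as in the following sketch; the helper name `lauchli` is our own:

```python
import numpy as np

def lauchli(n, mu):
    """Laeuchli test matrix L(n, mu): a first row of ones followed by
    mu * I_n; an (n+1) x n matrix that becomes severely ill-conditioned
    as mu -> 0."""
    return np.vstack([np.ones((1, n)), mu * np.eye(n)])

L = lauchli(4, 0.2)
assert L.shape == (5, 4)
assert np.allclose(L[0], 1.0) and np.allclose(L[1:], 0.2 * np.eye(4))
```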
Example 6 The matrix A = L(n, 0.000002) is ill-conditioned and causes serious numerical problems in the computation of the required outer inverses. The results, obtained using randomly generated matrices W of rank rank(W) = n/2, are arranged in Table 6. The results included in Table 6 show that QRSVDATS2 is the best solver six times, SVDATS2 five times, and QRATS2 reaches the best values three times.

Conclusion
According to the performed numerical experiments, we conclude the following.
1) The SVD representation of outer inverses with prescribed range and null space is the most efficient in the case of low-rank matrices W.
2) The representation based on the combination of the QR decomposition and the SVD is the most efficient in the case W = A^T.
3) The combination of the QR decomposition and the SVD is a significant improvement over the QR decomposition method from (Stanimirović et al., 2012b).
Also, in the present paper we show that factorizations based on a bidiagonal form are useful in the representation of outer inverses of matrices with full row rank.

Figure 1. Performance profile regarding the accuracy presented in Table 1

Table 1. Comparison of three algorithms for computing outer inverses

Table 2. CPU time of three algorithms for computing outer inverses

Figure 2 illustrates the data arranged in Table 2 and shows the performance profiles for the methods QRATS2, QRSVDATS2 and SVDATS2 regarding the spent CPU time.

Table 4. Accuracy of three algorithms for computing outer inverses of the matrix A = S[n]

Table 6. Accuracy of three algorithms for computing the outer inverses of the ill-conditioned Läuchli matrix