A matrix decomposition and its applications

We show the uniqueness and construction (of the Z matrix in Theorem 2.1, to be exact) of a matrix decomposition and give an affirmative answer to a question proposed in [J. Math. Anal. Appl. 407 (2013) 436-442].


Introduction
Several recent papers [1][2][3][4][5] are devoted to the study of matrices with numerical range in a sector of the complex plane. In particular, this includes the study of accretive-dissipative matrices and positive definite matrices as special cases. A matrix decomposition plays a fundamental role in these works. The aim of this paper is twofold: to show the uniqueness, along with other properties, of the key matrix in the decomposition, and to give an affirmative answer to a question raised in [6].
As usual, the set of n × n complex matrices is denoted by M_n. For A ∈ M_n, the singular values and eigenvalues of A are denoted by σ_i(A) and λ_i(A), respectively, i = 1, . . . , n.
Let A ∈ M_n. We write A ≥ 0 if A is positive semidefinite (i.e. x*Ax ≥ 0 for all x ∈ C^n) and A > 0 if A is positive definite (i.e. x*Ax > 0 for all nonzero x ∈ C^n). For two Hermitian matrices A and B of the same size, we denote A ≥ B if A − B ≥ 0 (and A > B if A − B > 0). For a square complex matrix A, recall the Cartesian (or Toeplitz) decomposition (see, e.g. [7, p.6] and [8, p.7])

A = ℜA + iℑA, where ℜA = (A + A*)/2 and ℑA = (A − A*)/(2i) are Hermitian.
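As a quick illustration in NumPy (the matrix is random, chosen only for the check), the Cartesian decomposition and the Hermitian character of its two parts can be verified directly:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

M = (A + A.conj().T) / 2      # Re A, Hermitian
N = (A - A.conj().T) / 2j     # Im A, Hermitian

assert np.allclose(A, M + 1j * N)
assert np.allclose(M, M.conj().T) and np.allclose(N, N.conj().T)
```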

F. Zhang
The Cartesian decomposition of a matrix is unique. There are many interesting properties of such a decomposition. For instance, ℜ(R*AR) = R*(ℜA)R for any A ∈ M_n and any n × m matrix R. A celebrated result due to Fan and Hoffman (see, e.g. [7, p.73]) states that

λ_j(ℜA) ≤ σ_j(A), j = 1, . . . , n. (1)

For A ∈ M_n, the numerical range of A is the set in the complex plane

W(A) = {x*Ax | x ∈ C^n, x*x = 1}.

The classic Toeplitz-Hausdorff theorem asserts that the numerical range of a matrix is a compact and convex subset of the complex plane (see, e.g. [9, p.108]).
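Inequality (1) can be sanity-checked numerically (the test matrix is random, ours):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# Eigenvalues of the Hermitian part and singular values of A, both nonincreasing.
lam = np.sort(np.linalg.eigvalsh((A + A.conj().T) / 2))[::-1]
sig = np.linalg.svd(A, compute_uv=False)   # returned in nonincreasing order

# Fan-Hoffman: lambda_j(Re A) <= sigma_j(A) for every j.
assert np.all(lam <= sig + 1e-12)
```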
For α ∈ [0, π/2), let S_α be the sector in the complex plane given by

S_α = {z ∈ C | ℜz > 0, |ℑz| ≤ (ℜz) tan α}.

If the numerical range of A is contained in S_α for some α ∈ [0, π/2), then A is nonsingular and ℜA is positive definite. Moreover, W(A) ⊆ S_α implies W(R*AR) ⊆ S_α for any n × m matrix R of full column rank. A special and interesting case of this is when A is a diagonal (resp. normal) matrix: the numerical range is then the convex hull of the diagonal entries (resp. eigenvalues).
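A small numerical illustration (the construction is ours): a diagonal matrix with unimodular diagonal entries e^{iθ_j}, |θ_j| ≤ α, lies in S_α, and sampling field values z = x*Ax checks the defining inequalities of the sector:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
alpha = np.pi / 6

# Diagonal matrix with entries e^{i theta_j}, |theta_j| <= alpha, all in S_alpha;
# its numerical range is the convex hull of these points, hence also in S_alpha.
thetas = rng.uniform(-alpha, alpha, size=n)
A = np.diag(np.exp(1j * thetas))

# Sample field values z = x* A x over random unit vectors and check
# Re z > 0 and |Im z| <= (Re z) tan(alpha).
for _ in range(1000):
    x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    x /= np.linalg.norm(x)
    z = x.conj() @ A @ x
    assert z.real > 0 and abs(z.imag) <= np.tan(alpha) * z.real + 1e-12
```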
If W(A) is contained in the first quadrant of the complex plane, then ℜA and ℑA are positive semidefinite. We call such a matrix A accretive-dissipative. Note that if A is accretive-dissipative and nonsingular, then W(A) ⊆ e^{iπ/4} S_{π/4}, i.e. W(e^{−iπ/4}A) ⊆ S_{π/4}. By a continuity argument, we may assume that the accretive-dissipative matrices considered in this paper are nonsingular.
More can be said about sectors and numerical ranges. Observe that a sector S_α, α ∈ [0, π/2), is a positive convex cone (i.e. ax + by ∈ S_α for all positive a, b ∈ R and all x, y ∈ S_α); it has the addition-closure property, that is,

W(A) ⊆ S_α and W(B) ⊆ S_α imply W(A + B) ⊆ S_α.

A direct proof of this using the Cartesian decomposition goes as follows: W(A) ⊆ S_α means ℜA > 0 and (tan α)ℜA ± ℑA ≥ 0, and likewise for B; adding the corresponding inequalities gives ℜ(A + B) > 0 and (tan α)ℜ(A + B) ± ℑ(A + B) ≥ 0, that is, W(A + B) ⊆ S_α. We note here that, by means of numerical range sectors, Li, Rodman and Spitkovsky [10] studied fractional roots (powers) of elements in Banach algebras. In Section 2, we provide a detailed analysis of the so-called sectoral decomposition and show some important properties of it. In Section 3, we use the decomposition and majorization as tools to obtain some norm inequalities; a question raised in [6] is answered.

A matrix decomposition with a sector
We begin with a discussion of a matrix decomposition which we refer to as the sectoral decomposition. The existence of the decomposition for matrices with numerical range contained in a sector has appeared in [1, Lemma 2.1]. A similar observation was made by London [11] three decades ago (or even earlier by A. Ostrowski and O. Taussky), who used the factorization to prove a number of existing results. This decomposition theorem, simple as it looks, has been heavily used in recent papers [1][2][3][4][5]. In light of its importance, and for completeness and convenience, we restate it here; we then show the uniqueness and give a way of constructing the key matrix Z in the decomposition.
Theorem 2.1 Let A ∈ M_n with W(A) ⊆ S_α for some α ∈ [0, π/2). Then there exist an invertible matrix X and a unitary and diagonal matrix Z = diag(e^{iθ_1}, . . . , e^{iθ_n}) with all |θ_j| ≤ α such that A = XZX*. Moreover, such a matrix Z is unique up to permutation.

Proof Let A = M + iN be the Cartesian decomposition of A. Since W(A) ⊆ S_α, M is positive definite. By [8, Theorem 7.6.4] or [9, Theorem 7.6], M and N are simultaneously *-congruent and diagonalizable, that is, P*MP and P*NP are diagonal for some invertible matrix P. It follows that we can write

A = X Z_1 X* = Y Z_2 Y*,

where X and Y are nonsingular, Z_1 and Z_2 are unitary and diagonal. We may assume Y = I (otherwise replace X with Y^{−1}X). We show that Z_1 and Z_2 have the same main diagonal entries (regardless of order). For this, we show that β ∈ C is a diagonal entry of Z_1 with multiplicity k if and only if β is a diagonal entry of Z_2 with the same multiplicity. First, consider the case β = 1. Let Z_1 = C_1 + iS_1 and Z_2 = C_2 + iS_2 be the Cartesian decompositions of Z_1 and Z_2, respectively. Then XC_1X* = C_2 and XS_1X* = S_2. Since β = 1 is a diagonal entry of Z_1 with multiplicity k, 1 appears on the diagonal of C_1 k times, so S_1 has exactly k zeros on its diagonal (as |θ_j| ≤ α < π/2, sin θ_j = 0 only when e^{iθ_j} = 1). Thus rank(XS_1X*) = n − k. As XS_1X* = S_2, we have rank(S_2) = n − k. This implies that C_2, thus Z_2, contains k 1's on its diagonal. If β ≠ 1, multiplying by β̄ we have X(β̄Z_1)X* = β̄Z_2. Repeating the above argument with β̄Z_1 = C_1 + iS_1 and β̄Z_2 = C_2 + iS_2, we see that β̄Z_1 and β̄Z_2 each have k 1's on their diagonals; so each of Z_1 and Z_2 has k β's on the diagonal. We conclude that Z_2 is similar to Z_1 through permutation, i.e. Z_2 = PZ_1Pᵀ for a permutation matrix P, where Pᵀ denotes the transpose of P.
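The existence half of the construction can be sketched numerically (variable names ours): in the setting ℜA = M > 0, diagonalize S = M^{−1/2}NM^{−1/2} = U diag(d)U*, write 1 + i d_j = sec θ_j e^{iθ_j} with θ_j = arctan d_j, and obtain A = XZX* with X = M^{1/2}U diag(√(sec θ_j)) and Z = diag(e^{iθ_j}):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4

# Build a test matrix A = M + iN with M positive definite and N Hermitian,
# so that the real part of A is positive definite.
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
M = B @ B.conj().T + np.eye(n)                    # positive definite
C = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
N = (C + C.conj().T) / 2                          # Hermitian
A = M + 1j * N

# Simultaneous *-congruence: with S = M^{-1/2} N M^{-1/2} = U diag(d) U*,
# A = M^{1/2} U diag(1 + i d_j) U* M^{1/2}.  Writing 1 + i d_j =
# sec(theta_j) e^{i theta_j}, theta_j = arctan d_j, gives A = X Z X*.
w, V = np.linalg.eigh(M)
Mhalf = V @ np.diag(np.sqrt(w)) @ V.conj().T
Minvhalf = V @ np.diag(1 / np.sqrt(w)) @ V.conj().T
d, U = np.linalg.eigh(Minvhalf @ N @ Minvhalf)

theta = np.arctan(d)                              # all |theta_j| < pi/2
Z = np.diag(np.exp(1j * theta))
X = Mhalf @ U @ np.diag(np.sqrt(1 / np.cos(theta)))

assert np.allclose(A, X @ Z @ X.conj().T)         # the sectoral decomposition
```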
Note that in the above proof, the uniqueness of the Cartesian decomposition of A results in the uniqueness of Z. This may be singled out as an independent result.

Corollary 2.2 If G is a nonsingular matrix such that GSG* = T, where S and T are both unitary and diagonal, then S is similar to T through permutation.

Note that G in Corollary 2.2 and X in Theorem 2.1 are not unique in general. At this point, the best we can say about G is GSG*T* = I, the identity matrix.
Since cos α is decreasing in α on [0, π/2), the following are immediate.

Corollary 2.3 Let A ∈ M_n with W(A) ⊆ S_α for some α ∈ [0, π/2), and let A = XZX* be a sectoral decomposition of A, where X is invertible and Z is unitary and diagonal.

The following result gives a way of constructing the unique matrix Z.
One may check that M^{−1/2}AM^{−1/2} = I + iM^{−1/2}NM^{−1/2} is normal. For any unit vector z, z*(M^{−1/2}AM^{−1/2})z is a point in the xy-plane with x-coordinate x = 1. It follows that the numerical radius of M^{−1/2}AM^{−1/2}, i.e. w(M^{−1/2}AM^{−1/2}) = max{|z*(M^{−1/2}AM^{−1/2})z| | z ∈ C^n, z*z = 1}, is no more than sec α (as the hypotenuse of a right triangle whose adjacent leg has length 1). Since M^{−1/2}AM^{−1/2} is normal, all of its singular values are no more than sec α. In particular, for the spectral norm ‖·‖_2, we have ‖M^{−1/2}AM^{−1/2}‖_2 ≤ sec α.
Let γ_a and γ_b be respectively the largest and smallest values of the γ_j's in Theorem 2.4. For the Z in the decomposition, W(Z) is the polygonal region spanned by the diagonal entries of Z, which lie on the unit circle from e^{iγ_a} to e^{iγ_b}. For the matrix M^{−1/2}AM^{−1/2} in Corollary 2.5, W(M^{−1/2}AM^{−1/2}) is the vertical line segment from the point 1 + i tan γ_a to the point 1 + i tan γ_b. All these figures are contained in S_{γ_c}, where γ_c = max{|γ_a|, |γ_b|}, which is nothing but the γ(A) in Theorem 2.4. In practice, we find M^{−1/2}AM^{−1/2} first and then Z.

Example 1 Let A ∈ M_3 have the Cartesian decomposition A = M + iN.
Upon computation, the eigenvalues of M^{−1}N are 0, ±√2. It follows that the eigenvalues of Z are 1, e^{iρ}, e^{−iρ}, where ρ = arctan √2, and W(Z) is the triangle (with interior) with vertices 1 + 0i, e^{iρ}, e^{−iρ}; while W(A) is an oval disc (see [12, p.140]).

Given a matrix A, if the numerical range W(A) is contained in a half-plane, then we can rotate the numerical range so that it is relocated in a sector S_α for some α ∈ [0, π/2). What would be the best possible (smallest) value of such an α? Suppose that W(A) is contained in a region between two half-lines starting from the origin (a wedge) and let δ be the angle between the two half-lines, 0 ≤ δ < π. Then W(e^{iθ}A) ⊆ S_{δ/2} for some θ ∈ R. Such a rotation has no impact on certain quantities of the matrix, such as unitarily invariant norms, for ‖e^{iθ}A‖ = ‖A‖. This observation suggests that some matrix problems (of stable matrices, say) may be studied through matrices whose numerical ranges are contained in the right half-plane.
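Returning to Example 1, one concrete pair realizing its data (our choice; the author's matrices may differ) is M = I_3 together with a tridiagonal N, since then M^{−1}N = N has eigenvalues 0, ±√2:

```python
import numpy as np

# Stand-in data (ours): M = I_3 and a tridiagonal N whose eigenvalues are
# 0 and +-sqrt(2), so M^{-1}N = N reproduces the spectrum in Example 1.
M = np.eye(3)
N = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
eig = np.sort(np.linalg.eigvalsh(N))
assert np.allclose(eig, [-np.sqrt(2), 0.0, np.sqrt(2)])

# The angles are arctan of these eigenvalues; rho = arctan(sqrt 2) as in the text.
rho = np.arctan(np.sqrt(2.0))
theta = np.arctan(eig)
assert np.isclose(theta[2], rho) and np.isclose(theta[0], -rho)
```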

Norm inequalities for partitioned matrices
Recall that a norm ‖·‖ on M_n is unitarily invariant if ‖UAV‖ = ‖A‖ for any A ∈ M_n and any unitary U, V ∈ M_n. For B ∈ M_n, write σ(B) = (σ_1(B), . . . , σ_r(B), 0, . . . , 0) ∈ R^n, where r is the rank of B. Thus σ(A) and σ(B) are both in R^n.
Let A be an n-square complex matrix partitioned in the form

A = [ A_11  A_12 ]
    [ A_21  A_22 ],    (2)

where A_11 and A_22 are square. In [6], the following norm inequalities are proved (in the Hilbert space setting).
(A) [6, Theorem 3.3]: Let A ∈ M_n be accretive-dissipative and partitioned as in (2). Then for any unitarily invariant norm ‖·‖ on M_n,

(B) [6, Theorem 3.11]: Let A ∈ M_n be accretive-dissipative and partitioned as in (2). Then for any unitarily invariant norm ‖·‖ on M_n,

It is asked in [6] as an open problem whether the factor 4 in (3) and the factor √2 in (4) can be improved. Indeed, the factor √2 in (4) is optimal. To construct such an accretive-dissipative matrix, we can first find a matrix whose numerical range is contained in the sector S_{π/4} and then rotate it by +π/4.
However, the factor 4 in (3) can be improved to 2 (see Corollary 3.3). In this section, we extend (3) and (4) to some more general results. Let x = (x_1, . . . , x_n), y = (y_1, . . . , y_n) ∈ R^n. We denote x ∘ y = (x_1y_1, . . . , x_ny_n) and write x ≤ y to mean x_j ≤ y_j for j = 1, . . . , n. We rearrange the components of x and y in nonincreasing order: x_[1] ≥ · · · ≥ x_[n] and y_[1] ≥ · · · ≥ y_[n]. If Σ_{i=1}^k x_[i] ≤ Σ_{i=1}^k y_[i] for k = 1, . . . , n, we say that x is weakly majorized by y, denoted by x ≺_w y. If, in addition, the last inequality is an equality, i.e. Σ_{i=1}^n x_i = Σ_{i=1}^n y_i, we say that x is majorized by y, written as x ≺ y (see, e.g. [13, p.12]). It is well known that ‖A‖ ≤ ‖B‖ for all unitarily invariant norms if and only if σ(A) ≺_w σ(B). So, to some extent, the norm inequalities are essentially the same as the singular value majorization inequalities. As is known (see, e.g. [7, p.74]),
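The weak majorization relation just defined is a direct computation; a small checker (ours), tested on the classical instance σ(A + B) ≺_w σ(A) + σ(B):

```python
import numpy as np

def weakly_majorized(x, y, tol=1e-12):
    # x prec_w y: partial sums of the nonincreasing rearrangement of x
    # never exceed the corresponding partial sums for y.
    xs = np.sort(np.asarray(x, dtype=float))[::-1]
    ys = np.sort(np.asarray(y, dtype=float))[::-1]
    return bool(np.all(np.cumsum(xs) <= np.cumsum(ys) + tol))

# Classical singular value inequality: sigma(A + B) prec_w sigma(A) + sigma(B).
rng = np.random.default_rng(4)
A = rng.standard_normal((5, 5))
B = rng.standard_normal((5, 5))
sv = lambda X: np.linalg.svd(X, compute_uv=False)
assert weakly_majorized(sv(A + B), sv(A) + sv(B))
```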
Equivalently, for all unitarily invariant norms ‖·‖ on M_n,

Proof Let A = XZX* be a sectoral decomposition of A, where X is invertible and Z is unitary and diagonal. Then

The last inequality is by Corollary 2.3 (iv). The norm inequality (6) follows at once.
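As an illustration of a bound of this type, assume the inequality takes the form ‖A‖_2 ≤ sec α ‖ℜA‖_2 (an assumption of ours for this sketch, stated for the spectral norm); it can be checked on matrices built from a sectoral decomposition:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4
alpha = np.pi / 5

# Build A = X Z X* with angles theta_j in [-alpha, alpha]; then W(A) ⊆ S_alpha.
theta = rng.uniform(-alpha, alpha, size=n)
Z = np.diag(np.exp(1j * theta))
X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = X @ Z @ X.conj().T

M = (A + A.conj().T) / 2          # the real part of A
spec = lambda Y: np.linalg.norm(Y, 2)

# sec(alpha) * ||Re A|| dominates ||A|| in the spectral norm.
assert spec(A) <= (1 / np.cos(alpha)) * spec(M) + 1e-9
```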
Theorem 3.2 Let A ∈ M_n be partitioned as in (2) and assume W(A) ⊆ S_α for some α ∈ [0, π/2). Then for any unitarily invariant norm ‖·‖ on M_n,

Proof Let A_11 be p × p. By Theorem 2.1, we may assume that A = XZX* is a sectoral decomposition of A, where X is invertible and Z is unitary and diagonal. We partition X conformally with A. Using Corollary 2.3 (ii), we have

So the inequality (8) holds for A_12. The inequality for A_21 is proved similarly.
If A is a positive definite matrix, then α = 0 and sec α = 1 in (8).
The inequality (9) is stronger than (3). Moreover, the constant factor 2 is best possible over all accretive-dissipative matrices and unitarily invariant norms. To present the next theorem, we need a lemma which is interesting in its own right. Here we regard λ(H_11) and λ(H_22) as vectors in R^n (by adding 0's if necessary) with components arranged in nonincreasing order. It is also known (see [15] or [16]). We must also point out that (11) has appeared in [14, p.217] and that a more general result is available in [17, Theorem 2.1]. We include our proof here as it is short and elementary; it is also the most elegant one in the author's opinion.
Theorem 3.5 Let A ∈ M_n be partitioned as in (2) and let W(A) ⊆ S_α for some α ∈ [0, π/2). Then for any unitarily invariant norm ‖·‖ on M_n,

Proof By Lemma 3.1 and noticing that

ℜA = [ ℜA_11  *    ]
     [ *      ℜA_22 ],

the desired inequality follows at once, since ‖ℜX‖ ≤ ‖X‖ for any X.