Is There a Name for the Decomposition of a Partitioned Orthogonal Matrix?

Summary
The discussion revolves around the decomposition of a partitioned orthogonal matrix, specifically the existence of a matrix V_2 that completes V_1, which has orthonormal columns, to an orthogonal matrix V. The theorem referenced states that the range of V_2 is the orthogonal complement of the range of V_1. While the theorem is noted as a standard result in linear algebra, the original poster could not find a proof in common textbooks. A suggestion is made to rephrase the problem in terms of extending an orthonormal basis for a subspace to one for the whole space, with the Gram-Schmidt process used in practice. The conversation highlights that there is no settled name for this particular construction.
Dafe
"Partitioned Orthogonal Matrix"

Hi,
I was reading the following theorem in the Matrix Computations book by Golub and Van Loan:

If V_1 \in \mathbb{R}^{n\times r} has orthonormal columns, then there exists V_2 \in \mathbb{R}^{n\times (n-r)} such that
V = [V_1 \; V_2] is orthogonal.
Note that ran(V_1)^{\bot} = ran(V_2).
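(For a concrete instance with n = 2 and r = 1: if V_1 = \frac{1}{\sqrt{2}}[1,\ 1]^T, then one valid choice is V_2 = \frac{1}{\sqrt{2}}[1,\ -1]^T, and V = [V_1 \; V_2] is orthogonal; V_2 is not unique, since -V_2 works just as well.)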

The book also says that this is a standard result from introductory linear algebra.

So I picked up my copy of Introduction to Linear Algebra by Strang and did not find it.
I then looked in Matrix Analysis and Applied Linear Algebra by Carl D. Meyer, where he mentions this under the name "partitioned orthogonal matrix", but I did not find a proof there either.

Is there a proper name for this "decomposition"?

Thanks.
 


This may be easier to see if you rephrase the problem. The columns of V_1 form an orthonormal basis for an r-dimensional subspace of \mathbb{R}^n, and it is a standard result from the theory of Hilbert spaces that you may extend an orthonormal basis for a subspace to one for the whole space. The functional analyst in me would use Zorn's lemma to show that every orthonormal set (here, the columns of V_1) is contained in an orthonormal basis of the whole space, but this is overkill in the finite-dimensional case. In this case, I'd simply find a basis of \mathbb{R}^n that contains the columns of V_1 (using your favorite argument) and then apply the Gram-Schmidt process. Note that if you let the columns of V_1 be the first r vectors in the Gram-Schmidt process, they will remain unchanged (because they are already orthonormal).
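In the finite-dimensional case this is also easy to do numerically. Here is a minimal NumPy sketch (the function name complete_to_orthogonal is my own): a full QR factorization of V_1 plays the role of the Gram-Schmidt extension described above, since the trailing columns of the orthogonal factor span ran(V_1)^{\bot}.

```python
import numpy as np

def complete_to_orthogonal(V1):
    """Given V1 (n x r) with orthonormal columns, return V2 (n x (n - r))
    such that [V1 V2] is orthogonal."""
    n, r = V1.shape
    # Full QR: Q is n x n orthogonal, its first r columns span ran(V1),
    # so its last n - r columns form an orthonormal basis of ran(V1)^⊥.
    Q, _ = np.linalg.qr(V1, mode="complete")
    return Q[:, r:]

# Quick check: complete a random V1 and verify that V^T V = I.
rng = np.random.default_rng(0)
V1, _ = np.linalg.qr(rng.standard_normal((5, 2)))  # orthonormal columns
V2 = complete_to_orthogonal(V1)
V = np.hstack([V1, V2])
print(np.allclose(V.T @ V, np.eye(5)))  # True
```

Note that the first r columns of Q may differ from V_1 by column signs, but that does not matter here: only their span is needed for V_2 to be a valid complement, since V_1^T V_2 = 0 follows from ran(V_2) = ran(V_1)^{\bot}.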
 