Matrix multiplication and the dot product

In summary, the theorem states that for an n x n matrix A, if A^T A = I, then for any vector x in R^n, the length of Ax is equal to the length of x. This can be proven using the properties of the transpose and the fact that matrix multiplication is associative.
  • #1
EV33

Homework Statement


Suppose that A is an n x n matrix such that A^T A = I. Let x be any vector in R^n. Show that ||Ax|| = ||x||; that is, multiplication of x by A produces a vector Ax having the same length as x.


Homework Equations



sqrt(x^T x) = ||x||

The Attempt at a Solution



So I started setting up the equation with an n x n matrix (using a 2 x 2 as an example)...

A = [itex]\begin{pmatrix} a & b \\ c & d \end{pmatrix}[/itex],   A^T = [itex]\begin{pmatrix} a & c \\ b & d \end{pmatrix}[/itex]

A^T A = I = [itex]\begin{pmatrix} a^2 + c^2 & ab + cd \\ ba + dc & b^2 + d^2 \end{pmatrix}[/itex]

then I let the vector x = [itex]\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}[/itex]


then from there I was going to use the distance formula on Ax

but I wasn't sure whether taking the dot product of Ax is the same thing as matrix multiplication with Ax, and my second problem is that, if they are the same thing, I am not sure how to take the square root of a 2x2 matrix.

If someone can answer those two questions for me, I think I can finish the problem.

Thank you.
 
  • #2
You don't have to work with any explicit matrices. If you are writing expressions like Ax then you should be thinking of x as a column vector, so ||x|| = sqrt(x^T x) and therefore ||Ax|| = sqrt((Ax)^T (Ax)). Can you express (Ax)^T in terms of A^T and x^T?
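As a purely illustrative aside (not part of the proof), here is a quick numerical check of the transpose-of-a-product rule in NumPy, with an arbitrarily chosen matrix and vector:

[code]
import numpy as np

# Illustrative check only: for a matrix A and column vector x,
# transpose(Ax) equals transpose(x) times transpose(A), in that order.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
x = np.array([[5.0],
              [6.0]])          # x as a column vector

lhs = (A @ x).T                # transpose of the product
rhs = x.T @ A.T                # product of the transposes, order reversed
print(np.allclose(lhs, rhs))   # True
[/code]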
 
  • #3
I'm not too sure how to do that.

Would this argument work?

If A^T A = I, then A = A^T = I.

If A = I, then Ax = Ix = x,

therefore ||Ax|| = ||Ix|| = ||x||.
 
  • #4
EV33 said:
I'm not too sure how to do that.

Would this argument work?

If A^T A = I, then A = A^T = I.

If A = I, then Ax = Ix = x,

therefore ||Ax|| = ||Ix|| = ||x||.
No, because your first step is wrong. There are matrices other than I where [itex]A^TA=I[/itex].

Try looking up the properties of transpose, in particular, the transpose of a product.
 
  • #5
Ok, I looked up all I know about transpose, and I also looked up all I know about I...
so here is what I know, but I ended up using other theorems to solve it.

Theorem 10: If A and B are m x n matrices and C is an n x p matrix, then:

1. (A+B)^T = A^T + B^T
2. (AC)^T = C^T A^T
3. (A^T)^T = A


What I know about I...
- I is a matrix whose diagonal is all 1's and the rest are 0's.
- AI = A
- I is nonsingular (the only solution is the trivial solution).
- (A^(-1) A)^T = I


Here is my second attempt. I ended up using mainly other theorems than the ones above, but I am pretty sure I have it this time. Thank you to everyone who has helped me thus far, and to anyone who spends the time to read my proof.


If I is nonsingular then A and A transpose must be nonsingular, because if either A or A transpose were singular then I would have to be singular, and I happens to be nonsingular. The lemma states, "Let P, Q, and R be n x n matrices such that PQ=R. If either P or Q is singular then so is R."

Then I also know that all nonsingular matrices can be row reduced to I. If all nonsingular matrices can be reduced to I, and A is nonsingular, then I must be equal to A, by Theorem 1.

Theorem 1 states, "If one of the three row operations is applied to a system of linear equations, then the resulting system is equivalent to the original system."

So because any matrix times I is equal to that matrix, and I is equal to A, then...

A=I,

so Ax=Ix=x, therefore

||Ax|| = ||x||
 
  • #6
EV33 said:
Then I also know that all nonsingular matrices can be row reduced to I. If all nonsingular matrices can be reduced to I, and A is nonsingular, then I must be equal to A, by Theorem 1.
This is completely wrong. Just because a matrix is non-singular doesn't mean it equals I. It can be reduced to I, but the reduced matrix isn't equal to the original matrix.

An obvious counterexample to your logic: Take A=2I. It's obviously non-singular, but ||Ax|| = 2||x||. "Aha!" you say, "but [itex]A^TA \ne I[/itex]!" Yes, that's true, but your proof doesn't rely on that assumption either, so according to your proof's logic, A=2I should preserve the length of vectors.

So look at property 2 of theorem 10 and use the hint Dick gave above.
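As a quick numerical illustration of that counterexample (purely illustrative, with an arbitrarily chosen rotation matrix standing in as the length-preserving case):

[code]
import numpy as np

# Illustrative only. A = 2I is nonsingular but doubles lengths,
# while a rotation matrix R (which satisfies R^T R = I) preserves them.
x = np.array([3.0, 4.0])                 # ||x|| = 5

A = 2 * np.eye(2)
print(np.linalg.norm(A @ x))             # 10.0 = 2*||x||

theta = 0.7                              # arbitrary angle
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(np.allclose(R.T @ R, np.eye(2)))   # True: R^T R = I
print(np.linalg.norm(R @ x))             # 5.0 = ||x||
[/code]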
 
  • #7
Oh wow, that made things much easier, but I am still a little confused.

So I have (Ax)^T = x^T A^T

||x|| = ||Ax||

sqrt(x^T x) = sqrt((x^T A^T)(Ax))

x^T x = (x^T A^T)(Ax)

My problem starts here. I thought the order of multiplication mattered, and I feel like to get this to simplify I would have to break that order up, because I would multiply A^T by A first rather than x^T by A^T. Does order matter here? And if it does, does it prevent me from multiplying A^T by A first?
 
  • #8
What you're thinking of is the fact that matrix multiplication is not commutative, that is, [itex]AB\ne BA[/itex] generally, but it is associative, so [itex](AB)C = A(BC)[/itex].
 
  • #9
You've got (x^T A^T)(Ax). You are confusing commutativity with associativity. It's true you can't change the order of matrices, but you can regroup them, i.e. (AB)(CD) = A(BC)D. That's just associativity, and it is true.
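Written out explicitly, the chain of equalities this regrouping gives (using only property 2 of Theorem 10 and associativity) is

[tex]\|Ax\|^2 = (Ax)^T(Ax) = (x^T A^T)(Ax) = x^T (A^T A) x = x^T I x = x^T x = \|x\|^2,[/tex]

and since both norms are non-negative, taking square roots gives ||Ax|| = ||x||.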
 
  • #10
Ahhh, got it. Thank you so much for all the help, everyone.
 

What is matrix multiplication?

Matrix multiplication is a mathematical operation that involves multiplying two matrices together to produce a new matrix. It is used in various fields such as engineering, physics, and computer science to solve complex problems.
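For instance, a small illustrative computation (here in Python with NumPy; the specific matrices are arbitrary):

[code]
import numpy as np

# Illustrative example: a (2 x 3) matrix times a (3 x 2) matrix gives a (2 x 2) matrix.
A = np.array([[1, 2, 3],
              [4, 5, 6]])
B = np.array([[ 7,  8],
              [ 9, 10],
              [11, 12]])
print(A @ B)
# [[ 58  64]
#  [139 154]]
[/code]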

What is the dot product?

The dot product is a mathematical operation that calculates the scalar product of two vectors. It is also known as the inner product and is used to determine the angle between two vectors or to project one vector onto another.
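A small illustrative computation (again in Python with NumPy, with arbitrarily chosen vectors):

[code]
import numpy as np

# Illustrative example: the dot product and the angle between two vectors.
u = np.array([1.0, 0.0])
v = np.array([1.0, 1.0])

d = np.dot(u, v)                                        # 1.0 (a scalar)
cos_angle = d / (np.linalg.norm(u) * np.linalg.norm(v))
print(np.degrees(np.arccos(cos_angle)))                 # 45.0 degrees
[/code]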

What is the difference between matrix multiplication and the dot product?

Matrix multiplication involves multiplying two matrices, while the dot product involves multiplying two vectors. In matrix multiplication, the order of the matrices matters, but in the dot product, the order of the vectors does not matter. Additionally, the result of matrix multiplication is a matrix, while the result of the dot product is a scalar.
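This is exactly the point of confusion in the thread above: the dot product of two column vectors x and y is the same number as the 1x1 matrix product x^T y. A minimal sketch with arbitrary example vectors:

[code]
import numpy as np

# Illustrative example: for column vectors, the dot product x . y
# is the same number as the 1x1 matrix product x^T y.
x = np.array([[1.0], [2.0], [3.0]])
y = np.array([[4.0], [5.0], [6.0]])

print(np.dot(x.ravel(), y.ravel()))   # 32.0 (dot product as a scalar)
print(x.T @ y)                        # [[32.]] (1x1 matrix, same value)
[/code]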

How is matrix multiplication and the dot product used in real-world applications?

Matrix multiplication and the dot product are used in various real-world applications, such as image and signal processing, data compression, and machine learning. They are also used in computer graphics to rotate, scale, and translate objects.

What are some common mistakes when performing matrix multiplication and the dot product?

Some common mistakes when performing matrix multiplication include forgetting to check that the matrices have compatible dimensions, mixing up row and column order, and multiplying entries elementwise instead of taking row-by-column sums. For the dot product, common mistakes include expecting a vector rather than a scalar result, confusing it with the cross product, and forgetting that the sign of the result depends on the angle between the vectors.
