Cauchy-Schwarz inequality in Rudin

  • Context: Graduate
  • Thread starter: joecharland
  • Tags: Cauchy Inequality
SUMMARY

The discussion focuses on the proof of the Cauchy-Schwarz inequality as presented in Walter Rudin's analysis texts. The key step of the proof is the identity that the sum ##\sum |B a_j - C b_j|^2## equals the expression ##B(AB - |C|^2)##; since the sum is non-negative, so is ##B(AB - |C|^2)##, and for ##B > 0## this yields ##AB - |C|^2 \geq 0##. Participants express a desire to understand the intuition behind Rudin's proof methodology, noting that the Cauchy-Schwarz inequality holds in any inner product space and can also be derived from the properties of dot products in ##\mathbb{R}^n##. The discussion also references an external explanation for further clarity.

PREREQUISITES
  • Understanding of the Cauchy-Schwarz inequality
  • Familiarity with vector spaces and dot products
  • Knowledge of mathematical proofs and analysis
  • Basic concepts of non-negative sums in mathematics
NEXT STEPS
  • Study the proof of the Cauchy-Schwarz inequality in Walter Rudin's "Principles of Mathematical Analysis"
  • Learn about the properties of dot products in vector spaces
  • Explore the law of cosines and its application in vector analysis
  • Review external resources on mathematical proofs, such as the explanation provided by Berkeley's mathematics department
USEFUL FOR

Mathematics students, particularly those studying analysis, educators teaching vector spaces, and anyone seeking a deeper understanding of the Cauchy-Schwarz inequality and its proofs.

joecharland
I have worked my way through the proof of the Cauchy-Schwarz inequality in Rudin, but I am struggling to understand how one could have arrived at that proof in the first place. The essence of the proof is that this sum:
##\sum |B a_j - C b_j|^2##
is shown to be equivalent to the following expression:
##B(AB - |C|^2)##
Now since each term of the first sum is nonnegative, the sum is clearly greater than or equal to zero, so the expression $$B(AB - |C|^2)$$ is also greater than or equal to zero. If $$B = 0$$ the theorem is trivial, so assume that $$B > 0$$; then the inequality $$B(AB - |C|^2) \geq 0$$ implies that $$AB - |C|^2 \geq 0$$, which is the theorem.
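For reference, the algebra behind that equivalence (using Rudin's abbreviations ##A = \sum |a_j|^2##, ##B = \sum |b_j|^2##, ##C = \sum a_j \bar{b}_j##) is just expanding the square and collecting terms:

$$\sum_j |B a_j - C b_j|^2 = B^2 \sum_j |a_j|^2 - B\bar{C} \sum_j a_j \bar{b}_j - BC \sum_j \bar{a}_j b_j + |C|^2 \sum_j |b_j|^2$$
$$= B^2 A - B|C|^2 - B|C|^2 + B|C|^2 = B(AB - |C|^2).$$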

Now naturally what I want to understand is how to arrive at this proof in the first place. One starting intuition: if $$AB - |C|^2$$ could be written as a single sum, each term of which is nonnegative, that would give the desired result. But Rudin added a step to this by showing that $$B(AB - |C|^2)$$ equals such a sum, after which the factor B can be canceled. What train of thought would have led Rudin to this proof?
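As a quick numerical sanity check (my own sketch, not from the thread; the variable names simply mirror Rudin's A, B, C), the identity and the resulting inequality can be verified for random complex sequences:

```python
import random

# Rudin's quantities for complex sequences a_j, b_j:
# A = sum |a_j|^2, B = sum |b_j|^2, C = sum a_j * conj(b_j).
n = 5
a = [complex(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(n)]
b = [complex(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(n)]

A = sum(abs(x) ** 2 for x in a)
B = sum(abs(y) ** 2 for y in b)
C = sum(x * y.conjugate() for x, y in zip(a, b))

# The identity at the heart of the proof: a manifestly nonnegative sum
# on the left equals B(AB - |C|^2) on the right.
lhs = sum(abs(B * x - C * y) ** 2 for x, y in zip(a, b))
rhs = B * (A * B - abs(C) ** 2)

assert abs(lhs - rhs) < 1e-9          # the identity holds (up to rounding)
assert A * B - abs(C) ** 2 >= -1e-9   # Cauchy-Schwarz: |C|^2 <= AB
```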

There is an explanation offered here:
http://math.berkeley.edu/~gbergman/ug.hndts/06x2+03F_104_q+a.txt

But I am still struggling to figure out that explanation too. Can anyone either help or direct me to a useful resource?

Thanks!
 
IMHO, Rudin may have proven it this way because he found it more elegant. When I took analysis, I proved it in the following way, which is loosely more of a derivation than a proof. The Cauchy-Schwarz inequality is true in a vector space, so given two vectors a and b, the formula is a direct consequence of the dot product of two vectors in ##\mathbb{R}^n##, which in turn can be seen as a consequence of the law of cosines applied in a vector space. I too wanted a more solid analytical proof than that, but following the advice of the text and my professor, instead of letting the switch between seeing n-tuples as vectors or as points in space confuse me, I embraced the ability to go back and forth effortlessly between the two views. For any point p in n-space we may assign an n-dimensional vector op; conversely, for any vector op in n-space we may assign a point p. By moving freely between these mindsets, one sees that a proof of the Cauchy-Schwarz inequality in the vector sense is truly sufficient; in fact, you probably won't take the time to prove it at all, since it is obviously true.
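The vector-space view above can be sketched numerically (my own illustration, not from the thread): for real vectors, ##a \cdot b = |a||b|\cos\theta##, and ##|\cos\theta| \leq 1## gives ##|a \cdot b| \leq |a||b|## immediately.

```python
import math
import random

# Random real vectors in R^n (almost surely nonzero).
n = 4
a = [random.uniform(-1, 1) for _ in range(n)]
b = [random.uniform(-1, 1) for _ in range(n)]

dot = sum(x * y for x, y in zip(a, b))
norm_a = math.sqrt(sum(x * x for x in a))
norm_b = math.sqrt(sum(y * y for y in b))

# Cauchy-Schwarz in R^n: |a . b| <= |a| |b|
assert abs(dot) <= norm_a * norm_b + 1e-12

# Equivalently, the angle between nonzero a and b is well defined.
cos_theta = dot / (norm_a * norm_b)
assert -1.0 - 1e-12 <= cos_theta <= 1.0 + 1e-12
```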
 
