Spivak's Calculus (4ed) 1.19 Schwarz inequality

In summary: equality holds in the Schwarz inequality when x_{1} = \lambda y_{1} and x_{2} = \lambda y_{2} for some number \lambda \geq 0, and likewise when y_{1} = y_{2} = 0. If y_{1} and y_{2} are not both 0 and there is no number \lambda with x_{1} = \lambda y_{1} and x_{2} = \lambda y_{2}, then a certain quadratic in \lambda is strictly positive for every \lambda, and the quadratic formula (a negative discriminant) yields the strict Schwarz inequality.
  • #1
swevener
The problem
Given the Schwarz inequality, [itex]x_{1}y_{1} + x_{2}y_{2} \leq \sqrt{x_{1}^{2} + x_{2}^{2}} \sqrt{y_{1}^{2} + y_{2}^{2}}[/itex], prove that if [itex]x_{1} = \lambda y_{1}[/itex] and [itex]x_{2} = \lambda y_{2}[/itex] for some number [itex]\lambda \geq 0[/itex], then equality holds. Prove the same thing if [itex]y_{1} = y_{2} = 0[/itex]. Now suppose that [itex]y_{1}[/itex] and [itex]y_{2}[/itex] are not both 0 and that there is no number [itex]\lambda[/itex] such that [itex]x_{1} = \lambda y_{1}[/itex] and [itex]x_{2} = \lambda y_{2}[/itex]. Then

[tex]\begin{align*}
0 &\lt (\lambda y_{1} - x_{1})^{2} + (\lambda y_{2} - x_{2})^{2} \\
&= \lambda^{2} (y_{1}^{2} + y_{2}^{2}) - 2 \lambda (x_{1} y_{1} + x_{2} y_{2}) + (x_{1}^{2} + x_{2}^{2}).
\end{align*}[/tex]
Use the solutions to the quadratic equation to prove the Schwarz ineq.
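
(For reference, the equality case in the first part comes down to both sides reducing to the same value: if [itex]x_{1} = \lambda y_{1}[/itex] and [itex]x_{2} = \lambda y_{2}[/itex] with [itex]\lambda \geq 0[/itex], then

[tex]\begin{align*}
x_{1} y_{1} + x_{2} y_{2} &= \lambda (y_{1}^{2} + y_{2}^{2}), \\
\sqrt{x_{1}^{2} + x_{2}^{2}} \sqrt{y_{1}^{2} + y_{2}^{2}} &= \sqrt{\lambda^{2} (y_{1}^{2} + y_{2}^{2})} \sqrt{y_{1}^{2} + y_{2}^{2}} = \lambda (y_{1}^{2} + y_{2}^{2}),
\end{align*}[/tex]

using [itex]\sqrt{\lambda^{2}} = \lambda[/itex] since [itex]\lambda \geq 0[/itex]. The case [itex]y_{1} = y_{2} = 0[/itex] makes both sides 0.)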

My confusion
I can do all the parts of this, but I'm not sure how they fit together. I can't figure out how we go from the Schwarz ineq. to the quadratic equation, so I don't know why the lack of a real solution proves the ineq. I've tried working it forward and backward and all I've got is wasted paper and a sore wrist.
 
  • #2
Hi swevener!

You say you can do all the parts? Yet the problem leads you in steps to the quadratic equation, so which part can't you do?
Maybe you can't see the origin of this inequality:
[tex]\begin{align*}
0 &\lt (\lambda y_{1} - x_{1})^{2} + (\lambda y_{2} - x_{2})^{2} \end{align*}[/tex]
It arises because you are told there is no number λ for which both x1 = λy1 and x2 = λy2, so for any λ at least one of the differences λy1 − x1 and λy2 − x2 is nonzero. The sum of their squares is therefore strictly greater than 0. You then expand the brackets and arrive at the quadratic shown.
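
Spelling that expansion out (just collecting terms by powers of [itex]\lambda[/itex]):

[tex]\begin{align*}
(\lambda y_{1} - x_{1})^{2} + (\lambda y_{2} - x_{2})^{2}
&= \lambda^{2} y_{1}^{2} - 2 \lambda x_{1} y_{1} + x_{1}^{2} + \lambda^{2} y_{2}^{2} - 2 \lambda x_{2} y_{2} + x_{2}^{2} \\
&= \lambda^{2} (y_{1}^{2} + y_{2}^{2}) - 2 \lambda (x_{1} y_{1} + x_{2} y_{2}) + (x_{1}^{2} + x_{2}^{2}).
\end{align*}[/tex]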
 
  • #3
Exactly what I was looking for. Thank you! :)
 
  • #4
swevener said:
[tex]\begin{align*}
0 &\lt (\lambda y_{1} - x_{1})^{2} + (\lambda y_{2} - x_{2})^{2} \\
&= \lambda^{2} (y_{1}^{2} + y_{2}^{2}) - 2 \lambda (x_{1} y_{1} + x_{2} y_{2}) + (x_{1}^{2} + x_{2}^{2}).
\end{align*}[/tex]

I'm just getting started with Spivak and math and am confused about the notation here, perhaps someone can clarify it for me. It seems to me this can be read two ways:

(a) [tex]\begin{align*}
0 &\lt (\lambda y_{1} - x_{1})^{2} + (\lambda y_{2} - x_{2})^{2} \\
0 &= \lambda^{2} (y_{1}^{2} + y_{2}^{2}) - 2 \lambda (x_{1} y_{1} + x_{2} y_{2}) + (x_{1}^{2} + x_{2}^{2}).
\end{align*}[/tex]

or (b) [tex]\begin{align*}
0 &\lt (\lambda y_{1} - x_{1})^{2} + (\lambda y_{2} - x_{2})^{2} \\
&= \\
0 &\lt \lambda^{2} (y_{1}^{2} + y_{2}^{2}) - 2 \lambda (x_{1} y_{1} + x_{2} y_{2}) + (x_{1}^{2} + x_{2}^{2}).
\end{align*}[/tex]

I'm inclined to read this as (b), and that can easily be shown to hold true, but I want to make sure I understand what's going on here and am not making a mistake as I proceed.
 
  • #5
Yes, (b) is what's intended. I didn't like the liberties the authors took there, either. I think it could definitely have been expressed with more rigor.

This symbol would have been appropriate; though I'd be content with just plain
 
  • #6
NascentOxygen said:
Yes, (b) is what's intended. I didn't like the liberties the authors took there, either. I think it could definitely have been expressed with more rigor.

This symbol would have been appropriate; though I'd be content with just plain

Thanks for the prompt reply. So I'm on the right track, but there's a gap in this problem that I can't close. After many hours I finally glanced at Spivak's solution, and it still doesn't satisfy me. He notes that the inequality:

[tex]\begin{align*}
0 &\lt \lambda^{2} (y_{1}^{2} + y_{2}^{2}) - 2 \lambda (x_{1} y_{1} + x_{2} y_{2}) + (x_{1}^{2} + x_{2}^{2}).
\end{align*}[/tex]

holds for every [itex]\lambda[/itex], so the corresponding quadratic equation has no solution. That's all well and good, but then, from the previous problem's result about quadratic equations, he goes on to infer that we must have:

[tex]\begin{align*}
\left( \frac{2 (x_{1} y_{1} + x_{2} y_{2})}{y_{1}^{2} + y_{2}^{2}} \right)^{2} - \frac{4 (x_{1}^{2} + x_{2}^{2})}{y_{1}^{2} + y_{2}^{2}} &\lt 0
\end{align*}[/tex]

which yields the Schwarz inequality. I can certainly see that this is supposed to be of the form [itex]b^{2} - 4c \lt 0[/itex], but where does this formula even come from?

It seems to me that

[tex]\begin{align*}
0 &\lt \lambda^{2} (y_{1}^{2} + y_{2}^{2}) - 2 \lambda (x_{1} y_{1} + x_{2} y_{2}) + (x_{1}^{2} + x_{2}^{2})
\end{align*}[/tex]

is in the form [itex]ax^{2} + bx + c = a(x^{2} + bx/a + c/a)[/itex], which would mean that if we let [itex]a = y_{1}^{2} + y_{2}^{2}[/itex], [itex]x = -\lambda[/itex], [itex]c = x_{1}^{2} + x_{2}^{2}[/itex], and [itex]b = 2(x_{1} y_{1} + x_{2} y_{2})[/itex], then our formula isn't actually [itex]b^{2} - 4c \lt 0[/itex] but [itex]b^{2}/a - 4c/a \lt 0[/itex]. I'm just not seeing how to close the gap.

I hope this isn't all nonsense, and I hope there aren't too many typos (my first shot at tex).

Cheers to anyone who read,
A
 
  • #7
Don't replace [itex]\lambda[/itex] with [itex]-x[/itex]; leave it as [itex]\lambda[/itex].

And set [itex]b = -2 (x_{1} y_{1} + x_{2} y_{2})[/itex].
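
A sketch of how this closes the gap, using the standard fact that a quadratic [itex]a \lambda^{2} + b \lambda + c[/itex] with [itex]a \gt 0[/itex] that is strictly positive for every real [itex]\lambda[/itex] must have [itex]b^{2} - 4ac \lt 0[/itex]:

[tex]\begin{align*}
a &= y_{1}^{2} + y_{2}^{2}, \qquad b = -2 (x_{1} y_{1} + x_{2} y_{2}), \qquad c = x_{1}^{2} + x_{2}^{2}, \\
b^{2} - 4ac &= 4 (x_{1} y_{1} + x_{2} y_{2})^{2} - 4 (y_{1}^{2} + y_{2}^{2})(x_{1}^{2} + x_{2}^{2}) \lt 0,
\end{align*}[/tex]

so [itex](x_{1} y_{1} + x_{2} y_{2})^{2} \lt (x_{1}^{2} + x_{2}^{2})(y_{1}^{2} + y_{2}^{2})[/itex], and taking square roots gives the strict Schwarz inequality. Dividing [itex]b^{2} - 4ac \lt 0[/itex] through by [itex]a^{2} \gt 0[/itex] gives Spivak's form [itex](b/a)^{2} - 4c/a \lt 0[/itex]; the expression [itex]b^{2}/a - 4c/a[/itex] in post #6 is missing one factor of [itex]a[/itex] in the first denominator.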
 
  • #8
It just dawned on me what other moves I need to close the gap, and I picked up on a few other errors above. I don't know why I didn't see it before. Thanks for your help.
 

1. What is Spivak's Calculus (4ed) 1.19 Schwarz inequality?

The Schwarz inequality of problem 1.19 in Spivak's Calculus (4th ed.), also known as the Cauchy-Schwarz inequality, is a fundamental theorem stating that the inner product (dot product) of two vectors is bounded in absolute value by the product of their magnitudes.
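
In inner product notation the statement reads

[tex]|\langle x, y \rangle| \leq \|x\| \, \|y\|,[/tex]

with equality exactly when x and y are linearly dependent; the two-variable version in this thread is the special case of [itex]\mathbb{R}^{2}[/itex] with the ordinary dot product.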

2. How is Schwarz inequality used in calculus?

The Schwarz inequality is used in calculus and analysis to prove other important inequalities, most notably the triangle inequality for vectors, and its integral form is used to bound integrals of products of functions. It is also a key tool in optimization problems and in the study of orthogonality.
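
For instance, the triangle inequality [itex]\|x + y\| \leq \|x\| + \|y\|[/itex] follows from it in one line:

[tex]\|x + y\|^{2} = \|x\|^{2} + 2 \langle x, y \rangle + \|y\|^{2} \leq \|x\|^{2} + 2 \|x\| \, \|y\| + \|y\|^{2} = (\|x\| + \|y\|)^{2}.[/tex]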

3. What is the significance of Schwarz inequality in mathematics?

The Schwarz inequality is significant because it guarantees that the inner product of two vectors, divided by the product of their magnitudes, always lies between -1 and 1, so the angle between two nonzero vectors in an inner product space is well defined. This is essential to the geometry of vector spaces, and the inequality has many applications in fields such as physics, engineering, and economics.

4. Can you explain the proof of Schwarz inequality?

One standard proof, the one Spivak's problem walks through, observes that the sum of squares [itex]\sum_{i} (\lambda y_{i} - x_{i})^{2}[/itex] is a quadratic in [itex]\lambda[/itex] that is never negative, so its discriminant cannot be positive; rearranging the discriminant condition gives the inequality. There are also geometric proofs (via projections) and algebraic or inductive proofs.
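
In the n-dimensional case the same discriminant argument gives

[tex]\left( \sum_{i=1}^{n} x_{i} y_{i} \right)^{2} \leq \left( \sum_{i=1}^{n} x_{i}^{2} \right) \left( \sum_{i=1}^{n} y_{i}^{2} \right),[/tex]

which is the Cauchy-Schwarz inequality for n-dimensional vectors.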

5. What are the applications of Schwarz inequality in real life?

Schwarz inequality has many applications in real life, such as in signal processing, image compression, and machine learning. It is also used in statistics to measure the correlation between variables and in finance to calculate portfolio risk. Additionally, it has applications in physical sciences, such as in quantum mechanics and electromagnetism.
