Why Are My Gram-Schmidt Results Different on Wolfram|Alpha and MATLAB?

  • Context: MHB
  • Thread starter: nedf
  • Tags: Process

Discussion Overview

The discussion revolves around discrepancies in the results of the Gram-Schmidt process as computed by Wolfram|Alpha and MATLAB. Participants explore the implications of using different definitions of the dot product, particularly in the context of complex numbers, and question the uniqueness of orthonormal bases derived from a given matrix.

Discussion Character

  • Debate/contested
  • Technical explanation
  • Mathematical reasoning

Main Points Raised

  • One participant notes a difference in results when computing the second term of the Gram-Schmidt process and questions the reason for this discrepancy.
  • Another participant suggests that the difference may arise from the need to conjugate the dot product for complex numbers, indicating that the formula used in the Gram-Schmidt process should reflect this requirement.
  • A participant explains that both definitions of the dot product are valid but are based on different conventions, which could lead to inconsistencies if not properly accounted for in the calculations.
  • There is a discussion about the implications of using different dot product definitions on the results of the Gram-Schmidt process, with one participant asserting that using the correct formula should yield consistent results.
  • Participants discuss the uniqueness of orthonormal bases, with one stating that an orthonormal basis is typically not unique and providing examples to illustrate this point.

Areas of Agreement / Disagreement

Participants express differing views on the definitions of the dot product and their impact on the Gram-Schmidt process. No final resolution of the discrepancy between the Wolfram|Alpha and MATLAB results is recorded, though participants agree that consistently using a single dot-product convention should make the results coincide. The non-uniqueness of orthonormal bases is explained with a concrete example.

Contextual Notes

Participants highlight the importance of definitions and conventions in mathematical processes, particularly regarding the dot product in complex vector spaces. The discussion remains open regarding the implications of these differences on the results obtained from various computational tools.

nedf said:
Wolfram|Alpha provides this solution:
Wolfram|Alpha: Computational Knowledge Engine

However, when I compute the second term $\mathbf v_2 = \mathbf x_2 - \frac{\mathbf x_2 \cdot \mathbf v_1}{\mathbf v_1 \cdot \mathbf v_1}\mathbf v_1$, the result is different from the one above. What's wrong?

Hi nedf! Welcome to MHB! ;)

Did you take into account that the dot product for complex numbers requires a conjugation?
That is, $\mathbf a \cdot \mathbf b = \sum a_i\overline{b_i}$.
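As an aside, the two conjugation conventions can be compared numerically. The following is a minimal sketch with made-up sample vectors, using NumPy (whose `vdot` happens to conjugate its first argument, matching the MathWorks convention discussed later in the thread):

```python
import numpy as np

# Sample complex vectors (arbitrary values, for illustration only)
a = np.array([1 + 2j, 3 - 1j])
b = np.array([2 - 1j, 1j])

# Convention used in this thread's formula: a.b = sum(a_i * conj(b_i))
dot1 = np.sum(a * np.conj(b))

# MathWorks/NumPy vdot convention: a.b = sum(conj(a_i) * b_i)
dot2 = np.vdot(a, b)

# The two conventions give complex-conjugate results
print(dot1, dot2)
assert np.isclose(dot1, np.conj(dot2))
```

For real vectors both conventions coincide, which is why the difference only surfaces with complex inputs.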
 
I like Serena said:
Hi nedf! Welcome to MHB! ;)

Did you take into account that the dot product for complex numbers requires a conjugation?
That is, $\mathbf a \cdot \mathbf b = \sum a_i\overline{b_i}$.
Thanks.
At the end of the page on https://www.mathworks.com/help/matlab/ref/dot.html
Why is the dot product defined this way instead? Wouldn't the answer be different?
$\mathbf a \cdot \mathbf b = \sum \overline{a_i}\, b_i$

Also, I computed `orth` in MATLAB:
(screenshot of MATLAB `orth` output)


Why is it different from Wolfram|Alpha's result?
(screenshot of Wolfram|Alpha output)


Is the orthonormal (normalized) basis unique for a given matrix?
 
nedf said:
Thanks.
At the end of the page on https://www.mathworks.com/help/matlab/ref/dot.html
Why is the dot product defined this way instead? Wouldn't the answer be different?
$\mathbf a \cdot \mathbf b = \sum \overline{a_i}\, b_i$

It's a matter of convention. Both dot products are valid inner products, and one is the conjugate of the other.
However, your formula for the Gram-Schmidt process assumes the $\sum a_i\overline{b_i}$ version; otherwise the dot product in the fraction would have to be the other way around.
So the results will be the same, provided we use the matching formula. Your Gram-Schmidt formula is incompatible with MathWorks's variant.

Note that with the standard $\sum a_i\overline{b_i}$ we have:
$$\mathbf v_1\cdot \mathbf v_2 = \mathbf v_1 \cdot\left( \mathbf x_2-\frac{\mathbf x_2 \cdot \mathbf v_1}{\mathbf v_1 \cdot \mathbf v_1}\mathbf v_1\right)
= \mathbf v_1 \cdot \mathbf x_2- \mathbf v_1 \cdot \left(\frac{\mathbf x_2 \cdot \mathbf v_1}{\mathbf v_1 \cdot \mathbf v_1}\mathbf v_1\right)
= \mathbf v_1 \cdot \mathbf x_2- \left(\frac{\mathbf x_2 \cdot \mathbf v_1}{\mathbf v_1 \cdot \mathbf v_1}\right)^*\mathbf v_1 \cdot \mathbf v_1 \\
= \mathbf v_1 \cdot \mathbf x_2- \left(\frac{\mathbf v_1 \cdot \mathbf x_2}{\mathbf v_1 \cdot \mathbf v_1}\right)\mathbf v_1 \cdot \mathbf v_1
= \mathbf v_1 \cdot \mathbf x_2 - \mathbf v_1 \cdot \mathbf x_2 = 0
$$
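The identity above can be checked numerically. This is a small sketch with arbitrary sample vectors, assuming the thread's convention $\mathbf a \cdot \mathbf b = \sum a_i \overline{b_i}$ (the helper `cdot` is hypothetical, not a library function):

```python
import numpy as np

def cdot(a, b):
    # Dot product matching the thread's formula: sum(a_i * conj(b_i))
    return np.sum(a * np.conj(b))

# Arbitrary complex input vectors
x1 = np.array([1 + 1j, 2 - 1j])
x2 = np.array([1j, 3 + 2j])

v1 = x1
# Gram-Schmidt second step with the matching convention
v2 = x2 - (cdot(x2, v1) / cdot(v1, v1)) * v1
print(cdot(v1, v2))  # ~ 0: v2 is orthogonal to v1

# Conjugating the wrong argument (mixing conventions) breaks orthogonality
v2_bad = x2 - (cdot(v1, x2) / cdot(v1, v1)) * v1
print(cdot(v1, v2_bad))  # generally nonzero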

nedf said:
Also, I computed `orth` in MATLAB. Why is it different from Wolfram|Alpha's result? Is the orthonormal (normalized) basis unique for a given matrix?

And no, an orthonormal basis is typically not unique.
Consider for instance $\mathbb R^3$.
The standard orthonormal basis is $\{(1,0,0),(0,1,0),(0,0,1)\}$.
But $\{(1,0,0),(0,1/\sqrt 2,1/\sqrt 2),(0,1/\sqrt 2,-1/\sqrt 2)\}$ is also an orthonormal basis.
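Both bases above can be verified numerically. A minimal sketch (rows of each matrix are the basis vectors; a matrix $B$ has orthonormal rows exactly when $BB^T = I$):

```python
import numpy as np

s = 1 / np.sqrt(2)
B1 = np.eye(3)                     # standard orthonormal basis of R^3 (rows)
B2 = np.array([[1, 0, 0],
               [0, s, s],
               [0, s, -s]])        # the alternative basis from the post

# Rows are orthonormal iff B @ B.T is the identity
print(np.allclose(B1 @ B1.T, np.eye(3)))  # True
print(np.allclose(B2 @ B2.T, np.eye(3)))  # True
```

Since both matrices are orthogonal and of full rank, both row sets span all of $\mathbb R^3$, so they are two distinct orthonormal bases for the same space.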
 
