MHB Why Are My Gram-Schmidt Results Different on Wolfram|Alpha and MATLAB?

  • Thread starter: nedf
  • Tags: Process
Summary
The differences in results between Wolfram|Alpha and MATLAB for the Gram-Schmidt process stem from the treatment of the dot product, particularly with complex numbers, which requires conjugation. The formula used in the Gram-Schmidt process must align with the specific definition of the dot product employed by each platform. Both Wolfram|Alpha and MATLAB provide valid inner products, but they differ in convention, leading to discrepancies in outcomes. Additionally, the orthonormal basis generated by the Gram-Schmidt process is not unique, as multiple valid orthonormal bases can exist for a given matrix. Understanding these conventions and properties is crucial for accurate computations in linear algebra.
nedf said:
Wolfram|Alpha provides this solution:
Wolfram|Alpha: Computational Knowledge Engine
However, when I compute the second term $v_2 = x_2 - \frac{x_2 \cdot v_1}{v_1 \cdot v_1} v_1$, the result is different from the one above. What's wrong?
Wolfram|Alpha: Computational Knowledge Engine

Hi nedf! Welcome to MHB! ;)

Did you take into account that the dot product for complex numbers requires a conjugation?
That is, $\mathbf a \cdot \mathbf b = \sum a_i\overline{b_i}$.
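For concreteness, here's a small check (a sketch using NumPy, not part of this thread) of why the conjugate is needed: without it, a nonzero complex vector can get a "squared length" that isn't even a positive real number.

```python
import numpy as np

a = np.array([1 + 2j, 3 - 1j])

# Naive product without conjugation: sum(a_i * a_i)
naive = np.sum(a * a)            # (5-2j) -- complex, unusable as a length

# With conjugation: sum(a_i * conj(a_i)) = |a_1|^2 + |a_2|^2
proper = np.sum(a * np.conj(a))  # (15+0j) -- real and positive: 5 + 10

print(naive, proper)
```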
 
I like Serena said:
Hi nedf! Welcome to MHB! ;)

Did you take into account that the dot product for complex numbers requires a conjugation?
That is, $\mathbf a \cdot \mathbf b = \sum a_i\overline{b_i}$.
Thanks.
At the end of the page on https://www.mathworks.com/help/matlab/ref/dot.html
Why is the dot product defined this way instead? Wouldn't the answer be different?
$\mathbf a \cdot \mathbf b = \sum \overline{a_i} b_i$

Also, I computed the orthonormal basis with MATLAB's `orth`:
[MATLAB `orth` output image]

Why is it different from Wolfram|Alpha's?
[Wolfram|Alpha output image]

Is the orthonormal (normalized) basis unique for a given matrix?
 
nedf said:
Thanks.
At the end of the page on https://www.mathworks.com/help/matlab/ref/dot.html
Why is the dot product defined this way instead? Wouldn't the answer be different?
$\mathbf a \cdot \mathbf b = \sum \overline{a_i} b_i$

It's a matter of convention. Both dot products are valid inner products, and one is the conjugate of the other.
However, your formula for the Gram-Schmidt process assumes the $\sum a_i\overline{b_i}$ version; otherwise the dot product in the fraction would have to be the other way around.
So the results will be the same, provided the formula matches the convention.
As written, your Gram-Schmidt formula is incompatible with MathWorks' variant.

Note that with the standard $\sum a_i\overline{b_i}$ we have:
$$\mathbf v_1\cdot \mathbf v_2 = \mathbf v_1 \cdot\left( \mathbf x_2-\frac{\mathbf x_2 \cdot \mathbf v_1}{\mathbf v_1 \cdot \mathbf v_1}\mathbf v_1\right)
= \mathbf v_1 \cdot \mathbf x_2- \mathbf v_1 \cdot \left(\frac{\mathbf x_2 \cdot \mathbf v_1}{\mathbf v_1 \cdot \mathbf v_1}\mathbf v_1\right)
= \mathbf v_1 \cdot \mathbf x_2- \left(\frac{\mathbf x_2 \cdot \mathbf v_1}{\mathbf v_1 \cdot \mathbf v_1}\right)^*\mathbf v_1 \cdot \mathbf v_1 \\
= \mathbf v_1 \cdot \mathbf x_2- \left(\frac{\mathbf v_1 \cdot \mathbf x_2}{\mathbf v_1 \cdot \mathbf v_1}\right)\mathbf v_1 \cdot \mathbf v_1
= \mathbf v_1 \cdot \mathbf x_2 - \mathbf v_1 \cdot \mathbf x_2 = 0
$$
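The cancellation above can be checked numerically. Here's a sketch (NumPy; the vectors are made up for illustration, not the thread's actual example) with a `dot` helper following the $\sum a_i\overline{b_i}$ convention:

```python
import numpy as np

def dot(a, b):
    """a . b = sum a_i conj(b_i) -- linear in the first argument."""
    return np.sum(a * np.conj(b))

x1 = np.array([1 + 1j, 1 - 1j, 0])
x2 = np.array([1j, 2, 1 - 1j])

v1 = x1
# Gram-Schmidt step with the matching convention:
v2 = x2 - (dot(x2, v1) / dot(v1, v1)) * v1

# Same step with the projection coefficient flipped, i.e. MATLAB's
# convention fed into this formula -- orthogonality is lost:
wrong = x2 - (dot(v1, x2) / dot(v1, v1)) * v1

print(abs(dot(v1, v2)))     # ~0: v2 is orthogonal to v1
print(abs(dot(v1, wrong)))  # nonzero in general
```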

Also, I computed the orth on MATLAB. Why is it different from Wolfram|Alpha's? Is the orthonormal (normalized) basis unique for a given matrix?

And no, an orthonormal basis is typically not unique.
Consider for instance $\mathbb R^3$.
The standard orthonormal basis is $\{(1,0,0),(0,1,0),(0,0,1)\}$.
But $\{(1,0,0),(0,1/\sqrt 2,1/\sqrt 2),(0,1/\sqrt 2,-1/\sqrt 2)\}$ is also an orthonormal basis.
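A quick numerical check (NumPy sketch) that both bases in this example are indeed orthonormal, even though they differ:

```python
import numpy as np

s = 1 / np.sqrt(2)
# Rows are the basis vectors from the example above.
B1 = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
B2 = np.array([[1, 0, 0], [0, s, s], [0, s, -s]])

for B in (B1, B2):
    # Rows are orthonormal iff B @ B.T is the identity matrix.
    print(np.allclose(B @ B.T, np.eye(3)))  # True for both
```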
 