MHB Why Are My Gram-Schmidt Results Different on Wolfram|Alpha and MATLAB?

  • Thread starter: nedf
  • Tags: Process
nedf said:
Wolfram|Alpha provides this solution: [Wolfram|Alpha link]
However, when I compute the second term $v_2 = x_2 - \frac{x_2\cdot v_1}{v_1\cdot v_1} v_1$, the result is different from the one above. What's wrong?

Hi nedf! Welcome to MHB! ;)

Did you take into account that the dot product for complex numbers requires a conjugation?
That is, $\mathbf a \cdot \mathbf b = \sum a_i\overline{b_i}$.
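To illustrate the point, here is a minimal NumPy sketch of one Gram-Schmidt step using the conjugated dot product $\mathbf a \cdot \mathbf b = \sum a_i\overline{b_i}$. The vectors are made up for illustration, not the ones from the thread's Wolfram|Alpha query:

```python
import numpy as np

# Hypothetical complex test vectors (not the ones from the original query).
x1 = np.array([1.0 + 1.0j, 1.0 - 1.0j])
x2 = np.array([1.0 + 0.0j, 2.0 + 1.0j])

def cdot(a, b):
    """Dot product with the convention a·b = sum(a_i * conj(b_i))."""
    return np.sum(a * np.conj(b))

# One Gram-Schmidt step with the matching convention.
v1 = x1
v2 = x2 - (cdot(x2, v1) / cdot(v1, v1)) * v1

print(abs(cdot(v2, v1)))  # ~0: v2 is orthogonal to v1 under this inner product
```

If `cdot` instead conjugated its first argument without swapping the arguments in the formula, the residual would generally be nonzero, which is exactly the discrepancy observed.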
 
I like Serena said:
Hi nedf! Welcome to MHB! ;)

Did you take into account that the dot product for complex numbers requires a conjugation?
That is, $\mathbf a \cdot \mathbf b = \sum a_i\overline{b_i}$.
Thanks.
At the end of the page on https://www.mathworks.com/help/matlab/ref/dot.html
Why is the dot product defined this way instead? Wouldn't the answer be different?
$\mathbf a \cdot \mathbf b = \sum\overline{a_i} b_i$

Also, I computed the orth on MATLAB:
[screenshot: MATLAB orth output]

Why is it different from the Wolfram|Alpha result?
[screenshot: Wolfram|Alpha output]


Is the orthonormal (normalized) basis unique for a given matrix?
 
nedf said:
Thanks.
At the end of the page on https://www.mathworks.com/help/matlab/ref/dot.html
Why is the dot product defined this way instead? Wouldn't the answer be different?
$\mathbf a \cdot \mathbf b = \sum\overline{a_i} b_i$

It's a matter of convention. Both versions are valid inner products, and one is the complex conjugate of the other.
However, your formula for the Gram-Schmidt process assumes the $\sum a_i\overline{b_i}$ version; otherwise the dot product in the fraction would have to be the other way around.
So the results will be the same, provided the formula matches the convention. As written, your Gram-Schmidt formula is incompatible with the MathWorks variant.

Note that with the standard $\sum a_i\overline{b_i}$ we have:
$$\mathbf v_1\cdot \mathbf v_2 = \mathbf v_1 \cdot\left( \mathbf x_2-\frac{\mathbf x_2 \cdot \mathbf v_1}{\mathbf v_1 \cdot \mathbf v_1}\mathbf v_1\right)
= \mathbf v_1 \cdot \mathbf x_2- \mathbf v_1 \cdot \left(\frac{\mathbf x_2 \cdot \mathbf v_1}{\mathbf v_1 \cdot \mathbf v_1}\mathbf v_1\right)
= \mathbf v_1 \cdot \mathbf x_2- \left(\frac{\mathbf x_2 \cdot \mathbf v_1}{\mathbf v_1 \cdot \mathbf v_1}\right)^*\mathbf v_1 \cdot \mathbf v_1 \\
= \mathbf v_1 \cdot \mathbf x_2- \left(\frac{\mathbf v_1 \cdot \mathbf x_2}{\mathbf v_1 \cdot \mathbf v_1}\right)\mathbf v_1 \cdot \mathbf v_1
= \mathbf v_1 \cdot \mathbf x_2 - \mathbf v_1 \cdot \mathbf x_2 = 0
$$
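The incompatibility can be sketched numerically. NumPy's `np.vdot(a, b)` conjugates its first argument, i.e. it computes $\sum \overline{a_i} b_i$, the same convention as MATLAB's `dot`. The test vectors below are made up for illustration:

```python
import numpy as np

# Hypothetical test vectors for illustration.
x1 = np.array([1.0 + 1.0j, 2.0 - 1.0j])
x2 = np.array([0.0 + 1.0j, 1.0 + 1.0j])
v1 = x1

# np.vdot(a, b) = sum(conj(a_i) * b_i), the MathWorks-style convention.
# Using the thread's formula verbatim with this convention fails:
v2_wrong = x2 - (np.vdot(x2, v1) / np.vdot(v1, v1)) * v1
# Swapping the arguments in the numerator restores orthogonality:
v2_right = x2 - (np.vdot(v1, x2) / np.vdot(v1, v1)) * v1

print(abs(np.vdot(v1, v2_wrong)))  # generally nonzero
print(abs(np.vdot(v1, v2_right)))  # ~0
```

The residual in the first case is $2i\,\mathrm{Im}(\sum \overline{v_{1,i}}\, x_{2,i})$, so it vanishes only when that inner product happens to be real.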

nedf said:
Also, I computed the orth on MATLAB. Why is it different from the Wolfram|Alpha result? Is the orthonormal (normalized) basis unique for a given matrix?

And no, an orthonormal basis is typically not unique.
Consider for instance $\mathbb R^3$.
The standard orthonormal basis is $\{(1,0,0),(0,1,0),(0,0,1)\}$.
But $\{(1,0,0),(0,1/\sqrt 2,1/\sqrt 2),(0,1/\sqrt 2,-1/\sqrt 2)\}$ is also an orthonormal basis.
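The two bases above can be checked directly. A matrix whose rows form an orthonormal basis satisfies $BB^T = I$, and both bases pass that test while being different matrices:

```python
import numpy as np

# Two different orthonormal bases of R^3 (rows are basis vectors).
B1 = np.eye(3)
s = 1 / np.sqrt(2)
B2 = np.array([[1, 0, 0],
               [0, s, s],
               [0, s, -s]])

# Both satisfy B @ B.T = I, so both are orthonormal, yet they differ.
print(np.allclose(B1 @ B1.T, np.eye(3)))  # True
print(np.allclose(B2 @ B2.T, np.eye(3)))  # True
print(np.allclose(B1, B2))                # False
```

This is why MATLAB's `orth` and Wolfram|Alpha can both return correct orthonormal bases for the same matrix and still disagree: each tool picks its own basis.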
 