
Euclidean space: dot product and orthonormal basis

  1. Feb 3, 2015 #1
    Dear All,

    Here is one of the doubts I encountered after studying many linear algebra books and texts. Euclidean space is defined by introducing the so-called "standard" dot product (or inner product) in the form:

    [tex] (\boldsymbol{a},\boldsymbol{b}) = \sum \limits_{i} a_i b_i [/tex]

    With that one can define the metric and the vector norm, the latter as:

    [tex] || \boldsymbol{a} || = \sqrt{(\boldsymbol{a},\boldsymbol{a})} [/tex]

    etc. However, we know that the first formula is valid only when we choose an orthonormal basis, that is, a basis consisting of vectors which are mutually orthogonal and of unit length:

    [tex] (\boldsymbol{e}_i,\boldsymbol{e}_j) = \delta_{ij} [/tex]

    [tex] || \boldsymbol{e}_i || = 1 [/tex]

    Thus, to define an orthonormal basis one needs to define the dot product and the norm first. On the other hand, the dot product formula above works only in the case of an orthonormal basis. If we take any other basis, this formula will not be valid (in Euclidean space?). Is that correct? (See the small numerical sketch at the end of this post.) The question is then in what order all the terms should be defined so that the theory is consistent. Or perhaps there are many possible Euclidean spaces, each with its own metric, and the above choice is simply the one that resembles physical space the most? If so, why is it like this? I would like to avoid the formula:

    [tex] (\boldsymbol{a},\boldsymbol{b}) = || \boldsymbol{a} || \cdot || \boldsymbol{b} || \cdot \cos \theta [/tex]

    since for this we need the norm and the angle first... a closed loop, and one of my doubts about how to deal with the topic.
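
    To make the basis dependence concrete, here is a small numerical sketch I put together (Python with NumPy, just an illustration): the same two vectors give a different "component sum" in a non-orthonormal basis, unless the Gram matrix of that basis is inserted.

[code]
import numpy as np

# Two vectors written in the standard (orthonormal) basis of R^2.
a = np.array([1.0, 2.0])
b = np.array([3.0, 1.0])
print(np.dot(a, b))                  # standard dot product: 1*3 + 2*1 = 5.0

# A non-orthonormal basis: f1 = (1, 0), f2 = (1, 1) (columns of F).
F = np.array([[1.0, 1.0],
              [0.0, 1.0]])
a_f = np.linalg.solve(F, a)          # coordinates of a in the basis (f1, f2)
b_f = np.linalg.solve(F, b)

# Naively summing products of the new coordinates gives a different number...
print(np.dot(a_f, b_f))              # 0.0, not 5.0

# ...unless the Gram matrix G_ij = (f_i, f_j) of the basis is inserted:
G = F.T @ F
print(a_f @ G @ b_f)                 # 5.0 again
[/code]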

    Many thanks for explanations!

    Radek
     
  3. Feb 3, 2015 #2

    jbunniii

    Science Advisor
    Homework Helper
    Gold Member

    For any norm satisfying the parallelogram law
    $$\|x + y\|^2 + \|x - y\|^2 = 2\|x\|^2 + 2\|y\|^2$$
    we can define an inner product which is compatible with the norm (meaning that ##\langle x,x\rangle = \|x\|^2##) as follows. For a vector space over the real scalar field, define
    $$\langle x,y \rangle = \frac{1}{4}\left(\|x + y\|^2 - \|x - y\|^2\right)$$
    For a vector space over the complex scalar field, define
    $$\langle x,y \rangle = \frac{1}{4}\left(\|x + y\|^2 - \|x - y\|^2 + i\|x + iy\|^2 - i\|x - iy\|^2\right)$$
    If I recall correctly, this is an exercise in Axler's Linear Algebra Done Right. Also, I just did a search and found this PDF:

    Norm-Induced Inner Products
    http://www.pcs.cnu.edu/~jgomez/files/norm.pdf [Broken]
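
    As a quick numerical sanity check of the real identity above (a Python/NumPy sketch of my own, using the standard Euclidean norm; not taken from Axler or the PDF):

[code]
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(5)
y = rng.standard_normal(5)

norm = np.linalg.norm                # the standard Euclidean norm

# Parallelogram law: ||x+y||^2 + ||x-y||^2 = 2||x||^2 + 2||y||^2
print(np.isclose(norm(x + y)**2 + norm(x - y)**2,
                 2 * norm(x)**2 + 2 * norm(y)**2))          # True

# Real polarization identity: <x, y> = (||x+y||^2 - ||x-y||^2) / 4
print(np.isclose(np.dot(x, y),
                 (norm(x + y)**2 - norm(x - y)**2) / 4))    # True
[/code]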
     
    Last edited by a moderator: May 7, 2017
  4. Feb 4, 2015 #3
    Your answer explains that we can obtain an inner product from the norm, and the linked PDF file gives more details. To me, though, it seems like just swapping the problem around. The question is rather why we define the inner product (or the norm) the way I wrote it in the case of Euclidean spaces. What would be the reasoning behind all the definitions, and in what order should they be introduced to obtain a consistent theory?
     
  5. Feb 4, 2015 #4

    Fredrik

    Staff Emeritus
    Science Advisor
    Gold Member


    Regarding your definition ##(\boldsymbol{a},\boldsymbol{b}) = \sum_i a_i b_i##: if you're talking about the space ##\mathbb R^n##, that definition makes sense even if you don't mention a basis. If it's some other finite-dimensional vector space, you will probably define the inner product in a different way.

    Regarding ##(\boldsymbol{a},\boldsymbol{b}) = \|\boldsymbol{a}\|\,\|\boldsymbol{b}\|\cos\theta##: right, this formula is often taken as the definition of the angle between two vectors.
     
  6. Feb 4, 2015 #5
    A Euclidean space, or "real inner product space", is defined as a real vector space equipped with an additional operation, the inner product (dot product), that assigns to a pair of vectors ##\mathbf x##, ##\mathbf y## a real number ##(\mathbf x, \mathbf y)## (sometimes denoted ##\mathbf x\cdot \mathbf y##). This inner product has to be symmetric, ##(\mathbf x, \mathbf y)= (\mathbf y , \mathbf x)##, linear in each argument, and non-negative, ##(\mathbf x, \mathbf x)\ge 0##. The inner product is also supposed to be non-degenerate, meaning that ##(\mathbf x, \mathbf x)=0## only if ##\mathbf x =\mathbf 0## (the equality ##(\mathbf 0, \mathbf 0)=0## follows trivially from linearity).

    A Euclidean space is usually also assumed to be finite-dimensional.

    Once we are given an inner product satisfying all the above properties, we can develop the whole theory, i.e. define the norm, construct orthonormal bases, etc. The expression presented by the OP gives an example of an inner product; one can easily check that all the properties are satisfied. This inner product is called the standard inner product in ##\mathbb R^n##. We first define the standard inner product, and then check that the standard basis in ##\mathbb R^n## is an orthonormal basis (with respect to the standard inner product).

    So, to make a long story short: we first define an inner product, and only after that do we define orthogonality, so there is no vicious circle here.
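
    As an illustration of the "construct orthonormal bases" step, here is a minimal Gram–Schmidt sketch (Python with NumPy; a toy illustration of my own, with the inner product passed in as a function, so any inner product satisfying the above properties would do):

[code]
import numpy as np

def gram_schmidt(vectors, ip):
    """Orthonormalize linearly independent vectors w.r.t. the inner product ip(x, y)."""
    basis = []
    for v in vectors:
        # remove the components along the vectors already collected
        w = v - sum(ip(v, e) * e for e in basis)
        basis.append(w / np.sqrt(ip(w, w)))      # normalize: ||w|| = sqrt((w, w))
    return basis

# Example with the standard inner product on R^3.
ip = lambda x, y: float(np.dot(x, y))
vs = [np.array([1.0, 1.0, 0.0]),
      np.array([1.0, 0.0, 1.0]),
      np.array([0.0, 1.0, 1.0])]
es = gram_schmidt(vs, ip)
# The Gram matrix of the result is the identity, i.e. (e_i, e_j) = delta_ij.
print(np.round([[ip(a, b) for b in es] for a in es], 10))
[/code]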
     
  7. Feb 4, 2015 #6

    HallsofIvy

    Staff Emeritus
    Science Advisor

     
  8. Feb 11, 2015 #7
    Dear All,

    I think I understand the idea and the flow of the reasoning, especially what Hawkeye18 wrote. We first define the space with a selected inner product, and on that basis we define the norm, the orthonormal basis, etc. Now my question would be why exactly in the "real" Euclidean space (our normal physical space) we take the inner product to be equal to:
    [tex] (\boldsymbol{a},\boldsymbol{b}) = \sum \limits_{i} a_i b_i [/tex]
    in an orthonormal basis. Of course this works, but what is the reasoning behind it? Many thanks.

    Radek
     
  9. Feb 11, 2015 #8

    jbunniii

    Science Advisor
    Homework Helper
    Gold Member

    Given an orthonormal basis ##(e_n)_{n=1}^{N}## for ##\mathbb{R}^N##, we want
    $$\langle e_j, e_k \rangle = \begin{cases}
    1 & \text{if}\ \ j = k \\
    0 & \text{if}\ \ j \neq k
    \end{cases}$$
    and we want the inner product to be linear on both sides: if ##v,w,x \in \mathbb{R}^N## and ##a,b \in \mathbb{R}##, then
    $$\langle av + bw, x\rangle = a\langle v,x\rangle + b\langle w,x\rangle$$
    and
    $$\langle x, av + bw\rangle = a\langle x,v\rangle + b\langle x,w\rangle$$
    These conditions force the form you indicated. To see this, let ##v,w## be arbitrary vectors in ##\mathbb{R}^N##. Then, since ##(e_n)## is a basis, there are unique coefficients ##a_n## and ##b_n## such that
    $$v = a_1 e_1 + \cdots + a_N e_N\ \ \text{and}\ \ w = b_1 e_1 + \cdots + b_N e_N$$
    Therefore,
    $$\langle v,w\rangle = \langle a_1 e_1 + \cdots + a_N e_N, b_1 e_1 + \cdots + b_N e_N \rangle$$
    By linearity, the right hand side is equal to
    $$\sum_{j=1}^{N} \sum_{k=1}^{N} a_j b_k \langle e_j, e_k\rangle$$
    Now, using the fact that ##\langle e_j, e_k\rangle## is ##1## when ##j=k## and ##0## otherwise, this reduces to
    $$\sum_{j=1}^{N} a_j b_j = a_1 b_1 + \cdots + a_N b_N$$
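
    The same computation can be spot-checked numerically (a small Python/NumPy sketch, purely illustrative): expand the double sum with the Kronecker delta and compare with the coordinate formula.

[code]
import numpy as np

N = 4
rng = np.random.default_rng(1)
a = rng.standard_normal(N)          # coefficients a_j of v in the orthonormal basis
b = rng.standard_normal(N)          # coefficients b_k of w

delta = np.eye(N)                   # <e_j, e_k> = delta_jk for an orthonormal basis

# Bilinearity: <v, w> = sum_{j,k} a_j b_k <e_j, e_k>
double_sum = sum(a[j] * b[k] * delta[j, k] for j in range(N) for k in range(N))

# ...which collapses to the coordinate formula sum_j a_j b_j
print(np.isclose(double_sum, np.dot(a, b)))    # True
[/code]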
     
  10. Feb 12, 2015 #9
    Suppose you have an abstract finite-dimensional vector space, and you want to introduce an inner product there. A simple way is to take a basis and declare it to be orthonormal. Then, if ##\mathbf b_1, \mathbf b_2, \ldots, \mathbf b_n## is this basis, and ##\mathbf x = \sum x_k \mathbf b_k## and ##\mathbf y = \sum y_k \mathbf b_k##, we get ##(\mathbf x , \mathbf y) = \sum x_ky_k##, as was shown in the previous post. In fact, any possible inner product can be obtained this way.

    So, in ##\mathbb R^n## we take the simplest basis (the standard one) and declare it to be orthonormal.
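
    As a toy illustration of declaring a chosen basis orthonormal on an abstract space (a Python/NumPy sketch of my own, using quadratic polynomials identified with their coefficient vectors in the monomial basis):

[code]
import numpy as np

# Abstract space: real polynomials p(t) = c0 + c1*t + c2*t^2, identified with their
# coefficient vectors in the chosen basis b1 = 1, b2 = t, b3 = t^2.
def coords(c0, c1, c2):
    return np.array([c0, c1, c2], dtype=float)

# Declare (b1, b2, b3) orthonormal: the inner product is just the coordinate sum.
def ip(p, q):
    return float(np.dot(p, q))

p = coords(1.0, -2.0, 0.5)          # 1 - 2t + 0.5 t^2
q = coords(3.0, 0.0, 4.0)           # 3 + 4 t^2

# Spot-check the inner product axioms numerically:
print(np.isclose(ip(p, q), ip(q, p)))                        # symmetry
print(np.isclose(ip(2*p + q, q), 2*ip(p, q) + ip(q, q)))     # linearity in the 1st slot
print(ip(p, p) > 0)                                          # positivity for p != 0
[/code]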

    Here I was thinking as a mathematician, but let me now think as a physicist. The standard coordinate system for a physicist is an orthogonal one, with the same scale in all directions, which is exactly what an orthonormal basis means.

    I think for a physicist the natural definition of the dot product in 2D or 3D is just ##\|\mathbf a\| \| \mathbf b\| \cos \theta##; it has a natural physical interpretation via projection onto a line. Note that a "classical" physicist does not ask what the length is, he just knows how to find it, and the same goes for the angle. From this definition it can be shown in 2D and 3D that the dot product satisfies all the properties of an inner product, so by picking an orthonormal basis (system of coordinates) we get the standard formula for the dot product.
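
    Here is a small numerical illustration of that equivalence (a Python/NumPy sketch; the vectors are specified by their lengths and the angle between them, so the angle is not computed from the dot product):

[code]
import numpy as np

# Two plane vectors specified geometrically: their lengths and the angle between them.
len_a, len_b, theta = 3.0, 2.0, np.pi / 3       # theta = 60 degrees

# The physicist's definition: ||a|| ||b|| cos(theta)
geometric = len_a * len_b * np.cos(theta)       # 3 * 2 * 0.5 = 3.0

# The same vectors in an orthonormal coordinate system:
# a along the x-axis, b at angle theta from a.
a = np.array([len_a, 0.0])
b = np.array([len_b * np.cos(theta), len_b * np.sin(theta)])
coordinate = np.dot(a, b)                       # a1*b1 + a2*b2

print(np.isclose(geometric, coordinate))        # True
[/code]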

    Note that everything here can be made absolutely rigorous in 2D and 3D, not only for a physicist but for a mathematician, using the axioms of Euclidean geometry in the plane and in space (the ones you studied in high school).

    But mathematicians often prefer doing it the other way around: first introduce ##\mathbb R^n## with the standard inner product, and then give each object in classical geometry its representation in ##\mathbb R^2## or ##\mathbb R^3##. This way gives much simpler and more elegant proofs; the disadvantage is that it is less intuitive. On the other hand, it shows that Euclid's geometry is consistent (non-contradictory), i.e. that there is an object satisfying all of Euclid's axioms.
     