Norm of Vector Formed by Two Vectors

In summary: I think we are talking at cross purposes. I think the OP wants to know what the norm of the vector ##\phi = \partial_v + a \partial_x## is. The (squared) norm of this vector is not ##g_{00} + g_{11} + 2 g_{01}##; it is ##g_{\mu \nu} A^\mu A^\nu##, where ##A^\mu = (1, a)## are the components of the vector ##\phi## and ##g_{\mu \nu}## are the components of the metric tensor. This gives the expression I gave above. I think you are calculating the norm of the vector ##\phi = \partial_v + \partial_x## instead.
  • #1
PhyAmateur
If we have a vector $$\partial_v$$ and we want to find its norm, we easily say (according to the given metric) that the norm of that vector is $$g^{vv}\partial_v\partial_v$$.

My question is: what if we have a vector that is a combination of two vectors, like $$\phi = \partial_v + a\partial_x$$, where ##a## is any constant? How do we find the norm of $$\phi$$?
 
  • #2
If I'm understanding you correctly, you're using the partial derivatives with respect to the coordinates as a set of basis vectors for the tangent space. In modern notation, Latin indices usually indicate abstract indices, but my understanding is that you don't mean that. Your expression [itex]g^{vv}\partial_v\partial_v[/itex] has the index v appearing twice on top and twice on the bottom. This would be ungrammatical if you meant there to be an implied sum, so I'm taking this to mean that there is no implied sum here. If your coordinates are something like (u,v,w,x), then your [itex]\phi=\partial_v+a\partial_x[/itex] could also be written in terms of components as (0,1,0,a). To find its norm, you can do the same thing you'd do in ordinary linear algebra by exploiting the bilinearity of the inner product: (0,1,0,a)·(0,1,0,a)=(0,1,0,0)·(0,1,0,0)+(0,0,0,a)·(0,0,0,a)+2(0,1,0,0)·(0,0,0,a).
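Just to make the bilinearity step concrete, here is a minimal sympy sketch of the same expansion (the (u,v,w,x) coordinate ordering and the symbolic metric entries are assumptions purely for illustration):

```python
import sympy as sp

# Symbolic, symmetric 4x4 metric in hypothetical coordinates (u, v, w, x).
g = sp.Matrix(4, 4, lambda i, j: sp.Symbol(f"g_{min(i, j)}{max(i, j)}"))
a = sp.Symbol("a")

A = sp.Matrix([0, 1, 0, a])      # components of phi = d_v + a d_x
e_v = sp.Matrix([0, 1, 0, 0])    # components of d_v
e_x = sp.Matrix([0, 0, 0, a])    # components of a d_x

full = (A.T * g * A)[0]          # g(phi, phi) computed directly
split = (e_v.T * g * e_v)[0] + (e_x.T * g * e_x)[0] + 2 * (e_v.T * g * e_x)[0]
print(sp.simplify(full - split))  # prints 0: the bilinear expansion agrees
```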
 
  • #3
Thank you for the reply, but what about the metric? Like when we say the norm of the vector $$v^\mu$$ is $$g_{\mu\nu}v^{\mu} v^{\nu}$$, where $$g_{\mu\nu}$$ is the metric of the line element ##ds^2##.
 
  • #4
  • #5
PhyAmateur said:
Thank you for the reply, but what about the metric? Like when we say the norm of the vector $$v^\mu$$ is $$g_{\mu\nu}v^{\mu} v^{\nu}$$, where $$g_{\mu\nu}$$ is the metric of the line element ##ds^2##.

I'm not clear what you're asking here. Is there some specific part of my #2 that you have in mind? The expressions I wrote using dot-product notation imply the use of the metric, if that's what you had in mind.

Also, could you clarify whether my interpretation of your notation is what you had in mind? By the way, it also occurs to me that if you're using the partials as basis vectors for the tangent space, then the notation should probably be [itex]g_{vv}\partial_v\partial_v[/itex] (no implied sum). It looks bizarre, but the idea is that if [itex]\partial_v[/itex] is being used as a basis for the tangent space, then it's really an *upper*-index quantity, even though it's notated with a lower index.
 
  • #6
PhyAmateur said:
If we have a vector $$\partial_v$$ and we want to find its norm, we easily say (according to the given metric) that the norm of that vector is $$g^{vv}\partial_v\partial_v$$.

My question is: what if we have a vector that is a combination of two vectors, like $$\phi = \partial_v + a\partial_x$$, where ##a## is any constant? How do we find the norm of $$\phi$$?

I'm not sure this is right, actually. In tensor notation, when we write a vector A in a coordinate basis as ##A^\mu## (note that we do have to specify our basis to use tensor notation like this, though papers and textbooks are sometimes lax on this point and assume a coordinate basis is meant if no basis is explicitly given), we mean something like ##A^0 \partial_0 + A^1 \partial_1##. In your example, if we let the 0th coordinate be v and the 1st coordinate be x, this would be ##A^v \partial_v + A^x \partial_x##.

So the squared norm of a vector A would be ##\sum_{\mu,\nu = 0..n} g_{\mu\nu} A^\mu A^\nu##. To take your specific example, the squared norm of ##\partial_v## would be ##g_{vv}##, not ##g^{vv}##. And you need to take the square root to get the norm rather than the squared norm. This isn't intuitive, since the usual upper-lower index matching rule only works most of the time. Strictly speaking, ##g_{\mu\nu}## only has a precise meaning once you've specified the basis vectors, but it's commonly assumed that when you write a metric tensor using the symbol g, a coordinate basis is implied.

If we let ##A^0=1## and ##A^1=0## we have ##A = 1\,\partial_0 + 0\,\partial_1 = \partial_0 = \partial_v##. So the squared norm of A would be ##g_{00}##, not ##g^{00}##.

I'll leave the case of finding the norm of A where ##A^0=1## and ##A^1=1## as an exercise - for now at least.
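For concreteness, here is a small numeric sketch of the formula above (the 2-d metric entries are made up purely for illustration; any symmetric matrix would do):

```python
import numpy as np

def squared_norm(g, A):
    """Compute sum_{mu,nu} g_{mu nu} A^mu A^nu for components A in a coordinate basis."""
    g = np.asarray(g, dtype=float)
    A = np.asarray(A, dtype=float)
    return A @ g @ A

# Made-up symmetric 2-d metric, with the coordinates ordered (v, x):
g = np.array([[-1.0, 0.3],
              [ 0.3, 2.0]])

print(squared_norm(g, [1.0, 0.0]))  # squared norm of d_v alone -> g_00 = -1.0
```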
 
  • #7
So the answer would be $$A = g_{00} +g_{01} +g_{10} + g_{11}$$?
 
  • #9
PhyAmateur said:
So the answer would be ##A = g_{00} +g_{01} +g_{10} + g_{11}## ?

Is that what you get when you expand out the formula pervect gave you? It's this formula:

pervect said:
the squared norm of a vector A would be ##\sum_{\mu,\nu = 0..n} g_{\mu\nu} A^\mu A^\nu##.
 
  • #10
Can you please point out where I went wrong? I expanded according to ##A^0## and ##A^1##, where I set $$\mu,\nu = 1,2$$.
 
  • #11
PhyAmateur said:
So the answer would be $$A = g_{00} +g_{01} +g_{10} + g_{11}$$?

Yes - or at least that's what I get for the squared magnitude. The magnitude would be the square root of the above.
 
  • #12
PhyAmateur said:
I expanded according to ##A^0## and ##A^1## where I set ##\mu,\nu = 1,2##.

Either you expanded wrong or I'm missing something. As I understand it, you are expanding the vector ##\phi = \partial_0 + a \partial_1##. This vector has components ##A^0 = 1##, ##A^1 = a## (where I'm glossing over the fact that the partial derivatives actually have a lower index rather than an upper index; I don't think we need to open that can of worms here). So its (squared) norm is ##\phi^2 = g_{\mu \nu} A^{\mu} A^{\nu} = g_{00} A^0 A^0 + 2 g_{01} A^0 A^1 + g_{11} A^1 A^1## (where I have used the symmetry of the metric tensor, ##g_{01} = g_{10}## ). Substituting gives ##\phi^2 = g_{00} + 2 a g_{01} + a^2 g_{11}##.
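A quick symbolic check of that expansion, as a minimal sympy sketch (the symbol names below are just placeholders for the metric components):

```python
import sympy as sp

g00, g01, g11, a = sp.symbols("g00 g01 g11 a")
g = sp.Matrix([[g00, g01],
               [g01, g11]])   # symmetric 2x2 metric, g_10 = g_01
A = sp.Matrix([1, a])         # components of phi = d_v + a d_x

print(sp.expand((A.T * g * A)[0]))  # a**2*g11 + 2*a*g01 + g00
```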

pervect said:
at least that's what I get for the squared magnitude.

Not for the vector with two components, which was what I understood the OP to be asking about. See above.
 
  • #13
PeterDonis said:
Not for the vector with two components, which was what I understood the OP to be asking about. See above.

Ah, the confusion comes in that the original poster asked about ##\partial_v + a \partial_x##, but I missed the a and instead analyzed ##\partial_v + \partial_x##.
 
  • #14
I was thinking about how to explain this better and avoid getting dragged into the details of tensor notation, and thought maybe the following approach might be more helpful:

1) Coordinates. Any event can be described by its coordinates. Names vary; typically one works in a 4-d space-time with coordinates like (t,x,y,z), though one might also work in a space-time or space of lower dimension. For this post I'll mostly assume a 4-d space-time and use t,x,y,z as the coordinate names, except where otherwise mentioned.

2) Vectors. We call the partial derivatives with respect to said coordinates vectors. Graphically, they are typically represented by little arrows, drawn in the tangent space. Thus if our coordinates are (t,x,y,z), then ##\partial_t, \partial_x, \partial_y, \partial_z## are all vectors. The abstract properties of vectors are basically that they can be added together and multiplied by scalars. We can do both of these with partial derivatives.

3) Covectors. Covectors are the duals of vectors. You'll occasionally see them also called one-forms. The gradient of a coordinate, represented by the symbol d, is a covector. Thus if (t,x,y,z) are coordinates, then dt, dx, dy, and dz are covectors. Covectors are represented graphically rather like a contour map, by drawing parallel surfaces of constant coordinate. Covectors can also be added together and multiplied by scalars.

Below: a graphical depiction of a covector via stacked planes (left) and a vector via an arrow (right).


4) Combining (also sometimes called composing) vectors and covectors

The following can be taken as identities:

##\partial_x dx = \partial_y dy = \partial_z dz = \partial_t dt = 1##
##\partial_x dy = \partial_x dz = \partial_x dt = 0##
##\partial_y dx = \partial_y dz = \partial_y dt = 0##
##\partial_z dx = \partial_z dy = \partial_z dt = 0##
##\partial_t dx = \partial_t dy = \partial_t dz = 0##

These identities may look trivial - hopefully that makes them easy to remember. I'm not going to attempt any detailed explanation of where these identities came from here, other than to suggest that the reader curious about their origins might review their linear algebra textbook on the topic of vector spaces, the duals of vector spaces, and linear functionals.

[add]It might also be helpful to think about "row vectors" and "column vectors", the pre-tensor notation used for covectors and vectors, and to recall how the product of a row vector and a column vector is a simple scalar.
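To illustrate that last remark with a tiny numpy sketch (the numbers are arbitrary): multiplying a row vector by a column vector collapses to a single scalar.

```python
import numpy as np

row = np.array([[1.0, 0.0, 0.0, 2.0]])        # "row vector": covector components
col = np.array([[0.0], [1.0], [0.0], [3.0]])  # "column vector": vector components

print(row @ col)  # [[6.]] -- a 1x1 result, i.e. just a scalar
```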

5) The metric tensor

We will denote the metric tensor by the symbol g, using index-free notation. We will use g to compute the lengths of vectors; for this purpose we want to write g as a general linear combination of products of covectors, for reasons that will become apparent. The (t,x,y,z) notation becomes tedious when writing the metric tensor, so it is preferable to number our coordinates, letting t = ##x^0##, x = ##x^1##, y = ##x^2##, z = ##x^3##, for instance. Then, using our definition of covectors, we can write

g = ##\sum_{i,j=0..3} g_{ij} dx^i dx^j##

Example: If we go to a space of 2 dimensions for simplicity we can write

g = ##g_{00} dx^0 dx^0 + g_{01} dx^0 dx^1 + g_{10} dx^1 dx^0 + g_{11} dx^1 dx^1##

It turns out that g is symmetric, so that ##g_{ij} = g_{ji}##; hence the above is equivalent to ##g_{00} dx^0 dx^0 + 2 \, g_{01} dx^0 dx^1 + g_{11} dx^1 dx^1##.

Now, how do we use the above definitions to compute the length or magnitude of a vector, as per the example? The answer is that if A is a vector, then in index-free notation we can write the square of its length as g A A.

Let's work an example in the simple case of 2 dimensions, where we have some vector ##A = p \, \partial_0 + q \, \partial_1##. Then to get the squared length we write out the expressions for g and A as

##\left[ g_{00} dx^0 dx^0 + g_{01} dx^0 dx^1 + g_{10} dx^1 dx^0 + g_{11} dx^1 dx^1 \right] \left( p \partial_0 + q \partial_1\right) \left( p \partial_0 + q \partial_1\right) ##

Now we apply the identities that say that ##dx^i \partial_j## equals 0 if i is not equal to j, and 1 if i is equal to j. We then get the result

##p^2 g_{00} + 2 p q g_{01} + q^2 g_{11}##

Given that we know the length and squared length of a vector must be scalars, we can see why the entity g that we use to find the length of a vector was written as a sum of products of covectors. The key point is that a covector and a vector can be composed to give a scalar quantity.
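The same worked example can be checked symbolically. The sketch below is just an illustration under the assumption of symbolic metric components ##g_{ij}##: it encodes the step-4 identities by writing the basis covectors ##dx^i## as rows and the basis vectors ##\partial_j## as columns, so their product is 1 when the indices match and 0 otherwise.

```python
import sympy as sp

p, q = sp.symbols("p q")
g = sp.Matrix(2, 2, lambda i, j: sp.Symbol(f"g_{min(i, j)}{max(i, j)}"))  # symmetric g_ij

# Basis covectors dx^i as rows and basis vectors d_j as columns, so that
# dx^i(d_j) = 1 if i == j and 0 otherwise -- the identities from step 4.
dx = [sp.Matrix([[1, 0]]), sp.Matrix([[0, 1]])]
d = [sp.Matrix([1, 0]), sp.Matrix([0, 1])]

A = p * d[0] + q * d[1]  # the example vector A = p d_0 + q d_1

# g(A, A) = sum_{i,j} g_ij * dx^i(A) * dx^j(A)
norm_sq = sum(g[i, j] * (dx[i] * A)[0] * (dx[j] * A)[0]
              for i in range(2) for j in range(2))
print(sp.expand(norm_sq))  # g_00*p**2 + 2*g_01*p*q + g_11*q**2
```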
 

Related to Norm of Vector Formed by Two Vectors

1. What is the norm of a vector?

The norm of a vector is the length or magnitude of the vector. It is calculated by taking the square root of the sum of the squared components of the vector.

2. How is the norm of a vector calculated?

The norm of a vector is calculated using the formula ||v|| = √(v1² + v2² + ... + vn²), where v1, v2, ..., vn are the components of the vector.
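For example, a quick computation of this formula for the two-component vector (3, 4):

```python
import math

v = [3.0, 4.0]
norm = math.sqrt(sum(c * c for c in v))
print(norm)  # 5.0
```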

3. What is the significance of the norm of a vector?

The norm of a vector represents the distance or magnitude of the vector. It is useful in various mathematical and scientific calculations, such as finding the magnitude of a force or the distance between two points.

4. How is the norm of a vector related to its direction?

The norm of a vector is independent of its direction. This means that two vectors with the same norm can have different directions. However, the direction of a vector can be determined using its components and the norm.
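For instance, dividing each component by the norm gives a unit vector that captures the direction:

```python
v = [3.0, 4.0]
norm = sum(c * c for c in v) ** 0.5
direction = [c / norm for c in v]
print(direction)  # [0.6, 0.8] -- same direction as v, norm 1
```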

5. Can the norm of a vector be negative?

No, the norm of a vector is always a positive value. This is because it represents the length or magnitude of the vector, which cannot be negative.
