
Terms explanation

  1. Oct 26, 2012 #1
    Hello All,

    I am new to GR and studying it slowly. I am a little confused about the following terms and their usage:

    (a) Covariant and contravariant vectors - do they help with coordinate transformations?

    (b) What is a coordinate-free system? Do covariant and contravariant vectors play a role in it?

    (c) What is parallel transport? Is parallel transport somehow related to a coordinate-free system?

    (d) An invariant system -- is it somehow related to the above concepts?

    I am just looking for a descriptive explanation of the above concepts.

    Thanks,

    -- Shounak
     
  3. Oct 26, 2012 #2

    Fredrik


    (a) See this post and the posts that are linked to at the end of it. (You can skip the first paragraph).

    (b) I doubt that you got that term right. Can you post an exact quote of what you read, or even better, a link to the page at Google Books?

    (c) This is hard to explain in a forum post. The best place to read about it is "Riemannian manifolds: An introduction to curvature" by John M. Lee. I highly recommend that one and "Introduction to smooth manifolds" by the same author. ("Riemannian" explains connections, geodesics, parallel transport, covariant derivatives and curvature; "Introduction" explains just about everything else.)

    If all you're looking for is some intuition about the concept, think of a tangent vector to the surface of the earth as a straight rigid arrow pointing in some direction tangent to the surface. Parallel transport means moving the base of this arrow along some curve on the surface while keeping the arrow pointing in the "same" direction. I put "same" in quotes because you have to keep the arrow tangent to the surface at all times, so if you parallel transport it from the north pole to the equator, it won't actually end up pointing in the same direction, but it's the next best thing. (There is a rough numerical sketch of this below, after (d).)

    (d) I don't understand that term either. I need to see the context.
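
    Regarding (c), here is the rough numerical sketch of the arrow picture (a toy example of my own, not taken from any particular book). It integrates the parallel transport equation ##\dot v^i + \Gamma^i{}_{jk}\, v^j \dot x^k = 0## around a circle of constant colatitude ##\theta_0## on the unit sphere, using the Christoffel symbols of the round metric ##ds^2 = d\theta^2 + \sin^2\theta\, d\phi^2##. Even though the arrow is kept "as parallel as possible" at every step, it comes back rotated by ##2\pi\cos\theta_0## after one full loop, which is one way to see that the sphere is curved.

    # Parallel transport around the circle theta = theta0 on the unit sphere.
    # Nonzero Christoffel symbols of the round metric:
    #   Gamma^theta_{phi phi} = -sin(theta) cos(theta)
    #   Gamma^phi_{theta phi} = Gamma^phi_{phi theta} = cos(theta)/sin(theta)
    import math

    theta0 = math.pi / 3                  # colatitude of the circle (60 degrees)
    steps = 100_000
    dphi = 2 * math.pi / steps

    v_theta, v_phi = 1.0, 0.0             # start with the arrow pointing along e_theta ("south")
    sc = math.sin(theta0) * math.cos(theta0)
    cot = math.cos(theta0) / math.sin(theta0)

    for _ in range(steps):
        # transport equations along the curve theta = theta0, phi = t:
        #   dv^theta/dphi = sin(theta0) cos(theta0) * v^phi
        #   dv^phi/dphi   = -cot(theta0) * v^theta
        v_theta, v_phi = v_theta + sc * v_phi * dphi, v_phi - cot * v_theta * dphi

    # angle of the transported arrow in the orthonormal frame (e_theta, sin(theta0) e_phi)
    angle = math.degrees(math.atan2(math.sin(theta0) * v_phi, v_theta))
    print(angle)   # magnitude close to 180 degrees: the arrow returns pointing the opposite way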
     
  4. Oct 26, 2012 #3
    From what I know, coordinate free systems exist in and of themselves. For instance, to find the volume of a cube you need only find the length of a side, and then ##V=s^3##. Cartesian, cylindrical, spherical, etc. coordinates need not be used, even though any of them could be used to derive the formula for the volume. You deal with the problem using only what you're given, without mapping it to any coordinate system. This makes the answer you get very powerful, because it's almost like an absolute answer, though abstracted.
     
  5. Oct 26, 2012 #4

    Fredrik


    I don't think I've heard the term "coordinate free systems" for such things (or for anything else). It sounds like a strange thing to call it.
     
  6. Oct 26, 2012 #5
    I guess the OP means the coordinate-free or component-free approach to GR and tensor calculus that avoids indices and makes use of exterior calculus, Cartan's moving frames (tetrads), etc.
     
  7. Oct 26, 2012 #6
    Hello Fredrik,

    Actually I am taking a step-by-step approach to Susskind's lectures on General Relativity. The problem I am encountering is that my approach to GR is rather scattered. I can well understand what you have said about parallel transport. Still, I have a question: are eigenvalues or eigenvectors somehow related to parallel transport?

    Several things are hovering in my mind. First I am looking for an intuitive picture, and then for examples worked out mathematically:

    (1) In GR, coordinate transformations are required......from Euclidean, Cartesian coordinates to polar, spherical......

    (2) Do covariant and contravariant vectors help with coordinate transformations?

    (3) To calculate geometrical figures in spherical, non-Euclidean spaces, we use differential equations, somehow drawing an analogy to Euclidean spaces, where the Pythagorean theorem takes the form ##ds^2##.......

    (4) What is the advantage of using parallel transport?

    I don't know whether I have made this clear, but coordinate transformations, their relation to GR......

    Anyway, I will stop here.

    If you can, please throw some light on it......

    -- Shounak
     
  8. Oct 26, 2012 #7
    One more thing: I tried posting this in the Math section of this forum but got no replies. Can anybody tell me what an invariant polynomial is?

    -- Shounak
     
  9. Oct 27, 2012 #8

    Fredrik


    (0) Eigenvectors...hm...I don't immediately see a connection between parallel transport and eigenvectors.

    (1) I think you will have to be more specific.

    (2) No. I think you will have to read the stuff I linked to to understand the covariant/contravariant stuff better.

    (3) I don't understand the question.

    (4) This is a very interesting question. In both SR and GR, motion is represented by curves in spacetime. A particle is a physical system whose motion can be represented by only one curve. We would like to single out a set of curves that can represent the motion of non-accelerating particles. In SR, where spacetime is just ##\mathbb R^4## with the Minkowski metric, those curves are just the straight lines. So it's reasonable to expect that some sort of "straight lines" can represent non-accelerating motion in GR too. But what is a "straight line" in a curved space? That's the question we need to answer. The technical term for such a straight line is "geodesic". The definition goes like this: A curve is said to be a geodesic if its tangent vector (the 4-velocity) is parallel transported along the curve.

    Now we can postulate that geodesics represent inertial motion, and then define acceleration as a measure of how much the curve deviates from being a geodesic.
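
    For reference, written out in coordinates, "the tangent vector is parallel transported along the curve" is the geodesic equation (with ##\Gamma^\mu{}_{\alpha\beta}## the Christoffel symbols of the metric):
    $$\frac{d^2x^\mu}{d\tau^2} + \Gamma^\mu{}_{\alpha\beta}\,\frac{dx^\alpha}{d\tau}\frac{dx^\beta}{d\tau} = 0.$$
    In SR with inertial coordinates the Christoffel symbols vanish, and the solutions are exactly the straight lines mentioned above.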
     
  10. Oct 27, 2012 #9

    Fredrik


    No replies often means that the person who asked the question didn't make the effort to make it as easy as possible for other people to answer the question. I think that might be the problem here. I don't know exactly what the author meant by "invariant polynomial". I think I can guess, but wouldn't it be better for both of us (certainly for me) if you posted an exact quote so that I can see the context?
     
  11. Oct 27, 2012 #10

    WannabeNewton


    Sorry, I just wanted to really commend you for that recommendation :biggrin: I don't see why it isn't used more often.
     
  12. Oct 27, 2012 #11
    Hello Fredrik,

    Here is the quote:

    "I can't claim to encapsulate all of the reasons why the trace comes up, but mostly I think it has to do with the fact that the trace is the simplest nontrivial example of an invariant polynomial. An invariant polynomial is a homogeneous polynomial function of the entries of a n x n matrix which is unchanged by conjugation with an invertible matrix; in other words, P(g A g^(-1)) = P(A) for any n x n matrix A and any invertible n x n matrix g. The invariance condition is another way of saying that the values of an invariant polynomial are coordinate independent quantities; this is because any change of coordinates can be expressed as conjugation by an invertible matrix. Perhaps you recall that a matrix is really a representation of a linear transformation relative to a coordinate system; from this point of view, invariant polynomials are useful because they can be used to obtain information from a matrix which depends only on the linear transformation that it represents, not the coordinate system chosen.

    As I mentioned, the trace is an invariant polynomial, meaning Tr(A) = Tr(g A g^(-1)). The determinant is another example of an invariant polynomial, and both examples can be obtained as coefficients of the characteristic polynomial, all of which are invariant polynomials. The trace and the determinant seem to come up so often because they are extreme examples of invariant polynomials; the trace is a degree 1 polynomial which depends on a minimal collection of entries of the matrix (it is just the sum of the diagonal entries) while the determinant has the highest sensible degree for an invariant polynomial which depends on as many of the entries as possible. Since every matrix can be conjugated into an upper triangular matrix with its eigenvalues along the diagonal, invariant polynomials have something special to say about the eigenvalues of a matrix; specifically, the trace of a matrix is the sum of the eigenvalues and the determinant is the product of the eigenvalues. Since the trace in particular is so easy to compute, it can be a useful tool for obtaining quick information about the eigenvalues of a matrix without having to actually find them."


    If you could please explain this to me......
     
  13. Oct 27, 2012 #12

    Fredrik


    There's a lot to explain there. I will only try to get you started with the basics. You still need to read the stuff I linked to in my first reply.

    The components, or matrix elements, of a linear operator A with respect to a basis ##\{e_i\}## are defined by ##A_{ij}=(Ae_j)_i##, where the right-hand side denotes the ith component of the vector ##Ae_j## in the given basis. See this post for more about that. Keep in mind that the definition of matrix multiplication is ##(AB)_{ij}=\sum_k A_{ik} B_{kj}##. (Here I'm using a notation with all indices downstairs. In GR, we would usually write ##A^i{}_j## instead of ##A_{ij}##. Also note that we don't usually write out the summation sigmas, because there's always a sum over each of the indices that appear twice, and only those).
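
    If it helps to see that definition in action, here is a small sketch (a toy example of my own, not from the links above) that builds the matrix of a linear operator column by column from its action on a basis, using ##A_{ij}=(Ae_j)_i##:

    # Matrix of a linear operator: column j holds the components of op(e_j) in the chosen basis.
    import numpy as np

    def matrix_of(op, basis):
        B = np.column_stack(basis)                          # basis vectors as columns
        cols = [np.linalg.solve(B, op(e)) for e in basis]   # components of op(e_j) in that basis
        return np.column_stack(cols)

    # "rotate by 90 degrees in the plane", defined without reference to any matrix
    rotate = lambda v: np.array([-v[1], v[0]])

    std = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
    other = [np.array([1.0, 1.0]), np.array([0.0, 2.0])]    # a different basis

    print(matrix_of(rotate, std))     # [[0, -1], [1, 0]]
    print(matrix_of(rotate, other))   # different entries, same operator (and the same trace, 0)

    The same operator is represented by different matrices in different bases, which is exactly the situation the "invariant polynomial" quote is talking about.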

    As described in this post, if p is a point in the manifold, then each coordinate system that has p in its domain defines a basis for the tangent space at p. This explains why the basis for the tangent space changes when you change coordinate systems.

    The definition seems clear enough, but it's a bit harder to see what this has to do with coordinate independence. I did a quick calculation, and it seems to me that the last statement is only true if the components of the matrix are the components of a (1,1) tensor. The components of (2,0) tensors or (0,2) tensors transform differently.

    If you really want to understand these things, you have to challenge yourself to do a lot of exercises to ensure that you understand the basics well. You should e.g. prove that if the coordinates of points in the manifold transform according to ##x'=\Lambda x##, then the corresponding basis vectors transform according to ##e'_i=\Lambda^{-1} e_i##, and the dual basis vectors according to ##e'{}^i=\Lambda e^i##. Then you should be able to understand this:
    $$T'^i{}_j=T(e'{}^i,e'_j) =T(\Lambda^i{}_k e^k,(\Lambda^{-1})^l{}_j e_l) =\Lambda^i{}_k (\Lambda^{-1})^l{}_j T^k{}_l =(\Lambda T\Lambda^{-1})^i{}_j.$$ This is what explains why the components of a (1,1) tensor transform as ##T\to\Lambda T\Lambda^{-1}##.
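
    If you want a quick numerical check of that last line (a sketch of my own, using NumPy and a random invertible ##\Lambda##): the trace of the component matrix survives the change of basis for a (1,1) tensor, but not for a (0,2) tensor, which is the point about the two transforming differently.

    import numpy as np

    rng = np.random.default_rng(0)
    T = rng.normal(size=(4, 4))       # components in the original basis
    Lam = rng.normal(size=(4, 4))     # a random (almost surely invertible) change of basis
    Lam_inv = np.linalg.inv(Lam)

    T11 = Lam @ T @ Lam_inv           # (1,1) tensor: T -> Lambda T Lambda^{-1}
    T02 = Lam_inv.T @ T @ Lam_inv     # (0,2) tensor: T -> (Lambda^{-1})^T T Lambda^{-1}

    print(np.isclose(np.trace(T), np.trace(T11)))   # True: invariant
    print(np.isclose(np.trace(T), np.trace(T02)))   # False in general: not invariant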
     
    Last edited: Oct 27, 2012
  14. Oct 27, 2012 #13
    Hello Fredrik,

    Going by the first few lines: if I consider a Hermitian matrix
    $$\begin{pmatrix} 3 & 2-i \\ 2+i & 1 \end{pmatrix}$$
    and calculate its conjugate transpose, it gives the same matrix back.

    Can this be considered as something invariant?
     
  15. Oct 27, 2012 #14
    Also:

    "the values of an invariant polynomial are coordinate independent quantities; this is because any change of coordinates can be expressed as conjugation by an invertible matrix."

    What does this mean mathematically? Do we find the inverse of a matrix and then take the conjugate, or something else?
    .....

    -- Shounak
     
  16. Oct 27, 2012 #15

    Fredrik


    I don't see why it would be. And we're working with real manifolds, so all tensor components will be real.


    I think what they mean by "conjugation by an invertible matrix B" is the map ##A\mapsto BAB^{-1}##.
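
    To make the conjugation concrete (a small sketch of my own, not from the text you quoted): for an invertible matrix B, the trace and determinant of ##BAB^{-1}## equal those of A, and as the quoted passage says, they are the sum and the product of the eigenvalues.

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.normal(size=(3, 3))
    B = rng.normal(size=(3, 3))              # almost surely invertible
    C = B @ A @ np.linalg.inv(B)             # "conjugation by an invertible matrix B"

    eig = np.linalg.eigvals(A)
    print(np.isclose(np.trace(A), np.trace(C)))               # True:  P(B A B^-1) = P(A) for P = trace
    print(np.isclose(np.linalg.det(A), np.linalg.det(C)))     # True:  same for P = det
    print(np.isclose(np.trace(A), eig.sum().real))            # True:  trace = sum of eigenvalues
    print(np.isclose(np.linalg.det(A), np.prod(eig).real))    # True:  det = product of eigenvalues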
     