
Derivative of a vector

  1. Feb 7, 2012 #1
    Hello,

    In lecture today, my professor told us that the derivative of a row vector is a column vector. I have worked with vector calculus before and never came across this. I suspect it is a notational issue, but I would greatly appreciate it if someone could elaborate on this.
     
  3. Feb 8, 2012 #2
    I never encountered this convention, and I can't see where it can benefit the presentation.
    This seems like a perfectly good thing to ask your prof.
     
  4. Feb 8, 2012 #3
    Strictly speaking, a vector does not have a derivative. However, if you have a vector-valued function (for example a function representing position as a function of time), then you can certainly consider the derivative of that function; it will simply be another function (the velocity function).

    Also, some people seem not to be bothered by switching a row vector to a column vector, so I suppose in a sense your prof can be right, but I don't like that approach. Suppose you have a vector v(t); then the derivative with respect to t is simply the componentwise derivative of that vector function, which yields a row or column vector, depending on how you had it to start with.
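    For a concrete example: if [itex]\mathbf{v}(t) = (\cos t, \ \sin t)[/itex] is a row vector, then
    [tex] \mathbf{v}'(t) = (-\sin t, \ \cos t), [/tex]
    which is again a row vector; differentiating with respect to the parameter does not by itself turn a row into a column.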

    Does this help?
     
  5. Feb 8, 2012 #4

    jambaugh

    Science Advisor
    Gold Member

    The derivative (gradient) of a scalar with respect to a vector, i.e. of scalar function of a vector variable, should be expressed as a dual vector. If you represent the vector variable as a column vector of variables then the derivative (gradient) should be written as a row vector of partial derivatives.

    Beyond that, (parametric) derivatives of row vectors yield row vectors, and likewise with column vectors (parametric = derivatives w.r.t. a parameter, the vectors being functions of a single variable, e.g. [itex] \vec{x}(t)[/itex]).

    Row vectors and column vectors live in different (though isomorphic) spaces. However when we work with general vectors (in yet another space) with a given basis we can write the basis expansion using both row and column vectors like this:

    [tex] x\mathbf{i}+y\mathbf{j} + z\mathbf{k} = \left(\begin{array}{ccc} \mathbf{i} & \mathbf{j} & \mathbf{k} \end{array}\right)\left[\begin{array}{c}x \\ y \\ z \end{array}\right] = \left(\begin{array}{ccc} x & y & z \end{array}\right)\left[\begin{array}{c} \mathbf{i} \\ \mathbf{j} \\ \mathbf{k} \end{array}\right] [/tex]

    This way we can represent general vectors (via a given basis) as either row vectors or column vectors of components and do so interchangeably. I find it convenient to (mostly) use column vectors to represent coordinate vectors and then row vectors to represent the dual gradients. But it is all a matter of convention and convenience.
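    For a concrete example of that convention: take the scalar function [itex]R(\mathbf{w}) = w_1^2 + 3w_2[/itex] of a column vector variable. Then
    [tex] \mathbf{w} = \left[\begin{array}{c} w_1 \\ w_2 \end{array}\right], \qquad \frac{dR}{d\mathbf{w}} = \left(\begin{array}{cc} 2w_1 & 3 \end{array}\right), \qquad dR = \frac{dR}{d\mathbf{w}}\,\mathbf{dw} = \left(\begin{array}{cc} 2w_1 & 3 \end{array}\right)\left[\begin{array}{c} dw_1 \\ dw_2 \end{array}\right], [/tex]
    so the gradient shows up naturally as a row (dual) vector acting on column increments by left matrix multiplication.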
     
  6. Feb 8, 2012 #5
    Perhaps I should have been clearer in my question. I am looking at dR(w)/dw where w is a vector and R(w) is a scalar function of a vector variable. In this case, jambaugh's post seems to make sense: if w is a row vector, then dR(w)/dw is a column vector. Thanks for all your help!
     
  7. Feb 8, 2012 #6

    jambaugh

    Science Advisor
    Gold Member

    Let me elaborate further on why you get a dual vector. To generalize the idea of a derivative to vector calculus, use differentials as local coordinates in the local linear approximation to a function.

    Given [itex] y = f(x)[/itex] then the local linear approximation is:
    [tex] dy = f'(x)dx \quad\quad\text{ that is to say } y+dy = f(x)+f'(x)dx \approx f(x+dx)[/tex]
    Allow either x or y or both to be vectors here. You then have the derivative as a linear-operator-valued function of x; this linear operator maps [itex]dx[/itex]-type objects to [itex]dy[/itex]-type objects. We can define it as a limit of a difference quotient if we are careful to avoid what looks like division by a vector:
    [tex] \mathbf{dy} = f'(\mathbf{x})\mathbf{dx} \equiv \lim_{h\to 0} \frac{ f(\mathbf{x}+h\mathbf{dx}) - f(\mathbf{x})}{h}[/tex]
    Here [itex] h[/itex] is a real number and the difference quotient is well defined provided the range and domain of f are vector spaces (so we can add elements and multiply by scalars h and 1/h). That includes of course the case of 1-dimensional vectors we call scalars.
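    If it helps, here is a minimal numerical sketch of that difference-quotient definition (a toy function of my choosing; the names are just illustrative):
    [code]
    import numpy as np

    def f(x):
        # f : R^2 -> R^2,  f(x, y) = (x*y, x + y^2)
        return np.array([x[0] * x[1], x[0] + x[1] ** 2])

    def fprime(x):
        # The linear operator f'(x), written out as the Jacobian matrix
        return np.array([[x[1], x[0]],
                         [1.0,  2.0 * x[1]]])

    x  = np.array([1.0, 2.0])
    dx = np.array([0.3, -0.5])

    for h in (1e-1, 1e-3, 1e-6):
        quotient = (f(x + h * dx) - f(x)) / h   # (f(x + h dx) - f(x)) / h
        print(h, quotient, fprime(x) @ dx)      # quotient -> f'(x) dx as h -> 0
    [/code]
    The quotient converges to [itex]f'(\mathbf{x})\mathbf{dx}[/itex], i.e. the Jacobian acting on the increment by left multiplication.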

    The nature of [itex]f'(x)[/itex] then is as a linear operator and we have the following cases:
    • dy,dx both scalars: the linear operator [itex]f'[/itex] is just multiplication by a number.
    • dy vector and dx scalar: the linear operator [itex]f'[/itex] maps scalars to vectors and so is multiplication by a vector.
    • dy scalar and dx vector: the linear operator [itex]f'[/itex] maps vectors to scalars and so is a dual vector (linear functional).
    • dy vector and dx vector: [itex]f'[/itex] is a full-blown linear operator representable by a matrix.
    Now as I mentioned, I prefer to use column vectors as coordinate vectors and row vectors for dual vectors so that in matrix format the action of [itex]f' [/itex] is left multiplication by a matrix.

    e.g.
    [tex]u = f(x,y);\quad du = f'(x,y)\left[\begin{array}{c}dx \\ dy \end{array}\right] = ( {\scriptsize{\frac{\partial u}{\partial x}\quad \frac{\partial u}{\partial y}}} ) \left[\begin{array}{c}dx \\ dy \end{array}\right][/tex]
    [tex]\left[\begin{array}{c}u \\ v\end{array}\right] = \mathbf{F}(x,y,z) ; \quad \left[\begin{array}{c}du \\ dv \end{array}\right]
    =\mathbf{F}' (x,y,z)\left[\begin{array}{c}dx \\ dy \\ dz \end{array}\right]= \left[\begin{array}{c c c} \scriptsize{ \frac{\partial u}{\partial x}} &\scriptsize{ \frac{\partial u}{\partial y}} &\scriptsize{ \frac{\partial u}{\partial z}}\\
    \scriptsize{ \frac{\partial v}{\partial x}} &\scriptsize{ \frac{\partial v}{\partial y}} &\scriptsize{ \frac{\partial v}{\partial z}}\end{array}\right] \left[\begin{array}{c}dx \\ dy \\ dz \end{array}\right] [/tex]
    Note the derivative of a scalar valued function of vectors is just the gradient and in resolving change of variables one gets the correct form of the gradient by preserving the differential relationship: [itex] du = \nabla u \cdot \mathbf{dr}[/itex].
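    For instance (the standard plane-polar example): [itex]du = \frac{\partial u}{\partial r}dr + \frac{\partial u}{\partial \theta}d\theta[/itex] while [itex]\mathbf{dr} = dr\,\hat{\mathbf{e}}_r + r\,d\theta\,\hat{\mathbf{e}}_\theta[/itex], so preserving [itex]du = \nabla u \cdot \mathbf{dr}[/itex] forces
    [tex] \nabla u = \frac{\partial u}{\partial r}\,\hat{\mathbf{e}}_r + \frac{1}{r}\frac{\partial u}{\partial \theta}\,\hat{\mathbf{e}}_\theta, [/tex]
    which is the familiar polar form of the gradient.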

    2nd Note: Here I'm using primed notation just to match up with single-variable calculus notation. More traditionally one uses the [itex]\nabla[/itex] operator, [itex]F' \to \nabla F[/itex], or one may use a Leibniz-type notation.

    3rd Note: Reversing my use of rows vs columns would allow one to better express directional derivatives and the differential operator, e.g.:
    [tex] \mathbf{d} =\left( \begin{array}{ccc}dx & dy & dz \end{array}\right) \left[ \begin{array}{c} \partial_x \\ \partial_y \\ \partial_z\end{array}\right][/tex]

    A final note. Here I am treating the differentials simply as local coordinates and not as differential forms per se and not as infinitesimals. In full blown differential geometry of manifolds we can't add points and differentials become cotangent vectors while the partial derivatives become tangent vectors. What we call "vector" and what we call "dual vector" is relative. The distinction between "tangent vector" and "co-tangent vector" is not.
     
  8. Feb 11, 2012 #7
    I'm not sure if this is right, but here's what I think. Suppose
    [tex] \frac{df}{d\vec{x}}: \mathbb{R}^m \rightarrow \mathbb{R}^n [/tex]
    Then [itex] \frac{df}{d\vec{x}} [/itex] is an [itex] n \times m [/itex] matrix.

    Let [itex] \vec{x} \in \mathbb{R}^m [/itex] so that [itex] \vec{x} [/itex] is an [itex] m \times 1 [/itex] column vector. In the special case where n = 1, [itex] \frac{df}{d\vec{x}} [/itex] is a [itex] 1 \times m [/itex] row vector.
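    Here is a quick numerical sanity check of those shapes (a rough sketch; the finite-difference helper and the test functions are just made up for illustration):
    [code]
    import numpy as np

    def numerical_jacobian(f, x, h=1e-6):
        # Finite-difference estimate of df/dx at x, returned as an n x m matrix,
        # where m = len(x) and n is the number of components of f(x).
        fx = np.atleast_1d(f(x))
        J = np.zeros((fx.size, x.size))
        for j in range(x.size):
            step = np.zeros_like(x)
            step[j] = h
            J[:, j] = (np.atleast_1d(f(x + step)) - fx) / h
        return J

    # Scalar-valued case (n = 1): the derivative comes out as a 1 x m row.
    R = lambda w: w[0] ** 2 + 3.0 * w[1]
    print(numerical_jacobian(R, np.array([1.0, 2.0])).shape)       # (1, 2)

    # Vector-valued case f : R^3 -> R^2: an n x m = 2 x 3 matrix.
    F = lambda w: np.array([w[0] * w[1], w[2] ** 2])
    print(numerical_jacobian(F, np.array([1.0, 2.0, 3.0])).shape)  # (2, 3)
    [/code]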

    Any thoughts?
     
  9. Feb 12, 2012 #8

    jambaugh

    Science Advisor
    Gold Member

    Did you mean to say: [itex] f:\mathbb{R}^m\to \mathbb{R}^n[/itex]?

    To be ultra-precise, [itex]\frac{df}{d\vec{x}}[/itex] is an [itex]n\times m[/itex] matrix valued function and so [itex]\frac{df}{d\vec{x}}(\vec{x})[/itex] is an [itex]n\times m[/itex] matrix.

    But beyond pedantic trivialities, yes you've got the gist of it.
     
  10. Feb 12, 2012 #9
    [itex] f: \mathbb{R}^m \rightarrow \mathbb{R}^n [/itex] is represented by an [itex] n \times m [/itex] matrix. I don't see how [itex] \frac{df}{d\vec{x}} [/itex] is also represented by an [itex] n \times m [/itex] matrix.

    Would you be able to provide a reference text for vector calculus that also does a fair treatment of matrices?
     
  11. Feb 13, 2012 #10

    jambaugh

    Science Advisor
    Gold Member

    Only if [itex]f[/itex] itself is a linear mapping!
    In the linear case, [itex] f(x) = Mx,\quad \frac{df(x)}{dx} = M,\quad \frac{df(x)}{dx} \cdot dx = M\cdot dx [/itex] where [itex]M[/itex] is an [itex]n\times m[/itex] matrix.
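    A quick numerical way to see this (a small sketch with an arbitrary matrix of my choosing):
    [code]
    import numpy as np

    M = np.array([[1.0, 2.0, 0.0],
                  [0.0, -1.0, 3.0]])   # a 2 x 3 matrix, so f : R^3 -> R^2
    f = lambda x: M @ x                # a linear map

    # Finite-difference Jacobian of f at an arbitrary point
    x0, h = np.array([0.5, -1.0, 2.0]), 1e-6
    J = np.column_stack([(f(x0 + h * e) - f(x0)) / h for e in np.eye(3)])
    print(np.allclose(J, M))           # True: df/dx is M, independent of x0
    [/code]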

    I don't know of one offhand. You'll probably get more use from separate texts, one on linear algebra, the other a good calculus text.
     
  12. Feb 16, 2012 #11

    jambaugh

    Science Advisor
    Gold Member

    The table of contents of Advanced Calculus of Several Variables looks pretty good. I haven't seen the book itself.
     
  13. Feb 16, 2012 #12
    Vector Calculus by Colley does an excellent job of explaining multivariable calculus in matrix form, and it also explains the linear algebra you need to manipulate these matrices.
     