Matrix of Gradients: Notation Explained

  • Thread starter: aaaa202
  • Tags: Matrix
aaaa202
There is one point in my book where I am confused about the notation. In index notation the equation is:

##da_i = a_j \nabla_j u_i##

In matrix notation I would write this as:

da = (a⋅∇)u

where the term in the parentheses is just a scalar, or if you will, the unit matrix multiplied by a scalar.
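Explicitly, by the term in parentheses I mean the operator
$$a\cdot\nabla = a_1\frac{\partial}{\partial x_1}+a_2\frac{\partial}{\partial x_2}+a_3\frac{\partial}{\partial x_3},$$
applied to each component of ##u##, so that ##[(a\cdot\nabla)u]_i = a_j\frac{\partial u_i}{\partial x_j} = a_j\nabla_j u_i##.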

But my book rewrites this as:

da = a ⋅ ∇u (1)

where the latter factor is a matrix of gradients with elements ##A_{ij} = \nabla_j u_i##.

I don't understand this last rewriting. If you choose to use this matrix of gradients shouldn't it be:

da = (∇u)a

Or maybe I'm misinterpreting (1). Isn't a in this case a row vector, while the matrix of displacement gradients has, for example, ##\nabla_x u_x,\ \nabla_y u_x,\ \nabla_z u_x## on its first row? I would like it to be transposed to make sense of the above.
 
Just so I know we are speaking the same language:
aaaa202 said:
There is one point in my book where I am confused about the notation. In index notation the equation is:

##da_i = a_j \nabla_j u_i##
i.e. ##\begin{pmatrix} da_1\\ da_2 \\ da_3 \end{pmatrix} =\begin{pmatrix} a_1 \frac{\partial u_1}{\partial x_1} + a_2 \frac{\partial u_1}{\partial x_2} + a_3 \frac{\partial u_1}{\partial x_3} \\
a_1 \frac{\partial u_2}{\partial x_1} + a_2 \frac{\partial u_2}{\partial x_2} + a_3 \frac{\partial u_2}{\partial x_3} \\
a_1 \frac{\partial u_3}{\partial x_1} + a_2 \frac{\partial u_3}{\partial x_2} + a_3 \frac{\partial u_3}{\partial x_3} \end{pmatrix} ##

In matrix notation I would write this as:

da = (a⋅∇)u

where the term in the parentheses is just a scalar, or if you will, the unit matrix multiplied by a scalar.

How exactly would you define ## (a \cdot \nabla)## as a scalar?
But my book rewrites this as:

da = a ⋅ ∇u (1)

where the latter factor is a matrix of gradients with elements ##A_{ij} = \nabla_j u_i##.

I don't understand this last rewriting. If you choose to use this matrix of gradients shouldn't it be:

da = (∇u)a

Or maybe I'm misinterpreting (1). Isn't a in this case a row vector, while the matrix of displacement gradients has, for example, ##\nabla_x u_x,\ \nabla_y u_x,\ \nabla_z u_x## on its first row? I would like it to be transposed to make sense of the above.

You are saying that ##\nabla u =
\begin{pmatrix} \frac{\partial u_1}{\partial x_1} & \frac{\partial u_1}{\partial x_2}& \frac{\partial u_1}{\partial x_3} \\
\frac{\partial u_2}{\partial x_1} & \frac{\partial u_2}{\partial x_2}& \frac{\partial u_2}{\partial x_3} \\
\frac{\partial u_3}{\partial x_1} & \frac{\partial u_3}{\partial x_2}& \frac{\partial u_3}{\partial x_3} \end{pmatrix} ##
and ##a = \begin{pmatrix} a_1 & a_2 & a_3 \end{pmatrix} ##
So the multiplication would have to be done with ##a^T## instead, which would give you a 3x1 matrix, and that appears to be equivalent to:
## \begin{pmatrix} da_1\\ da_2 \\ da_3 \end{pmatrix} = \begin{pmatrix} a_1 \frac{\partial u_1}{\partial x_1} + a_2 \frac{\partial u_1}{\partial x_2} + a_3 \frac{\partial u_1}{\partial x_3} \\
a_1 \frac{\partial u_2}{\partial x_1} + a_2 \frac{\partial u_2}{\partial x_2}+ a_3 \frac{\partial u_2}{\partial x_3} \\
a_1 \frac{\partial u_3}{\partial x_1} + a_2 \frac{\partial u_3}{\partial x_2}+ a_3 \frac{\partial u_3}{\partial x_3} \end{pmatrix} ##

In general, matrix notation is flexible as long as you make sure your dimensions match the operation you are trying to use. Most physics texts prefer vector operations, whereas many math and stats texts multiply by transposed matrices. The end result is the same.
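If it helps, here is a quick symbolic check of that claim (just a sketch; the particular field ##u## below is an arbitrary made-up example, and sympy is only one way to do it): the index expression ##a_j\,\partial u_i/\partial x_j## and the product of the gradient matrix with the column vector come out identical.

```python
import sympy as sp

# Coordinates, an arbitrary constant vector a, and a made-up example field u(x).
x1, x2, x3 = sp.symbols('x1 x2 x3')
a1, a2, a3 = sp.symbols('a1 a2 a3')
x = sp.Matrix([x1, x2, x3])
a = sp.Matrix([a1, a2, a3])
u = sp.Matrix([x1**2 * x2, sp.sin(x3) * x1, x2 * x3])

# Index form: da_i = a_j * d(u_i)/d(x_j), summed over j.
da_index = sp.Matrix([sum(a[j] * sp.diff(u[i], x[j]) for j in range(3))
                      for i in range(3)])

# Matrix form: gradient matrix J with J[i, j] = d(u_i)/d(x_j), then da = J * a.
J = u.jacobian(x)
da_matrix = J * a

print(sp.simplify(da_index - da_matrix))  # prints the zero vector: both forms agree
```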
 
I agree that the last result is what I want to get. But does that match what you get in (1)? If I multiply the row vector a with the matrix of gradients you have written, I don't get the last matrix above. Do you? Maybe I am simply failing to multiply a row vector by a matrix.
Also I would write the dot product as a scalar in index notation as ##a_j \nabla_j##.
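To spell out what I get: multiplying the row vector into the matrix on the left contracts the first index,
$$[a\,(\nabla u)]_j = a_i \frac{\partial u_i}{\partial x_j},$$
whereas multiplying the matrix into the column vector ##a^T## contracts the second index,
$$[(\nabla u)\,a^T]_i = \frac{\partial u_i}{\partial x_j}\, a_j = a_j \nabla_j u_i,$$
and only the second one reproduces the index equation I started from. So I suppose the book's ##a \cdot \nabla u## is to be read as that second contraction.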
 