# Any notation for component-by-component vector multiplication?

by Curl
Tags: componentbycomponent, multiplication, notation, vector
 P: 751 I have a scalar function and a vector function, and I need to form a scalar function as follows: k = [kx ky kz], T = T(x,y,z). The function I want is div(k ? grad T), where "?" is some operator that multiplies each component of k by the corresponding component of grad T, giving the vector [kx ∂T/∂x , ky ∂T/∂y , kz ∂T/∂z]. That way I can apply the divergence operator and get: ∂/∂x (kx ∂T/∂x) + ∂/∂y (ky ∂T/∂y) + ∂/∂z (kz ∂T/∂z). So is there some way to express this using elementary notation?
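The quantity being asked for can be checked numerically. Here is a minimal sketch (not from the thread; the test function T = x² + y² + z² and the constant k are arbitrary choices for illustration) that forms the component-by-component product of k with grad T and then takes the divergence:

```python
import numpy as np

# Hypothetical example: T(x,y,z) = x^2 + y^2 + z^2 sampled on a grid,
# with a constant coefficient vector k = (kx, ky, kz).
n = 21
xs = np.linspace(-1.0, 1.0, n)
X, Y, Z = np.meshgrid(xs, xs, xs, indexing="ij")
T = X**2 + Y**2 + Z**2
k = np.array([2.0, 3.0, 5.0])  # kx, ky, kz

# Component-by-component (Hadamard) product of k with grad T,
# then the divergence of the result.
dTdx, dTdy, dTdz = np.gradient(T, xs, xs, xs)
flux = [k[0] * dTdx, k[1] * dTdy, k[2] * dTdz]          # "k ? grad T"
div = sum(np.gradient(f, xs, xs, xs)[i] for i, f in enumerate(flux))

# Analytically, div(k ? grad T) = 2(kx + ky + kz) = 20 for this T,
# and central differences are exact for this polynomial in the interior.
print(div[n // 2, n // 2, n // 2])
```

Since each flux component is linear in one coordinate, the interior value agrees with the analytic answer to rounding error.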
 P: 3 Certainly there is a notation for that operator. In general, if you want an operator of that kind, you write: $$k\nabla\cdot(A)$$ where A is the vector field you want to operate on. Greetings
P: 284
 Quote by arsenal997 Certainly there is a notation for that operator. In general, if you want an operator of that kind, you write: $$k\nabla\cdot(A)$$ where A is the vector field you want to operate on. Greetings
k is a vector... what is $$\vec k \nabla$$ ??

P: 284


I believe it will be

$$\vec \nabla \cdot \left( ( \vec \nabla T)^T \cdot I_3 \vec k \right)^T$$

I think this works
 P: 284 Oh yes... the superscript T means transpose. Treat grad T as a 3×1 matrix, and k also as a 3×1 matrix. I3 is just the identity matrix for R3.
 P: 284 Actually... $$\left( ( \mathbf{ \nabla } T)^T \mathbf{ I_3 } \textbf{ k } \right) \mathbf{ \nabla }$$ This may be a better way to put it, if we interpret nabla as a 3×1 matrix and k also as a 3×1 matrix.
 P: 751 I can't see what you are trying to do. The identity matrix does nothing, and multiplying a 1×3 matrix by a 1×3 matrix is not defined.
P: 284
 Quote by Curl I can't see what you are trying to do. The Identity matrix does nothing, and multiplying a 1x3 matrix by a 1x3 is not defined.
when you multiply k by the identity you get a diagonal matrix with the components of k along the diagonal. Then when you multiply the transpose of the gradient of T by this 3×3 matrix you get a 1×3 matrix inside the parentheses. The nabla matrix is 3×1, so we have a 1×3 multiplied by a 3×1, which gives the sum of the products of the components.
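A small sketch may help sort out the matrix picture being debated here. Note that I₃k is just k again (a 3×1 column); the 3×3 matrix that carries k's components on the diagonal, which seems to be what is intended, is diag(k). The numbers below are arbitrary placeholders:

```python
import numpy as np

# Treating grad T and k as 3x1 columns. I_3 @ k returns k unchanged;
# the matrix with k's components on the diagonal is diag(k), and
# (grad T)^T diag(k) is a 1x3 row holding the componentwise products.
k = np.array([2.0, 3.0, 5.0])
gradT = np.array([1.0, -1.0, 0.5])   # placeholder values for dT/dx, dT/dy, dT/dz

assert np.array_equal(np.eye(3) @ k, k)   # identity does nothing to a column

D = np.diag(k)                       # 3x3 diagonal matrix built from k
row = gradT @ D                      # (kx dT/dx, ky dT/dy, kz dT/dz)
print(row)                           # same as the componentwise product k * gradT
```

Applying this row to nabla viewed as a 3×1 column of partials then gives the desired sum of products.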
 P: 1,400 You could express it with the dyadic product as $$\nabla \cdot \text{diag}(\textbf{k} \otimes \nabla T) = \nabla \cdot \text{diag}([k] [\nabla T]^T),$$ taking "diag" to mean "form a vector whose components are the diagonal entries of this matrix". Here [k] is a 3x1 matrix (a column vector), and the transpose of [del T] a 1x3 matrix (row vector), so that their product is a 3x3 matrix. Or simply use the summation sign: $$\sum_{i=1}^{n} \partial_i (k_i \partial_i T) = \sum_{i=1}^{n} \frac{\partial }{\partial x_i}\left ( k_i \frac{\partial T}{\partial x_i} \right ).$$
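The dyadic-product identity above is easy to verify numerically: the diagonal of the outer product [k][∇T]^T is exactly the componentwise product. A quick check with placeholder values (not from the thread):

```python
import numpy as np

# diag(k ⊗ grad T): the outer product is a 3x3 matrix whose (i,i) entry
# is k_i * (dT/dx_i), so its diagonal is the componentwise product.
k = np.array([2.0, 3.0, 5.0])
gradT = np.array([1.0, -1.0, 0.5])   # placeholder values for the partials of T

outer = np.outer(k, gradT)           # 3x3 matrix [k][grad T]^T
vec = np.diag(outer)                 # (kx dT/dx, ky dT/dy, kz dT/dz)
print(vec)                           # equals k * gradT componentwise
```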
 Quote by Rasalhague You could express it with the dyadic product as $$\nabla \cdot \text{diag}(\textbf{k} \otimes \nabla T) = \nabla \cdot \text{diag}([k] [\nabla T]^T),$$ taking "diag" to mean "form a vector whose components are the diagonal entries of this matrix".
 P: 1,400 I don't know. I'm learning this stuff from books and the internet, and have never formally taken a course in linear algebra, so I don't know where and when such things would normally be taught. There's a section on dyadics in Snider & Davis: Vector Analysis, 6th edition. Gilbert Strang demonstrates the technique of making an n×n matrix out of vectors by multiplying a column vector by its transpose on the right in one of his MIT linear algebra lectures, which are online here: http://ocw.mit.edu/courses/mathemati...ideo-lectures/ Judging by the titles, it's probably Lecture 16: Projection matrices and least squares, or perhaps Lecture 15: Projections onto subspaces. Another place I've seen something similar is in presentations of the relativistic velocity-addition formula, and that did involve the identity matrix. I thought there was an example on the Wikipedia page: http://en.wikipedia.org/wiki/Velocity_addition but it looks like it's been replaced now; or maybe it was a different page where I saw it. I can post details if you're interested. The projection matrix idea works like this. Suppose we want to project a vector x onto a vector a; then we can write this as a matrix equation: $$\frac{a^Tx}{a^T a} \; a = \frac{aa^T}{a^T a} \; x = Px.$$ (If a is unit length, we don't need to worry about the denominator.)
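The projection-matrix construction described above can be sketched in a few lines. The vectors here are arbitrary placeholders, chosen only to show that the scalar form and the matrix form agree and that P is idempotent:

```python
import numpy as np

# P = a a^T / (a^T a) projects any x onto the line spanned by a.
a = np.array([1.0, 2.0, 2.0])
x = np.array([3.0, 0.0, 1.0])

P = np.outer(a, a) / (a @ a)     # 3x3 projection matrix from the outer product
p1 = P @ x                       # matrix form: P x
p2 = (a @ x) / (a @ a) * a       # scalar form: (a^T x / a^T a) a
print(p1, p2)                    # the two forms agree; also P @ P == P
```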