Any notation for component-by-component vector multiplication?


Discussion Overview

The discussion revolves around the notation and methodology for expressing component-by-component vector multiplication, particularly in the context of applying the divergence operator to a scalar function derived from a vector function. Participants explore various mathematical representations and notations that could facilitate this operation.

Discussion Character

  • Technical explanation
  • Mathematical reasoning
  • Debate/contested

Main Points Raised

  • One participant seeks a notation for an operator that multiplies each component of a vector k with the gradient of a scalar function T.
  • Another participant suggests using the notation k\nabla\cdot(A) to represent the operation, where A is the vector field.
  • There is a proposal to express the operation as \vec \nabla \cdot \left( ( \vec \nabla T)^T \cdot I_3 \vec k \right)^T, interpreting the gradient and k as matrices.
  • Some participants discuss the implications of using the identity matrix and the dimensionality of the matrices involved, with one noting that multiplying a 1x3 matrix by another 1x3 matrix is not defined.
  • Another participant suggests that if an operation can transform k into a diagonal matrix, the proposed function should work with that adjustment.
  • One participant introduces the dyadic product notation, suggesting \nabla \cdot \text{diag}(\textbf{k} \otimes \nabla T) as a way to express the operation, and discusses the summation notation as an alternative.
  • There is a mention of learning resources, including books and online lectures, for understanding these mathematical concepts.

Areas of Agreement / Disagreement

Participants express differing views on the appropriate notation and methodology for the operation, with no consensus reached on a single correct approach. The discussion remains unresolved regarding the best representation.

Contextual Notes

Participants highlight potential limitations in their proposed methods, including assumptions about matrix dimensions and the nature of the operations involved. Some mathematical steps remain unresolved, particularly regarding the transformations and operations on matrices.

Curl
I have a scalar function and a vector function and I need to make a scalar function as so:

k=[kx ky kz]
T=T(x,y,z)

Function I want:

div(k ? gradT), where "?" would be some operator that multiplies each component of k with the corresponding component of gradT to make the vector [kx ∂T/∂x , ky ∂T/∂y , kz ∂T/∂z]

that way I can apply the divergence operator and get:
∂/∂x (kx ∂T/∂x) + ∂/∂y (ky ∂T/∂y) + ∂/∂z (kz ∂T/∂z)

So is there some way to express this using elementary notation?
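For what it's worth, the component-by-component product being asked for is exactly what NumPy's `*` does on arrays; a minimal numerical sketch (the values of k and grad T are made up purely for illustration):

```python
import numpy as np

k = np.array([2.0, 3.0, 5.0])          # [kx, ky, kz]
grad_T = np.array([0.1, 0.2, 0.3])     # [dT/dx, dT/dy, dT/dz] at some point

# Component-by-component product: NumPy's `*` multiplies elementwise,
# giving [kx*dT/dx, ky*dT/dy, kz*dT/dz]
v = k * grad_T
```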
 
Certainly there is a notation for that operator; in general, if you want an operator of that kind, you write:

[tex]k\nabla\cdot(A)[/tex]

Where A is the vector field you want to operate on.

Greetings
 
arsenal997 said:
Certainly there is a notation for that operator; in general, if you want an operator of that kind, you write:

[tex]k\nabla\cdot(A)[/tex]

Where A is the vector field you want to operate on.

Greetings

k is a vector... what is [tex]\vec k \nabla[/tex] ??
 
I believe it will be

[tex]\vec \nabla \cdot \left( ( \vec \nabla T)^T \cdot I_3 \vec k \right)^T[/tex]

I think this works
 
Last edited:
Oh yes... the superscript T means transpose. Treat grad T as a 3 X 1 matrix, and k also as a 3 X 1 matrix. I3 is just the identity matrix for R3.
 
Actually...

[tex]\left( ( \mathbf{ \nabla } T)^T \mathbf{ I_3 } \textbf{ k } \right) \mathbf{ \nabla }[/tex]

This may be a better way to put it, if we interpret nabla as a 3 X 1 matrix and k also as a 3 X 1 matrix.
 
I can't see what you are trying to do. The Identity matrix does nothing, and multiplying a 1x3 matrix by a 1x3 is not defined.
 
Curl said:
I can't see what you are trying to do. The Identity matrix does nothing, and multiplying a 1x3 matrix by a 1x3 is not defined.

when you multiply k by the identity you get a diagonal matrix with the components of k along the diagonals. Then when you multiply the transpose of the gradient of T by this 3 X 3 matrix you get a 1 X 3 matrix inside of the parenthesis. Then the nabla matrix is 3 X 1 so we have a 1 X 3 multiplied by a 3 X 1 which gives the sum of the products of the components.
 
AlexChandler said:
when you multiply k by the identity you get a diagonal matrix with the components of k along the diagonals. Then when you multiply the transpose of the gradient of T by this 3 X 3 matrix you get a 1 X 3 matrix inside of the parenthesis. Then the nabla matrix is 3 X 1 so we have a 1 X 3 multiplied by a 3 X 1 which gives the sum of the products of the components.

Haha I am sorry you are absolutely right. Let me think about this for a moment :biggrin:
 
  • #10
If you can find an operation that will transform k into a diagonal matrix, then the above function should work if you replace I3 k with that operation.
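One such operation is diag(·): forming the 3 X 3 matrix with the components of k on its diagonal does exactly what the post above asks for. A quick NumPy check (vector values are illustrative only):

```python
import numpy as np

k = np.array([2.0, 3.0, 5.0])          # [kx, ky, kz]
grad_T = np.array([0.1, 0.2, 0.3])     # [dT/dx, dT/dy, dT/dz]

# diag(k): 3x3 matrix with the components of k along the diagonal
D = np.diag(k)

# Multiplying grad T by diag(k) reproduces the component-by-component product
p = D @ grad_T
```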
 
  • #11
You could express it with the dyadic product as

[tex]\nabla \cdot \text{diag}(\textbf{k} \otimes \nabla T) = \nabla \cdot \text{diag}([k] [\nabla T]^T),[/tex]

taking "diag" to mean "form a vector whose components are the diagonal entries of this matrix". Here [k] is a 3x1 matrix (a column vector), and the transpose of [∇T] is a 1x3 matrix (a row vector), so that their product is a 3x3 matrix.

Or simply use the summation sign:

[tex]\sum_{i=1}^{n} \partial_i (k_i \partial_i T) = \sum_{i=1}^{n} \frac{\partial }{\partial x_i}\left ( k_i \frac{\partial T}{\partial x_i} \right ).[/tex]
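The diag(k ⊗ ∇T) recipe can be sanity-checked numerically: the diagonal of the outer product recovers the component-by-component product. A small NumPy sketch (illustrative values):

```python
import numpy as np

k = np.array([2.0, 3.0, 5.0])          # [kx, ky, kz]
grad_T = np.array([0.1, 0.2, 0.3])     # [dT/dx, dT/dy, dT/dz]

# Dyadic (outer) product k ⊗ ∇T: a 3x3 matrix with entries k_i * (dT/dx_j)
dyad = np.outer(k, grad_T)

# "diag" extracts the diagonal entries as a vector: [k_i * dT/dx_i]
v = np.diag(dyad)
```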
 
  • #12
Rasalhague said:
You could express it with the dyadic product as

[tex]\nabla \cdot \text{diag}(\textbf{k} \otimes \nabla T) = \nabla \cdot \text{diag}([k] [\nabla T]^T),[/tex]

taking "diag" to mean "form a vector whose components are the diagonal entries of this matrix".

Nice! This is the kind of thing I was trying to find. What course can you take to cover these kinds of operations? A second semester of linear algebra?
 
  • #13
I don't know. I'm learning this stuff from books and the internet, and have never formally taken a course in linear algebra, so I don't know where and when such things would normally be taught. There's a section on dyadics in Snider & Davis: Vector Analysis, 6th edition. Gilbert Strang demonstrates the technique of making an n x n matrix out of vectors by multiplying a column vector by its transpose on the right in one of his MIT linear algebra lectures, which are online here

http://ocw.mit.edu/courses/mathematics/18-06-linear-algebra-spring-2010/video-lectures/

Looking at the titles, it's probably Lecture 16: Projection matrices and least squares, or perhaps Lecture 15: Projections onto subspaces. Another place I've seen something similar is in presentations of the relativistic velocity-addition formula, and that did involve the identity matrix. I thought there was an example on the Wikipedia page: http://en.wikipedia.org/wiki/Velocity_addition But it looks like they've replaced it now; or maybe it was a different page where I saw it. I can post details if you're interested.

The projection matrix idea works like this. Suppose we want to project a vector x onto a vector a; then we can write this as a matrix equation:

[tex]\frac{a^Tx}{a^T a} \; a = \frac{aa^T}{a^T a} \; x = Px.[/tex]

(If a is unit length, we don't need to worry about the denominator.)
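The equality of the scalar form and the matrix form above is easy to verify numerically; a short NumPy sketch with arbitrary illustrative vectors:

```python
import numpy as np

a = np.array([1.0, 2.0, 2.0])
x = np.array([3.0, 0.0, 4.0])

# Scalar form: ((a.x) / (a.a)) a
p1 = (a @ x) / (a @ a) * a

# Matrix form: P = a a^T / (a^T a), built via the outer product, then P x
P = np.outer(a, a) / (a @ a)
p2 = P @ x
```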
 
