Different ways to define vector multiplication?

stoopkid
The context in which this question arises (for me) is that I was trying to take the curl of the magnetic field of a moving point charge; my question, however, is purely mathematical. I will explain the situation anyway. The point charge is located at \vec{r_{0}} and moving with velocity \vec{v}. The magnetic field \vec{B} is a vector field defined at every point \vec{r} in space, given by:

\vec{B} = \frac{\mu_{0}}{4\pi}\frac{\vec{v}\times\left(\vec{r}-\vec{r_{0}}\right)}{\left\|\vec{r}-\vec{r_{0}}\right\|^{3}}

So I took the curl and ended up with the following:
∇×\vec{B} = \frac{1}{\left\|\vec{Δr}\right\|^{5}}\begin{bmatrix}\left(3Δx\,\vec{v}-v_{x}\vec{Δr}\right)\bullet\vec{Δr}\\ \left(3Δy\,\vec{v}-v_{y}\vec{Δr}\right)\bullet\vec{Δr}\\ \left(3Δz\,\vec{v}-v_{z}\vec{Δr}\right)\bullet\vec{Δr}\end{bmatrix}

where x,y,z are components of \vec{r}, and Δx = (x-x_{0}), and \vec{Δr} = \vec{r} - \vec{r_{0}}.

So I was wondering if there is an operation which takes a vector \vec{A} and scalar-multiplies each of its components onto a vector \vec{B}, with the resulting vectors becoming the components of a new vector. For example, the x-component of the new vector would be a_{x}\vec{B}, so the result would be a "vector of vectors". I was also wondering if there is an operation which takes a vector of vectors \vec{C} and dots each of its components with another vector \vec{R}, with the resulting scalars forming the components of a new vector; i.e., the x-component of the new vector would be \vec{C_{x}}\bullet\vec{R}.

If we call the first operation \otimes, and the second operation \odot, then we can rewrite the curl equation:

∇×\vec{B}=\frac{\left(3\vec{Δr}\otimes\vec{v}-\vec{v}\otimes\vec{Δr}\right)\odot\vec{Δr}}{\left\|\vec{Δr}\right\|^{5}}

Both the operations are bilinear and non-commutative. I was wondering if there is a way to generalize the idea of this:

The scalar product a\vec{v} = \left(a*v_{1}, a*v_{2}, a*v_{3}\right) takes the scalar a as a whole, distributes it over the components of \vec{v}, and puts each of these products into a component of a new vector. What about a scalar product that instead adds these up? I.e. a\vec{v} = a*v_{1}+a*v_{2}+a*v_{3}.

The dot product \vec{a}\bullet\vec{v} = a_{1}*v_{1} + a_{2}*v_{2} + a_{3}*v_{3} multiplies corresponding components of the two vectors and adds them all up. What about a dot product that, instead of adding them up, collects them into a vector? I.e. \vec{a}\bullet\vec{v} = (a_{1}*v_{1}, a_{2}*v_{2}, a_{3}*v_{3})

Or what about an operation where a vector distributes like a scalar over the components of another vector? This is the operation \otimes that came up in my original problem, where \vec{a}\otimes\vec{v} = (a_{1}\vec{v}, a_{2}\vec{v}, a_{3}\vec{v}), which is a vector of vectors. What about a case where, instead of becoming the components of a vector, the products were all added up?

Is there a way in which these are all specific examples of a general "kind of operation", where we can specify how the items being multiplied distribute over each other's components? I know linear algebra covers some of this stuff, but I don't remember learning anything about "vectors of vectors".
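In coordinate terms, the two operations can be sketched in NumPy: \otimes is just the outer product (row i of the result is a_{i}\vec{x}), and \odot is an ordinary matrix-vector product. This is only an illustrative sketch; the names otimes and odot are ad-hoc labels for the operations defined above:

```python
import numpy as np

def otimes(a, x):
    # a ⊗ x: distribute x as a whole over the components of a;
    # row i of the result is a_i * x, i.e. the outer product
    return np.outer(a, x)

def odot(C, r):
    # C ⊙ r: dot each vector-valued component (row) of C with r,
    # which for a 2-D array is just a matrix-vector product
    return C @ r

a = np.array([1.0, 2.0, 3.0])
x = np.array([4.0, 5.0, 6.0])
r = np.array([7.0, 8.0, 9.0])

# (a ⊗ x) ⊙ r collapses the "vector of vectors" back to an
# ordinary vector, and equals a * (x · r)
print(odot(otimes(a, x), r))
```

With these definitions, the compact form of the curl given above can be evaluated directly.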
 
stoopkid said:
So I was wondering if there is an operation which takes a vector A, and scalar multiplies each of its components to a vector B, and the resulting vectors become components of a new vector.
Before this will make sense you will have to say what you mean by a vector whose components are vectors! There is the "outer product" (tensor product) of two vectors, the result being a tensor rather than a vector. In a given basis, the outer product of the two vectors represented by <x_1, y_1, z_1> and <x_2, y_2, z_2> would be represented by
\begin{bmatrix}x_1x_2 & x_1y_2 & x_1z_2 \\ y_1x_2 & y_1y_2 & y_1z_2 \\ z_1x_2 & z_1y_2 & z_1z_2 \end{bmatrix}

 
What I mean by a vector of vectors is a vector whose components are themselves vectors rather than scalars. For example:

\vec{V} = \begin{bmatrix}\vec{a}\\ \vec{b}\\ \vec{c}\end{bmatrix}

I defined the operation \odot to allow a vector to operate on a vector of vectors such as this so that:

\vec{V}\odot\vec{x} = \begin{bmatrix}\vec{a}\\ \vec{b}\\ \vec{c}\end{bmatrix}\odot\vec{x} = \begin{bmatrix}\vec{a}\bullet\vec{x}\\ \vec{b}\bullet\vec{x}\\ \vec{c}\bullet\vec{x}\end{bmatrix}

The purpose of creating this operation is to allow me to pull the \bullet\vec{Δr} out of the vector and distribute it over each component, thereby simplifying the vector:

∇×\vec{B} = \frac{1}{\left\|\vec{Δr}\right\|^{5}}\begin{bmatrix}3Δx\,\vec{v}-v_{x}\vec{Δr}\\ 3Δy\,\vec{v}-v_{y}\vec{Δr}\\ 3Δz\,\vec{v}-v_{z}\vec{Δr}\end{bmatrix}\odot\vec{Δr}

As you can see, each component of the "vector" above is vector-valued, and the \odot operation reduces each component to a scalar, turning it into a regular vector with scalar components. To simplify the equation further, I defined yet another operation, \otimes, which takes two vectors as input and returns a vector of vectors. It operates as follows:

\vec{a}\otimes\vec{x} = \begin{bmatrix}a_{1}\vec{x}\\ a_{2}\vec{x}\\ a_{3}\vec{x}\end{bmatrix}

Applying this to my curl equation, I get:
∇×\vec{B} = \frac{\left(3\vec{Δr}\otimes\vec{v}-\vec{v}\otimes\vec{Δr}\right)\odot\vec{Δr}}{\left\|\vec{Δr}\right\|^{5}}
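As a quick numerical sanity check (a sketch with arbitrary vectors; the common 1/\left\|\vec{Δr}\right\|^{5} factor is omitted since it multiplies both forms equally), the compact \otimes/\odot form agrees with the componentwise expression:

```python
import numpy as np

v = np.array([1.0, -2.0, 0.5])     # arbitrary velocity components
dr = np.array([0.3, 1.1, -0.7])    # arbitrary Δr components

# componentwise form: entry i is (3 Δr_i v - v_i Δr) · Δr
componentwise = np.array([(3 * dr[i] * v - v[i] * dr) @ dr
                          for i in range(3)])

# compact form: (3 Δr ⊗ v - v ⊗ Δr) ⊙ Δr, with ⊗ as the outer
# product and ⊙ as a matrix-vector product
compact = (3 * np.outer(dr, v) - np.outer(v, dr)) @ dr

print(np.allclose(componentwise, compact))
```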

This looks much cleaner than the original vector I gave for the curl, so at the very least, the operations could just be useful shorthand. However, it leads me to wonder if there is some kind of "general" multiplication operation, where you specify how vector A should be distributed over vector B, and how the resulting set of values should be recombined. For example, I could write:

\vec{a}*\vec{x} = (a_{1}*\vec{x}, a_{2}*\vec{x}, a_{3}*\vec{x}), the rule is that the second vector AS A WHOLE is distributed over each of the components of the 1st vector, creating a set of 3 products. These 3 products are then recombined into a vector of vectors. Or I could write:

\vec{a}*\vec{x} = a_{1}*x_{1} + a_{2}*x_{1} + a_{3}*x_{1} + a_{1}*x_{2} + a_{2}*x_{2} + a_{3}*x_{2} + a_{1}*x_{3} + a_{2}*x_{3} + a_{3}*x_{3} = (a_{1} + a_{2} + a_{3})*(x_{1} + x_{2} + x_{3})

and the rule here is that EACH COMPONENT of one of the vectors is distributed over each of the components of the other vector, creating a set of 9 products. These products are then recombined by addition into a scalar.
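This all-pairs rule is easy to check numerically (a sketch):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
x = np.array([4.0, 5.0, 6.0])

# distribute every component of a over every component of x
# (9 products), then recombine by addition; the result factors
# as (a1 + a2 + a3) * (x1 + x2 + x3)
all_pairs_sum = np.outer(a, x).sum()
print(all_pairs_sum)   # equals a.sum() * x.sum()
```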

It seems to me that the nature/workings of each type of multiplication is completely determined by how the vectors distribute over each other to form a set of products, and how this set is recombined to create a final value. So I wonder if there is a way to generalize all this and create an operator which takes two arguments: 1) how the vectors should distribute over each other to create a set of products, and 2) how this resulting set of products should be recombined to create a final value. If we call this general operator ∴(D, R), where D and R are the rules for distribution and recombination, respectively (i.e. the operator ∴ is a function of a rule for distribution and a rule for recombination), then I could write my other operators like this:

\otimes = ∴(D,R)

Where D= "2nd vector distributes AS A WHOLE over the components of the 1st vector", and
R = "The set of products from the distribution become the components of a new vector".

I am trying to capture this idea mathematically.
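One concrete way to capture it is a higher-order function that takes the distribution rule D and the recombination rule R and returns the product operation. This is only a sketch; general_product and the rule names are made up for illustration:

```python
import numpy as np

def general_product(D, R):
    """Return the operation ∴(D, R): distribute with D, recombine with R."""
    return lambda a, x: R(D(a, x))

# D: the second vector distributes as a whole over each component of
# the first, giving the 3x3 array of products a_i * x_j
pairwise = lambda a, x: np.outer(a, x)

# ⊗ : keep the array of products as-is (a "vector of vectors")
otimes = general_product(pairwise, lambda products: products)

# ordinary dot product: distribute componentwise, recombine by summing
dot = general_product(lambda a, x: a * x, np.sum)

# the all-pairs-then-add product: same D as ⊗, recombined by addition
pair_sum = general_product(pairwise, np.sum)

a = np.array([1.0, 2.0, 3.0])
x = np.array([4.0, 5.0, 6.0])
print(dot(a, x), pair_sum(a, x))
```

Each familiar product then differs only in which D and which R it plugs in.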
 
Hey stoopkid.

Have you tried creating special matrices or vectors in composition to achieve what you want?

For example, if you wanted to pick out the first coordinate you could do v \cdot [1\ 0\ 0]^{t}. The second would be v \cdot [0\ 1\ 0]^{t}, and so on. You could apply general matrix products to produce a general matrix, and you could then move up to tensors if you wanted truly multilinear products.

Since you can compose lots of matrices, you can pick the composition that gives you the right operations, convert it to tensors, and do analysis in either space (the symbolic tensor space or the raw specific matrix space).
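For instance, picking off coordinates with basis vectors and masking with diagonal matrices might look like this in NumPy (a sketch of the suggestion above):

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])

# dotting with a standard basis vector extracts one coordinate
e1 = np.array([1.0, 0.0, 0.0])
first_coord = v @ e1

# a diagonal matrix masks coordinates: this one keeps only the first
masked = np.diag([1.0, 0.0, 0.0]) @ v

print(first_coord, masked)
```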
 