Mathematical notation for elementwise multiplication

Mårten
Hi,

I wonder if anyone knows of a mathematically established way of writing elementwise multiplication between two vectors. In MATLAB you can write A .* B to indicate that you want to multiply the vectors A and B elementwise. In my case, I have two column vectors, A and B, and I want to multiply them elementwise and get the result in a vector C. LaTeX code for this, if it exists, would also be appreciated.
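For concreteness, this is the MATLAB behaviour I mean (a minimal sketch with made-up values):

A = [1; 2; 3];  % column vector
B = [4; 5; 6];  % column vector
C = A .* B;     % elementwise product: C = [4; 10; 18]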

/Mårten
 
I would just make a note up front that multiplication between vectors should be taken to be componentwise.
 
What do you mean by "up front"? That I should explain this in the surrounding text? Okay, that's a possibility, but shouldn't there be a way to express it mathematically?

I was figuring that if you can denote a matrix by A=(a_{ij}), which I've seen in several texts, then it ought to be possible to write a column vector as \boldsymbol{a}=(a_i) and another column vector as \boldsymbol{b}=(b_i). And then elementwise multiplication could possibly be written as a new column vector \boldsymbol{c}=(a_i b_i)?

Is that an unambiguous way to express what I want, or could it be misinterpreted?

Or what do others here think? Any more suggestions?

EDIT: 1) I suppose the same should apply to matrices, so for any given pair i, j, the elementwise multiplication of the matrices (a_{ij}) and (b_{ij}) ought to be denoted (a_{ij}b_{ij}).
2) As I've understood it, (a_{ij}) denotes a matrix, and a_{ij} denotes an individual element of that matrix. What distinguishes the two ways of writing is the parentheses in the former case. Please correct me if I'm wrong.
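For reference, the LaTeX source for this notation would be something like the following (inside a math environment; \boldsymbol needs the amsmath package, as in the formulas above):

\boldsymbol{c} = (a_i b_i)   % elementwise product of vectors
(c_{ij}) = (a_{ij} b_{ij})   % elementwise product of matrices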
 
Isn't there anyone who can confirm that what I'm saying above is correct? Or are there other ways to denote elementwise multiplication?
 
There simply isn't enough use for elementwise multiplication of vectors for it to have a specific notation. That, together with elementwise multiplication of matrices, would pretty much negate the whole point of defining vectors and matrices.
 
Mårten said:
write a column vector as \boldsymbol{a}=(a_i) and another column vector as \boldsymbol{b}=(b_i). And then elementwise multiplication could possibly be written as a new column vector \boldsymbol{c}=(a_i b_i)?

Is that an unambiguous way to express what I want, or could it be misinterpreted?
Sure, this is unambiguous. Since it is heavily basis-dependent, though, it is not a usual thing to do. If you want to be fancy: given a finite-dimensional vector space V over a field \mathbb{F} and a fixed basis (e_1,\dots,e_n), we have
V=\bigoplus_{i=1}^n \mathbb{F}e_i

For every j, define the "projection"

P_j:\bigoplus_{i=1}^n \mathbb{F}e_i\to \mathbb{F},
\lambda_1e_1+\dots+\lambda_ne_n\mapsto \lambda_j.

Then your product of the vectors a and b is the vector c which satisfies

P_i(c)=P_i(a)P_i(b).

Of course, this is just what you said.

Or let T_i be the linear map whose matrix (w.r.t. this basis) has all entries zero, except for the entry at row i, column i, which is 1. It acts as

T_i(a)=T_i(a_1,\dots,a_n)=(0,\dots,0,a_i,0,\dots,0).

Then c is the vector

c=\sum_{i=1}^n \langle T_i(a),T_i(b)\rangle e_i,

where \langle\cdot,\cdot\rangle denotes the inner product.
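As a quick sanity check, here is the n = 2 case worked out with the standard inner product:

c=\langle T_1(a),T_1(b)\rangle e_1+\langle T_2(a),T_2(b)\rangle e_2=a_1b_1e_1+a_2b_2e_2,

which is exactly the componentwise product (a_1b_1, a_2b_2).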
 
HallsofIvy said:
That, together with elementwise multiplication of matrices, would pretty much negate the whole point of defining vectors and matrices.
Hm... I'm still a beginner in linear algebra. What would you say is the whole point of defining vectors and matrices, then?

I've found it pretty common, when dealing with different data series, to want elementwise multiplication. For instance, you could have a vector describing the economic output of different industries, and another vector describing the growth rates of those industries. To get the new output after the growth, you multiply the two vectors elementwise.
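A toy MATLAB illustration of this (the numbers are made up):

output = [100; 250; 80];       % current output per industry
growth = [1.02; 1.10; 0.97];   % growth factor per industry
new_output = output .* growth; % elementwise: [102; 275; 77.6]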

Landau said:
Sure, this is unambiguous. Since it is heavily basis-dependent, though, it is not a usual thing to do. If you want to be fancy: given a finite-dimensional vector space V over a field \mathbb{F} and a fixed basis (e_1,\dots,e_n), we have
V=\bigoplus_{i=1}^n \mathbb{F}e_i
I haven't seen that plus symbol before. What does it mean?

Anyhow, thanks for your replies, both of you! :smile:
 
Hi Mårten, the symbol means "direct sum". The sum of two subspaces U and V is another subspace, defined as

U + V = \left \{ u + v : u \in U, v \in V \right \}.

If the intersection U \cap V = \left \{ \textbf{0} \right \}, that is, if U and V have only the zero vector in common, then the sum is called a direct sum and can be written U \oplus V. For subspaces V_1, V_2, \dots, V_n, each intersecting the sum of the others only in \textbf{0},

\bigoplus_{i=1}^n V_i := V_1 + V_2 + \cdots + V_n
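A concrete instance, using the standard basis of \mathbb{R}^2:

\mathbb{R}^2=\mathbb{R}e_1\oplus \mathbb{R}e_2,

since every (x,y) decomposes uniquely as xe_1+ye_2 and \mathbb{R}e_1\cap \mathbb{R}e_2=\left \{ \textbf{0} \right \}.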
 
Mårten said:
Hm... I'm still a beginner in linear algebra. What would you say is the whole point of defining vectors and matrices, then?
Basically, linear combinations. If we define elementwise multiplication without the usual addition of those products, we lose the intermingling of the different components.

Mårten said:
I've found it pretty common, when dealing with different data series, to want elementwise multiplication. For instance, you could have a vector describing the economic output of different industries, and another vector describing the growth rates of those industries. To get the new output after the growth, you multiply the two vectors elementwise.
But then you add those values, so what you're doing is an inner product, not just elementwise multiplication.
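In MATLAB terms, the distinction would be (a minimal sketch):

A = [1; 2; 3];
B = [4; 5; 6];
AB = A .* B;    % elementwise: [4; 10; 18], components kept separate
s = dot(A, B);  % inner product: 4 + 10 + 18 = 32, components summed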

 
  • #10
Rasalhague said:
Hi Mårten, the symbol means "direct sum". The sum of two subspaces U and V is another subspace, defined as

U + V = \left \{ u + v : u \in U, v \in V \right \}.

If the intersection U \cap V = \left \{ \textbf{0} \right \}, that is, if U and V have only the zero vector in common, then the sum is called a direct sum and can be written U \oplus V. For subspaces V_1, V_2, \dots, V_n, each intersecting the sum of the others only in \textbf{0},

\bigoplus_{i=1}^n V_i := V_1 + V_2 + \cdots + V_n
Okay, I think I understand now, sort of.

HallsofIvy said:
But then you add those values, so what you're doing is an inner product, not just elementwise multiplication.
No, I'm actually not adding those values; I'm looking at them separately, since I'm interested in the individual output of each industry, not the sum of the output over all industries. I would lose information if I summed them up. If I'm not misunderstanding you...
 
  • #12
Mårten said:
No, I'm actually not adding those values; I'm looking at them separately, since I'm interested in the individual output of each industry, not the sum of the output over all industries. I would lose information if I summed them up. If I'm not misunderstanding you...

So here's the deal: the inner product is very common in mathematical work (and there's accepted notation for it), and what you describe is not (so there's no established notation). That's not an unusual situation at all; many mathematical articles start out by defining some notation that's useful for the task at hand but not standard. So if you want to do componentwise multiplication, just say up front that when you write a<whatever>b, that's what you mean. As far as I know, there isn't any notation for this that everyone will recognize without explanation on your part.
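For the LaTeX side of the original question: one symbol that is sometimes borrowed for this is \odot (in matrix analysis the entrywise product is known as the Hadamard product, often written A \circ B or A \odot B), but you should still define it in the surrounding text. A minimal sketch, with \had as a made-up macro name:

\newcommand{\had}{\odot}  % or \circ; any symbol you define in the text
% "where $\boldsymbol{a} \had \boldsymbol{b}$ denotes componentwise
%  multiplication, $(\boldsymbol{a} \had \boldsymbol{b})_i = a_i b_i$"
\boldsymbol{c} = \boldsymbol{a} \had \boldsymbol{b}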
 
  • #13
Okay, I will do something like that.

Thanks all for the replies! :smile:
 