Mathematical notation for elementwise multiplication

In summary: in MATLAB you can write A .* B to multiply the vectors A and B elementwise and get the result in a vector C. There is no universally recognized mathematical notation for this; the usual practice is to state up front that multiplication between vectors is to be taken componentwise.
  • #1
Mårten
Hi,

I wonder if anyone knows of a mathematically established way of writing elementwise multiplication between two vectors. In MATLAB you can write A .* B to indicate that you want to multiply the vectors A and B elementwise. In my case, I have two column vectors, A and B, and I want to multiply them elementwise and get the result in a vector C. LaTeX code for this, if it exists, would also be appreciated.
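To make this concrete, here is a minimal MATLAB sketch of the operation I mean (the values are made up):

[code]
% Elementwise product of two column vectors
A = [1; 2; 3];
B = [4; 5; 6];
C = A .* B;   % C = [4; 10; 18], same size as A and B
[/code]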

/Mårten
 
  • #2
I would just make a note up front that multiplication between vectors should be taken to be componentwise.
 
  • #3
What do you mean by "up front"? That I should explain this in the surrounding text? Okay, that's a possibility, but shouldn't there be a way to express it mathematically?

I was figuring that if you can denote a matrix [itex]A=(a_{ij})[/itex], which I've seen in several texts, then it ought to be possible to write a column vector as [itex]\boldsymbol{a}=(a_i)[/itex] and another column vector as [itex]\boldsymbol{b}=(b_i)[/itex]. And then elementwise multiplication could possibly be written as a new column vector [itex]\boldsymbol{c}=(a_i b_i)[/itex]?

Is that an unambiguous way to express what I want, or could it be misinterpreted?

Or what do others here think? Some more suggestions?

EDIT: 1) I suppose the same should apply for matrices, so for any given pair of [itex]i,j[/itex], the elementwise multiplication of the matrices [itex](a_{ij})[/itex] and [itex](b_{ij})[/itex] ought to be denoted as [itex](a_{ij}b_{ij})[/itex] (see the MATLAB sketch below).
2) As I've understood it, [itex](a_{ij})[/itex] denotes a matrix, and [itex]a_{ij}[/itex] denotes an individual element in that matrix. What distinguishes these two ways of writing is the parentheses in the former case. Please correct me if I'm wrong.
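As a quick MATLAB check of point 1 (the matrices are made up):

[code]
% Elementwise product of two matrices: c_ij = a_ij * b_ij
A = [1 2; 3 4];
B = [5 6; 7 8];
C = A .* B;   % C = [5 12; 21 32]
[/code]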
 
  • #4
Isn't there anyone who can confirm that what I'm saying above is correct? Or are there any other ways to denote elementwise multiplication?
 
  • #5
There simply isn't enough use for "elementwise multiplication" of vectors for it to have a specific notation. That, together with "elementwise multiplication" of matrices, would pretty much negate the whole point of defining vectors and matrices.
 
  • #6
Mårten said:
write a column vector as [itex]\boldsymbol{a}=(a_i)[/itex] and another column vector as [itex]\boldsymbol{b}=(b_i)[/itex]. And then element wise multiplication could possibly be written as a new column vector [itex]\boldsymbol{c}=(a_i b_i)[/itex] ?

Is that an unambiguous way to express what I want, or could it be misinterpreted?
Sure, this is unambiguous. But since it is heavily basis-dependent, it is not a usual thing to do. If you want to be fancy: given a finite-dimensional vector space V over F and a fixed basis (e1,..,en), we have
[tex]V=\bigoplus_{i=1}^n \mathbb{F}e_i[/tex]

For every j, define the "projection"

[tex]P_j:\bigoplus_{i=1}^n \mathbb{F}e_i\to \mathbb{F}[/tex]
[tex](\lambda_1e_1,\ldots,\lambda_ne_n)\mapsto \lambda_j.[/tex]

Then your product of the vectors a and b is the vector c which satisfies

[tex]P_i(c)=P_i(a)P_i(b).[/tex]

Of course, this is just what you said.

Or let T_i be the linear map whose matrix (w.r.t. this basis) has all entries zero, except for the entry at row i, column i, which is 1. So it acts as

[tex]T_i(a)=T_i(a_1,\ldots,a_n)=(0,\ldots,0,a_i,0,\ldots,0).[/tex]

Then c is the vector

[tex]c=\sum_{i=1}^n \left<T_i(a),T_i(b)\right>e_i.[/tex]

where <..,..> denotes the inner product.
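As a quick numerical sanity check of this construction in MATLAB (a sketch with made-up vectors, n = 3):

[code]
% Verify that sum_i <T_i(a), T_i(b)> e_i equals the elementwise product
n = 3;
a = [1; 2; 3];
b = [4; 5; 6];
E = eye(n);               % columns of E are the basis vectors e_i
c = zeros(n, 1);
for i = 1:n
    Ti = zeros(n);
    Ti(i, i) = 1;         % matrix of T_i w.r.t. this basis
    c = c + dot(Ti * a, Ti * b) * E(:, i);   % <T_i(a), T_i(b)> e_i
end
% c now equals a .* b, i.e. [4; 10; 18]
[/code]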
 
  • #7
HallsofIvy said:
That, together with "elementwise multiplication" of matrices, would pretty much negate the whole point of defining vectors and matrices.
Hm... I'm still a beginner in linear algebra. What would you say is the whole point of defining vectors and matrices, then?

I find it pretty common, when you deal with different data series, that you want to do elementwise multiplication. For instance, you could have a vector describing the economic output from different industries, and another vector describing the growth rates for those industries. To get the new output after the growth, you multiply the vectors elementwise.
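For example, in MATLAB (the figures are made up):

[code]
% Hypothetical industry data
output = [120; 340; 80];               % current output per industry
growth = [0.03; 0.01; 0.05];           % growth rate per industry
new_output = output .* (1 + growth);   % new output, elementwise
[/code]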

Landau said:
Sure, this is unambiguous. But since it is heavily basis-dependent, it is not a usual thing to do. If you want to be fancy: given a finite-dimensional vector space V over F and a fixed basis (e1,..,en), we have
[tex]V=\bigoplus_{i=1}^n \mathbb{F}e_i[/tex]
I haven't seen that plus symbol before; what does it mean?

Anyhow, thanks for your replies, both of you! :smile:
 
  • #8
Hi Mårten, the symbol means "direct sum". The sum of two subspaces, with underlying sets U and V, is another vector space defined as

[tex]U + V = \left \{ u + v : u \in U, v \in V \right \}.[/tex]

If the intersection [itex]U \cap V = \left \{ \textbf{0} \right \}[/itex], that is, if U and V have no nonzero vectors in common, then the sum is called the direct sum, and can be written [itex]U \oplus V[/itex]. For subspaces with underlying sets [itex]V_1[/itex], [itex]V_2[/itex] etc.,

[tex]\bigoplus_{i=1}^n V_i := V_1 + V_2 + \cdots + V_n[/tex]
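A concrete example: in [itex]\mathbb{R}^2[/itex], the x-axis [itex]U = \{(x, 0)\}[/itex] and the y-axis [itex]V = \{(0, y)\}[/itex] intersect only in the zero vector, so [itex]\mathbb{R}^2 = U \oplus V[/itex].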
 
  • #9
Mårten said:
Hm... I'm still a beginner in linear algebra. What would you say is the whole point of defining vectors and matrices, then?
Basically, linear combinations. If we define "elementwise" multiplication, without the usual addition of those products, we lose the intermingling of different parts.

Mårten said:
I find it pretty common, when you deal with different data series, that you want to do elementwise multiplication. For instance, you could have a vector describing the economic output from different industries, and another vector describing the growth rates for those industries. To get the new output after the growth, you multiply the vectors elementwise.
But then you add those values, so what you're doing is an "inner product", not just elementwise multiplication.

 
  • #10
Rasalhague said:
Hi Mårten, the symbol means "direct sum". The sum of two subspaces, with underlying sets U and V, is another vector space defined as

[tex]U + V = \left \{ u + v : u \in U, v \in V \right \}.[/tex]

If the intersection [itex]U \cap V = \left \{ \textbf{0} \right \}[/itex], that is, if U and V have no nonzero vectors in common, then the sum is called the direct sum, and can be written [itex]U \oplus V[/itex]. For subspaces with underlying sets [itex]V_1[/itex], [itex]V_2[/itex] etc.,

[tex]\bigoplus_{i=1}^n V_i := V_1 + V_2 + \cdots + V_n[/tex]
Okay, I think I understand now, sort of.

HallsofIvy said:
But then you add those values, so what you're doing is an "inner product", not just elementwise multiplication.
No, I'm actually not adding those values; I'm looking at them separately, since I'm interested in the individual output of each separate industry, not the sum of the output from all the industries. I would lose information if I summed them up. If I'm not misunderstanding you...
 
  • #12
Mårten said:
No, I'm actually not adding those values; I'm looking at them separately, since I'm interested in the individual output of each separate industry, not the sum of the output from all the industries. I would lose information if I summed them up. If I'm not misunderstanding you...

So here's the deal: the inner product is very common in mathematical work (and there's accepted notation for it), and what you describe is not (and so there's not established notation). That's not an unusual situation at all--many mathematical articles start out by defining some notation that's useful for the task at hand, but not standard. So if you want to do componentwise multiplication, just say up front that when you write a<whatever>b, that's what you mean. As far as I know, there isn't any notation for this that everyone will recognize without any explanation on your part.
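For instance, a convention one might state (the symbol [itex]\odot[/itex] here is just one choice; it is sometimes used for the entrywise, i.e. Hadamard, product, with [itex]\circ[/itex] as another common option):

[tex]\boldsymbol{c} = \boldsymbol{a} \odot \boldsymbol{b} \quad\text{where}\quad c_i := a_i b_i.[/tex]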
 
  • #13
Okay, I will do something like that.

Thanks all for the replies! :smile:
 

Related to Mathematical notation for elementwise multiplication

1. What is elementwise multiplication?

Elementwise multiplication is a mathematical operation where the corresponding elements of two vectors or matrices are multiplied together to form a new vector or matrix. This means that the first element of one vector or matrix is multiplied by the first element of the other vector or matrix, the second element is multiplied by the second element, and so on. This results in a new vector or matrix with the same dimensions as the original vectors or matrices.
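Stated as a formula, writing the operation with [itex]\circ[/itex] (one common choice of symbol):

[tex]C = A \circ B, \qquad c_{ij} = a_{ij}\, b_{ij}.[/tex]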

2. How is elementwise multiplication represented in mathematical notation?

In programming languages such as MATLAB, elementwise multiplication is written with a dotted operator, A .* B, to distinguish it from the matrix product A * B. In mathematical writing, the operation is usually called the Hadamard (or Schur, or entrywise) product and is most commonly denoted A ∘ B or A ⊙ B. Since neither symbol is universally recognized, authors typically define the notation explicitly before using it.

3. What is the difference between elementwise multiplication and regular multiplication?

The main difference lies in how the entries are combined. In regular matrix multiplication, each entry of the product is the dot product of a row of the first factor with a column of the second, so the factors must have compatible (and possibly different) dimensions, and the product can have yet another shape. In elementwise multiplication, corresponding entries are simply multiplied, so both operands must have the same dimensions, and the result has those same dimensions.
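A short MATLAB sketch of the contrast (the entries are made up):

[code]
% Matrix product vs. elementwise product
A = [1 2; 3 4];
B = [5 6; 7 8];
A * B     % matrix product:      [19 22; 43 50]
A .* B    % elementwise product: [ 5 12; 21 32]
[/code]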

4. What are some common applications of elementwise multiplication?

Elementwise multiplication is commonly used in various fields of science and engineering, such as signal processing, image processing, and statistics. For example, in signal processing, elementwise multiplication can be used to apply filters to signals, and in image processing, it can be used to apply transformations to images. In statistics, it can be used for operations such as calculating covariance and correlation between variables.

5. Is elementwise multiplication commutative?

Yes, elementwise multiplication is commutative. Each entry of the result is an ordinary product of two scalars, so A ⊙ B = B ⊙ A. This is in contrast to regular matrix multiplication, where AB and BA are generally different.
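This is easy to check in MATLAB (the entries are made up):

[code]
% Commutativity: elementwise product commutes, matrix product does not
A = [1 2; 3 4];
B = [5 6; 7 8];
isequal(A .* B, B .* A)   % true
isequal(A * B,  B * A)    % false in general
[/code]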
